Why Each Technology Still Matters
Does COBOL even need to die?
After working through side-by-side comparisons, you might be surprised at the answer.
COBOL is still running critical infrastructure, and in some ways, it's more honest than modern alternatives.
OPEN INPUT EMPLOYEE-FILE.
READ EMPLOYEE-FILE KEY IS EMP-ID.
IF WS-SUCCESS
    DISPLAY EMP-FIRST-NAME
ELSE
    DISPLAY "Not found"
END-IF.
CLOSE EMPLOYEE-FILE.
Every single operation is explicit: You open files. You read them. You check status codes. You close them.
No magic. No surprises. No "framework updated, your app broke."
05 EMP-SALARY PIC 9(7)V99.
This is a DECIMAL number with EXACTLY 2 decimal places. No floating-point rounding errors. No surprise pennies disappearing. When you're calculating payroll for millions of employees, this matters.
JavaScript: 0.1 + 0.2 → 0.30000000000000004 ❌
Python: Decimal("0.1") + Decimal("0.2") == Decimal("0.3") ✓ (but only if you remember to use Decimal)
COBOL: fixed-point by default ✓
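The float-versus-fixed-point contrast is easy to reproduce. Here's a minimal Python sketch, with Python standing in for any binary-floating-point language (JavaScript behaves identically):

```python
from decimal import Decimal

# Binary floating point: 0.1 and 0.2 have no exact base-2 representation,
# so the sum picks up a tiny error.
float_sum = 0.1 + 0.2
print(float_sum)                  # 0.30000000000000004
print(float_sum == 0.3)           # False

# Decimal arithmetic is exact here -- the same guarantee COBOL's
# PIC 9(7)V99 gives you by default, but opt-in rather than automatic.
dec_sum = Decimal("0.1") + Decimal("0.2")
print(dec_sum == Decimal("0.3"))  # True
```

The point isn't that modern languages can't do exact decimal math; it's that COBOL makes it the default instead of something you have to remember.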
01 EMPLOYEE-RECORD.
   05 EMP-ID         PIC X(4).
   05 EMP-FIRST-NAME PIC X(20).
Every record is EXACTLY the same length, byte for byte. Always. Forever.
No runtime "undefined is not a function." No "cannot read property of null." Your program either compiles or it doesn't.
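What a fixed layout buys you can be sketched with Python's struct module. The field sizes mirror the PIC clauses above, though the encoding here (ASCII, space-padded) is illustrative, not COBOL's actual on-disk format:

```python
import struct

# Illustrative fixed layout: a 4-byte id plus a 20-byte name,
# loosely mirroring PIC X(4) and PIC X(20).
RECORD = struct.Struct("4s20s")

packed = RECORD.pack(b"0042", b"ADA".ljust(20))  # pad name with spaces
print(RECORD.size)                # 24 -- every record, always
emp_id, name = RECORD.unpack(packed)
print(emp_id, name.rstrip())      # b'0042' b'ADA'
```

Because every record is the same size, record N lives at byte offset N × 24: no parsing, no scanning, just seek and read.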
COBOL was designed for one thing: process millions of records efficiently in batch jobs.
And it STILL does this better than many modern alternatives: today's "streaming" frameworks often end up reinventing COBOL's batch processing with more RAM.
SQL is declarative genius, and most NoSQL "solutions" end up rebuilding worse versions of it.
SELECT first_name, salary
FROM employees
WHERE department = 'Engineering'
AND salary > 80000
ORDER BY salary DESC;
The database figures out the optimal execution plan.
You don't manually loop. You don't manually index scan. You describe your intent, and a query optimizer with decades of research behind it does the work.
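To make the declarative point concrete, here's the same query run against an in-memory SQLite database (the table contents are invented for illustration):

```python
import sqlite3

# In-memory stand-in for the employees table from the example above.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (first_name TEXT, department TEXT, salary REAL)")
conn.executemany(
    "INSERT INTO employees VALUES (?, ?, ?)",
    [("Ada", "Engineering", 120000.0), ("Bob", "Engineering", 95000.0),
     ("Cat", "Sales", 90000.0), ("Dan", "Engineering", 70000.0)],
)

# Declare intent; the engine chooses the access path and sort strategy.
rows = conn.execute(
    """SELECT first_name, salary FROM employees
       WHERE department = 'Engineering' AND salary > 80000
       ORDER BY salary DESC"""
).fetchall()
print(rows)   # [('Ada', 120000.0), ('Bob', 95000.0)]
```

Add an index on (department, salary) and the same query gets faster without you touching a single line of it; that's the optimizer's job, not yours.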
BEGIN TRANSACTION;
UPDATE accounts SET balance = balance - 100 WHERE id = 'A';
UPDATE accounts SET balance = balance + 100 WHERE id = 'B';
COMMIT;
Either both updates happen, or neither does. No partial state. No corruption.
This is not "legacy thinking" – this is fundamental correctness for financial systems, e-commerce, inventory management, and countless other domains.
MongoDB added multi-document transactions in 4.0 (2018) because developers kept asking for it.
Why? Because eventual consistency is hard to reason about when you're moving money.
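Atomicity is easy to demonstrate. This sketch uses SQLite through Python's sqlite3 module and simulates a crash between the two updates:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id TEXT PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)", [("A", 500), ("B", 500)])
conn.commit()

def transfer(conn, amount, fail_midway=False):
    # "with conn" wraps the body in one transaction:
    # commit on success, rollback on any exception.
    with conn:
        conn.execute("UPDATE accounts SET balance = balance - ? WHERE id = 'A'", (amount,))
        if fail_midway:
            raise RuntimeError("crash between the two updates")
        conn.execute("UPDATE accounts SET balance = balance + ? WHERE id = 'B'", (amount,))

try:
    transfer(conn, 100, fail_midway=True)
except RuntimeError:
    pass

balances = conn.execute("SELECT id, balance FROM accounts ORDER BY id").fetchall()
print(balances)   # [('A', 500), ('B', 500)] -- the debit was rolled back too
```

The first UPDATE ran, the failure hit, and the database rolled the debit back on its own. No reconciliation job, no compensating write in application code.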
FOREIGN KEY (user_id) REFERENCES users(id)
The database ENFORCES that relationships make sense.
You can't have orphaned records. You can't delete a user who still has orders. The database is your safety net.
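Here's that safety net in action, sketched with SQLite (which enforces foreign keys only after an explicit PRAGMA):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")   # SQLite enforces FKs only when asked
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY)")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, "
             "user_id INTEGER NOT NULL REFERENCES users(id))")
conn.execute("INSERT INTO users VALUES (1)")
conn.execute("INSERT INTO orders VALUES (10, 1)")

errors = []

# An orphaned order is rejected at the database layer.
try:
    conn.execute("INSERT INTO orders VALUES (11, 999)")
except sqlite3.IntegrityError as e:
    errors.append(str(e))

# So is deleting a user who still has orders.
try:
    conn.execute("DELETE FROM users WHERE id = 1")
except sqlite3.IntegrityError as e:
    errors.append(str(e))

print(errors)   # two FOREIGN KEY constraint failures
```

Both bad writes bounce off the database itself, no matter which service, script, or ad-hoc query attempted them.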
Modern NoSQL approach: "Just be careful in application code" (pushing database-level guarantees into application logic, where they're easier to get wrong)
SQL databases have decades of accumulated engineering behind them: query planners, index structures, statistics-driven optimizers, and mature tooling. When you choose a NoSQL database, you're often giving up this entire optimization ecosystem.
MongoDB is genuinely useful when used for its intended purpose, and some problems truly benefit from document-orientation.
// v1 of your product catalog
{
  _id: "product1",
  name: "Widget",
  price: 29.99
}

// v2 - just add new fields, no migration
{
  _id: "product2",
  name: "Gadget",
  price: 49.99,
  variants: [
    { size: "small", sku: "GAD-S" },
    { size: "large", sku: "GAD-L" }
  ],
  reviews: [
    { user: "john", rating: 5, text: "Great!" }
  ]
}
No ALTER TABLE. No downtime. No migration scripts.
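Application code reads both versions by supplying defaults for the missing fields. A small Python sketch, with plain dicts standing in for the documents above:

```python
# Two catalog documents shaped like the v1/v2 examples above.
v1 = {"_id": "product1", "name": "Widget", "price": 29.99}
v2 = {"_id": "product2", "name": "Gadget", "price": 49.99,
      "variants": [{"size": "small", "sku": "GAD-S"},
                   {"size": "large", "sku": "GAD-L"}]}

# The schema now lives in the reading code: every consumer must
# supply defaults for documents written before the new fields existed.
counts = {doc["_id"]: len(doc.get("variants", [])) for doc in (v1, v2)}
print(counts)   # {'product1': 0, 'product2': 2}
```

This is the trade: you skip the migration, but every reader forever handles every historical shape of the document.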
// Blog post with embedded comments - ONE query
{
  _id: "post123",
  title: "My Blog Post",
  content: "...",
  comments: [
    { user: "alice", text: "Great post!", date: "2024-01-15" },
    { user: "bob", text: "Interesting", date: "2024-01-16" }
  ]
}
For data that's genuinely hierarchical and accessed together, embedding is efficient.
No joins needed. No N+1 queries. Fetch the document, get everything you need.
// JSON-native API
db.products.find({ category: "electronics", price: { $lt: 100 } })
// vs SQL string building (before ORMs)
"SELECT * FROM products WHERE category = 'electronics' AND price < 100"
Working with JSON data in a JSON database with a JSON API feels natural for web developers.
MongoDB's sharding is built-in and (relatively) easy to set up.
But here's the key: most applications don't need horizontal scalability.
If yours doesn't, you're paying MongoDB's costs (no joins, weaker consistency, operational complexity) for benefits you'll never use.
"Modern" doesn't mean "better" – it means "recent."
Choosing MongoDB because it's newer, trendier, or simply not SQL is how you end up with COBOL-style sequential scanning in Node.js.
Choosing a database because its strengths match your problem's actual requirements is how you build systems that last.
COBOL needs to stay where it belongs: in the systems it was designed for. Batch processing, fixed-record files, financial calculations with exact decimal precision. It's genuinely good at these things.
SQL is fundamental computer science made practical. Relational algebra, declarative queries, ACID transactions – these concepts aren't outdated. They're often exactly what you need.
MongoDB is a tool with specific strengths and weaknesses. When you match those strengths to your problem, it's great. When you force it into relational patterns, you get the worst of both worlds.
Every technology in this comparison is still in active use in 2024.
COBOL processes your credit card transactions
SQL powers your bank account, your e-commerce checkout, your medical records
MongoDB runs content management systems, catalogs, and real-time analytics
They all survived because they're good at something.
The question isn't "which is best?" – it's "which is best for THIS problem?"
Engineering judgment is knowing when to use each tool – and having the courage to choose the "boring" technology when it's the right fit.