Hacker News

Actually it can vary from "not that far" to "all the way" (StackOverflow). It depends very much on the schema and the type of workload. But in general people really underestimate how far an RDBMS can take you.


Yes and no - I think people also underestimate how miserable running a single giant RDBMS can be. Everything gets hard and dangerous - backups/restores, upgrades, online migrations & schema changes. Adding the wrong index can easily bring the whole thing down. They are complicated beasts that get very fragile at the large end (size + throughput).


Speaking of backups, how do you get a consistent backup of a sharded DB? Not everything is sharded; some data is replicated and should stay consistent.

Everything becomes fragile at the large end.


Well, I'm comparing it to a natively distributed database, e.g. DynamoDB. I've worked with truly massive DynamoDB "instances" and you really don't have to do much: they aren't fragile, they just work. I don't know exactly how their backup system works under the hood, but it backs up to a consistent point in time.
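As a toy illustration of what "consistent point in time" means across shards (this is a sketch of the concept, not how DynamoDB or any real database implements it): if you snapshot shards one at a time while writes continue, the backup can capture a combined state that never actually existed. The hedged example below uses in-memory dicts as stand-in shards and a stop-the-world lock for coordination; real systems use MVCC snapshots or a global timestamp instead.

```python
import threading

class Shard:
    """Stand-in for one database shard: a dict guarded by its own lock."""
    def __init__(self):
        self.data = {}
        self.lock = threading.Lock()

def consistent_backup(shards):
    # Acquire every shard's lock before copying any of them, so no write
    # can land between copying the first shard and copying the last.
    # Snapshotting shards one at a time (lock, copy, unlock, move on)
    # would allow a transfer between shards to appear half-applied.
    for s in shards:
        s.lock.acquire()
    try:
        return [dict(s.data) for s in shards]
    finally:
        for s in shards:
            s.lock.release()

shards = [Shard(), Shard()]
shards[0].data["user:1"] = {"balance": 100}
shards[1].data["user:2"] = {"balance": 50}
snapshot = consistent_backup(shards)
print(snapshot)
```

The point of the toy is only that cross-shard consistency requires some coordination mechanism spanning all shards; a per-shard backup loop gives you per-shard consistency at best.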



