
The real insight here is recognizing when network latency is your bottleneck. For many workloads, even a mediocre local database beats a great remote one. The question isn't "which database is best" but "does my architecture need to cross network boundaries at all?"
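To put rough numbers on it: a same-datacenter round trip is typically on the order of 0.5-1 ms, while a cached local SQLite read is single-digit microseconds, so the network dominates before the remote database does any work at all. A throwaway sketch to measure the local side (file name and row are made up):

    import sqlite3
    import time

    # Rough micro-benchmark: time N point lookups against a local SQLite
    # file to see the per-query cost a network round trip must compete with.
    N = 100_000
    conn = sqlite3.connect("local.db")
    conn.execute("CREATE TABLE IF NOT EXISTS kv (k INTEGER PRIMARY KEY, v TEXT)")
    conn.execute("INSERT OR REPLACE INTO kv VALUES (1, 'hello')")
    conn.commit()

    start = time.perf_counter()
    for _ in range(N):
        conn.execute("SELECT v FROM kv WHERE k = 1").fetchone()
    elapsed = time.perf_counter() - start
    print(f"{N} local reads in {elapsed:.3f}s ({elapsed / N * 1e6:.1f} us each)")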


(author here) Yes, 100% this. This was never meant to be a SQLite vs Postgres article per se; it's more about the fundamental limitations of networked databases in some contexts. Admittedly, at times I felt I struggled to convey that in the article.


Sure. Now keep everything in memory and use Redis or memcached. It's easy to get performance if you change the rules.


You can use SQLite for persistence and a hash map as cache. Or just go for Mongo since it's web scale.
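Something like this, as a sketch (table and function names are illustrative, not from any particular codebase):

    import sqlite3

    # The pattern: SQLite as the durable store, a plain dict as an
    # in-process read cache in front of it.
    conn = sqlite3.connect("app.db")
    conn.execute(
        "CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT)"
    )

    _cache = {}

    def get_user(user_id):
        if user_id in _cache:          # cache hit: no disk access at all
            return _cache[user_id]
        row = conn.execute(
            "SELECT name FROM users WHERE id = ?", (user_id,)
        ).fetchone()
        if row is not None:
            _cache[user_id] = row[0]   # populate the cache on a miss
            return row[0]
        return None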


Yep, then add an AWS worker in between.


SQLite can also run in memory.
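A minimal example: pass ":memory:" as the path and the whole database lives in RAM for the lifetime of the connection (a shared-cache URI like file:memdb?mode=memory&cache=shared lets several connections see the same in-memory database):

    import sqlite3

    # ":memory:" gives a private database that lives entirely in RAM
    # and disappears when the connection closes.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE t (x INTEGER)")
    conn.execute("INSERT INTO t VALUES (42)")
    print(conn.execute("SELECT x FROM t").fetchone())  # (42,)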


Yeah, very good point. It all comes down to requirements. If you require persistence, then we can start talking about redundancy and backup, and then suddenly this performance metric becomes far less relevant.


Backups are up to the second with Litestream.
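For reference, a minimal Litestream config along these lines (the path and bucket are placeholders):

    # Continuously replicates the SQLite WAL to the replica.
    dbs:
      - path: /var/lib/app/db.sqlite
        replicas:
          - url: s3://my-bucket/db
            sync-interval: 1s

The sync-interval setting (one second by default, if I recall correctly) is what bounds how far behind the replica can fall.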


So much this. My inner perf engineer shudders every time I see one of these "modern" architectures that involve databases sited hundreds of miles from the application servers.


This article is very much a reaction to that. "The problem is the problem," as Mike Acton would say.



