Unlocking high-performance PostgreSQL with key memory optimizations (stormatics.tech)
97 points by camille_134 36 days ago | hide | past | favorite | 8 comments


Does not instill confidence when the queries they provide don't work.

For anyone curious, the corrected query:

SELECT sum(blks_hit)::numeric / nullif(sum(blks_hit + blks_read), 0) AS cache_hit_ratio FROM pg_stat_database;
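The `nullif(..., 0)` is what makes the query safe: on a cluster with no block activity yet, the divisor would be 0 and the division would error out, whereas `nullif` turns the zero into NULL so the whole expression yields NULL instead. A minimal Python sketch of the same guard (illustrative only, not part of the query):

```python
def cache_hit_ratio(blks_hit: int, blks_read: int):
    """Mirror of: sum(blks_hit)::numeric / nullif(sum(blks_hit + blks_read), 0).

    Returns None (the analogue of SQL NULL) instead of raising
    ZeroDivisionError when there has been no block activity at all,
    just as nullif(..., 0) turns a zero divisor into NULL.
    """
    total = blks_hit + blks_read
    if total == 0:  # nullif(total, 0) -> NULL
        return None
    return blks_hit / total
```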


I just ran their query, and it works


It does not in PG17.


Works on this Postgres 17.7:

postgres=# show server_version;
        server_version
-------------------------------
 17.7 (Debian 17.7-3.pgdg13+1)
(1 row)

postgres=# SELECT sum(blks_hit)/nullif(sum(blks_hit+blks_read),0) AS cache_hit_ratio FROM pg_stat_database;
    cache_hit_ratio
------------------------
 0.99448341937558994728
(1 row)


I generally use this: https://pgtune.leopard.in.ua/


Yup, I was expecting pgtune to be mentioned in the article.

And maybe something like HammerDB to check performance.


Nice tool. Do you know of a similar tool for MySQL?


"How do we size shared_buffers? A common starting rule of thumb is:"

I have an improved rule of thumb:

Give it a lot of RAM. Give it all the RAM you can. And then buy some more and give that to shared_buffers too. Buffering data in RAM/CPU cache is crucial for performance.
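For comparison, the article's "common starting rule of thumb" (roughly 25% of system RAM on a dedicated box) would look something like this in postgresql.conf. The 64 GB machine and the exact values are hypothetical, purely illustrative:

```ini
# Hypothetical postgresql.conf fragment for a dedicated 64 GB server.
# Illustrative starting values only, not a recommendation.
shared_buffers = 16GB          # ~25% of RAM, the rule of thumb quoted above
effective_cache_size = 48GB    # planner hint: RAM the OS page cache can also use
```

Note that shared_buffers is not the whole story: pages evicted from it often still live in the OS page cache, which is why effective_cache_size is usually set much higher.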



