
You can set max_parallel_workers_per_gather to zero in the session where you're running the problem query, if that's helpful. That will disable the parallel query behavior entirely. You can just reset it back to what it was once the query is complete. I've run into this issue before and that was my go-to fix.
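A minimal sketch of that session-scoped workaround (the query itself is a hypothetical placeholder):

```sql
-- Disable parallel plans for this session only; other sessions are unaffected.
SET max_parallel_workers_per_gather = 0;

-- Run the problem query here, e.g.:
-- SELECT ... FROM big_table WHERE ...;   -- hypothetical

-- Restore the server/default value for the rest of the session.
RESET max_parallel_workers_per_gather;
```

Because `SET` without `LOCAL` is session-scoped, this never touches postgresql.conf or other connections; `SET LOCAL` inside a transaction scopes it even more narrowly.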


I'd strongly oppose setting the max parallel workers at the session level just to bypass an execution plan. Instead, understand why Postgres behaves the way it does, and change the query accordingly.


I understand why it’s making the choice it’s making; I’m at a loss for how to convince it to make the choice I consider optimal. Given the available statistics knobs, it seems like my options are [actionable suggestions like the parent's and the other helpful commenters'], redesign the schema (lots of work), or pick another database (also lots of work, but perhaps less than redesigning the schema only to hit a similar problem again).
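For reference, the statistics knobs mentioned above are adjustable per column and per table in stock Postgres; a hedged sketch, with hypothetical table and column names:

```sql
-- Sample more values for a skewed column so the planner's row estimates
-- improve ("orders" and "customer_id" are hypothetical names).
ALTER TABLE orders ALTER COLUMN customer_id SET STATISTICS 1000;

-- Extended statistics can capture correlation between columns that the
-- planner otherwise assumes are independent.
CREATE STATISTICS orders_cust_region (dependencies)
  ON customer_id, region FROM orders;

-- Rebuild the statistics so the new targets take effect.
ANALYZE orders;
```

Whether this is enough depends on why the estimates are off; if the misestimate comes from cross-table join selectivity rather than per-table statistics, these knobs may not reach it.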


Yeah, you can file a patch with Postgres and get it into the next release or something, but in the meantime you really want to keep that query from OOMing your database :)




