
Each map-reduce job can still only use one thread.

Before, under SpiderMonkey, only one job could execute per mongod instance. Now many jobs can execute in parallel on a single server instance, but each of them still uses only a single core.



Do you know if this is likely to be permanent? In an application I work on, I currently do map-reduces on Mongo data using a third-party framework specifically to work around the inability to easily parallelize a map-reduce job on a single machine.
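For what it's worth, the workaround can be sketched without any framework: pull the documents client-side, run the map phase across cores, and merge the partial results. This is a minimal illustration using Python's multiprocessing, not MongoDB's mapReduce API; all names here (word_count_map, parallel_map_reduce) are made up for the example.

```python
# Hypothetical client-side parallel map-reduce: chunk the documents,
# run the map phase in worker processes, reduce in the parent.
from collections import Counter
from multiprocessing import Pool


def word_count_map(docs):
    # Map phase: emit partial word counts for one chunk of documents.
    counts = Counter()
    for doc in docs:
        for word in doc.split():
            counts[word] += 1
    return counts


def parallel_map_reduce(docs, workers=4):
    # Split the documents into one chunk per worker (round-robin).
    chunks = [docs[i::workers] for i in range(workers)]
    with Pool(workers) as pool:
        partials = pool.map(word_count_map, chunks)
    # Reduce phase: merge the partial counts in the parent process.
    total = Counter()
    for partial in partials:
        total.update(partial)
    return dict(total)


if __name__ == "__main__":
    docs = ["a b a", "b c", "a c c"]
    print(parallel_map_reduce(docs, workers=2))
```

The trade-off versus server-side mapReduce is that the data has to travel to the client, so this only pays off when the map function is CPU-bound relative to the transfer cost.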



