Hacker News

To show how unlikely it is that this will beat the strongest GPL chess program on modest hardware, here is a back-of-the-envelope calculation.

The program they want to use ranks around 500 Elo points lower than the strongest open-source program on the same hardware. (source: http://computerchess.org.uk/ccrl/4040/rating_list_all.html)

It is estimated that doubling the speed of the computer adds 50-70 Elo points (http://en.wikipedia.org/wiki/Computer_chess cites David Levy's book "How Computers Play Chess").

This means they would have to achieve anywhere between a 2^7 and a 2^10 speedup over a normal single PC. Once you consider the latency problem described above, it is very unlikely that they can achieve this speedup. To put the problem in context: even on a quad-core, a 3x speedup is considered pretty good in computer chess.
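The arithmetic above can be sketched in a few lines (the 500-Elo gap and the 50-70 Elo per doubling are the figures cited in this thread, not measured values):

```javascript
// Back-of-envelope: what speedup is needed to close an Elo gap,
// assuming each doubling of speed is worth a fixed number of Elo points?
function speedupNeeded(eloGap, eloPerDoubling) {
  return Math.pow(2, eloGap / eloPerDoubling);
}

console.log(speedupNeeded(500, 70)); // optimistic: ~2^7.1, roughly 141x
console.log(speedupNeeded(500, 50)); // pessimistic: 2^10, exactly 1024x
```

That is where the 2^7 to 2^10 range comes from: 500/70 ≈ 7.1 doublings at the optimistic end, 500/50 = 10 at the pessimistic end.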



It is indeed unlikely. But if we can actually get to the point where 2^10 web clients (1024) are as strong as Houdini or Rybka running on a single high-end machine, it won't be that hard to double the size of the web grid a couple more times.

As you pointed out, the main challenge is scaling efficiently, and not getting a poor speedup even with that many workers (I think we can aim for 60% speedup with YBWC). We hope to have an edge here with Node.js' I/O performance, but it's still a big challenge (and that's what keeps it interesting ;-)


>I think we can aim for 60% speedup with YBWC

Note that most top engines barely get this speedup with 4 cores on an SMP system. I doubt you can get the same efficiency in a distributed system with 2000 cores.
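As an illustrative model only (chess search loses parallel efficiency to search overhead and work prediction, not to a fixed serial fraction, so this is a loose analogy rather than the real mechanism), Amdahl's law shows why a 60%-efficient 4-core result does not extrapolate to 2000 cores:

```javascript
// Amdahl's law: speedup on n cores when fraction p of the work parallelizes.
function amdahlSpeedup(p, n) {
  return 1 / ((1 - p) + p / n);
}

// Invert it: infer p from an observed speedup s on n cores.
function parallelFraction(s, n) {
  return (1 - 1 / s) * (n / (n - 1));
}

const p = parallelFraction(0.6 * 4, 4); // 2.4x on 4 cores implies p ~0.78
const s = amdahlSpeedup(p, 2000);       // yet 2000 cores give only ~4.5x
console.log(p, s);
```

Under this (admittedly crude) model, efficiency at 2000 cores collapses to a fraction of a percent, which is why the 4-core figure tells us very little about a 2000-node grid.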


As noted on the Garbochess JS page, I didn't estimate the strength of the JS version. It would be much weaker than Garbochess 2.20 on the same hardware. A node.js UCI adapter would make it fun to test this.

It will be very difficult to make JavaScript competitive with Stockfish, even across 1000 computers. Still, a very cool project!


If they can get it scaling decently, that's a good achievement in its own right. Say JS with a modern JIT is 4 times slower than C/C++. Since each node then does 4 times less work per unit of time, the interconnect latency looks 4 times smaller relative to the computation.

If they now find an algorithm that scales to 2000 nodes with an efficiency as low as 20%, this is a breakthrough achievement. It would be portable to a C/C++ based program with "only" a 4 times faster interconnect.
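Plugging in the hypothetical numbers above (2000 workers, 20% scaling efficiency, a 4x JavaScript slowdown) gives a rough sense of what such a grid would be worth against a single-core native engine:

```javascript
// Effective speedup over one core of a native engine:
// distributed work, discounted by parallel efficiency and language slowdown.
function effectiveSpeedup(workers, efficiency, langSlowdown) {
  return (workers * efficiency) / langSlowdown;
}

console.log(effectiveSpeedup(2000, 0.2, 4)); // 100x a single-core C/C++ engine
```

Even at that pessimistic 20% efficiency, 100x lands within the 2^7 to 2^10 range estimated earlier in the thread, which is why the scaling algorithm, not the language, is the real prize.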

That said, an engine with a primitive search scales better because its tree is more regular, which makes the real size of workloads easier to predict. It could turn out that the algorithm doesn't actually work for a strong engine.

But anyway, the "mere" 4x factor due to JavaScript is peanuts compared to the rest of the problems.



