This is extremely impressive - it's cool that talented programmers are pushing the limits of computer science to advance the state of the art of chess engines.
However, I also wish that there were comparable efforts to create AIs that train humans. Basically, figure out a way to systematically, efficiently, and scalably train amateurs into masters. That IMO would be absolutely amazing (and something I'd gladly pay for).
In my opinion, this would be a lot more difficult. I am somewhat naive as to exactly how Stockfish works, but:
Computers can do a lot of accurate brute-forcing; humans must see the position in a more holistic, intuitive way.
Excellent human players and excellent computer players are presumably doing completely different calculation tasks.
I would suggest that computers are still bad at approaching the task in a human-like way, but they will always be able to improve their method via Moore's law (at a minimum), whereas humans are stuck at their current level.
Stockfish might be able to tell you what it was doing, but not in a way that it would be reasonable for a human to follow.
What you are looking for is a teacher. You can pay for those ;)
The closest we have got to a teacher app is perhaps a MOOC: not much computational progress has been made.
> Stockfish might be able to tell you what it was doing, but not in a way that it would be reasonable for a human to follow.
I (a non-chess player) just tried to play a game against the highest level AI (and lost, obviously).
Doing an analysis of the game afterwards, this is exactly what I experience: I do "f4" and I'm told (through the analysis tool) that the best move was "Nf3".
Now, the obvious question this leads to is: why? Why was this a better move? I don't think that, as a human being, memorizing "best moves" is going to lead to much improvement: we need to know WHY that move was the best move.
I'm sure there is a human-friendly way to explain why one move is the best move, and why my move wasn't, but the computer probably doesn't know this explanation, because it's approaching it from a brute-force perspective.
Surely, a chess computer can brute-force an enormous number of combinations and deduce that this was the best move. But since this is not possible for a human being, just informing me that "what you just did was not the best move" doesn't really do much to help me (as an amateur player).
"f4" creates a hole on "e4" in your pawn structure. This hole becomes a valuable outpost where your opponent can position a knight (or other piece) without it being harassed by one of your pawns. This means to get rid of it, you need to trade a piece for it, and when your opponent recaptures, it will give them a passed pawn.
"Nf3" protects your d4 pawn and also threatens the e5 square. Probably most importantly, it clears a piece so you can castle kingside.
>Stockfish might be able to tell you what it was doing, but not in a way that it would be reasonable for a human to follow.
Looking at the code posted, if the sub-scores were stored in an array and only added at the end, it would be possible to compare the positions after two moves sub-score by sub-score and find the biggest differences. Then you could say that position A is better than position B because it avoids doubled pawns, or has better bishops, etc.
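A minimal sketch of that idea, assuming the engine's evaluation terms were kept separate instead of summed immediately (the term names and values here are made up for illustration):

```python
# Keep each evaluation term separate, then diff two positions term by
# term to say *why* one is better, not just that it is.

def explain_difference(pos_a, pos_b, top_n=2):
    """Return the sub-scores (in centipawns, White's view) that differ
    most between two already-evaluated positions."""
    terms = set(pos_a) | set(pos_b)
    diffs = {t: pos_a.get(t, 0) - pos_b.get(t, 0) for t in terms}
    return sorted(diffs.items(), key=lambda kv: abs(kv[1]), reverse=True)[:top_n]

# Hypothetical sub-score vectors for the two candidate moves:
after_Nf3 = {"material": 0, "pawn_structure": 10, "king_safety": 25, "mobility": 15}
after_f4  = {"material": 0, "pawn_structure": -20, "king_safety": -10, "mobility": 12}

for term, delta in explain_difference(after_Nf3, after_f4):
    print(f"Nf3 is better on {term} by {delta} centipawns")
```

This prints that Nf3 wins on king safety and pawn structure, which is already a far more human-usable answer than a single number.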
It's not a huge leap from modelling the game of chess to modelling the skill of chess-playing. Stockfish already offers post-game analysis. Imagine extending that to analyze a player's entire career and offer a series of problems, games against specially constructed opponents, analysis of relevant historical games, etc., all designed to help the player improve. Given Siri, Google Now, Watson and so on, we're probably not far from being able to have a meaningful natural-language conversation with a computer on a narrow subject like chess.
One could imagine this kind of thing being extended to teaching other, similarly focussed skills, at least for beginners. Piano. Tennis. Rock climbing. Maybe even things like soccer and basketball.
But that kind of teaching-based-on-deep-analysis is a long way off for subjects like physics or electrical engineering. Computers can't do physics, much less evaluate human physicists. The best we can do in these areas is something like the Khan Academy, where computers present "content" created by humans, administer standardized tests designed by humans and present the results to humans for interpretation.
So yeah, teaching chess in a really sophisticated way isn't all that useful in the sense that physics or EE are useful. But really, if we could teach computers to understand physics better than people do, we'd use them to make breakthroughs in physics, and that would be a much bigger deal than being able to teach physics more effectively.
On the other hand, we don't play chess or tennis or piano because they're useful, so expert AI teachers for these subjects would be really valuable.
> a way to systematically, efficiently, and scalably train amateurs into masters.
Why do you need an AI? I don't play chess, but I suppose the above is more or less what an elite chess school provides, and you could likely reproduce it with books + practice + private lessons. That is, what Ericsson calls deliberate practice and coaching.
By far, the fastest way to improve at chess is with a coach. Books work in the beginning, but soon you are crawling around in the dark. You can't identify your weaknesses, so you can't correct them. After studying the wrong thing for a year, you fix one of your weaknesses by accident, and you improve. A coach bypasses all of that wasted time.

The challenge is not to automate a chess curriculum. That already exists on many websites selling chess software. Those are useful to learn certain theory and burn it into your brain by drilling over and over. The challenge is to create a chess teacher that can identify your specific weaknesses and correct them.

A middle-ground approach might work well, where you take one mental model of chess and develop a program to train that specific mental model. For example, there is the Nimzowitsch model, where chess is seen as siege warfare, with specific meta-strategies. If that model fits with how you think, then great. But it doesn't fit everyone.

One day this super-efficient learning will be reality. It sounds kind of boring. With an exponential game like chess, everyone will be at about the same level, except a few who throw their life away chasing n+1 while everyone else settled for n and having a life.
The game analysis is definitely the part of this that interests me the most. That it could beat me isn't that impressive—I'm not that good. But that it could tell me what I did wrong, so I could improve my game... that's really neat.