This is extremely impressive - it's cool that talented programmers are pushing the limits of computer science to advance the state of the art of chess engines.<p>However, I also wish that there were comparable efforts to create AIs that train humans. Basically, figure out a way to systematically, efficiently, and scalably train amateurs into masters. That IMO would be absolutely amazing (and something I'd gladly pay for).
What I would love to see is work on giving chess engines better-calibrated strength levels for amateurs. For me at least, there is a line where I beat everything below a certain level 95 percent of the time and lose to everything at or above it 95 percent of the time.
Note that it's just a chess "engine" and doesn't come with a human-usable front-end.<p>But it's not hard to get it working with xboard on Linux.<p>This is what I use to invoke it: xboard -fUCI -fcp stockfish -sUCI -scp stockfish &<p>It's very challenging - and in my experience it will unfortunately peg a core unless you explicitly pause the front end (the "P" button between the two arrows in the upper right)
<i>> During the final event, after playing 64 games against Komodo, Stockfish won with the score of 35½-28½. No doubt is further allowed: Stockfish is the best chess player ever!</i><p>From a statistical point of view, this isn't actually significant, despite the fact that draws help reduce the variance.<p>45 of those games are draws, leaving a 13-6 score in favor of Stockfish. Considering a null hypothesis of a binomial distribution with n=19 and equal chance of winning, the two-sided p-value for that score is 0.115. Unless you already have strong evidence that Stockfish is better than Komodo, you shouldn't conclude anything about which one is best.
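The exact test is a few lines of stdlib Python; this sketch sums the probabilities of every outcome no more likely than the observed one. Note the conventional exact two-sided p-value for a 13-6 split comes out near 0.167 (the quoted 0.115 appears to match a mid-p variant); either way it is well above 0.05.

```python
from math import comb

def binom_two_sided(k, n, p=0.5):
    """Exact two-sided binomial test: sum the probabilities of all
    outcomes no more likely than the observed count k."""
    pmf = [comb(n, i) * p**i * (1 - p)**(n - i) for i in range(n + 1)]
    return sum(q for q in pmf if q <= pmf[k] + 1e-12)

# 13 wins vs 6 losses out of 19 decisive games, null hypothesis p = 0.5
print(round(binom_two_sided(13, 19), 3))  # → 0.167
```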
Has anyone else watched <a href="http://en.lichess.org/tv" rel="nofollow">http://en.lichess.org/tv</a>?<p>If this is two humans playing against each other in real-time, that is really impressive! It's so mind-boggling fast (mind you I'm not a chess player).
Can anyone comment on what makes stockfish different from other chess engines? If I'm curious about state of the art in computer chess, is it worthwhile to study its source? What interesting ideas should I expect to see there beyond what I vaguely know to be the standard approach from introductory AI courses, i.e. some sort of alpha-beta pruning search?
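For reference, the "standard approach" being alluded to is alpha-beta pruned minimax. A minimal sketch on a toy game tree (nested lists, with numbers as leaf evaluations) - this is not Stockfish code; engines layer move ordering, transposition tables, null-move and late-move pruning, iterative deepening, etc. on top of this core:

```python
from math import inf

def alphabeta(node, alpha=-inf, beta=inf, maximizing=True):
    """Alpha-beta minimax over a toy tree: a leaf is a number,
    an internal node is a list of children."""
    if not isinstance(node, list):      # leaf: static evaluation
        return node
    if maximizing:
        best = -inf
        for child in node:
            best = max(best, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, best)
            if alpha >= beta:           # cutoff: opponent won't allow this line
                break
        return best
    else:
        best = inf
        for child in node:
            best = min(best, alphabeta(child, alpha, beta, True))
            beta = min(beta, best)
            if alpha >= beta:
                break
        return best

tree = [[3, 5], [6, [9, 2]], [1, 8]]
print(alphabeta(tree))  # → 6
```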
I bet it would be unnerving for someone like Kasparov or Magnus Carlsen to play this program with the engine given 1 minute on the clock and the human the whole day. It would make many of its moves in under a second, and they'd still be better than the grandmaster's moves!
Stockfish 5 was released yesterday: <a href="http://stockfishchess.org/" rel="nofollow">http://stockfishchess.org/</a><p>As mentioned elsewhere in this thread, Stockfish is just an engine - you must install a GUI separately. XBoard is well known, but there are better alternatives:<p><a href="http://chessx.sourceforge.net/" rel="nofollow">http://chessx.sourceforge.net/</a><p><a href="http://scidvspc.sourceforge.net/" rel="nofollow">http://scidvspc.sourceforge.net/</a>
I always thought of the heuristics for evaluating a chess position as the really hard part of building a chess engine; i.e. how do you capture all of the positional subtleties in a number to feed into minimax? But looking at the source, it's not really that complicated [1]. Can someone who knows more than me comment on that? Is it that the innovations are elsewhere? That good chess really can be boiled down to < 1000 LOC? That the numbers in this heuristic are just super expertly tuned?<p>[1] <a href="https://github.com/mcostalba/Stockfish/blob/master/src/evaluate.cpp" rel="nofollow">https://github.com/mcostalba/Stockfish/blob/master/src/evalu...</a>
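To make the question concrete, here is the general shape such an evaluation takes - material values plus piece-square tables. All numbers below are invented for illustration; real engines tune thousands of such parameters empirically, which is a large part of where the strength lives:

```python
# Toy static evaluation in the material + piece-square-table style.
PIECE_VALUE = {"P": 100, "N": 320, "B": 330, "R": 500, "Q": 900, "K": 0}

# One tiny piece-square table: knights prefer central squares (6x6 toy board).
KNIGHT_PST = [
    [-50, -40, -40, -40, -40, -50],
    [-40,   0,  10,  10,   0, -40],
    [-40,  10,  20,  20,  10, -40],
    [-40,  10,  20,  20,  10, -40],
    [-40,   0,  10,  10,   0, -40],
    [-50, -40, -40, -40, -40, -50],
]

def evaluate(pieces):
    """pieces: list of (piece_letter, is_white, row, col).
    Returns a centipawn score from White's point of view."""
    score = 0
    for piece, is_white, row, col in pieces:
        value = PIECE_VALUE[piece]
        if piece == "N":
            value += KNIGHT_PST[row][col]
        score += value if is_white else -value
    return score

# White knight on a central square vs. a black knight in the corner:
print(evaluate([("N", True, 2, 2), ("N", False, 0, 0)]))  # → 70
```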
Are chess engines still trying to play against humans in an interesting way? (I understand they beat human players, but that people feel computers play in dull ways.)<p>Is there a Turing Test for computer chess, where humans and computers play each other and they, and commentators, analyse the play, but no-one knows who is a computer or human until after the commentary is published?<p>And if we ignore humans, are people playing computers against other computers for some kind of machine-learning play?<p>And how optimized for speed is the software? Do they really crunch out all the performance they can?<p>(Sorry for the barrage of questions, but I don't know enough about this space to do efficient websearches.)
TCEC is cool but has less statistical power than many ratings lists out there, which show that Houdini, Komodo and Stockfish are very closely matched, with Houdini having a slight edge at long time controls and a moderate edge at quick time controls. Stockfish does release more frequently and I'm not sure which version competed in TCEC, but until the lists catch up this article is fluff.
I'm rather surprised at how relatively simple and small the codebase is: <a href="https://github.com/mcostalba/Stockfish/tree/master/src" rel="nofollow">https://github.com/mcostalba/Stockfish/tree/master/src</a>
Just try having fun..
<a href="http://en.lichess.org/SMVJP07p" rel="nofollow">http://en.lichess.org/SMVJP07p</a>
The strongest player.. and me, kind of a noob..
lichess doesn't seem to be giving it enough juice to perform at its stated levels at the moment.<p>On my first try I managed to draw the highest AI level, rated at 2510, while my rating is under 2000 IRL. (I was unable to find an "offer draw" button, so I relied on the 50-move rule.)
Against stockfish running on my PC that would be impossible.
<a href="http://en.lichess.org/reDfuSvI" rel="nofollow">http://en.lichess.org/reDfuSvI</a>
I don't think that the present situation, in which the top chess-playing program is free and open source, is good for innovation.<p>I estimate that it would take me six months of work to get into the top 20 in the world, and I don't see how I can justify that work to myself.