"CCC fans will be pleased to see that some of the new AlphaZero games include "fawn pawns," the CCC-chat nickname for lone advanced pawns that cramp an opponent's position. "<p>The "fawn pawn" name comes from fans of kingscrusher on YouTube, who analyzes Leela Chess Zero games. <a href="https://www.youtube.com/user/kingscrusher" rel="nofollow">https://www.youtube.com/user/kingscrusher</a><p>His accent makes "thorn pawn" sound like "fawn pawn", and the name stuck.<p>Here is a link to shirts he sells with "fawn pawn" on them. <a href="https://teespring.com/fawn-pawn?tsmac=store&tsmic=kingscrusher#pid=211&cid=5291&sid=front" rel="nofollow">https://teespring.com/fawn-pawn?tsmac=store&tsmic=kingscrush...</a>
I feel compelled to repeat the earlier criticisms of these games:
- Stockfish was never designed for, nor tested on, so many cores. Running on 44 cores may actually degrade Stockfish's performance.
- Stockfish was designed to start a game with an opening book. In the games where an opening book was used, Stockfish won significantly more games, too.
- Stockfish 8 was not a particularly strong version. Stockfish 9 or 10 would have been a better choice.<p>Nevertheless, the performance of AlphaZero was impressive; the positional knowledge it has acquired in particular is second to none. In all existing chess engines, positional knowledge is under-represented, reduced to simple heuristics. Acquiring real positional knowledge was a longtime dream of many generations of chess programmers. The dream was to create an engine that plays a more human-like style of chess. AlphaZero has realized this dream and even goes beyond it: extending humanity's knowledge of chess.<p>I believe the most intriguing question right now is why AlphaZero stopped improving after 9 hours of training. Is it due to an inherent property of chess, or due to the limits of ANNs? If it's the latter, how can we break through and create a new generation of engines that surpasses even AlphaZero?
I'm glad that they took a lot of the criticisms to heart this time.<p>The opening book and Syzygy tablebases were enabled, so we're seeing Stockfish go at full power here. The last remaining problem is that Stockfish doesn't scale very well across many cores, but there's not much the admins of the test can do about that.<p>This test seems fair IMO.
Do we know what hardware each was using? Aside from time controls, a criticism of the previous AlphaZero/Stockfish match was that AlphaZero was using a tremendous amount of TPU power while Stockfish was running on, essentially, an average laptop.
Previous discussion from a submission by a DeepMind engineer here: <a href="https://news.ycombinator.com/item?id=18620978" rel="nofollow">https://news.ycombinator.com/item?id=18620978</a>
> According to DeepMind, AlphaZero uses a Monte Carlo tree search, and examines about 60,000 positions per second, compared to 60 million for Stockfish.<p>The previous statement was talking about how much faster and more efficient AlphaZero is, but the interpretation I pick up from that sentence is the opposite. Is this a "golf score" situation where lower is better?
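Roughly, yes: the point is that AlphaZero reaches the same (or better) playing strength while examining far fewer positions, because its neural-network evaluation of each position is much richer than Stockfish's hand-tuned heuristics. A back-of-the-envelope sketch of the gap, using the quoted per-second figures (the one-minute-per-move budget is my own illustrative assumption, not from the match conditions):

```python
# Quoted search speeds (positions per second)
az_nps = 60_000        # AlphaZero, guided by a neural network
sf_nps = 60_000_000    # Stockfish, alpha-beta with fast heuristics

# Hypothetical time budget for a single move
seconds_per_move = 60

az_positions = az_nps * seconds_per_move   # 3,600,000 positions
sf_positions = sf_nps * seconds_per_move   # 3,600,000,000 positions

# Stockfish looks at 1000x more positions per move, yet does not
# dominate: AlphaZero's evaluation makes each position "count" more.
ratio = sf_positions // az_positions
print(f"Stockfish examines {ratio}x more positions per move")
```

So "lower is better" isn't quite right either; rather, a lower node count combined with equal results is evidence of a much stronger evaluation function.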
Still, Houdini is again leading the CCCC live championship table, with Stockfish at #2 and the open-source AlphaZero clone lc0 at #3.<p><a href="https://www.chess.com/computer-chess-championship" rel="nofollow">https://www.chess.com/computer-chess-championship</a>
Will AlphaZero be made available for more chess players to play against? It would be interesting to hunt for a blind spot in this engine in a format where humans could use their brains plus additional tools to try to beat it. Or is it really unbeatable?