
Grandmaster-level chess without search

365 points by lawrenceyan 7 months ago

27 comments

tzs 7 months ago

OT: what's the state of the art in non-GM level computer chess?

Say I want to play chess with an opponent that is at about the same skill level as me, or perhaps with an opponent about 100 rating points above me for training.

Most engines let you dumb them down by cutting search depth, but that usually doesn't work well. Sure, you end up beating them about half the time if you cut the search down enough, but it generally feels like they were still outplaying you for much of the game and you won only because they made one or two blunders.

What I want is a computer opponent that plays at a level of my choosing, but plays a game that feels like that of a typical human player of that level. Are there such engines?
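
For reference, the standard lever besides cutting depth is the UCI strength cap, which makes Stockfish pick probabilistically weaker moves rather than play at full strength punctuated by blunders. A minimal python-chess sketch, assuming a local Stockfish binary on the PATH:

    import chess
    import chess.engine

    engine = chess.engine.SimpleEngine.popen_uci("stockfish")  # binary path assumed
    # Cap strength instead of cutting depth: Stockfish then chooses
    # deliberately weaker moves throughout the game.
    engine.configure({"UCI_LimitStrength": True, "UCI_Elo": 1600})

    board = chess.Board()
    result = engine.play(board, chess.engine.Limit(time=0.1))
    print(result.move)
    engine.quit()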
hlfshell 7 months ago

I did a talk about this! (And also wrote about my talk here [1].) This paper is a great example of knowledge distillation. It's less a paper about chess and more about how complicated non-linear search functions, complete with whatever tuning experts can prepare, can be distilled into a (quasi-linear, if it's a standardized input like chess) transformer model.

[1]: https://hlfshell.ai/posts/deepmind-grandmaster-chess-without-search/
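
A minimal sketch of that distillation loop, with placeholder tensors standing in for encoded boards and Stockfish value bins; the tiny network here is illustrative, not the paper's 270M transformer:

    import torch
    import torch.nn as nn

    NUM_BINS = 128  # win-probability buckets, per the paper's value discretization

    student = nn.Sequential(  # stand-in for the 270M transformer
        nn.Linear(77, 256), nn.ReLU(), nn.Linear(256, NUM_BINS))
    optimizer = torch.optim.Adam(student.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()

    boards = torch.randn(32, 77)                      # placeholder encoded positions
    teacher_bins = torch.randint(0, NUM_BINS, (32,))  # placeholder Stockfish targets

    optimizer.zero_grad()
    loss = loss_fn(student(boards), teacher_bins)  # match the teacher's value bins
    loss.backward()
    optimizer.step()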
Scene_Cast2 7 months ago

If anyone is looking to get into chess neural nets, I highly recommend this repo: https://github.com/sgrvinod/chess-transformers

It uses paradigmatic PyTorch with easy-to-read code, and the architecture is similar to the current best-performing chess neural nets.
osti 7 months ago

https://lczero.org/blog/2024/02/how-well-do-lc0-networks-compare-to-the-greatest-transformer-network-from-deepmind/

The authors of the best neural-network chess engine wrote about this DeepMind publication.
chvid 7 months ago

But the gigantic synthetic dataset used for training is created with plenty of traditional search. So it is all a bit silly, but I guess cool nonetheless ...
sourcepluck 7 months ago

I believe GM and chess author (and all-round lovely fellow) Matthew Sadler rigged up Leela Zero to effectively play off intuition and do very little or no search for training games. He could usually beat it, but not always. I think it might have been in The Silicon Road to Chess Improvement.
sksxihve 7 months ago

Isn't generating the training data by running Stockfish on all the board positions for all the games just encoding the search tree into the transformer model?

So increasing the number of parameters would allow the model to encode more of the search tree and give better performance, which doesn't seem all that interesting.
QuantumGood 7 months ago

This repository provides an implementation of our paper Grandmaster-Level Chess Without Search. https://arxiv.org/abs/2402.04494

The recent breakthrough successes in machine learning are mainly attributed to scale: namely large-scale attention-based architectures and datasets of unprecedented scale. This paper investigates the impact of training at scale for chess. Unlike traditional chess engines that rely on complex heuristics, explicit search, or a combination of both, we train a 270M parameter transformer model with supervised learning on a dataset of 10 million chess games. We annotate each board in the dataset with action-values provided by the powerful Stockfish 16 engine, leading to roughly 15 billion data points. Our largest model reaches a Lichess blitz Elo of 2895 against humans, and successfully solves a series of challenging chess puzzles, without any domain-specific tweaks or explicit search algorithms. We also show that our model outperforms AlphaZero's policy and value networks (without MCTS) and GPT-3.5-turbo-instruct. A systematic investigation of model and dataset size shows that strong chess performance only arises at sufficient scale. To validate our results, we perform an extensive series of ablations of design choices and hyperparameters.
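
The annotation step from the abstract can be sketched with python-chess: ask Stockfish for a score of each legal move and convert it to a win probability for the side to move. The engine path and the per-move time budget are assumptions here:

    import chess
    import chess.engine

    def action_values(board, engine):
        # One Stockfish evaluation per legal move, converted to a win
        # probability for the mover (the paper's annotation step).
        values = {}
        for move in board.legal_moves:
            info = engine.analyse(board, chess.engine.Limit(time=0.05),
                                  root_moves=[move])
            score = info["score"].pov(board.turn)
            values[move.uci()] = score.wdl().expectation()
        return values

    engine = chess.engine.SimpleEngine.popen_uci("stockfish")  # path assumed
    print(action_values(chess.Board(), engine))
    engine.quit()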
jackmalpo 7 months ago

What I would love is an engine that thinks more like a human. Presumably, since this uses Stockfish-annotated games, it basically ends up thinking like a computer. Thinking like a human would be awesome for game reviews, to walk through things to note in different positions (tuned to my Elo).
amoss 7 months ago

If you solve chess then you have a tree that is too large for us to currently compute (about 10^80, although my memory may be way off). Annotating that tree with win / loss / draw would allow an optimal player without search. The two obvious approaches to compression / optimization are to approximate the tree, and to approximate the annotations. How well those two approaches would work depends a lot on the structure of the tree.

This result seems to tell us less about the power of the training approach (in absolute terms) and more about how amenable the chess game tree is to those two approaches (in relative terms). What I would take away is that a reasonable approximation of that tree can be made in 270M words of data.
BoardsOfCanada 7 months ago

It's somewhat telling that they chose Stockfish as the oracle and not AlphaZero.
imranhou 7 months ago

From the page: "We also show that our model outperforms AlphaZero's policy and value networks (without MCTS) and GPT-3.5-turbo-instruct."

Why compare this to GPT-3.5-turbo-instruct? Is that near SOTA in this space?
RayVR 7 months ago

I forget the rough adjustment factors, but it is worth noting that Lichess Elo is not the same as chess.com or FIDE. I think Lichess is typically ~300 points above chess.com.

This implies the model is around 2500 blitz vs humans. As blitz Elos are often much higher than in classical time controls, 2500 on chess.com places it firmly at the 'good but not great' level.

I am very curious whether the model suffers from the same eval problems against the well-known "anti-bot" openings that Stockfish is susceptible to at limited search depths.
YeGoblynQueenne 7 months ago

Excellent work, but I suggest a slightly different title: "What Would Stockfish Do?"

A more appropriate title, because Stockfish is a search-based system and DeepMind's approach wouldn't work without it.

Oh, btw, this is (yet another) neurosymbolic system of the "compiling system 2 to system 1" type.
dgraph_advocate 7 months ago

This is missing a Makefile to automate the manual installation steps.
ChrisArchitect 7 months ago

Associated discussion on the paper: Grandmaster-Level Chess Without Search

https://news.ycombinator.com/item?id=39301944
squidgedcricket 7 months ago

Would it be feasible to create a complete lookup table of 'best' moves for all given board configurations? I'm not sure how to determine the total number of configurations. Not the same as a tablebase: just a single next move, rather than a sequence to checkmate.

It wouldn't be competitive against top-tier players and AI, but I wouldn't be surprised if it could beat me. 'Instantly' knowing the next move would be a cool trick.
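
As a back-of-the-envelope on feasibility, taking Tromp's ~4.8×10^44 legal-position estimate as the configuration count:

    LEGAL_POSITIONS = 4.8e44  # Tromp's estimate of legal chess positions
    BYTES_PER_MOVE = 2        # ~1968 possible moves fit in 11 bits

    table_bytes = LEGAL_POSITIONS * BYTES_PER_MOVE
    print(f"{table_bytes:.1e} bytes")        # ~9.6e44 bytes
    print(f"{table_bytes / 1e12:.1e} TB")    # ~1e33 terabytes: not storable

So a complete table is out; any practical version has to compress or approximate, which is roughly what the 270M-parameter model is doing.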
bob1029 7 months ago

https://arxiv.org/abs/2402.04494

> Board states s are encoded as FEN strings which we convert to fixed-length strings of 77 characters where the ASCII-code of each character is one token. A FEN string is a description of all pieces on the board, whose turn it is, the castling availability for both players, a potential en passant target, a half-move clock and a full-move counter. We essentially take any variable-length field in the FEN string, and convert it into a fixed-length sub-string by padding with '.' if needed. We never flip the board; the FEN string always starts at rank 1, even when it is black's turn. We store the actions in UCI notation (e.g., 'e2e4' for the well-known white opening move). To tokenize them we determine all possible legal actions across games, which is 1968, sort them alphanumerically (case-sensitive), and take the action's index as the token, meaning actions are always described by a single token (all details in Section A.1).

I am starting to notice a pattern in these papers: writing hyper-specific tokenizers for the target problem.

How would this model perform if we made a small change to the rules of chess and continued using the same tokenizer? If we find we need to rewrite the tokenizer for every problem variant, then I argue this is just ordinary programming in a very expensive disguise.
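
A rough sketch of that encoding in python-chess terms; the whole-string padding and the action enumeration here are simplifications of the paper's per-field scheme and its exact 1968-move vocabulary:

    import chess

    NUM_FEN_CHARS = 77  # fixed-length FEN encoding from the paper

    def encode_board(board: chess.Board) -> list[int]:
        # Sketch: pad the FEN to 77 chars and use each ASCII char as a token.
        # (The paper pads each variable-length FEN field separately.)
        fen = board.fen().ljust(NUM_FEN_CHARS, ".")
        return [ord(c) for c in fen[:NUM_FEN_CHARS]]

    # Action vocabulary: the paper sorts the 1968 UCI moves that are legal
    # in some position. A crude over-approximation (all square pairs plus
    # promotions) illustrates the same one-token-per-move idea:
    ACTIONS = sorted(
        chess.Move(f, t, promotion=p).uci()
        for f in chess.SQUARES
        for t in chess.SQUARES
        if f != t
        for p in (None, chess.KNIGHT, chess.BISHOP, chess.ROOK, chess.QUEEN)
    )
    ACTION_TO_TOKEN = {a: i for i, a in enumerate(ACTIONS)}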
barelyusable 7 months ago

What are some of the go-to books/articles for computer chess? I like the game and have a decent understanding of the basics, so studying algorithms based on the game would be a good opportunity for me to learn conventional algos, but also RL/ML/MCTS etc. Also, I wonder what the go-to codebase is these days.
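
The Chess Programming Wiki is the usual reference for the conventional side, and the core of a classical engine is small. A minimal negamax with alpha-beta pruning over a material-only evaluation, as a sketch in python-chess:

    import chess

    PIECE_VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
                    chess.ROOK: 5, chess.QUEEN: 9}

    def evaluate(board: chess.Board) -> int:
        # Material balance from the side to move's perspective.
        score = 0
        for piece, value in PIECE_VALUES.items():
            score += value * len(board.pieces(piece, board.turn))
            score -= value * len(board.pieces(piece, not board.turn))
        return score

    def negamax(board, depth, alpha, beta):
        if depth == 0 or board.is_game_over():
            return evaluate(board)
        best = -float("inf")
        for move in board.legal_moves:
            board.push(move)
            best = max(best, -negamax(board, depth - 1, -beta, -alpha))
            board.pop()
            alpha = max(alpha, best)
            if alpha >= beta:
                break  # alpha-beta cutoff
        return best

    print(negamax(chess.Board(), 3, -float("inf"), float("inf")))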
rawsh 7 months ago

You can actually get solid performance with pretrained chat models: https://raw.sh/posts/chess_puzzles

On Lichess puzzles, GPT-4o with the compiled prompt is around 70%; I think the 270M transformer is around 95%.
joelthelion 7 months ago

I wonder if you could creatively combine this model with search algorithms to advance the state of the art in computer chess. I wouldn't be surprised to see such a bot pop up on TCEC in a couple of years.
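
One plausible shape for such a hybrid, sketched with a hypothetical `model` interface for the net's policy and value heads (neither is an API from the paper):

    import chess

    def hybrid_search(board: chess.Board, model, depth: int) -> float:
        # `model.value(board)` -> scalar eval, `model.policy(board)` ->
        # {uci: prior} dict; both interfaces are hypothetical here.
        if depth == 0 or board.is_game_over():
            return model.value(board)  # the net replaces the leaf evaluation
        priors = model.policy(board)
        # Net-guided move ordering: high-prior moves first, so pruning
        # (if added, as in alpha-beta) cuts off more of the tree.
        moves = sorted(board.legal_moves,
                       key=lambda m: -priors.get(m.uci(), 0.0))
        best = -float("inf")
        for move in moves:
            board.push(move)
            best = max(best, -hybrid_search(board, model, depth - 1))
            board.pop()
        return best

This is essentially the Lc0/AlphaZero recipe with a different search wrapped around the net.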
dougSF70 7 months ago

Slightly off topic, but I built https://chessladders.com
veltas 7 months ago

Playing chess with a 20TB cerebellum
dudeinjapan 7 months ago

OK, but when will someone make grandmaster-level search without chess?
vanjajaja1 7 months ago

What I like about this is that it implies you can build heuristics good enough to make it to GM level. This is great because I find calculating moves a headache.
xiaodai 7 months ago

This paper is so dumb. So you modeled the output of Stockfish? Stockfish does use simulation or self-play or search. So you've outsourced search and don't do it yourself, so you can claim to be "without search".
bicsi 7 months ago

They built a dumber clone of Stockfish, and they call it 'zero' for some reason. What is the meaning behind 'zero' anyways? It used to refer to zero-shot, but now it seems like it's just a marketing term.