How AlphaZero defeated Stockfish with much less computational power and 0 training

26 points by lazy_nerd over 7 years ago

2 comments

nevi-me over 7 years ago
The 10 games that are published as part of the paper are at the bottom of the page. It's like watching an alien playing chess, it seems so ... foreign.

I've only gone through a few of the games, but on the 3rd one I was wondering things like "surely AG wins this piece, or a few pawns", yet it chose not to take. Obviously, knowing that SF is ELO 3200+ makes one concede that it might be a poisoned carrot, but for a program that was only fed the rules to be able to decide that is crazy.

It makes for very entertaining chess, and I think the wonderful people who work on tuning SF and other engines will have a lot to think about.

What's the highest theoretical ELO rating that a computer can get?

A few people mentioned that it'd be interesting to see how AG performs on a home computer. Maybe that'll be the differentiating factor: AG handicapped by input resources.

Lastly, does AG constantly learn as it plays? i.e. once a chess model is created, does it get updated with new info on the fly, or would it require more training?
nevi-me over 7 years ago
"0 training", isn't that incorrect? The article doesn't mention that. Playing against oneself is still training.
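
For readers wondering what "training" means here, below is a minimal sketch of a self-play training loop, with a toy random-walk game and a lookup table standing in for AlphaZero's neural network and MCTS. Every name in it is invented for illustration; this is not DeepMind's code or the paper's exact algorithm. The structural point is the one the comment makes: each iteration generates games by playing the current model against itself, then updates the model from those games, which is training even though no human games are involved.

    import random

    # Toy "model": one value estimate per state, standing in for network weights.
    value_table = {}      # state -> estimated outcome for the side to move
    LEARNING_RATE = 0.1

    def self_play_game():
        """Play one trivial random-walk 'game' against ourselves and return
        (state, outcome) pairs -- the analogue of labelling every position of
        a self-play game with the final result."""
        history, state = [], 0
        for _ in range(5):                    # fixed-length toy game
            history.append(state)
            state += random.choice([-1, +1])  # stand-in for a move chosen by search
        outcome = 1.0 if state > 0 else -1.0  # final result labels every position
        return [(s, outcome) for s in history]

    def update(table, examples):
        """Nudge value estimates toward observed outcomes
        (the analogue of a gradient step on the network)."""
        for state, outcome in examples:
            old = table.get(state, 0.0)
            table[state] = old + LEARNING_RATE * (outcome - old)

    # Outer loop: self-play generates the data, the update consumes it.
    # No human games are used anywhere, but the model is very much being trained.
    for _ in range(1000):
        examples = self_play_game()
        update(value_table, examples)

    print(sorted(value_table.items()))

So the "zero" in the title is about zero human games or domain knowledge beyond the rules, not zero training.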