
How AlphaZero defeated Stockfish with much less computational power and 0 training

26 points by lazy_nerd, over 7 years ago

2 comments

nevi-me, over 7 years ago
The 10 games that are published as part of the paper are at the bottom of the page. It's like watching an alien playing chess, it seems so ... foreign.

I've only gone through a few of the games, but on the 3rd one I was wondering things like "surely AG wins this piece, or a few pawns", yet it chose not to take. Obviously, knowing that SF is ELO 3200+ makes one concede that it might be a poisoned carrot, but for a program that was only fed the rules to be able to decide that is crazy.

It makes for very entertaining chess, and I think the wonderful people who work on tuning SF and other engines will have a lot to think about.

What's the highest theoretical ELO rating that a computer can get?

A few people mentioned that it'd be interesting to see how AG performs on a home computer. Maybe that'll be the differentiating factor: AG handicapped by input resources.

Lastly, does AG constantly learn as it plays? i.e. once a chess model is created, does it get updated with new info on the fly, or would it require more training?
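
On the rating question raised above: Elo is a relative scale, so any "ceiling" only exists with respect to the pool of opponents a program can be measured against. A minimal sketch of the standard Elo expected-score formula (the logistic curve on a 400-point scale; the function name here is just illustrative):

```python
def expected_score(rating_a: float, rating_b: float) -> float:
    """Standard Elo expected score for player A against player B
    (logistic curve with a 400-point scale)."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

# A 3200-rated engine facing a 2800-rated opponent is expected to
# score roughly 0.91 points per game.
print(round(expected_score(3200, 2800), 2))  # ~0.91
```

Because the curve saturates, once an engine wins nearly every game against the strongest available opponents, further improvements barely move its measured rating.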
nevi-me, over 7 years ago
"0 training", isn't that incorrect? The article doesn't mention that. Playing against oneself is still training.
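
This comment's point can be made concrete: AlphaZero's learning is driven entirely by games it plays against itself, and that loop is training in the ordinary sense. Below is a toy, hypothetical sketch of a self-play training loop; `ToyPolicy`, `self_play_game`, and `update` are placeholders for illustration, not DeepMind's actual architecture (which pairs a deep network with Monte Carlo tree search).

```python
import random
from dataclasses import dataclass, field

@dataclass
class ToyPolicy:
    """Stand-in for a neural-network policy/value model (illustrative only)."""
    weights: dict = field(default_factory=dict)

    def choose_move(self, position: int, legal_moves: list[int]) -> int:
        # Prefer moves this toy policy has found rewarding; occasionally explore.
        if random.random() < 0.1:
            return random.choice(legal_moves)
        scored = [(self.weights.get((position, m), 0.0), m) for m in legal_moves]
        return max(scored)[1]

def self_play_game(policy: ToyPolicy) -> list[tuple[int, int, float]]:
    """Play one toy 'game' against oneself; return (position, move, outcome) records."""
    history, position = [], 0
    for _ in range(10):                     # fixed-length toy game
        move = policy.choose_move(position, legal_moves=[0, 1, 2])
        history.append((position, move))
        position = (position + move + 1) % 5
    outcome = 1.0 if position == 0 else -1.0   # arbitrary toy reward
    return [(p, m, outcome) for p, m in history]

def update(policy: ToyPolicy, records, lr: float = 0.05) -> None:
    """Nudge the policy toward moves that appeared in winning games."""
    for position, move, outcome in records:
        key = (position, move)
        policy.weights[key] = policy.weights.get(key, 0.0) + lr * outcome

# The training loop: the model only ever sees its own games, yet the
# repeated generate-and-update cycle is still training, which is the
# commenter's point about the "0 training" framing.
policy = ToyPolicy()
for _ in range(1000):
    update(policy, self_play_game(policy))
```

The "0 training" phrasing in the title is better read as "no human games or hand-crafted evaluation were used", not as the absence of a training process.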