Searchformer: Beyond A* – Better planning with transformers via search dynamics

181 points by yeldarb about 1 year ago

4 comments

a_wild_dandan about 1 year ago
Ah, I remember reading this paper! Essentially, they created synthetic data by solving search problems using A*. Trained on this data, transformers unsurprisingly learned to solve these problems. They then improved their synthetic data by repeatedly solving a given problem many times with A*, and keeping only the solution found with the fewest search steps. Transformers learned to be competitive with this improved search heuristic too!

Pretty remarkable stuff. Given the obsession over chat bots, folk often miss how revolutionary transformer sequence modeling is to... well, any sequence application that *isn't* a chat bot. Looking solely at speeding up scientific simulations by ~10x, it's a watershed moment for humanity. When you include the vast space of *other* applications, we're in for one wild ride, y'all.
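A rough sketch of that data-improvement loop (not the paper's code): run a nondeterministic A* several times on the same problem, assuming unit step costs and random tie-breaking so repeated runs can produce traces of different lengths, and keep the run whose search used the fewest steps.

```python
import heapq, random

def astar_with_trace(start, goal, neighbors, h):
    """Unit-cost A* with random tie-breaking; returns (plan, search_steps)."""
    open_heap = [(h(start), random.random(), 0, start, [start])]
    closed, steps = set(), 0
    while open_heap:
        _, _, g, state, path = heapq.heappop(open_heap)
        if state in closed:
            continue
        if state == goal:
            return path, steps
        closed.add(state)
        steps += 1                                   # one expansion = one search step
        for nxt in neighbors(state):
            if nxt not in closed:
                heapq.heappush(open_heap,
                               (g + 1 + h(nxt), random.random(), g + 1, nxt, path + [nxt]))
    return None, steps

def best_of_n(start, goal, neighbors, h, n=8):
    """Solve the same problem n times, keep the run with the fewest search steps."""
    runs = [astar_with_trace(start, goal, neighbors, h) for _ in range(n)]
    solved = [r for r in runs if r[0] is not None]
    return min(solved, key=lambda r: r[1]) if solved else (None, 0)
```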
yeldarb about 1 year ago
> While Transformers have enabled tremendous progress in various application settings, such architectures still lag behind traditional symbolic planners for solving complex decision making tasks. In this work, we demonstrate how to train Transformers to solve complex planning tasks and present Searchformer, a Transformer model that optimally solves previously unseen Sokoban puzzles 93.7% of the time, while using up to 26.8% fewer search steps than standard A* search. Searchformer is an encoder-decoder Transformer model trained to predict the search dynamics of A*. This model is then fine-tuned via expert iterations to perform fewer search steps than A* search while still generating an optimal plan. In our training method, A*'s search dynamics are expressed as a token sequence outlining when task states are added and removed into the search tree during symbolic planning. In our ablation studies on maze navigation, we find that Searchformer significantly outperforms baselines that predict the optimal plan directly with a 5-10× smaller model size and a 10× smaller training dataset. We also demonstrate how Searchformer scales to larger and more complex decision making tasks like Sokoban with improved percentage of solved tasks and shortened search dynamics.

Neat; TIL about Sokoban puzzles. I remember playing Chip's Challenge on Windows 3.1 when I was a kid which had a lot of levels like that.
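A minimal sketch of what serializing A*'s search dynamics into a token sequence might look like, per the abstract's description of states being added to and removed from the search frontier. The token vocabulary here ("add", "remove", "plan") is made up for illustration and is not the paper's actual encoding.

```python
import heapq

def astar_trace_tokens(start, goal, neighbors, h):
    """Emit a token sequence recording when states enter ('add') and leave
    ('remove') the frontier during a unit-cost A* search."""
    tokens = ["add", str(start)]
    counter = 0                                  # FIFO tie-breaker for the heap
    open_heap = [(h(start), counter, 0, start)]  # (f, tie, g, state)
    best_g, closed = {start: 0}, set()
    while open_heap:
        _, _, g, state = heapq.heappop(open_heap)
        if state in closed:
            continue
        closed.add(state)
        tokens += ["remove", str(state)]         # state expanded (taken off the frontier)
        if state == goal:
            tokens.append("plan")                # the optimal plan's tokens would follow
            return tokens
        for nxt in neighbors(state):
            if g + 1 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g + 1
                counter += 1
                heapq.heappush(open_heap, (g + 1 + h(nxt), counter, g + 1, nxt))
                tokens += ["add", str(nxt)]      # state entered the frontier
    return tokens
```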
nextaccountic about 1 year ago
Due to the no free lunch theorem [0], any search algorithm that makes some problems faster will necessarily make other problems slower. What does the worst case for an algorithm like this look like?

I think that part of the appeal of A* to me is that I can readily visualize why the algorithm failed at some pathological inputs.

[0] https://en.wikipedia.org/wiki/No_free_lunch_in_search_and_optimization
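For a concrete sense of that kind of pathological input, here is a toy sketch (not from the paper): a concave pocket between start and goal, where an admissible Manhattan heuristic pulls A* into the dead end so it fills most of the interior before routing around the wall.

```python
import heapq

W, H = 15, 11
# U-shaped wall that opens to the left; the start sits inside the pocket.
walls = ({(10, y) for y in range(2, 9)}
         | {(x, 2) for x in range(5, 11)}
         | {(x, 8) for x in range(5, 11)})
start, goal = (7, 5), (13, 5)

def neighbors(p):
    x, y = p
    for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
        if 0 <= nx < W and 0 <= ny < H and (nx, ny) not in walls:
            yield (nx, ny)

def h(p):                               # admissible Manhattan-distance heuristic
    return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

open_heap, best_g, closed, expanded = [(h(start), start)], {start: 0}, set(), 0
while open_heap:
    _, state = heapq.heappop(open_heap)
    if state in closed:
        continue
    if state == goal:
        break
    closed.add(state)
    expanded += 1
    g = best_g[state]
    for nxt in neighbors(state):
        if g + 1 < best_g.get(nxt, float("inf")):
            best_g[nxt] = g + 1
            heapq.heappush(open_heap, (g + 1 + h(nxt), nxt))

# The heuristic says the goal is 6 steps away, but A* expands most of the pocket
# before finding the ~20-step path around the wall.
print(f"expanded {expanded} nodes for an optimal path of length {best_g[goal]}")
```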
teleforce about 1 year ago
Previous post and discussions on HN:

Beyond A*: Better Planning with Transformers:

https://news.ycombinator.com/item?id=39479478