
Thermodynamic Natural Gradient Descent

200 points, by jasondavies, 12 months ago

10 comments

thomasahle, 12 months ago
The main point of this is that natural gradient descent is a second-order method. The NGD update equation is:

∇̃L(θ) = F⁻¹ ∇L(θ)

which requires solving a linear system. For this, you can use the methods from the author's previous paper, Thermodynamic Linear Algebra (https://arxiv.org/abs/2308.05660).

Since it's hard to implement a full neural network on a thermodynamic computer, the paper suggests running one in parallel with a normal GPU. The GPU computes F and ∇L(θ) but offloads the linear system to the thermodynamic computer, which runs in parallel with the digital system (Figure 1).

It is important to note that the "Runtime vs Accuracy" plot in Figure 3 uses a "timing model" for the TNGD algorithm, since the computer necessary to run the algorithm doesn't exist yet.
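The update above can be sketched digitally. Below is a minimal NumPy sketch of a damped natural-gradient step on a toy quadratic; the function name, learning rate, and damping term are illustrative assumptions, and np.linalg.solve stands in for whatever solver (thermodynamic or otherwise) handles the linear system:

```python
import numpy as np

def natural_gradient_step(theta, grad, fisher, lr=0.1, damping=1e-3):
    """One NGD step: solve (F + damping*I) x = grad rather than inverting F."""
    d = theta.size
    x = np.linalg.solve(fisher + damping * np.eye(d), grad)
    return theta - lr * x

# Toy problem: quadratic loss L(theta) = 0.5 * theta^T A theta,
# where (as an illustrative assumption) the Fisher matrix coincides with A.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
theta = np.array([1.0, 1.0])
for _ in range(100):
    grad = A @ theta
    theta = natural_gradient_step(theta, grad, fisher=A)
print(theta)  # approaches the minimum at the origin
```

In the hybrid scheme described above, the np.linalg.solve call is exactly the step that would be offloaded to the analog hardware.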
cs702, 12 months ago
Cool and interesting. The authors propose a hybrid digital-analog training loop that takes into account the curvature of the loss landscape (i.e., it uses second-order derivatives), and show with numerical simulations that if their method were implemented in a hybrid digital-analog physical system, each iteration of the training loop would incur a computational cost that is linear in the number of parameters. I'm all for figuring out ways to let the laws of thermodynamics do the work of training AI models, if doing so enables us to overcome the scaling limitations and challenges of existing digital hardware and training methods.
stefanpie, 12 months ago
I know they mainly present results on deep learning / neural network training and optimization, but I wonder how easy it would be to use the same optimization framework for other classes of hard or large optimization problems. I was also curious about this when I saw posts about Extropic (https://www.extropic.ai/) for the first time.

I tried looking into any public info on their website about APIs or the software stack to see what's possible beyond NN stuff for modeling other optimization problems. It looks like that's not shared publicly yet.

There are certainly many NP-hard and large combinatorial or analytical optimization problems still out there that are worth being able to tackle with new technology. Personally, I care about problems in EDA and semiconductor design. Adiabatic quantum computing was one technology with the promise of solving optimization problems (and quantum computing is still playing out, with only small-scale solutions at the moment). I'm hoping that these new "thermodynamic computing" startups might also provide some cool technology for exploring these problems.
rsp1984, 12 months ago
Leveraging thermodynamics to compute second-order updates more efficiently is certainly cool and worth exploring; however, specifically in the context of deep learning, I remain skeptical of its usefulness.

We already have very efficient second-order methods running on classical hardware [1], but they are basically not used at all in practice, as they are outperformed by Adam and other first-order methods. This is because optimizing highly nonlinear loss functions, such as the ones in deep learning models, only really works with very low learning rates, regardless of whether a first- or second-order method is used. So, comparatively speaking, a second-order method might give you a slightly better parameter update per step, but at a more-than-slightly-higher cost, so most of the time it's simply not worth doing.

[1] https://andrew.gibiansky.com/blog/machine-learning/hessian-free-optimization/
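The Hessian-free methods referenced in [1] avoid forming the curvature matrix at all: they solve the linear system with conjugate gradient, which needs only matrix-vector products. A minimal sketch on a toy 2x2 system (the function name and parameters are my own, not from the linked post):

```python
import numpy as np

def conjugate_gradient(matvec, b, iters=50, tol=1e-10):
    """Solve A x = b for symmetric positive-definite A, using only
    matrix-vector products -- the core trick of Hessian-free optimization."""
    x = np.zeros_like(b)
    r = b - matvec(x)      # residual
    p = r.copy()           # search direction
    rs = r @ r
    for _ in range(iters):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if rs_new < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = conjugate_gradient(lambda v: A @ v, b)
```

In a neural network, `matvec` would be a Hessian- or Fisher-vector product computed by automatic differentiation, so the full matrix is never materialized.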
esafak, 12 months ago
Not having read the paper carefully, could someone tell me what the draw is? It looks like it will have the same asymptotic complexity as SGD in terms of sample size, per Table 1. Given that today's large, over-parameterized models have numerous comparable extrema, is there even a need for this? I wouldn't get out of bed unless it were sublinear.
gnarbarian, 12 months ago
This reminds me of simulated annealing, which I learned about in an AI class about a decade ago.

https://en.wikipedia.org/wiki/Simulated_annealing
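For reference, a minimal simulated annealing sketch on a toy one-dimensional objective; the cooling schedule, step size, and objective are all illustrative choices, not anything from the paper:

```python
import math
import random

def simulated_annealing(f, x0, temp=1.0, cooling=0.995, steps=5000,
                        step_size=0.5, seed=0):
    """Minimize f by random perturbation, accepting uphill moves
    with probability exp(-dE / T) while the temperature T decays."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    for _ in range(steps):
        cand = x + rng.uniform(-step_size, step_size)
        fc = f(cand)
        # Always accept improvements; accept worse moves with Boltzmann probability.
        if fc < fx or rng.random() < math.exp((fx - fc) / max(temp, 1e-12)):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x, fx
        temp *= cooling
    return best, fbest

# Toy objective with its minimum at x = 2, started far away.
best, fbest = simulated_annealing(lambda x: (x - 2) ** 2, x0=-3.0)
```

The connection to the thread: both approaches use thermal noise as a computational resource, though TNGD harnesses physical noise in analog hardware rather than simulating it.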
killerstorm, 12 months ago
What's our current best guess of how animal neurons learn?
danbmil99, 12 months ago
Wasn&#x27;t Geoffrey Hinton going on about this about a year ago?
mirekrusin, 12 months ago
I don't get it. Gradient descent computation is super frequent, and the state/input changes all the time, so you'd have to reset the heat landscape very frequently. What's the point? No way there is any potential speedup opportunity there, no?

If anything, you could probably do something with electromagnetic fields and their interference, possibly in 3D.
G3rn0ti, 12 months ago
Sounds great, until:

> requires an analog thermodynamic computer

Wait. What?

Perhaps a trained physicist can comment on that. Thanks.