Intuition for Simulated Annealing

67 points by adbge about 11 years ago

5 comments

joe_the_user about 11 years ago
I first read about simulated annealing more than thirty years ago.

It's a fascinating approach. The analogy to physics is especially fascinating. I have only kept up with the literature, not seriously implemented it, so I could be wrong. But my understanding is that while simulated annealing is good for many things, it hasn't shown itself to be best at anything, and the improvements on it have tended to be changes that weaken the analogy with physics [1]. I find this disappointing, since in its raw form simulated annealing suggested a sort of "physics of information processing", mapping hard computational problems onto states of matter. But it seems that analogies can sometimes be as misleading as they are productive.

[1] For example the travelling salesman problem, http://en.wikipedia.org/wiki/Travelling_salesman_problem, the first problem I saw simulated annealing applied to [in a reprint someone tossed out in Evans Hall at UCB circa 1981].
perrygeo about 11 years ago
To explain simulated annealing to lay audiences, I rely on a similar (inverted) example. You're at the top of a mountain and you want to find the lowest spot. If you only ever walk downhill, you may eventually reach the sea, but you may never reach Death Valley (86 m below sea level) unless you are willing to climb some mountains at the beginning of your trip. In other words, you need to accept sub-optimal moves (with decreasing probability) in order to adequately explore your surroundings.
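A minimal Python sketch of that "accept worse moves with decreasing probability" rule, purely for illustration; the toy objective, neighbor function, and cooling schedule here are assumptions, not anything from the comment:

    import math
    import random

    def simulated_annealing(objective, neighbor, x0, t0=1.0, cooling=0.995, steps=10000):
        # Minimize `objective`, sometimes accepting uphill moves early on.
        x, fx = x0, objective(x0)
        best, fbest = x, fx
        t = t0
        for _ in range(steps):
            y = neighbor(x)
            fy = objective(y)
            # Always accept improvements; accept worse moves with probability
            # exp(-delta / t), which shrinks as the temperature t cools.
            if fy <= fx or random.random() < math.exp(-(fy - fx) / t):
                x, fx = y, fy
                if fx < fbest:
                    best, fbest = x, fx
            t *= cooling
        return best, fbest

    # Toy usage: a bumpy 1-D function with many local minima.
    f = lambda x: x * x + 10 * math.sin(3 * x)
    step = lambda x: x + random.uniform(-0.5, 0.5)
    print(simulated_annealing(f, step, x0=5.0))

Early on (high temperature) the walker climbs hills fairly freely; as the temperature decays it behaves more and more like plain downhill walking.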
gjm11 about 11 years ago
Fewer pretty pictures, but much funnier, and it describes several different optimization algorithms (in the context of neural network training, though most of it doesn't depend on that): "Kangaroos and training neural networks".

ftp://ftp.sas.com/pub/neural/kangaroos.txt
mrcactu5 about 11 years ago
Unless the global maximum is much higher, how much can we benefit from moving away from the local max?

In some way, shape, or form, I hear this argument over and over, and from intelligent people.

We see this kind of risk-averse behavior in social situations, where nobody wants to take the "hit" of moving away from their current strategy.

Or there may have been a time when we identified the maximum and the entire landscape has since changed around us, so the strategy is no longer optimal. That is another real situation.

Also, "simulated annealing" seems to be a bit of a misnomer for this type of mixed strategy.
plantain about 11 years ago
I think 'shaking the box' is more akin to random-restart hill climbing.
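For contrast, a rough sketch of random-restart hill climbing under the same toy assumptions as the earlier sketch (the objective, neighbor, and start-sampling functions are placeholders): only improving moves are ever accepted, and exploration comes from restarting at fresh random points rather than from occasionally accepting worse moves.

    import math
    import random

    def hill_climb(objective, neighbor, x0, steps=1000):
        # Greedy descent: only ever accept improving moves.
        x, fx = x0, objective(x0)
        for _ in range(steps):
            y = neighbor(x)
            fy = objective(y)
            if fy < fx:
                x, fx = y, fy
        return x, fx

    def random_restart(objective, neighbor, sample_start, restarts=20):
        # "Shake the box": rerun the greedy search from fresh random starts
        # and keep the best result found across all runs.
        runs = (hill_climb(objective, neighbor, sample_start()) for _ in range(restarts))
        return min(runs, key=lambda pair: pair[1])

    # Same toy objective as above, restarted from random points in [-10, 10].
    f = lambda x: x * x + 10 * math.sin(3 * x)
    step = lambda x: x + random.uniform(-0.5, 0.5)
    print(random_restart(f, step, lambda: random.uniform(-10, 10)))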