Neural network training makes beautiful fractals

316 points by telotortium over 1 year ago

19 comments

alexmolas over 1 year ago
The results of the experiment seem counterintuitive just because the learning rates used are huge (up to 10 or even 100). These are not learning rates you would use in a normal setting. If you look at the region of small learning rates, it seems all of them converge.

So I would say the experiment is interesting, but not representative of real-world deep learning.

In the experiment, you have a function of 272 variables with a lot of minima and maxima, and at each gradient descent step you take huge steps (due to the big learning rate). So my intuition is that convergence is more a matter of luck than of hyperparameters.
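
To make the divergence mechanism concrete, here is a tiny illustration (mine, not from the article or the paper): plain gradient descent on the one-dimensional quadratic f(x) = x^2 / 2 diverges as soon as the learning rate exceeds 2, since the update is x <- (1 - lr) * x. The learning rates below are arbitrary.

    # Gradient descent on f(x) = x^2 / 2, whose gradient is x.
    # The update x <- x - lr * x shrinks x when |1 - lr| < 1 and blows up otherwise.
    def run_gd(lr, steps=50, x0=1.0):
        x = x0
        for _ in range(steps):
            x = x - lr * x
        return x

    for lr in [0.1, 1.0, 1.9, 2.1, 10.0]:
        print(f"lr={lr:5.1f} -> final x = {run_gd(lr):.3e}")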

telotortium over 1 year ago
Twitter: https://twitter.com/jaschasd/status/1756930242965606582
ArXiv: https://arxiv.org/abs/2402.06184

Abstract:

"Some fractals -- for instance those associated with the Mandelbrot and quadratic Julia sets -- are computed by iterating a function, and identifying the boundary between hyperparameters for which the resulting series diverges or remains bounded. Neural network training similarly involves iterating an update function (e.g. repeated steps of gradient descent), can result in convergent or divergent behavior, and can be extremely sensitive to small changes in hyperparameters. Motivated by these similarities, we experimentally examine the boundary between neural network hyperparameters that lead to stable and divergent training. We find that this boundary is fractal over more than ten decades of scale in all tested configurations."

Contains several cool animations zooming in to show the fractal boundary between convergent and divergent training, just like the classic Mandelbrot and Julia set animations.
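
For readers who have not seen how such boundary pictures are made, a minimal sketch (my own, using the classic Mandelbrot recipe the abstract alludes to, not the paper's code): iterate z <- z^2 + c over a grid of parameters c and mark which orbits stay bounded. The paper's twist is to replace this iterated map with repeated gradient-descent updates, and the parameter c with a pair of learning rates.

    import numpy as np

    # For each parameter c, iterate z <- z^2 + c and record whether the orbit escapes.
    def escapes(c, max_iter=100, radius=2.0):
        z = 0j
        for _ in range(max_iter):
            z = z * z + c
            if abs(z) > radius:
                return True   # diverged
        return False          # still bounded after max_iter steps

    # Crude ASCII rendering of the bounded ('#') vs divergent ('.') regions.
    xs = np.linspace(-2.0, 0.5, 80)
    ys = np.linspace(-1.25, 1.25, 40)
    for y in ys:
        print("".join("." if escapes(complex(x, y)) else "#" for x in xs))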

PheonixPharts over 1 year ago
I find this result absolutely fascinating, and it's exactly the type of research into neural networks we should be expanding.

We've rapidly engineered our way to some very impressive models this past decade, and yet the gap in our real understanding of what's going on has widened. There's a large list of very basic questions about LLMs that we haven't answered (or, in some cases, really asked). This is not a failing of the people researching in this area; it's only that things move so quickly there's not enough time to ponder things like this.

At the same time, the result, unless I'm really misunderstanding, gives me the impression that anything other than grid-search hyperparameter optimization is a fool's errand. This would give credence to the notion that hyperparameter tuning really is akin to just re-rolling a character sheet until you get one that is overpowered.

Imnimo over 1 year ago
If you are a fan of the fractals but feel intimidated by neural networks, the networks used here are actually pretty simple and not so difficult to understand if you are familiar with matrix multiplication. To generate a dataset, he samples random vectors (say of size 8) as inputs, and for each vector a target output, which is a single number. The network consists of an 8x8 matrix and an 8x1 matrix, also randomly initialized.

To generate an output from an input vector, you just multiply by your 8x8 matrix (getting a new size-8 vector), apply the tanh function to each element (look up a plot of tanh - it just squeezes its inputs to be between -1 and 1), and then multiply by the 8x1 matrix, getting a single value as an output. The elements of the two matrices are the 'weights' of the neural network, and they are updated to push the output we got towards the target.

When we update our weights, we have to decide on a step size - do we make just a little tiny nudge in the right direction, or take a giant step? The plots show what happens if we choose different step sizes for the two matrices ("input layer learning rate" is how big of a step we take for the 8x8 matrix, and "output layer learning rate" for the 8x1 matrix).

If your steps are too big, you run into a problem. Imagine trying to find the bottom of a parabola by taking steps in the direction of the downward slope - if you take a giant step, you'll pass right over the bottom and land on the opposite slope, maybe even higher than you started! This is the red region of the plots. If you take really, really tiny steps, you'll be safe, but it'll take a long time to reach the bottom. This is the dark blue section. Another way you can take a long time is to take big steps that jump from one slope to the other, but just barely small enough to end up a little lower each time (this is why there's a dark blue stripe near the boundary). The light green region is where you take Goldilocks steps - big enough to find the bottom quickly, but small enough not to jump over it.
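
A rough numpy sketch of the setup described above (my own reconstruction, so the loss, initialization scale, and data sizes are guesses rather than the author's exact code):

    import numpy as np

    def train(lr_in, lr_out, steps=200, n=8, n_data=16, seed=0):
        """Train a tiny tanh network; return the final mean squared error."""
        rng = np.random.default_rng(seed)
        X = rng.standard_normal((n_data, n))           # random input vectors
        y = rng.standard_normal(n_data)                # one target number per input
        W1 = rng.standard_normal((n, n)) / np.sqrt(n)  # 8x8 input-layer weights
        W2 = rng.standard_normal(n) / np.sqrt(n)       # 8x1 output-layer weights
        for _ in range(steps):
            h = np.tanh(X @ W1)                        # hidden activations
            err = h @ W2 - y                           # output minus target
            # Gradients of the loss 0.5 * mean(err^2) with respect to W2 and W1.
            grad_W2 = h.T @ err / n_data
            grad_W1 = X.T @ (np.outer(err, W2) * (1 - h ** 2)) / n_data
            W1 -= lr_in * grad_W1                      # step size for the 8x8 matrix
            W2 -= lr_out * grad_W2                     # step size for the 8x1 matrix
        return np.mean((np.tanh(X @ W1) @ W2 - y) ** 2)

    # One "pixel" per learning-rate pair: a small value means converged, nan/inf means diverged.
    print(train(0.5, 0.5), train(30.0, 30.0))

Sweeping lr_in and lr_out over a fine grid and coloring each pixel by whether (and how fast) the loss converges is, in essence, how the pictures in the post are produced.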

magicalhippo over 1 year ago
Here's the associated blog post, which includes the videos: https://sohl-dickstein.github.io/2024/02/12/fractal.html

Not an ML'er, so not sure what to make of it beyond a fascinating connection.

fancyfredbot over 1 year ago
This is really fun, and beautiful. Also, despite what people are saying about the learning rates being unrealistic, the findings really fit well with my own experience of using optimisation algorithms in the real world. If our code ever had a significant difference in results between processor architectures (e.g. a machine taking an AVX code path vs an SSE one), you could be sure that every time the difference began during execution of an optimisation algorithm. The chaotic sensitivity to initial conditions really showed up there, just as it did in the author's Newton solver plot. Although I knew at some level that this behaviour was chaotic, it never would have occurred to me to ask whether it made a pretty fractal!

why_only_15 over 1 year ago
I appreciate that his acknowledgements here were to his daughter ("for detailed feedback on the generated fractals") and wife ("for providing feedback on a draft of this post").

fallingfrog over 1 year ago
This is kind of random, but I wonder: if you had a sufficiently complex lens, or series of lenses, perhaps with specific areas darkened, could you make a lens that shone light through if presented with, say, a cat, but not with anything else? Bending light and darkening it selectively could probably reproduce a layer of a neural net. That would be cool. I suppose you would need some substance that responded to light in a nonlinear way.

mchinen over 1 year ago
This is really fun to see. I love toy experiments like this. I see that each plot always uses the same initialization of weights, which presumably makes it possible to have more smoothness between pixels. I would also guess it's using the same random seed for training (shuffling data).

I'd be curious to know what the plots would look like with a different randomness/shuffling of each pixel's dataset. I'd guess that for the high learning rates it would be too noisy, but you might see fractal behavior at more typical and practical learning rates. You could also do the same with the random initialization of each dataset. This would get at whether the chaotic boundary also exists in more practical use cases.

arkano over 1 year ago
If you liked this, you may also enjoy "Back Propagation is Sensitive to Initial Conditions" from the early 90s. The discussion section is fun.

https://proceedings.neurips.cc/paper/1990/file/1543843a4723ed2ab08e18053ae6dc5b-Paper.pdf

radarsat1 over 1 year ago
I'm really curious what effect the common tricks for training have on the smoothness of this landscape: momentum, skip connections, batch/layer/etc. normalization, even model size.

I imagine the fractal or chaos is still there, but maybe "smoother" and easier for metalearning to deal with?

Wherecombinator over 1 year ago
This is pretty interesting. Can't help but be reminded of all the times I've done acid. Having been deep in 'fractal country' a few times, I've always felt the psychedelic effect comes from my brain going haywire and messing up its pattern recognition. I wonder if it's related to this.

KuzMenachem over 1 year ago
Reminds me of an excellent 3blue1brown video about Newton's method [1]. You can see similar fractal patterns emerge there too.

[1] https://www.youtube.com/watch?v=-RdOwhmqP5s
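
For a flavor of why Newton's method produces fractal basin boundaries, a small sketch (mine, using the standard z^3 - 1 example rather than anything taken from the video): color each starting point by the root it converges to.

    import numpy as np

    roots = np.exp(2j * np.pi * np.arange(3) / 3)    # the three cube roots of 1

    # Run Newton's method for f(z) = z^3 - 1 and report which root we land near.
    def newton_basin(z, max_iter=40):
        for _ in range(max_iter):
            z = z - (z ** 3 - 1) / (3 * z ** 2)
        return int(np.argmin(np.abs(roots - z)))

    # Crude ASCII picture: one character per basin; the boundaries are the fractal part.
    xs = np.linspace(-1.5, 1.5, 90)
    ys = np.linspace(-1.5, 1.5, 45)
    for y in ys:
        print("".join(".ox"[newton_basin(complex(x, y))] for x in xs))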

int_19h over 1 year ago
I hope one day we'll have generative AI capable of producing stuff like this on demand:

https://www.youtube.com/watch?v=8cgp2WNNKmQ

kalu over 1 year ago
So this author trained a neural network billions of times using different hyperparameters? How much did that cost?

milliams over 1 year ago
I'd argue that these are not fractals in the mathematical sense, but they do seem to be demonstrating chaos.

karxxm over 1 year ago
What’s that color-map called?

albertgt over 1 year ago
Dave Bowman> omg it's full of fractals

HAL> why yes Dave what did you think I was made of

7e over 1 year ago
Today I learned that if something is detailed, it is now fractal.