
Kan: Kolmogorov-Arnold Networks

28 points by chuckhend about 1 year ago

4 comments

vessenes about 1 year ago
So, this is a bit of an opus -- the authors would definitely like to usher in a new paradigm for compute, is my inexpert reading.

Overall the idea, if I understand it, is that rather than using linear maps plus fixed activations between layers in a deep learning setup (so-called MLPs - Multi-Layer Perceptrons), you use functions ("splines") which can be learned using some sort of backprop at the nodes.

The idea (well, one of many) is that using more complex functions and making them learnable during training means you can encapsulate more complex functions in a lower-dimensionality matrix.

Upsides: they claim a smaller number of training steps and lower dimensionality for a number of toy problems. Also, since you're training functions, and these functions might have periodicity to them, and that periodicity can be adjusted during training, and that periodicity might tie to sets of data with varying states, they claim you can update parts of the periodic functions for specific information without impacting the *whole* function, and therefore this is a better conceptual architecture for dealing with special cases, "forgetting", and so on.

Downsides: they mention it's much slower to train than an MLP architecture, although they claim they didn't try very hard to optimize.

My totally uninformed complaints: the networks are trained on REALLY simple problems, like approximating x*y. I mean, this is a fairly beautifully written and illustrated 40+ page paper, with really quite a lot of LaTeX, and I (mostly) read the whole thing. It feels hard to read it without at least seeing, like, some MNIST training results.

Upshot - I'm not sure that we'll all be training KAN networks in 2024.
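A minimal sketch of the contrast described above, under stated assumptions: the `ToyKANLayer` below puts a learnable univariate function on every edge (parameterized here as a sum of fixed Gaussian basis functions with learnable weights, standing in for the paper's B-splines), while the MLP layer is a learned linear map followed by a fixed nonlinearity. The class names, shapes, and basis choice are illustrative assumptions, not the authors' implementation.

```python
# Hedged toy sketch (not the paper's reference code): a "KAN-style" layer where
# each edge applies its own learnable univariate function, vs. a standard MLP
# layer (learned linear map + fixed ReLU nonlinearity).
import torch
import torch.nn as nn

class ToyKANLayer(nn.Module):
    def __init__(self, in_dim, out_dim, num_basis=8):
        super().__init__()
        # Fixed Gaussian basis centers on [-1, 1]; only the per-edge basis
        # weights are learned (a crude stand-in for the paper's B-splines).
        self.register_buffer("centers", torch.linspace(-1.0, 1.0, num_basis))
        self.weights = nn.Parameter(torch.randn(out_dim, in_dim, num_basis) * 0.1)

    def forward(self, x):  # x: (batch, in_dim)
        # phi[b, i, k] = exp(-(x_bi - c_k)^2): basis evaluated per input coordinate.
        phi = torch.exp(-(x.unsqueeze(-1) - self.centers) ** 2)
        # Edge (j, i) applies its own function sum_k w_jik * phi_k(x_i);
        # output unit j just sums those edge functions over i.
        return torch.einsum("bik,jik->bj", phi, self.weights)

class MLPLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x):
        return torch.relu(self.linear(x))  # fixed nonlinearity, learned linear map

if __name__ == "__main__":
    x = torch.rand(32, 2) * 2 - 1  # toy inputs in [-1, 1]^2
    print(ToyKANLayer(2, 4)(x).shape, MLPLayer(2, 4)(x).shape)
```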
kkylin about 1 year ago
Previous discussion: https://news.ycombinator.com/item?id=40219205
shawntan about 1 year ago
There's a general trap people working on deep learning tend to fall into, thinking "Why don't we learn the activation function as well?"

The answer to that really should be that a combination of linear maps and non-linear activations can already learn the non-linearities you need. https://twitter.com/bozavlado/status/1787376558484709691

Though there are other types of functions that these "universally approximating" formulations don't extrapolate well to, and solutions to that might actually be an improvement (think: sin, cos) -- see the sketch below.
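A hedged toy illustration of that extrapolation point (my own sketch, not from the linked thread; the `SinFeature` model and all hyperparameters are assumptions): fit sin(x) on [-π, π] with a small ReLU MLP and with an explicitly periodic parameterization, then compare errors far outside the training interval, where only the periodic form keeps tracking the target.

```python
# Hedged illustration: both models fit sin(x) on [-pi, pi]; only the periodic
# parameterization extrapolates well to [5*pi, 6*pi].
import torch
import torch.nn as nn

torch.manual_seed(0)
x_train = torch.linspace(-torch.pi, torch.pi, 256).unsqueeze(1)
y_train = torch.sin(x_train)

# (a) Small ReLU MLP: a universal approximator on the training interval,
# but it extrapolates roughly piecewise-linearly outside it.
mlp = nn.Sequential(nn.Linear(1, 64), nn.ReLU(),
                    nn.Linear(64, 64), nn.ReLU(),
                    nn.Linear(64, 1))

class SinFeature(nn.Module):
    # (b) y = a * sin(w * x + b): a parameterization that "knows" about periodicity.
    def __init__(self):
        super().__init__()
        self.a = nn.Parameter(torch.tensor(0.5))
        self.w = nn.Parameter(torch.tensor(1.2))
        self.b = nn.Parameter(torch.tensor(0.0))

    def forward(self, x):
        return self.a * torch.sin(self.w * x + self.b)

for model in (mlp, SinFeature()):
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(2000):
        opt.zero_grad()
        loss = ((model(x_train) - y_train) ** 2).mean()
        loss.backward()
        opt.step()
    # Evaluate well outside the training range.
    x_far = torch.linspace(5 * torch.pi, 6 * torch.pi, 256).unsqueeze(1)
    err = ((model(x_far) - torch.sin(x_far)) ** 2).mean()
    print(type(model).__name__, f"extrapolation MSE: {err.item():.4f}")
```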
yibaimeng about 1 year ago
One thing to note is the authors mostly have "AI for science" in mind, rather than machine learning in general. Quite a few examples in the paper are about discovering new conservation laws.