
Algebraic Simplification Neural Nets

3 points | by daly | about 1 year ago
It seems that there should be a way to algebraically simplify neural nets.

Trivially, a node that has a zero weight can be removed, as can any links to/from that node.

It should also be possible to eliminate nodes that have a full value (i.e. '1' on the 0-1 scale).

I have also seen work where the matrix multiplies during training can have columns "collapsed".

The ultimate question might be applying an "algebraic simplification" to the final network, to simplify a post-trained network used for inference.

The idea is to take a path through a network, construct the equation for that path, and reduce it to a shorter path by combining nodes and weights.

It is certain that a node participates in several (hundred?) paths. In this case it might be useful to "copy" the node so it can be part of a path reduction without affecting other paths.

I believe that in theory some neural networks can be reduced to a single hidden layer [1]. The game would be to algebraically reduce network depth.

[1] Lee, et al. "On the ability of neural nets to express distributions", https://arxiv.org/pdf/1702.07028.pdf (2021)

No comments yet.