
Pruning neural networks without any data by iteratively conserving synaptic flow

11 points by blopeur nearly 5 years ago

1 comment

seesawtron nearly 5 years ago
Most pruning algorithms depend on iterating over training data to evaluate which synaptic weights can be pruned/removed to make the network optimal without affecting performance on test data. A newer class of pruning algorithms prunes at initialization without looking at the data, but these either suffer from catastrophic layer-collapse or require an impractical amount of computation. In this study, the authors claim to present a method that is (i) data independent, (ii) computationally efficient, and (iii) achieves better performance than existing pruning algorithms.

Key idea of their iterative approach:

"..conservation alone leads to layer-collapse by assigning parameters in the largest layers with lower scores relative to parameters in smaller layers. However, if conservation is coupled with iterative pruning, then when the largest layer is pruned, becoming smaller, then in subsequent iterations the remaining parameters of this layer will be assigned higher relative scores. With sufficient iterations, conservation coupled with iteration leads to a self-balancing pruning strategy allowing IMP to avoid layer-collapse."
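To make that iterative, data-free loop concrete, here is a minimal PyTorch sketch of SynFlow-style pruning as described in the paper: each weight is scored by |w ⊙ ∂R/∂w|, where R is the scalar output of the absolute-value ("linearized") network on an all-ones input, and the lowest-scoring weights are removed over many rounds with rescoring in between. Function names (`synflow_scores`, `synflow_prune`) and minor details are illustrative assumptions, not the authors' reference implementation.

```python
import torch
import torch.nn as nn


def synflow_scores(model: nn.Module, input_shape):
    """Score every parameter by |w * dR/dw| without touching any data:
    R is the scalar output of the absolute-value network evaluated on
    an all-ones input."""
    # Temporarily replace each weight with its absolute value,
    # remembering the signs so they can be restored afterwards.
    signs = {}
    with torch.no_grad():
        for name, param in model.state_dict().items():
            signs[name] = torch.sign(param)
            param.abs_()

    model.zero_grad()
    ones = torch.ones(1, *input_shape)   # data-free probe input
    R = model(ones).sum()                # synaptic-flow objective
    R.backward()

    scores = {name: (param.grad * param).detach().abs()
              for name, param in model.named_parameters()
              if param.grad is not None}

    # Restore the original signed weights.
    with torch.no_grad():
        for name, param in model.state_dict().items():
            param.mul_(signs[name])
    return scores


def synflow_prune(model, input_shape, sparsity=0.9, rounds=100):
    """Prune to the target sparsity over many small rounds. Rescoring
    after each round is what lets a freshly shrunk layer regain relative
    score, which is how the method avoids layer-collapse.
    For simplicity this masks every parameter tensor, including biases."""
    masks = {name: torch.ones_like(p) for name, p in model.named_parameters()}
    for k in range(1, rounds + 1):
        keep = (1.0 - sparsity) ** (k / rounds)   # exponential schedule
        scores = synflow_scores(model, input_shape)
        flat = torch.cat([s.flatten() for s in scores.values()])
        threshold = torch.quantile(flat, 1.0 - keep)
        with torch.no_grad():
            for name, param in model.named_parameters():
                masks[name] = masks[name] * (scores[name] > threshold).float()
                param.mul_(masks[name])           # zero out pruned weights
    return masks
```

Already-pruned weights score zero and so stay below the threshold in later rounds; the point of the sketch is that no batch of real training data ever enters the scoring loop.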