Basic Neural Network on Python

172 points by dfrodriguez143 almost 12 years ago

8 comments

gamegoblin almost 12 years ago
Very good write-up. If you want to trade accuracy and memory for speed, you can make a large lookup table for your sigmoid function, which should just about double its speed.

As an aside, and not to be too critical, because the post was great, but as (presumably) a non-native English speaker, you might run a spell-checker on your post. There are also some missing pronouns which make some sentences read very Spanishy.
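
A minimal sketch of that lookup-table idea (the grid range, table size, and rounding behavior here are illustrative assumptions, not details from the post or the comment):

    import numpy as np

    # Precompute the sigmoid on a fixed grid; inputs outside the grid are
    # clipped, giving up some accuracy (and memory) in exchange for speed.
    LO, HI, N = -8.0, 8.0, 4096
    _table = 1.0 / (1.0 + np.exp(-np.linspace(LO, HI, N)))

    def sigmoid_lut(x):
        # Map each input to its nearest precomputed grid index.
        x = np.asarray(x)
        idx = np.clip(np.rint((x - LO) * (N - 1) / (HI - LO)).astype(int), 0, N - 1)
        return _table[idx]

Whether this actually doubles the speed will depend on array sizes and the NumPy build, so it is worth benchmarking against a direct np.exp call.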
benhamner almost 12 years ago
Both datasets you used (iris and digits) are way too simple for neural networks to shine.

Neural networks / deep neural networks work best in domains where the underlying data has a very rich, complex, and hierarchical structure (such as computer vision and speech recognition). Currently, training these models is both computationally expensive and fickle. Most state-of-the-art research in this area is performed on GPUs, and there are many tunable parameters.

For most typical applied machine learning problems, especially on simpler datasets that fit in RAM, variants of ensembled decision trees (such as Random Forests) tend to perform at least as well as neural networks, with less parameter tuning and far shorter training times.
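
A rough scikit-learn sketch of that claim on iris (the split ratio and forest size are arbitrary choices, and results will vary):

    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=0)

    # Near-default settings; no per-dataset parameter tuning.
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X_train, y_train)
    print("test accuracy: %.3f" % clf.score(X_test, y_test))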
theschreon almost 12 years ago
You could try the following improvements to speed up neural network training:

- Resilient Propagation (RPROP), which significantly speeds up training for full-batch learning: http://davinci.fmph.uniba.sk/~uhliarik4/recognition/resources/rprop/rb_1993_rprop.pdf

- RMSProp, introduced by Geoffrey Hinton, which also speeds up training and can be used for mini-batch learning as well (a sketch of the update rule follows this comment): https://class.coursera.org/neuralnets-2012-001/lecture/67 (sign up to view the video)

Please consider more datasets when benchmarking methods:

- MNIST (70k 28x28-pixel images of handwritten digits): http://yann.lecun.com/exdb/mnist/ . There are several wrappers for Python on GitHub.

- UCI Machine Learning Repository: http://archive.ics.uci.edu/ml/datasets.html
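
A minimal NumPy sketch of the RMSProp update mentioned above (the learning rate, decay, and epsilon are common defaults, not values from Hinton's lecture):

    import numpy as np

    def rmsprop_update(w, grad, cache, lr=1e-2, decay=0.9, eps=1e-8):
        # Keep a running average of squared gradients, then scale each
        # weight's step by the root of that average.
        cache = decay * cache + (1 - decay) * grad ** 2
        w = w - lr * grad / (np.sqrt(cache) + eps)
        return w, cache

    # Toy usage: minimize f(w) = ||w - 1||^2, whose gradient is 2 * (w - 1).
    w, cache = np.zeros(3), np.zeros(3)
    for _ in range(2000):
        w, cache = rmsprop_update(w, 2 * (w - 1), cache)
    print(w)  # close to [1. 1. 1.]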
mbq almost 12 years ago
You are just doing a single validation on one test set rather than cross-validation; the point of CV is to run many iterations of validation on different train-test splits and average the results.
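
A short scikit-learn sketch of that procedure (the classifier and fold count here are stand-ins, not anything from the post):

    import numpy as np
    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import KFold

    X, y = load_digits(return_X_y=True)
    kf = KFold(n_splits=5, shuffle=True, random_state=0)

    # Train and evaluate once per split, then average the scores.
    scores = []
    for train_idx, test_idx in kf.split(X):
        clf = LogisticRegression(max_iter=1000)
        clf.fit(X[train_idx], y[train_idx])
        scores.append(clf.score(X[test_idx], y[test_idx]))

    print("CV accuracy: %.3f +/- %.3f" % (np.mean(scores), np.std(scores)))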
lelandbatey almost 12 years ago
Hmmmm... The layout of the page seems very messed up. Is anyone else having it show up like this? http://puu.sh/3vTL8.png
scotty79 almost 12 years ago
What do learning scientists think the brain actually uses? Back-propagation and the like seem like a method a god would use to architect a static brain for a given task.
primelens almost 12 years ago
Good write-up. Is there a feed for that blog? I only found one for the comments.
skatenerd almost 12 years ago
"def function(...)"