TechEcho

A tech news platform built with Next.js, providing global tech news and discussions.
Hacker's guide to Neural Networks (2012)

424 points · by catherinezng · almost 8 years ago

11 comments

frenchie4111 · almost 8 years ago
I've read so many of these, none of them include the information I need.

If someone wrote a "Hackers guide to Tuning Hyperparameters" or "Hackers guide to building models for production" I would read/share the shit out of those.
Comment #14771460 not loaded
Comment #14771221 not loaded
Comment #14771152 not loaded
Comment #14771881 not loaded
Comment #14774773 not loaded
Comment #14771206 not loaded
NegatioN · almost 8 years ago
This has been submitted quite a few times in the past: https://hn.algolia.com/?query=karpathy.github.io%2Fneuralnets&sort=byPopularity&prefix&page=0&dateRange=all&type=story
Comment #14769723 not loaded
Comment #14770986 not loaded
postit · almost 8 years ago
A solid grounding in probability theory and multivariate calculus is the first thing you should spend your time on if you want to understand NNs, ML, and most of AI once and for all.

These hacker guides only scratch the surface of the subject, which, in part, contributes to the aura of black magic that haunts the field. I'm not saying that's a bad thing, but they should be complementary material, not the way to go.
stared · almost 8 years ago
When it comes to backpropagation, the PyTorch introduction contains some valuable parts: http://pytorch.org/tutorials/beginner/deep_learning_60min_blitz.html
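[Editor's note: as an illustration of the kind of backprop exercise such tutorials walk through, the chain rule can be applied by hand to a tiny expression, no framework needed. This sketch is not code from the linked tutorial.]

```python
# Manual backprop through f(x, y, z) = (x + y) * z.
# The forward pass computes the value; the backward pass applies the
# chain rule gate by gate, which is what autograd engines do internally.

def forward_backward(x, y, z):
    # forward pass
    q = x + y          # add gate
    f = q * z          # multiply gate
    # backward pass (starting from df/df = 1)
    dq = z             # df/dq for the multiply gate
    dz = q             # df/dz for the multiply gate
    dx = dq * 1.0      # dq/dx = 1 for the add gate
    dy = dq * 1.0      # dq/dy = 1 for the add gate
    return f, (dx, dy, dz)

f, grads = forward_backward(-2.0, 5.0, -4.0)
print(f, grads)  # -12.0 (-4.0, -4.0, 3.0)
```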
debacle · almost 8 years ago
Static neural networks on Rosetta Code for basic things like Hello World, etc, would do a lot to aid in people's understanding of neural networks. It would be interesting to visualize different trained solutions.
nategri · almost 8 years ago
Knew this wasn't for me when he had to introduce what a derivative was with a weird metaphor. I like this approach to teaching things (it's Feynman-y) but half the time I end up hung up on trying to understand a particular author's hand-waving for a concept I already grok.
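[Editor's note: the derivative the guide introduces informally is just the finite-difference approximation; a minimal sketch of that idea, as an editor's illustration rather than the guide's own code:]

```python
# Finite-difference approximation of a derivative:
# df/dx ≈ (f(x + h) - f(x)) / h for a small step h.

def numerical_gradient(f, x, h=1e-6):
    return (f(x + h) - f(x)) / h

f = lambda x: x ** 2
g = numerical_gradient(f, 3.0)
print(round(g, 3))  # ≈ 6.0, matching the analytic derivative 2x at x = 3
```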
adamkochanowicz · almost 8 years ago
Thank you for posting this! I hadn't seen it and have been looking for a simple guide like this one.
finchisko · almost 8 years ago
Thanks for sharing; apparently I missed the past submissions.
amelius · almost 8 years ago
Hmm, I've just scanned through this, but it seems this gets the concept of stochastic gradient descent (SGD) completely wrong.

The nice part of SGD is that you can backpropagate even functions that are not differentiable.

This is totally missed here.
Comment #14771512 not loaded
Comment #14771146 not loaded
Comment #14771808 not loaded
Comment #14771171 not loaded
Comment #14771210 not loaded
Comment #14771589 not loaded
GoldDust · almost 8 years ago
As someone who is quite new to this field and also a software developer, I really look forward to seeing this progress. I write and look at code all day, so for me this is much easier to read than the dry math!
du_bing · almost 8 years ago
Wonderful guide, thanks for sharing!