
TechEcho

A tech news platform built with Next.js, providing global tech news and discussions.


© 2025 TechEcho. All rights reserved.

The most cited deep learning papers

452 points by sdomino over 8 years ago

12 comments

cr0sh over 8 years ago
I can understand why it probably isn't on the list yet (not as many citations, since it is fairly new), but NVIDIA's "End to End Learning for Self-Driving Cars" needs to be mentioned, I think:

https://arxiv.org/abs/1604.07316

https://images.nvidia.com/content/tegra/automotive/images/2016/solutions/pdf/end-to-end-dl-using-px.pdf

I implemented a slight variation on this CNN using Keras and TensorFlow for the third project in term 1 of Udacity's Self-Driving Car Engineer nanodegree (nothing special in that regard: it was a commonly used implementation, because it works). Give it a shot yourself: take this paper, install TensorFlow, Keras, and Python, download a copy of Udacity's Unity3D car simulator (it was recently released on GitHub), and have a go at it!

Note: for training purposes, I highly recommend building a training/validation set using a steering wheel controller, and you'll want a labeled set of about 40k samples (though I have heard you can get by with far fewer, even unaugmented; my set used augmentation to boost about 8k real samples up to around 40k). You'll also want a GPU and/or a generator or some other batch processing for training (otherwise you'll run out of memory post-haste).
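For anyone curious what that network actually looks like, the layer stack from the paper's architecture diagram is small enough to sanity-check by hand. This pure-Python sketch (layer sizes taken from the paper: a 66x200 YUV input into five convolutional layers) just walks the output shape through each layer:

```python
def conv_out(size, kernel, stride):
    # output length of a "valid" (no-padding) convolution along one axis
    return (size - kernel) // stride + 1

# (filters, kernel size, stride) for the five conv layers in the NVIDIA paper
layers = [(24, 5, 2), (36, 5, 2), (48, 5, 2), (64, 3, 1), (64, 3, 1)]

h, w, c = 66, 200, 3  # the paper's YUV input planes
for filters, k, s in layers:
    h, w, c = conv_out(h, k, s), conv_out(w, k, s), filters
    print(f"conv {k}x{k} stride {s} -> {h}x{w}x{c}")

flat = h * w * c  # flattened feature map feeding the fully connected layers
print("flattened:", flat)
```

The final 1x18x64 feature map flattens to 1152 values, which then feed the paper's fully connected layers down to a single steering output. In Keras this maps directly onto a stack of `Conv2D` and `Dense` layers.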
pizza over 8 years ago

http://people.idsia.ch/~juergen/deep-learning-conspiracy.html oh Juergen

> Machine learning is the science of credit assignment. The machine learning community itself profits from proper credit assignment to its members. The inventor of an important method should get credit for inventing it. She may not always be the one who popularizes it. Then the popularizer should get credit for popularizing it (but not for inventing it). Relatively young research areas such as machine learning should adopt the honor code of mature fields such as mathematics: if you have a new theorem, but use a proof technique similar to somebody else's, you must make this very clear. If you "re-invent" something that was already known, and only later become aware of this, you must at least make it clear later.
kriro over 8 years ago

This might be as good a place to ask as any. Does anyone have suggestions on annotating natural language text to get a ground truth for things that have no readily available ground truth (subjective judgments of content, etc.)? I own the book "Natural Language Annotation", which is good but not exactly what I need. Annotation guidelines, and how the annotation was done in practice, are often only brushed over in research papers. I get it at a high level: have a couple of raters, calculate inter- and intra-rater reliability, and try to optimize that. But like I said, I'm struggling a bit with the details. What are good values to aim for, how many raters do you want, do you even want experts or should you crowdsource, what do good annotation guidelines look like, how do you optimize them, etc.? Just to play around with the idea, we ran a workshop with four raters and 250 tweets each (raters simply assigned one category to the entire tweet), and that was already quite a bit of work yet still feels on the too-small side of things.

I feel like I should find a lot more on this in the sentiment analysis literature, but I really don't.
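For the inter-rater part, the usual starting point is chance-corrected agreement such as Cohen's kappa. A minimal pure-Python version for two raters (the example labels below are invented):

```python
from collections import Counter

def cohen_kappa(a, b):
    """Cohen's kappa for two raters' labels over the same items."""
    assert len(a) == len(b) and len(a) > 0
    n = len(a)
    # observed agreement: fraction of items where the raters match
    observed = sum(x == y for x, y in zip(a, b)) / n
    # expected agreement: chance of a match given each rater's label frequencies
    ca, cb = Counter(a), Counter(b)
    expected = sum(ca[label] * cb[label] for label in set(a) | set(b)) / (n * n)
    return (observed - expected) / (1 - expected)

ratings_a = ["pos", "pos", "neg", "neg"]
ratings_b = ["pos", "neg", "neg", "neg"]
print(cohen_kappa(ratings_a, ratings_b))  # 0.5
```

A commonly cited (if debated) rule of thumb from Landis and Koch treats 0.61-0.80 as "substantial" agreement. For more than two raters, or for missing ratings, Fleiss' kappa and Krippendorff's alpha are the usual generalizations.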
nojvek over 8 years ago

Someone needs to write a summary of the top papers and explain them in a way a layman can understand. I would pay $500 for such a book/course explaining the techniques.

I've been reading a number of these papers, but it's really tough to understand the nitty-gritty of them.
curuinor over 8 years ago

No PDP book? It's old and weird but interesting, and it has a lot of original ideas, notwithstanding that the original backprop predates it. Nor the original backprop work?
pks2006 over 8 years ago

I've always wanted to apply deep learning to my day-to-day work. We build our own hardware that runs Linux on an Intel CPU and then launches a virtual machine running our proprietary code. Our code generates a lot of system logs that vary based on the boot sequence, environment temperature, software config, etc. We spend a significant amount of time going over these logs when issues are reported. Sometimes there is a one-to-one mapping from an issue to its logs, but more often, RCA'ing the issue requires knowledge of how the system works and correlating that with the logs generated. We have tons of these logs that could be used as a training set. Any clues on how to put all this together so that RCA'ing an issue involves as little human effort as possible?
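One cheap baseline before reaching for deep learning: treat past RCA'd issues as a labeled corpus and retrieve the nearest known root cause by log similarity. A sketch using a bag-of-words cosine match (the log lines and root-cause labels here are invented):

```python
import math
import re
from collections import Counter

def vectorize(log_text):
    # bag-of-words over lowercase tokens; a real system would mine
    # log templates first (grouping lines by their format string)
    return Counter(re.findall(r"[a-z0-9_]+", log_text.lower()))

def cosine(a, b):
    # cosine similarity between two token-count vectors
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def diagnose(new_log, labeled_logs):
    # labeled_logs: list of (log_text, root_cause) from past RCA'd issues;
    # returns the root cause of the most similar past log
    return max(labeled_logs, key=lambda lc: cosine(vectorize(new_log), vectorize(lc[0])))[1]

known = [
    ("cpu temp exceeded threshold thermal throttle engaged", "overheating"),
    ("vm boot failed missing kernel image", "bad boot image"),
]
print(diagnose("thermal threshold exceeded cpu throttle during boot", known))
```

If even this retrieves the right root cause reasonably often, the signal is in the vocabulary of the logs, and sequence models over log-event streams are the natural step up from there.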
mathoff over 8 years ago

The most cited deep learning papers: https://scholar.google.com/scholar?q="deep+learning"
gv2323 over 8 years ago
Has anyone downloaded them into their own separate folders and zipped the whole thing up?
gravypod over 8 years ago

This is a really lucky find for me, as I was just about to try to get into machine learning. Right now I need some help getting started with writing machine learning code; I don't know where to start. I've come up with a very simple project that I think would work very well for this.

I want to buy a Raspberry Pi Zero, put it in a nice case, add two push buttons, and turn it into a car music player (hooked into the USB charger and 3.5mm jack in my car). The two buttons will be "like" and "skip & dislike". I'll fill it with my music collection and write a Python script that just finds a song, plays it, and waits for button clicks.

I want the "like" button to be positive reinforcement and "skip & dislike" to be negative reinforcement.

Could someone point me in the right direction?
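The like/skip setup described above is closer to a textbook multi-armed bandit than to deep learning, and an epsilon-greedy loop is probably all the "reinforcement" it needs. A sketch (the 0.1 exploration rate and the song names in the test data are placeholders):

```python
import random

def pick_song(stats, songs, epsilon=0.1):
    """Pick the song with the best like rate, exploring at random with probability epsilon.

    stats maps song -> (likes, plays); unplayed songs are always tried first.
    """
    unplayed = [s for s in songs if s not in stats]
    if unplayed or random.random() < epsilon:
        return random.choice(unplayed or songs)
    return max(songs, key=lambda s: stats[s][0] / stats[s][1])

def record_feedback(stats, song, liked):
    """Update (likes, plays) after the 'like' or 'skip & dislike' button is pressed."""
    likes, plays = stats.get(song, (0, 0))
    stats[song] = (likes + (1 if liked else 0), plays + 1)
```

Persisting `stats` to disk on shutdown lets the Pi remember across car trips, and a fancier model later (say, a contextual bandit on time of day or audio features) can replace `pick_song` without touching the two buttons.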
applecore over 8 years ago

Classic papers can be worth reading, but it's still useful to know what's trending.

Even a simple algorithm would be effective: the number of citations for each paper, decayed by the age of the paper in years.
EternalData over 8 years ago
Nice. Super excited to read through and build out a few things myself.
husky480 over 8 years ago

TorchCraft is the best way to learn about machine learning.

If you can sim a set of boxes, you can learn what's inside them.