The paper is "ImageNet Classification with Deep Convolutional Neural Networks" by Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton, and is available here:<p><a href="https://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks" rel="nofollow">https://papers.nips.cc/paper/4824-imagenet-classification-wi...</a>
I enjoyed genji256's comment on the article:<p>genji256: "Anecdote: I was one of the three reviewers for that paper and I tend to review harshly. A few years after it was published, I started worrying that I had given it a bad score and completely missed a field-changing paper. I frantically dug through my emails and found the review. Turns out I gave it a 7/10, so it wasn't THAT bad, though my summary makes me cringe a bit:
'A paper which, by giving precise details on the various tricks used, is a useful addition to the deep learning literature. I wish comparisons with other techniques were somewhat fairer.' "
I'm curious: are there papers in other ML fields that could be considered breakthroughs comparable in impact to AlexNet?<p>For NLP, the recent ELMo and BERT papers on word embeddings come to mind, although their scope is somewhat different from AlexNet's.
Man, I took a neural network class in uni and loved it. It was offered by the Psych department (though I was in Comp Sci). All I remember now is the MATLAB labs and some of the terminology, but otherwise nothing at all. My career took me nowhere near this subject matter and I've regrettably forgotten most of it, so I appreciate this article explaining the basics again.
> Right now, I can open up Google Photos, type "beach," and see my photos from various beaches I've visited over the last decade. I never went through my photos and labeled them; instead, Google identifies beaches based on the contents of the photos themselves. This seemingly mundane feature<p>“Seemingly mundane”?? This is scary as hell.