It's hard to ask my question without sounding a bit naive :-) Back in the early nineties I did some work with convolutional neural nets, except that at the time we didn't call them "convolutional". They were just the neural nets that were not provably uninteresting :-) My biggest problem was that I didn't have enough hardware, so I put that kind of stuff on a shelf to wait for the hardware to improve (which it did, but I never got back to that shelf).

What I find a bit strange is the excitement that's going on. I find a lot of these results pretty expected. Or at least that's what *I* and anybody I talked to at the time seemed to think would happen. Of course, the thing about science is that sometimes you have to do the boring work of checking whether it does, indeed, work like that. So while I've been glancing sidelong at the ML work going on, it's been mostly a checklist of "Oh cool. So it *does* work. I'm glad."

The excitement has really been catching me off guard, though. It's as if nobody else expected it to work like this. This in turn makes me wonder if I'm being stupidly naive. Normally I find that when somebody thinks "Oh, it was obvious," it's because they had an oversimplified view that just happened to superficially match reality. I suspect that's the case with me :-)

For those doing research in the area (and I know there are some people here): what have been the biggest discoveries/hurdles we've overcome in the last 20 or 30 years? In retrospect, what were your biggest worries in terms of wondering whether it would work the way you thought it might? And going forward, what are the most obvious hurdles that, if they don't work out, might slow down or halt our progress?