AI winter is well on its way

993 points by wei_jok almost 7 years ago

80 comments

bitL almost 7 years ago
I was recently "playing" with some radiology data. I had no chance to identify diagnoses myself with untrained eyes, something that probably takes years for a decent radiologist to master. Just by using DenseNet-BC-100-12 I ended up with 83% ROC AUC after a few hours of training. In 4 out of 12 categories this classifier beat the best-performing human radiologists. Now the very same model, with no change other than adjusting the number of categories, could be used in any image classification task, likely with state-of-the-art results. I was surprised when I applied it to another, completely unrelated dataset and got >92% accuracy right away.

If you think this is a symptom of AI winter, then you are probably wasting time on outdated/dysfunctional models or models that aren't suited for what you want to accomplish. Looking e.g. at Google Duplex (better voice synthesis than the Vocaloid I use for making music), this pushed the state of the art to unbelievable levels in hard-to-address domains. I believe the whole SW industry will be living for the next 10 years off the gradual addition of these concepts into production.

If you think Deep (Reinforcement) Learning is going to solve AGI, you are out of luck. If you however think it's useless and won't bring us anywhere, you are guaranteed to be wrong. Frankly, if you are working with Deep Learning daily, you are probably not seeing the big picture (i.e. how horrible the methods used in real life are, and how you can easily get a very economical 5% benefit just by plugging Deep Learning in somewhere in the pipeline; this might seem little, but managers would kill for 5% of extra profit).
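[Editor's note: a minimal sketch of the workflow the commenter describes - fine-tuning a DenseNet for multi-label image classification and scoring it with per-class ROC AUC. This is not the commenter's actual setup: torchvision's DenseNet-121 stands in for DenseNet-BC-100-12, and the data below are random placeholders, so the printed numbers are meaningless.]

```python
# Hedged sketch: fine-tune a DenseNet for 12-way multi-label classification
# and report mean ROC AUC. Dataset and label semantics are hypothetical.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from torchvision.models import densenet121
from sklearn.metrics import roc_auc_score

NUM_CLASSES = 12  # e.g. one output per radiology finding

model = densenet121(weights=None)  # pass pretrained weights here to fine-tune instead
model.classifier = nn.Linear(model.classifier.in_features, NUM_CLASSES)

# Placeholder data: 64 fake "X-rays" (3x224x224) with random multi-hot labels.
images = torch.randn(64, 3, 224, 224)
labels = (torch.rand(64, NUM_CLASSES) > 0.7).float()
loader = DataLoader(TensorDataset(images, labels), batch_size=16, shuffle=True)

criterion = nn.BCEWithLogitsLoss()  # independent sigmoid per finding
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

model.train()
for epoch in range(2):  # a real run would train far longer
    for x, y in loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()

# Evaluation: per-class ROC AUC, the metric behind the 83% figure quoted above.
model.eval()
with torch.no_grad():
    probs = torch.sigmoid(model(images)).numpy()
aucs = [roc_auc_score(labels[:, c].numpy(), probs[:, c]) for c in range(NUM_CLASSES)]
print("mean ROC AUC:", sum(aucs) / len(aucs))
```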
joe_the_user almost 7 years ago
This is a deep, significant post (pardon the pun, etc.).

The author is clearly informed and takes a strong, historical view of the situation. Looking at what the really smart people who brought us this innovation have said and done lately is a good start imo (just one datum of course, but there are others in this interesting survey).

"Deepmind hasn't shown anything breathtaking since their Alpha Go zero."

Another thing to consider about AlphaGo and AlphaGo Zero is the vast, vast amount of computing firepower that this application mobilized. While it was often repeated that ordinary Go programs weren't making progress, this wasn't true - the best amateur programs had gotten to about 2 dan amateur using Monte Carlo Tree Search. AlphaGo added CNNs for its weighting function and petaflops of compute for its process, and got effectiveness up to best in the world, 9 dan professional (maybe 11 dan amateur for pure comparison). [1]

AlphaGo Zero was supposedly even more powerful, and learned without human intervention. BUT it cost petaflops upon petaflops of compute, expensive enough that they released a total of ten or twenty AlphaGo Zero games to the world, labeled "a great gift".

The author conveniently reproduces the chart of power versus results. Look at it, consider it. Consider the chart in the context of Moore's Law retreating. The problems of AlphaZero generalize, as described in the article.

The author could also have dived into the troubling questions of "AI as ordinary computer application" (what does testing, debugging, interface design, etc. mean when the app is automatically generated in an ad-hoc fashion) or "explainability". But when you can paint a troubling picture without these gnawing problems appearing, you've done well.

[1] https://en.wikipedia.org/wiki/Go_ranks_and_ratings
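[Editor's note: for readers unfamiliar with the Monte Carlo Tree Search mentioned above, here is a bare-bones UCT sketch. The game-state interface (legal_moves, play, copy, is_terminal, result) is hypothetical, and the reward bookkeeping is simplified to a single perspective; AlphaGo's change, roughly, was to guide this search with policy/value CNNs and vastly more compute.]

```python
# Minimal UCT (Monte Carlo Tree Search) sketch under an assumed game-state API.
import math
import random

class Node:
    def __init__(self, state, parent=None):
        self.state = state
        self.parent = parent
        self.children = []              # (move, Node) pairs
        self.untried = state.legal_moves()
        self.visits = 0
        self.reward = 0.0               # simplified: single-perspective reward

    def ucb1(self, c=1.4):
        # Exploitation term plus exploration bonus (UCB1).
        return (self.reward / self.visits) + c * math.sqrt(math.log(self.parent.visits) / self.visits)

def mcts(root_state, iterations=1000):
    root = Node(root_state)
    for _ in range(iterations):
        node, state = root, root_state.copy()
        # 1. Selection: descend while fully expanded and children exist.
        while not node.untried and node.children:
            _, node = max(node.children, key=lambda mc: mc[1].ucb1())
            state = node.state.copy()
        # 2. Expansion: add one untried child.
        if node.untried:
            move = node.untried.pop()
            state.play(move)
            child = Node(state.copy(), parent=node)
            node.children.append((move, child))
            node = child
        # 3. Simulation: random playout to the end of the game.
        while not state.is_terminal():
            state.play(random.choice(state.legal_moves()))
        # 4. Backpropagation (a real engine flips the result for the opponent's nodes).
        result = state.result()         # e.g. 1.0 for a win, 0.0 for a loss
        while node is not None:
            node.visits += 1
            node.reward += result
            node = node.parent
    # Play the most-visited move at the root.
    return max(root.children, key=lambda mc: mc[1].visits)[0]
```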
nopinsight almost 7 years ago
A different take by Google's cofounder, Sergey Brin, in his most recent Founders' Letter to investors:

"The new spring in artificial intelligence is the most significant development in computing in my lifetime."

He listed many examples below the quote:

"understand images in Google Photos;
enable Waymo cars to recognize and distinguish objects safely;
significantly improve sound and camera quality in our hardware;
understand and produce speech for Google Home;
translate over 100 languages in Google Translate;
caption over a billion videos in 10 languages on YouTube;
improve the efficiency of our data centers;
help doctors diagnose diseases, such as diabetic retinopathy;
discover new planetary systems; ..."

https://abc.xyz/investor/founders-letters/2017/index.html

An example from another continent:

"To build the database, the hospital said it spent nearly two years to study more than 100,000 of its digital medical records spanning 12 years. The hospital also trained the AI tool using data from over 300 million medical records (link in Chinese) dating back to the 1990s from other hospitals in China. The tool has an accuracy rate of over 90% for diagnoses for more than 200 diseases, it said."

https://qz.com/1244410/faced-with-a-doctor-shortage-a-chinese-hospital-is-betting-big-on-artificial-intelligence-to-treat-patients/
dekhn almost 7 years ago
I'm a scientist from a field outside ML who knows that ML can contribute to science. But I'm also really sad to see false claims in papers. For example, a good scientist can read an ML paper, see claims of 99% accuracy, and then probe further to figure out what the claims really mean. I do that a lot, and I find that accuracy inflation and careless mismanagement of data mars most "sexy" ML papers. To me, that's what's going to lead to a new AI winter.
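[Editor's note: a small illustration of why a "99% accuracy" claim needs probing. The data below are synthetic and not taken from any paper; the point is only that on an imbalanced dataset a model that never predicts the rare class can look excellent by accuracy alone.]

```python
# Accuracy inflation on an imbalanced synthetic dataset.
import numpy as np
from sklearn.metrics import accuracy_score, recall_score, roc_auc_score

rng = np.random.default_rng(0)
y_true = (rng.random(10_000) < 0.01).astype(int)   # 1% positive class

# "Model": always predicts the majority class, with a constant low score.
pred = np.zeros_like(y_true)
score = np.full(y_true.shape, 0.01)

print("accuracy:", accuracy_score(y_true, pred))                       # ~0.99, sounds great
print("recall  :", recall_score(y_true, pred, zero_division=0))        # 0.0, clinically useless
print("ROC AUC :", roc_auc_score(y_true, score))                       # 0.5, i.e. chance level
```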
imh almost 7 years ago
FYI, this post is about deep learning. It could be the case that neural networks stop getting so much hype soon, but the biggest driver of the current "AI" (ugh, I hate the term) boom is the fact that everything happens on computers now, and that isn't changing any time soon.

We log everything and are even starting to automate decisions. Statistics, machine learning, and econometrics are booming fields. To talk about two topics dear to my heart, we're getting way better at modeling uncertainty (Bayesianism is cool now, and resampling-esque procedures aged really well with a few decades of cheaper compute), and we're better at not only talking about what causes what (causal inference), but what causes what when (heterogeneous treatment effect estimation, e.g. giving you aspirin right now does something different from giving me aspirin now). We're learning to learn those things super efficiently (contextual bandits and active learning). The current data science boom goes far, far beyond deep learning, and most of the field is doing great. Maybe those bits will even get better faster if deep learning stops hogging the glory. More likely, we'll learn to combine these things in cool ways (as is happening now).
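[Editor's note: a tiny sketch of the "resampling-esque procedures" mentioned above - a percentile bootstrap confidence interval for a mean, the sort of thing cheap compute made routine. The data are synthetic; swap in any one-dimensional sample.]

```python
# Percentile bootstrap CI for a mean on synthetic, skewed data.
import numpy as np

rng = np.random.default_rng(42)
sample = rng.exponential(scale=2.0, size=500)   # skewed sample, unknown "true" mean

boot_means = np.array([
    rng.choice(sample, size=sample.size, replace=True).mean()
    for _ in range(10_000)                       # 10k resamples, trivial on modern hardware
])
lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(f"sample mean {sample.mean():.3f}, 95% bootstrap CI ({lo:.3f}, {hi:.3f})")
```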
MichaelMoser123 almost 7 years ago
Forget about self-driving cars - the real killer application of deep learning is mass surveillance. There are big customers for that (advertising, policing, political technology - we'd better get used to the term) and it's the only technique that can get the job done.

I sometimes think that there really was no AI winter, as we got other technologies that implemented the ideas: SQL databases can be seen as an application of many ideas from classical AI - for example, SQL is a declarative language for defining relations among tables; you can have rules in the form of SQL stored procedures; actually it was a big break (paradigm shift is the term) in how you deal with data - the database engine has to do some real behind-the-scenes optimization work in order to get a workable representation of the data definition (that is certainly bordering on classical AI in complexity).

These boring CRUD applications are light years ahead of how data was handled back in the beginning.
rch almost 7 years ago
> Deepmind hasn't shown anything breathtaking since their Alpha Go zero.

Didn't this *just* happen? Maybe my timescales are off, but I've been thinking about AI and Go since the late 90s, and plenty of real work was happening before then.

Outside a handful of specialists, I'd expect another 8-10 years before the current state of the art is generally understood, much less effectively applied elsewhere.
mastrsushi almost 7 years ago
Warning: 23-year-old CS grad angst-ridden post.

I'm very sick of the AI hype train. I took a PR class in my last year of college, and they couldn't help but mention it. LG smart TV ads mention it, Microsoft commercials, my 60-year-old tech-illiterate dad. Do any end users really know what it's about? Probably not, nor should that matter, but it's very triggering to see something that was once a big part of CS turned into a marketable buzzword.

I get triggered when I can't even skim through the news without hearing Elon Musk and Stephen Hawking ignorantly claim AI could potentially take over humanity. People believe them because of their credentials, when professors who actually teach AI will say otherwise. I'll admit, I've never taken any course in the subject myself. An instructor I've had who teaches the course argues it doesn't even exist; it's merely a sequence of given instructions, much like any other computer program. But hey, people love conspiracies, so let their imagination run wild.

AI is today what Big Data was about 4 years ago. I do not look highly on any programmer who jumps bandwagons, especially for marketability. Not only is it impure in intention, it's foolish when there are 1000 idiots just like them over-saturating the market. Stick with what you love, even if it's esoteric. Then you won't have to worry about your career value.
oh-kumudo almost 7 years ago
The author's reasons:

1. Hype dies down (which is really good! Meaning the chance of a burst is actually lower!)
2. "Doesn't scale" is a false claim. DL methods have scaled MUCH better than any other ML algorithms in recent history (scaling SVMs is no small task). Scaling DL methods is much easier compared to other traditional ML algorithms, since they can be naturally distributed and aggregated.
3. Partially true. But self-driving is a sophisticated area by itself; DL is only part of it, so it can't really take full claim for its potential future success or ultimate downfall.
4. Gary Marcus isn't an established figure in DL research.

An AI winter will ultimately come. But it will be because people become more informed about DL's strengths and limits, thus becoming smarter about telling what is BS and what is not. AGI is likely not going to happen just with DL, but that in no way means a winter. DL has revolutionized the paradigm of Machine Learning itself; the shift is now complete, it will stay for a very long time, and the successor is likely to build upon it rather than subvert it completely.
jarym almost 7 years ago
I'm not sure how the hype wagon started, but I for one am glad it's about to pop.

I am working on (founded) a startup, and while we have AI on the roadmap about a year out, it isn't something that's central to our product. (We already use some ML techniques, but I'd not confidently boast it's the same thing as AI.)

Cue an informal lunch with a VC guy who takes a look, says we're cool, and tells us just to plaster the word AI in more places - he was sure we could raise a stupendous sum of cash doing that.

As an AI enthusiast I was bothered by this. We have everyone and their mother hyping AI into areas it's not even relevant in, let alone effective at.

A toning down would be healthy. We could then focus on developing the right technology slowly and without all the lofty expectations to live up to.
solomatov almost 7 years ago
AI winter is not on its way. We constantly get new breakthroughs and there's no end in sight. For example, in the last year a number of improvements in GANs were introduced. This is really huge, since GANs are able to learn a dataset's structure without explicit labels, and this is a large bottleneck in applying ML more widely.

IMO, we are far away from AGI, but even current technologies applied widely will lead to many interesting things.
xedarius almost 7 years ago
"Deepmind hasn't shown anything breathtaking since their Alpha Go zero"

... what about when the Google Assistant near-perfectly mimicked a human making a restaurant reservation? The voice work was done at DeepMind.

All the problems in AI haven't been solved yet? Well no, of course not. Limitations exist and our solutions need to evolve.

I think perhaps the biggest constraint is requiring huge amounts of training data to solve problem X. Humans simply don't need that, which must be some indication that what we're doing isn't quite right.
dhab almost 7 years ago
Disclaimer: I am a lay technical person and don't know much about AI.

I find this article somewhat condescending. I look at all the current development as stepping stones to progress, not an overnight success that does everything flawlessly. I imagine the future might be some combination of different solutions, and what the author proposes may or may not play a part in it.
zerostar07 almost 7 years ago
An AI "winter" is a long period in which [edit: funding is cut because...] researchers *are in disbelief* about having a path to real intelligence. I think that is not the case at this time, because we have (or are approaching) adequate tools to rationally dismiss that disbelief. The current AI "spring" has brought back the belief that connectionism may be the ultimate tool to explain the human brain. I mean, you can't deny that DL models of vision look eerily like the early stages of visual processing in the brain (which is a very large part of it). Even if DL researchers lose their path in the search for "true AI", the neuroscientists can keep probing the blueprint to find new clues to its intelligence. Even AI companies are starting to create plausible models that link to biology. So at this time, it's unlikely that progress will be abandoned any time soon.

E.g. https://arxiv.org/abs/1610.00161 https://arxiv.org/abs/1706.04698 https://www.ncbi.nlm.nih.gov/pubmed/28095195
felippee almost 7 years ago
Author here: seriously, I'm on the front page for the second day in a row!?

The sheer viral popularity of this post, which really was just a bunch of relatively loose thoughts, indicates that there is something in the air regarding AI winter. Maybe people are really sick of all that hype pumping...

Just a note: I'm a bit overwhelmed, so I can't address all the criticism. One thing I would like to state, however, is that I'm actually a fan of connectionism. I think we are doing it naively though, and instead of focusing on the right problem we inflate a hype bubble. There are applications where DL really shines and there is no question about that. But in the case of autonomy and robotics we have not even defined the problems well enough, not to mention solving anything. But unfortunately, those are the areas where most bets/expectations sit, therefore I'm worried about the winter.
skybrian almost 7 years ago
The argument is that self-driving won't work because Uber and Tesla had well-publicized crashes. But I don't see how this tells us anything about other, apparently more cautious companies like Waymo. There seem to be significant differences in technology.

More generally, machine learning is a broad area and there's no reason to believe that different applications of it will all succeed or all fail for similar reasons. It seems more likely there will be some winners along with many failed attempts.
madmax108 almost 7 years ago
Honestly, I think this is a good thing for both AI researchers and AI practitioners. One man's AI winter is another man's stable platform.

While the number of world-shattering discoveries using DL may be on the decline (ImageNet, playing Atari, artistic style transfer, CycleGAN, DeepFakes, Pix2Pix, etc.), now both AI researchers and practitioners can work in relative peace to fix the problem of the last 10%, which is where deep learning has usually sucked. 90% accuracy is great for demos and papers, but not even close to useful in real life (as the Uber fiasco is showing).

As an AI practitioner, it was difficult to simply keep up with the latest game-changing paper (I have friends who call 2017 the Year of the GAN!), only to later discover new shortcomings of each. Of course, you may say, why bother keeping up? And the answer is simply that when we are investing time to build something that will be in use 5-10 years from now, we want to ensure the foundation is built upon the latest research, and the way most papers talk about their results makes you believe they are best suited for all use cases, which is rarely true. But when the foundation itself keeps moving so fast, there is no stability to build upon at all.

That, and what jarym said is perfectly true as well.

The revolution is done; now it's time for the evolution of these core ideas into actual value generation, and I for one am glad about that.
afpx almost 7 years ago
AI winter? Hardly. Current methods have only been applied to a very tiny fraction of the problems they can help solve. And this trend will only accelerate until computing resources become too expensive.

As long as there is ROI, AI projects will continue to be financed, top thinkers around the world will be paid to do more research, and engineers will implement the most recent techniques in their products and services to stay competitive. This is a classic feedback system that results in exponential progress.
tim333 almost 7 years ago
This seems overly negative. Just take the opening argument, that companies were saying "that fully self-driving car was very close" but "this narrative begins to crack".

Yet here they are, self-driving: https://www.youtube.com/watch?v=QqRMTWqhwzM&feature=youtu.be and you should be able to hail one as a cab this year: https://www.theregister.co.uk/2018/05/09/self_driving_taxis_waymo/
daveguy almost 7 years ago
I thought there would be more of a backlash / winter onset when people realize that Alexa is so annoying to deal with (you basically have to learn a set of commands) because AI isn't that clever yet. Also, when people realize that autocorrect took a dive once Google put a neural net in charge. (No! Stop deleting random words and squishing spaces during edits.)

In other words, I figured it would be the annoyances at what "should be easy by now" that would get Joe CEO to start thinking "Hmm. Maybe this isn't such a good investment." - when measurements are made, and reliable algorithmic results attract and keep more users than narrowly trained, kind of finicky AIs.

I don't want there to be an AI winter, and it won't be as bad as before. There are a lot of applications for limited-scope image recognition and other tasks that we couldn't do before. Unfortunately, I do agree with the post that winter is on its way.
sytelus almost 7 years ago
The OP is obviously not keeping up with the field and has a lot to learn about the scientific approach. He basically uses the count of tweets from Andrew Ng and crashes from risk-taking companies as indicators of "AI winter". He should have tried to look into metrics such as the number of papers, the number of people getting into the field, the number of dollars in VC money, the number of commercial products using DL/RL, etc. But you see, that's a lot of work, and your conclusion might not align with whatever funky title you had in mind. Being an armchair opinion guy throwing out link-bait titles is much easier.
didymospl almost 7 years ago
I think the most important question is what 'winter' really means in this context. New concepts in AI tend to follow the hype cycle, so the disillusionment will certainly come. One part of it is the general public seeing the amazing things Tesla or Google do with deep learning and extrapolating, thinking we're on the brink of creating artificial general intelligence. The disappointment will be even bigger if DL fails to deliver on its promises, like self-driving cars.

Of course the situation now is different than 30 years ago, because AI has proved to be effective in many areas, so the research won't just stop. The way I understand this 'AI winter' is that deep learning might be the current local maximum of AI techniques and will soon reach a dead end where tweaking neural networks won't lead to any real progress.
visarga almost 7 years ago
AI winter is not "on its way". There is AI hype and anti-AI hype, and then there is actual practice. This article is anti-AI hype, just as bad as its opposite. In practice there are tons of useful applications. We haven't even begun to apply ML and DL to all the problems lying around us, some of which are quite accessible and impactful.

The hype cycle will pass with time, when we learn to align our expectations with reality.
zitterbewegung almost 7 years ago
I think we will have an AI winter once we see the true limitations that stand between us and a level 5 fully autonomous self-driving car. The other thing we will see happen is the deflation of the AdTech bubble. Once we see both of these events occur, that should start the AI winter.
tananaev almost 7 years ago
AI and machine learning are tools. Like any other tool, they're perfect for some problems and don't work well for others. Pick the right tools for the problem you are working on. Don't follow the hype, and don't use AI/ML just for the sake of using it.
matiasz almost 7 years ago
Judea Pearl sees a way out of the winter.

https://www.theatlantic.com/technology/archive/2018/05/machine-learning-is-stuck-on-asking-why/560675/
OliverJones almost 7 years ago
There's a lot of good stuff coming from research in AI these days. Still, I think the author's right.

As with the onset of the previous AI winter a generation ago, the problem is this: once a problem gets solved (be it OCR or Bayesian recommendation engines or speech recognition or autocomplete or whatever) it stops being AI and starts being software.

As for self-driving cars: I recently took a highway trip in my Tesla Model S. I love adaptive cruise control and steering assistance: they reduce driver workload greatly. But even in the lab-like environment of summertime limited-access highways, driverless cars are not close. Autosteer once misread the lane markings and started to steer the car into the side of a class 8 truck. For me to sit in the back seat and let the car do all the work, that kind of thing must never happen.

Courtesy is an issue. I like to exit truck blind spots very soon after I enter them, for example. Autosteer isn't yet capable of shifting slightly to the left or right so a driver ahead can see the car in a mirror. Maybe when everything is autonomous that won't be an issue. But how do we get there?

Construction zones are problems too: lane markings are confusing and sometimes just plain wrong, and the margin for error is much smaller. Maybe the Mobileye rig in my car can detect orange barrels, but it certainly doesn't detect orange temporary speed limit signs.

This author is right. AI is hype-prone. The fruits of AI generally function as they were designed, though, once people stop overselling them.
rossdavidh almost 7 years ago
While I basically agree, really it ought to be called "AI autumn is well on its way", since I'm not sure we're into actual winter (i.e. a dramatic reduction in the $$ available for research) quite yet. But probably soon.
epicmellon almost 7 years ago
"it is striking that the system spent long seconds trying to decide what exactly it sees in front (whether that be a pedestrian, bike, vehicle or whatever else) rather than making the only logical decision in these circumstances, which was to make sure not to hit it."

That is striking. It has always sort of bothered me that AI is really a big conglomeration of many different concepts. What people are working on is deep learning *for machines*, but we think that means "replicating human skill/behavior". It's not. Machines will be good at what they are good at, and humans good at what they're good at. It's an uphill battle if your expectation is a machine that processes like a human, because the human brain does not process things the way computer architectures do.

Now, if some aspiring scientist wanted to skip all that and *really* try to replicate (in a machine) how the human brain does things, I think such a person would be starting from a very different perspective than even modern AI computing.
ozy almost 7 years ago
We call it deep learning, but it is deep pattern matching. Extremely useful, but don't expect it to result in thinking machines.
dontreact almost 7 years ago
The discussion of radiology is extremely sloppy.

Andrew Ng claimed human-level performance on one radiology task (pneumonia). This claim seems to hold up pretty well as far as I can tell. Then the person criticizing him on Twitter posts results on a completely different set of tasks, which are just baseline results meant to launch a competition. These results are already close to human-level performance, and after the competition it's very possible they will exceed it.

Yes, it's true that doing well only on pneumonia doesn't mean the nets are ready to replace radiologists. However, it does mean that we now have reason to think all of the other tasks can be conquered in a reasonably short time frame, such that someone going into the field should at least consider how AI is going to shape it going forward.
cs702 almost 7 years ago
Well, the breathless hype around deep learning (with and without reinforcement learning) is bound to subside sooner or later, and attendance at staid academic conferences like NIPS will sooner or later revert back to a smaller group of academics and intellectuals who are truly interested in the subject over the long term.[a] That much is certain.

But we're still in the early stages of a *gigantic wave of investment* over the next decade or two, as organizations of all sizes find ways to use deep learning in a growing number of applications. Most small businesses, large corporations, nonprofits, and governments are not using deep learning for anything yet.

[a] https://twitter.com/lxbrun/status/908712249379966977
joejerryronnie almost 7 years ago
Well, now that the cat's out of the bag in regard to AI/ML, we can all get in on the ground floor of the next hype wave - quantum computing!
tmalsburg2 almost 7 years ago
Stopped reading after the first half. The evidence for the idea that deep learning is failing is that DeepMind hasn't produced anything revolutionary since AlphaGo Zero, which was published not even a year ago? And that performance doesn't scale linearly with the number of parameters? And speculation about why LeCun made a certain career decision? Not very convincing.
soVeryTired almost 7 years ago
Only tangentially related to the article, but it's always struck me as a little unethical that Demis Hassabis' name goes on every paper written by DeepMind. No one produces that much research output.
rscho almost 7 years ago
No, but wait! We're just on the verge of replacing doctors! ;-)

There's still a lot of space for the improvement of "curve-fitting" AI in the workplace. The potential of existing tech is far from being thoroughly exploited right now. I believe the next big improvements will come more from better integration in the workplace (or road system) than new scientific advances, so that might seem less sexy. But I also believe this will be a sufficient impetus to drive the field forward for the years to come.
mirceal almost 7 years ago
I would not call it the "AI winter". If you look at what people have called AI over time, the definition and the approaches have evolved (sometimes drastically).

Instead of being stuck on the fact that deep learning and the current methods seem to have hit a limit, I am actually excited that this opens the door for experimenting with other approaches that may or may not build on top of what we call AI today.
majos almost 7 years ago
This reminds me of a recent Twitter thread [1] from Zachary Lipton (new machine learning faculty at CMU) arguing that radiologists have a more complex job than we, as machine learning enthusiasts, think.

[1] https://mobile.twitter.com/zacharylipton/status/999395902996516865
carlbordum almost 7 years ago
I think all talk about computer intelligence and learning is bullshit. If I'm right, then AI is probably the most /dangerous/ field in computer science, because it sounds just likely enough that it lures in great minds, just like a sitcom startup idea [0].

[0] http://paulgraham.com/startupideas.html
tim333 almost 7 years ago
You could actually make a reasonable argument for the opposite of a winter: that we are heading into an unprecedented AI boom.

The article's main argument for a winter is that deep learning is becoming played out. But this misses the once-in-history event of computer hardware reaching approximate parity with, and overtaking, the computing power of the human brain. I remember writing about that for my uni entrance exam 35 years ago and have been following things a bit since, and the time is roughly now. You can make a reasonable argument that the computational equivalent of the brain is about 100 TFLOPS, which was hard to access or simply not available in the past, but you can now rent a 180 TFLOP TPU from Google for $6.50/hr. While the current algorithms may be limited, there are probably going to be loads of bright people trying new stuff on the newly powerful hardware, perhaps including the author's PVM, and some of that will likely get interesting results.
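[Editor's note: a back-of-envelope calculation using the commenter's own figures (100 TFLOPS as a rough brain-equivalent, $6.50/hr for a 180 TFLOP cloud TPU); the only point is that, on those assumptions, brain-scale compute now rents for roughly the cost of a salary.]

```python
# Arithmetic on the commenter's assumptions; the figures are theirs, not verified here.
BRAIN_TFLOPS = 100          # rough brain-equivalent quoted above
TPU_TFLOPS = 180
TPU_PRICE_PER_HR = 6.50     # USD/hr, as quoted above

hours_per_year = 24 * 365
cost_per_year = TPU_PRICE_PER_HR * hours_per_year
print(f"one TPU, full-time for a year: ${cost_per_year:,.0f}")      # ~ $56,940
print(f"TFLOPS per dollar-hour: {TPU_TFLOPS / TPU_PRICE_PER_HR:.1f}")
```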
sheeshkebab almost 7 years ago
Deep learning may not be the complete answer to GAI, but it's moving down the right path. Computers, though, are still years or decades away from approaching human brain power and efficiency, so my take is that the current AI hype is 10 years too early - a good time to get in.
ThomPete almost 7 years ago
Great essay, but I think the "Deep learning (does not) scale" section is missing an important point.

There are many ways to think about scale.

If you think about a learned skill, then that skill actually scales extremely well to other machines, and thus to other industries that might benefit from the same skill.

The primary problem with technology is that society doesn't implement it as fast as it gets developed, so you will have these natural bottlenecks where society can't actually absorb the benefits fast enough.

In other words, deep learning scales as long as society can absorb it and apply it.
paulie_a almost 7 years ago
Has anyone done something genuinely useful with ML/AI/whatever outside of advertising or stock trading? I am genuinely curious whether it has really been applied to real commercial applications.
d--b almost 7 years ago
Sure, the thing is overhyped, but the problem is that we cannot be sure about the next big thing. The advances are slow, but then a giant step forward happens all of a sudden.

Everyone dropped their jaws when they saw the first self-driving car video or when AlphaGo started to win. This was totally unthinkable 10 years ago.

Some guy may come up with a computer model that incorporates intentionality, some short-term/long-term memory, and some reasoning - who knows?
randop almost 7 years ago
AI is favorable for big companies to better scale their services. It seems that Facebook has also faced AI scaling drawbacks and is developing its own AI hardware for it: https://www.theverge.com/2018/4/18/17254236/facebook-designing-own-chips-ai-report
letitgo12345 almost 7 years ago
AI has a lot to offer the industry right now, I think, where you don't need good worst-case performance (e.g., information retrieval, optimization, biology, etc.). The big problems in terms of application start appearing when you try to remove humans from the loop completely. That's not even close to possible yet, but that doesn't mean the economic utility of even current AI is close to being maximized.
moistoreos almost 7 years ago
I know this is about the state of Deep Learning, but I'd like to point out:

While autonomous driving systems aren't perfect, statistically they are much better at driving than humans. Tesla's autonomous system has had, what, 3 or 4 fatal incidents? Out of the thousands of cars on the road, that's less than 0.001%.

There will always be a margin of error in systems engineered by man - just hopefully, moving forward, fewer and fewer fatal ones.
tw1010 almost 7 years ago
Whoa, I was prepared to be all gung-ho for this post, given that I've suspected the winter was coming for quite a while now. But strangely, this post actually had the opposite effect on me. The winter will probably come one day, but is this all the evidence the poster can find? Andrew Ng tweeting less, and a statement that DNNs don't scale based on flimsy data, is not at all convincing to me.
tmaly almost 7 years ago
Is this AI Winter 2.0? I was hopeful that logic programming would have developed more and spread to a larger audience at this point.
eddd almost 7 years ago
As a beginner in the deep learning space, I am a bit baffled by the claim "you need a lot of computational power". Good models learn fast, so if a potential model looks promising on a local machine, one can do training on gcloud for $100 on high-end machines. Where am I wrong in this line of thinking?
fallingfrog almost 7 years ago
Thank god. We're definitely not ready, and perhaps could never be ready, for true general-purpose AI.
bewe42 almost 7 years ago
This is something I have always wondered about AI and its promises. Sometimes the last 1% is the hardest, or can even be impossible. Self-driving cars, in particular, are a good case: we can solve 99% of the use cases, but achieving fully autonomous vehicles might be just out of reach.
pascalxus almost 7 years ago
But they're getting more and more data every year, right? All those almost millions of Teslas running around could provide enough video input for the training data.

Besides, "good software takes 10 years", according to Joel Spolsky. As I see it, we're, what, 5 years into ML?
jgrant27 almost 7 years ago
Another case in point: http://www.latimes.com/local/lanow/la-me-ln-tesla-collision-20180529-story.html
twtw almost 7 years ago
> Nvidia car could not drive literally ten miles without a disengagement.

From the same source the author cites, that's because their test runs are typically 5 miles, and resuming manual control at the end of a test counts as a disengagement.
partycoder almost 7 years ago
Deep learning was a noticeable improvement over previous neural models, sure. But deep learning is not the entire field of AI and ML. There has been more going on, like neural Turing machines and differentiable neural computers.
crb002 almost 7 years ago
We are beginning to see some sweet differentiable embeddings of discrete things like stacks and context-free grammars. This is where deep learning gets really fun, because it is learning to program.
jvmancuso almost 7 years ago
https://twitter.com/jvmancuso/status/1002387357776207872
xbmcuser almost 7 years ago
For me, Google is attacking on 2 main fronts: 1. quantum computing, 2. machine learning/AI.

If they are able to combine the 2 - a big if, though - the cost analysis for AI will change quite dramatically.
bfung almost 7 years ago
Number of tweets as reliable data points? Very dubious. Simple explanation: they are busy working, so there's less time to tweet.

Maybe they're working on something so cool that the AI winter may not even come. Sure, there's a lot of marketing-speak around AI at the moment.

But this wave of AI seems a lot stronger, with better fundamentals, than 20 years ago. At the very least, we now have the hardware to actually RUN NNs cost-effectively, as opposed to grinding your system to a halt back then.

Before AlphaGo, it wasn't even clear when a computer could beat a top professional at go, let alone crush humans at the game - lower-bound guesses were 50 years.
ggm almost 7 years ago
https://en.wikipedia.org/wiki/Lighthill_report (1973)
xpuente almost 7 years ago
Low-hanging fruit is scarce now. With 3 orders of magnitude difference in power (MW versus a few watts), clearly this is not the right way to reach the treetop.
sigi45 almost 7 years ago
/shrug - people need time to research.

Anyway, I also don't get what the issue is with the radiology model. It is already that good?! This is impressive. One model is close to well-trained experts.

Just today I had a small idea for a new product based on what Google was showing with the capability to distinguish two people talking in parallel.

At the last Google I/O I was impressed because, in comparison to previous years, ML created better and more impressive products.

I listened for years to keynotes about big data and was never impressed. Now I hear about ML and I'm getting more and more impressed.
m0llusk almost 7 years ago
If only there were some technology that might enable us to discern patterns so that we could better predict fluctuations in demand for AI software.
thosakwe almost 7 years ago
Truly, I agree.

I've long been interested in learning about AI and deep learning, but to this day haven't done much that truly excites me within the field. It feels more or less impossible to make anything significant without Google-scale databases and Google-scale computers. AI really does make it easier for the few to jump far ahead, leaving everyone else behind.

I also agree that a lot of the news around AI is just hype.

Honestly, I have yet to see *anything* practical come out of AI.

But hey, if something eventually does, I'm all for it.
m3kw9 almost 7 years ago
Bet your house on it if it's "well on its way".
jonbarker almost 7 years ago
This reinforces the need to benchmark any 'human expert equivalent' project against the wattage of the human brain.
mathattack almost 7 years ago
How much of this can we pin on IBM's overhype of Watson?
ashelmire almost 7 years ago
Yawn. Contrarianism is easy, and this article offers little. The real-world application you're speaking of has a comically small amount of data (a few million miles?). You hear about a handful of accidents that still average out to better than human performance, and suddenly the sky is falling.

When machine learning stops successfully solving new problems daily, then maybe a thread like this will be warranted.
arisAlexis almost 7 years ago
Without being an expert, just from reading articles, it seems to me that some people wish for an AI winter. It makes them feel better somehow.
Bromskloss almost 7 years ago
Oh, I thought "AI winter" would refer to a state of ruin after AI had come into existence and destroyed everything, analogous to nuclear winter.
scalablenotions almost 7 years ago
A real Winter is a lack of warmth. An AI winter is a lack of ______
InclinedPlane almost 7 years ago
If we would stop calling this stuff "AI" it would make all our lives a lot easier, but people can't resist.

When computers first came on the scene, a lot of people had a very poor conception of what it was the human mind did, computationally. So when computers turned out to be good at things that were challenging "intellectual" tasks for humans, like chess and calculus, many were duped into thinking that computers were somehow on a similar level to human brains and "AI" was just around the corner. The reality was that one of the most important tasks the human brain performs - contextualization, categorization, and abstraction - was taken for granted. We've since discovered that task to be enormously computationally difficult, and one of the key roadblocks to "true AI" development.

Now, of course, we're at it again. We have the computational muscle to make inference engines that work nothing like the human brain good at tasks that are difficult to program explicitly (such as image and speech recognition), and we've built other tools that leverage huge data sets to produce answers that seem very human or intelligent (using Bayesian methods, for example). We look at these tools and too many say "Is this AI?" No, it might be related to AI, but it's just a tool. Meanwhile, because of all the AI hype, people overpromise on neural network / "deep learning" projects and get lazy about programming. Why bother sitting down for 15 minutes to figure out the right SQL queries and post-processing when you can just throw your raw data at a neural network and call it the future?

One of the consistently terrible aspects of software development as a field is that it continues to look for shortcuts and continues to shirk the basic responsibilities of building anything (e.g. being mindful of industry best practices, understanding the dangers and risks of various technologies and systems, and being diligent in mitigating them). Instead the field consistently and perversely ignores all of the hard-won lessons of its history. It consistently ignores and shirks its responsibilities (in terms of ethics, public safety, etc.), and consistently looks for the shortcut and the silver bullet that will allow it to shed even the small vestiges of responsibility it labors under currently. There's a great phrase on AI that goes "machine learning is money laundering for bias", which points to just one facet among so many of what's wrong with "AI" as it's practiced today. We see "AI" used to sell snake oil. We see "AI" used to avoid responsibility for the ethical implications inherent in many software projects. We see "AI" integrated into life-critical systems (like self-driving cars) without the effort needed to ensure it's robust or to protect against its failures, with the result being loss of life.

AI is just the latest excuse software developers use to avoid responsibility and rigor while cashing checks in the meantime. At some point this is going to become obvious and there is going to be a backlash. Responsible developers should be out in front driving for accountability and responsibility now, instead of waiting until a hostile public forces it to happen.
nolemurs almost 7 years ago
I've always understood the claim that deep learning scales to be a claim about the deployment and use of trained models, not about training. The whole point is that you can invest (substantial) resources upfront to train a sufficiently good model, but then the results of that initial investment can be used with very small marginal costs.

OP's argument on this front seems disingenuous to me.

His focus on Uber and Tesla (while not even mentioning Waymo) is also a truly strange omission. Uber's practices and culture have historically been so toxic that their failures here are truly irrelevant, and Tesla isn't even in the business of making actual self-driving cars.

I'm the first to argue that AI is overhyped right now, but this is just sensationalist garbage from the other end of the spectrum.
jacksmith21006 almost 7 years ago
One of the sillier articles on HN in a while. Waymo has cars, as I type this, driving around Arizona without safety drivers.

People were freaked out by the Google demo of Duplex a couple of weeks ago because it was just too human-sounding.

I can give so many other examples. One is foundational: the voice used in Google Duplex runs a DNN at 16k cycles a second in real time, and can still be offered at a competitive price.

That was done by creating the TPU 3.0 silicon. The old way of piecing speech together was NOT compute-intensive, so doing it with a DNN requires proprietary hardware to offer it at a price competitive with the old way.

But what else can be done when you can run 16k cycles through a DNN in real time? Things have barely even gotten started, and they are flying right now. All you have to do is open your eyes.

DNN - Deep Neural Network.
myf01d almost 7 years ago
It's the same story again, like the exaggeration of IoT's influence 5 years ago. The whole thing is overblown to raise money from investors and attract customers instead of actually building a superior product.
bguberfain almost 7 years ago
Deep Recession ‘18
kuroguro almost 7 years ago
Winter is coming.
jacinabox almost 7 years ago
What a relief.
fourfaces almost 7 years ago
The inconvenient but amazing truth about deep learning is that, unlike neural networks, the brain does not learn complex patterns. It can see new complex patterns and objects instantly without learning them. Besides, there are not enough neurons in the brain to learn every pattern we encounter in life. Not even close.

The brain does not model the world. It learns to see it.