The Dark Secret at the Heart of AI

38 points by cdvonstinkpot about 8 years ago

7 comments

russellbeattie about 8 years ago
This is kind of a silly strawman in some ways, simply because all software - including the code helping fly jumbo jets, steer oil tankers, or run MRI machines - is written by fallible humans, and is generally considered safe only because of QA testing, rather than code analysis. There are some rare instances of insanely complex code having every line thoroughly vetted, as in NASA projects, but pretty much everything else out there is simply "good enough" until a flaw is (inevitably) found and fixed. The decision trees generated by AI will be no different. Until, I guess, an AI can perform the analysis of the code of another AI... cue Inception music.
mvindahl about 8 years ago
'Getting a car to drive this way was an impressive feat. But it’s also a bit unsettling, since it isn’t completely clear how the car makes its decisions.'.replace(/car/g, 'human')
mto about 8 years ago
I've always heard that argument in favor of decision trees or random forests, yet those decision trees had 400k nodes :). So no one ever really looked at them, but in theory you could check the long node paths doing arbitrary splits on weird features :).

Apart from that, the strength of DNNs is exactly that complex decision making compared to, say, the simple algorithms physicians learn and manually apply for diagnosis. Those are obviously vastly underfitting in many cases.
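For illustration, a minimal sketch of tracing one sample's path through a fitted tree, assuming scikit-learn (the dataset and model are placeholders, not anything from the article or the comment):

    from sklearn.datasets import load_breast_cancer
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_breast_cancer(return_X_y=True)
    tree = DecisionTreeClassifier().fit(X, y)

    # decision_path returns the nodes a sample traverses; reading a
    # path is feasible here, but hopeless at 400k-node scale.
    for node in tree.decision_path(X[:1]).indices:
        feat = tree.tree_.feature[node]
        if feat >= 0:  # internal node; leaves are marked -2
            print(f"node {node}: feature {feat} <= {tree.tree_.threshold[node]:.3f}")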
Eridrus about 8 years ago
This article makes the assumption that we are learning a complete model that goes from sensor inputs to control outputs, but I don't think anyone is doing this outside academia. There's a whole lot less controversy when we use deep learning to do scene understanding, where we understand at a high level that the model is recognizing entities in its sensors, and we can evaluate whether that subsystem failed, etc.
candiodari about 8 years ago
That's the big plus of AI algorithms. For instance, all voice recognition algorithms use a patented algorithm. Nuance holds the patent.

But, the reasoning goes, because this was learned, and there is no code in there implementing that algorithm (just "weights" implementing an unrolled version), that code does not violate patents.

It's not a bug, it's a feature. Know any valuable algorithms? Figure out how to learn them.
TheOtherHobbes about 8 years ago
If we want to "understand" what a network does, that really means we want to disentangle cause and effect and spit out simple algebraic models for it after distilling them from a training set.

To the extent this is even possible - which is debatable, for all kinds of reasons - we're going to need a different set of tools. ML is not the right tool for that problem.

Something similar to ML may be, but ML itself definitely isn't.
iridium5 about 8 years ago
Model decision interpretation is a solved problem: https://github.com/marcotcr/lime
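A rough sketch of how LIME is applied, following the linked repo's tabular API (the classifier and data below are placeholders):

    from lime.lime_tabular import LimeTabularExplainer
    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier

    X, y = load_iris(return_X_y=True)
    clf = RandomForestClassifier().fit(X, y)

    # Fit a sparse local surrogate around one prediction and list
    # the features that most influenced it.
    explainer = LimeTabularExplainer(X, mode="classification")
    exp = explainer.explain_instance(X[0], clf.predict_proba, num_features=4)
    print(exp.as_list())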