This is kind of a silly strawman in some ways, simply because all software - including the code helping fly jumbo jets, steer oil tankers or run MRI machines - is written by fallible humans, and is generally considered safe only because of QA testing, rather than code analysis. There are some rare instances of insanely complex code having every line thoroughly vetted, like those in NASA projects, but pretty much everything else out there is simply "good enough" until a flaw is (inevitably) found and fixed. The decision trees generated by AI will be no different. Until, I guess, an AI can perform the analysis of the code of another AI... cue Inception music.
'Getting a car to drive this way was an impressive feat. But it’s also a bit unsettling, since it isn’t completely clear how the car makes its decisions.'.replace(/car/g, 'human')
I've always heard that argument in favor of decision trees or random forests, yet those decision trees had 400k nodes :). So no one ever really looked at them, but in theory you could check the long node paths doing arbitrary splits on weird features :).

Apart from that, the strength of DNNs is exactly that complex decision making compared to, say, the simple algorithms physicians learn and manually apply for diagnosis. Those are obviously vastly underfitting in many cases.
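To make the first point concrete, here's a rough scikit-learn sketch of what "checking a node path" actually looks like; the dataset and tree are just placeholders, and a production forest with ~400k nodes would make paths like this far too long to audit by hand:

    # Trace the decision path a fitted tree takes for one sample.
    import numpy as np
    from sklearn.datasets import load_breast_cancer
    from sklearn.tree import DecisionTreeClassifier

    data = load_breast_cancer()
    X, y = data.data, data.target
    tree = DecisionTreeClassifier(random_state=0).fit(X, y)

    sample = X[:1]                               # one observation to explain
    node_indicator = tree.decision_path(sample)  # sparse matrix of visited nodes
    leaf_id = tree.apply(sample)[0]

    t = tree.tree_
    for node_id in node_indicator.indices:
        if node_id == leaf_id:
            print(f"leaf {node_id}: predict class {np.argmax(t.value[node_id])}")
            break
        name = data.feature_names[t.feature[node_id]]
        threshold = t.threshold[node_id]
        went_left = sample[0, t.feature[node_id]] <= threshold
        print(f"node {node_id}: {name} {'<=' if went_left else '>'} {threshold:.3f}")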
This article makes the assumption that we are learning a complete model that goes from sensor inputs to control outputs, but I don't think anyone is doing this outside academia. There's a whole lot less controversy when we use deep learning to do scene understanding, where we understand, at a high level, that the model is recognizing entities in its sensor data, and we can evaluate whether that subsystem failed, etc.
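A toy sketch of that modular setup (all names here are hypothetical): the learned perception stage produces outputs you can score against labeled frames on their own, and the downstream control logic stays plain and inspectable:

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class DetectedEntity:
        label: str          # e.g. "pedestrian", "stop_sign"
        distance_m: float

    def perceive(camera_frame) -> List[DetectedEntity]:
        # Stand-in for a trained detector; in practice this wraps a deep model.
        # Because it emits a list of entities, it can be evaluated
        # (precision/recall against annotated frames) independently.
        return [DetectedEntity("pedestrian", 14.5)]

    def control(entities: List[DetectedEntity]) -> str:
        # Plain, hand-written decision logic sitting after the learned part.
        for e in entities:
            if e.label == "pedestrian" and e.distance_m < 20.0:
                return "brake"
        return "cruise"

    print(control(perceive(camera_frame=None)))   # -> brake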
That's the big plus of AI algorithms. For instance, all voice recognition algorithms use a patented algorithm. Nuance holds the patent.

But, the reasoning goes, because this was learned, and there is no code in there implementing that algorithm (just "weights" implementing an unrolled version), that code does not violate patents.

It's not a bug, it's a feature. Know any valuable algorithms? Figure out how to learn them.
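The "figure out how to learn them" part is basically black-box imitation: sample input/output pairs from a reference implementation and fit a network whose weights reproduce its behavior. A toy sketch (purely illustrative, and certainly not legal advice on what does or doesn't infringe):

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    def reference_algorithm(x):
        # Stand-in for whatever valuable routine you only have black-box access to.
        return np.sin(3 * x[:, 0]) + 0.5 * x[:, 1] ** 2

    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, size=(5000, 2))
    y = reference_algorithm(X)

    # Fit a small network on the sampled behavior, not on the original source.
    imitator = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                            random_state=0).fit(X, y)

    X_test = rng.uniform(-1, 1, size=(5, 2))
    print(np.c_[reference_algorithm(X_test), imitator.predict(X_test)])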
If we want to "understand" what a network does, that really means we want to disentangle cause and effect and spit out simple algebraic models for it after distilling them from a training set.<p>To the extent this is even possible - which is debatable, for all kinds of reasons - we're going to need a different set of tools. ML is not the right tool for that problem.<p>Something similar to ML may be, but ML itself definitely isn't.
Model decision interpretation is a solved problem: https://github.com/marcotcr/lime
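For anyone who hasn't tried it, a minimal LIME usage sketch (dataset and classifier are placeholders): it explains a single prediction by fitting a local linear model around that instance.

    import numpy as np
    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier
    from lime.lime_tabular import LimeTabularExplainer

    data = load_iris()
    model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

    explainer = LimeTabularExplainer(
        data.data,
        feature_names=data.feature_names,
        class_names=data.target_names,
        mode="classification",
    )
    explanation = explainer.explain_instance(
        data.data[0], model.predict_proba, num_features=4
    )
    print(explanation.as_list())    # (feature condition, weight) pairs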