<i>Better than either approach is to take both the objections and the computer-assisted explanations seriously. Then we might ask the following: What qualities do traditional explanations have that aren’t currently shared by computer-assisted explanations? And how can we improve computer-assisted explanations so that they have those qualities?</i><p>I am interested in this line of reasoning. Can anyone point me to relevant discussion (preferably scholarly)?
We haven't had the rise of computer-aided explanation yet. The article is more about the problem of not having it. It's needed; "Why did the classifier do <i>that</i>?" is starting to become a big problem. Google is having PR problems because their image classifier labeled black people as gorillas.<p>They can probably use their classifier in feedback mode to generate a canonical image of what the gorilla recognizer is looking for. Publishing that image would create worse PR problems. But at least there's some way to get insight into what's happening. That's been a big problem with ANNs: you get a matrix of values out, but there's no "meaning" associated with any of them.
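The "feedback mode" described above is essentially activation maximization: run gradient ascent on the input itself to maximize one class's score, and the resulting input is the "canonical image" for that class. Here's a minimal sketch using a toy linear classifier as a stand-in for a real network (the sizes, the model, and the class index are all made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 16))  # hypothetical: 3 classes, 16-"pixel" inputs

def class_score(x, c):
    """Score of input x for class c under the toy linear model."""
    return W[c] @ x

def canonical_image(c, steps=200, lr=0.1):
    """Ascend the gradient of class c's score with respect to the input.

    For this linear stand-in the gradient is just W[c]; for a real
    network you'd backpropagate to the input instead.
    """
    x = np.zeros(16)
    for _ in range(steps):
        grad = W[c]               # d(score)/dx for a linear model
        x += lr * grad
        x = np.clip(x, -1.0, 1.0)  # keep "pixels" in a valid range
    return x

img = canonical_image(0)
print(class_score(img, 0))  # the synthesized input strongly excites class 0
```

With a deep net the loop is the same shape, just with the gradient coming from backprop; that's the basic trick behind the published "what does this unit want to see" visualizations.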
>> Chomsky compares the approach to a statistical model of insect behavior. Given enough video of swarming bees, for example, researchers might devise a statistical model that allows them to predict what the bees might do next. But in Chomsky’s opinion it doesn’t impart any true understanding of why the bees dance in the way that they do.<p>Chomsky is at liberty to pursue an understanding of bee behavior. The engineers and scientists who created the system were interested in translating, not understanding. It seems to me that any system, whether based on a statistical model or some other approach, that achieves a valid translation is clearly a success. For all we know, the brains of human translators work similarly.