
Explainable Artificial Intelligence (XAI) Darpa Funding

134 points by Dim25 over 8 years ago

18 comments

wyc over 8 years ago
This is important because there exists a trade-off in statistical learning models: in general, the more flexible your models are, the less understandable they become [0]. Modern machine learning techniques are typically very flexible.

Gaining intuition into a model's reasoning is what builds understanding and trust: transparency. When you strike a nail with a hammer, it's pretty predictable what might happen: the nail could get hit, the hammer could miss, or very rarely, the hammer's head may fly off the handle. When you replace the hammer with a black box that works correctly 99.999% of the time, but for the remaining 0.001% does something completely unpredictable, there's a problem with volatility, because that unpredictable event may have unacceptable consequences. I think explainable AI could help with intuitive and more fine-grained risk analysis, and that's certainly a good thing in high-stakes applications such as defense.

[0] ISLR, page 25: http://www-bcf.usc.edu/~gareth/ISL/ISLR%20First%20Printing.pdf
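A tiny illustration of that trade-off (an editorial sketch, not from the comment), using a synthetic dataset and two scikit-learn models as stand-ins: the linear model's explanation is just its coefficients, while the more flexible boosted ensemble usually fits better but offers no comparably compact summary.

```python
from sklearn.datasets import make_friedman1
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

# Synthetic regression problem with a known nonlinear ground truth.
X, y = make_friedman1(n_samples=500, noise=0.5, random_state=0)

linear = LinearRegression()
boosted = GradientBoostingRegressor(random_state=0)

# Out-of-sample fit (R^2): the flexible model usually wins here.
print("linear  R^2:", cross_val_score(linear, X, y, cv=5).mean().round(3))
print("boosted R^2:", cross_val_score(boosted, X, y, cv=5).mean().round(3))

# Interpretability: the linear model's story is its coefficient vector;
# the boosted ensemble's "explanation" is the ensemble of trees itself.
linear.fit(X, y)
print("linear coefficients:", linear.coef_.round(2))
```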
dkarapetyan over 8 years ago
This is fantastic. DARPA gets it. I look forward to whatever fruits come from this labor. Maybe one day I won't have to look at stack traces and reverse engineer 3rd-party dependencies to figure out why things are breaking. Maybe one day error messages will have explanatory power. Maybe one day IDEs will understand abstractions other than ASTs and types, and instead will understand things that convey human intent not so closely tied to rigid constructs like type systems. What a wonderful world that will be.
Dim25 over 8 years ago
Direct link to the detailed specification [PDF]: https://www.fbo.gov/utils/view?id=ae0b129bca1080cc7c517e8dadfa3ca2

Related FAQ [PDF]: http://www.darpa.mil/attachments/XAIFAQ8-26.pdf
iverjo over 8 years ago
This brings Lime [1] to mind. "Explaining the predictions of any machine learning classifier"

[1] https://github.com/marcotcr/lime
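For readers who haven't seen it, a minimal sketch of how LIME is typically used with a tabular scikit-learn classifier; the iris dataset and random forest here are placeholder choices, not anything from the comment:

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

iris = load_iris()
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(iris.data, iris.target)

explainer = LimeTabularExplainer(
    iris.data,
    feature_names=iris.feature_names,
    class_names=iris.target_names,
    mode="classification",
)

# LIME perturbs the instance and fits a local linear surrogate model,
# so the "explanation" is a set of weighted feature conditions.
exp = explainer.explain_instance(
    iris.data[0], model.predict_proba, labels=[0], num_features=4
)
print(exp.as_list(label=0))  # (feature condition, weight) pairs for class 0
```

The key design point is that the surrogate is only locally faithful: it explains one prediction at a time, not the model as a whole.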
Houshalter over 8 years ago
There was a machine learning system designed to produce interpretable results, called Eureqa. Eureqa is a fantastic piece of software that finds simple mathematical equations that fit your data as well as possible. Emphasis on the "simple": it searches for the smallest equations it can find that work, and gives you a choice of different equations at different levels of complexity.

But still, the results are very difficult to interpret. Yes, you can verify that the equation works, that it predicts the data. But why does it work? Well, who knows? No one can answer that. Understanding even simple math expressions can be quite difficult. Imagine trying to learn physics from just reading the math equations involved and nothing else.

One biologist put his data into the program and found, to his surprise, that it produced a simple expression that almost perfectly explained one of the variables he was interested in. But he couldn't publish his result, because he couldn't understand it himself. You can't just publish a random equation with no explanation. What use is that?

I think the best method of understanding our models is not going to come from making simpler models that we can compute by hand. Instead, I think we should take advantage of our own neural networks: train humans to predict what inputs, particularly in images, will activate a node in a neural network. We will learn that function ourselves, and then its purpose will make sense to us. Just looking at the gradients of the input conveys a huge amount of information about which input features are the most and least important, and by about how much.

But mostly I think the effort towards explainability is fundamentally misguided. In the domains where it is supposedly the most desirable, like medicine, accuracy should matter above all. A less accurate model could cost lives. Accuracy is easy to verify through cross-validation, but explainability is a mysterious, unmeasurable goal.
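The "look at the input gradients" idea above can be sketched in a few lines; the model and input below are placeholders, and any differentiable classifier would do:

```python
import torch
import torch.nn as nn

# Placeholder classifier over 10 input features and 2 classes.
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
model.eval()

x = torch.randn(1, 10, requires_grad=True)  # one example, gradients tracked
score = model(x)[0, 1]                      # score of the class of interest
score.backward()                            # d(score)/d(input)

# Large-magnitude gradients mark the features the prediction is most
# sensitive to; the sign says which direction pushes the score up.
saliency = x.grad.abs().squeeze()
print(saliency.argsort(descending=True))    # features ranked by importance
```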
bluetwo over 8 years ago
(Looks over shoulder)

Anyone else thinking this mirrors their own experimental work?

Anyone else thinking of putting in an abstract?

Abstract Due Date: September 1, 2016, 12:00 noon (ET)

Proposal Due Date: November 1, 2016, 12:00 noon (ET)
Eliezer over 8 years ago
(I am relatively excited about this research direction. It seems like the sort of thing that might lead to genuinely useful components of a safer AGI system later.)
dschiptsov over 8 years ago
Yeah, all they want is a simple mechanism for how to jump from mere "blind", mechanistic feature extraction to the notion that creatures of this Nature usually have two eyes, and to make a hard-wired heuristic, a shortcut which improves pattern recognition by orders of magnitude at less computational cost.

Every child will tell you that cars have eyes, and even a crow can track the direction of your gaze.

Well, I would also give away some govt. printed money to know how to make this kind of jump from raw pixels to high-level shapes.)

The answer, by the way, is that the code (which is data) should be evolved too, not just the weights of a model. This is an old fundamental idea from the glorious times of using Lisp as the AI language: everything in the brain is a structure made out of conses^W neurons.

And feature extraction and heuristics should be "guided". In the process of evolution it is guided by way too many iterations of training and *random* selection of emerging features. Eventually a shortcut, "creatures have eyes", will be found and selected as much more efficient. We need just a few million years or so of brute forcing.

Hey, Darpa, do you fund lone gunmen?)
vonnik over 8 years ago
Integration of Neural Networks with Knowledge-Based Systems: https://www.uni-marburg.de/fb12/datenbionik/pdf/pubs/1995/ultsch95integration2
yoav_hollander over 8 years ago
If there is real progress towards Explainable AI, this would also be very useful for _verifying_ machine-learning-based systems (i.e. finding the bugs in them).

I wrote about this in [1], but I am not a machine-learning expert (I am coming from the verification side), so I would love to hear comments from other people.

[1] https://blog.foretellix.com/2016/08/31/machine-learning-verification-and-explainable-ai/
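One concrete flavor of that verification idea, sketched here as a metamorphic test (an editorial framing under stated assumptions, not the linked post's method): apply a transformation that should leave the answer unchanged and flag the inputs where the model's prediction flips.

```python
import numpy as np

def metamorphic_check(predict, X, perturb, n_trials=100, seed=0):
    """Return indices of inputs whose prediction changes under `perturb`."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=min(n_trials, len(X)), replace=False)
    failures = []
    for i in idx:
        before = predict(X[i : i + 1])[0]
        after = predict(perturb(X[i : i + 1]))[0]
        if before != after:
            failures.append(int(i))
    return failures

# Hypothetical usage: tiny measurement noise should not flip the class.
# failures = metamorphic_check(model.predict, X_test,
#                              lambda x: x + np.random.normal(0, 1e-6, x.shape))
```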
michaelscott over 8 years ago
This is sorely needed for machine learning if it's to get both more complex and more accurate. Coincidentally, Alan Kay brought the "expert systems" idea up in his recent AMA as well. It'd be inconceivable to write code today that couldn't be thoroughly debugged, so we should expect the same of our machine learning systems.
vonnik over 8 years ago
Bringing some form of feature introspection to deep neural networks will probably involve clever ways of visualizing the feature activations of unstructured data: https://arxiv.org/abs/1603.02518
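One common way to get at those activations in practice is forward hooks; a rough sketch with a placeholder convolutional model (the linked paper is about visualization methods themselves, this only shows how the raw activations might be captured for plotting):

```python
import torch
import torch.nn as nn

# Placeholder two-layer convolutional model.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
)

activations = {}

def capture(name):
    # Each hook stashes the layer's output under its module name.
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

for name, layer in model.named_modules():
    if isinstance(layer, nn.Conv2d):
        layer.register_forward_hook(capture(name))

model(torch.randn(1, 3, 64, 64))  # one dummy RGB image
for name, act in activations.items():
    print(name, act.shape)  # each channel can then be rendered as a heatmap
```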
pattisapu over 8 years ago
"If you can't explain it to a six year old, you don't understand it yourself." -Einstein
breezest over 8 years ago
The goal is ambitious. I have similar ideas in my mind but do not have a team to complete the details.
hackcasual over 8 years ago
My aunt worked on systems for explaining early-generation networks for medical diagnoses: http://link.springer.com/article/10.1007/BF01413743
syats over 8 years ago
"I know of an uncouth region whose librarians repudiate the vain and superstitious custom of finding a meaning in books and equate it with that of finding a meaning in dreams or in the chaotic lines of one's palm ..."

JL Borges, The Library of Babel
dmix over 8 years ago
Off topic: I checked other FedBizOpps listings (the site is now famous thanks to the War Dogs film in theaters). There is a listing for a "Big Ass Fan", 16' long, for the Air Force: https://www.fbo.gov/index?s=opportunity&mode=form&id=8de699e71f1dc9a084509651f2221cf3&tab=core&_cview=0
eli_gottlieb over 8 years ago
Well, if you were gonna submit an abstract, the deadline was a week ago. Good luck getting a grant proposal ready if you haven't already!