'Explainable Artificial Intelligence': Cracking Open the Black Box of AI

57 points by sherm8n about 8 years ago

6 comments

harperlee about 8 years ago
Newbie question: I've heard that PGMs are a superset of neural networks. In the PGM materials I've read, the example topologies are made of nodes that are manually chosen and represent concepts (smart student, good grades, difficult subject, etc.), whereas a neural network example is usually a huge set of nodes that end up finding their meaning on their own. I also vaguely recall a tutorial in which you can highlight the nodes that contributed to a classification; the only thing is that they don't have meaning for a human. Then the article states:

> restrict the way nodes in a neural network consider things to 'concepts' like colour and shapes and textures.

Aren't these just PGMs? Are they NNs? Is it just a methodological choice about how to select the topology? Don't you lose the automatic meaning / structure search? I'm a little bit confused...
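A minimal sketch (not from the article) of what restricting nodes to hand-picked 'concepts' could look like: a concept-bottleneck-style layer whose units are forced to mean something a human chose in advance, while the rest is still an ordinary neural network trained by gradient descent. The concept names and sizes below are made up for illustration.

    # Sketch of a concept-bottleneck network: the hidden layer must predict
    # a few named, human-chosen concepts, and the final decision is made
    # only from those concept scores. Concept names are illustrative.
    import torch
    import torch.nn as nn

    CONCEPTS = ["is_red", "is_round", "is_striped"]  # hand-picked, PGM-style labels

    class ConceptBottleneck(nn.Module):
        def __init__(self, n_inputs: int, n_classes: int):
            super().__init__()
            # raw features -> concept scores (each unit has a fixed human meaning)
            self.to_concepts = nn.Linear(n_inputs, len(CONCEPTS))
            # concept scores -> class logits (still a learned mapping)
            self.to_classes = nn.Linear(len(CONCEPTS), n_classes)

        def forward(self, x):
            concepts = torch.sigmoid(self.to_concepts(x))
            return self.to_classes(concepts), concepts

    model = ConceptBottleneck(n_inputs=64, n_classes=2)
    logits, concepts = model(torch.randn(1, 64))
    # The concept vector is the "explanation": unlike an opaque hidden layer,
    # each entry answers a question a human chose in advance.
    print(dict(zip(CONCEPTS, concepts.squeeze().tolist())))

Under this reading, the topology is still a neural network trained end to end; what changes is that part of it is pinned to human-chosen variables, PGM-style, which is roughly the trade-off the comment asks about: you get meaning back, but give up some of the automatic structure search.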
bko about 8 years ago
I think the author is overstating the importance of being able to explain, in human terms, the decisions made by a neural network. For instance, there is no one reason that I am able to recognize a dog as such. Any feature or combination of features I can think of can be found in another animal. Something deeper is happening when I correctly identify dogs, and it is unexplainable, at least by me.

The examples normally given of wildly inaccurate predictions were concocted by training a separate neural network to trick the original one, which seems to showcase the effectiveness of neural networks rather than highlight a weakness.

Also, I would note that human intuition is not immune to tricks. For instance, optical illusions regularly trick our perception.
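The usual way those "tricks" are built (the comment mentions training a separate network; a simpler classic construction is gradient-based) is to nudge the input in the direction that most increases the model's loss. A hedged sketch, assuming a differentiable PyTorch classifier; nothing here comes from the article:

    # FGSM-style (Fast Gradient Sign Method) adversarial perturbation.
    # `model` is any differentiable classifier returning logits; inputs and
    # labels here are placeholders, not anything from the article.
    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, true_label, epsilon=0.03):
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), true_label)
        loss.backward()
        # Step in the direction that most increases the loss, then clamp to
        # the valid pixel range so the change stays imperceptibly small.
        return (x + epsilon * x.grad.sign()).clamp(0, 1).detach()

The perturbed input looks identical to a human but can flip the model's prediction, which is the kind of "wildly inaccurate prediction" being discussed.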
yummyfajitas about 8 years ago
Similarly, I feel that a car shouldn't drive too fast. If it does drive too fast, then a human running after it might be unable to catch up!
bencollier49 about 8 years ago
There's a hell of a lot of money to be made by the person who cracks this. The major blockers preventing a lot of AI being rolled out across the EU are laws which stipulate that you have to be able to explain a decision to, for example, refuse a person credit.

Not to mention the fact that we can correct faulty assumptions on the fly if we can get the networks to introspect.
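For contrast, the kind of explanation such laws seem to ask for is trivial with a linear scorecard, where each feature's contribution to a refusal can be read off directly; it is the deep models that lose this property. A toy sketch with made-up feature names and numbers:

    # Toy credit scorecard: the per-feature contributions are the explanation.
    # All names and values below are invented for illustration.
    import numpy as np

    features = ["income", "debt_ratio", "missed_payments"]
    weights = np.array([0.8, -1.5, -2.0])   # learned coefficients
    bias = 0.2
    applicant = np.array([0.6, 0.7, 1.0])   # normalised applicant data

    contributions = weights * applicant
    score = contributions.sum() + bias       # > 0 approve, otherwise refuse

    for name, c in sorted(zip(features, contributions), key=lambda t: t[1]):
        print(f"{name:>16}: {c:+.2f}")
    print(f"  decision: {'approve' if score > 0 else 'refuse'} (score {score:+.2f})")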
TeMPOraL about 8 years ago
One issue I don't see considered is: how do we ensure that explainable artificial intelligence *doesn't lie*? Right now it may not be an issue, but as AI systems get complex ("smart") enough, one needs to be sure that the introspective output isn't crafted to influence the people looking at it.
cr0sh about 8 years ago
Let's say this is possible. How would we know that it (the AI) isn't doing a post-hoc rationalization, or just outright lying about its reasoning?

In other words, why do we trust humans more than machines? In fact, why do we not think of humans as machines, just ones made out of different materials? Why do we have this bias that machines are and must be deterministic, and that since humans aren't, they must not be machines? Furthermore, since we know that these AI models are sometimes stochastic, why do we still insist that they be explainable, when humans exhibit the same kind of output and we don't insist upon their determinism?

I'm not certain that we can make these models, especially complex deep-learning CNNs and others like them, explainable, any more than an individual can tell you how his or her brain came up with a solution. Most of the time we employ post-hoc reasoning to explain our decisions, depending on how the output resolves. That, or we lie. Rarely do we say "I don't know", because to do so is to admit a form of failure. Not admitting it is part of what helps religion continue: when we don't know, we can ascribe the reason to some external force instead. If we were willing to say "I don't know, but let's try to find out" (insert XKCD here), we might be better off as a species.

I don't think an AI model will be any different, or can be. If we insist that an AI be able to deterministically and truthfully tell us exactly how it arrived at a conclusion, we must be ready to accept that we should demand the same of human reasoning as well. Anything less would be hypocritical at best.