Sometimes, requiring explanability is counterproductive

148 points by metahost over 5 years ago

31 comments

6gvONxR4sf7o over 5 years ago
Okay, I feel like a crazy person arguing against a luminary of the field, but this is so misleading as to be disingenuous, and it makes me trust the guy less. He's throwing causality and explainability into the same bucket and arguing against the need for the latter.

> We often hear that AI systems must provide explanations and establish causal relationships, particularly for life-critical applications.

> Yes, that can be useful. Or at least reassuring.

> But sometimes people have accurate models of a phenomenon without any intuitive explanation or causation that provides an accurate picture of the situation.

It goes on to argue mostly against the need for intuitive explanations, not against establishing causal relationships.

> Now, if there ever was a life-critical physical phenomenon, it is lift production by an airliner wing.

> But we don't actually have a "causal" explanation for it, though we do have an accurate mathematical model and decades of experimental evidence.

The physical models we have are causal ones. The intuitive abstractions like Bernoulli's principle may not work, but analysis based on Navier-Stokes sure does. You plug your shape (cause) into the equations and see what forces (effect) occur. That's causation.

> You know what other life-critical phenomena we don't have good causal explanations for?

> The mechanism of action of many drugs (if not most of them).

Using an industry that's nearly synonymous with the randomized controlled trial as a refutation of the need for a causal relationship is crazy talk. The *mechanism* may be missing, but the causal explanation is that, via a series of RCTs, it's established that the drug *causes* the effects.

I get that half of this is trying to go against a perceived need for intuitive explanations, but it weirdly lumps causation in there.
arcanus over 5 years ago
> Now, if there ever was a life-critical physical phenomenon, it is lift production by an airliner wing. But we don't actually have a "causal" explanation for it, though we do have an accurate mathematical model and decades of experimental evidence

This argument is complete nonsense. Navier-Stokes has a rigorous derivation based on extremely high-fidelity assumptions, such as conservation of mass, momentum, and energy. We understand these assumptions, and we understand the regimes in which using N-S would result in catastrophe (such as rarefied gases, relativistic velocities, etc.).

Neural networks require data. Navier-Stokes does not need to be 'trained'. Deep networks have very little a priori knowledge baked in (from a Bayesian perspective there are intrinsic priors, such as translation invariance). They are admittedly extremely useful, because they are high dimensional (and so are universal approximators) and can be trained efficiently.

Furthermore, you can develop an intuitive approach to many fluid flows. I can provide a much better estimate of the drag profile for a given wing geometry than an untrained person. No such analog is possible with deep nets, which are significantly more opaque in terms of dynamics and non-linear response.

The only way his comments make any sense is if you assume he isn't talking about physical models, like Navier-Stokes, and is instead considering turbulence models, such as RANS or LES. These are parameterized models used for turbulence modeling, and they have little physical intuition. However, this is not the same as saying we do not have high confidence in physics-based models such as Navier-Stokes.

Source: I have a PhD in CFD and several ML publications.
gambler over 5 years ago
> *Now, if there ever was a life-critical physical phenomenon, it is lift production by an airliner wing.*

I nominate this as the worst analogy of the year.

Airplanes are rigorously tested under the same conditions they will operate in. AI *by definition* is tested under conditions that are different from the environment it will operate in, *because that's the whole point of AI* - we want algorithms that adapt themselves to novel information.
huyegn over 5 years ago
I agree with Yann here ... I think the demand for explainability is like a person asking for a "faster horse" when what they really need is a car.

When people ask for explainable models, what they really want (in my opinion) is calibrated and robust *uncertainty estimates*.

Good uncertainty estimates would let them know when to trust a model's prediction and when to ignore it.

For example, a model trained to predict dog breeds should know nothing about cat breeds, and there should be some way to quantify when it doesn't know!

I've been doing a review of techniques that are becoming more popular in this area:

https://blog.everyhue.me/posts/why-uncertainty-matters/
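To make the idea concrete, here is a minimal sketch (not from the comment; the model outputs, class counts, and threshold are made up) of one common way to get such an estimate: Monte Carlo dropout, where several stochastic forward passes are averaged and the spread of the resulting predictive distribution is used to decide when to abstain, as in the cat photo shown to a dog-breed model.

    import numpy as np

    def predictive_entropy(prob_samples):
        """Mean prediction and entropy from T stochastic forward passes.

        prob_samples: array of shape (T, num_classes) holding softmax outputs
        from T passes with dropout left on at inference time (MC dropout)."""
        mean_probs = prob_samples.mean(axis=0)
        entropy = -np.sum(mean_probs * np.log(mean_probs + 1e-12))
        return mean_probs, entropy

    # Hypothetical outputs for a 3-breed model shown a dog photo vs. a cat photo.
    dog_photo = np.array([[0.90, 0.06, 0.04],
                          [0.88, 0.08, 0.04]])   # passes agree -> low entropy
    cat_photo = np.array([[0.50, 0.30, 0.20],
                          [0.20, 0.45, 0.35]])   # passes disagree -> high entropy

    for name, samples in [("dog", dog_photo), ("cat", cat_photo)]:
        probs, h = predictive_entropy(samples)
        verdict = "trust" if h < 0.5 else "abstain"   # threshold is illustrative
        print(name, probs.round(2), round(h, 2), verdict)

Run as-is, the dog photo comes out below the entropy threshold ("trust") and the cat photo above it ("abstain"), which is the behavior the comment is asking for.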
wnoise over 5 years ago
I hate the comparison to how airfoils work. It's a single question that can be answered at a variety of levels, but the basic answer is simple and cannot be gainsaid: they deflect air down.

All the complications are in the exact details of *how* they deflect air down. How much comes from lower-than-ambient pressure above the wing redirecting the slipstream versus higher-than-ambient pressure below the wing doesn't fundamentally change the answer, though those details certainly matter (especially when designing a wing).

In contrast, even for a single "AI", how it responds differently to different inputs is unlikely to be even remotely explainable by the same high-level principles, and it's not clear that the details don't matter.
opless over 5 years ago
I challenged LeCun about the explainability of neural nets a few years ago, and he seemed to dismiss the need for it. As I recall, his explanation was that the weights & configuration of the neural net give you an equation, which is explanation enough. I'd link the post but it seems to have disappeared.

LeCun is also heavily biased against Bayes.
heyitsguay over 5 years ago
I think the type of explainability Yann LeCun is describing in this post is not the same as what people really want when they talk about explainable AI. To use his example, I think most people would be fine with an AI that said "we're doing X because that's what heuristic Y prescribes for the situation", and getting from where we are now to something like that is the challenge. FWIW, I see a similar dialogue in biomedical contexts quite a bit, where there's some cool but hard-to-trust work being done in data-driven image restoration (https://www.biorxiv.org/content/10.1101/236463v5.abstract). There's still a lot that could be done short of programs providing a priori mathematical proof for all models.
omarhaneef over 5 years ago
Interesting exchange in the comments:

Gaurav Mukherjee:

Well argued! Yes, it is true that we don't have causal relationships for a number of phenomena. But the absence of evidence is not the evidence of absence. So corner cases of failure may exist in all these phenomena which can cost lives. Does this mean that we halt the fast-paced progress of AI research or any other scientific pursuit? No! But leveraging empirical evidence that can't be fully explained, in situations where lives are at stake, should require a very high bar of regulation. Responsible scientists and engineers agree that causality is important to understand, and they do all they can to understand how systems work. However, there are likely many among us who do not apply similar standards when using ill-understood techniques. When it comes to regulation, we must pay heed to the worst in us.

Yann LeCun:

Actually, there is no clear definition of causality in classical and quantum physics, simply because the elementary equations of motion are time reversible (if you also reverse charge and parity). For every phenomenon that can occur, the same phenomenon can occur backwards in time (with corresponding anti-particles).

Take a Feynman diagram where an electron and a positron annihilate to produce a gamma-ray photon. It can be interpreted as a gamma-ray photon spontaneously creating an electron-positron pair. It's the same diagram where time goes right to left instead of left to right.

How can one possibly say that A causes B, when B could very well have caused A in a time-reversed viewpoint?

Even worse, most physical phenomena have loopy causal graphs. Motion "causes" friction. But friction limits the velocity of motion. Most differential equations have coupled terms with loopy interactions in which quantity x affects quantity y and vice versa. You rarely have y(t+dt) = f(x(t)) in physics. More often than not, you have coupled equations y(t+dt) = f(x(t)) and x(t+dt) = g(y(t)).

In these (frequent) cases, x causes y, but y also causes x. There is something like that going on in fluid dynamics, which is why it's difficult to come up with "simple" causal explanations.

Only when collective phenomena are considered does the "arrow of time" appear to have a definite direction (with the 2nd law of thermodynamics).

Edit: para spacing
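Purely as an illustration of the "loopy" coupling LeCun describes (a toy sketch, not from the thread; the functions and constants are invented), the two update rules can be stepped forward together, so that each variable at the next instant is computed from the other variable at the previous instant and neither is straightforwardly the sole cause:

    # Minimal sketch of coupled updates y(t+dt) = f(x(t)), x(t+dt) = g(y(t)).
    # The coefficients below are made up purely for illustration.
    def f(x):                 # "motion causes friction"
        return 0.1 * x

    def g(y):                 # "friction limits motion"
        return 1.0 - 0.5 * y

    x, y = 1.0, 0.0
    for t in range(10):
        x, y = g(y), f(x)     # both right-hand sides use the *previous* state
        print(t, round(x, 3), round(y, 3))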
joe_the_user over 5 years ago
From the article:

> *How does lithium treat bipolar disorder?*

The thing is, this isn't an insignificant question. There has been quite a bit of debate over whether various drugs reverse the basic process underlying a given psychiatric disorder or whether they produce a different change that allows a person to function. The phrase "chemical imbalance" has been based on the supposition that various drugs that change brain chemical distributions "cure" various conditions, but the question of whether these drugs directly reverse a condition or add something else seems important, even if we assume the drugs are broadly useful for helping people function in society.
screye over 5 years ago
Yann is undoubtedly a pillar of the deep learning era of AI. But I strongly disagree with what he is saying.

Firstly, the Navier-Stokes equations existed before mechanized flight. They were not invented specifically to reason about flight, in the way that some deep learning "theory" is being projected. They also worked perfectly (at subsonic speeds) for almost every situation involving flight.

ML has time and again proven not to be internally consistent, with contradictions around nearly every corner.

Batch norm was "theoretically" thought to be an essential part of neural networks, until suddenly it wasn't. Dropout was considered essential to avoid overfitting, until it wasn't. We still do not know whether an information bottleneck is good or bad. Adam's formula was incorrect and no one realized it for six years.

> The mechanism of action of many drugs (if not most of them). An example? How does lithium treat bipolar disorder?

This example does more to disprove his point than support it. Medical and nutritional sciences are among the least understood fields out there, with some of their "fundamentals" flipping on their head over the last 50 years. The only reason these "sciences" continue being used is that medicine is essential. A reasonably effective solution with side effects is still better than dead people. It is a begrudging compromise, not an example to be emulated.

AI is not essential. AI imposes on your life without consent. AI will soon be ingrained into every single aspect of your life.

Yann seems to be conflating explainability with causality. Explainability can also mean fully observed correlation. Explainability can mean predictable and reproducible behavior of ML models given a hypothesis. Explainability can mean the ability to ascertain whether a change in model performance came from the hypothesis or from exploiting an unintended aspect of the data/model architecture.

Explainability fundamentally allows ML researchers to make strides in the field in a meaningful way, rather than blindly throwing different computational structures at thousands of GPUs and letting luck of the draw decide what works and when.

Looking back at ideas such as Transformers and ResNets, there was literally no way for the authors to guess that these new computational structures would revolutionize the field. They could easily have been ideas someone else tried, or rotted on someone's to-try list. Explainability and some theoretical logic around NN development would allow for a systematic way to go about research in the ML community as a whole. That's unlikely to happen, but I would rather see people strive for it than not.
naresh_xai over 5 years ago
When a person's claim to fame and research depend on ignoring explainability and causality, he will ignore them to the best of his means.

To him, all the precursors to clinical trials (selection of a molecule from a restricted set of molecules that satisfy certain causal criteria), followed by extensive multi-million-dollar experiments to safeguard against the last 1-2% of uncertainty, are equivalent to a barely preprocessed neural network.

I mean, let's ignore the basic principles of pharmacology and medicine and just run every possible molecule through humans, since that is a valid approach according to him.
tomrod over 5 years ago
For those interested in a great read on why causality really does matter, give "The Book of Why" a read. I'm not convinced Judea Pearl's modeling approach is the most rigorous, but it does a clear and convincing job of putting causality (esp. w/ data fusion!) at the heart of modern systems (including ML).

For those of a more mathematical bent, check out "Causality" by the same author, or "Causal Inference for Statistics, Social, and Biomedical Sciences" by Imbens and Rubin.
carlosdp over 5 years ago
Another, more basic example is gravity. We have a statistical intuition of gravity; we depend on it being there and acting in a consistent way for almost everything in physics. But we don't have an accurate understanding of how gravity works or why it's there. We have theories, but nothing proven.

Yet it's so easy to demonstrate gravity's properties (on Earth) with consistent experimental results that it's literally a science project in every elementary school.
krishnagade about 5 years ago
It depends. Yann LeCun has an interesting point of view, but the case for explainability depends on the risk of harm from an AI model's decision. Explaining why airplanes fly is moderately interesting, but explaining why the 737 MAX crashed is much more interesting and needed. While the former is probably only needed by people studying aerodynamics, the latter matters to passengers, regulators, airlines, governments, etc.

Here is a tweet thread we posted in the past: https://twitter.com/krishnagade/status/1182317508667133952
haecceity over 5 years ago
Re airplanes: deflecting air down is a perfectly intuitive explanation. A change in the momentum of the air means a change in the momentum of the wing in the opposite direction, by Newton's third law.
buboard over 5 years ago
He's probably correct. A lot of what we call "human explanation" is in reality a rationalized version of a hunch, not necessarily the best representation of it. In most situations, some part of the brain crosses a threshold and causes a decision or action, but it's rather rare that humans can explain that action correctly. It's also rare that causation can be established from unambiguous temporal order (e.g. clouds/rain); in most cases our rationalizations are post hoc.

We get a glimpse of this from recent language models: it has become rather easy to blurt out language that is convincingly and comfortingly coherent, and it can be nudged to point in one direction or another. That doesn't mean it's true.
RookyNumbas over 5 years ago
I'm surprised he made the drug analogy. It takes decades and hundreds of millions of dollars to be able to test a drug on humans. I'm curious whether he thinks we need a similar safeguard for all AI-human interactions.
leereeves over 5 years ago
Yann LeCun and others debated this for an hour at NeurIPS 2017.

https://www.youtube.com/watch?v=93Xv8vJ2acI

There are a lot of good points in there from both sides, but what really stuck with me is that, given the choice between an explainable model and a black-box model that works better (more accurate predictions), most people choose the black-box model.
xaryk over 5 years ago
Not all explanations are causal. The explanation literature in the philosophy of science goes pretty far back, but here are some of the highlights:

The Deductive-Nomological Model (Hempel and Oppenheim, 1948) tries to explain a phenomenon using a deductive argument where the premises include particular facts and a general lawlike statement (like a law of nature), and the conclusion is the thing to be explained. [1]

The Statistical Relevance Model (Wesley Salmon) attempts to fix some shortcomings in the DN model that allowed explanations using particular facts and general laws that were not at all relevant to the phenomenon being explained. The idea is that you can explain why X hasn't become pregnant by saying that X has taken birth control, and people who take birth control do not become pregnant; that would fit the DN model, but this explanation is not statistically relevant if X is male. [2]

Unificationist accounts (Philip Kitcher) seek to unify scientific explanations under a common umbrella, as was done with, e.g., electromagnetism. If it is possible to have a unified theory of something, each element becomes more explainable based on its position within that unified theory. [3]

Pragmatic and psychological accounts tend to fit more closely with the kinds of rationalizations that we've seen as some explanations of AI. They can be fictional, but they don't have to be. [4]

IMO we don't currently have an adequate account of explanation within the philosophy of science that works for deep neural networks. This is what my dissertation research focuses on.

[1] https://en.wikipedia.org/wiki/Deductive-nomological_model

[2] https://plato.stanford.edu/entries/scientific-explanation/#SRMod

[3] https://plato.stanford.edu/entries/scientific-explanation/#UniAccExp

[4] https://plato.stanford.edu/entries/scientific-explanation/#PraTheExp
t_serpico over 5 years ago
As somewhat of an outsider to deep learning, it intuitively seems true that if you were able to demystify the black box, it would be easier to improve your models (as you would understand where they succeed, where they fail, and why). From this perspective, explainability would be incredibly productive.
halayli over 5 years ago
I wonder where, if at all, the incompleteness theorems fit into this picture.

https://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_theorems
anthony_doan over 5 years ago
Yeah, the AI world has a loose definition of explainability and interpretability.

I also see this very dogmatic mindset that deep learning will do both prediction and interpretability.

What is stopping you from building two models: a statistical regression model for interpretability/explainability and a deep learning model for prediction?

Each coefficient in a regression has a t-test for the significance of its association with the response. You don't have something like that in deep learning. Also, I've seen many ML practitioners use logistic regression as a classifier and ignore the probability aspect. The Titanic dataset highlights the different mindsets of statisticians and ML people: ML will often see it as a classification problem, dead or not dead, while a statistician will phrase it as "What's the probability of this person dying given these covariates?"

Why does this matter? It really matters in health, medical, and social science. Often inference is what they want, and they need to know what affects your health, not just shove in tons of data and covariates/features. On top of that, you may not even have enough data for these data-hungry ML models.

Another example: biostatisticians figure out the threshold between the benefit of taking an invasive procedure and not taking it. We figure it out by giving a percentage, and the doctors and experts will tell you where the threshold is: 20%? 40%? It's certainly not the 50% cutoff that many ML practitioners default to.

> We often hear that AI systems must provide explanations and establish causal relationships, particularly for life-critical applications. Yes, that can be useful. Or at least reassuring.

To me this is just an excuse not to learn statistics. He should really look into propensity modeling under the Rubin-Neyman causal model. This is where statistics is heading, beyond regression, for observational data.

With all the criticism I have for ML, I think it's just the mindset. ML algorithms have their place, and they're very good in certain domains such as NLP and computer vision. But to act as if they're the be-all and end-all, when statistical models have been there and are used extensively in biostatistics and econometrics, is just hubris and ignorance.

While ML is making excuses about causality, econometricians and statisticians are working to build causal models. IIRC econometrics is going with structural equation models while statisticians are going with the Rubin-Neyman model. There is an ongoing debate about which framework is better; we'll wait and see from the research papers.
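To illustrate the commenter's point about probabilities and per-coefficient significance tests, here is a minimal sketch with synthetic data (the covariates, coefficients, and patient values are all made up): a statistical logistic regression reports a coefficient and a significance test for each covariate and returns a probability for a specific patient, rather than only a hard dead/not-dead label at an arbitrary 50% cutoff.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 500
    age = rng.normal(50, 15, n)                     # hypothetical covariate
    dose = rng.normal(10, 3, n)                     # hypothetical covariate
    logit = -4 + 0.06 * age + 0.1 * dose            # made-up "true" relationship
    died = rng.binomial(1, 1 / (1 + np.exp(-logit)))

    X = sm.add_constant(np.column_stack([age, dose]))
    model = sm.Logit(died, X).fit(disp=0)
    print(model.summary())                          # coefficients, Wald tests, p-values

    # The statistician's question: probability of death for a given patient,
    # not just a 0/1 label thresholded at 50%.
    patient = sm.add_constant(np.array([[60.0, 12.0]]), has_constant="add")
    print("P(death) =", float(model.predict(patient)))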
krnsll over 5 years ago
Would be interested to see what Judea Pearl says about this. I'm not on Twitter any more, but back when I was, I recall him regularly tweeting about causality and AI's shortcomings in that regard.
monadic2 over 5 years ago
How do you validate the model without understanding where you might find problematic results? That doesn’t require a narrative, maybe, but it does require not using black-box-only testing.
gatherhunterer over 5 years ago
Hopefully a mod can change the link.

> Please submit the original source. If a post reports on something found on another site, submit the latter.
bordercases over 5 years ago
This is true for the strict Pearl-style form of causation.

The concept of what counts as a causal explanation can be more expansive, and it varies between disciplines. See the work of Nancy Cartwright.

https://www.researchgate.net/publication/30527010_What_Is_Wrong_With_Bayes_Nets

TL;DR: we've been explaining "causes" without Bayes nets for a while, and Bayes nets unsubtly disregard the common-sense logic scientists use in their practice, including the way that explanations tend to be qualified by context.
pesenti over 5 years ago
Einstein had a great quote: "The most incomprehensible thing about the world is that it is comprehensible." Well, it turns out a lot of the world is not comprehensible. And that's what made Einstein irrelevant for the latter part of his life.

Our overestimation of the comprehensibility of the world may very well be some version of the drunkard's search principle. We are much more likely to know about what's comprehensible than about what's not.
Rumudiez over 5 years ago
The word the author was looking for when they wrote "explainability" is actually "explicable," as in something that can be explained, the converse of the more commonly used word "inexplicable."
pesenti over 5 years ago
For those not on FB:

Link: https://www.scientificamerican.com/article/no-one-can-explain-why-planes-stay-in-the-air/?fbclid=IwAR0-S3nl5P0-vKUsr0oN5eeXmTVPTHMRZ9L_5bDqQStD-7uEHdukXC5JR6o

"We often hear that AI systems must provide explanations and establish causal relationships, particularly for life-critical applications. Yes, that can be useful. Or at least reassuring.

But sometimes people have accurate models of a phenomenon without any intuitive explanation or causation that provides an accurate picture of the situation. In many cases of physical phenomena, "explanations" contain causal loops where A causes B and B causes A.

A good example is how a wing causes lift. The computational fluid dynamics model, based on Navier-Stokes equations, works just fine. But there is no completely accurate intuitive "explanation" of why airplanes fly. Is it because of the Bernoulli principle? Because a wing deflects the air downwards? Because the air above the wing wants to keep going straight, but by doing so creates a low-pressure region above the wing that forces the flow downwards and sucks the wing upwards? All of the above, but none of the above by itself.

Now, if there ever was a life-critical physical phenomenon, it is lift production by an airliner wing. But we don't actually have a "causal" explanation for it, though we do have an accurate mathematical model and decades of experimental evidence.

You know what other life-critical phenomena we don't have good causal explanations for? The mechanism of action of many drugs (if not most of them). An example? How does lithium treat bipolar disorder? We do have considerable empirical evidence provided by extensive clinical studies.

This is not to say that causality is not an important area of research for AI. It is. But sometimes, requiring explanability is counterproductive."
datastoat over 5 years ago
If Facebook lawyers understood the implications of LeCun's argument, they wouldn't be happy!

There are two types of explanation here: (1) why did the data come to be as it is, and (2) why did my ML make the prediction it did.

Science looks for the answer to (1), and causal models are a great way to think about it. Science and engineering, when they go hand in hand, build a machine by saying "Here is data, let me do science to understand nature's underlying laws, and my machine shall be based on those laws." The machine is inherently explainable because it's based on scientific laws.

In the ML world, we can bypass the "learn scientific laws" part and jump straight to "build a machine based on data." So the best answer to (2) has got to be "my ML made the prediction it did because of its training data." As Pearl said, ML is just curve fitting, so the only way to "explain" an ML prediction is to say "here are the points that the curve was fitted to." Prediction is just reading a value off the curve. Think the machine is biased? Look for bias in the training dataset! Think the machine is inaccurate? Look for sparsity or conflict in the training dataset!

So the consequence of LeCun's distinction is that when the GDPR calls for explainability of ML decision-making, it is really calling for sharing of the training data. Facebook, watch out!
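Taken literally, that view of explanation can be sketched as follows (an illustrative toy, not anything from the comment; the data and the nearest-point rule are invented): fit a curve to training points, then "explain" a prediction by surfacing the training points closest to the query.

    import numpy as np

    # Toy training data (made up); the "curve" is a simple least-squares line.
    x_train = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
    y_train = np.array([0.1, 0.9, 2.1, 2.9, 4.2])
    coeffs = np.polyfit(x_train, y_train, deg=1)

    def predict_and_explain(x_query, k=2):
        """Return the prediction plus the k nearest training points as the 'explanation'."""
        y_pred = np.polyval(coeffs, x_query)
        nearest = np.argsort(np.abs(x_train - x_query))[:k]
        return y_pred, list(zip(x_train[nearest], y_train[nearest]))

    y, support = predict_and_explain(2.4)
    print("prediction:", round(y, 2), "fitted from points near the query:", support)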
Antwan over 5 years ago
This guy is a total genius. Shame he doesn't care more about who he is working for and what his research is used for.