A Novel Framework for Explaining Machine Learning Using Shapley Values

49 points by krishnagade over 4 years ago

3 comments

SpaceManNabs over 4 years ago
This paper extends a lot of the ideas in "A Unified Approach to Interpreting Model Predictions" (reference 19). The term "unified" is also used in the OP paper.

That reference paper is unifying in the sense that it ties together a lot of frameworks such as LIME, QII, and others. The paper in the OP is unifying in the sense of the explanation games/game formulations (while also referencing QII).

The perspective of using contrastive explanations is very intuitive and is becoming more common in different areas of ML. My only wish is that I could try this FAE stuff myself. The authors of SHAP have an immediately accessible Python library; I could not find one for this novel framework. It is not that I doubt the results; my point is that a lot of good ideas like the ones presented in this paper don't take off unless the authors also ship easily pip-installable libraries.

edit: I found the code [1], but it is not an easily conda-installable library.

[1]: https://github.com/fiddler-labs/the-explanation-game-supplemental
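For context on the comparison being drawn: the SHAP workflow the comment calls "immediately accessible" amounts to one pip install plus a few lines of code. Here is a minimal sketch; the dataset and model are illustrative choices, not anything from the paper or this thread.

    # Minimal sketch of the pip-installable SHAP workflow
    # (pip install shap xgboost). The dataset and model are
    # illustrative choices, not from the paper under discussion.
    import shap
    import xgboost

    # Toy tabular dataset bundled with the shap package.
    X, y = shap.datasets.adult()
    model = xgboost.XGBClassifier().fit(X, y.astype(int))

    # TreeExplainer computes Shapley-value attributions for tree ensembles.
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X)

    # Global summary plot of per-feature attributions.
    shap.summary_plot(shap_values, X)

That install-and-a-dozen-lines experience is the bar the comment is holding the FAE code to.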
Der_Einzige over 4 years ago
This paper is neat!

More broadly, I am very concerned by how difficult it is to meaningfully run model-explainability techniques on black-box models for text/images in the real world. Most people are not AI researchers and do not want to deal with the hassle of trying random research code off GitHub (which is usually chock full of bugs).

The ELI5 library implements LIME really well for text, and I have not found a SHAP equivalent with the same functionality (highlighting the most important words and exporting pretty HTML being the big ones). A sketch of that ELI5 workflow follows below.

It's even worse when you move toward neural-network gradient explanations. There are libraries like Captum, but in practice it fails on all the big transformer language models I try to train. I'm still not sure how to run any of these gradient explanation methods on PyTorch transformer models. Even when they do run, they don't display saliency maps with as much detail as the ELI5 library does.

There's so much good research in explainable AI, and so much engineering work remains to be done! FFS, I had to hack the hell out of ELI5/LIME to get it to work on clustering (and I still don't know why explainable clustering is not really done, or why I couldn't find an off-the-shelf solution for it).
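As a concrete reference point for the ELI5/LIME text workflow praised above (per-word highlighting exported as HTML), a minimal sketch looks like the following; the classifier, dataset, and output path are illustrative stand-ins, not the commenter's actual setup.

    # Minimal sketch of ELI5's LIME text explainer with HTML export
    # (pip install eli5 scikit-learn). The pipeline, dataset, and
    # output path are illustrative stand-ins.
    import eli5
    from eli5.lime import TextExplainer
    from sklearn.datasets import fetch_20newsgroups
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Train a simple black-box text classifier to explain.
    train = fetch_20newsgroups(subset="train",
                               categories=["sci.med", "sci.space"])
    pipe = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    pipe.fit(train.data, train.target)

    # LIME fits a local white-box surrogate around one document.
    te = TextExplainer(random_state=42)
    te.fit(train.data[0], pipe.predict_proba)

    # Export the word-level saliency highlighting as standalone HTML.
    expl = te.explain_prediction(target_names=train.target_names)
    with open("explanation.html", "w") as f:
        f.write(eli5.format_as_html(expl))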
krishnagade over 4 years ago
At Fiddler.AI, we're working on cutting-edge explainable-AI algorithms. Here is our latest publication, which was recently awarded the Best Paper award at CD-Make 2020. #WeekendReading #ExplainableAI #ShapleyValues

Please let us know if you have any thoughts or feedback.