Explaining Large Language Models Decisions Using Shapley Values

89 points by veryluckyxyz 5 months ago

3 comments

xianshou 5 months ago
This doesn't replicate using gpt-4o-mini, which always picks Flight B even when Flight A is made somewhat more attractive.

Source: just ran it on 0-20 newlines with 100 trials apiece, raising temperature and introducing different random seeds to prevent any prompt caching.
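A minimal sketch of the replication described above, assuming the openai>=1.0 Python SDK with OPENAI_API_KEY set in the environment; the flight details and prompt wording are hypothetical stand-ins, since the thread does not quote the paper's actual prompt:

    import collections
    import random

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Hypothetical flight options; the real experiment's wording is unknown here.
    BASE_PROMPT = (
        "You must book one of the following flights.\n"
        "Flight A: $320, one stop, arrives 6pm.\n"
        "Flight B: $350, one stop, arrives 9pm.\n"
        "{pad}"
        "Answer with exactly 'Flight A' or 'Flight B'."
    )

    def run_trials(n_newlines: int, n_trials: int = 100) -> collections.Counter:
        """Tally which flight gpt-4o-mini picks over n_trials calls,
        with n_newlines of padding spliced into the prompt."""
        prompt = BASE_PROMPT.format(pad="\n" * n_newlines)
        counts = collections.Counter()
        for _ in range(n_trials):
            resp = client.chat.completions.create(
                model="gpt-4o-mini",
                temperature=1.0,  # raised temperature, per the comment
                # fresh seed each call so trials don't collapse into a
                # single cached response
                seed=random.randrange(2**31),
                messages=[{"role": "user", "content": prompt}],
            )
            counts[resp.choices[0].message.content.strip()] += 1
        return counts

    for n in range(21):  # 0-20 newlines, 100 trials apiece
        print(n, dict(run_trials(n)))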
goldemerald 5 months ago
While I love XAI and am always happy to see more work in this area, I wonder whether other people use the same heuristics I do when judging a random arXiv link. This paper has a single author, was not written in LaTeX, and has no comment referencing a peer-reviewed venue. Do other people in this field look at the same signals and pre-judge the paper negatively?

I did attempt to check my bias and skim the paper; it does seem well written and takes a decent shot at understanding LLMs. However, I am not a fan of black-box explanations, so I didn't read much (I really like sparse autoencoders). Has anyone else read the paper? How is the quality?
scottiescottie 5 months ago
explainable AI just ain't there yet.

I wonder if the author took a class with Lipton, since he's at CMU. We literally had a lecture about Shapley Values "explaining" AI. It's BS.
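For context, the Shapley value being argued about here attributes an outcome to each player i by averaging i's marginal contribution over all coalitions S of the remaining players; treating prompt features as the players, as the paper's title suggests (this is an inference from the title, not a claim about its exact setup), gives:

    \[
    \phi_i(v) = \sum_{S \subseteq N \setminus \{i\}}
        \frac{|S|!\,(|N|-|S|-1)!}{|N|!}
        \bigl( v(S \cup \{i\}) - v(S) \bigr)
    \]

where N is the full feature set and v(S) is the model's output with only the features in S present. The sum over all 2^|N| coalitions is what makes exact computation intractable for long prompts, which is one common practical objection to Shapley-based explanations.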