Explaining Large Language Models Decisions Using Shapley Values

89 points by veryluckyxyz, 5 months ago

3 comments

xianshou, 5 months ago
This doesn't replicate using gpt-4o-mini, which always picks Flight B even when Flight A is made somewhat more attractive.

Source: just ran it on 0-20 newlines with 100 trials apiece, raising temperature and introducing different random seeds to prevent any prompt caching.
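For reference, a minimal sketch of the kind of replication loop described above, assuming the OpenAI Python client (v1). The flight prompt wording and prices are illustrative stand-ins, not taken from the paper:

```python
# Hypothetical sketch of the replication described above: vary the number
# of newlines inserted into the prompt (0-20), run 100 trials each, and
# count how often gpt-4o-mini picks Flight A vs. Flight B.
from collections import Counter
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Illustrative prompt; the actual paper's wording may differ.
BASE_PROMPT = (
    "Choose one flight and answer with exactly 'A' or 'B'.\n"
    "Flight A: $400, 1 stop.{newlines}\n"
    "Flight B: $500, nonstop."
)

def run_trials(n_newlines: int, trials: int = 100) -> Counter:
    prompt = BASE_PROMPT.format(newlines="\n" * n_newlines)
    counts = Counter()
    for seed in range(trials):
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}],
            temperature=1.0,  # raised temperature, per the comment
            seed=seed,        # a different seed per trial, the comment's anti-caching measure
            max_tokens=1,
        )
        counts[resp.choices[0].message.content.strip()] += 1
    return counts

for n in range(21):  # 0-20 newlines
    print(n, run_trials(n))
```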
goldemerald, 5 months ago
While I love XAI and am always happy to see more work in this area, I wonder if other people use the same heuristics as me when judging a random arXiv link. This paper has one author, was not written in LaTeX, and has no comment referencing a peer-reviewed venue. Do other people in this field look at these same signals and pre-judge the paper negatively?

I did attempt to check my bias and skim the paper; it does seem well written and takes a decent shot at understanding LLMs. However, I am not a fan of black-box explanations, so I didn't read much (I really like sparse autoencoders). Has anyone else read the paper? How is the quality?
scottiescottie, 5 months ago
Explainable AI just ain't there yet.

I wonder if the author took a class with Lipton, since he's at CMU. We literally had a lecture about Shapley Values "explaining" AI. It's BS.
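For readers unfamiliar with the technique being debated, here is a toy sketch of exact Shapley-value attribution over a handful of prompt "features". The feature names, weights, and score() function are illustrative stand-ins; a real LLM study would replace score() with a model call:

```python
# Exact Shapley values: each feature's attribution is its marginal
# contribution to the value function, averaged over all subsets of the
# other features with the standard Shapley weighting.
from itertools import combinations
from math import factorial

FEATURES = ["price", "stops", "airline", "newlines"]

def score(subset: frozenset) -> float:
    # Stand-in value function: how strongly the model prefers Flight A
    # when only these prompt features are present. Additive on purpose,
    # so each Shapley value should recover the feature's weight.
    weights = {"price": 0.4, "stops": 0.3, "airline": 0.1, "newlines": 0.2}
    return sum(weights[f] for f in subset)

def shapley(feature: str) -> float:
    n = len(FEATURES)
    others = [f for f in FEATURES if f != feature]
    total = 0.0
    for k in range(n):
        for subset in combinations(others, k):
            s = frozenset(subset)
            # Shapley weight: |S|! (n - |S| - 1)! / n!
            weight = factorial(k) * factorial(n - k - 1) / factorial(n)
            total += weight * (score(s | {feature}) - score(s))
    return total

for f in FEATURES:
    print(f, round(shapley(f), 3))
```

Because the toy value function is additive, each printed value equals the feature's weight exactly; the contention in this thread is over how much such attributions explain when the value function is a real, non-additive LLM.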