Discovering latent knowledge in language models without supervision

149 points by dayve over 2 years ago

10 comments

usgroup over 2 years ago
Exciting times. The philosophical ramifications of the syntax/semantics distinction are not something people think much about in the main. However, thanks to GPT et al., they soon will :)

More to the point, consistency will improve accuracy insofar as inconsistency is sometimes the cause of inaccuracy. However, being consistent is an extremely low bar. Even consistency is a problem in natural language, where so much depends on usage -- in most cases it is near impossible to determine whether two sentences are actually negations of each other. But the real problem is assigning truth to valid sentences; otherwise we could all just speak Lojban and be done with untruth forever.
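For context, the consistency the paper leans on is narrower than negation in general: each question is turned into a contrast pair (the same statement answered "True" and "False"), and an unsupervised probe over the model's hidden states is trained so the two probabilities behave as mutual negations. A minimal PyTorch-style sketch of that objective (the names and hidden size here are illustrative, not the authors' code):

    import torch

    def ccs_loss(probe, h_pos, h_neg):
        # probe: maps a hidden state to a logit; h_pos / h_neg: hidden states for the
        # "True" / "False" versions of the same statement.
        p_pos = torch.sigmoid(probe(h_pos))
        p_neg = torch.sigmoid(probe(h_neg))
        consistency = (p_pos - (1 - p_neg)) ** 2   # the two answers should be mutually exclusive
        confidence = torch.min(p_pos, p_neg) ** 2  # discourage the degenerate p_pos = p_neg = 0.5 solution
        return (consistency + confidence).mean()

    probe = torch.nn.Linear(4096, 1)  # hidden size depends on the model; 4096 is a placeholder
    optimizer = torch.optim.Adam(probe.parameters(), lr=1e-3)

Only the small probe is trained, and with no labels at all; the consistency and confidence terms are the entire supervision signal.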
O__________O over 2 years ago
Anyone able to provide a set of examples that produce latent knowledge and explicitly state what the latent knowledge produced is? If possible, even a basic explanation of the paper would be nice too, based on reading other comments in the thread.

EDIT/Update: Just found examples from the 10 datasets starting on page 23. That said, even after reviewing these, my prior request stands. As far as I can guess at this point, this research just models responses across multiple models in a uniform way, which to me makes the claim that this method outperforms other methods questionable, given that it requires existing outputs from other models to aggregate knowledge across them. Am I missing something?
Daveenjay over 2 years ago
Asked ChatGPT to explain like I’m 5. This is what it produced.

“Okay! Imagine that you have a big robot in your head that knows a lot about lots of different things. Sometimes, the robot might make mistakes or say things that aren't true. The proposed method is like a way to ask the robot questions and figure out what it knows, even if it says something that isn't true. We do this by looking inside the robot's head and finding patterns that make sense, like if we ask the robot if something is true and then ask if the opposite of that thing is true, the robot should say "yes" and then "no." Using this method, we can find out what the robot knows, even if it sometimes makes mistakes.”
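Concretely, the "ask it both ways and look inside its head" step amounts to formatting each example twice and reading off a hidden layer. A rough sketch with a Hugging Face-style model (the model, prompt wording, and layer choice are illustrative assumptions, not taken from the paper):

    import torch
    from transformers import AutoTokenizer, AutoModel

    tok = AutoTokenizer.from_pretrained("gpt2")  # small model just for the sketch
    model = AutoModel.from_pretrained("gpt2", output_hidden_states=True)

    def hidden_state(statement, label):
        # Format the same statement with each candidate answer, e.g. "... True" vs "... False".
        inputs = tok(f"{statement} {label}", return_tensors="pt")
        with torch.no_grad():
            out = model(**inputs)
        # Use the last token's representation from the final layer as the example's feature vector.
        return out.hidden_states[-1][0, -1]

    h_pos = hidden_state("The capital of France is Paris.", "True")
    h_neg = hidden_state("The capital of France is Paris.", "False")

The pair (h_pos, h_neg) is what the unsupervised probe sketched above is trained on.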
froggychairs over 2 years ago
The GitHub repo: https://github.com/collin-burns/discovering_latent_knowledge
PaulHoule over 2 years ago
Back when I was messing around with LSTM models, I was interested in training classifiers to find parts of the internal state that light up when the model is writing a proper name or something like that.

Nice to see people doing similar things with transformers.

Truth, though, is a bit problematic. The very existence of the word is what lets "the truth is out there" open the TV series The X-Files, or lets Truth Social use the name. I'm sure there is a "truthy" neuron in there somewhere, but one aspect (not the only aspect) of truth is the evaluation of logical formulae (consider the evidence and reasoning process used in court), and once you can do that you run into the problems Gödel warned you about -- regardless of what kind of technology you use.
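The LSTM-era version of this is essentially a supervised linear probe: freeze the model, collect per-token hidden states, and fit a small classifier for the property of interest. A rough sketch with placeholder data (the proper-name labels and array shapes are illustrative):

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # In practice: hidden_states collected from a frozen model, (n_tokens, hidden_dim);
    # is_proper_name: 0/1 labels, e.g. from an off-the-shelf NER tagger.
    rng = np.random.default_rng(0)
    hidden_states = rng.normal(size=(1000, 768))    # placeholder features for the sketch
    is_proper_name = rng.integers(0, 2, size=1000)  # placeholder labels for the sketch

    probe = LogisticRegression(max_iter=1000)
    probe.fit(hidden_states[:800], is_proper_name[:800])
    print("held-out accuracy:", probe.score(hidden_states[800:], is_proper_name[800:]))

The difference in the linked paper is that its probe is trained with no labels at all, using only the consistency constraint on contrast pairs.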
theptip over 2 years ago
This is an important area for AI safety research; see the ELK paper, for example.

https://www.alignmentforum.org/posts/qHCDysDnvhteW7kRd/arc-s-first-technical-report-eliciting-latent-knowledge

That paper is a bit dense, but it considers the ways a powerful AI model could be intractable or deceptive to attempts at discovering its latent knowledge. If we can confidently understand an AI’s internal knowledge/intention states, then alignment is probably tractable.
totetsu over 2 years ago
I wonder if this could one day be how we settle disagreements with no solid answer, like whether William Shakespeare really wrote all those plays.
jameshart over 2 years ago
Hang on - I thought the consensus among ML experts was that language models don’t ‘know’ anything?
dwighttk over 2 years ago
Is this proposing a perpetual motion machine? (With energy swapped out for information.)
ultra_nick over 2 years ago
Is there a PG word for bullshitting that has the same meaning?