TE
TechEcho
Home24h TopNewestBestAskShowJobs
GitHubTwitter
Home

TechEcho

A tech news platform built with Next.js, providing global tech news and discussions.

GitHubTwitter

Home

HomeNewestBestAskShowJobs

Resources

HackerNews APIOriginal HackerNewsNext.js

© 2025 TechEcho. All rights reserved.

On the Gullibility of Language Models

1 pointsby diegoalmost 2 years ago

1 comment

BulgarianIdiotalmost 2 years ago
Imagine being closed in a black box, and all you have as input is someone passing you written instructions through a small window. You can&#x27;t see who is giving you the instructions. They&#x27;re always on the same paper, same font, size, everything.<p>Different people give you instructions. But you never see them, only the paper with text on it. Some people are &quot;admin&quot; some are &quot;user&quot;. You have to guess from context. But instead of identifying themselves, they&#x27;re incredibly vague about it all, at best you may get something like &quot;User:&quot; before a line, to tell it apart from the ambient instructions you were given beforehand.<p>And somehow it&#x27;s your fault if you misidentify who is supposedly writing some part of the text.<p>This is not &quot;gullible&quot;, it&#x27;s poor signal for the model. It has no way to know who is who, it&#x27;s all the same token stream to it. No voices, no faces, no caller id, nothing for it to hang onto for recognition. What is it supposed to do?