
Can LLMs produce original work, or are they just limited to answering Q&A?

1 point by lex2ross about 2 years ago
As someone who is not very knowledgeable in the AI domain, I've been exploring the capabilities of LLMs (large language models). From what I understand, LLMs are trained on existing content that is assumed to be original. However, this means that LLMs are limited to what they have learned from past resources and cannot think beyond that.

Considering this limitation, would it be safe to assume that LLMs cannot produce new, original work?

I'm also concerned that the internet will soon be flooded with LLM-generated content, which will then be used to train LLMs further. In my opinion, such a feedback loop will hardly help and may even make LLMs more rigid in their answers.

Although there are parallel efforts to detect machine-generated content and prevent it from ranking highly, that technology is still in its nascent phase.

What do you think about the limitations and potential risks of LLMs?
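(Editor's note: a minimal toy sketch of the feedback loop described above, substituting a simple Gaussian "model" for an LLM, with all parameters made up. Each generation is fitted only to the previous generation's samples, so sampling noise compounds and the distribution tends to drift and lose diversity over generations.)

```python
# Toy illustration of training on model-generated data: fit a "model"
# (here, just a Gaussian's mean and std), sample from it, refit, repeat.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=0.0, scale=1.0, size=1000)  # generation 0: "human" data

for generation in range(10):
    # Fit the model to the current data: sample mean and standard deviation.
    mu, sigma = data.mean(), data.std()
    print(f"gen {generation}: mean={mu:+.3f} std={sigma:.3f}")
    # The next generation trains only on the previous model's output.
    data = rng.normal(loc=mu, scale=sigma, size=1000)
```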

1 comment

PaulHoule about 2 years ago
I hesitate to get involved in any discussion where "original" is used, because people can redefine it so that the machine always loses. It's like getting into a discussion with the people who write the editorial page of The New York Times, or Fox News for that matter.

But ChatGPT is trained by reinforcement to play a character that is helpful and answers questions. It could just as well be trained to do something else. Question answering is not fundamental to how the technology works, the way it is for Wolfram Alpha.

Now, people like the character ChatGPT plays; they find it helpful, they engage with it, etc. It seems motivated to move toward some goal, and in particular it doesn't seem to "get stuck", because people give it ever-changing goals. If it were acting on longer-range goals, it might well follow a weak signal in circles forever, satisfy itself on 'special interests' other people don't get, and otherwise fail to pay for the electricity it uses.

A real weakness it has right now is that it only remembers 4096 tokens of short-term memory; it is mostly running off a long-term memory that was trained into it, and it won't add anything to that.

A person might write a book and then refer to what they wrote to build on it and write another book. ChatGPT can't do that, but that's not a fundamental limit of this kind of system; for instance, you could feed the things it generates back into it and retrain it. It might not be so simple, because it is as likely to go bad like cheese as anything else if it does that (see 'getting stuck' above).
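(Editor's note: a minimal sketch of the short-term-memory limit the comment describes. Anything beyond the context window, 4096 tokens per the comment, must be dropped before the next request, so older conversation simply falls out of scope. The tokenizer is the real tiktoken library; the `truncate_history` helper and the message list are hypothetical.)

```python
# Keep only the most recent messages whose combined token count fits
# the model's context window; everything older is forgotten.
import tiktoken

CONTEXT_LIMIT = 4096  # short-term memory budget cited in the comment
enc = tiktoken.get_encoding("cl100k_base")

def truncate_history(messages: list[str], limit: int = CONTEXT_LIMIT) -> list[str]:
    """Return the newest messages that fit within the token limit."""
    kept, used = [], 0
    for msg in reversed(messages):   # walk backwards from the newest message
        n = len(enc.encode(msg))
        if used + n > limit:
            break                    # everything older than this is dropped
        kept.append(msg)
        used += n
    return list(reversed(kept))

# Example: a long conversation gets silently trimmed to its tail.
history = [f"message number {i}" for i in range(5000)]
print(len(truncate_history(history)))  # fewer than 5000 messages survive
```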