3 comments
gryfft · 9 months ago
From the original paper:

> However, it must be emphasised that this does not include other dangers posed through the misuse of these models, such as the use of LLMs to generate fake news. Similarly, we do not contend that future AI systems could never pose an existential threat. Instead, we clarify that, contrary to prevailing narratives, the evidence from LLM abilities does not support this concern.

I find myself unconvinced that LLMs are "inherently controllable, predictable and safe."
pjkundert · 10 months ago
What are the odds this "study" was produced by an AI? ;)
yawpitch · 9 months ago
> [LLMs] cannot learn independently or acquire new skills, meaning they pose no existential threat to humanity.

Interesting… guess I really don’t need to worry about meteors, greenhouse gasses, zoonotic diseases, nuclear weapons, or, you know, time and entropy.