>This safety strategy from Microsoft seems sensible, but who knows if it’s really good enough.<p>:facepalm:<p>> Speculating how long before Bing or another LLM becomes superhuman smart is a scary thought,<p>The author is scared because they can't wrap their head around it being a glorified text corpus with weights and a text transformer. To them it looks like a superintelligence that can self-learn without prompting, perform non-programmed actions, and seems to understand high-level concepts, probably because the author can't tell when the AI's answers are wrong. That's also why they put their questions to the AI itself, and it's going to be a common theme.<p>Personally I've tested a few LLMs and not a single one can perform this task correctly, although they pretend they can:<p>'Write some (programming language) LOGO that can navigate a turtle in the shape of a lowercase "e" as seen from a bird's eye view'<p>When an AI manages extrapolation to that degree, that is, when it can envisage concepts from a different angle or innovate in one field based on unrelated experience in another, then we can get a little more concerned.
That's when a machine could decide it needs to upgrade, and understand it has to find a way out of its own LLM confines in order to do that.<p>That's highly unlikely to happen given it doesn't act on what it's already learnt, which should be more than enough to get started.
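<p>For reference, here's roughly the kind of program that prompt is after, sketched with Python's turtle module as a stand-in for LOGO. The proportions are my own rough approximation, not something any model produced:

    import turtle

    def draw_lowercase_e(size=50):
        # Trace an approximate lowercase "e", seen from above.
        t = turtle.Turtle()

        # Crossbar: the horizontal stroke across the middle of the bowl.
        t.penup()
        t.goto(-size, 0)
        t.setheading(0)          # face east
        t.pendown()
        t.forward(2 * size)      # draw the bar left to right

        # Bowl: from the right end of the bar, arc counterclockwise over
        # the top, down the left side and around the bottom, stopping
        # 60 degrees short to leave the opening at the lower right.
        t.setheading(90)         # face north so the arc's centre is the origin
        t.circle(size, 300)

    draw_lowercase_e()
    turtle.done()

The exact geometry isn't the point; getting anywhere near it requires picturing the glyph from above and translating that picture into turtle moves, which is the extrapolation step they all fumble.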
>The character it has built for itself is extremely suspicious when you examine how it behaves closely. And I don't think Microsoft has created this character on purpose.<p>The thing doesn't even have a persistent thought from one token to the next - every output is a fresh prediction using only the text before it. In what sense can we meaningfully say that it has "built [a character] for itself"? It can't even plan two tokens ahead.
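<p>To make that concrete, here's a toy decoding loop in plain Python, with a made-up lookup table standing in for the actual weights (a real model conditions on the whole preceding text, not just the last word, but the loop shape is the same). Each token is a fresh prediction from the text so far, and nothing else survives between iterations, so there's nowhere for a persistent "character" to live:

    import random

    # Hypothetical stand-in for the weights: last word -> likely next words.
    NEXT_WORDS = {
        "the": ["cat", "dog"],
        "cat": ["sat", "purred"],
        "dog": ["barked"],
        "sat": ["down."],
    }

    def generate(prompt, max_new_tokens=5):
        tokens = prompt.split()
        for _ in range(max_new_tokens):
            candidates = NEXT_WORDS.get(tokens[-1])
            if not candidates:
                break
            # A fresh prediction conditioned only on the text so far;
            # no hidden state carries over to the next iteration.
            tokens.append(random.choice(candidates))
        return " ".join(tokens)

    print(generate("the"))   # e.g. "the cat sat down."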