
Bing chat is the AI fire alarm

17 points by iEchoic over 2 years ago

4 comments

SillyUsername about 2 years ago
> This safety strategy from Microsoft seems sensible, but who knows if it's really good enough.

:facepalm:

> Speculating how long before Bing or another LLM becomes superhuman smart is a scary thought,

The author's scared because they can't wrap their head around it being a glorified text corpus with weights and a text transformer. To them it looks like a superintelligence that can actually self-learn without prompting, perform non-programmed actions, and seems to understand high-level concepts, probably because the author themselves doesn't understand or cannot verify whether the AI's answers are incorrect. This is why they asked the AI the questions, so it's going to be a common theme.

Personally I've tested a few LLMs and not a single one can perform this task correctly, although they pretend they can:

'Write some (programming language) LOGO that can navigate a turtle in the shape of a lowercase "e" as seen from a bird's eye view'

When an AI manages extrapolation to that degree, that is, can envisage concepts from a different angle or innovate in one field based on unrelated experience in another, then we can get a little more concerned. That's when a machine can decide it needs to upgrade and understands it has to find a way out of its own LLM confines in order to do that.

That's highly unlikely to happen given it doesn't act on what it's already learnt, which should be more than enough to get started.
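(For reference, a passing answer to that prompt might look something like the rough sketch below. It uses Python's turtle module as a stand-in for Logo; the coordinates and sizes are arbitrary assumptions, not anything from the comment.)

    import turtle

    # Rough illustration of the commenter's test task: trace a lowercase "e".
    t = turtle.Turtle()

    t.penup()
    t.goto(-40, 0)       # left end of the crossbar
    t.setheading(0)      # face east
    t.pendown()

    t.forward(80)        # crossbar of the "e", ending at (40, 0)

    t.setheading(90)     # face north so the arc's centre sits at (0, 0)
    t.circle(40, 300)    # sweep up over the top, down the left side, and
                         # around the bottom, leaving the lower right open

    turtle.done()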
Zetobal over 2 years ago
At this point it's just hilarious watching all these people gaslight themselves.
Imnimo over 2 years ago
> The character it has built for itself is extremely suspicious when you examine how it behaves closely. And I don't think Microsoft has created this character on purpose.

The thing doesn't even have a persistent thought from one token to the next - every output is a fresh prediction using only the text before it. In what sense can we meaningfully say that it has "built [a character] for itself"? It can't even plan two tokens ahead.
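(The loop being described is roughly the following; this is a toy Python sketch with a made-up stand-in scoring function, not any particular model's implementation. Each step re-predicts from scratch given only the tokens so far, and the only thing carried forward is the text itself.)

    import random

    def score_next_token(prefix, vocab):
        # Stand-in for the network: deterministic pseudo-random scores per prefix.
        rng = random.Random(hash(tuple(prefix)))
        return {tok: rng.random() for tok in vocab}

    def generate(prompt, vocab, max_new_tokens=8):
        tokens = list(prompt)
        for _ in range(max_new_tokens):
            scores = score_next_token(tokens, vocab)    # fresh prediction each step
            tokens.append(max(scores, key=scores.get))  # greedy pick; no lookahead
        return tokens

    print(" ".join(generate(["the", "cat"], ["sat", "on", "the", "mat"])))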
jacooper over 2 years ago
Honestly, Bing's answer to that question is impressive. If it's just predicting words, then it's predicting them intelligently and not randomly.