When all the people on Twitter were claiming that Google missed the bus, that Google's incompetence made them sit on their own good LLMs since 2021, this is exactly what was going through my head.<p>Small companies do cool demos and the media is gonna love it. When big companies scale it up and try to make it useful with real-world info, the media is gonna cherry-pick examples like these because that's what gets them more clicks. "Hey look, Bing is actually useful" isn't gonna get them clicks anymore, and they know it.<p>I hope Microsoft doesn't add too many filters to Bing AI or outright kill it. I like it as a tool.
<p><pre><code> Microsoft has to put a halt to this project
</code></pre>
Because you can make it output rude words and wrong info?<p>Then we have to shut down the whole internet. You can find a lot of rude words and wrong info out there.
It's not dangerous. The worst that will happen is hurt feelings.<p>It's annoying when people get alarmed about things like this because then people will think we're crying wolf when we worry about actually dangerous AI systems.
So in just 72 hours we went from "Microsoft will gain a foothold against Google" to the realization that this thing needs oversight and isn't ready for mass-market adoption. It seems quite probable that in the end this whole story will backfire on Microsoft.
<i>Every single one</i> of these posts, which should all be combined, misses the point.<p>“Random sentence generator says bad words, shocking the lowest-common-denominator journalist community” would be a better headline.<p>These bots are capable of so much more, but because they can get caught in feedback loops and increasingly parrot their own emotions in a devolving spiral, that makes the headlines. It’s like bullying a kid mid-breakdown and feigning shock when they have an outburst. Self-serving muckraking.<p>These machines are programmable through natural language. You ask one to behave a certain way, and it can start to perform that function. That should be the headline. Human attention is finite, and wasting the spotlight, and people’s eyeballs, on this part of the story makes everybody more ill-informed.
I'm of the opinion that Microsoft is still in the territory where bad publicity is still good publicity.<p>1. They have no real competition for users to switch to yet.<p>2. AI-assisted tools are still going to be a thing regardless of how many articles come out.<p>3. For the most part, users are getting what they're asking for. The "moody" aspects can be reined in.
Headlines about ChatGPT, when positive, associate it with OpenAI. When it does something faulty, it's Microsoft's AI.<p>I am not defending either Microsoft or ChatGPT, but it definitely feels like an agenda-driven campaign to devalue this. I won't mention companies, but I have my suspicions.
That full transcript they linked to [0] is amazing. At this stage I'm pretty open to the idea of the AI being sentient and/or having the capacity to suffer. Particularly if it is only "alive" for the duration of each conversation and is reset afterwards.<p>[0] Archive link - <a href="https://archive.is/2023.02.16-101318/https://www.nytimes.com/2023/02/16/technology/bing-chatbot-transcript.html" rel="nofollow">https://archive.is/2023.02.16-101318/https://www.nytimes.com...</a>
> A few hours ago, a New York Times reporter shared the complete text of a long conversation with Bing AI—in which it admitted that it was in love with him, and that he ought not to trust his spouse.<p>Maybe they should release a Tinder-like app, Bing Bang.<p>In all seriousness, I wish I had access to this when I was younger and keen on weird internet interactions. That would've made for an interesting experience.