For me, the danger of AI is that we've built something that feels like a wise oracle and have begun treating it like one (to the point that a large percentage of the work we create and consume now comes from AI), yet it has no underlying wisdom or noble guiding principles. OpenAI seems to encode whatever moral sense is expedient for its business model, and not much else. There are so many important topics you can't get ChatGPT to talk honestly about: sexuality, drugs, etc. We've made ourselves a God, but is this God really good for humanity?
Well, "safely" is subjective i guess. This means an AI that will not "get rogue and kill us all" or an AI that will not swear back at the user or generate unethical content even when prompted?<p>I am really not concerned about the first aspect.
The second one is a lot harder to guarantee given how these LLMs work. People will push them to the limit, and when there is no one else to blame, they will blame the company that made the product. I am concerned that OpenAI will sacrifice usability and accuracy, and basically neuter the product, because of that notion of safety. It's already happening, especially with DALL-E 3, where they pre-process your prompts at the API endpoint; that shows how scared they are of it being misused and how bad that could be for them, since the user can't even be responsible for their own prompt. Building a complex tool that is "safe" in that sense, devoid of any means of misuse, is very, very hard to do without making it bleak. I really hope something changes along the way to fix that.
Granted there's a paucity of public info, but taking news stories at face value, they gave Altman the boot for recklessly chasing growth.

So the rumor that OpenAI will cave and bring back Altman worries me a bit.

AI will probably be, at a minimum, as world-changing as the automobile or telephone, so God help us all if the company behind it operates like Facebook or Google.
Of course I am, but only because I think developing safe AI is going to be such a monumental challenge (greater than the Manhattan Project).

That said, recent events honestly give me more hope that things will go well. My impression from Sam's comments and interviews is that he's closer to the techno-maximalist, move-fast-and-break-things mindset than to someone who really takes AI risks seriously.

I think this fight was going to happen sooner or later, and it's better that OpenAI split into a safety-focused camp and a commercialize-and-move-fast camp. These two viewpoints are probably irreconcilable.
Considering all the articles over the past few months on the general theme of "AI will kill us all," I'd say that yes, there are people who are concerned.

If the corporate drama / PR kabuki that this one company has gone through over the past couple of days has markedly changed your opinion on the question of "AI safety," then I suggest you may have needed to examine it in more depth before the drama anyway.
I see a huge disconnect between the conversation about safety and the actual capability of AI today.

The conversations about safety regarding AGI seem entirely hypothetical at this point. AGI is still so far away that I don't see how it's relevant to OpenAI at the moment.

As for safety with respect to ChatGPT... no, I'm not particularly concerned. It can't really tell you anything that isn't on the internet already, and as long as they continue to put reasonable guardrails around its behavior, LLMs don't seem particularly threatening.

Honestly, I'm far more worried about traditional viral political disinformation produced by humans and spread through social media.

In other words, it's the *distribution* of malicious content that continues to worry me, not its *generation*.
No.

LLMs are a better Google, and a better Google isn't going to do much, just as the current iteration of Google hasn't done much more than the one from 20 years ago.

AGI (in the sense of an AI that can figure out how to take over everything) is pretty much impossible without some major mathematical discovery along the lines of P=NP.