It’s pretty interesting how widely views diverge on the current state and promise of this technology. The dominant view here on HN seems to be that generative models are overhyped toys that burn massive amounts of energy and investment capital with little sign of promise. Meanwhile you have these AI researchers warning about “the loss of control of autonomous AI systems potentially resulting in human extinction.” What are we to make of this?
Related:

- "OpenAI Insiders Warn of a ‘Reckless’ Race for Dominance" -- https://www.nytimes.com/2024/06/04/technology/openai-culture-whistleblowers.html -- https://news.ycombinator.com/item?id=40574355

- "OpenAI Employees Call for Protections to Speak Out on AI Risks" -- https://news.ycombinator.com/item?id=40576018
Periodic reminder: at the moment, governments worldwide are literally developing killer robots which can’t disobey their orders. Your fears are comically misplaced.
When China & India form an alliance and send a peacekeeping army to protect the former citizens of the fallen American Empire Megacorp, then the American public might have reason to regret AI. But until then it’s just another distraction.