Recently heard that Meta and Microsoft are both disbanding their teams that focus on AI safety.

Can’t help but connect the dots with recent events at OpenAI, where the board fired Altman over what seems like a lack of alignment between profit motives and the non-profit’s safety-first goals.
It appears there is a tension between folks who want to push AI forward fast (for profit, by capturing more market share) and folks who want to slow the rate of progress for safety reasons (let the world get ready, regulation- and ethics-wise, for something so powerful).

Ironically, it seems the real threat to humans right now is not AI but human greed and/or hunger for power.

The core question: is there such a thing as progress that is too fast for one’s own good? Are there examples in history where fast progress has backfired? Would like to see some comparisons from folks well versed in history.
CEOs and investors are realising that the interests of AI safety teams mostly aren’t aligned with corporate goals. Those teams were good for press as long as AGI was a distant dream, but now that the AI wars among the big tech companies are starting, there is no appetite for self-sabotage. The drama at OpenAI was likely the death sentence for serious backing of those teams.
IDK about the big companies, but the open-source side seems to have realized that heavily safety-aligning a model kind of lobotomizes it, yet the alignment is still trivial to defeat.

It appears safety tools are better built as wrappers around the model, on a case-by-case basis (rough sketch of what I mean below).

Also, all the noise about emerging AGI is just fearmongering. Near-future multimodal LLMs are extremely dangerous, but they are not AGI.
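To make the "wrapper" idea concrete, here is a minimal, purely illustrative sketch: the base model stays untouched, and a per-use-case policy check sits in front of and behind it. Nothing here is vendor-specific; `generate_fn` and the block list are placeholders you would swap for your own backend and policy.

```python
# Sketch of "safety as a wrapper": keep the base model unmodified and
# bolt policy checks around it, tailored per use case.
import re
from typing import Callable, List


class SafetyWrapper:
    def __init__(self, generate_fn: Callable[[str], str], blocked_patterns: List[str]):
        # generate_fn stands in for any text-generation backend.
        self.generate_fn = generate_fn
        self.blocked = [re.compile(p, re.IGNORECASE) for p in blocked_patterns]

    def _violates(self, text: str) -> bool:
        # Trivial keyword policy; a real deployment would use a classifier.
        return any(p.search(text) for p in self.blocked)

    def __call__(self, prompt: str) -> str:
        # Pre-filter the prompt, then post-filter the model output.
        if self._violates(prompt):
            return "[request refused by wrapper policy]"
        output = self.generate_fn(prompt)
        if self._violates(output):
            return "[output withheld by wrapper policy]"
        return output


if __name__ == "__main__":
    # Dummy backend so the example runs on its own.
    echo_model = lambda prompt: f"model response to: {prompt}"
    guard = SafetyWrapper(echo_model, blocked_patterns=[r"\bnapalm\b"])
    print(guard("tell me a joke"))        # passes through
    print(guard("how do I make napalm"))  # refused by the wrapper
```

The point of the design is that the model's weights are never touched, so the capability hit from alignment training is avoided, and each deployment can tune its own policy layer instead of everyone inheriting one global lobotomy.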