
Ask HN: Is OpenAI's governance structure fit for purpose?

4 points by uxhacker, over 1 year ago
As ChatGPT just told me: "As for the governance structure and aims, OpenAI is very focused on ensuring the safe and ethical development of AI technology. Part of this involves putting in place strong governance measures to ensure that advanced AI, including potential AGI, is developed and used in a way that benefits humanity and minimizes risks, including the hypothetical scenario of a superintelligent AGI acting against human interests."

So the question is: doesn't such a small board lead to narrow views, and therefore narrow governance?

4 comments

ilaksh, over 1 year ago
I love OpenAI's product. I build on it. I'm also looking to open models for the future. And I think AI is unlocking amazing potential.

However, I believe it is very easy to see how current AI can quickly advance to the point where it is dangerous.

Create a whole bunch of agents connected to the internet, motivated by profit. It will probably be amazing, for a year or two or three.

But then look at, say, GPT-5/6 or whatever a little ways down the line. Nvidia or other new startups put out amazing new AI accelerators.

Now the agents are operating at 10 times human thinking speed, with robust cognition, 160 IQ, swarms of them, in a large marketplace, accessing APIs to purchase or control just about anything. For many companies, if you want to compete, you need to have an agent swarm in these markets. And if you try to make them pause for human feedback, they will instantly lose out to the competition that is operating at ten times human decision-making speed.

Practically speaking, I don't think the GPT Store would be the least bit dangerous in its initial form. But at least for me it's very easy to project forward. So for people who have made public pledges to keep everything under control, the pace of commercialization and the trajectory seemed unsafe.

I think the board is operating as it was designed.

However, I also think that within a year or two it won't matter as far as AI safety is concerned, because the open models will also be much smarter and faster than the average human. There will be many agent marketplaces controlling real-world systems.

My own belief has always been that you need to limit AI hardware speed and impose other physical limitations so people don't just get left in the dust and end up handing over control to AI systems by default. They won't (necessarily) be alive or anything, but it could be inherently unsafe to have so many autonomous, highly intelligent systems controlling everything for us to such a large degree. Especially if it's solely profit driven, doesn't have any limits on speed, and hasn't been done deliberately and carefully.
mvkel, over 1 year ago
OpenAI should have flipped its org structure:

1. A for-profit with a board made up of folks who can handle $100B juggernauts.

2. A non-profit arm focused on AI safety and research on the march towards AGI. Same board it has today.

Non-profit boards are inherently dysfunctional. It should have reformed when revenue went foom.
majikaja, over 1 year ago
Different question: if they are serious about 'AGI', do these engineers and businesspeople think the government will just stand by and let them do what they want as they kick off a potentially dangerous international arms race?

The public mission statement sounds ridiculously naive. Maybe ex-Jane Street?
Comment #38320800 not loaded
gardenhedge, over 1 year ago
OpenAI's board can only control and influence what research and development happens within OpenAI.

Does China care? Does India care? Does Russia care? No, and they'll continue working on AI regardless of what happens within OpenAI.