I love OpenAI's product. I build on it. I'm also looking to open models for the future. And I think AI is unlocking amazing potential.<p>However, I believe it is very easy to see how current AI can quickly advance to the point where it is dangerous.<p>Create a whole bunch of agents connected to the internet, motivated by profit. It will probably be amazing. For a year or two or three.<p>But then look at, say, GPT-5/6 or whatever a little ways down the line. Nvidia or new startups put out amazing new AI accelerators.<p>Now the agents are operating at 10 times human thinking speed, with robust cognition and 160 IQ, in swarms, in a large marketplace, accessing APIs to purchase or control just about anything. For many companies, if you want to compete, you need to have an agent swarm in these markets. And if you try to make them pause for human feedback, they will instantly lose out to the competition operating at ten times human decision-making speed.<p>Practically speaking, I don't think the GPT Store would be the least bit dangerous in its initial form. But at least for me it's very easy to project forward. So for people who have made public pledges to keep everything under control, the pace of commercialization and the trajectory seemed unsafe.<p>I think the board is operating as it was designed.<p>However, I also think that within a year or two, it won't matter as far as AI safety is concerned. The open models will also be much smarter and faster than the average human, and there will be many agent marketplaces controlling real-world systems.<p>My own belief has always been that you need to limit AI hardware speed and impose other physical limitations so people don't just get left in the dust and end up handing over control to AI systems by default. They won't (necessarily) be alive or anything, but it could be inherently unsafe to have so many autonomous, highly intelligent systems controlling everything for us to such a large degree.
Especially if it's solely profit-driven, has no limits on speed, and hasn't been done deliberately and carefully.