Archive link: https://archive.is/zwJbj

This piece is responding to an Economist op-ed by Bret Taylor and Larry Summers, writing on behalf of the OpenAI board, and it comes to many of the same conclusions I did.

- Economist: https://www.economist.com/by-invitation/2024/05/30/openai-board-members-respond-to-a-warning-by-former-members

- Archive link for the Economist piece: https://archive.is/rwRju

IMO the key paragraphs...

> The review’s findings rejected the idea that any kind of AI safety concern necessitated Mr Altman’s replacement. In fact, WilmerHale found that “the prior board’s decision did not arise out of concerns regarding product safety or security, the pace of development, OpenAI’s finances, or its statements to investors, customers, or business partners.”

Comment: I'm surprised that they don't refute any of the specific concerns about the CEO. If the investigation was so redemptive, they should release the findings. (It must be that it wasn't, so they won't.)

> Ms Toner has continued to make claims in the press. Although perhaps difficult to remember now, OpenAI released ChatGPT in November 2022 as a research project to learn more about how useful its models are in conversational settings. It was built on GPT-3.5, an existing AI model which had already been available for more than eight months at the time.

Comment: As someone who had access to OpenAI models before the release of ChatGPT, I find it disingenuous to say that GPT-3.5 was "available". Yes, available to enrolled researchers willing to suffer through clunky tooling to interact with a model not fine-tuned for conversation.