Reading HN's reactions to an OpenAI statement about open weights is about as satisfying or interesting as reading an r/conservatives thread about affirmative action. The opposition is so built-in by now that people aren't reacting to the article itself so much as to the general idea of "OpenAI says bad things I don't like". I'd wager half of the people posting here didn't even skim the article, let alone read it.

That's a shame, because OpenAI's statement makes some *very* interesting observations, e.g.:

> *For instance, strengthening resilience against AI-accelerated cyberattack risks might involve providing critical infrastructure providers early access to those same AI models, so they can be used to improve cyber-defense (as in the early projects we have funded as part of the OpenAI Cybersecurity Grant Program). Strengthening resilience against AI-accelerated biological threat creation risks may involve solutions totally unrelated to AI, such as improving nucleic acid synthesis screening mechanisms (as called for in Executive Order 14110), or improving public health systems’ ability to screen for and identify new pathogen outbreaks.*

I think considerations like that would be worth examining on their own merits, rather than serving as yet another occasion to bash OpenAI.

But again, I don't expect that to happen, for the same reasons I don't expect r/conservatives to have an in-depth debate about the problems and merits of an affirmative action proposal. Examining the article's claims would require being open to the idea that AI progress, even open-source progress, could have destructive consequences. Ever since the AI safety debate flared up, HN commenters have grown increasingly, dare I say, ideologically opposed to that idea, reacting with anger and disbelief whenever it's even suggested.

Anyway, I thought the article was interesting. It's a lot of corporate self-back-patting, yes, but there are some worthwhile ideas in there.