The First Amendment hasn't been fully destroyed yet, and we're talking about large 'language' models here, so most mandates might not even be enforceable (except for requirements on selling to the government, which can be bypassed by simply not selling to the government).<p>Edited to add:<p><a href="https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/" rel="nofollow noreferrer">https://www.whitehouse.gov/briefing-room/statements-releases...</a><p>Except for the first bullet point (and arguably the second), everything else is a directive to another federal agency - and those agencies have NO POWER over general-purpose AI developers (as long as they're not government contractors).<p>The first point:
"Require that developers of the most powerful AI systems share their safety test results and other critical information with the U.S. government. In accordance with the Defense Production Act, the Order will require that companies developing any foundation model that poses a serious risk to national security, national economic security, or national public health and safety must notify the federal government when training the model, and must share the results of all red-team safety tests. These measures will ensure AI systems are safe, secure, and trustworthy before companies make them public."<p>The second point:
"Develop standards, tools, and tests to help ensure that AI systems are safe, secure, and trustworthy. The National Institute of Standards and Technology will set the rigorous standards for extensive red-team testing to ensure safety before public release. The Department of Homeland Security will apply those standards to critical infrastructure sectors and establish the AI Safety and Security Board. The Departments of Energy and Homeland Security will also address AI systems’ threats to critical infrastructure, as well as chemical, biological, radiological, nuclear, and cybersecurity risks. Together, these are the most significant actions ever taken by any government to advance the field of AI safety."<p>Since the actual text of the executive order has not been released yet, I have no idea what even is meant by "safety tests" or "extensive red-team testing". But using them as a condition to prevent release of your AI model to the public would be blatantly unconstitutional as prior restraint is prohibited under the First Amendment. Prior restraint was confirmed by the Supreme Court to apply even when "national security" is involved in New York Times Co. v. United States (1971) - the Pentagon Papers case. The Pentagon Papers were actually relevant to "national security", unlike LLMs or diffusion models.
More on prior restraint here: <a href="https://firstamendment.mtsu.edu/article/prior-restraint/" rel="nofollow noreferrer">https://firstamendment.mtsu.edu/article/prior-restraint/</a><p>Basically, this EO is toothless - have a spine, and everything will be all right :)