TechEcho
A tech news platform built with Next.js, providing global tech news and discussions.

GitHubTwitter

Home

HomeNewestBestAskShowJobs

Resources

HackerNews APIOriginal HackerNewsNext.js

© 2025 TechEcho. All rights reserved.

Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence

241 points by Mandelmus over 1 year ago

72 comments

wolframhempel over 1 year ago
I feel there is a strong interest by large incumbents in the AI space to push for this sort of regulation. Models are increasingly cheap to run and open source, and there isn't much of a defensible moat in the model itself.

Instead, existing AI companies are using the government to raise the threshold for newcomers entering the field. A regulation requiring every AI company to run a testing regime staffed by a team of 20 is easy for incumbents to meet, but impossible for newcomers.

Now, this is not to diminish the genuine risks in AI - but I'd argue those risks will be exploited, if not by US companies, then by others. And the best weapon against AI might in fact be AI. So pulling the ladder up behind the existing companies might turn out to be a major mistake.

stanfordkid over 1 year ago
Regulatory capture in action. The real immediate risks of AI are in privacy, bias, data leakage, fraud, control of infrastructure/medical equipment, etc. - not manufacturing biological weapons. This seems like a classic example of government doing something that looks good to the public, satisfies incumbents, and does practically nothing.

sschueller over 1 year ago
There is no way to prevent AI from being researched, or to make it safe through government oversight, because the rest of the world has places that don't care.

What does work is passing laws that forbid certain automation, such as in insurance claims or life-and-death decisions. These laws are needed even without AI, as automation is already doing such things to a concerning degree - for example, banning people due to a mistake, with no recourse.

Is the White House going to ban the use of AI in the decision-making when dropping a bomb?

elicksaur over 1 year ago
From the E.O.[1]

> (b) The term "artificial intelligence" or "AI" has the meaning set forth in 15 U.S.C. 9401(3): a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. Artificial intelligence systems use machine- and human-based inputs to perceive real and virtual environments; abstract such perceptions into models through analysis in an automated manner; and use model inference to formulate options for information or action.

Oops, I made a regulated artificial intelligence!

    import random

    print("Prompt:")
    x = input()
    model = ["pizza", "ice cream"]
    if x == "What should I have for dinner?":
        pick = random.randint(0, 1)
        print("You should have " + model[pick] + " for dinner.")

[1] https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/

parasense over 1 year ago
I used to work on AI.

Now I work on Artificial Stupidity...

Jokes aside, this is ludicrous. The president cannot enforce this regulation over open source projects, because code is free speech - going back to the 1990s AT&T v. BSD case law and the many other cases that establish source code as an artistic form of expression, and thus protected speech.

The president has no authority to regulate speech, so they can pretty much fuck off.

yoran over 1 year ago
"Every industry that has enough political power to utilise the state will seek to control entry." - George Stigler, Nobel laureate in economics who worked extensively on regulatory capture.

This explains why Big Tech supports regulation. It distorts the free market by raising the barriers to entry for new, innovative AI companies.

giantg2 over 1 year ago
"requirements that the most advanced A.I. products be tested to assure they cannot be used to produce weapons"

In the information age, AI is the weapon. This can even apply to things like weaponizing economics. In my opinion, the information/propaganda/intelligence-gathering and economic impacts are much greater than those of any traditional weapon system.

marcinzm over 1 year ago
Reading this, all I'm seeing is "we'll research these things", "we'll look into how to keep AIs from doing these things", and "tell the US government how you tested your foundational models." Except for the last one, none of these are really restrictions on anything or requirements for working with AI. There are a lot of fearful comments here - am I missing something?

otoburb over 1 year ago
>> The term "artificial intelligence" or "AI" has the meaning set forth in 15 U.S.C. 9401(3): a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. Artificial intelligence systems use machine- and human-based inputs to perceive real and virtual environments; abstract such perceptions into models through analysis in an automated manner; and use model inference to formulate options for information or action.

I find the definition of AI eerily broad - broad enough to encompass most programs operating on most data inputs. Would this mean that calls to FFmpeg or ImageMagick rolled into a script with some rand() calls would count as an AI system and fall under federal purview and enforcement (whatever that means in this context)?

mr_toad over 1 year ago
Be a shame if your AI was deemed a risk to national security.

Not to worry - for a reasonable fee, our surprisingly large team of auditors with even larger overheads can ensure you meet lengthy and ambiguous best-practice checklists (which we totally did not just make up now) by producing enough compliance documentation to keep even the most anal of bureaucrats satisfied.

andrewmutz over 1 year ago
Fortunately, these regulations don't seem too extreme. I hope it stays at this point and doesn't escalate to regulations that severely impact the development of AI technology.

Many people spend time talking about the lives that may be lost if we don't act to slow the progress of AI tech. There are just as many reasons to fear the lives lost if we do slow down the progress of AI tech (drug cures, scientific breakthroughs, etc.).

imranhou over 1 year ago
This is clever: begin with a point that most people can agree on. Once that foundation is set, you can continue to build upon it, claiming that you're only making minor adjustments.

The real challenge for the government isn't about what can be managed legally. Rather, like many significant societal issues, it's about what malicious organizations or governments might do beyond regulation, and how to stop them. In this situation, that's nearly impossible.

mark_l_watson over 1 year ago
Andrew Ng argues against government regulation that will make it difficult for smaller companies and startups to compete against the tech giants.

I am all in favor of stronger privacy and data-reuse regulation, but not AI regulation.

unboxingelf over 1 year ago
Tools for me, but not thee.

perihelions over 1 year ago
The White House just invoked the Defense Production Act (https://en.wikipedia.org/wiki/Defense_Production_Act_of_1950) to assert sweeping authority over private-company software developers. What the fuck are they smoking?

- "In accordance with the Defense Production Act, the Order will require that companies developing any foundation model that poses a serious risk to national security, national economic security, or national public health and safety must notify the federal government when training the model, and must share the results of all red-team safety tests."

I assume this is a major constitutional overreach that will be overturned by the courts at the first challenge?

Or else, all the AI companies who haven't captured their regulators will simply move their R&D to some other country - like how the OpenSSH (?) core development moved to Canada during the 1990s crypto wars. (edit: Maybe that's the real goal - scare away OpenAI's competition and dredge them a deeper regulatory moat.)

ru552 over 1 year ago
I wonder if the laws will be written in a way that we can get around them by just dropping the "AI" marketing fluff and saying that we're building some ML/stats system.

bilsbie over 1 year ago
Can anyone explain how they can make all these regulations without an act of Congress?

RecycledEle over 1 year ago
In Robert Heinlein's Starship Troopers, only those who had served in the military could vote on going to war. (I know I'm oversimplifying.)

I want a society where you have to prove competence in a field to regulate that field.

nh23423fefe over 1 year ago
They can't regulate finance; they can't regulate AI either.

BenoitP over 1 year ago
Earlier on HN:

https://news.ycombinator.com/item?id=38067314

https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/

Rebuff5007 over 1 year ago
It boggles my mind that this is getting so much attention instead of things like digital privacy / data tracking, which are actually affecting people's lives.

DebtDeflation over 1 year ago
> The National Institute of Standards and Technology will set the rigorous standards for extensive red-team testing to ensure safety before public release.

So if, for example, Llama 3 does not pass the government's safety test, then Meta will be forbidden from releasing the model? Welcome to a world where only OpenAI, Anthropic, Google, and Amazon are allowed to release foundation models.

ThinkBeat over 1 year ago
> biological or nuclear weapons,

You know, aside from the AIs the intelligence community and military use / will soon use.

> watermarked to make clear that they were created by A.I.

Good luck with that. It is fine that the systems do this, but if you are making images for nefarious reasons, bypassing whatever they add should be simple: screencap / convert between different formats, add / remove noise.

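The fragility described above can be shown with a toy least-significant-bit watermark. This is only an illustrative sketch - real provenance watermarks are far more sophisticated - but it demonstrates why simple re-encoding noise defeats naive marking:

```python
import random

# Toy illustration: embed one watermark bit in the least significant bit
# of each "pixel", then show that ordinary re-encoding noise wipes it out.

def embed(pixels, bits):
    """Set each pixel's LSB to the corresponding watermark bit."""
    return [(p & ~1) | b for p, b in zip(pixels, bits)]

def extract(pixels):
    """Read the watermark back out of the LSBs."""
    return [p & 1 for p in pixels]

random.seed(0)
image = [random.randrange(256) for _ in range(64)]  # fake 64-pixel image
mark = [random.randrange(2) for _ in range(64)]     # 64-bit watermark

stamped = embed(image, mark)
assert extract(stamped) == mark  # watermark survives a lossless copy

# "convert between formats / add noise": nudge each value by +/-1
noisy = [min(255, max(0, p + random.choice((-1, 1)))) for p in stamped]
assert extract(noisy) != mark    # watermark is destroyed
```

Robust schemes spread the mark across frequency-domain coefficients instead of raw pixel bits, but the same arms race applies: whatever a detector looks for, a determined re-encoder can perturb.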
RationalDino over 1 year ago
I am afraid this will just lead down the path that https://twitter.com/ESYudkowsky/status/1718654143110512741 was mocking. We're dictating solutions to today's threats, leaving tomorrow to its own devices.

But what will tomorrow bring? As Sam Altman warns in https://twitter.com/sama/status/1716972815960961174, superhuman persuasion is likely to be next. What does that mean? We've already had the problem of social media echo chambers leading to extremism, and of online influencers creating cult-like followings. https://jonathanhaidt.substack.com/p/mental-health-liberal-girls is a sober warning about the dangers this poses to mental health.

These are connected humans accidentally persuading each other. Now imagine AI being able to drive that intentionally, toward a particular political end. Then remember that China controls TikTok.

Will Biden's order keep China from developing that capability? Will we develop tools to identify how it might be actively used against us? I doubt both.

Instead, we'll almost certainly get security theater leading to a regulatory moat. Which is almost certain to help profit margins at established AI companies, but is unlikely to address the likely future problems that haven't materialized yet.

maytc over 1 year ago
Regulatory capture for AI is here?

Looking at Bill Gurley's "2,851 Miles" talk (https://12mv2.com/2023/10/05/2851-miles-bill-gurley-transcript-slides/)

14 over 1 year ago
The cat is out of the bag. This will have no meaningful effect except to stop the lowest-tier players.

whywhywhywhy over 1 year ago
Any major restrictions will hand the future to China, Russia, and the UAE for the short-term gain of, presumably, some kickbacks from incumbents.

honeybadger1 over 1 year ago
Expect trash that protects big business and puts a boot on everyone else's neck.

numpad0 over 1 year ago
How do any of these work when everyone is cargo-cult "programming" AI by verbally asking nicely? Effectively no one but a very few at OpenAI et al. has any understanding, let alone control.

rvz over 1 year ago
OpenAI, Anthropic, Microsoft, and Google are not your friends, and the regulatory capture scam is being executed to destroy open source and $0 AI models, since they are indeed a threat to their business models.

rmbyrro over 1 year ago
I see Salt Man's bureau trips are paying off.

venatiodecorus over 1 year ago
The way to make AI content safe is the same way to improve general network security for everyone: cryptographically signed content standards. We should be able to sign our tweets, blog posts, emails, and most network access. This would help identify and block regular bots along with AI-powered automatons. Trusted orgs can maintain databases people can subscribe to for trust networks, or you can manage your own. Your key(s) can be used to sign into services directly.

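The sign-and-verify flow the comment proposes can be sketched in a few lines. This is a toy using HMAC (a shared secret) purely because it is in the standard library; the key material and function names are hypothetical, and a real content-signing standard would use public-key signatures (e.g. Ed25519, as in C2PA-style provenance schemes) so that anyone can verify without holding the secret:

```python
import hmac
import hashlib

def sign_post(key: bytes, content: str) -> str:
    """Return a hex signature binding the content to the author's key."""
    return hmac.new(key, content.encode("utf-8"), hashlib.sha256).hexdigest()

def verify_post(key: bytes, content: str, signature: str) -> bool:
    """Recompute the signature and compare in constant time."""
    return hmac.compare_digest(sign_post(key, content), signature)

# Hypothetical key material, e.g. looked up from a subscribed trust database.
key = b"author-key-from-trust-database"
post = "My signed tweet"
sig = sign_post(key, post)

assert verify_post(key, post, sig)            # untampered content verifies
assert not verify_post(key, post + "!", sig)  # any edit breaks the signature
```

The trust-database idea then reduces to key distribution: verification only tells you the content matches a key, and it is the subscribed database (or your own) that says whose key it is.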
EMM_386 over 1 year ago
I don't see any way of stopping this. If the risks are as great as some claim, that is not a great situation.

So now we have an executive order with a very limited scope. Tomorrow, suddenly the world's most powerful AI is announced, not in the United States.

Ok, so now we want to make sure that one is safe. An executive order from the White House has no effect on it. This can continue until it's decided the stakes are getting too high. Then I suppose you could have the United Nations start trying to figure out how to maintain safety. Of course, there will be countries that simply ignore anything that is decided, hiding increasingly advanced systems with unknown purposes. It will probably take longer for nations to determine what defines "human values" so that AI respects them than it does to create another leap in AI capabilities.

Then there would simply be more concerns coming into play. Countries go to war to try to stop other countries' nuclear ambitions; is it possible that AI poses enough of a threat that similar problems arise?

Basically, if AI is as potentially large a threat as we are envisioning, there are so many different potential threats that trying to solve them while staying ahead of the pace of advancement seems unrealistic. While someone is trying to ensure we don't end up with systems going rogue, someone else needs to handle the fact that we can't have AI creating certain things. AI systems must not be allowed to tinker with viruses, for example, where unexpected creations can lead to extremely bad situations.

The initial stages of this have already begun, and time is ticking. I guess we'll see.

ilaksh over 1 year ago
Good start. But if you are in or approaching WWIII, you will see military AI control systems as a priority, and be looking for radical new AI compute paradigms that push the speed, robustness, and efficiency of general-purpose AI far beyond any human ability to keep up. This puts Taiwan even more in the hot seat, and aims for a dangerous level of reliance on hyperspeed AI.

I don't see any way to continue to have global security without resolving our differences with China. And I don't see any serious plans for doing that. Which leaves it to WWIII.

Here is an article where the CEO of Palantir advocates for the creation of superintelligent AI weapons control systems: https://www.nytimes.com/2023/07/25/opinion/karp-palantir-artificial-intelligence.html

honeybadger1 over 1 year ago
This will just make it harder for businesses not lining the pockets of Congress and buddying up with the government.

stevev over 1 year ago
Let the regulations, antitrust lawsuits, and monopolies begin!

rmbyrro over 1 year ago
Why's there a bat flying over the White House logo?

engcoach over 1 year ago
Impotent action to appear relevant.

almatabata over 1 year ago
These regulations will only impact the public. I expect the army and secret services to gain access to the complete unrestricted models, officially or unofficially. I would like to see the final law, to check whether it has a carve-out for military usage.

The threat includes the whole world - every single country. You will see the US using AI to mess with China and Russia, and you will see Russia and China using AI to mess with the US. No regulation will stop this; it will inevitably happen.

Maybe in 100 years you will have the equivalent of the Geneva Convention, but for AI, once we have wrought enough chaos on each other.

jiggawatts over 1 year ago
Everyone forgets that all of this should have applied to every major search engine:

1. They've all used much more than the regulatory threshold of compute power for indexing and collating.

2. They can be used to answer arbitrary questions, including how to kill oneself or produce weapons to kill others. Yes, including detailed nuclear weapons designs.

3. They can be used to find pornography, racist material, sexist literature, and on, and on... largely without censure or limit.

So... why the sudden need to curtail what we can and can't do with computers?

AlexanderTheGr8 over 1 year ago
As far as I can tell, the only concerning thing in this is "Require that developers of the most powerful AI systems share their safety test results and other critical information with the U.S. government."

They are being intentionally vague here. Define "most powerful". And what do they mean by "share"? Do we need approval, or just acknowledgement?

This line is a slippery slope toward requiring approval for any AI model, which effectively kills start-ups that cannot afford extensive safety precautions.

collsni over 1 year ago
This isn't about regulation; this is about market control.

pyuser583 over 1 year ago
A lot of folks are talking about "incumbents in AI taking regulatory control."

That is extremely premature. There are no real incumbents. The only companies with real cash flow from this are in hardware.

We still don't know what commercial AI will look like - much less have massive incumbents.

Maybe we should be a bit more skeptical of privacy laws that conveniently make it harder to start a social networking site or search engine.

But AI still doesn't have a clear application.

adolph over 1 year ago
Said executive order was not linked to in the document.

monksy over 1 year ago
The privacy section is just a facepalm all around.

The US government has been leading the way in collecting information without a warrant from friendly commercial interests, and it has been expanding further into tracking large groups of people without their consent. [I'm talking about people who are not under investigation nor the current subject of interest... yet]

saturn8601 over 1 year ago
I don't see how they will enforce many of these rules on open source AI.

Also:

"Establish an advanced cybersecurity program to develop AI tools to find and fix vulnerabilities in critical software, building on the Biden-Harris Administration's ongoing AI Cyber Challenge. Together, these efforts will harness AI's potentially game-changing cyber capabilities to make software and networks more secure."

I fear the era of pwning your own device to free it from DRM or other lockouts is coming to an end with this. We have been lucky that C++ is still used badly in many projects, and that has been an Achilles heel for many a manager wanting to lock things down. Now this door is closing faster with the rise of AI bug-catching tools.

orbital-decay over 1 year ago
> They include requirements that the most advanced A.I. products be tested to assure that they cannot be used to produce biological or nuclear weapons

How is "AI" defined? Does this mean US nuclear weapons simulations will have to rely entirely on hard methods, with absolutely no ML involved for optimizations? What does it mean for things like AlphaFold?

pr337h4m over 1 year ago
The First Amendment hasn't been fully destroyed yet, and we're talking about large 'language' models here, so most mandates might not even be enforceable (except for requirements on selling to the government, which can be bypassed by simply not selling to the government).

Edited to add:

https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/

Except for the first bullet point (and arguably the second), everything else is a directive to another federal agency - they have NO POWER over general-purpose AI developers (as long as they're not government contractors).

The first point: "Require that developers of the most powerful AI systems share their safety test results and other critical information with the U.S. government. In accordance with the Defense Production Act, the Order will require that companies developing any foundation model that poses a serious risk to national security, national economic security, or national public health and safety must notify the federal government when training the model, and must share the results of all red-team safety tests. These measures will ensure AI systems are safe, secure, and trustworthy before companies make them public."

The second point: "Develop standards, tools, and tests to help ensure that AI systems are safe, secure, and trustworthy. The National Institute of Standards and Technology will set the rigorous standards for extensive red-team testing to ensure safety before public release. The Department of Homeland Security will apply those standards to critical infrastructure sectors and establish the AI Safety and Security Board. The Departments of Energy and Homeland Security will also address AI systems' threats to critical infrastructure, as well as chemical, biological, radiological, nuclear, and cybersecurity risks. Together, these are the most significant actions ever taken by any government to advance the field of AI safety."

Since the actual text of the executive order has not been released yet, I have no idea what is even meant by "safety tests" or "extensive red-team testing". But using them as a condition to prevent the release of your AI model to the public would be blatantly unconstitutional, as prior restraint is prohibited under the First Amendment. The Supreme Court confirmed that prior restraint applies even when "national security" is involved in New York Times Co. v. United States (1971) - the Pentagon Papers case. The Pentagon Papers were actually relevant to "national security", unlike LLMs or diffusion models. More on prior restraint here: https://firstamendment.mtsu.edu/article/prior-restraint/

Basically, this EO is toothless - have a spine and everything will be all right :)

siliconc0w over 1 year ago
Both approaches - watermarking and 'requiring testing' - seem pretty pointless. Bad actors won't watermark, and tools will quickly emerge to remove watermarks. The 'MegaSyn' AI that generated bioweapon molecules wasn't even an LLM and didn't need insane amounts of compute.

batch12 over 1 year ago
This line is a little scary:

> Ensure fairness throughout the criminal justice system by developing best practices on the use of AI in sentencing, parole and probation, pretrial release and detention, risk assessments, surveillance, crime forecasting and predictive policing, and forensic analysis.

Nifty3929 over 1 year ago
I'm worried about the idea of a watermark.

The watermark could be "Created by DALL-E 3", or it could be "Created by Susan Johnson at 2023-01-01-02-03-23:547 in <Lat/Long> using prompt 'blah' with DALL-E 3".

One of those watermarks seems not too bad. The other seems a bit worse.

I_am_uncreative over 1 year ago
Is there a penalty for non-compliance here? Because if you were a wealthy recluse with 50,000 H100 cards, the executive order might say you have to report your models, but I'm pretty sure there's no penalty that could be enforced without a law.

nojito over 1 year ago
There's some cool stuff in here about providing assistance to smaller researchers. That should help a lot, given how hard it currently is to train a foundational model.

The restrictions around government use of AI and data brokers are also refreshing to see.

brodouevencode over 1 year ago
How much will this regulation cost in 5, 10, 50 years? Who will write the regulations?

photochemsyn over 1 year ago
If they try to limit LLMs from discussing nuclear, biological, and chemical issues, they'll have no choice but to ban all related discussion because of the 'dual-use technology' issue - including discussion of nuclear energy production, antibiotic and vaccine production, insecticide manufacturing, etc. Similarly, illegal drug synthesis differs from legal pharmaceutical synthesis only in minor ways. ChatGPT will tell you everything you want about how to make aspirin from willow bark using acetic anhydride - and if you replace the willow bark with morphine from opium poppies, you're making heroin.

Also, script kiddies aren't much of a threat in terms of physical weapons compared to cyberattack issues. Could one get an LLM to code up a Stuxnet attack of some kind? Are the regulators going to try to ban all LLM coding related to industrial process controllers? Seems implausible, although concerns are justified, I suppose.

I'm sure the regulatory agencies are well aware of this and are just waving this flag around for other reasons, such as gaining censorship power over LLM companies. With respect to the DOE's NNSA (see article), ChatGPT is already censoring 'sensitive topics':

> "Details about any specific interactions or relationships between the NNSA and Israel in the context of nuclear power or weapons programs may not be publicly disclosed or discussed... As of my last knowledge update in January 2022, there were no specific bans or regulations in the U.S. Department of Energy (DOE) that explicitly prohibited its employees from discussing the Israeli nuclear weapons program."

I'm guessing the real concern is that LLMs might start burbling on about such politically and diplomatically embarrassing subjects at length without any external controls. In this case, NNSA support for the Israeli nuclear weapons program would constitute a violation of the Non-Proliferation Treaty.

epups over 1 year ago
This looks even more heavy-handed than the regulation from the EU so far.

coding123 over 1 year ago
Unfortunately, he doesn't know what he signed.

ThrowawayTestr over 1 year ago
I'm so glad this country is run by a geriatric who can barely pronounce "AI", let alone understand it.

billy_bitchtits over 1 year ago
Code is free speech. Reminds me of the cryptography fights.

baggy_trough over 1 year ago
Disturbing that this sort of thing can be decreed by the executive.

Eumenes over 1 year ago
This is pretty ironic: trying to ensure AI is "safe, secure, and trustworthy" from an administration that is fighting free speech on social media and wants back-door communication with social media companies.

px43 over 1 year ago
Huh, interesting.

> Establish an advanced cybersecurity program to develop AI tools to find and fix vulnerabilities in critical software, building on the Biden-Harris Administration's ongoing AI Cyber Challenge. Together, these efforts will harness AI's potentially game-changing cyber capabilities to make software and networks more secure.

atleastoptimal over 1 year ago
To those worried about regulatory capture, this EO just being about keeping incumbents in power, etc.:

Even sans regulation, do non-incumbents really have a chance at this point? The most recent major player in the field, Anthropic, only reached its level of prominence by taking a critical mass of former OpenAI employees, and within a year reached $700 million in funding. Every company that became a major player in the AI space in the last 10 years either

1. Is an existing huge company (Google, Facebook, Microsoft, etc.), or

2. Secured 99.99th-percentile venture funding within the first year of its inception due to its founders' preexisting connections/prestige.

Realistically there isn't going to be a "Facebook" moment for AI where some scrappy genius in college cooks up a SOTA model and goes stratospheric overnight - even in a libertarian fantasyland - just due to market/network effects. People just have to be realistic about the way things are.

Koshkin over 1 year ago
The DPRK will make this their law ASAP.

bbitmaster over 1 year ago
What a lot of nonsense. Where is the executive order banning gain-of-function research?

normalaccess over 1 year ago
All joking aside, I firmly believe that this "crisis" is manufactured, or at least heavily influenced, by those who want to shut down the internet and free communications. Up until now they have been unsuccessful: copyright infringement, hate speech, misinformation, disinformation, child exploitation, deep fakes - none have worked to garner support. Now we have an existential threat. Video, audio, text - nothing is off limits, and soon it will be in real time (note: the government tries to stay 25 years ahead of the private sector).

This meme video encapsulates it perfectly:

https://youtu.be/-gGLvg0n-uY?si=B719mdQFtgpnfWvH

Mark my words: in five years or less we will be begging the governments of Earth to implement permanent, global, real-time tracking of every man, woman, and child on Earth.

Privacy is dead. And WE killed it.

Eumenes over 1 year ago
This kind of thing should not be legislated via executive order. Congress needs a committee and must deliberate. Sad.

RandomLensman over 1 year ago
Does Microsoft need to share how it is testing Excel? Some subtle bug there might do an awful lot of damage.

sirmike_ over 1 year ago
This is useless, just like everything they do. Masterfully full of synergy and nonsense talk.

Is there anyone here who actually believes this will do something? Sincere question.

iinnPP over 1 year ago
Criminals don't follow the rules. Large corps don't follow the rules.

The only people this impacts are the ones you don't need it to impact. The bit about detection and authentication services is also alarming.

tomohawk over 1 year ago
In my history book, I read that we fought a war to not have a king.

In my civics class, I learned that Congress passes laws, not the President.

I guess a public school education only goes so far.

d--b over 1 year ago
I was downvoted 35 days ago for daring to state that deepfakes would lead to AI being regulated.

Of course "these are just recommendations", but we're getting there.
