What do they expect to happen if they win? OpenAI can't use GPT-4 (or build/release GPT-5), but the innovation will continue in parts of the world not subject to this regulation?<p>I understand that LLMs are advancing quickly and aren't easily explainable or transparent. The models feel like magic at times. But that doesn't mean society should shut them down.<p>This is fearful behavior, and really just spreading FUD. These folks should take the time to understand how an LLM works before taking this action.
The fact that they are targeting GPT-4 for supposed bias and safety risks, when GPT-4 is the least biased and safest (hardest to jailbreak) model that OpenAI has released, makes this look like just an unsophisticated attack on their business model.
What bothers me about this is that, yes, moving too fast with AI, to the point of disruption or where society can't keep up, is a problem.<p>Yet restricting OpenAI, on the other hand, won't prevent other big companies from building their own in-house GPT-4 (or GPT-5) level models. We're going there whether the government likes it or not. At the very least OpenAI is transparent (more so than Google or Facebook, at least).
Therein lies Sam Altman's biggest fear: a regulatory crackdown or restrictions on OpenAI. At this point the government presents the only real risk to the progress and success of OpenAI.<p>Sure, there are competitors like Google, but so far OpenAI is the leader and doesn't seem to be slowing down. The market could evolve into a natural duopoly, especially given the huge capital expenditures and technical know-how required to stand up and maintain a cutting-edge LLM like GPT-4.<p>Once GPT-4/ChatGPT reaches a certain tipping point of disruption, and public sentiment turns from curiosity to fear, the resulting backlash and scrutiny could be on the level of Microsoft's antitrust case in the 1990s. If I were Sam, I'd be pouring resources and money into DC to try to get ahead of this coming storm.
One way or another, the cat is out of the bag and there is no going back. Remember Napster. Even if OpenAI/ChatGPT is taken down, however unlikely that may be, there is no slowing down the innovation that is about to transform our lives. This moment feels like the early 2000s, when Web 1.0 became real to the masses and suddenly everyone had a use for the web. We are at the precipice of the next big technology cycle, and this shows all the classic symptoms of incumbents fighting the inevitable disruption.
I wonder how much of this attack on "AI" is directed by China. Slowing down AI development in the Western world until they can catch up seems like a big win for China.
Here's the press release from the organization that filed the complaint, which has a bit more detail: <a href="https://s899a9742c3d83292.jimcontent.com/download/version/1680174583/module/8450182663/name/PRESS-CAIDP-OpenAI-FTC-Complaint.pdf" rel="nofollow">https://s899a9742c3d83292.jimcontent.com/download/version/16...</a>
This complaint seems somewhat unlikely to lead to an actual FTC action. The complaint alleges unfair or deceptive business practices under the FTC Act, and the FTC has a fairly specific definition of what constitutes unfair or deceptive business practices[1]:<p><pre><code> > “Deceptive” practices are defined in the Commission’s Policy Statement on Deception as involving a material representation, omission or practice that is likely to mislead a consumer acting reasonably in the circumstances. An act or practice is “unfair” if it “causes or is likely to cause substantial injury to consumers which is not reasonably avoidable by consumers themselves and not outweighed by countervailing benefits to consumers or to competition.”
</code></pre>
[1] <a href="https://www.ftc.gov/about-ftc/mission/enforcement-authority" rel="nofollow">https://www.ftc.gov/about-ftc/mission/enforcement-authority</a>
The FTC has been begging the complain-for-profit sector to give it a formal path to regulate AI. The FTC's only enforcement hook in this area is that it can take action against companies that engage in unfair or deceptive trade practices. This is how the FTC began regulating privacy and security in the US, and it's been waiting to use the same hook for AI.<p>It comes as no surprise that this complaint is from Marc Rotenberg, former head of EPIC. He's very well aware of the boundaries of the FTC's power, and this complaint effectively serves as a letter to the FTC from an expert on how the agency can position itself to begin regulating AI.
My first instinct after reading the complaint is... Fuck off!! How nice it must be for members of this so-called "Center for AI and Digital Policy" to dictate - isn't that the result of a complaint enforced by the FTC? - from their nice and comfortable chairs what OpenAI, and by extension every other AI research company in the US, should do and how. Is this the new form of virtue signaling? The FTC should stop OpenAI because of all the possible negative outcomes their AI work MAY create? Right off the top of my head, I can come up with at least 10 places in the US and around the world where members of CAIDP could go right now and make a real difference to people who have real problems NOW. Instead, they want to dictate what others may do with their expression - and yes, AI research and its output is a form of expression protected in the US (home of OpenAI) under free speech laws. How about taking a page from their own org name and creating an AI that can do all the things they're asking for automatically? No, that's not an option for them, because it would actually require more work than a complaint they probably could've used ChatGPT to write. In their infinite wisdom, couldn't they have foreseen at some point in the last 10 years that an AI-based tool like ChatGPT would emerge? Where's their AI tool that could save us all now from the awful and destructive AI companies that are creating so much value for the world? Have they even read OpenAI's System Card for GPT-4? Did CAIDP even consider the tradeoffs and concerns it explores? On the way to reading the card, they should dust off a copy of Lessig's Code, check out a copy of The Moon is a Harsh Mistress, and remember that this is the US: we don't force people to do things, we engage in dialogue.<p>“If liberty means anything at all, it means the right to tell people what they do not want to hear.”
― George Orwell
The last thing I want is to talk to an AI bot when calling a company or health provider with questions. Due to where I live and my accent, these voice bots never work for me. So anything that stops these from being commercialized is good in my book.<p>But some of these articles about AI are nuts; one headline I ran across claimed AI will destroy all life on Earth and was supposedly signed by some scientists. I did not read it because it sounded crazy.<p>Also, these GPT* things are <i>not</i> really AI, but word/sentence parsers and probably some fancy database lookup.
I mean, can't they just move the company to a friendly island nation, or elsewhere?<p>It's in our best interest that an American company is far and away the leader in this field.
This is how our society becomes the dystopia in Atlas Shrugged. Every fast-moving technology that we do not understand needs to be stopped in its tracks and regulated to check for "safety", "inclusion", etc., because really the biggest problem facing the world right now is Unchecked Technological Progress. In fact, the biggest problem facing Black people and women is biased AI, as if humans were always fair. Clown world!
I may not agree with:<p>> CAIDP calls GPT-4 “biased, deceptive, and a risk to privacy and public safety.”<p>But the rest looks good to me:<p>> The group says the large language model fails to meet the agency’s standards for AI to be “transparent, explainable, fair, and empirically sound while fostering accountability.”<p>> The group wants the FTC to require OpenAI to establish a way to independently assess GPT products before they’re deployed in the future. It also wants the FTC to create a public incident reporting system for GPT-4 similar to its systems for reporting consumer fraud. It also wants the agency to take on a rulemaking initiative to create standards for generative AI products.<p>Sure, there will be (more) FOSS clones, and non-American clones. NBD — if they can't pass stuff like this, they're not going to be as valuable regardless.
When I read this, I was wondering whether the open letter advocating a moratorium, signed by prominent members of Google's research labs and OpenAI, could be viewed as collusion/market-sharing.
case text:<p><a href="https://www.caidp.org/app/download/8450269463/CAIDP-FTC-Complaint-OpenAI-GPT-033023.pdf" rel="nofollow">https://www.caidp.org/app/download/8450269463/CAIDP-FTC-Comp...</a>
>> “Tesla CEO Elon Musk, who co-founded OpenAI, and Apple co-founder Steve Wozniak were among the other signatories.”<p>I like Elon and his companies, but this is ridiculous. Autopilot AI has been killing people for years now and he always defends it.
Neo-Luddism.<p>Like they are going to stop China or any other country outside the US or EU.<p>The most they can hope to do is force some companies to move offshore.<p>I wonder where all these people were when Elon Musk started releasing betas of FSD.<p>A self-appointed 'Center for AI and Digital Policy'; nothing more to add.