> "expectation that companies like yours must make sure their products are safe before making them available to the public."<p>Let's make a guess: they are going to say it's dangerous and we need regulation to prevent competitio---terrorism.<p>Here is what you need to do instead: get some smart, trustworthy people from various agencies, have them use the OpenAI playground, and see what can be accomplished. Then show them that you can torrent Facebook's LLM right now, and that it's already on computers worldwide. The cat is out of the bag.<p>Then let them make policy decisions.<p>Hard to imagine this is anything other than a ploy for regulations and lobbying.
Considering Anthropic and OpenAI will be there, I think the right players are at the table. I would've liked to have seen Meta there, since I think they're focused on generative deep learning. That said, with the administration's AI Bill of Rights top of mind, I don't have faith in the gerontocracy to regulate this sector [1].<p>As a jocular aside, I wonder if ChatGPT could be used to write these articles? The second-to-last paragraph in this article is identical to the second paragraph from this earlier one: [2].<p>[1] - <a href="https://www.whitehouse.gov/ostp/ai-bill-of-rights/" rel="nofollow">https://www.whitehouse.gov/ostp/ai-bill-of-rights/</a><p>[2] - <a href="https://www.reuters.com/technology/us-begins-study-possible-rules-regulate-ai-like-chatgpt-2023-04-11/" rel="nofollow">https://www.reuters.com/technology/us-begins-study-possible-...</a>
This honestly feels like a good step. I see a lot of comments here lamenting potential regulatory overreach and while that is definitely a risk there are also a lot of people calling for regulations on AI and LLMs. There are credible risks and a lot of people are concerned. At the end of the day it’s a democracy: ignoring these people will not work out. Enough people are concerned that doing nothing is not an option (numerous septuagenarians in my life have serious and legitimate concerns about this. The government has done nothing to curtail rampant text/phone scams targeting the elderly and LLMs can really amplify these scams).<p>The White House inviting leaders from industry to represent their position at a tentative stage feels like a measured and sensible approach to regulation. Industry is given a seat at the table and hopefully they can reach an agreement that satisfies the needs of industry while also placating the widespread fears about AI. This is a good incremental approach to crafting good laws. While they are at it I wouldn’t mind if the White House also did something about the rampant social security phone scams, but one step at a time.
"In early May 1945, Secretary of War Henry L. Stimson, with the approval of President Harry S. Truman, formed an Interim Committee of top officials charged with recommending the proper use of atomic weapons in wartime and developing a position for the United States on postwar atomic policy. Stimson headed the advisory group composed of Vannevar Bush, James Conant, Karl T. Compton, Under Secretary of the Navy Ralph A. Bard, Assistant Secretary of State William L. Clayton, and future Secretary of State James F. Byrnes. Robert Oppenheimer, Enrico Fermi, Arthur Compton, and Ernest Lawrence served as scientific advisors (the Scientific Panel), while General George Marshall represented the military. The committee met on May 31 and then again the next day with leaders from the business side of the Manhattan Project, including Walter S. Carpenter of DuPont, James C. White of Tennessee Eastman, George H. Bucher of Westinghouse, and James A. Rafferty of Union Carbide."
Here's the argument that (as a USA-ian) persuades me the most: if these AI systems are <i>weapons</i>, then we get to have them by the <i>2nd Amendment</i>. It's the same as the we-get-to-have-strong-encryption argument, eh?<p>The gov and the corps are not supposed to be the ultimate arbiters of authority. That was the crux of the American Revolution: throwing out the king.<p>Remember that e.g. Palmer Luckey and co. are busy making <i>Skynet</i> (Anduril Industries). The system is poised to enforce policy.
Re-gu-gu-la-to ... ry<p>Cap-cap-cap-cap-cap ... ture<p>(♫ cue in football gallery tune)<p>Soon we will know that only evil people have LLaMA finetunes on their desktops. Good citizens use an official provider like OpenAI.
One thought about AI: testing for correct answers is not a useful metric for it.
People can learn something that is wrong as easily as something that is "less wrong", as long as it makes sense. Sometimes things that are very counterintuitive are proven correct, and our intellect has to kind of reason its way into believing them.<p>Also, AI doesn't need to be "human" to be very useful. The argument of birds vs. planes comes to mind.
That's a hell of a title. It reads like an AI called Google and Microsoft CEOs to meet at the White House.<p>Or that the AI CEOs of Google and Microsoft are having an AI pow-wow at the White House.
Zuck is such a champ.<p>Drops LLM into open source world, leaves without explaining. Plausible deniability through leak. No one punished.<p>Legend. Like handing everyone in America a nail gun.
Wrapping up with<p><i>"I think we should be cautious with AI, and I think there should be some government oversight because it is a danger to the public," Tesla Chief Executive Elon Musk said last month in a television interview</i><p>As one of the few actors whose hyperbolic statements about "AI" in a high-stakes control context have already literally gotten people killed, his authority is not as good as it could have been. Maybe Reuters should have picked another face for urging caution.
The article highlights the White House's efforts to engage with top AI companies and discuss concerns related to artificial intelligence. However, it's worth considering whether these meetings might serve as a double-edged sword, given the potential for the administration to manipulate the AI community for political gain. As the next election cycle approaches, there is a risk that the White House could use its influence to shape AI development in ways that benefit the incumbent administration.<p>For instance, the Biden administration's call for AI companies to ensure the safety of their products before releasing them to the public could be seen as a way to exert control over these influential technologies. While it is important to address the potential risks of AI, such as privacy violations, bias, and misinformation, it is crucial to ensure that the government's involvement does not lead to undue interference or censorship that could sway public opinion in favor of the ruling party.<p>Moreover, as AI technologies like ChatGPT gain more prominence and widespread adoption, the potential for misuse by political actors becomes increasingly concerning. The administration's interest in regulating AI systems may be well-intended, but there is a danger that such regulation could be used to manipulate the information landscape in a way that serves the interests of those in power.<p>In conclusion, while the White House's engagement with the AI community is a necessary step in addressing the challenges and concerns surrounding artificial intelligence, it is important to remain vigilant against the potential for political manipulation. The AI community must work together with government officials to strike a balance between addressing legitimate concerns and preserving the integrity and independence of AI development.<p>note: I did prod ChatGPT in the direction of criticism from the prompt, but this is the generated response as-is. Well, I'll be damned.