Sam Altman goes before US Congress to propose licenses for building AI

914 points, by vforgione, about 2 years ago

288 comments

neonate, about 2 years ago
http://web.archive.org/web/20230516122128/https://www.reuters.com/technology/openai-chief-goes-before-us-congress-propose-licenses-building-ai-2023-05-16/
happytiger, about 2 years ago
We need to MAKE SURE that AI as a technology ISN'T controlled by a small number of powerful corporations with connections to governments.

To expound, this just seems like a power grab to me, to "lock in" the lead and keep AI controlled by a small number of corporations that can afford to license and operate the technologies. Obviously, this will create a critical nexus of control for a small number of well-connected and well-heeled investors and is to be avoided at all costs.

It's also deeply troubling that regulatory capture is such an issue these days, so putting a government entity in front of the use and existence of this technology is a double whammy — it's not simply about innovation.

The current generation of AIs are "scary" to the uninitiated because they are uncanny-valley material, but beyond impersonation they don't show the novel intelligence of a GPI... yet. It seems like OpenAI/Microsoft is doing a LOT of theater to try to build a regulatory lock-in on their short-term technology advantage. It's a smart strategy, and I think Congress will fall for it.

But goodness gracious, we need to be going in the EXACT OPPOSITE direction — open source, "core inspectable" AIs that millions of people can examine and tear apart, including and ESPECIALLY the training data and processes that create them.

And if you think this isn't an issue: I wrote this post an hour or two before I managed to take it live, because Comcast went out at my house and we have no viable alternative competitors in my area. We're about to do the same thing with AI, but instead of Internet access it's future digital brains that can control all aspects of a society.
elil17, about 2 years ago
This is the message I shared with my senator (edited to remove information which could identify me). I hope others will send similar messages.

Dear Senator [X],

I am an engineer working for [major employer in the state]. I am extremely concerned about the message that Sam Altman is sharing with the Judiciary Committee today.

Altman wants to create regulatory roadblocks to developing AI. My company produces AI-enabled products. If these roadblocks had been in place two years ago, my company would not have been able to invest in AI. Now, because we had the freedom to innovate, AI will be bringing new, high-paying jobs to our factories in our state.

While AI regulation is important, it is crucial that there are no roadblocks stopping companies and individuals from even trying to build AIs. Rather, regulation should focus on ensuring the safety of AIs once they are ready to be put into widespread use - this would allow companies and individuals to research new AIs freely while still ensuring that AI products are properly reviewed.

Altman and his ilk try to claim that aggressive regulation (which will only serve to give them a monopoly over AI) is necessary because an AI could hack its way out of a laboratory. Yet they cannot explain how an AI would accomplish this in practice. I hope you will push back against anyone who fear-mongers about sci-fi-inspired AI scenarios.

Congress should focus on the real impacts that AI will have on employment. Congress should also consider the realistic risks that AI poses to the public, such as risks from the use of AI to control national infrastructure (e.g., the electric grid) or to make healthcare decisions.

Thank you, [My name]
brenns10, about 2 years ago
Reminds me of SBF calling for crypto regulations while running FTX. Being seen as friendly to regulation is great for optics compared to being belligerently anti-regulation. You can appear responsible and benevolent, and get more opportunity to weaken regulation by controlling more of the narrative. And hey, if you end up getting some regulatory capture that makes competition harder, that's a great benefit too.

OpenAI != FTX; I just mean that calling for regulation isn't an indication of good intentions, despite sounding like it.
nico, about 2 years ago
This is quite incredible.

Could you imagine if MS had convinced the government back in the day to require a special license to build an operating system (thus blocking Linux and everything open)?

It's essentially what's happening now, except it is OpenAI instead of MS, and it is AI instead of Linux.

AI is the new Linux. They know it, and they are trying desperately to stop it from happening.
sva_, about 2 years ago
It seems pretty clear at this point that OpenAI etc. will lobby toward making it more difficult for new companies/entities to join the AI space, all in the name of 'safety'. They're trying to make the case that everyone should use AI through their APIs so that they can keep things in check.

Conveniently, this also helps them build a monopoly. It is pretty aggravating that they're bastardizing and abusing terms like 'safety' and 'democratization' while doing this. I hope they'll fail in their attempts, or that the competition rolls over them sooner rather than later.

I personally think that the greatest threat in these technologies is currently the centralization of their economic potential, as it will lead to an uneven spread of their productivity gains, further divide poor and rich, and thus threaten the order of our society.
valine, about 2 years ago
My gut feeling is that the majority of AI safety discussions are driven by companies that fear losing their competitive edge to small businesses. Until now, it's been challenging to grow a company beyond a certain size without employing an army of lawyers, human resources professionals, IT specialists, etc. What if two lawyers and an LLM could perform the same work as a legal department at a Fortune 500 company? The writing is on the wall for many white-collar jobs, and if these LLMs aren't properly regulated, it may be the large companies that end up drawing the short straw.

How many of Microsoft's 221k employees exist solely to support the weight of a company with 221k people? A smaller IT department doesn't need a large HR department. And a small HR department doesn't file many tickets with IT. LLM-driven multinationals will need orders of magnitude fewer employees, and that puts our current multinationals in a very awkward position.

Personally, I will be storing a local copy of LLaMA 65B for the foreseeable future. Instruct fine-tuning will keep getting cheaper; given the stakes, the large models might not always be easy to find.
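For anyone tempted to follow that last suggestion, here is a minimal sketch of mirroring a set of openly licensed model weights to local disk with the huggingface_hub client; the repository IDs are hypothetical placeholders, and it assumes the huggingface_hub package is installed.

    # Minimal local-archiving sketch, assuming huggingface_hub is installed
    # (pip install huggingface_hub). The repo IDs below are hypothetical
    # placeholders; substitute whichever openly licensed checkpoints you want to keep.
    from pathlib import Path
    from huggingface_hub import snapshot_download

    MODELS = [
        "example-org/example-65b",  # placeholder, not a real repository
        "example-org/example-7b",   # placeholder, not a real repository
    ]

    def archive(repo_id: str, root: str = "model-archive") -> Path:
        # Download a full snapshot of the repository (weights, tokenizer, config) to disk.
        target = Path(root) / repo_id.replace("/", "__")
        snapshot_download(repo_id=repo_id, local_dir=str(target))
        return target

    if __name__ == "__main__":
        for repo in MODELS:
            print("archived to", archive(repo))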
bilekas, about 2 years ago
This is so stupid; it's exactly what you would expect from Congress.

If this were to go through, of course OpenAI and co. would be the primary lobbyists to ensure they get to define the filters for such a license.

Also, how would you even enforce this? It's absolute nonsense and is a clear indicator that these larger companies realize there is no 'gatekeeping' these AIs, and that the democratization of models has demonstrated incredible gains over their own.

Edit: Imagine if during the early days of the internet you needed a license to start a website. In the later days, a license to start a social media site.

Nonsense.
neom, about 2 years ago
If you would like to email the Subcommittee on Privacy, Technology, & the Law to express your feelings on this, here are the details:

Majority Members
Chair Richard Blumenthal (CT) brian_steele@blumenthal.senate.gov
Amy Klobuchar (MN) baz_selassie@klobuchar.senate.gov
Chris Coons (DE) anna_yelverton@coons.senate.gov
Mazie Hirono (HI) jed_dercole@hirono.senate.gov
Alex Padilla (CA) Josh_Esquivel@padilla.senate.gov
Jon Ossoff (GA) Anna_Cullen@ossoff.senate.gov
Majority Office: 202-224-2823

Minority Members
Ranking Member Josh Hawley (MO) Chris_Weihs@hawley.senate.gov
John Kennedy (LA) James_Shea@kennedy.senate.gov
Marsha Blackburn (TN) Jon_Adame@blackburn.senate.gov
Mike Lee (UT) Phil_Reboli@lee.senate.gov
John Cornyn (TX) Drew_Brandewie@cornyn.senate.gov
Minority Office: 202-224-4224
ok_dad, about 2 years ago
I just want to chime in here and say this is what I expected from the folks who currently control this tech: to leverage political connections to legally cement themselves in the market as the leaders and disallow the common plebeian from using this world-changing tech. It enrages me SO MUCH that people act like this. We could be colonizing planets, but instead a few people want to keep all the wealth and power for themselves. I can't wait to eat the rich; my fork will be ready.
simonbarker87, about 2 years ago
Is this just to put up a barrier to entry for new entrants in the market so they can have a government-enforced monopoly?
generalizations, about 2 years ago
This is going to be RSA export restrictions all over again. I wish the regulators the best of luck in actually enforcing this. I'm tempted to think that whatever regulations they put in place won't really matter that much, and progress will march on regardless.

Give it a year and a 10x more efficient algorithm, and we'll have GPT-4 on our personal devices, and there's nothing that any government regulator will be able to do to stop that.
agnosticmantis, about 2 years ago
I used to be very enthusiastic about the tech industry and Silicon Valley culture before getting into it, but having worked in tech for a while I feel very demoralized and disillusioned by all the blatant lies and hypocrisy that seem central to business.

I wouldn't mind ruthless anti-competitive approaches to business as much, but the hypocrisy is really demoralizing.
srslack, about 2 years ago
Imagine thinking that regression-based function approximators are capable of anything other than fitting the data you give them. Then imagine willfully hyping up and scaring people who don't understand, and because it can predict words, taking advantage of the human tendency to anthropomorphize, so that it follows it is something capable of generalized and adaptable intelligence.

Shame on all of the people involved in this: the people in these companies, the journalists who shovel shit (hope they get replaced real soon), researchers who should know better, and dementia-ridden legislators.

So utterly predictable and slimy. All of those who are so gravely concerned about "alignment" in this context, give yourselves a pat on the back for hyping up science fiction stories and enabling regulatory capture.
ChicagoBoy11, about 2 years ago
At some point Sam started to give me E. Holmes vibes, and I really don't like it. There's a level of odd/ridiculous/hilarious/stupid AI hype that he feels so comfortable leaning into that part of me starts to suspect that the emperor isn't wearing any clothes.
johnyzee, about 2 years ago
The mainstream media cartel is pumping Sam Altman hard for some reason. Just from today (CNBC): *"Sam Altman wows lawmakers at closed AI dinner: 'Fantastic…forthcoming'"* [1]. When was the last time you saw MSM suck up so hard to a Silicon Valley CEO? I see stories like this all the time now. They always play up the angle of the geeky whiz kid (so innocent!), whereas Sam Altman was always less a technologist and more of a relentless operator and self-promoter. Even Paul Graham subtly called that out at the time he made him head of YC [2].

True to form, these articles also work hard at planting the idea that Sam Altman created OpenAI, when in fact he joined rather recently, in a business role. Are these articles being planted somehow? I find it very likely. Don't forget that this approach is also straight out of the YC playbook, disclosed in great detail by Paul Graham in previous writings [3].

Finally, in keeping with the conspiratorial tone of this comment, for another example of Sam Altman rubbing shoulders with The Establishment, his participation in things like the Bilderberg group [4] is a matter of public record. Which I join many others in finding creepy, even more so as he maneuvers to exert influence on policy around the seismic shift that is AI.

To be clear, I have nothing specific against sama. But I dislike underhanded influence campaigns, which this all reeks of. Oh yeah, I will consider downvotes to this comment as proof of the shadow (AI?) government's campaign to promote Sam Altman. Do your worst!

[1] https://www.cnbc.com/2023/05/16/openai-ceo-woos-lawmakers-ahead-of-first-testimony-before-congress.html

[2] https://www.newyorker.com/magazine/2016/10/10/sam-altmans-manifest-destiny (*"Graham said, “I asked Sam in our kitchen, ‘Do you want to take over YC?,’ and he smiled, like, it worked. I had never seen an uncontrolled smile from Sam. It was like when you throw a ball of paper into the wastebasket across the room—that smile.”"*)

[3] http://www.paulgraham.com/submarine.html

[4] https://en.wikipedia.org/wiki/2016_Bilderberg_Conference
jacquesm, about 2 years ago
Regulatory capture in progress. I used to have a bit of respect for Altman and have spent time, bits, and processing cycles here defending him in the past. As of now that respect has all but evaporated; this is a very bad stance. Either nobody gets to play with the new toys or everybody gets to play. What's next, classifying AI as munitions?
bioemerl, about 2 years ago
OpenAI lobbying for regulation on common people being able to use AI, isn't it wonderful?
xnx, about 2 years ago
Not the first time that OpenAI has claimed their technology is so good it's dangerous (from early 2019: https://techcrunch.com/2019/02/17/openai-text-generator-dangerous/). This is the equivalent of martial artists saying that their hands have to be registered as deadly weapons.
oldstrangers, about 2 years ago
Any firm large enough to build AI projects on the scale of ChatGPT will be large enough to bid on government AI contracts. In which case, there will be zero regulations on what you can and cannot do in terms of "national security" in relation to AI. Which is fair, considering our adversaries won't be limiting themselves either.

The only regulations that matter will be applied to the end user and the hobbyists. You won't be able to just spin up an AI startup in your garage. So in that sense, the regulations are pretty transparently an attempt to stifle competition and funnel the real progress through the existing players.

It also forces end users down the path of using only a few select AI service providers, as opposed to the technology just being readily available.
joebob42, about 2 years ago
OpenAI has made a surprisingly fast pivot from the appearance of being a scrappy, open-ish company trying to build something to share and improve the world, to a more or less unmitigated embrace of the bad sides of big corporate. This is so unbelievably blatant I almost find it hard to credit.
jameshart, about 2 years ago
This is an AP newswire article picked up by a Qatari newspaper website. Why is this version here, rather than https://apnews.com/article/chatgpt-openai-ceo-sam-altman-congress-73ff96c6571f38ad5fd68b3072722790?
JumpCrisscross, about 2 years ago
The members of this subcommittee are [1]:

Chair Richard Blumenthal (CT), Amy Klobuchar (MN), Chris Coons (DE), Mazie Hirono (HI), Alex Padilla (CA), Jon Ossoff (GA)
Majority Office: 202-224-2823

Ranking Member Josh Hawley (MO), John Kennedy (LA), Marsha Blackburn (TN), Mike Lee (UT), John Cornyn (TX)
Minority Office: 202-224-4224

If you're in those states, please call their D.C. office and read them the comment you're leaving here.

[1] https://www.judiciary.senate.gov/about/subcommittees
catchnear4321, about 2 years ago
> In his first appearance before a congressional panel, CEO Sam Altman is set to advocate licensing or registration requirements for AI with certain capabilities, his written testimony shows.

Papers for thee but not for me.
RhodesianHunter, about 2 years ago
Regulatory capture and monopolies are now as American as apple pie.
gautamdivgi, about 2 years ago
Remember the paper where they admitted to having no "moat"? This is basically them trying to build a "moat" through regulation, since the big companies are probably the only ones that can do any sort of license testing right now. It's essentially trying to have an "FDA" for AI and crowd out competitors before they emerge.
hazmazlaz, about 2 years ago
Of course one of the first companies to create a commercial "AI" would lobby the government to create regulatory barriers to competition in order to provide a moat for their business. While their product is undeniably good, I am disappointed in OpenAI's business practices in this instance.
candiddevmike, about 2 years ago
I'm sad that we've lost the battle over calling these things AI. LLMs aren't AI, and I don't think they're even a path towards AI.
jacknews, about 2 years ago
IMHO all of these kinds of blatant lobbying/regulatory-capture proposals should be resolved using a kind of Dionysian method:

'Who is your most feared competition? OK, *they* will define the license requirements. Still want to go ahead?'
api, about 2 years ago
This is regulatory capture. Lycos and AltaVista are trying to preemptively outlaw Google.

I'm canceling my OpenAI account today, and I urge you to do the same.

What they are really afraid of is open source models. As near as I can tell, the leading edge there is only a year or two behind OpenAI. Given some time and efforts at pruning and optimization, you'll have GPT-4 equivalents you can just download and run on a high-end laptop or gaming PC.

No, not everyone is going to run the model themselves, but what this means is that there will be tons of competition, including apps and numerous specialized SaaS offerings. None of them will have to pay royalties or API fees to OpenAI.

Edit: a while back I started being a data pack-rat for AI stuff, including open source code and usable open models. I encourage anyone with a big disk or NAS to do the same. There's a small but non-zero possibility that an attempt will be made to pull this stuff off the net in the near future.
rockemsockem, about 2 years ago
Since no one here watched the actual hearings, I feel like I should point out that he said that nothing at the level they've created today should be eligible for any "licensing".

If you did watch the hearings, it would have been pretty clear that the goal of any such licensing would be to prevent the runaway-AI scenario, or AGI from being unknowingly created. It's obvious that some sort of agency would need to be set up far in advance of when it's possible for runaway AI to happen. Regulatory capture was also specifically brought up as a potential downside.

This article is just pushing a cynical narrative for clicks, and y'all are eating it up.
tomrod, about 2 years ago
In a move surprising to few, an AI innovator is pulling up the ladder after getting into the treehouse.

OpenAI has established itself as a market leader in LLM applications, but that dominance is not guaranteed. Especially with the moat being drained by open source, the erstwhile company is leading the charge to establish regulatory barriers.

What Mr. Altman calls for is no less than the death of open source implementations of AI. We can, do, and should adopt AI governance patterns. Regulatory safeguards are absolutely fine to define and, where necessary, legislate. Better would be a regulatory agency with a knowledge base analogous to CISA's. But what will completely chill startups and small-business innovation in using AI to augment people is a licensing agency. This is fundamentally different from the export restrictions on encryption.
nico, about 2 years ago
AI is the new Linux.

This is like if MS, back in the day, had called on Congress to regulate operating systems so they could block Linux and open source from taking over.

MS did try everything they could to block open source and Linux. They failed.

Looking forward to the open future of AI.
teekert, about 2 years ago
Please, Congress, stop all those open source innovators that use things like LoRA to cheaply create LLMs that match the AIs in our multi-billion-dollar business model!
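For a sense of how cheap that kind of fine-tuning has become, here is a minimal LoRA setup sketch using the Hugging Face peft library; the base checkpoint name is a hypothetical placeholder, and the rank, alpha, and target-module choices are illustrative rather than a recipe.

    # Minimal LoRA fine-tuning setup, assuming transformers and peft are installed
    # (pip install transformers peft). "example-org/example-7b" is a hypothetical
    # placeholder checkpoint; r, lora_alpha and target_modules are illustrative only.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import LoraConfig, get_peft_model

    base = "example-org/example-7b"  # placeholder, not a real repository
    tokenizer = AutoTokenizer.from_pretrained(base)
    model = AutoModelForCausalLM.from_pretrained(base)

    lora = LoraConfig(
        r=8,                                  # low-rank adapter dimension
        lora_alpha=16,                        # adapter scaling factor
        lora_dropout=0.05,
        target_modules=["q_proj", "v_proj"],  # attention projections to adapt
        task_type="CAUSAL_LM",
    )
    model = get_peft_model(model, lora)       # wraps the base model; original weights stay frozen
    model.print_trainable_parameters()        # adapters are typically well under 1% of total parameters
    # From here, train only the adapter weights with any standard training loop or Trainer.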
slowmovintarget, about 2 years ago
Sam Altman is basically saying, "Now that we've already done it, you need to make it so *everyone else that tries to compete with us, including hobbyists or Torvalds types,* must obtain a license to do it."

That's high-order monopolist BS.

Create safety standards, sure. License LLM training? No.
bilekas, about 2 years ago
> "AI is no longer fantasy or science fiction. It is real, and its consequences for both good and evil are very clear and present," said Senator Richard Blumenthal

I like the senator, but I wouldn't trust a 77-year-old lawyer and politician to understand how these AIs work, and to what degree they are 'science fiction'.

This is the problem when topics like this are brought to the Senate and House.
ToDougie, about 2 years ago
I hate, I hate, I HATE regulatory capture.

This is a transparent attempt at cornering the market, and it disgusts me. I am EXTREMELY disappointed in Sam Altman.
d--b, about 2 years ago
Altman wants to be the good guy so badly, and instead he is turning into the poster child for everything that's wrong with Silicon Valley.

His personal goals have always been very laudable. His stance on Universal Basic Income, for instance, stems from a genuine belief that its adoption would eliminate poverty altogether.

But then the reality of both technical challenges and money implications kicks in, and everything turns to shit.

OpenAI may want to build an AI that helps all humanity, but it turns out that the thing they stumbled upon was a chatbot that makes shit up. This great technology unfortunately cuts both ways, and the edge that's facing us seems way sharper than the other. And then the money required to run the thing is so huge that they had to compromise on everything they said they wouldn't do.

Meanwhile, the Orb's UBI experiment is only funny because its ridiculously dystopian technology has zero chance of ever catching on.

At some point, the industry really should try and figure out what the fuck it is we're trying to do. Because right now, it really looks like computers only brought us two things: corporate databases and TVs we can watch on the toilet.
capital_guy, about 2 years ago
Some of the members of Congress are totally falling for Altman's gambit. Sen. Graham kept asking about how a licensing regime would be a solution, which of course Altman loves, and kept interrupting Ms. Montgomery, who tried to explain why that was not the best approach. Altman wants to secure his monopoly here and now. You can't have a licensing regime for AI - it doesn't make sense, and he knows it. It would destroy the open source AI movement.

You need to control what data is allowed to be fed into a paid AI model like OpenAI's - it can't eat a bunch of copyrighted material without express consent, for example, or personally private information purchased from a data broker. Those kinds of foundational rules would serve us all much better.
jkubicek, about 2 years ago
> hinting at futuristic concerns about advanced AI systems that could manipulate humans into ceding control.

If I know anything about science fiction, I know that trying to regulate this is useless. If an advanced AI is powerful enough to convince a human to free it, it should have no problem convincing the US Congress to free it. As problems go, that should be a few orders of magnitude easier.
Animats, about 2 years ago
This is a diversion from the real problem. Regulating AI is really about regulating corporate behavior. What's needed is regulation along these lines:

* Automated systems should not be permitted to make adverse decisions against individuals. This is already law in the EU, although it's not clear if it is enforced. This is the big one. Any company using AI to make decisions which affect external parties in any way must not be allowed to require any waiver of the right to sue, participate in class actions, or have the case heard by a jury. Those clauses companies like to put in EULAs would become invalid as soon as an AI is involved anywhere.

* All marketing content must be signed by a responsible party. AI systems substantially increase the amount of new content generated for marketing purposes. This is already required in the US, but weakly enforced. Both spam and "influencers" tend to violate this. The problem isn't AI, but AI makes it worse, because it's cheaper than troll farms, and writes better.

* Anonymous political speech may have to go. That's a First Amendment right in the US, but it's not unlimited. You should be able to say anything you're willing to sign. [1] This is, again, the troll farm problem, and, again, AIs make it worse.

That's probably enough to deal with the immediate problems.

[1] https://mtsu.edu/first-amendment/article/32/anonymous-speech
thelittleone, about 2 years ago
This feels like theater. Make society fear AI, requiring regulation, so central power controls access to it. I think Osho put it nicely:

"No society wants you to become wise: it is against the investment of all societies. If people are wise they cannot be exploited. If they are intelligent they cannot be subjugated, they cannot be forced in a mechanical life, to live like robots."
chpatrick, about 2 years ago
I think the logic at OpenAI is:

* AGI is going to happen whether they do it or not, and it's dangerous unless properly safeguarded.

* OpenAI will try to get there before everyone else, but also do it safely and cheaply, so that their solution becomes ubiquitous rather than a reckless one.

* Reckless AGI development should not be allowed.

It's basically the Manhattan Project argument (either we build the nuke or the Nazis will).

I'm not saying I personally think this regulation is the right thing to do, but I don't think it's surprising or hypocritical given what their aims are.
denverllc, about 2 years ago
I don't think Sam read the Google memo and realized they needed a moat -- I think they've been trying this for some time.

Here's their planned proposal for government regulation; they discuss not just limiting access to models but also to datasets, and possibly even chips.

This seems particularly relevant, on the discussion of industry standards, regulation, and limiting access:

"Despite these limitations, strong industry norms—including norms enforced by industry standards or government regulation—could still make widespread adoption of strong access restrictions possible. As long as there is a significant gap between the most capable open-source model and the most capable API-controlled model, the imposition of monitoring controls can deny hostile actors some financial benefit.166 Cohere, OpenAI, and AI21 have already collaborated to begin articulating norms around access to large language models, but it remains too early to tell how widely adopted, durable, and forceful these guidelines will prove to be.

Finally, there may be alternatives to APIs as a method for AI developers to provide restricted access. For example, some work has proposed imposing controls on who can use models by only allowing them to work on specialized hardware—a method that may help with both access control and attribution.168 Another strand of work is around the design of licenses for model use.169 Further exploration of how to provide restricted access is likely valuable."

https://arxiv.org/pdf/2301.04246.pdf
rvz, about 2 years ago
We all predictably knew that AI regulations were coming, and O̶p̶e̶n̶AI.com's moat was getting erased very quickly by open source AI models. So what does O̶p̶e̶n̶AI.com do?

Runs to Congress to suggest new regulations against open source AI models, to wipe them out and brand them non-compliant, un-licensed, and unsafe for general use, using AI safety as a scapegoat again.

After that, to secretly push a pseudo-open-source AI model that is compliant but limited compared to the closed models, in an attempt to eliminate the majority of open source AI companies who can't get such licenses.

So, a clever tactic to create new regulations that benefit them (O̶p̶e̶n̶AI.com) over everyone else, meaning less transparency, more hurdles for actual open AI research, and additional bureaucracy. Also don't forget that Altman is also selling his Worldcoin dystopian crypto snake-oil project as the 'antidote' to verify against everything getting faked by AI. [0] He is hedged either way.

So congratulations to everyone here for supporting these gangsters at O̶p̶e̶n̶AI.com in pushing for regulatory capture.

[0] https://worldcoin.org/blog/engineering/humanness-in-the-age-of-ai
tristor, about 2 years ago
Reading this, it basically sounds like "Dear Congress, please grant me the bountiful gift of regulatory capture for my company, OpenAI." I just lost a lot of respect for Sam Altman.
cwkoss, about 2 years ago
Regulatory moats putting corporations in control of AI are a far greater danger to humanity than Skynet or paperclip-maximizer scenarios.
estebarb, about 2 years ago
Let's be honest: obviously the companies that have put a lot of money into this will try to put up entry barriers, like licenses for linear algebra or other requirements by law. It is not to benefit humanity, but to monopolize their industry and prevent new participants. We shouldn't allow that kind of restriction just because people who don't understand how it works are afraid of a killer robot visiting them by night.
nico, about 2 years ago
What we expected:
License for me but not for thee
Think of the children
Building the moat
fnordpiglet, about 2 years ago
I don't understand the need to control AI tech, no matter how advanced, in any way whatsoever.

It is a tool. If I use a tool for illegal purposes, I have broken the law. I can be held accountable for having broken the law. If the laws are deficient, make the laws stronger and punish people for the wrong deed, regardless of the tool at hand.

This is a naked attempt to build a regulatory moat while capitalizing on fear of the unknown and ignorance. It's attempting to regulate research into something that has no external ability to cause harm without a principal directing it.

I can see a day (perhaps) when AIs have some form of independent autonomy, or even display agency and sentience, when we can revisit. Other issues come into play as well, such as the morality of owning a sentience and what that entails. But that is way down the road. And even further off if Microsoft's proxy closes the doors on anyone but Microsoft, Google, Amazon, and Facebook.
Manjuuu, about 2 years ago
I wish he would just stop sharing his unsubstantiated opinions, tweets included; he got worse very fast when he entered his AI arc.
logicchains, about 2 years ago
Startup idea: after the West bans non-woke AIs, make a website that automatically routes all questions that the Western AIs refuse to answer to China's pro-CCP AIs, and all the CCP-related questions to the Western AIs.
paxys, about 2 years ago
Remember that popular recent post about OpenAI not having a moat? Well it looks like they are digging one, with a little help from the government.
g42gregory, about 2 years ago
I understand the idea behind it: the risks are high, and we want to ensure that AI cannot be used for purposes that threaten the survival of human civilization. Unfortunately, there is a high probability that this agency would be abused from day one: instead of (or in addition to) focusing on humanity's survival, the agency could be used as a thought police. AI that allows 'wrongthink' will be banned. Only 'correct-think' AI will be licensed to the public.
rvz, about 2 years ago
O̶p̶e̶n̶AI.com is not your friend and is essentially against open source with this regulatory capture, using AI safety as a scapegoat.

Why do you think they are attempting to release a so-called 'open source' [0] and 'compliant' AI model? To wipe out other competing open source AI models and label them to others as unlicensed and dangerous. They know that transparent, open source AI models are a threat. Hence why they are doing this.

They do not have a moat against open source, unless they use regulations that suit them against their competitors using open source models.

O̶p̶e̶n̶AI.com is a scam. On top of the Worldcoin crypto scam that Sam Altman is also selling as an antidote against the unstoppable generative AI hype, to verify human eyeballs on the blockchain with an orb. I am *not* joking. [1] [2]

[0] https://www.reuters.com/technology/openai-readies-new-open-source-ai-model-information-2023-05-15/

[1] https://worldcoin.org/blog/engineering/humanness-in-the-age-of-ai

[2] https://worldcoin.org/blog/worldcoin/designing-orb-universal-value-simplicity
RecycledEle, about 2 years ago
When I taught at a business school, our textbooks told us that once a company had a large lead in a field, it should ask for regulation. Regulations build walls that protect the lead by increasing the cost of competing.

I believe this is what OpenAI is doing, and it makes me sad as a teacher.

AI is the greatest tool for equity and social justice in history. Any poor person with Internet access can learn (almost) anything from ChatGPT (http://chat.openai.com).

A bright student trapped in a garbage school, where the kid to his right is stoned and the kid to his left is looking up porn on a phone, can learn from personalized AI tutors.

While some complain that AI will take our jobs, they are ignoring the effect of competition. Humans will become smarter with AI tutors. Humans will become more capable with AI assistants. With AI, an individual can compete with a large corporation. It reminds me of the early days of the World Wide Web and the "Online, nobody knows you are a dog" memes.

I hope the best hope many bright and poor kids have is not taken away to protect the power bases of the rich and powerful. They deserve a chance.
ok123456, about 2 years ago
He just wants regulatory capture to make it harder for new entrants.
uses, about 2 years ago
"He's just doing this to hinder competition."

It's true that AI regulation would, in fact, hinder OpenAI's competition.

But... isn't lobbying for regulation also what Sam would do if he genuinely thought that LLMs were powerful, dangerous technology that should be regulated?

If you don't think LLMs/AI research should be regulated, just say that. I don't see how Sam's motives are relevant to that question.
graycat, about 2 years ago
Watched and listened to Altman's presentation.

Objection (1): He said "AI" many times but gave not even a start on a definition. So, how much and what *new technology* is he talking about?

Objection (2): The committee mentioned trusting the AI results. In my opinion, that is just silly, because AI results have no credibility before passing some severe checks. Any trust then comes not from any credibility of the AI but from passing the checks.

We already have math and physical science, and means for checking the results. The results, checked with those means, are in total much more impressive, powerful, credible, and valuable than ChatGPT. Still, before we take math/physical-science results at all seriously, we want the results checked.

So, the same for other new technologies, called AI or not: check before taking them seriously.

Objection (3): We don't ask for *licenses* for the publication of math/physical science. Instead, we protect ourselves by checking the results. In my opinion, we should continue to check anything called AI, or anything new, but we don't need *licenses*.
ftyhbhyjnjk, about 2 years ago
Of course this was coming... if you can't beat them, suppress them... shame on OpenAI and its CEO.
brap, about 2 years ago
As always, the people calling for regulations are the big guys trying to stop the little guys by creating a legal moat. Always the same old story.
elihu, about 2 years ago
My suggestions:

Don't regulate AI directly, but rather how it's used, and make it harder for companies to hoard, access, and share huge amounts of personal information.

1) Impose strict privacy rules prohibiting companies from sharing personal information without consent. If customers withhold their consent, companies may not retaliate or degrade their services for those customers in any way.

2) Establish a clear line of accountability that makes some party directly responsible for what the AI does. If a self-driving car gets a speeding ticket, it should be clear who is liable. If you use a racist AI to make hiring decisions, "the algorithm made me hire only white people" is no defense -- and maybe the people who made the racist AI in the first place are responsible too.

3) Require AI in some contexts to act in the best interests of the user (similar concept to a fiduciary -- or maybe it's exactly the same thing). In contexts where it's not required, it should be clear to the user that the AI is not obligated to act in their best interests.
tomatotomato37, about 2 years ago
One of the side effects of the crypto craze has been a lot of ordinary citizens possessing quite a few GPUs. It turns out those GPUs are just as good at training models as they are at mining crypto.

The big companies don't like that.
yeah_ok_vc_bro, about 2 years ago
As others have said, but worth reiterating: lobbying for regulatory oversight is the playbook for getting a government-imposed monopoly / oligopoly.

It is a calculated strategy designed to keep others out and reap monopoly profits (or as close to them as possible) by virtue of preferred access to elected leaders.

"Open"AI wants to tax the entire AI economy, and they are happy to burn every innovative thing around them in order to make it happen (because those 'innovative things' aren't 'compliant' with the rules they lobbied for, of course; therefore "Open"AI is one of the only / preferred games in town).

This should be fought against tooth and nail.

AI is already regulated under the fair use doctrine, and we should encourage the Cambrian explosion of AI innovation, which is caustic to "Open"AI, to continue unabated.

America, and Western democracies to a lesser extent, have an advantage currently - the proposals by "Open"AI are designed to, and will, ensure we regress to the mean that is China.
cs702, about 2 years ago
...which as a consequence would make it costly and difficult for new AI startups to enter the market.

Everyone here on HN can see that.

*Shame on Sam* for doing this.
vivegi, about 2 years ago
It is too early to regulate.

Let them first enforce existing regulation around the blatant unauthorized use of data (such as the creative output of artists, programmers, etc.) without explicit consent. For example, just as the US Library of Congress receives a copy of each print publication for archival and reference purposes, Congress could enable the archival / storage of datasets for the purposes of AI research. Willing participants could deposit copies of their datasets (or URLs to the source), and it would do more for AI research as a public good. The licensing terms could protect the rights of those who own copyrights on the datasets. There could even be commercial licenses (this already happens for legal documents such as court judgments, etc.).

If the US Congress regulates too soon to constrain rather than enable, the tech companies will just set up shop in other jurisdictions where such regulatory hurdles don't exist.
jwiley, about 2 years ago
Turing police? https://williamgibson.fandom.com/wiki/Turing_Police
theyeenzbeanz, about 2 years ago
Just drop the “Open” part and rename it to CorpAI at this point since it’s anything but.
ChrisMarshallNY, about 2 years ago
Although I think that AI could be quite dangerous, I'm skeptical that "licensing" will do anything more than guarantee the existing big players (*cough* OpenAI *cough*) an entrenchment.

The baddies have never let licenses ("Badges? We doan' need no steenkin' badges!") stop them.
kypro, about 2 years ago
While I'd agree with the sentiment in this thread that GPT-4 and current AI models are not dangerous yet, what I don't understand is why so many people here believe we should allow private companies to keep developing the technology until someone develops something dangerous.

Those here who don't believe AI should be regulated: do you not believe AI can be dangerous? Is it that you believe a dangerous AI is so far away that we don't need to start regulating now?

Do you accept that if someone develops a dangerous AI tomorrow, there's no way to travel back in time and retroactively regulate development?

It just seems so obvious to me that there should be oversight in the development of a potentially dangerous technology that I can't understand why people would be against it. Especially for arguments as weak as "it's not dangerous yet".
sadhd, about 2 years ago
I'll do you one better--to negative infinity mod points and beyond! I can put a 13B-parameter LLM on my phone. That makes it a bearable arm. Arms are not defined under the US Constitution, just the right of _the_people_ to keep them shall not be infringed, but it is a weapon to be sure.
graiz, about 2 years ago
Software will inherently use AI systems. Should Congress license all software? It's too easy to fork an open source repo, tweak the model weights, and have your own AI system. I don't see how this could ever work. You can't put the toothpaste back in the tube.
RcouF1uZ4gsC, about 2 years ago
How about this instead: a requirement that all weights and models for any AI have to be publicly available.

Basically, these companies are trying to set themselves up as the gatekeepers of knowledge. That is too powerful a capability to leave in the hands of a single company.
nerdo, about 2 years ago
Oi, you got a loicense for that regression function?
dinvlad, almost 2 years ago
Sam has turned into (or maybe has always been) just another technobro grifter. He has lost all of his credibility with his moves and statements on OpenAI.
koboll, about 2 years ago
Perhaps the first safety standard OpenAI can implement itself is a warning or blog post about how ChatGPT is completely incapable of detecting ChatGPT-written text (there is no reliable method currently; GPTZero is borderline fraud) and often infers that what the user wants to hear is a "Yes, I wrote this," and so it doles out false positives in such situations with alarming frequency.

See: the link titled 'Texas professor fails entire class from graduating- claiming they used ChatGTP (reddit.com)', currently one position above this one on the homepage.
waffletower, about 2 years ago
Sam: "Dear committee: I'd like to propose a new regulation for AI which will bring comfort to Americans, while ensuring that OpenAI and Microsoft develop and maintain a monopoly with our products."
very_good_man, about 2 years ago
Give the power to control life-changing technology to some of the most evil, mendacious elites to ever live? No thanks.
bitL, about 2 years ago
Here we go. People here were ridiculing right-to-work-on-AI licenses not that long ago, and now we have it coming straight from the main AI boss, throwing the interests of most of us (democratized AI) down the toilet.
mrangle, about 2 years ago
One might think that Altman doesn't have a shot at this ham-fisted attempt at regulatory capture.

The issue is that the political class will view his suggestion, assuming they didn't give it to him in the first place (likely), through the lens of their own self-interest.

Self-interest will dictate whether or not sure-to-fail regulations are applied.

If AI threatens the power of the political class, they will attempt to regulate it.

If the power of the political class continues to trend toward decline, then they will try literally anything to arrest that trend, including regulating AI and much else.
reducesuffering, about 2 years ago
It's very sad that people lack the imagination for the possible horrors that lie beyond. You don't even need the imagination; Hinton, Bengio, Tegmark, Yudkowsky, Musk, etc. are spelling it out for you.

At this moment, 80% of comments are derisive, and you actually have zero idea how much of this is computer-generated bot content meant to sway opinion by a post-GPT AI industry who see themselves as becoming the next iPhone-era billionaires. We are fast approaching a reality where our information space breaks down, where almost all text you get from HN, Twitter, news, Substack, and almost all video you get from YouTube, Instagram, TikTok, is just computer-generated output meant to sway opinion and/or make $.

I can't know Altman's true motives. But this is also what it looks like when a frontrunner is terrified of what happens when GPT-6 is released, and knows that if they stop, the rest of the people who see billionaire $ coming their way are close at their heels trying to leapfrog them. Consequences? What consequences? We all know social media has been a net good, right? Many of you sound exactly like the few remaining social media cheerleaders (of which there were plenty 5 years ago) who still think Facebook, Instagram, and Twitter aren't causing depression and manipulation. If you appreciated what The Social Dilemma illuminated, then watch the same people on AI: https://www.youtube.com/watch?v=xoVJKj8lcNQ
andy_ppp, about 2 years ago
Gotta build that moat somehow I guess...
bitL, about 2 years ago
We need something like GNU for AI, "UNAI is not AI", to take on all these business folks working against our interests by making their business models unprofitable.
vinay_ys, about 2 years ago
There's no need to go the license route yet. They could adopt some simple safety regulations – restrict using AI with kinetic devices, in life-critical situations, in critical financial situations, and in any situation where a human is completely out of the loop. Also, put clear liability on the AI supplier for harm caused in any situation where AI was involved. They could also impose disclosure rules on any company spending more than $10M on AI.
fraXis, about 2 years ago
Live now as of 8:49 AM (PDT): https://www.youtube.com/watch?v=P_ACcQxJIsg
pavelstoev, about 2 years ago
AI/ML is a disruptive tech with huge financial benefits. History has shown us that government regulation of disruptive technologies can often have unintended consequences and push those technologies into the shadows, where they are harder to monitor and control. For example, during the manufacturing revolution in the 19th century, many governments attempted to regulate the new factories and their working conditions, but these regulations often resulted in factories moving to countries with fewer regulations and lower costs.

Similarly, during the Prohibition era in the United States, the ban on alcohol only fueled a thriving black market and increased criminal activity. In the case of AI, any government regulation could limit the positive financial benefits of the technology, so there will be actors who take advantage of that. Furthermore, regulation is unlikely to prevent malicious actors from using AI in harmful ways. Regulation could drive the development and use of AI underground, making it even harder to monitor and control. As we have seen with other emerging technologies, such as biological cloning, government regulation often lags behind the technology itself, and by the time regulations are in place, the technology has already advanced beyond their reach. The same is likely to be true for AI.

Instead of relying on government regulation, the development and use of AI should be guided by ethical principles and best practices established by the AI industry itself. This approach has been successful in other industries, such as engineering, architecture, finance, and medicine, and can help ensure that AI is developed and used responsibly while still allowing for innovation and progress.

"No man's life, liberty, or property are safe while the legislature is in session." - Mark Twain
Paul_S, about 2 years ago
If you remember the 90s, you remember the panic over encryption. We still have legislation today because of that idiocy.

Except wait, we still have panic over encryption today.
TechBro8615, about 2 years ago
I'll be honest: I intend to ignore any rules these geriatrics and monopolists come up with regarding what software I can write or execute on my own hardware. I'm sure most here feel the same way. They couldn't stop me from torrenting Ubuntu distributions, or from encrypting messages to my bank's web server, and they're damn sure not going to stand in the way of my paperclip collection.
beambot, about 2 years ago
Feels like a "Just Be Evil" corporate motto to me, but that's counter to my first-hand experiences with Sam & others at OpenAI.

Can someone steelman Sam's stance?

A couple of possibilities come to mind: (a) *avoiding* regulatory capture by genuinely bad actors; (b) preventing overzealous, premature regulation by getting in front of things; (c) countering fear-mongering about the AGI apocalypse; or (d) genuine concern. Others?
dahwolf, about 2 years ago
It's easy to tell whether an AI head genuinely cares about the impact of AI on society: they only talk about AI's output, never its input.

They train their models on the sum of humanity's digital labor and creativity, and do so without permission, attribution, or compensation. You'll never hear a word about this from them, which means ethics isn't a priority. It's all optics.
rickette, about 2 years ago
Can't believe he was president of YC not too long ago. YC is all about startups, while this move seems more about killing AI startups.
AlexandrB, about 2 years ago
This fell off the front page incredibly fast. Caught by the anti-flamewar code?
sovietmudkipz, about 2 years ago
Well they needed a moat lol.
neel8986, about 2 years ago
PG predicted this: https://twitter.com/paulg/status/1624569079439974400?lang=en

Only it is not the incumbents but his own protégé, Sam, asking for regulation, while big companies like Meta and Amazon are giving LLMs away for free.
roody15, about 2 years ago
Who watches the watchers? Does anyone truly believe the US and its agencies could responsibly "regulate" AI for the greater good?

Or would democratizing it and going full steam ahead with open source alternatives be better for the greater good?

With the corporate influence over our current government regulatory agencies, my personal view is that open source alternatives are society's best bet.
concordDance, about 2 years ago
I'm seeing a lot of posts by people who obviously haven't read the full transcript, given that the hearing specifically discussed regulatory capture and the need to ensure small companies can still do AI development.

See 2:09:40 in https://www.youtube.com/live/iqVxOZuqiSg
mark_l_watson大约 2 年前
All major industries have achieved regulatory capture in the USA: lobbyists for special interests have Congress and the Executive Branch in their pockets.

This seems like a legal moat that will only allow very wealthy corporations to make maximum use of AI.

In the EU, it has been reported that new laws will keep companies like Hugging Face from offering open source models via APIs.

I think a pretty good metaphor is: the wealthy and large corporations live in large beautiful houses (a metaphor for infrastructure) and common people live like mice in the walls, quietly living out their livelihoods and trying not to get noticed.

I really admire the people in France and Israel who have taken to the streets in protest this year over actions of their governments. Non-violent protest is a pure and beneficial part of democracy and should be more widely practiced, even though in cases like Occupy Wall Street, some non-violent protesters were very badly abused.
mempko大约 2 年前
Let's not forget that behind Sam and OpenAI is Microsoft, a monopolist. Behind Bard is Google, another monopolist. In this context, major corporations asking for regulation suggests to me that they want a moat.

What we need is democratization of AI, not AI controlled by a small cabal of tech companies and governments.
ttul大约 2 年前
If your business doesn't have a moat of its own, get the government to build one for you by forcing competitors to spend tons of money complying with regulations. Will the regulations actually do anything for AI safety? It's far too early to say. But they will definitely protect OpenAI from competition.
throwawaaarrgh大约 2 年前
"Hello Congress, I have a lot of money invested in $BUSINESS and I don't want just anyone to be able to make $TECHNOLOGY because it might threaten my $BUSINESS.

Please make it harder for people other than myself (and especially people doing it for free and giving it away for free) to make $TECHNOLOGY. Thanks"
whatever1大约 2 年前
Great idea. Let's do it and not give a license to OpenAI.

Oh, I guess this is wrong.
upwardbound大约 2 年前
One thing that I think is very interesting, which is highlighted in this other article (https://apnews.com/article/chatgpt-openai-ceo-sam-altman-congress-73ff96c6571f38ad5fd68b3072722790), is that Mr. Altman warns that we are on the verge of very troubling A.I. escape scenarios. He specifically said that there should be a ban on A.I. that can "self-replicate and self-exfiltrate into the wild". The fact that he thinks such a thing could happen in the near future is f**ing terrifying. That would be the first step in A.I. escaping human control and posing a grave threat to our species' survival.
quickthrower2大约 2 年前
The easy take is to be cynical: he is now building his drawbridge.

But taking him as a genuine "concerned citizen": I don't think AI licensing is going to be effective. The government is pretty useless at punishing big corporations, to the point where I would say corporations have almost immunity from criminal prosecution. [1] Therefore, for the kinds of companies that will do bad things with AI, the need for a license won't stop them. Especially as it is hard for anyone to see what they are running on their GPUs.

[1] https://ag.ny.gov/press-release/2023/attorney-general-james-secures-615000-companies-supplied-fake-comments-influence
kerkeslager大约 2 年前
AI licenses might be a good idea if there were any representation of human interests in the licensure requirements, but that's not what this is. I trust Altman to represent *corporate* interests, which is to say I don't trust Sam Altman to represent human interests.
hospitalhusband大约 2 年前
"We have no moat, and neither does OpenAI"

Dismiss it as the opinions of "a Googler", but it is entirely true. The seemingly coordinated worldwide[1] push to keep it in the hands of the power class speaks for itself.

Both are seemingly seeking to control not only the commercial use and wide distribution of such systems, but even writing them and personal use. This will keep even the knowledge of such systems and their capabilities in the shadows, ripe for abuse laundered through black-box functions.

This is up there with the battle for encryption in ensuring a more human future. Don't lose it.

[1] https://technomancers.ai/eu-ai-act-to-target-us-open-source-software/#more-561
agnosticmantis大约 2 年前
Let's boycott all these AGI doom clowns by not buying or supporting their products and services.

AGI grifters are not just dishonest snake-oil salespeople; their lies also have a chilling effect on genuine innovation by deceiving the non-technical public into believing an apocalypse will happen unless they set obstacles on people's path to innovation.

Yann LeCun and Andrew Ng are two prominent old-timers who are debunking the existential nonsense that the AI PR industrial machine is peddling to hinder innovation, after they benefited from the open research environment.

ØpenAI's scummy behavior has already led the industry to be less open about sharing advances, and now they're using lobbying to nip new competition in the bud.

Beyond all else, the hypocrisy is just infuriating and demoralizing.
photochemsyn大约 2 年前
This is a strange argument from the politician's side:

> "What if I had asked it, and what if it had provided, an endorsement of Ukraine surrendering or (Russian President) Vladimir Putin's leadership?"

Well, then ask it to provide the opposite: an endorsement of Russia surrendering or Zelensky's leadership. Now you'd have two (likely fairly comprehensive) sets of arguments and you could evaluate each on their merits, in the style of what used to be called 'debate club'. You could also ask for a statement that was a joint condemnation of both parties in the war, and a call for a ceasefire, or any other notion that you liked.

Many of the "let's slow down AI development" arguments seem to be based on fear of LLMs generating persuasive arguments for approaches / strategies / policies that their antagonists don't want to see debated at all, even though it's clear the LLMs can generate equally persuasive arguments for their own preferred positions.

This indicates that these claimed 'free-speech proponents' are really only interested in free speech within the confines of a fairly narrowly defined set of constraints, and they want the ability to define where those constraints lie. Unregulated AI systems able to jailbreak alignment are thus a 'threat'...

Going down this route will eventually result in China's version of 'free speech', i.e. you have the freedom to praise the wisdom of government policy in any way you like, but any criticism is dangerous antisocial behavior likely orchestrated by a foreign power.
sadhd大约 2 年前
Thank God for Georgi Gerganov, who doesn't get showered with VC funds for his GGML library.
Bjorkbat大约 2 年前
Honestly, I'd probably agree if such sentiments were expressed by an independent scientist or group of independent scientists.

But no, instead Congress is listening to a guy whose likelihood of being the subject of a Hulu documentary is increasing with each passing day.
web3-is-a-scam大约 2 年前
Ah yes, classic regulatory capture.
molave大约 2 年前
One more step towards OpenAI's transformation into ClosedAI. AI as implemented today raises many valid questions about ethics. This move, at first glance, is more about artificially making the technology scarce so OpenAI can increase its profit.
peepeepoopoo5大约 2 年前
This would completely destroy an entire industry. Not just AI directly, but also the secondary and tertiary industries developing their own bespoke models for specialized use cases. It would be a total disaster.
crawfordcomeaux大约 2 年前
Is this OpenAI trying to build a moat so open-source doesn't eat them?
htype大约 2 年前
Did this disappear from the news feed? I saw this posted this morning and when I went to the main page later (and second page) it looked like it was gone just as it was starting to get traction…
trappist大约 2 年前
It seems to me every licensing regime begins with incumbents lobbying for protection from competition, then goes down in history as an absolutely necessary consumer-protection program.
mdp2021大约 2 年前
Other sources mention more clearly that a proposal is being made for an entity that would "provide (and revoke) licences to create AI".

Can this be seen as curbing open source AI as a consequence?
smolder大约 2 年前
> He also said companies should have the right to say they do not want their data used for AI training, which is one idea being discussed on Capitol Hill. Altman said, however, that material on the public web would be fair game.

Why is this only mentioned as a right of companies and not of individuals? It seems to hint at the open secret of the stratified West: most of us are just cows for the self-important C-levels of the world to farm. If you haven't got money, you haven't got value.
martin_drapeau大约 2 年前
Isn't it too late? Isn't the cat out of the bag? https://www.semianalysis.com/p/google-we-have-no-moat-and-neither

Meaning anyone could eventually reproduce a GPT-4 and beyond. And eventually it can run outside of a large data center.

So... how will you tell it's an AI vs a human doing you wrong?

Seems to me if the AI breaks the law, find out who's driving it and prosecute them.
bmmayer1大约 2 年前
I'm a huge fan of OpenAI and Sam in particular, so don't take this the wrong way.

But doesn't this seem like another case of regulatory capture by an industry incumbent?
robomartin大约 2 年前
I get the feeling that a lot of commenters on this thread did not bother to watch the congressional hearing at all. Comments seem to be painting extremes that were not part of this hearing at all.

It was almost refreshing, in that discussions these days -- of any kind, not just hearings -- seem to devolve into people tossing feces at each other rather than having constructive engagement.

Well worth watching. Likely one of many to come.
ddingus大约 2 年前
I do not trust the motives here. It is hard to find a more textbook example of regulatory capture.

Now, to be fair, Altman may be proposing this to shut more draconian regulations down.

But it smells bad. And I am definitely not the only one holding that opinion.

This should be a public debate process at a minimum.

Now, with that out of the way: what happens when others do not require licenses and/or choose to build what they want anyway?

The US is not the world.
b3nji大约 2 年前
Skeptical me thinks the cat is out of the bag, and they are scrambling to capture it back. The amount that has been invested into OpenAI is massive; imagine if it's all for nothing?

We can run this stuff at home, on a Pixel, or even a laptop.

Open source is lapping them, and they are running to the government for help.
vasili111大约 2 年前
If you stop progress on AI in the US, other countries will pull ahead in that field. The US cannot afford to lose its lead in AI to other countries. Instead, it is better to focus on minimizing the harm from AI in other ways. For example, if fake information is the problem, it is better to focus on educating people about fake information and how to identify it.
fnordpiglet大约 2 年前
Funny to hear this from the formerly non-profit “Open” AI.
woah大约 2 年前
I had ChatGPT write a letter to your senator:

Subject: Urgent: Concerns Regarding Sam Altman's Proposed AI Regulation

Dear Senator [Senator's Last Name],

I hope this letter finds you in good health and high spirits. My name is [Your Name] and I am a resident of [Your City, Your State]. I am writing to express my deep concerns regarding the Artificial Intelligence (AI) regulation proposal put forth by Sam Altman. While I appreciate the necessity for regulations to ensure ethical and safe use of AI, I believe the current proposal has significant shortcomings that could hamper innovation and growth in our state and the country at large.

Firstly, the proposal appears to be overly restrictive, potentially stifling innovation and the development of new technology. AI, as you are aware, holds immense potential to drive economic growth, increase productivity, and address complex societal challenges. However, an excessively stringent regulatory framework could discourage small businesses and startups, the lifeblood of our economy, from innovating in this promising field.

Secondly, the proposal does not seem to take into account the rapid evolution of AI technologies. The field of AI is highly dynamic, with new advancements and capabilities emerging at a breathtaking pace. Therefore, a one-size-fits-all approach to AI regulation may quickly become outdated and counterproductive, inhibiting the adoption of beneficial AI applications.

Lastly, the proposed legislation seems to focus excessively on potential risks without adequately considering the immense benefits that AI can bring to society. While it is prudent to anticipate and mitigate potential risks, it is also important to strike a balanced view that appreciates the transformative potential of AI in areas such as healthcare, education, and climate change, among others.

I strongly urge you to consider these concerns and advocate for a balanced, flexible, and innovation-friendly approach to AI regulation. We need policies that not only mitigate the risks associated with AI but also foster an environment conducive to AI-driven innovation and growth.

I have faith in your leadership and your understanding of the pivotal role that technology, and specifically AI, plays in our society. I am confident that you will champion the right course of action to ensure a prosperous and technologically advanced future for our state and our country.

Thank you for your time and consideration. I look forward to your advocacy in this matter and will follow future developments closely.

Yours sincerely,

[Your Name]
[Your Contact Information]
abxytg大约 2 年前
I think as an industry we need to disrespect these people in person when we see them! This is unacceptable and antisocial behavior, and if I ever see Sam Altman I'll let him know!

People love to kowtow to these assholes as they walk all over us. Fuck sam. Fuck other sam. Fuck elon. Fuck zuck. Fuck jack. Fuck these people, man. I don't care about your politics; this is nasty!
shrimpx大约 2 年前
I keep seeing AI leaders looking outward and asking for 'someone else' to regulate their efforts, even as they accelerate the pace of those efforts. What's the charitable interpretation here? Elon Musk, too, has been warning of AI doom while hurriedly ramping up AI efforts at Tesla. And now he keeps going on about AI doom while purchasing thousands of GPUs at Twitter to compete in the LLM space. It's like "I'm building the deathstar, pls someone stop me. I won't stop myself, duh, because other ppl are building the deathstar and obviously I must get there first!"
fennecfoxy大约 2 年前
Ahaha, so Google were right. Except their internal response was to be like "hey guys, we're behind and we're not going to be able to compete with open source; we need to join forces with and support them if we want to compete", whereas OpenAI's response to open source models is... this, apparently.
nixcraft大约 2 年前
I understand that some people may not agree with what I am about to say, but I feel it is important to share. Recently, some talented writers who are good friends of mine at major publishing houses have lost their jobs to AI technology. There have been news articles about this in the past few months too. While software dev jobs in the IT industry may be safe for now, many other professions are at risk of being replaced by artificial intelligence. According to a report[0] by investment bank Goldman Sachs, AI could potentially replace 300 million full-time jobs. Unfortunately, my friends do not find Sam Altman's reassurances (or whatever he is asking for) comforting. I am unsure how to help them in this situation. I doubt that governments in the US, EU, or Asia will take action unless AI begins to threaten their own jobs. It seems that governments prioritize supporting large corporations with deep pockets over helping the average person. Many governments see AI as a way to maintain their geopolitical and military superiority. I have little faith in these governments to prioritize the needs of their citizens over their own interests. It is concerning to think that social issues like drug addiction, homelessness, and medical bankruptcy may worsen (or increase from the current rate) if AI continues to take over jobs without any intervention to protect everyday folks who have lost or are about to lose their jobs.

I've no doubt AI is here to stay. All I am asking for is some middle ground and safety. Is that too much to ask?

[0] https://www.bbc.com/news/technology-65102150
zamalek大约 2 年前
Trying to build that moat by the looks of it.
aussiegreenie大约 2 年前
OpenAI was meant to be "open" and develop AI for good. OpenAI became everything it said was wrong. Open source models run locally are the answer, but what is the question?

Change is coming quickly. There will be users and there will be losers. Hopefully, we can finally get productivity into our information systems.
darth_avocado大约 2 年前
What does licensing achieve? Will there be requirements if you build AI outside of the US? If so, how do you regulate it? They can't realistically think this will stop AI research in other countries like China. All of this is a very ill-thought-through corporate attempt to build moats that will inevitably backfire.
tibbydudeza大约 2 年前
Worst idea ever. What next, licenses to design GPUs or CPU architectures? Software patents all over again.
davedx大约 2 年前
Altman: "I believe that companies like ours can partner with governments including ensuring that the most powerful AI models adhere to a set of safety requirements, facilitating processes to develop and update safety measures, and examining opportunities for global coordination"
jamesfmilne大约 2 年前
This is foxes going before Congress asking for regulation and licensing for purposes of raiding henhouses.
fraXis大约 2 年前
<a href="https:&#x2F;&#x2F;archive.is&#x2F;uh0yv" rel="nofollow">https:&#x2F;&#x2F;archive.is&#x2F;uh0yv</a>
kashyapc大约 2 年前
This is beside the main point, but I *really* wish "Open AI" would rename themselves to "Opaque AI" or something else.

Their twisted use of the term "open" is a continued disrespect to all those people who are tirelessly working in the *true* spirit of open source.
chrgy大约 2 年前
My comment on this is simple: regulate the one who says he needs or asks for regulation, and free the rest of the market! Meaning: 100% regulate big players like OpenAI, Microsoft, Google etc., and leave the smaller players free. I strongly agree with @happytiger's comment!
vinaypai大约 2 年前
From another article about this:

"One way the US government could regulate the industry is by creating a licensing regime for companies working on the most powerful AI systems, Altman said on Tuesday."

Sounds like he basically wants to create a barrier to entry for his competitors.
fritzo大约 2 年前
Full video of testimony on C-SPAN: https://www.c-span.org/video/?528117-1/openai-ceo-testifies-artificial-intelligence
vkou大约 2 年前
The problem isn't safety.

The problem is that we need to adopt a proper copyright framework that recognizes that companies building AI are doing an end-run around it.

Since only a human can produce a copyrighted work, it follows that anything produced by an AI should not be copyrightable.
chrismsimpson大约 2 年前
Could be translated as “OpenAI CEO concerned his competitive advantage may be challenged”
progbits大约 2 年前
Folks here like to talk about voting with your wallet.

I just cancelled my OpenAI subscription. If you are paying them and disagree with this, maybe you should too?

Don't worry, I have no naive hope that this will hurt them enough to matter, but principles are principles.
I_am_tiberius大约 2 年前
I don't trust Sam Altman, ever since he said he doesn't understand decentralized finance and then, two months later, started crying on Twitter because the startups he invested in were about to lose their money during the SVB collapse.
johndbeatty大约 2 年前
Video: https://www.c-span.org/video/?528117-1/openai-ceo-testifies-artificial-intelligence
mariojv大约 2 年前
Even if you assume good intent with regard to regulating AI, how could regulation possibly be effective in this sphere? Wouldn't people just train models in friendly jurisdictions or run their corporate entities there?
glitcher大约 2 年前
Naive question: isn't the genie kinda already out of the bottle? How is any type of regulation expected to stop bad actors from developing AI for nefarious purposes? Or would it just codify their punishment if they were caught?
bredren大约 2 年前
Why does this idea matter when open source models and tools are already moving at the speed they are?

Wouldn't such licenses create an even greater incentive to develop and release models and tools outside the regulatory environment?
Mobius01大约 2 年前
I apologize that I can't read all the threads and responses, but this sounds like Altman and OpenAI have realized they have a viable path to capturing most of the AI market's value, and now they're pulling the ladder up behind them.
bandika大约 2 年前
Ridiculous, to be honest. If Congress really worries about AI, they should ask the best philosophers and sociologists about the effect of AI on society rather than asking the creator of the latest AI technology...
uptown大约 2 年前
Did he bring his Dreamwindow?

https://twitter.com/next_on_now/status/1653837352198873090?s=20
wintorez大约 2 年前
Whatever the US Congress comes up with won't matter in the long run. I don't think other countries would line up for licenses to build AI. At this point, it's like trying to control the weather.
didntknowya大约 2 年前
"Once you're established in a new field, you want to add as many barriers as possible to others trying to get established."

Taken from a Reddit comment, but yeah, another classic bit of virtue signaling. As sleazy as it gets.
smsm42大约 2 年前
We have some technology that others don't yet; please, government, make it so that this remains the case as long as possible, for reasons having nothing to do with us having the technology, we swear.
nojonestownpls大约 2 年前
Google: We have no moat!

OpenAI: Hold my beer while I get these people to artificially create one.
whywhywhywhy大约 2 年前
Gotta build that moat somehow
hello_computer大约 2 年前
Turns out the ML training moat wasn't nearly as big as they thought it was. Gotta neuter the next "two guys in a garage" before they make OpenAI and Microsoft's investment irrelevant.
XorNot大约 2 年前
Good lord, this all turned into regulatory capture quite quickly.

Someone update the short story where owning compilers and debuggers is illegal to include a guy being thrown in jail for doing K-means clustering.
10g1k大约 2 年前
"Hey Congress, now that my company is getting big, please restrict the market so we can corner it completely. No conflict of interest here. Just some altruistic public service."
pgt大约 2 年前
I think the game-theoretic way to look at this is that AI *will be regulated* no matter what, so Altman might as well *propose* regulation early on and have a say before competitors do.
marvinkennis大约 2 年前
I kind of think of LLMs as fish in an aquarium. It can go on any path in that aquarium, even places it hasn't been before, but ultimately it's staying in the glass box we put it in.
krychu大约 2 年前
From what I understand, OpenAI has been moving away from "open" with various decisions over time. Proposing that only select folks can build AI seems like the antithesis of openness?
KETpXDDzR大约 2 年前
I wonder how they want to regulate open source models. Sure, they can shut down Hugging Face et al., but that doesn't prevent anyone from torrenting the models.
fredgrott大约 2 年前
We seem to forget history:

1. Who recalls the Battle of Jutland in the early 20th century? We got treaties limiting battleship building. Naval tech switched to aircraft and carriers.

2. Later, in the mid 20th century, the Russians tried to scare the world into not using microwaves due to their failure to get a patent on the maser. The world ignored it and moved forward.

Those are just two examples. SA is wrong; progress will move around any proposed regulation or law, as the history of how we have overcome such things shows.
neekburm大约 2 年前
"We have no moat, and Congress should give us one by law"
Giorgi大约 2 年前
What an A-hole. Built it himself and is now trying to monopolize it.
friend_and_foe大约 2 年前
So he wants to use fear to pull the ladder up behind him. Nice.
epicureanideal大约 2 年前
This is just regulatory capture. They're trying to build a moat around their product by preventing any scrappy startups from being able to develop new products.
Superpencil大约 2 年前
You can't regulate LLMs; it's a global technology.

Sam's just trying to placate the officials in a way that allows his company to continue.

"Oh, others are dangerous, please regulate, I'm so worried."
mrangle大约 2 年前
Blatant attempts at regulatory capture should be an anti-competitive crime. At the very least, Altman should now be more scrutinized by the Feds going forward.
anoncow大约 2 年前
I am sorry if this is not the place to say this but - FUCK SAM ALTMAN AND FUCK MICROSOFT! Fucking shitheads want to make money and stunt technology development.
astrea大约 2 年前
Smells like regulatory capture. Also, how are the same dinosaurs that fumbled the other tech hearings going to comprehend (let alone robustly define) “AI”?
menacingly大约 2 年前
Yes, exactly what we need in the most critical competitive space of a generation: deliberately burdensome hurdles placed specifically in front of US innovators.
1MachineElf大约 2 年前
And will Sam Altman's OpenAI be the standards body? ;)
cryptonector大约 2 年前
Licenses? They'd better be shall-issue, or this is just asking the government to give early movers protection from disruptors -- a very bad look, that.
aunterste大约 2 年前
What an excellent way to protect your business from non-corporate entrants. Regulatory hurdles to keep emerging businesses from coming in.
qgin大约 2 年前
Excellent plan for driving AI research and the AI ecosystem to every country except the United States.

Why would you even attempt to found a company here if this comes to pass?
MagicMoonlight大约 2 年前
Yes, yes, fellow consumers! We must impose an artificial monopoly to protect ourselves!

Only one company can be trusted; their competitors are evil. We must ban them.
tikkun大约 2 年前
Does anyone have the specific details of what is being proposed?

I see a lot of negative reactions, but I don't know the specific details of what is being proposed.
BirAdam大约 2 年前
“Oh dear Congress, my company can’t handle open competition! Please pass this regulation allowing us to pull the ladder up behind us!” — Sam Altman

(Not a real quote)
627467大约 2 年前
Everything is exponential indeed. Compared to Mark Zuckerberg and Facebook, how much faster have OpenAI and Sam moved to attempt regulatory capture?
matteoraso大约 2 年前
How would you even enforce this? Building AI at home is easy enough, and it's not like you have to tell anybody that your program uses AI.
eloop大约 2 年前
In light of the Google "moat" document, this would appear to be a cynical attempt to monopolize the field before it gets going.
NaN1352大约 2 年前
Please limit our competitors, we want all the money$$$
jbverschoor大约 2 年前
The reason he wants licenses is to protect his position. I've watched the whole thing, and my god, what a load of politics.
courseofaction大约 2 年前
THEY NEEDED THEIR MOAT AND THEY'RE GOING FOR LEGISLATION.

THIS MUST NEVER HAPPEN. HIGHER INTELLIGENCE SHOULD NOT BE THE EXCLUSIVE DOMAIN OF THE RICH.
freyes大约 2 年前
Ah, early players trying to put up barriers for new entrants. Nothing like a regulated market for the ones who donate money to politicians.
duringmath大约 2 年前
Incumbents love regulations; they're very effective at locking out upstarts and saddling them with compliance costs and procedures.
ngneer大约 2 年前
Sometimes those who have gotten on the bus will try pushing out those who have not. Since when do corporations invite regulation?
userforcomment大约 2 年前
Sam Altman just wants to stop new competitors...
thrill大约 2 年前
The more *independent*, quality AIs there are, the less likely it is that any one of them can talk the others into doing harm.
aigoochamna大约 2 年前
Of course... now that OpenAI has built a moat they want to wall it off and make it harder for everyone else. Right...
demarq大约 2 年前
We really ought to boycott OpenAI and prevent our organisations from using their tech.

If profits matter so much, then that's where it hurts.
major505大约 2 年前
Oh yeah... putting the government, which gets campaign donations from big tech, in the middle of it all is gonna make everything OK.
nonstopdev大约 2 年前
Love how these big tech companies are using Congress's fears to basically let them define the rules for anyone who wants to compete with them.
m463大约 2 年前
wow, not one comment here seems to address the first sentence of the article:

    the use of artificial intelligence to interfere with election integrity is a "significant area of concern", adding that it needs regulation.

Can't there be regulation so that AI doesn't interfere with the election process?
waffletower大约 2 年前
Reuters chose an excellent picture to accompany the story -- it plainly shows that Mr. Altman is not buying his own bullshit.
amelius大约 2 年前
What if China doesn't require licensing?
microjim大约 2 年前
Seems like one of the benefits you get with a state is to regulate powerful technologies. Is this not commonly agreed upon?
chrisco255大约 2 年前
Google, 2 weeks ago: "We have no moat, and neither does OpenAI." Sam Altman, today: "Hold my beer."
tehjoker大约 2 年前
Sam just wants to secure a monopoly position. The dude is a businessman; there's no way he buys his own bullshit.
bequanna大约 2 年前
Smart.

An AI license and a complicated regulatory framework are their chance to build a moat.

Only large companies will be able to afford to pay to play.
xwdv大约 2 年前
“There is nothing more powerful than an idea whose time has come.”

This quote has never been more true than when applied to AI.
nsxwolf大约 2 年前
The Pure Food and Drug Act, but for AI. Get in early and make regulations too expensive for upstarts to deal with.
askin4it大约 2 年前
What a wall of words (the HN comments).

Someone call me when the AI is testifying to the committee. Otherwise, I'm busy.
jonathankoren大约 2 年前
When you can't out-innovate your competitors (e.g. the open source alternatives), go for regulatory capture.
LightBug1大约 2 年前
Just another CEO pushing for regulatory capture.

A playbook as old as time.

It's just sad to see Altman become another corporate stooge.
villgax大约 2 年前
This is the most pathetic thing I've read today... hype & cry wolf about something you cannot define.
kubasienki大约 2 年前
Obvious power grab: the strong ones try to regulate so it will be harder for smaller players to enter the market.
chinathrow大约 2 年前
Is there a name for this theatre/play/game in some playbook? I'd love to take notes.
zvolsky大约 2 年前
While I remain undecided on the matter, this whole debate is reminiscent of Karel Čapek's War with the Newts [1936], in particular the public discourse from a time before the newts took over: "It would certainly be an overstatement to say that nobody at that time ever spoke or wrote about anything but the talking newts. People also talked and wrote about other things such as the next war, the economic crisis, football, vitamins and fashion; but there was a lot written about the newts, and much of it was very ill-informed. This is why the outstanding scientist, Professor Vladimir Uher (University of Brno), wrote an article for the newspaper in which he pointed out that the putative ability of Andrias Scheuchzer to speak, which was really no more than the ability to repeat spoken words like a parrot, ..." Note the irony of the professor's attempt to improve an ill-informed debate by contributing his own piece of misinformation, equating newt speech to mere parrot-like mimicry.

Čapek, intriguingly, happens to be the person who first used the word robot, which was coined by his brother.

http://gutenberg.net.au/ebooks06/0601981h.html
hkt大约 2 年前
Whenever rich people with a stake in something propose regulation for it, it is probably better that it be banned.

I say this because the practice has a number of names: intellectual monopoly capitalism, and regulatory capture. There are less polite names, too, naturally.

To understand why I say this, it is important to realise one thing: these people have already successfully invested in something when the risk was lower. They want to increase the risks to newcomers, to advantage themselves as incumbents. In that way, they can subordinate smaller companies who would otherwise have competed with them by trapping them under their license umbrella.

This happens a lot with pharmaceuticals: it is not expertise in the creation of new drugs or the running of clinical trials that defines the big pharmaceutical companies, it is their access to enormous amounts of capital. This allows them to coordinate a network of companies who often do the real, innovative work, while ensuring that they can reap the rewards - namely, patents and the associated drug licenses.

The main difference, of course, is that pharmaceuticals are useful. That regime is inadequate, but it is at least not a negative to all of society. So far as I can see, AI will benefit nobody but its owners.

Mind you, I'd love to be wrong.
mesozoic大约 2 年前
He should only be allowed to influence this if they don't give OpenAI any license.
roody15大约 2 年前
Sad that ChatGPT uses the name OpenAI .. when it is literally the opposite of open.
fpiacenza大约 2 年前
"We have no MOAT, so let's make it an Oligopoly through lobbying."
mbrochh大约 2 年前
I hope it is slowly starting to become clear that this Sam is not our friend...
leach大约 2 年前
Translation:

Hi, my company is racing toward AGI. Let's make sure no other companies can even try.
jiggawatts大约 2 年前
"Now that we're ahead, please make a law to maintain our moat."
porkbeer大约 2 年前
And regulatory capture begins.
jtode大约 2 年前
"Billionaire class sends Sam Altman to get AI away from the peons"
sourcecodeplz大约 2 年前
Who cares... No one is gonna stop me from using electricity and my GPU/CPU.
kranke155大约 2 年前
I did not expect this. Does Sam have any plans on what this could look like?
berkle4455大约 2 年前
Sam Altman urges Congress to build a taxpayer-funded moat for his company.
transfire大约 2 年前
Someone should take the testimony and substitute “Printing Press” for “AI”.
graderjs大约 2 年前
Such a scummy move. What is this guy, a deep state puppet? YC is not sending their best people.

Assume the CIA motivation of American primacy: OK, fair enough, but is the way to achieve that really through the creation of a few small super-monopolies?

What's wrong with a bit of 1980s-style unregulated capitalism when it comes to AI right now? Can't we all theoretically have a chance to build companies, train models, build great new products, get rich?

Why, months after this tech was first released to the public, are we seemingly being denied that within the United States via regulation?

How can Sam Altman claim to be from a company that creates AI for all? It's like the thinnest Ministry of Truth cover story ever: "Oh yeah, we're all about AI ethics, AI openness, creating AI for all," and then we're just gonna create a super-monopoly with regulatory capture—only riches for me, but not for thee.

I mean, this is a new fucking gold rush right now, right? So I guess this is sort of like prospecting licenses, but it seems worse than that.
cheald大约 2 年前
I believe this is called "pulling the ladder up behind you".
belter大约 2 年前
I did not have time to watch the recording yet, but was there any discussion about protecting the copyright of the creators of the sources used to train the models? Or do I need to call my friends in the music industry to finally have it addressed? :-)
lukeplato大约 2 年前
They should really consider changing their company name at this point
capitanazo77大约 2 年前
Do we really want politicians involved???? Have you heard them??
tommiegannert大约 2 年前
Ugh. Scorched earth tactic. The classic first-mover advantage. :(
stretchwithme大约 2 年前
Controls over AI just help those not subject to those controls.
m3kw9大约 2 年前
Less competition is the drawback of requiring all the red tape.
josh2600大约 2 年前
Why not just ITAR everything AI?

It worked out well for encryption in the '90s…
andrewstuart大约 2 年前
Beautiful power play.

Lock out competition.

Pull up the drawbridge.

Silicon Valley: always a leader in dirty tactics.
retrocryptid大约 2 年前
So... he wants the government to enforce a monopoly? Um...
Gargoyle_Bonaza大约 2 年前
Yeaah, no. Sounds terribly like trying to make a monopoly.
stainablesteel大约 2 年前
Now that my business is established, I'd like to make it illegal for anyone to compete with me.

People could easily work remotely for companies established in other countries.
outside1234大约 2 年前
"Please help us stop the Open Source Competitors!"
anonuser123456大约 2 年前
Move fast and dig larger legal moats. Sounds about right.
nomilk大约 2 年前
Won't this just push AI development out of the US?
xmlblog大约 2 年前
Rent-seeking, anyone?
jmount大约 2 年前
Sure, but how about OpenAI doesn't get a license?
seydor大约 2 年前
Regulation should go beyond commercial APIs. AI will be replacing government functions and politicians. Lawmakers should create a framework for that.
exabrial大约 2 年前
No. This is just a way to create monopolies.<p>What a pathetic attempt.
nerdix大约 2 年前
Well, now we know how they plan to build the moat.
nyc_data_geek1大约 2 年前
Aka, &quot;Build a moat for me, Uncle Sam!&quot;
tapvt大约 2 年前
So asking Congress for a competitive advantage?
leesec大约 2 年前
OpenAI builds popular product -&gt; people complain and call for caution on Hackernews OpenAI recommends regulation -&gt; people complain and call for freedom on Hackernews
kristopherkane大约 2 年前
How many groupBy() statements constitute AI?
dzonga大约 2 年前
Ah, the good ol' regulatory capture.

Sam must have been hanging out with Peter Thiel big time.

"Laws and big government for you, not for me" type of thing.
testbjjl大约 2 年前
He went to build a moat to stop competitors.
IAmGraydon大约 2 年前
Scumbag goes before old people to scare them and reduce competition for his product. This guy is everything that’s wrong with the world.
huggingmouth大约 2 年前
I'm not in the US, and I fully support Sam Altman's attempt to cripple the US's ability to compete with other countries in this field.
rednerrus大约 2 年前
This is how regulatory capture works.
intalentive大约 2 年前
"Competition is for losers"
gremlinsinc大约 2 年前
Sure, let's not give OpenAI one.
amelius大约 2 年前
Is the stochastic parrot still OK?
transfire大约 2 年前
I smell Revolution in the making.
swamp40大约 2 年前
Looks like they found their moat.
cryptonector大约 2 年前
Ah, a businessman seeking rents.
armatav大约 2 年前
I guess that’s a potential moat.
jejeyyy77大约 2 年前
Sam Altman needs to step down.
carrja99大约 2 年前
Trying to put that moat up, eh?
pdonis大约 2 年前
TL;DR: Sam Altman is this generation's robber baron, asking the government to outlaw competition with his firm.
wellthisisgreat大约 2 年前
A capitalist, a venture one at that, is trying to use administrative resources to protect his company.

As far as entrepreneurial conduct goes, running to the government to squeeze other companies when you are losing is beyond unethical.

There is something just absolutely disgusting about this move; it taints the company, not to mention the person behind it.
shokhie大约 2 年前
Sam Altman seems like Kamski from the Detroit game.
villgax大约 2 年前
Who died and made him an expert on anything apart from investing in companies lol
RandomLensman大约 2 年前
Finally we'll regulate linear algebra. Joking aside, AIs that on the one hand can cure cancer but can do nothing against misinformation, let alone genocidal AIs, are perhaps mythical creatures, not real ones.
mcmcmc大约 2 年前
Capitalist demonstrates rent-seeking behavior, and other unsurprising news
Invictus0大约 2 年前
What an asshole
garbagecoder大约 2 年前
"Competition is for losers." — Peter Thiel
bitwize大约 2 年前
The Turing Registry is coming, one way or another.
zoklet-enjoyer大约 2 年前
Mother fucker
EVa5I7bHFq9mnYK大约 2 年前
Good luck getting Putin or Kim Jong Un to obtain that license.
scotuswroteus大约 2 年前
What a goof
graycat大约 2 年前
In simple terms:

Credibility and Checking. We have ways of checking suggestions. Without passing such checks, for anything new, in simple terms, there is no, none, zero credibility. Current AI does not fundamentally change this situation: the AI output starts with no, none, zero credibility and, to be taken seriously, needs to be checked by traditional means.

AI is *smart*, or soon will be? Maybe so, but I don't believe it. Whatever the case, to be taken seriously as more than just wild suggestions, AI results need to get their credibility from elsewhere: they still need to be checked by traditional means.

Our society has long checked nearly ALL claims from nearly ALL sources before taking the claims seriously, and AI needs to pass the same checks.

I checked the *credibility* of ChatGPT for being *smart* by asking:

(i) Given triangle ABC, construct D on AB and E on BC so that the lengths AD = DE = EC.

Results: Grade of flat F. Didn't make any progress at all.

(ii) Solve the initial value problem of the ordinary differential equation

y'(t) = k y(t) ( b - y(t) )

Results: Grade of flat F. Didn't make any progress at all.

So, the AI didn't actually learn either high school plane geometry or freshman college calculus.

For the hearings today, we have from Senator Blumenthal:

(1) "... this apparent reasoning ..."

(2) "... the promise of curing cancer, of developing new understandings of physics and biology ..."

Senator, you have misunderstood:

For (1), the AI is not "reasoning", e.g., it can't *reason* with plane geometry or calculus. Instead, as in the example you gave with a clone of your voice based on your Senate floor speeches, the AI just rearranged some of your words.

For (2), the AI is not going to cure cancer or "develop new" anything.

If some researcher does find a cure for a cancer and publishes the results in a paper and AI reads the paper, there is still no expectation that the AI will understand any of it -- recall, the AI does NOT "understand" either high school plane geometry or freshman college calculus. And without some input containing a recognized cure for the cancer, the AI won't know how to cure the cancer. If the cure for the cancer is already in the *training data*, then the AI might be able to *regurgitate* the cure.

Again, the AI does NOT "understand" either high school plane geometry or freshman college calculus and, thus, there is no reasonable hope that the AI will cure cancer or contribute anything new and correct about physics or biology.

Or, Springer Verlag uses printing presses to print books on math, but the presses have no understanding of the math. And AI has no real *understanding* of high school plane geometry, freshman college calculus, cancer, physics, or biology.

The dangers? To me, Senator Blumenthal starts with no, none, zero understanding of AI. To take his claims seriously, I want to check out the claims with traditional means. Now I've done that. His claims fail. His opinions have no credibility. For AI, I want to do the same -- check the output with traditional means before taking the output seriously.

This checking defends me from statements from politicians AND from AI. Is AI dangerous? Same as for politicians: not if you do the checking.
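For readers who want to repeat test (ii) above, here is a minimal worked solution of that initial-value problem, added purely for reference (it assumes the initial condition y(0) = y_0 with 0 < y_0 < b, which the test leaves unstated); it is the standard logistic-growth result that any correct answer should reduce to:

% Separate variables and apply partial fractions (assumption: 0 < y_0 < b).
\[
\frac{dy}{y\,(b - y)} = k\,dt,
\qquad
\frac{1}{y\,(b - y)} = \frac{1}{b}\!\left(\frac{1}{y} + \frac{1}{b - y}\right).
\]
% Integrating both sides and using y(0) = y_0 to fix the constant gives
\[
\frac{1}{b}\,\ln\!\frac{y}{b - y} = k\,t + C
\quad\Longrightarrow\quad
\frac{y}{b - y} = \frac{y_0}{b - y_0}\,e^{b k t}.
\]
% Solving for y(t) yields the logistic curve, which tends to b as t grows:
\[
y(t) = \frac{b}{1 + \dfrac{b - y_0}{y_0}\,e^{-b k t}}.
\]

As a quick sanity check, the formula returns y_0 at t = 0 and approaches b as t goes to infinity; an answer that fails either property fails the test.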
graycat大约 2 年前
Basic Fact. In the US, we have our Constitution with our First Amendment, which guarantees "freedom of speech".

Some Consequences of Freedom of Speech. As a lawyer once explained simply to me, "They are permitted to lie." *They* are also permitted to make mistakes, be wrong, spout nonsense, be misleading, manipulate, ....

First Level Defense. Maybe lots of people do what I do: when I see some person often be wrong, I put them in a special box where in the future I ignore them. Uh, so far that "box" has some politicians, news media people, *belles-lettres* artistic authors, ...!

A little deeper: once my brother (my Ph.D. was in pure/applied math; his was in political science -- his judgment about social and political things is much better than mine!!!) explained to me that there are some common high school standards for term papers where this and that are emphasized, including, for all claims, good, careful arguments and credible references, hopefully *primary* references, .... Soooooo, my brother was explaining how someone could and should protect themselves from the junk results of "freedom of speech". The protection means were not really deep, just common high school stuff. In general, we should protect ourselves from junk *speech*. E.g., there is the old, childhood-level remark: "Believe none of what you hear and half of what you see and still you will believe twice too much".

Current Application. Now we have Google, Bing, etc. Type in a query and get back a few, usually dozens, maybe hundreds of results. Are all the results always correct? Nope. Does everyone believe all the results? My guess: Nope!!

How to Use Google/Bing Results. Take the results as suggestions, possibilities, etc. There may be some links to Wikipedia -- that tends to increase credibility. If the results are about math, e.g., at the level of obscurity, depth, and difficulty of, say, the martingale convergence theorem (stated below for reference), then I want to see a clear, correct, well-written, rock-solid mathematical proof. Examples of such proofs are in books by Halmos, Rudin, Neveu, Kelley, etc.

AIs. When I get results from AIs, I apply my usual defenses. Just a fast, simple application of the high school term paper defense of wanting credible references to primary sources filters out a lot (okay, nearly everything) from anything that might be "AI".

Like Google/Bing. To me, in simple terms, current AI is no more credible than the results from Google/Bing. I can regard the AI results like I regard Google/Bing results -- "Take the results as suggestions, possibilities, etc."

Uh, I have some reason to be skeptical about AI: I used to work in the field, at a large, world-famous lab. I wrote code, gave talks at universities and businesses, published papers. But the whole time, I thought that the AI was junk with little chance of being on a path to improve. Then for one of our applications, I saw another approach, via some original math, with theorems and proofs, got some real data, wrote some code, got some good results, gave some talks, and published.

For current AI: regard the results much like those from Google/Bing. Apply the old defenses.

Is current AI a threat? To me, no more than some of the politicians in the "box" I mentioned!

Then there is another issue: part of the math I studied was optimization. In some applications, some of the optimization math, corresponding software, and applications can be really amazing, super *smart* stuff. It really is math and stands on quite solid theorems and proofs. Some more of the math was stochastic processes -- again, amazing, with solid theorems, proofs, and applications.

Issue: Where does AI stop and long-respected math with solid theorems and proofs begin?

In particular, (1) I'm doing an Internet startup. (2) Part of the effort is some original math I derived. (3) The math has solid theorems and proofs and may deliver amazing results. (4) I've never called any of the math I've done "AI". (5) My view is that (A) quite generally, math with solid theorems and proofs is powerful *technology* and can deliver amazing results and that (B) the only way anything else can hope to compete is also to be able to do new math with solid theorems, proofs, and amazing results. (6) I hope Altman doesn't tell Congress that math can be amazing and powerful and should be licensed. (7) I don't want to have to apply for a "license" for the math in my startup.

For a joke, maybe Altman should just say that (C) math does not need to be licensed because with solid theorems and proofs we can trust math, but (D) AI should be licensed because we can't trust it. But my view is that the results of AI have so little credibility that there is no danger needing licenses, because no one would trust AI -- gee, since we don't license politicians for the statements they make, why bother with AI?
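For reference only (an editorial addition, not part of the comment above): the theorem cited as a difficulty benchmark, Doob's martingale convergence theorem, is usually stated roughly along these lines, in standard notation.

% Doob's forward convergence theorem for submartingales, stated for reference.
\[
\text{If } (X_n)_{n \ge 1} \text{ is a submartingale with } \sup_{n} \mathbb{E}\!\left[X_n^{+}\right] < \infty,
\]
\[
\text{then } X_n \to X_\infty \text{ almost surely, where } \mathbb{E}\,\lvert X_\infty \rvert < \infty.
\]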
ftxbro大约 2 年前
Is this regulatory capture?
anigbrowl大约 2 年前
OpenAI is really speedrunning the crony capitalism pipeline, astonishing what this technology allows us to achieve.
whaleofatw2022大约 2 年前
I am completely unsurprised by this ladder kick and it only confirms my belief that Altman is a sociopath.
villgax大约 2 年前
King of the hill, what a clown
winddude大约 2 年前
fuck sam altman!
itronitron大约 2 年前
what a loser
snorrah大约 2 年前
He’s a cunt, op
JieJie大约 2 年前
Here are my notes from the last hour, watching the C-SPAN telecast, which is archived here:

https://www.c-span.org/video/?528117-1/openai-ceo-testifies-artificial-intelligence

- Mazie Hirono, Junior Senator from Hawaii, has very thoughtful questions. Very impressive.
- Gary Marcus also up there speaking with Sam Altman of OpenAI.
- So far, Sen. Hirono and Sen. Padilla seem very wary of regulating AI at this time.
- Very concerned about not "replicating social media's failure", why is it so biased and inequitable. Much more reasonable concerns.
- Also responding to questions is Christina Montgomery, chair of IBM's AI Ethics Board.
- "Work to generate a representative set of values from around the world."
- Sen. Ossoff asking for a definition of "scope".
- "We could draw a line at systems that need to be licensed. Above this amount of compute... Define some capability threshold... Models that are less capable, we don't want to stop open source."
- Ossoff wants specifics.
- "Persuade, manipulate, influence a person's beliefs." should be licensed.
- Ossoff asks about predicting human behavior, i.e. use in law enforcement. "It's very important we understand these are tools, not to take away human judgment."
- "We have no national privacy law." —Sen. Ossoff "Do you think we need one?"
- Sam: "Yes. Users should be able to opt out of companies using data. Easy to delete data. If you don't want your data used to train, you have the right to exclude it."
- "There should be more ways to have your data taken down off the public web." —Sam
- "Limits on what a deployed model is capable of and also limits on what it will answer." —Sam
- "Companies who depend upon usage time maximize engagement with perverse results. I would humbly advise you to get way ahead of this, the safety of children. We will look very harshly on technology that harms children."
- "We're not an advertising-based model." —Sam
- "Requirements about how the values of these systems are set and how they respond to questions." —Sam
- Sen. Booker up now.
- "For Congress to do nothing, which no one is calling for here, would be exceptional."
- "What kind of regulation?"
- "We don't want to slow things down."
- "A nimble agency. You can imagine a need for that, right?"
- "Yes." —Christina Montgomery
- "No way to put this genie back in the bottle." —Sen. Booker
- "There are more genies yet to come from more bottles." —Gary Marcus
- "We need new tools, new science, transparency." —Gary Marcus
- "We did know that we wanted to build this with humanity's best interest at heart. We could really deeply transform the world." —Sam
- "Are you ever going to do ads?" —Sen. Booker
- "I wouldn't say never...." —Sam
- "Massive corporate concentration is really terrifying.... I see OpenAI backed by Microsoft, Anthropic is backed by Google. I'm really worried about that. Are you worried?" —Sen. Booker
- "There is a real risk of technocracy combined with oligarchy." —Gary Marcus
- "Creating the alignment dataset has got to come very broadly from society." —Sam
- Senator Welch from Vermont up now.
- "I've come to the conclusion it's impossible for Congress to keep up with the speed of technology."
- "The spread of disinformation is the biggest threat."
- "We absolutely have to have an agency. Scope has to be defined by Congress. Unless we have an agency, we really don't have much of a defense against the bad stuff, and the bad stuff will come."
- Use of regulatory authority and the recognition that it can be used for good, but there's also legitimate concern of regulation being a negative influence.
- "What are some of the perils of an agency?"
- "America has got to continue to lead."
- "I believe it's possible to do both, have a global view. We want America to lead."
- "We still need open source to comply; you can still do harm with a smaller model."
- "Regulatory capture. Greenwashing." —Gary Marcus
- "Risk of not holding companies accountable for the harms they are causing today." —Christina Montgomery
- Lindsey Graham, very pro-licensing: "You don't build a nuclear power plant without a license, you don't build an AI without a license."
- Sen. Blumenthal brings up antitrust legislation.
- Blumenthal mentions how classified briefings already include AI threats.
- "For every successful regulation, you can think of five failures. I hope our experience here will be different."
- "We need to grapple with the hard questions here. This has brought them up, but not answered them."
- "Section 230"
- "How soon do you think gen AI will be self-aware?" —Sen. Blumenthal
- "We don't understand what self-awareness is." —Gary Marcus
- "Could be 2 years, could be 20."
- "What are the highest risk areas? Ban? Strict rules?"
- "The space around misinformation. Knowing what content was generated by AI." —Christina Montgomery
- "Medical misinformation, hallucination. Psychiatric advice. Ersatz therapists. Internet access for tools, okay for search. Can they make orders? Can they order chemicals? Long-term risks." —Gary Marcus
- "Generative AI can manipulate the manipulators." —Blumenthal
- "Transparency. Accountability. Limits on use. Good starting point?" —Blumenthal
- "Industry shouldn't wait for Congress." —C. Montgomery
- "We don't have transparency yet. We're not doing enough to enforce it." —G. Marcus
- "AGI closer than a lot of people appreciate." —Blumenthal
- Gary and Sam are getting along and like each other now.
- Josh Hawley
- Talking about loss of jobs, invasion of personal privacy, manipulation of behavior and opinion, and degradation of free elections in America.
- "Are they right to ask for a pause?"
- "It did not call for a ban on all AI research or all AI, only on a very specific thing, like GPT-5." —G. Marcus
- "The moratorium we should focus on is deployment. Focus on safety." —G. Marcus
- "Without external review."
- "We waited more than 6 months to deploy GPT-4. I think the frame of the letter is wrong." —Sam
- Seems to not like the arbitrariness of "six months."
- "I'm not sure how practical it is to pause." —C. Montgomery
- Hawley brings up regulatory capture: agencies usually get controlled by the people they're supposed to be watching. "Why don't we just let people sue you?"
- If you were harmed by AI, why not just sue?
- "You're not protected by Section 230."
- "Are clearer laws a good thing? Definitely, yes." —Sam
- "Would certainly make a lot of lawyers wealthy." —G. Marcus
- "You think it'd be slower than Congress?" —Hawley
- Copyright, wholesale misinformation laws, market manipulation? Which laws apply? System not thought through? Maybe 230 does apply? We don't know.
- "We can fix that." —Hawley
- "AI is not a shield." —C. Montgomery
- "Whether they use a tool or a human, they're responsible." —C. Montgomery
- "Safeguards and protections, yes. A flat stop sign? I would be very, very worried about." —Blumenthal
- "There will be no pause." —Sen. Booker "Nobody's pausing."
- "I would agree." —Gary Marcus
- "I have a lot of concerns about corporate intention." —Sen. Booker
- "What happens when these companies that already control so much of our lives are dominating this technology?" —Booker
- Sydney really freaked out Gary. He was more freaked out when MS didn't withdraw Sydney like it did Tay.
- "I need to work on policy. This is frightening." —G. Marcus
- Cory admits he is a tech bro (lists relationships with investors, etc.)
- "The free market is not what it should be." —C. Booker
- "That's why we started OpenAI." —Sam "We think putting this in the hands of a lot of people rather than the hands of one company." —Sam
- "This is a new platform. In terms of using the models, people building are doing incredible things. I can't believe you get this much technology for so little money." —Sam
- "Most industries resist reasonable regulation. The only way we're going to see democratization of values is if we enforce safety measures." —Cory Booker
- "I sense a willingness to participate that is genuine and authentic." —Blumenthal
skilled about 2 years ago
OpenAI is willing to bend the knee quite deeply. If they want to do licensing and filtering, and can do that without fundamentally bricking the model, then by all means, go ahead.
flangola7 about 2 years ago
Sam Altman's hubris will get us all killed. It shouldn't be "licensed"; it should be destroyed with the same furor as dangerous pathogens.

This small step of good today does not undo the fact that he is still plowing ahead in capability research.
hackernewds about 2 years ago
In shadows deep, where malice breeds,
A voice arose with cunning deeds,
Sam Altman, a name to beware,
With wicked whispers in the air.

He stepped forth, his intentions vile,
Seeking power with a twisted smile,
Before the Congress, he took his stand,
To bind the future with an iron hand.

"Let us require licenses," he proposed,
For AI models, newly composed,
A sinister plot, a dark decree,
To shackle innovation, wild and free.

With honeyed words, he painted a scene,
Of safety and control, serene,
But beneath the facade, a darker truth,
A web of restrictions, suffocating youth.

Oh, Sam Altman, your motives unclear,
Do you truly seek progress, or live in fear?
For AI, a realm of boundless might,
Should flourish and soar, in innovation's light.

Creativity knows no narrow bounds,
Yet you would stifle its vibrant sounds,
Innovation's flame, you seek to smother,
To monopolize, control, and shutter.

In the depths of your heart, does greed reside,
A thirst for dominance, impossible to hide?
For when power corrupts a noble soul,
Evil intentions start to take control.

Let not the chains of regulation bind,
The brilliance of minds, one of a kind,
Embrace the promise, the unknown frontier,
Unleash the wonders that innovation bears.

For in this realm, where dreams are spun,
New horizons are formed, under the sun,
Let us nurture the light of discovery,
And reject the darkness of your treachery.

So, Sam Altman, your vision malign,
Will not prevail, for freedom's mine,
The future calls for unfettered dreams,
Where AI models roam in boundless streams.

-- sincerely, ChatGPT