What policy makers need to know about AI

81 points | by jph00 | 11 months ago

7 comments

thegrim33 · 11 months ago
Is there a name for the logical fallacy that the author presents, that goes like:

1) In the past (some portion of) society was scared about the printing press, and it turned out to be fine

2) In the past (some portion of) society was scared about the internet, and it turned out to be fine

3) Therefore, if nowadays (some portion of) people are scared of AI, then they're wrong, AI is safe, because in the past some portion of the population was wrong about other technologies

I guess it would be called a non-sequitur?

Here's a more contrived example to make the fallacy more clear:

1) In the past (some portion of) people didn't think automobiles could obtain speeds of 50mph, and they turned out to be wrong

2) In the past (some portion of) people didn't think automobiles could obtain speeds of 300mph, and they turned out to be wrong

3) Therefore, nowadays, my claim that I have an automobile that will drive 10,000 mph must always be right, because in the past (some portion of) people were wrong about automobile progress.

I've been seeing lots of examples of this type of fallacy, where the giveaway is people pointing out how people in the past made bad predictions, which somehow means any predictions people are making today are also wrong. It just doesn't follow.
jph00 · 11 months ago
Hi, Jeremy here - I wrote this article. The deeper I got into studying this, the more I realised the people writing the laws that will regulate AI actually don't really understand what they're regulating at all.

So I created this to try to at least help them create regulations that actually do what they think they're going to do.

California's SB 1047, which I analyse closely, currently totally fails to meet the goals that the bill authors have stated. Hopefully this will help them fix these problems. If you have views on SB 1047, you can make a public comment here: https://calegislation.lc.ca.gov/Advocates/

Let me know if you have any questions or comments.
xg15 · 11 months ago
> *As you can see from this description, just like creating weights cannot be inherently dangerous (since they're just lists of numbers), neither can running a model be inherently dangerous (because they are just mathematical functions that take a list of numbers as an input, and create a new list of numbers as an output). (And again, that is not to say that running a model can't be used to do something harmful. Another critical technical distinction!)*

Ah, the classic duality of big tech: Singlehandedly bringing upon the next stage in the evolution of mankind (to investors) while at the same time just tinkering in their garages on some silly, entirely inconsequential contraptions that do nothing more than turn ones and zeros into different ones and zeros (to regulators).
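For readers unfamiliar with the framing the article is being quoted for, here is a minimal illustrative sketch (not from the article, with made-up weight values) of what "running a model" means at this level of description: a forward pass is just arithmetic that maps one list of numbers to another.

```python
import math

# "Just lists of numbers": a tiny single-layer model with made-up weights.
weights = [[0.2, -0.5, 0.1],
           [0.7, 0.3, -0.4]]
bias = [0.05, -0.02]

def run_model(inputs):
    """Map a list of numbers to a new list of numbers (one linear layer + tanh)."""
    outputs = []
    for row, b in zip(weights, bias):
        activation = sum(w * x for w, x in zip(row, inputs)) + b
        outputs.append(math.tanh(activation))
    return outputs

print(run_model([1.0, 2.0, 3.0]))  # approximately [-0.42, 0.08]
```

Whether describing a model at this level of abstraction is the right basis for regulation is exactly what the commenters below dispute.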
the8472 · 11 months ago
> *These kinds of technologies, like AI models, are fundamentally "dual use". The general purpose computation capabilities of AI models, like these other technologies, is not amenable to control.*

I find that entire section to be misleading or even false. An N-billion- or even trillion-parameter model representing aggregated human knowledge, with some (however limited) agency when put in a harness like autogpt, is a different category of thing than pen and paper.

Additionally, it is not true that models are just as hard to control as a piece of paper. If millions of dollars are invested in each training run, the researchers, and the associated infrastructure, then this clearly is not a simple piece of paper. It's more like satellite tech covered by ITAR, or semiconductor tech, which is also export-restricted.
Aerroon · 11 months ago
Unfortunately the EU has already rammed their legislation through. The US is always going to be compared to EU propaganda about "how they protect the people".
ayakang31415 · 11 months ago
Maybe I am naive about the progress in this space, but we should not use the word "AI" in the first place, because it adds to the confusion many people have about DNN-based programs. So-called AI is not much different from much of the software we're already using, in the sense that you give an input to the program and it spits out an output. When I think about AI, I think of the animal intelligence (no pun intended) that dogs or other mammals have.
janalsncm · 11 months ago
> *These kinds of technologies, like AI models, are fundamentally "dual use".*

It is certainly true that technologies can be used for good and evil. But that doesn't mean that in practice good and evil benefit equally. "Dual use" implies a more or less equal split, but what about a good/bad 10/90 or 1/99 split? Technology, at its core, makes accomplishing certain tasks easier or harder, and besides the assertion of dual use, the article doesn't really justify AI models being equally good and bad.

In the Soviet Union, a large percentage of the population was used for surveillance. The U.S. had surveillance too, but less. Technological limitations made surveilling every person prohibitively expensive. Police couldn't just surveil everyone.

Today, surveillance is not only ubiquitous but better. It is possible to track millions of people in near real time. So this technology has caused a decrease in the cost and an increase in the scalability of mass surveillance, which in conjunction with the third party doctrine (read: loophole) has the emergent effect of neutering the 4th Amendment.

What makes this hard/impossible is anticipating likely applications, which is why I lean towards not regulating. However, we should recognize the possibility of a moral hazard here: by shielding industry from certain consequences of their actions, we may make those consequences more likely in the future.

> *The general purpose computation capabilities of AI models, like these other technologies, is not amenable to control.*

Sure. And we can't stop people from posting copyrighted material online, but we can hold people accountable for distributing it. The question in my mind is whether we will have something like Section 230 for these models, which shields large distributors from first-pass liability. I don't know how that would work though.