
Playing with Fire – ChatGPT

35 points · by pagutierrezn · about 2 years ago

11 comments

seydor · about 2 years ago
> This year with the introduction of ChatGPT-4 we may have seen the invention of something with the equivalent impact on society of explosives, mass communication, computers, recombinant DNA/CRISPR and nuclear weapons – all rolled into one application.

I just can't stand this kind of language. ChatGPT is quite useful, but have you tried asking it something serious that isn't Twitter-worthy? We are not there yet. And in any case, this is not the first superhuman tool humans have made. Nukes have existed for 70 years and have probably become much more accessible. Biotech could create thousands of humanity-ending viruses today. These are fears that we live with and will forever live with, but we can't live our lives only in fear.
tjopies · about 2 years ago
I get the concern expressed, but the fear-mongering is getting a little much these days. Innovation can be scary, and at this time people are making assumptions based purely on things we do not know. How this will impact the future of business and technology has yet to be determined. Only time will tell.

We must be careful as we chart this scary new world of large language models and artificial intelligence and their impacts on humanity, but we do need to slow down on using scare tactics.

Please note I do not fault the author or anyone else for this representation of these new technologies. Nonetheless, I find it counterproductive to our discussions about setting guidelines and ensuring accountability in developing these models and their use.

Right now, it sounds more like the CRISPR discussion all over again.

My 2 cents, for what it's worth.
phillipcarter · about 2 years ago
I don't get the call to action at the end: a 6-month moratorium on R&D to focus on safety.

That 6-month call is driven by people who write fanfic about AI.

There has been active research in AI safety for years and years, and it hasn't been without controversy, but these groups have done far more to ensure safety in its various forms than the fanfic authors. I think a 6-month pause of "GPT-5" doesn't accomplish anything other than further fueling radicals who buy into the fanfic to take action that harms people who work in AI.
eternalban · about 2 years ago
The National Academy of Sciences must not only take a leading role here in creating *the* platform for discussions tasked with advising the government and public; it must also spearhead the creation of a national AI infrastructure for public use.

The Department of Energy already runs many high-tech national laboratories, and *we need a Sandia or Los Alamos for AI*, for national, public use.
jaybrendansmith · about 2 years ago
I'm not sure I believe we're quite there yet with GPT-4, but let's suppose that we are. All of the other potentially dangerous technologies mentioned have close government supervision surrounding them: nuclear has the Department of Energy, non-proliferation treaties, test ban treaties, and much more. Biotech has the FDA and HHS and tons of regulation like GxP, ICH, HIPAA, and much more. But what does artificial intelligence have? ITAR? I think the party is over, fellas. It's time for a new federal department. Let's call it the Artificial Intelligence Administration (AIA). Time to take control of this technology before it takes control of US.
rumblestrut · about 2 years ago
I hate to be "that guy," but every time I go to a website that is not mobile friendly, I immediately discount it and close the tab.

There's something in my head that thinks, "This writer is out of touch" (even if they are not).

I admit my logic may be faulty.
c54 · about 2 years ago
It's really surprising to me the amount of doubt that's been voiced over the last few weeks that a technology could possibly be dangerous.

For me the perspective is straightforward: even if ChatGPT is not it, there is the physical possibility of a relatively small improvement on human intelligence, just like we're a relatively small improvement on chimps, or on Neanderthals. That's just simple for me to get my head around.

Along with that, there are easy-to-follow "monkey's paw" scenarios: the easiest way to end poverty is to make all humans extinct; the easiest way to end suffering is to extinguish all life on Earth. I can't quite formulate a straightforward way to eliminate suffering while maximizing my humanist values. This is the alignment problem.

We've got Yann LeCun saying that slowing down or thinking about safety would just mean the Chinese get ahead. He's also saying we understand LLMs more than we understand airplanes.

We've got people completely ignoring past examples of technological destruction or technological safety, like nonproliferation or Asilomar.

We've got people saying GPT is simultaneously revolutionary and going to change everything, thus it's critical we forge forward... but also too dumb to change anything (it makes up info, etc.), and thus we should not worry about safety.

What is it about our field that is so gung-ho? Are these all bad-faith FOMO arguments? It's hard to understand.

The one way I can make sense of it is as a religious experience. Our culture has deep, persistent roots in Christian eschatological mythology, and of course the coming of a benevolent next wave of intelligence slots into this nicely. Taleb states this clearly[0]: those who are pure of heart will be welcomed into the kingdom of heaven. Not a huge fan of this style of accidental religiosity.

[0] https://twitter.com/nntaleb/status/1642241685823315972?s=20
stuckkeys · about 2 years ago
Great piece. Although I do not agree with "labs keeping safe": look at what happened with the pandemic. Or perhaps the safety measures currently in place need to be redefined. The world is in conflict; everyone is in a race. Dominance is at play. I think it is silly to even consider halting AI development. The faster we reach the maximum output, the quicker we will realize the breaking points.
RcouF1uZ4gsC · about 2 years ago
The best thing we can do to ensure the safety of AI is to make sure users can run the models on their own hardware.

The biggest danger from AI that I see is that the models will only be able to be run by large corporations/governments, and we the users will be at their mercy with regard to what we are allowed to use them for.
kypro · about 2 years ago
Could someone not worried about AGI please explain their position?

Specifically, what makes you so confident that someone won't end up creating an AGI that's unaligned? Or alternatively, if you believe an unaligned AGI might be created, why are you confident that it won't cause mass destruction?

I guess the way I see this is that even if you believe there is only a 5-10% chance that AGI could go rogue and, say, take out global power grids, why is this a chance worth taking? Especially if we can try to slow capability progress as much as possible while funding alignment research?
ttpphd · about 2 years ago
Disaster capitalism!

Seriously though, if you are interested in addressing the real, present-day harms of large language models (and the capitalists who deploy them), this letter is just the thing:

"Statement from the listed authors of Stochastic Parrots on the 'AI pause' letter," by Timnit Gebru (DAIR), Emily M. Bender (University of Washington), Angelina McMillan-Major (University of Washington), and Margaret Mitchell (Hugging Face)

https://www.dair-institute.org/blog/letter-statement-March2023