
AI text generator not released for concerns about implications

63 points by HuangYuSan over 6 years ago

11 comments

yorwba over 6 years ago
Dupe: https://news.ycombinator.com/item?id=19163522
thaumasiotes over 6 years ago
Hmmm....

> As the above samples show, our model is capable of generating samples from a variety of prompts that feel close to human quality and show coherence over a page or more of text.

OK, let's look at the sample that's displayed by default:

> System Prompt (human-written): Legolas and Gimli advanced on the orcs, raising their weapons with a harrowing war cry.

> Model Completion (machine-written, first try):

> "You are in good hands, dwarf," said Gimli, who had been among the first to charge at the orcs; it took only two words before their opponents were reduced to a blood-soaked quagmire, and the dwarf took his first kill of the night. The battle lasted for hours until two of the largest Orcs attempted to overwhelm Aragorn. When they finally stopped, they lay defeated and lifeless for miles and miles.

> [Aragorn says something]

> "I'll never forget it!" cried Gimli, who had been in the thick of the battle but hadn't taken part in it.

This is not "close to human quality". It's terrible. Gimli kills an orc in battle... without taking part in the battle. It takes two *words* before the opponents (as opposed to, say, the battlefield) are reduced to a "blood-soaked quagmire", but the battle lasts for hours after that. After which two orcs lay defeated and lifeless for miles and miles.

This isn't even coherent from one sentence to the next. And paragraph three directly contradicts paragraph one. And Gimli calls Legolas a dwarf!
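The incoherence called out above is the classic failure mode of purely statistical text generation: each continuation is locally plausible given recent context, but nothing enforces global consistency across sentences. A toy word-level Markov chain (vastly simpler than GPT-2's transformer, and shown here only as a sketch of that failure mode, not of the model under discussion) makes the idea concrete:

```python
import random
from collections import defaultdict

def train_bigram(text):
    """Map each word to the list of words observed following it."""
    words = text.split()
    model = defaultdict(list)
    for a, b in zip(words, words[1:]):
        model[a].append(b)
    return model

def generate(model, start, length, seed=0):
    """Sample a chain: each step depends only on the previous word,
    so long-range consistency is never represented at all."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = model.get(out[-1])
        if not followers:
            break  # dead end: no observed continuation
        out.append(rng.choice(followers))
    return " ".join(out)

# Hypothetical mini-corpus echoing the quoted sample's contradictions.
corpus = ("the dwarf raised his axe and charged and the battle lasted "
          "for hours and the dwarf took no part in the battle")
model = train_bigram(corpus)
print(generate(model, "the", 12))
```

Every adjacent word pair in the output is grammatical because it was seen in the corpus, yet the chain will happily produce a dwarf who both charges and takes no part in the battle; larger context windows (as in a transformer) reduce, but do not eliminate, this effect.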
malux85 over 6 years ago
The best weapon against centralisation and control by the few is publication and distribution.

Others are going to do it, others will replicate the work; the best defence is getting it out there so we can understand it and learn how to counter it.

"Open" AI indeed.
bronz over 6 years ago
> We are aware that some researchers have the technical capacity to reproduce and open source our results. We believe our release strategy limits the initial set of organizations who may choose to do this, and gives the AI community more time to have a discussion about the implications of such systems.

I wish people would stop pretending that there is some good way to bring this technology into existence. Yes, it's nice to try to let the good guys use it first, but that's irrelevant in the long term. Ultimately the result is going to be total proliferation of this technology in every area where it has utility, and it will be used to the maximum extent in every application it suits, including the really bad ones. The roll-out will make the transition smoother, but it won't change what's actually important: the end result on the lives of our grandchildren.

Growing up around rapidly advancing technology, I thought of technology as a double-edged sword: it cuts equally in both directions. But after thinking about it for a long time, I now believe that, in relation to human well-being, the presence of a given technology or combination of technologies can be a net positive or a net negative, as well as neither. We need to think more carefully before letting these genies out of their bottles.

This is not an example that I think will be very negative, but it's very powerful and unexpected, for me at least. The next powerful and unexpected thing may not be benign. Banning development of these kinds of technologies should not be off the table.

After reading this: https://blog.openai.com/better-language-models/#sample8 and browsing Reddit for a while, I have realized that from now on I cannot assume human origin for 90% of the comments I read on Reddit. This is insane.
rjf72 over 6 years ago
Wow. I am curious how "specialized" the training was, because those sample responses are beyond remarkable.

I think we're going to face a lingering question with AI. We're imminently reaching the point where AIs will be able to generate fake everything. In the near future (if not the present!), I could be fake for all you know, writing lots of otherwise coherent posts only to secretly jam in some sort of agenda I've been programmed to advocate for. And there could be millions, billions, an unlimited number of "me". Or the latest hot site trying to sell itself on its own 'buzz' could be full of millions of people actively engaging on the platform, except none of them actually exist.

So do we try to keep these AI systems secret, or do we make them widely available and rely on a rapid shift in public consciousness as a result? It's one thing to tell people to apply sufficient scrutiny to text, images, audio, and increasingly even video. It's another when people see that such fakes are trivially produced by anyone.

I do realize that the 'chaos scenario' sounds... chaotic... to put it mildly, but I think the underlying issue is that these tools will reach the public one way or the other. By keeping them secret, the big difference is that the public will be less aware of the impact they're having, and the players operating such tools will be disproportionately made up of people trying to use them for malicious purposes, be that advertising, political influence, or whatever else.
dalbasal over 6 years ago
OK so...

They never say this is near-flawless, or that it would fool you in a Turing test. In some contexts, though, it may be usable maliciously. It could spoof Amazon reviews (as they mention), scalably phish for romance-scam victims, sockpuppet political social media, harass, manipulate, scale troll-farming to new levels, or set up dates for you on Tinder.

The point is that the ability to impersonate humans is potentially troublesome. I don't think non-publication is the answer, but I do think the concern seems valid... to me.
furi over 6 years ago
This is just a hastily assembled excuse for not living up to the expectation created by their name. OpenAI has also failed to release its Dota 2 model, which has absolutely no security implications, despite the fact that it cannot be properly tested without a public release. OpenAI isn't.
a_imho over 6 years ago
Those implications don't even look *that* scary to me.

On the other hand, I've seen enough marketing fakes/mock-ups to be skeptical of this one. For example, my takeaway from OpenAI Five was that the bots out-microed the human players, with little more to it.
master_yoda_1 over 6 years ago
This is utter BS. Dude, you are not releasing your AI because you are scared that people will know for sure that you have been fooling people for a long time in the name of AI.
ripsawridge over 6 years ago
Maybe when this genie is out of the bottle everywhere, it will convince us to re-prioritize face-to-face communication and simpler lives with less computing.
pygy_ over 6 years ago
Any computer program is an AI.

What matters is how intelligent they are along various axes.

Automatic programs have been surpassing humans on some dimensions for ages, but we keep insisting that they are not truly intelligent because they can't beat us along all axes. Throughput on simple logic tasks was the elephant in the room, and the scope of "simple" has been expanding at an exponential pace.

Now they are closing the gap or surpassing us on axes that were thought to be bastions of human cognition (TFA; and after chess and Go, Google (Alpha Zero) recently beat two Starcraft 2 champions).

Freaking out (err... I mean "not releasing the full model") is understandable, but ultimately misguided, as it will only delay the unavoidable... unless the plan is to enact a global ban on AI research, which I don't think is feasible anyway.