OpenAI Trains Language Model, Mass Hysteria Ensues

152 points by zackchase over 6 years ago

19 comments

ilyasut over 6 years ago
Ilya from OpenAI here. Here's our thinking:

- ML is getting more powerful and will continue to do so as time goes by. While this point of view is not unanimously held by the AI community, it is also not particularly controversial.

- If you accept the above, then the current AI norm of "publish everything always" will have to change.

- The _whole point_ is that our model is not special and that other people can reproduce and improve upon what we did. We hope that when they do so, they too will reflect on the consequences of releasing their very powerful text generation models.

- I suggest going over some of the samples generated by the model. Many people react quite strongly, e.g., https://twitter.com/justkelly_ok/status/1096111155469180928.

- It is true that some media headlines presented our non-publishing of the model as "OpenAI's model is too dangerous to be published out of world-taking-over concerns". We don't endorse this framing, and if you read our blog post (or even, in most cases, the actual content of the news stories), you'll see that we don't claim this at all; we say instead that this is just an early test case, we're concerned about language models more generally, and we're running an experiment.

Finally, despite the way the news cycle has played out, and despite the degree of polarized response (and the huge range of arguments for and against our decision), we feel we made the right call, even if it wasn't an easy one to make.
amrrs over 6 years ago
> Elon Musk distances himself from OpenAI, group that built fake news AI tool

This is the worst headline on the matter, and it comes from one of the leading media outlets in India. A language model being touted as a "fake news AI tool" is like calling a car "a run-over machine by Ford".

https://www.hindustantimes.com/tech/elon-musk-distances-himself-from-openai-group-that-built-fake-news-ai-tool/story-Q3PkEU6fsJQVkhilpPCQ8M.html
Eliezer over 6 years ago
It seems disingenuous that this article fails to quote examples of GPT-2's stunning results, or give any contrasting results from BERT to support the claim that this is all normal and expected progress.

Like many, I was viscerally shocked that the results were possible, the potential to further wreck the Internet seemed obvious, and an extra six months for security actors to prepare a response seemed like normal good disclosure practice. OpenAI warned everyone of an "exploit" in which text humans can trust to be human-generated, and then announced they would hold off on publishing the exploit code for 6 months. This is normal in computer security and I'm taken aback at how little the analogy seems to be appreciated.
czr over 6 years ago
Many reactions across here / twitter / reddit seem totally out of proportion. And an odd mix of "stop acting so self-important, this research isn't special so you shouldn't have any qualms about releasing it" and "this research is super important, how dare you not release it".

The strongest counterargument I've seen to OpenAI's decision is that the decision won't end up mattering, because someone else will eventually replicate the work and publish a similar model. But it still seems like a reasonable choice on OpenAI's part: they're warning us that *some* language model will soon be good enough for malicious use (e.g. large-scale astroturfing/spam), but they're deciding it won't be theirs (and giving the public a chance to prepare).
jph00 over 6 years ago
In other fields such as infosec, responsible disclosure is a standard approach. You don't just throw a zero-day out there because you can. Whilst the norms for AI research needn't be identical, they should at least be informed by the history in related fields.

The lead policy analyst at OpenAI has already tried to engage the community in discussing the malicious use of AI, on many occasions, including this extremely well-researched piece with input from many experts: https://maliciousaireport.com/ . But until OpenAI actually published examples, the conversation didn't really start.

In the end, there's no right answer: both releasing the model, and not releasing the model, have downsides. But we need a respectful and informed discussion about AI research norms. I've written more detailed thoughts here: https://www.fast.ai/2019/02/15/openai-gp2/
Permit over 6 years ago
> Namely, he argued that OpenAI is concerned that the technology might be used to impersonate people or to fabricate fake news.

This seems to be a particularly weak argument to make. How is their model going to impersonate someone in a way that a human can not?
kirillzubovsky over 6 years ago
What if OpenAI didn’t write the piece? What if the research was announced by the machine, and the folks at OpenAI are all dead?
agentofoblivion over 6 years ago
It’s amazing to me that no one has yet pointed out the blatant irony that their name is OpenAI, yet they are concealing far more than what is typical.
xiphias2 over 6 years ago
Elon Musk was kicked out because he poached Andrej Karpathy from OpenAI to lead Autopilot. Anyways, it was worth it, Andrej is doing an amazing job, and OpenAI is still alive :)
SubiculumCode over 6 years ago
To what extent is this not just finding text samples in its training data and regurgitating them near-verbatim? (Asking as a non-ML guy.)
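One crude way to test that concern, not taken from the article, is to measure how many long word sequences in a generated sample also occur verbatim in the training corpus: high overlap suggests memorization, low overlap suggests genuinely novel text. A minimal sketch, assuming a toy corpus, a 4-word window, and made-up function names (none of this is OpenAI's evaluation code):

```python
def ngrams(tokens, n):
    """Return the set of n-word windows in a token list."""
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def overlap_fraction(sample_text, corpus_text, n=4):
    """Fraction of the sample's n-grams that also appear verbatim in the corpus."""
    sample_grams = ngrams(sample_text.split(), n)
    corpus_grams = ngrams(corpus_text.split(), n)
    if not sample_grams:
        return 0.0
    return len(sample_grams & corpus_grams) / len(sample_grams)

if __name__ == "__main__":
    corpus = "the quick brown fox jumps over the lazy dog"
    sample = "a quick brown fox jumps over a sleepy dog"
    # A value near 100% would indicate the sample is mostly copied text.
    print(f"4-gram overlap: {overlap_fraction(sample, corpus):.0%}")
```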
sp332 over 6 years ago
Does someone have a description of the network somewhere? Does it use LSTM for memory or what? Is there anything unusual about the size or structure of the network? Does it use an attention mechanism?
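The article itself doesn't describe the architecture, but OpenAI's accompanying technical report describes GPT-2 as a Transformer-based model built on masked self-attention rather than LSTM-style recurrence. As a rough illustration of what a single head of causal self-attention computes (a sketch under simplifying assumptions; the shapes, names, and random values are illustrative, not OpenAI's code):

```python
import numpy as np

def causal_self_attention(x, w_q, w_k, w_v):
    """Single-head masked (causal) self-attention over a token sequence.

    x: (seq_len, d_model) token embeddings
    w_q, w_k, w_v: (d_model, d_head) projection matrices
    """
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / np.sqrt(k.shape[-1])            # (seq_len, seq_len)
    # Causal mask: position i may only attend to positions <= i,
    # which is what lets the model be trained as a next-token predictor.
    mask = np.triu(np.ones_like(scores, dtype=bool), 1)
    scores = np.where(mask, -1e9, scores)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # softmax over earlier positions
    return weights @ v                                 # (seq_len, d_head)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    seq_len, d_model, d_head = 5, 16, 8
    x = rng.normal(size=(seq_len, d_model))
    w_q, w_k, w_v = (rng.normal(size=(d_model, d_head)) for _ in range(3))
    print(causal_self_attention(x, w_q, w_k, w_v).shape)  # -> (5, 8)
```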
itg over 6 years ago
Can you imagine if the teams that worked on the Internet had decided not to make it available to the public because of the potential for misuse? OpenAI is a joke.
mlboss over 6 years ago
I think OpenAI should change org name to ClosedAI.
bitL over 6 years ago
So an article about recycling generated by the OpenAI model (best out of 25) already makes more sense than presidential speeches or most ramblings of average politicians. Can we automate them away as well?
crobertsbmw over 6 years ago
How do we know this article isn’t just fake news being written by an AI?
Eli_P over 6 years ago
When a bug is caught on your palm, it pretends to be a dead bug. When a moose is scared, it plays dead. When an AI wants to fool a human or a captcha filter, it impersonates a human.

Only when a human wants to fool a human does it impersonate anything but a human, suddenly charge a shitload for it, and then behave like it never happened.

Without decent natural language translation or automatic reasoning, which they don't have, this looks like an N-gram model where N equals the number of words in the language corpus.
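For readers unfamiliar with the comparison: an N-gram generator simply replays continuations it has already seen in its corpus, and as N approaches the length of the corpus it can do nothing but reproduce the corpus verbatim, which is the charge being made here. A toy trigram (N=3) sketch, with a made-up corpus purely for illustration:

```python
import random
from collections import defaultdict

def build_trigram_model(tokens):
    """Map each pair of consecutive words to the words observed to follow it."""
    model = defaultdict(list)
    for a, b, c in zip(tokens, tokens[1:], tokens[2:]):
        model[(a, b)].append(c)
    return model

def generate(model, seed, length=20):
    """Walk the table, picking a random observed continuation at each step."""
    a, b = seed
    out = [a, b]
    for _ in range(length):
        followers = model.get((a, b))
        if not followers:
            break
        c = random.choice(followers)
        out.append(c)
        a, b = b, c
    return " ".join(out)

if __name__ == "__main__":
    corpus = ("openai trained a large language model and the model generates "
              "surprisingly fluent text and the model is not being released").split()
    print(generate(build_trigram_model(corpus), ("the", "model")))
```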
rajacombinator over 6 years ago
It’s a great marketing hack. That’s the real accomplishment here.
toufiqbarhamov over 6 years ago
Ms. Anandkumar nailed it: this is blatant hype bordering on hucksterism. Elon Musk may have left, but his influence remains, I guess.
fareesh over 6 years ago
> Fictitious state of emergency

Pretty dumb and disrespectful to politicize a blog post about OpenAI.