
PHP code generated by GPT-2

96 points by monort, about 6 years ago

14 comments

ndpsr, about 6 years ago
Still better than Wordpress core.
tyingq, about 6 years ago
The tweets that go with this: https://twitter.com/moyix/status/1096255984866082816
userbinator, about 6 years ago
Any context on what this is supposed to be...? I can vaguely read PHP, but the code does not appear to be doing anything of much substance.

At first I thought it was something to do with a second revision of GUID Partition Tables.
glup, about 6 years ago
I have been working with a group that is trying to clone this dataset and make it publicly available (https://github.com/jcpeterson/openwebtext), and I have noticed quite a bit of code in the scraped dataset. Future releases of our dataset will be pre-filtered with another LSTM language model that will filter sentences by their probability under more conversational / literary datasets.
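The probability-based filtering glup describes can be illustrated with a toy sketch. This is not the group's actual LSTM filter; it uses a Laplace-smoothed unigram model trained on a stand-in "conversational" corpus, and drops sentences whose average per-token log-probability falls below a threshold (code-like lines score low because their tokens are unseen in prose):

```python
import math
from collections import Counter

def train_unigram(corpus_tokens, alpha=0.01):
    """Return a smoothed log-probability function over tokens."""
    counts = Counter(corpus_tokens)
    vocab = len(counts) + 1          # +1 slot for unseen tokens
    total = sum(counts.values())
    denom = total + alpha * vocab
    def logprob(token):
        return math.log((counts.get(token, 0) + alpha) / denom)
    return logprob

def avg_logprob(sentence, logprob):
    tokens = sentence.lower().split()
    return sum(logprob(t) for t in tokens) / max(len(tokens), 1)

# Tiny stand-in for the conversational / literary reference datasets.
prose = ("the quick brown fox jumps over the lazy dog "
         "i think this is a good idea and we should talk about it").split()
lp = train_unigram(prose)

candidates = [
    "i think the dog is lazy",                     # prose-like: kept
    "$gpt2 = new GPT2Model(); $gpt2->generate();", # code-like: filtered
]
kept = [s for s in candidates if avg_logprob(s, lp) > -5.0]
```

A real filter would use a trained LSTM and a much larger reference corpus; the threshold here is tuned only for this toy example.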
pamparosendo, about 6 years ago
It will be interesting when AI finds out there's no need for it to generate human-readable code.
aboutruby, about 6 years ago
With some automated formatting: https://pastebin.com/7F2Leqy1
cubano, about 6 years ago
Just what I didn't need to see this morning.

I am *literally* living in the streets, freezing my ass off and hungry, looking for any kind of programming work for the past month, and now I have to see some AI bot generating more inexpensive shit code that I am sure some manager will convince themselves might get them that final career promotion by lowering their labor costs to near zero.

WTG, geniuses, for developing AI that before you know it will have all of us living in the streets and hungry...

I'll save you a spot.
gambler, about 6 years ago
This is very fishy. You can get code like this by substituting words in identifier names for other words, but how can an algorithm trained on an English dataset "learn" that keywords like 'function' and 'class' are exempt from substitution? I know most people here have unwavering faith in the magic of deep neural networks, but you'd need _a lot_ of examples to deduce this with any certainty, regardless of how you do it.
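The substitution mechanism gambler is skeptical of, if done by hand rather than by a model, would look something like this hypothetical sketch: swap identifier-like words via a substitution table while exempting a hard-coded keyword list (the word lists and sample code below are invented for illustration):

```python
import re

# Invented examples: a few PHP keywords and an arbitrary substitution table.
PHP_KEYWORDS = {"function", "class", "return", "if", "else", "new", "echo"}
SUBSTITUTIONS = {"load": "fetch", "user": "account", "name": "label"}

def substitute_identifiers(code):
    """Replace identifier words per SUBSTITUTIONS, leaving keywords alone."""
    def repl(match):
        word = match.group(0)
        if word in PHP_KEYWORDS:   # keywords are exempt from substitution
            return word
        return SUBSTITUTIONS.get(word, word)
    return re.sub(r"[A-Za-z_]\w*", repl, code)

before = "function load($user) { return $user; }"
after = substitute_identifiers(before)
```

gambler's point is precisely that a network trained only on English text has no such explicit keyword list; it would have to infer the exemption statistically from many code examples in the training data.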
chadbennett, about 6 years ago
Motivated by this post, I decided to test it out. It's impressive how powerful the software is even with the limitations. I made a simple tutorial on how to test GPT-2 out for yourself at https://medium.com/heroic-com/how-to-quickly-generate-full-articles-using-openais-gpt-2-3876870aeb5c
scrollaway, about 6 years ago
Important note: the AI did not generate that exact version of the code. It was *almost* syntactically correct. Here's the diff:

https://gist.github.com/moyix/dda9c3180198fcb68ad64c3e6bc7afbc/revisions#diff-8074138f091ac83e2bef3faec88bdb05
kodablah, about 6 years ago
Can someone shed more light behind this? What is the true source? Was it generated via the unreleased full model by an OpenAI employee? Or did someone generate it with the released "smaller model"? Can we, the curious public, see the model and replicate the results?
beager, about 6 years ago
This makes me think that something like Stack Overflow could be used to train a model that generates code to answer a question, and that software specifications that are decomposed into a series of requirements or "questions" could be fed into this model to produce code that's equivalent to a team of remote contractors.

Your model would be based on NLP/votes of the questions, NLP/votes of the answers, and separating the text from the code in both.

The fact that many markdown/code formatting tools have you select the language for syntax highlighting is useful for classifying code as well.
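The text/code separation beager describes is straightforward for markdown sources, since fenced blocks already carry a language tag that doubles as a free classification label. A minimal sketch (the sample post and function names are invented for illustration):

```python
import re

# Build the triple-backtick marker without embedding it literally.
TICKS = "`" * 3

# Match fenced code blocks and capture the optional language tag.
FENCE = re.compile(TICKS + r"(\w*)\n(.*?)" + TICKS, re.DOTALL)

def split_post(markdown):
    """Return (prose, [(language, code), ...]) from a markdown post."""
    code = [(m.group(1) or "unknown", m.group(2))
            for m in FENCE.finditer(markdown)]
    prose = FENCE.sub("", markdown).strip()
    return prose, code

post = ("Use array_map:\n" + TICKS + "php\n"
        "$x = array_map('trim', $rows);\n" + TICKS)
prose, blocks = split_post(post)
```

A real pipeline over a Stack Overflow dump would also need to handle indented (unfenced) code blocks and untagged fences, which is where a trained classifier would earn its keep.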
rbrtdrmpc-, about 6 years ago
Look ma, no Laravel
gambler, about 6 years ago
BTW, I would like to point you to MIT's Genesis project as an example of what a rule-based text-comprehension system could do almost a decade ago.