
Dank Learning: Generating Memes Using Deep Neural Networks

172 points by nevatiaritika almost 7 years ago

13 comments

minimaxir almost 7 years ago

As someone who has spent a *lot* of time working with text-generating neural networks (https://github.com/minimaxir/textgenrnn), I have a few quick comments.

1) The input dataset from Memegenerator is a bit weird. More importantly, *it does not distinctly identify top and bottom texts* (some captions use a capital letter to signify the start of the bottom text, which isn't always reliable). A good technique when encoding text for these tasks is to use a control token (e.g. a newline) to mark such boundaries. (The conclusion notes this problem: "One example would be to train on a dataset that includes the break point in the text between upper and lower for the image. These were chosen manually here and are important for the humor impact of the meme.")

2) The use of GLoVe embeddings doesn't make as much sense here, even as a base. Pretrained embeddings generally work best on text that follows real-world word usage, which memes do not. (In this case, it's better to let the network train the embeddings from scratch.)

3) A 512-cell LSTM might be too big for a word-level model on a dataset of that size; since the text follows rules, a 256-cell bidirectional LSTM might work better.
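The control-token idea in point 1 can be sketched in a few lines of plain Python. This is an illustrative sketch, not code from the paper or from textgenrnn: the `<sep>` token name and the example captions are assumptions, and any unused symbol would work as the separator.

```python
# Encode a meme caption as one token sequence with an explicit separator
# marking the break between top and bottom text, instead of relying on
# capitalization heuristics in the raw Memegenerator data.

SEP = "<sep>"  # control token; must not collide with any vocabulary word


def encode_caption(top: str, bottom: str) -> list[str]:
    """Turn (top, bottom) text into a single word-level token sequence."""
    return top.lower().split() + [SEP] + bottom.lower().split()


def decode_caption(tokens: list[str]) -> tuple[str, str]:
    """Split a generated token sequence back into top and bottom text."""
    cut = tokens.index(SEP) if SEP in tokens else len(tokens)
    return " ".join(tokens[:cut]), " ".join(tokens[cut + 1:])


tokens = encode_caption("one does not simply", "generate dank memes")
top, bottom = decode_caption(tokens)
```

At generation time the model emits `<sep>` like any other word, so the break point is learned from data rather than chosen manually.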
glup almost 7 years ago

Very silly; best not to alert the media or we'll soon see "AI can now generate memes" clickbait.

I thought it was funny, though, that Richard Socher, one of the authors of GLoVe and an NLP researcher, is pictured in the generated memes on p. 8 ("the face you make when").
aw3c2 almost 7 years ago

This is a complete joke, right? What is better about these results than a simple "image + headline + random bottom line" algorithm?
nofinator almost 7 years ago

Full paper with some examples here: https://web.stanford.edu/class/cs224n/reports/6909159.pdf
Xyzodiac almost 7 years ago

I was expecting this to use some formats that aren't from 2012. It would be interesting to see a neural network that could generate text for the more complex meme formats that trend on Twitter and Instagram.
Cthulhu_ almost 7 years ago

Reminds me a bit of https://www.reddit.com/r/SubredditSimulator/
jcfrei almost 7 years ago

It looks like a joke now, but I'm fairly convinced that in the not-too-distant future the most influential social media accounts will be run by some kind of AI.
momania almost 7 years ago

Let me leave this here: https://imgur.com/a/ZOcKWmp
Miltnoid almost 7 years ago

Holy shit, this has the NIPS format.

If this was submitted, we are certainly in the dankest timeline.
typon almost 7 years ago

All their generated examples look like Markov-chain-generated captions: pretty random and generally unfunny. I completely disagree with the claim that you can't differentiate between these generated memes and real memes. None of these would make the front page of reddit, for example.
mr__y almost 7 years ago

That's still funnier than 9gag.
ferongr almost 7 years ago

They're called image macros, not memes.
a_r_8 almost 7 years ago

Examples?