I found this in the Neural Networks course on Coursera [0]. The author of this paper was discussed there as an example of what recurrent neural nets can now do.

Here's the description from the slide:

• Ilya Sutskever (2011) trained a special type of recurrent
neural net to predict the next character in a sequence.
• After training for a long time on a string of half a billion
characters from English Wikipedia, he got it to generate new
text.
– It generates by predicting the probability distribution
for the next character and then sampling a character from
this distribution.
– The next slide shows an example of the kind of text it
generates.
Notice how much it knows!
Some text generated one character at a time by Ilya Sutskever’s
recurrent neural network:
In 1974 Northern Denver had been overshadowed by CNL, and several
Irish intelligence agencies in the Mediterranean region. However,
on the Victoria, Kings Hebrew stated that Charles decided to
escape during an alliance. The mansion house was completed in
1882, the second in its bridge are omitted, while closing is the
proton reticulum composed below it aims, such that it is the
blurring of appearing on any well-paid type of box printer.
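The generation procedure the slide describes (predict a probability distribution over the next character, sample a character from it, feed it back in, repeat) is easy to sketch. Below is a minimal, hypothetical Python illustration of just that sampling loop; predict_next_char_probs is a made-up stand-in for the trained network, not Sutskever's actual model.

    import numpy as np

    # Hypothetical stand-in for the trained RNN: given the text so far,
    # return a probability distribution over the character vocabulary.
    # A real model would run the recurrent net here; this returns a
    # uniform distribution so the sketch is runnable on its own.
    def predict_next_char_probs(context, vocab):
        return np.full(len(vocab), 1.0 / len(vocab))

    def generate(seed, vocab, n_chars=200, rng=None):
        rng = rng or np.random.default_rng()
        text = seed
        for _ in range(n_chars):
            probs = predict_next_char_probs(text, vocab)
            # Sample the next character from the predicted distribution
            # (rather than taking the most likely one), as the slide says.
            next_char = rng.choice(list(vocab), p=probs)
            text += next_char
        return text

    vocab = "abcdefghijklmnopqrstuvwxyz ,."
    print(generate("In 1974 ", vocab, n_chars=50))

With the uniform stand-in this just prints random characters; the point is only that sampling (instead of always picking the argmax) is what gives the generated Wikipedia-style text its variety.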
[0] - https://www.coursera.org/learn/neural-networks/