In both this demonstration and Google's demonstration of image generation by neural networks (<a href="http://www.popsci.com/these-are-what-google-artificial-intelligences-dreams-look" rel="nofollow">http://www.popsci.com/these-are-what-google-artificial-intelligences-dreams-look</a>) you can see hints of organic, naturalistic behavior emerging from artificial networks. Is there any doubt that such networks can simulate even more complex aspects of our cognition? Isn't the writing on the wall (no pun intended) with regard to the Strong AI hypothesis, or at least a stronger version of the weak hypothesis?
That's cute. At last, the forgery app.<p>The killer app in this space will be when someone figures out how to extract a vocal model from existing recordings of singers. Vocaloid already synthesizes singing quite well, but a human singer has to go into a studio and sing a long list of standard phrases to build the singer model. The next step will be to feed existing singing into a system that extracts a model usable for synthesis.<p>The RIAA is so going to hate this.
It's interesting that the network will sometimes misspell words:<p><a href="http://i.imgur.com/cFrlyy8.png" rel="nofollow">http://i.imgur.com/cFrlyy8.png</a><p>The input was copied from the instructions: "Type a message into the text box, and the network will try to write it out longhand". But you can see it skipped the "e" in "Type", added an "h" after the "w" in "network", and pretty clearly spelled "to" as "du".<p>It also crossed the first vertical stroke of the "w" in "network" instead of writing an actual "t" beforehand (arguably the kind of idiosyncrasy a human's handwriting might have, if a rather odd one), and stuck a big phantom stroke or letter between the "T" and the "y".
It's interesting because handwritten notes are seen as one of the last symbols of human authenticity. The unique drawing of each letter seems to prove that a human invested time and thought in directly communicating with you. There was no copy-paste. There was no form letter. This was not a bot. And, based on the length of the note, there was provable effort involved. Both a Turing test and a proof of work problem. The world's oldest Captcha.<p>And so begins the devaluing of that proof. Just like when marketers started reproducing the "signature" on every sales letter with blue-colored toner, mimicking the authenticity of a hand signature.<p>I don't write handwritten letters, and I don't romanticize the past. But our dwindling ability to assess the authenticity of incoming communication is slightly unsettling.
Very strange results with rare Unicode characters (I used "𝕳𝖔𝖜 𝖜𝖊𝖑𝖑 𝖉𝖔𝖊𝖘 𝖙𝖍𝖎𝖘 𝖜𝖔𝖗𝖐")<p><a href="https://imgur.com/a/Li8OZ" rel="nofollow">https://imgur.com/a/Li8OZ</a>
I implemented this system for the final project of my computer vision class. Couldn't get it to work by the deadline, but I'm very familiar with this paper. Happy to answer any questions.
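For anyone curious about the guts of it: the model is an LSTM that reads the character string through a soft attention window and, at each timestep, outputs the parameters of a mixture of bivariate Gaussians over the next pen offset plus an end-of-stroke probability; sampling from that mixture is what gives the wobbly, human-looking strokes, and it's likely part of why letters occasionally get dropped or mangled. Below is a rough sketch of that output head in PyTorch; the naming and structure are my own, not code from the paper or the demo.<p><pre><code>import torch
import torch.nn as nn

class MixtureDensityHead(nn.Module):
    """Sketch of the output layer from Graves' handwriting-synthesis RNN:
    maps a hidden state to a mixture of bivariate Gaussians over the next
    pen offset (dx, dy) plus an end-of-stroke probability. My own naming,
    not code from the paper or the demo."""

    def __init__(self, hidden_size, num_mixtures=20):
        super().__init__()
        self.num_mixtures = num_mixtures
        # 6 parameters per component (weight, two means, two std devs,
        # correlation) plus one end-of-stroke logit.
        self.linear = nn.Linear(hidden_size, 6 * num_mixtures + 1)

    def forward(self, h):
        out = self.linear(h)
        eos_logit, rest = out[..., :1], out[..., 1:]
        pi, mu_x, mu_y, log_sx, log_sy, rho = rest.chunk(6, dim=-1)
        return {
            "eos": torch.sigmoid(eos_logit),    # end-of-stroke probability
            "pi": torch.softmax(pi, dim=-1),    # mixture weights
            "mu_x": mu_x, "mu_y": mu_y,         # component means
            "sigma_x": log_sx.exp(),            # std devs kept positive
            "sigma_y": log_sy.exp(),
            "rho": torch.tanh(rho),             # correlation in (-1, 1)
        }
</code></pre><p>Training minimizes the negative log-likelihood of recorded pen trajectories under this mixture, and the paper's sampling "bias" trick shrinks the variances at generation time to make the writing neater. As I recall, the style samples in the demo work the way primed sampling does in the paper: the chosen style's strokes are fed through the network before generation starts.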
I'm curious what happened here:<p>Text entered: this is a test of handwriting generation<p>Style sample #1 selected.<p>All other settings at default.<p><a href="http://imgur.com/6b1G5Tj" rel="nofollow">http://imgur.com/6b1G5Tj</a><p>Edit: I've tried a couple of other styles and haven't been able to reproduce this craziness.
Does anyone know what happened here[0]? The input text was "This is a test. Reeeeeeeeeeeeee!", and I chose the third style.<p>[0] <a href="http://i.imgur.com/6B9JkjC.png" rel="nofollow">http://i.imgur.com/6B9JkjC.png</a>