Ooh front page; I guess this calls for a bit of an explanation!<p>First off - used this code for training the models:
<a href="http://karpathy.github.io/2015/05/21/rnn-effectiveness/" rel="nofollow">http://karpathy.github.io/2015/05/21/rnn-effectiveness/</a><p>Very, very easy to set up and train; highly recommend playing around with your own training data (just a text file!)<p>This project's code:
github.com/shariq/burgundy<p>Styled and deployed the website about a year ago at a hackathon; it then used a nice wordlist with hand-picked words.
(repo/wordserver/old_burgundy_words.txt)<p>Few days ago: got the server to start training a bunch of models (~200), with randomized parameters, using the original wordlist as the training data.
(repo/rnn/rnn.py:forever)<p>Yesterday: woke up at 3 AM after my sleep schedule rolled around, started exploring the output of models trained to different numbers of epochs and run at different temperatures. Subjectively looked at the outputs, decided some model/epoch/temperature tuples were horrible, got rid of those. Wrote a few different scoring functions (just using intuition for what kinds of bad outputs seemed to be commonly occurring) to score the model/epoch/temperature tuples. Got the top ~10 scoring tuples from each scoring function, plus added some additional interesting ones along the way, and then used a pronunciation scoring function (repo/rnn/pronounce.py) to select the top 5 of all of these. Funnily enough, the top 5 tuples all used different models and a varying range of temperatures (i.e., not the same model from different epochs, and picking the right temperature significantly improved how well the model performed).
(repo/rnn/explore.py)<p>Since the models would still occasionally output words which were completely unpronounceable, I put some code on top of the models which would generate a bunch of words, then discard the bottom third of unpronounceable words. A significant portion of generated words from these models also started with a "c" or "b" for some reason: gave those a high chance of being discarded. Short words were also uninteresting, and extremely long words would occasionally show up: added probabilistic filters for length. Finally, initialization time of LuaJIT is very high, so I had the server keep a pool of words which gets reseeded as it runs out.
(repo/rnn/rnnserver.py)<p>If you want to train your own word generator and you need some pointers, would love to help: @shariq
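The filtering-and-pool layer described above can be sketched roughly like this. Everything here is a guess at the shape of the logic, not the actual rnnserver.py code: `sample_word` and `pronounce_score` are toy stand-ins for the real char-rnn sampler and pronounce.py scorer, and the discard probabilities and length cutoffs are made-up numbers.

```python
import random

def sample_word(rng):
    """Toy stand-in for sampling one word from a trained char-rnn."""
    letters = "abcdefghilmnoprstuv"
    return "".join(rng.choice(letters) for _ in range(rng.randint(2, 14)))

def pronounce_score(word):
    """Toy stand-in for pronounce.py: fraction of adjacent letter pairs
    that alternate between consonant and vowel."""
    vowels = set("aeiou")
    flips = sum((a in vowels) != (b in vowels) for a, b in zip(word, word[1:]))
    return flips / max(len(word) - 1, 1)

def refill_pool(pool_size=100, seed=0):
    """Generate a batch of words, apply the filters, return a fresh pool."""
    rng = random.Random(seed)
    candidates = [sample_word(rng) for _ in range(pool_size * 3)]

    # Discard the bottom third by pronounceability score.
    candidates.sort(key=pronounce_score, reverse=True)
    candidates = candidates[: len(candidates) * 2 // 3]

    pool = []
    for word in candidates:
        # Words starting with "c" or "b" were over-represented:
        # give those a high chance of being discarded.
        if word[0] in "cb" and rng.random() < 0.8:
            continue
        # Probabilistic length filters: short words are uninteresting,
        # extremely long words show up occasionally.
        if len(word) < 5 and rng.random() < 0.9:
            continue
        if len(word) > 10 and rng.random() < 0.7:
            continue
        pool.append(word)
        if len(pool) == pool_size:
            break
    return pool

pool = refill_pool()
print(len(pool), pool[:5])
```

Keeping a pre-filtered pool like this means the slow LuaJIT startup cost is paid in batches in the background, and serving a word is just popping from a list.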
No "about" info? No "how it works", "how it was trained", etc.?<p>It <i>seems</i> to only generate words that match English phonotactics & spelling conventions - things that <i>could be</i> English words. Can it be retargeted to other languages, or to arbitrary word-shape constraints?<p>I am particularly interested because I've recently undertaken a survey of word-generation software for conlangers (people who create artificial languages, like Quenya or Klingon or Na'vi), and while they do come in widely varying degrees of sophistication, with varying degrees of built-in linguistic knowledge, there are none yet publicly available that are based on neural networks.
I got carantil, which is not great, but with a small tweak it's Carancil, which is a perfectly good name for a new drug. Companies like Brand Institute charge good money for these services.
Train it with some Tolkien appendices and it could be a good RPG name generator.<p>Also, real-world usernames may be fun. You could make a twitter username generator or something.
vermocharen -- certainly works in some contexts. A coffee roaster, e.g.<p>No small feat to get even marginally euphonious words from an open, available code base.<p>Next up came
tintilu
picolera
fangon