This is really cool and an inspiration of mine. As a matter of fact, I am working on something very similar.<p>I don't know if you are familiar with the Dasher project for text input, but as of now I'm trying to improve on that work, partly by increasing how many letters are available simultaneously by projecting the line of text onto a fractal surface. That should be a more efficient use of a 2D surface, theoretically infinitely so.<p>As far as autocomplete is concerned, my approach is to do exactly this, but on a character basis. I think this can lead to some interesting advantages; for example, different dialects give rise to words that don't always conform to dictionary spellings.<p>The next level would be to go one step higher, so to speak. If we imagine Markov chains on letters as the first level and chains on words as the second, the third level in our hierarchy would be to apply Markov chains to groups of words, grouped by proximity in a word2vec space.<p>Having Markov chains work on groups of word2vec words would give us a statistical analogue of grammar, without having to implement grammar rules programmatically, something that would inevitably lead to missed corner cases, or else to an algorithm so strict that it would hinder intentional abuse of grammar for effect.<p>Maybe this is already being implemented, as it seems to me the logical next step. Anybody got any info on this?
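A minimal sketch of the "Markov chain on letters" idea, the first level of the hierarchy above. The corpus, the context length (`order`), and the function names are made up for illustration; a real system would train on far more text:

```python
from collections import Counter, defaultdict

def train_char_model(corpus, order=2):
    """Count next-character frequencies for every `order`-length context."""
    model = defaultdict(Counter)
    for i in range(len(corpus) - order):
        context = corpus[i:i + order]
        model[context][corpus[i + order]] += 1
    return model

def suggest(model, context, k=3):
    """Return the k most likely next characters for the given context."""
    return [ch for ch, _ in model[context].most_common(k)]

corpus = "the theory of the thing is that the theme repeats"
model = train_char_model(corpus, order=2)
print(suggest(model, "th"))  # 'e' dominates after "th" in this corpus
```

Because it works on characters rather than dictionary words, a model like this can pick up dialect spellings it has actually seen, which is the advantage claimed above.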
I wonder what the result would look like if this were applied to source code. IMHO a probabilistic algorithm for code completion could turn out to be interesting.<p>Most code completion algorithms work deterministically, deducing the set of completion candidates from the receiver's type/class or from a list of keywords. Given that people/teams tend to name variables in a certain fashion, a probabilistic completion algorithm could exploit this and adapt to team- or project-specific conventions. Given a team's code base, one could probably build a pretty good code completion algorithm without any knowledge of the programming language.<p>likelycomplete[1] tries to do this in a dilettantish, ad-hoc way for Vim. It rates completion candidates (gathered from previously seen code) on the basis of context information. It's hampered by the limited performance of Vimscript, though. A full-fledged solution would require an external server.<p>[1] <a href="https://www.vim.org/scripts/script.php?script_id=4889" rel="nofollow">https://www.vim.org/scripts/script.php?script_id=4889</a>
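A toy version of that language-agnostic idea: rank previously seen identifiers by how often they followed the token before the cursor. This is my own sketch of the general approach, not likelycomplete's actual algorithm; the tokenizer regex and sample code are invented for illustration:

```python
import re
from collections import Counter, defaultdict

def learn(code):
    """Count which identifier follows each preceding token in seen code."""
    tokens = re.findall(r"[A-Za-z_]\w*", code)
    follows = defaultdict(Counter)
    for prev, cur in zip(tokens, tokens[1:]):
        follows[prev][cur] += 1
    return follows

def complete(follows, prev_token, prefix):
    """Candidates matching `prefix`, ranked by frequency in this context."""
    cands = follows[prev_token]
    return sorted((c for c in cands if c.startswith(prefix)),
                  key=lambda c: -cands[c])

seen = ("def f(request): return request\n"
        "def g(request): return request\n"
        "def h(response): return response")
follows = learn(seen)
print(complete(follows, "return", "re"))  # project-specific names first
```

No grammar of the language is needed; the ranking simply mirrors whatever naming conventions the team's code base exhibits.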
I've thought about using a Markov chain suggester to trip up stylometric analysis, but never got around to creating a practical UX for it.<p>I think if you plugged it into Vim's or Emacs's autocompletion functionality, that might do the trick.
Keep in mind this is essentially the same concept as the big scary AI from OpenAI that has been making the news recently. They use neural nets, not Markov chains, but the idea is similar: given a word, predict the next word.<p>> It's surprising how easy this can be turned into something rather practically useful<p>Given the above, it's not so surprising: word prediction is a fundamental problem with a wide range of applications.
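The shared task ("given a word, predict the next word") reduces, in its simplest Markov form, to counting bigrams. A minimal sketch with an invented toy corpus, far removed from what a neural net does, but the same input/output contract:

```python
from collections import Counter, defaultdict

def bigram_model(text):
    """Count, for each word, how often every other word follows it."""
    nxt = defaultdict(Counter)
    words = text.split()
    for a, b in zip(words, words[1:]):
        nxt[a][b] += 1
    return nxt

def predict(nxt, word):
    """Most frequent follower of `word`, or None if unseen."""
    followers = nxt[word]
    return followers.most_common(1)[0][0] if followers else None

model = bigram_model("the cat sat on the mat and the cat slept")
print(predict(model, "the"))  # "cat" follows "the" most often here
```

Swap the counting for a learned distribution and the same interface describes a neural language model.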