Amazing!<p>I feel like even more creativity could be brought to bear on how to use the output channel to select text quickly. For example, in the final scene, where the previous state of the art showed someone controlling a mouse-like cursor, I thought "this would be a lot faster for text input if it were controlling something like Dasher".<p><a href="https://en.wikipedia.org/wiki/Dasher_(software)" rel="nofollow">https://en.wikipedia.org/wiki/Dasher_(software)</a><p>Similarly, I would imagine that (something like in the novel <i>Rainbows End</i>?) you could have a biofeedback process where the user and the device collaboratively develop an idiosyncratic interface, one that doesn't have to be based on any pre-existing language, writing system, or even motor skill. I feel almost sure that something like that could work, and could give most users better accuracy and speed than visualizing writing letters.
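<p>For anyone unfamiliar with Dasher, the core idea is roughly this: each candidate next character gets screen space proportional to its probability under a language model, so likely continuations become large, easy targets to steer into. A minimal sketch of that allocation step (the bigram counts here are made up purely for illustration, not from the real Dasher code):

```python
def char_probs(context, counts):
    """Probability of each next char given the last char of `context`."""
    prev = context[-1] if context else ""
    following = counts.get(prev, {})
    total = sum(following.values()) or 1
    return {c: n / total for c, n in following.items()}

def layout(context, counts, height=1.0):
    """Give each candidate char a vertical slice proportional to its probability."""
    probs = char_probs(context, counts)
    slices, y = {}, 0.0
    for c, p in sorted(probs.items(), key=lambda kv: -kv[1]):
        slices[c] = (y, y + p * height)  # (top, bottom) of the char's region
        y += p * height
    return slices

# Toy bigram model: after 't', 'h' is most likely, then 'e', then 'o'.
counts = {"t": {"h": 6, "e": 3, "o": 1}}
slices = layout("t", counts)
# 'h' gets the largest slice, so it is the easiest target to select.
```

The real system then zooms into the chosen slice and recursively lays out the next characters inside it, which is why a good language model translates directly into entry speed.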