In the past couple of years voice recognition has gotten good enough that developers who can't type, for whatever reason, have started using it to code:<p><a href="http://ergoemacs.org/emacs/using_voice_to_code.html" rel="nofollow">http://ergoemacs.org/emacs/using_voice_to_code.html</a><p>Other advances like eye-tracking might help too. OptiKey, for example, was released last month to help people with ALS use eye-tracking to control a computer: <a href="https://github.com/JuliusSweetland/OptiKey/wiki" rel="nofollow">https://github.com/JuliusSweetland/OptiKey/wiki</a><p>John Siracusa used to write his 20,000-word Mac reviews with Dragon:<p>"Recognition is surprisingly good. I’m using Dragon Dictate for OS X to write these very words. It costs over a hundred dollars, but it earns its price with extensive customization features and a recognition engine trained specifically for my voice. Dragon has no problem transcribing sentences like, “Craig Federighi loves ice cream and OS X 10.9.”"<p><a href="http://arstechnica.com/apple/2013/10/os-x-10-9/23/" rel="nofollow">http://arstechnica.com/apple/2013/10/os-x-10-9/23/</a><p>So it's probably possible to eliminate typing now; it's just a matter of finding a setup that makes it more efficient than typing.
It's about the more natural interface between the computer and the user, and the way thoughts get formed. You can correct words for not-yet-fully-formed thoughts much more easily through typing and text than through speech - the "backspace" key of speech requires a lot more slowing down and going back. In writing these sentences I've stared at a few of them, made some edits, and decided they're better that way. With text you spend more time with your thoughts than you do with speech.<p>Maybe that's fine, of course, but for such a fundamental change it will take a while for those of us used to a text-based stream of consciousness to consider switching.