In the past couple of years voice recognition has gotten good enough that developers who can't type, for whatever reason, have started using it to code:

http://ergoemacs.org/emacs/using_voice_to_code.html

Other advances like eye-tracking might help too. OptiKey, for example, was released last month to help people with ALS use eye-tracking to operate a computer:

https://github.com/JuliusSweetland/OptiKey/wiki

John Siracusa used to write his 20,000-word Mac reviews with Dragon:

"Recognition is surprisingly good. I’m using Dragon Dictate for OS X to write these very words. It costs over a hundred dollars, but it earns its price with extensive customization features and a recognition engine trained specifically for my voice. Dragon has no problem transcribing sentences like, “Craig Federighi loves ice cream and OS X 10.9.”"

http://arstechnica.com/apple/2013/10/os-x-10-9/23/

So it's probably possible to eliminate typing now; it's just a matter of finding the right setup to make it more efficient than typing.