My three partners and I have been developing and selling multi-camera arrays for eye tracking and other physiological measurements for several years now. Our main customers are a couple of university research groups, a human factors group at Lockheed, and, most recently, the US Air Force. In fact, we just returned from a trip to Wright-Patterson, where we installed an array in a hypobaric chamber to measure gaze and pupil response for pilots under hypoxic conditions. Phase two will be a custom gaze tracker for their centrifuge. Our main features are accurate eye and face tracking up to a meter from the array, minimal per-subject calibration (about 10 seconds staring at a dot), pupil-response measurement for fatigue and other factors, and the ability to adapt the array to the client's setup, anywhere from a cockpit to a large flat-screen TV. We've looked into medical applications such as ALS, but we're bootstrapped, based in Iowa, and found the military niche a more direct way to generate cash flow. It's a shame we can't apply this work to people with medical needs, but we have neither the funds nor the clients to make such a pivot.
I responded to the thread, but Senseye has been working on this for a while now. Originally they were working with the US Air Force to help improve pilot training - fatigue inference and the like via retinal reading.<p><a href="https://senseye.co/" rel="nofollow noreferrer">https://senseye.co/</a><p>They have generally struggled to find funding for their eye-tracking-focused work, and have recently had to pivot away from the really exciting but hard-to-fund stuff into PTSD screening (which is important too).<p>I can connect you with the founder, if desired, via the email in my bio.
I do hardware. I do software. I do computer vision. I built some software that ran on a cellphone used by LEOs (law enforcement officers) to determine if the person they are quizzing is inebriated or impaired by controlled substances, by examining the person's eyes and having them focus on images displayed on the phone screen. I've done eye tracking using fully custom solutions and also through a few of the off-the-shelf SDKs, such as GazeSense from eyeware, and a few others.<p>The problem is not the eye tracking; it is reasonably easy to build robust systems that do that, even with custom hardware, under all sorts of lighting conditions. The hard part is the UX, if you are trying to build something that isn't hampered by current UI paradigms.<p>Rapid typing and menus of custom actions with just eye movement, though fatiguing, shouldn't be hard to solve, and then you can render the output however you want: text, text to speech, commands issued to a machine, etc. Making a usable user interface to do anything else, that's where the rubber hits the road.<p>@pg, which software is your friend using? If it is anything like what I've looked into in the past, it's over-priced accessibility crap with a UI straight out of the 1990s.
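To illustrate why the typing part itself is tractable, here is a minimal dwell-to-select sketch in Python; the gaze-hit stream, key layout, and 600 ms dwell time are all made-up assumptions for illustration, not any particular product's behavior:<p><pre><code>def dwell_typing(gaze_hits, dwell_ms=600):
    # Turn a stream of (timestamp_ms, key) gaze hits into typed text.
    # A key is "pressed" once gaze has rested on it for dwell_ms, then the
    # dwell timer resets so the key isn't repeated immediately.
    typed = []
    current, since = None, None
    for ts, key in gaze_hits:
        if key != current:
            current, since = key, ts
        elif since is not None and ts - since >= dwell_ms:
            typed.append(key)
            since = None  # gaze must leave and return to repeat the key
    return "".join(typed)

# Gaze rests on 'h' then 'i' long enough to type both
hits = [(0, "h"), (300, "h"), (700, "h"), (1000, "i"), (1400, "i"), (1700, "i")]
print(dwell_typing(hits))  # "hi"
</code></pre>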
EEG recording is an alternative that would outlast the potential disease-related degradation of eye movements. Manny Donchin gave a brown bag at UIUC about the possibilities of using this approach to support communication by ALS patients many years ago. It's clever: they use the P300 marker to index attention/intention. I do not recall whether he and his colleagues ever commercialized the tech. I believe that this publication is representative: <a href="https://doi.org/10.1016/j.clinph.2005.06.027" rel="nofollow noreferrer">https://doi.org/10.1016/j.clinph.2005.06.027</a>
I worked on eye tracking hardware for Microsoft HoloLens. Several AR headsets offer decent eye tracking, including HoloLens 2 and Magic Leap's ML2. I think Tobii's eye tracking glasses are probably better as a stand-alone solution though: <a href="https://www.tobii.com/products/eye-trackers/wearables/tobii-pro-glasses-3" rel="nofollow noreferrer">https://www.tobii.com/products/eye-trackers/wearables/tobii-...</a>
So, guy who has deployed eye-scanning machines all over Africa and has found that many of them have been hacked and are giving incorrect responses suddenly has a friend with ALS and is willing to fund better quality eye tracking?<p>Either:<p><pre><code> - Part of the whole world-coin thing was privately trying to get the data to help his friend
- He doesn't want to say "looking to develop eye tracking tech for my world-coin scam", since most devs won't touch that thing. Conveniently found a "friend" with ALS.
</code></pre>
Saying, on behalf of a friend, that he doesn't believe PG.
I've been working on the menuing side [1] based on crossing Fitts's Law with Huffman trees. But I don't know the constraints for ALS.<p>Hopefully, whoever takes this on doesn't take the standard Accessibility approach, which is adding an extra layer of complexity on top of an existing UI.<p>A good friend, Gordon Fuller, found out he was going blind. So, he co-founded one of the first VR startups in the '90s. Why? For wayfinding.<p>What we came up with is a concept of Universal design. Start over from first principles. Seeing Gordon use an Accessible UI is painful to watch; it takes three times as many steps to navigate and confirm. So, what is the factor? 0.3X?<p>Imagine if we could refactor all apps with an LLM, and then couple that with an auto-complete menu. Within that menu is a personal history of all your past traversals.<p>What would be the result? A 10X? Would my sister in a wheelchair be able to use it? Would love to find out!<p>[1] <a href="https://github.com/musesum/DeepMenu">https://github.com/musesum/DeepMenu</a>
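To make the Fitts/Huffman idea concrete, here is a minimal sketch of the Huffman half in Python; the item names and usage counts are made up for illustration, and this is not the actual DeepMenu implementation (Fitts's Law would additionally govern target size and distance at each level):<p><pre><code>import heapq

def build_huffman_menu(items):
    # items: list of (name, usage_count). Frequently used items end up
    # closer to the root, so they need fewer selections to reach.
    heap = [(count, i, name) for i, (name, count) in enumerate(items)]
    heapq.heapify(heap)
    next_id = len(items)
    while len(heap) > 1:
        c1, _, left = heapq.heappop(heap)
        c2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (c1 + c2, next_id, (left, right)))
        next_id += 1
    return heap[0][2]

# Hypothetical usage counts drawn from a user's own history
menu = build_huffman_menu([("open", 120), ("reply", 90), ("delete", 30),
                           ("archive", 25), ("settings", 5)])
print(menu)  # nested tuples; each nesting level is one selection step
</code></pre>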
There is a Turkish ALS patient who has a YouTube channel; he creates YouTube videos, podcasts, and streams on Twitch thanks to an eye tracker.<p>He uses a Tobii eye tracker.
There is a video he made about the eye tracker. It's in Turkish, but you can see how he uses it.<p><a href="https://www.youtube.com/watch?v=pzSXyiWN_uw">https://www.youtube.com/watch?v=pzSXyiWN_uw</a><p>Here is an article about him in English:
<a href="https://www.dexerto.com/entertainment/twitch-streamer-with-als-beats-the-odds-by-using-eye-tracker-to-make-content-2073988/" rel="nofollow noreferrer">https://www.dexerto.com/entertainment/twitch-streamer-with-a...</a>
Another route might also be sub-vocalization[1], like TTS for your thoughts. I recently picked up some cheap toys to get started trying to emulate the results[2].<p>1. <a href="https://www.nasa.gov/centers/ames/news/releases/2004/subvocal/subvocal.html" rel="nofollow noreferrer">https://www.nasa.gov/centers/ames/news/releases/2004/subvoca...</a><p>2. <a href="https://github.com/kitschpatrol/Brain">https://github.com/kitschpatrol/Brain</a>
Is the lack of mentioning Apple deliberate? It seems like they've already poured a lot of R&D into this for the Vision Pro, which might be exactly the kind of thing the friend needs.
> A friend of mine has ALS and can only move his eyes. He has an eye-controlled keyboard, but it's not very good. Can you make him a better one?<p>When I worked for one of the big game engines I got contacted by the makers of the tech that Stephen Hawking used to communicate, which includes an eye tracker:<p><a href="https://www.businessinsider.com/an-eye-tracking-interface-helps-als-patients-use-computers-2015-9" rel="nofollow noreferrer">https://www.businessinsider.com/an-eye-tracking-interface-he...</a>
I would love to hear pg's analysis of the business case for this company.<p>By my math, 5k people in the US are diagnosed per year, and if your keyboard costs $1k, then your ARR is $5m, and maybe the company valuation is $50m. Numerically, this is pretty far from the goal of a typical YC company.<p>I hate to be so cold-hearted about the calculations, but I've had a few friends get really passionate about assistive tech and then get crushed by the financial realities. Just from the comments, you can see how many startups went either the military route or got acquired into VR programs.<p>The worst I've seen, btw, is trying to build a better powered wheelchair. All the tech is out there to make powered wheelchairs less bulky and more functional, but the cost of getting them approved so health insurance will pay the price, combined with any possible risk of them falling over, combined with the tiny market you are addressing, makes it nearly impossible to develop and ship an improvement. I do hope that we reach a tipping point in the near future where a new wheelchair makes sense to build, because something more nimble would be a big improvement to people's lives.
As someone who suffered some severe mobility impairment a few years ago and relied extensively on eye tracking for just over a year, <a href="https://precisiongazemouse.org/" rel="nofollow noreferrer">https://precisiongazemouse.org/</a> (Windows) and <a href="https://talonvoice.com/" rel="nofollow noreferrer">https://talonvoice.com/</a> (multiplatform) are great. In my experience the hardware is already surprisingly good, in that you get accuracy to within an inch or half an inch depending on your training. Rather, it's all about the UX wrapped around it, as a few other comments have raised.<p>IMO Talon wins* for that by supporting voice recognition and mouth noises (think lip popping), which are less fatiguing than one-eye blinks for common actions like clicking. The creator is active here sometimes.<p>(* An alternative is to roll your own sort of thing with <a href="https://github.com/dictation-toolbox/dragonfly">https://github.com/dictation-toolbox/dragonfly</a> and other tools as I did, but it's a lot more effort)
This was designed for a graffiti artist with ALS:<p><a href="https://en.wikipedia.org/wiki/EyeWriter" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/EyeWriter</a><p><a href="https://github.com/eyewriter/eyewriter">https://github.com/eyewriter/eyewriter</a><p><a href="https://www.instructables.com/The-EyeWriter-20/" rel="nofollow noreferrer">https://www.instructables.com/The-EyeWriter-20/</a><p><a href="https://www.moma.org/collection/works/145518" rel="nofollow noreferrer">https://www.moma.org/collection/works/145518</a>
Tobii have been doing eye-tracking since 2001 and have a product for that. <a href="https://www.tobiidynavox.com/" rel="nofollow noreferrer">https://www.tobiidynavox.com/</a>
Edit: Check out Dasher for a much better interface to enter text with a cursor, compared to a virtual keyboard.<p><a href="https://dasher.acecentre.net/" rel="nofollow noreferrer">https://dasher.acecentre.net/</a> , source at <a href="https://github.com/dasher-project/dasher">https://github.com/dasher-project/dasher</a><p>---<p>I remember seeing a program years ago which used the mouse cursor in a really neat way to enter text. It seems like it would be far better than clicking on keys of a virtual keyboard, but I can't remember the name of this program nor seem to find it...<p>I'll probably get some of this wrong, but just in case it rings a bell (or someone wants to reinvent it - it wouldn't be hard):<p>The interface felt like side-scrolling through a map of characters. Moving left and right controlled speed through the characters; for instance, moving to the left extreme would backspace, and moving further to the right would enter more characters per unit time.<p>Up and down would select the next character - in my memory these are presented as a stack of map-coloured boxes where each box held a letter (or group of letters?), say 'a' to 'z' top-to-bottom, plus a few punctuation marks. The height of each box was proportional to the likelihood that letter would be the next you'd want, so the most likely targets would be easier and quicker to navigate to. Navigating into a box for a character would "type" it. IIRC, at any instant you could see a couple of levels of letters, so if you had entered c-o, maybe 'o' and 'u' would be particularly large, and inside the 'o' box you might see that 'l' and 'k' are bigger, so it's easy to write "cool" or "cook".<p>(I do hardware+firmware in Rust and regularly reference Richard Hamming, Fred Brooks, Donald Norman, Tufte. Could be up for a change)
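A rough sketch of the box-sizing idea described above (Dasher-style, not Dasher's actual code): box heights proportional to the probability of each next letter. The bigram counts below are made-up placeholders for a real language model:<p><pre><code># Hypothetical bigram counts standing in for a real language model
BIGRAM_COUNTS = {("c", "o"): 50, ("c", "a"): 30, ("c", "e"): 15, ("c", "k"): 5}

def box_heights(prev_char, alphabet="abcdefghijklmnopqrstuvwxyz",
                total_height=1.0, floor=0.01):
    # Each letter's box height is proportional to its (smoothed) probability
    # of following prev_char, so likely letters are bigger and easier to hit.
    weights = {c: BIGRAM_COUNTS.get((prev_char, c), 0) + floor for c in alphabet}
    z = sum(weights.values())
    return {c: total_height * w / z for c, w in weights.items()}

heights = box_heights("c")
print(sorted(heights.items(), key=lambda kv: -kv[1])[:3])  # 'o' gets the tallest box
</code></pre>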
Huh. I wrote a paper for my undergraduate dissertation on eye tracking using a laptop camera; it ended up being published, and I won a scholarship award (for €150, imagine that). I wonder if it's time to dust off that project.
This seems like it would fit:<p><a href="https://thinksmartbox.com/products/eye-gaze/" rel="nofollow noreferrer">https://thinksmartbox.com/products/eye-gaze/</a><p>I once interviewed at this company. Unfortunately I didn't get the job, but I was very impressed nonetheless.
I agree, eye tracking is going to have really broad applications. I've been interested in eye tracking for over a decade, and in fact built my own eye tracker, joined a startup, and got acquired by Google[1]. But there's way more to do. We've barely scratched the surface of what's possible with eye tracking and I'd love to take a second crack at it.<p>[1] <a href="https://techcrunch.com/2016/10/24/google-buys-eyefluence-eye-tracking-startup/" rel="nofollow noreferrer">https://techcrunch.com/2016/10/24/google-buys-eyefluence-eye...</a>
I used this software when my mom was battling ALS:<p><pre><code> https://www.optikey.org/
</code></pre>
which ran on a < $1k computer.<p>At the time, the other options were much more expensive (> $10-15k), which sadly put them out of our budget.
AdHawk (adhawk.io) has the only all-day ultralight eye-tracking wearable I'm aware of: all MEMS-based, with ultra-high scan rates (500 Hz+) and research-grade accuracy. For ALS you likely need something light and frictionless; wearing a hot, heavy headset all day probably doesn't work.
I suspect the real moneymakers for such startups have very little to do with ALS. ALS demand is, fortunately, small and can't produce the growth curve VCs want. Imagine instead using it in a classroom to ensure the kids pay attention. Or making sure you see the advertisement.
Yes, the ALS/disability angle is noble. Viewed another way, the entire human race is afflicted by the disability of not having access to eye-tracking (and other) technologies. Paul Graham and co. are also invested in companies that are going to be highly enabled and boosted by the growth of eye-tracking and related technologies. I don't view his statement of motivation related to ALS as insincere; I just also notice that it's accessible, easily understandable, and in line with other aspects of Paul's motivation (and that's a good thing).<p>I would also recommend Jean-Dominique Bauby's Le Scaphandre et le Papillon to anyone interested in this topic. Typing using eye movements was used in that book in a slow, inefficient manner. In the book's case, the question one should ask is: was his UI paced at exactly the right speed? I was, and still am, deeply emotionally moved by what the author was able to accomplish and convey. I am unsure whether a faster keyboard would have made a meaningful, positive difference to the author's quality of life in that particular case. I'll need to give the book another read with that question in mind.<p>Happily, I expect eye tracking to find fascinating, novel, and unexpected applications. As others have stated, UI/UX design is an interesting part of this puzzle. For example, you could ask an LLM to output short branches of text and have the writer look at the words he wants to convey. It definitely blurs the line between reading and writing. Since I find writing to be a tactile exercise, I think emotional state comes into play. That's what I'm interested in: can you literally read someone's eyes and tell what they are thinking?
I literally just bought this last night. Works with just a webcam and is shockingly accurate. <a href="https://beam.eyeware.tech/" rel="nofollow noreferrer">https://beam.eyeware.tech/</a>
I'm very interested in eye-tracking and see a lot of potential in this tech.<p>For inspiration, check out the Vocal Eyes Becker Communication System: <a href="https://jasonbecker.com/archive/eye_communication.html" rel="nofollow noreferrer">https://jasonbecker.com/archive/eye_communication.html</a><p>A system invented for ALS patient Jason Becker by his dad: <a href="https://www.youtube.com/watch?v=wGFDWTC8B8g">https://www.youtube.com/watch?v=wGFDWTC8B8g</a><p>Also already mentioned in here, EyeWriter ( <a href="https://en.wikipedia.org/wiki/EyeWriter" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/EyeWriter</a> ) and Dasher ( <a href="https://en.wikipedia.org/wiki/Dasher_(software)" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Dasher_(software)</a> ) are two interesting projects to look into.
Does something like a blink, wink tracker for swiping UI pique your interest? I built a PoC a while back: <a href="https://github.com/anupamchugh/BlinkPoseAndSwipeiOSMLKit">https://github.com/anupamchugh/BlinkPoseAndSwipeiOSMLKit</a>
I'm consulting with an Australian group called Control Bionics. They have a US company and office, with the CTO and sales team in Ohio, but software engineering is done in AU. Their primary product is an electromyography-and-accelerometer hardware device that detects muscle activations or movements; it is most commonly used as a mouse-click substitute in conjunction with third-party eye-gaze hardware providing the cursor. (I've also designed them an autonomous wheelchair module, but that's another story...)<p>@pg - If your friend has not tried adding a mouse click via something they can activate other than eye gaze, this would be worth a shot. We have a lot of MND patients who use our combination to great success. If they can twitch an eyebrow, wiggle a toe or a finger, or even flex their abdomen, we can put electrodes there and give them a way forward.<p>Also, my contact details are in my profile. I'd be happy to put you in touch with our CEO, and I'm confident that offers of funding would be of interest. The company is listed on the Australian stock exchange, but could likely go much further with a direct injection of capital to bolster the engineering team.<p>Cheers, Tom
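For a sense of what the click-substitute part boils down to, here is a toy sketch in Python; the threshold, refractory window, and the assumption of a normalized, rectified EMG stream are illustrative only, not Control Bionics' actual algorithm:<p><pre><code>def detect_clicks(emg_samples, threshold=0.6, refractory=30):
    # Turn a stream of normalized, rectified EMG samples (0..1) into
    # discrete "click" events. A click fires when the signal crosses the
    # threshold, then further crossings are ignored for `refractory`
    # samples so one muscle twitch doesn't produce multiple clicks.
    clicks = []
    cooldown = 0
    for i, s in enumerate(emg_samples):
        if cooldown > 0:
            cooldown -= 1
        elif s >= threshold:
            clicks.append(i)
            cooldown = refractory
    return clicks

# Example: a quiet baseline with two twitches
signal = [0.05] * 50 + [0.9] * 5 + [0.05] * 100 + [0.8] * 5 + [0.05] * 50
print(detect_clicks(signal))  # indices of the two detected twitches
</code></pre>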
In responses, there seem to be dozens of experts and companies already doing this. Where does he think they fall short of meeting his friend's needs?
Eye tracking is essentially a model of visual attention. Visual attention is part of the overall attention space, and big companies and use cases are built around it. Today we track attention through explicit interactions; if we can model implicitly observable interactions instead, we have a much larger observable data space around the user.
I did a fun project a few years ago with eye tracking.<p>We built a prototype for roadside sobriety checks. The idea was to take race/subjectivity out of the equation in these traffic stops.<p>We modified an Oculus Quest and added IR LEDs and cameras with small Pi Zeros. I wrote software for the Quest that gave instructions and ran a series of examinations where you'd follow a 3D ball, the screen would brighten and darken, and several others, while I looked for eye jerks (saccades) and pupil dilation. The officer was able to see your pupil (enlarged) on a laptop in real time, and we'd mark suspicious times on the video timeline for review.<p>It was an interesting combination of video decoding, OpenCV, and real-time streams with a pretty slick UI. The Pi Zero was easily capable of handling real-time video stream decoding, OpenCV, and Node. Where I ran into performance problems, I wrote Node -> C++ bindings.<p>We did it all on something silly like a 50k budget. Neat project.
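The saccade-flagging part can be sketched in a few lines of Python; the 90 Hz sample rate and 30 deg/s velocity threshold below are placeholder assumptions, not the values we actually used:<p><pre><code>def flag_saccades(gaze_deg, sample_rate_hz=90, velocity_thresh=30.0):
    # Flag samples where angular gaze velocity exceeds a threshold,
    # a common way to separate saccades from fixations and smooth pursuit.
    # gaze_deg is a list of (x_deg, y_deg) gaze angles.
    dt = 1.0 / sample_rate_hz
    marks = []
    for i in range(1, len(gaze_deg)):
        dx = gaze_deg[i][0] - gaze_deg[i - 1][0]
        dy = gaze_deg[i][1] - gaze_deg[i - 1][1]
        velocity = (dx * dx + dy * dy) ** 0.5 / dt  # deg/s
        if velocity > velocity_thresh:
            marks.append(i)  # sample index to mark on the video timeline
    return marks

# Smooth pursuit (small steps) followed by one abrupt jump
samples = [(i * 0.1, 0.0) for i in range(30)] + [(10.0, 5.0)]
print(flag_saccades(samples))  # only the jump is flagged
</code></pre>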
What do you think are some challenges an eye tracker has to face in this specific context? What is your friend mostly struggling with in the current solutions? Are there tracking-specific challenges related to ALS? Is it mostly a UI/"better prediction" interface issue?<p>With my group, we are developing an eye tracker for studying developmental and clinical populations, which typically present challenges to conventional eye trackers. It is a spin-off from our academic work with infants, and we already have a study that uses it almost done. We are still at the very beginning in terms of where this may lead us, but we are interested in looking into contexts where eye tracking, for different reasons, may be more challenging.
PG mentioned that the solution his friend uses isn't any good. How does the best system out there today work? And what different solutions are there?
I found this tool interesting to play with, and it seems to work pretty well assuming you stay in the same position: <a href="https://gazerecorder.com/" rel="nofollow noreferrer">https://gazerecorder.com/</a><p>I'm guessing a combination of projection mapping, built-in lighting, and some crowdsourced data will get accuracy to very usable levels.
How about a library that starts loading a link when you look at it with intent? Or maybe, with BCI integration, one that detects the moment you decide you want to access it.<p>Or how about a UI that automatically adapts to your eye movement and access patterns, rearranging its elements to minimize the amount of eye movement required to complete your most common tasks?
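A tiny sketch of the dwell-to-prefetch idea, assuming some tracker already gives you (timestamp, element_id) gaze hits; the 300 ms dwell threshold and the url_for mapping are made up for illustration:<p><pre><code>def urls_to_prefetch(gaze_events, dwell_ms=300, url_for=lambda el: None):
    # Decide which links to start fetching based on gaze dwell time.
    # gaze_events is an iterable of (timestamp_ms, element_id) pairs;
    # url_for maps an element id to its URL (or None if it isn't a link).
    current, since = None, None
    queued, out = set(), []
    for ts, el in gaze_events:
        if el != current:
            current, since = el, ts
            continue
        url = url_for(el)
        if url and url not in queued and ts - since >= dwell_ms:
            queued.add(url)
            out.append(url)  # hand these to an async fetcher / cache
    return out

events = [(0, "nav"), (100, "link1"), (250, "link1"), (500, "link1"), (600, "nav")]
print(urls_to_prefetch(events,
                       url_for=lambda el: "https://example.com/a" if el == "link1" else None))
</code></pre>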
I thought this was solved a long time ago. I wrote a program many years ago using a Kinect that tracks the center of the eye pretty precisely, using color gradients. The pupil is pretty uniform in every human being (it's black), surrounded by some color and then white. Even just a few pixels are enough to do it.
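A rough sketch of the dark-pupil observation with OpenCV, assuming a cropped grayscale eye image; the threshold value is a guess that would need per-camera tuning, and this is a simple threshold version rather than the gradient-based method described above:<p><pre><code>import cv2
import numpy as np

def pupil_center(eye_gray, thresh=40):
    # Estimate the pupil center in a cropped grayscale eye image by
    # thresholding for the darkest region and taking the centroid of the
    # largest dark blob.
    blurred = cv2.GaussianBlur(eye_gray, (7, 7), 0)
    _, mask = cv2.threshold(blurred, thresh, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    m = cv2.moments(largest)
    if m["m00"] == 0:
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])  # (x, y) in pixels

# Synthetic test image: bright background with one dark disk as the "pupil"
img = np.full((120, 160), 200, dtype=np.uint8)
cv2.circle(img, (80, 60), 15, 10, -1)
print(pupil_center(img))  # roughly (80.0, 60.0)
</code></pre>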
@paulg look in to this: <a href="https://spectrum.ieee.org/brain-implant-speech" rel="nofollow noreferrer">https://spectrum.ieee.org/brain-implant-speech</a>
<a href="https://www.eyecontrol.co.il/" rel="nofollow noreferrer">https://www.eyecontrol.co.il/</a> was founded exactly to solve this problem
I would like to look at the problem more deeply. The eyes can be tracked, but what about facial movement? The more data, the better the training for machine learning.
I hear the Apple Vision Pro has a good implementation. If this were Microsoft, you'd be able to find the details in the Microsoft Research website.
Is there any solution out there that does not use IR + dark pupil segmentation?<p>Seems like all the solutions out there are some flavour or variation of this.
I can make an eye-tracking keyboard with TensorFlow, if anyone is interested in this problem.<p>It would be great to hear from Paul about how his friend uses the keyboard and what kind of tasks he'd love to do but can't with current solutions.<p>It seems like a throughput problem to me. How can you type quickly using only your eyes?<p>Have people explored using small phonetic alphabets or Morse-code-style encoding?<p>Once I got TensorFlow working, I'd start mapping out different kinds of UX. Throughput is king.
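To make the Morse-style idea concrete, here's a toy decoder mapping short/long blinks to letters; the blink-duration threshold, and using blink length at all, are assumptions for illustration:<p><pre><code>MORSE = {".-": "a", "-...": "b", "-.-.": "c", "-..": "d", ".": "e",
         "..-.": "f", "--.": "g", "....": "h", "..": "i", ".---": "j"}
# ...rest of the alphabet omitted for brevity

def decode_blinks(blink_durations_ms, dash_thresh_ms=400):
    # Decode a sequence of blink durations into one letter.
    # Blinks shorter than the threshold count as dots, longer ones as dashes.
    code = "".join("." if d < dash_thresh_ms else "-" for d in blink_durations_ms)
    return MORSE.get(code, "?")

print(decode_blinks([150, 600]))            # ".-"   -> 'a'
print(decode_blinks([150, 150, 150, 150]))  # "...." -> 'h'
</code></pre>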
In case anyone is interested: there are plenty of companies around.<p>Both Apple and Facebook acquired eye-tracking companies to kickstart their own development.<p>Here are some top lists:
<a href="https://valentinazezelj.medium.com/top-10-eye-tracking-companies-on-the-market-today-3b96ef131ab5" rel="nofollow noreferrer">https://valentinazezelj.medium.com/top-10-eye-tracking-compa...</a><p>Its also an active research field, this is one of the bigger conferences:
<a href="https://etra.acm.org/2023/" rel="nofollow noreferrer">https://etra.acm.org/2023/</a>
Aw, that's nice of pg to want something better for his friend. As cynical as we are about technology, new developments can be so fantastic for accessibility and better quality of life.
So you're saying there's a final frontier in the mapping of intention and the discernment of preferences ... and you'd like some bright young things to explore that possibility space under the guise of (<i>unimpeachable good cause</i>) ?<p>Go to hell.<p>Unless, of course, you'd like to commit the funded work to the free commons, unencumbered by patents and copyrights, and free to use by any entity for any purpose.<p>That's what we'd do for ALS, right ?
<a href="https://nitter.net/paulg/status/1695596853864321055" rel="nofollow noreferrer">https://nitter.net/paulg/status/1695596853864321055</a> to see more of the thread if not logged in.<p>It'd be good to know what rate we need to beat and some other metrics.
And now if only someone funded an ALS cure...<p>As far as I know, mainstream medicine isn't close to solving _any_ chronic condition, only managing them.
Someone should do this, but for the love of god, DO NOT take any venture capital to do it. No matter how well-intentioned the VCs are at the start, eventually your eye tracking startup will 100% be used for advertising as investors in your Series D need to make an exit or take you public.
Not really a huge PG fan, but this is what billionaires should be doing: see where a need exists and put some of your insane wealth towards making an improvement. This is why I respect Elon even though I don't really like him; he puts his money to use, in a very public manner.
I have a new approach to doing ML, where autodiff is replaced with something better. Magically, a lot of things fall into place. This approach should make problems like this relatively straightforward.<p>Interested in hearing more?