Legitimate question: Why isn't this being done with software? The speech-to-text problem has been around for a long time, and it seems like there are a lot of people who are financially motivated to solve it. If the best solutions on the market, or ideally a combination of the best solutions, can't provide a baseline decent transcript, then why aren't people tripping over themselves to solve this problem?

It seems like an 80% solution would be good enough. Hell, even a 66% solution seems like a reasonable compromise or starting point. If an automatically generated transcript can convey at least 2/3rds of the information from a lecture for a one-time or small incremental cost, then I don't see why both parties wouldn't be ok with it. Those with disabilities would have to do some extra work to look up garbled words or ideas that don't translate well to text, but it would be within the bounds of reason (say, a 1-hour lecture would now take 2 hours to parse). The organizations producing the content would most likely have to pay for speech-to-text software, either several thousand dollars per year per class or $X per lecture, but they would still come out cheaper than paying someone per minute to do the transcription. It isn't a win-win situation, but more of an equitable lose-lose.

They say a fair deal has been reached when both sides in a negotiation are a little bit unhappy. A software solution would seem to do that without ignoring the rights of the disabled or placing prohibitive costs on the content producers. And it would set a precedent going forward: content producers must make an effort to accommodate those with disabilities, but the disabled should be willing to make some extra effort themselves. Asking an elderly woman in a wheelchair to lift herself over a sidewalk curb is not reasonable. Asking the same person to spend an extra 30 minutes deciphering an unclear transcript might be.