Currently, I use a GUI for Whisper AI (https://github.com/Const-me/Whisper) to upload MP3s of interviews to get text transcripts. However, I'm hoping to find another tool that would recognize and split out the text per speaker.<p>Does such a thing exist?
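What you're describing is usually called speaker diarization. Whisper itself doesn't do it, but one common workaround is to run a separate diarization model (e.g. pyannote.audio) on the same audio, then assign each Whisper transcript segment to whichever speaker turn it overlaps most in time. A minimal sketch of that merge step, with made-up segment/turn data standing in for the two models' outputs:

```python
# Merge Whisper transcript segments with speaker-diarization turns by
# timestamp overlap. Both inputs are (start_sec, end_sec, payload) tuples;
# the sample data below is hypothetical, for illustration only.

def overlap(a_start, a_end, b_start, b_end):
    """Length of the intersection of two time intervals, in seconds."""
    return max(0.0, min(a_end, b_end) - max(a_start, b_start))

def assign_speakers(segments, turns):
    """Label each transcript segment with the speaker whose diarization
    turn overlaps it the most ('unknown' if nothing overlaps)."""
    labeled = []
    for seg_start, seg_end, text in segments:
        best = max(turns, default=None,
                   key=lambda t: overlap(seg_start, seg_end, t[0], t[1]))
        if best is None or overlap(seg_start, seg_end, best[0], best[1]) == 0.0:
            speaker = "unknown"
        else:
            speaker = best[2]
        labeled.append((speaker, text))
    return labeled

if __name__ == "__main__":
    # Hypothetical Whisper output: (start, end, text)
    segments = [(0.0, 4.2, "So tell me about the project."),
                (4.5, 9.0, "Sure, we started it last spring.")]
    # Hypothetical diarization output: (start, end, speaker_label)
    turns = [(0.0, 4.3, "SPEAKER_00"), (4.3, 9.2, "SPEAKER_01")]
    for speaker, text in assign_speakers(segments, turns):
        print(f"{speaker}: {text}")
```

The picking-by-maximum-overlap rule is a simplification: a single Whisper segment can span a speaker change, in which case a real pipeline would split the segment at the turn boundary instead.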
For an end-user application, Otter.ai is the best I've seen - I wish there were a better, faster one built on top of Whisper, but I haven't found a good one.<p>If you're looking for an API, check AssemblyAI, Google Cloud transcription, or Deepgram. I have a list here: <a href="https://llm-utils.org/List+of+AI+APIs" rel="nofollow noreferrer">https://llm-utils.org/List+of+AI+APIs</a>
Descript.com was pretty good at it when I tried it, but it's expensive:
<a href="https://www.descript.com/transcription" rel="nofollow noreferrer">https://www.descript.com/transcription</a><p>We ended up using Otter.ai, which, if I remember correctly, didn't have as good a speaker-separation model, but it was good enough for the price:
<a href="https://otter.ai/" rel="nofollow noreferrer">https://otter.ai/</a><p>There's also the much more expensive, human-powered Rev:
<a href="https://www.rev.com/" rel="nofollow noreferrer">https://www.rev.com/</a>
Microsoft has a tool that accepts WAV or MP3 files and transcribes them.<p>But I do not think it can distinguish between speakers.<p>How accurate is Whisper's output for a single speaker?