
Show HN: Open-source, native audio turn detection model

126 points | by kwindla | 2 months ago
Our goal with this project is to build a completely open source, state-of-the-art turn detection model that can be used in any voice AI application.

I've been experimenting with LLM voice conversations since GPT-4 was first released. (There's a previous front-page Show HN about Pipecat, the open source voice AI orchestration framework I work on. [1])

It's been almost two years, and for most of that time, I've been expecting that someone would "solve" turn detection. We all built initial, pretty good 80/20 versions of turn detection on top of VAD (voice activity detection) models. And then, as an ecosystem, we kind of got stuck.

A few production applications have recently started using Gemini 2.0 Flash to do context-aware turn detection. [2] But because latency is ~500ms, that's a more complicated approach than using a specialized model. The team at LiveKit released an open-weights model that does text-based turn detection. [3] I was really excited to see that, but I'm not super-optimistic that a text-input model will ever be good enough for this task. (A good rule of thumb in deep learning is that you should bet on end-to-end.)

So ... I spent Christmas break training several little proof-of-concept models, and experimenting with generating synthetic audio data. So, so, so much fun. The results were promising enough that I nerd-sniped a few friends and we started working in earnest on this.

The model now performs really well on a subset of turn detection tasks. Too well, really. We're overfitting on a not-terribly-broad initial data set of about 8,000 samples. Getting to this point was the initial bar we set for doing a public release and seeing if other people want to get involved in the project.

There are lots of ways to contribute. [4]

Medium-term goals for the project are:

- Support for a wide range of languages
- Inference time of <50ms on GPU and <500ms on CPU
- Much wider range of speech nuances captured in training data
- A completely synthetic training data pipeline. (Maybe?)
- Text conditioning of the model, to support "modes" like credit card, telephone number, and address entry.

If you're interested in voice AI or in audio model ML engineering, please try the model out and see what you think. I'd love to hear your thoughts and ideas.

[1] https://news.ycombinator.com/item?id=40345696

[2] https://x.com/kwindla/status/1870974144831275410

[3] https://blog.livekit.io/using-a-transformer-to-improve-end-of-turn-detection/

[4] https://github.com/pipecat-ai/smart-turn#things-to-do
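For readers unfamiliar with the "80/20 VAD" baseline the post mentions: the usual trick is to declare end-of-turn after some fixed window of trailing silence. A minimal sketch of that idea is below; the frame size, energy threshold, and 700ms silence window are illustrative values, not numbers from the smart-turn project.

```python
def rms_energy(frame):
    """Root-mean-square energy of a list of PCM samples in [-1.0, 1.0]."""
    return (sum(s * s for s in frame) / len(frame)) ** 0.5

class SilenceTurnDetector:
    """Naive VAD-style end-of-turn detector: N ms of quiet == turn over."""

    def __init__(self, frame_ms=20, silence_ms=700, energy_threshold=0.01):
        self.frame_ms = frame_ms
        self.silence_ms = silence_ms
        self.energy_threshold = energy_threshold
        self._quiet_ms = 0

    def push_frame(self, frame):
        """Feed one audio frame; return True once a turn boundary is detected."""
        if rms_energy(frame) < self.energy_threshold:
            self._quiet_ms += self.frame_ms
        else:
            self._quiet_ms = 0  # speech resumed; reset the silence clock
        return self._quiet_ms >= self.silence_ms
```

This is exactly the approach that gets stuck on mid-sentence pauses: any thoughtful pause longer than the silence window is misread as end of turn, which is the failure mode a context-aware audio model is meant to fix.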

10 comments

pzo | 2 months ago
I will have a look at this. Played with pipecat before and it's great; I switched to sherpa-onnx though, since I need something that compiles to native and can run on edge devices.

I'm not sure turn detection can really be solved, short of a dedicated push-to-talk button like a walkie-talkie. I've often tried the Google Translate app, and the problem is that many times, when you're speaking a longer sentence, you stop or slow down a little to gather your thoughts before continuing (especially if you are not a native speaker). For this reason I avoid conversation mode in such cases, and when using the Perplexity app I prefer the push-to-talk mode over the new one.

I think this could be solved, but we would need not only low-latency turn detection but also low-latency speech-interruption detection, plus a very fast, low-latency LLM on device. And in case of an interruption, good recovery, so the system knows to continue the last sentence instead of discarding the previous audio and starting anew.

Lots of things can also be improved around I/O latency: using a low-latency audio API, very short audio buffers, a dedicated audio category and mode (on iOS), using wired headsets instead of the built-in speaker, turning off system processing like the iPhone's audio boosting or polar pattern. And streaming mode for all of STT, transport (when using a remote LLM), and TTS. Not sure if we can have TTS in streaming mode; I think most of the time they split by sentence.

I think push-to-talk is a good solution if well designed: a big button in a place easily reached with your thumb, integration with the iPhone action button, haptic feedback, using an Apple Watch as a big push button, etc.
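The interruption-recovery idea in the comment above (continue the last sentence rather than start a new turn) can be sketched as a tiny state tracker. Everything here is hypothetical: the class, the method names, and the 1.5-second grace window are illustrative, not part of pipecat or any real API.

```python
class TurnTracker:
    """Toy turn tracker: if the user resumes speaking shortly after an
    end-of-turn was declared, merge the new speech into the previous turn
    instead of opening a fresh one."""

    def __init__(self, resume_window_s=1.5):
        self.resume_window_s = resume_window_s
        self.turns = []            # each turn is a list of utterance strings
        self._last_end_time = None  # timestamp of the last declared end-of-turn

    def user_speaks(self, text, t):
        if (self._last_end_time is not None
                and t - self._last_end_time <= self.resume_window_s):
            self.turns[-1].append(text)   # continuation: extend previous turn
        else:
            self.turns.append([text])     # fresh turn
        self._last_end_time = None

    def end_of_turn(self, t):
        self._last_end_time = t
```

The design choice is that a premature end-of-turn is cheap to undo as long as the previous audio/text hasn't been discarded, which is the recovery property the comment asks for.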
kwindla | 2 months ago
A couple of interesting updates today:

- 100ms inference using CoreML: https://x.com/maxxrubin_/status/1897864136698347857

- An LSTM model (1/7th the size) trained on a subset of the data: https://github.com/pipecat-ai/smart-turn/issues/1
foundzen | 2 months ago
I got most of my answers from the README — well written; I read most of it. Can you share what kind of resources (and how much of them) were required to fine-tune Wav2Vec2-BERT?
remram | 2 months ago
Ok, what's turn detection?
xp84 | 2 months ago
I'm excited to see this particular technology developing more. From the absolute worst speech systems, such as Siri, which will happily interrupt to respond with nonsense at the slightest half-pause, to even ChatGPT voice mode, which at least tries, we haven't yet successfully gotten computers to do a good job of this — and I feel it may be the biggest obstacle to making 'agents' that are competent at completing simple but useful tasks. There are so many situations where humans "just know" when someone hasn't yet completed a thought, but "AI" still struggles, and those errors can destroy the efficiency of a conversation or, worse, lead to severe errors in function.
zamalek | 2 months ago
As a [diagnosed] HF autistic person, this is unironically something I would go for in an earpiece. How many parameters is the model?
written-beyond | 2 months ago
Having reviewed a few turn-based models, your implementation is pretty in line with them. Excited to see how this matures!
prophesi | 2 months ago
I'd love for Vedal to incorporate this into Neuro-sama's model. An osu! bot turned AI VTuber. [0]

[0] https://www.youtube.com/shorts/eF6hnDFIKmA
lostmsu | 2 months ago
Does this support multiple speakers?
cyberbiosecure | 2 months ago
forking...