Hey HN! Janak here from Outspeed (<a href="https://outspeed.com" rel="nofollow">https://outspeed.com</a>).<p>We’re excited to show you Outspeed: a purpose-built platform for realtime voice & video AI applications.<p>Here’s a demo of some cool apps you can create using Outspeed: <a href="https://www.youtube.com/watch?v=a11LQIlXelM" rel="nofollow">https://www.youtube.com/watch?v=a11LQIlXelM</a><p>Outspeed emerged from our frustration at having to stitch together multiple tools such as LiveKit, Vocode, Langflow, Silero, etc. just to make a simple voice bot. Even after all that work, the result still wasn’t production-ready. So we decided to build a complete framework that could stand up to production-level workloads.<p>Outspeed differs from other open-source libraries such as Pipecat or LiveKit Agents in 3 major ways:<p>1. PyTorch-like interface - LiveKit and Pipecat were built on video-conferencing primitives and are thus unintuitive for a Python/ML developer.<p>2. Vercel-like deployments - You can deploy your code with a single command to Outspeed’s cloud, or host it on your own infra.<p>3. Built-in WebRTC server - Instead of deploying a separate server to handle WebRTC connections, Outspeed ships with a built-in WebRTC server. You no longer need to depend on WebRTC providers such as LiveKit or Daily.<p>Outspeed is being actively developed. We’re eager to hear honest feedback: likes, dislikes, feature requests, you name it.
Congratulations on the launch! I'm curious to understand: in your experience, what was the most challenging part of building a realtime voice AI app? Naively, I assumed that this would be a solved problem.