I work in the video streaming space and I'm curious about others' thoughts. Here's my wishlist:

1. WebTransport and WebCodecs become the primary means of client-to-server real-time video delivery (e.g. compositing, off-device analysis).

2. No more vendor lock-in with WebRTC (WHIP and WHEP might help here). Build a solution once on the client, and if I don't like my provider, just change the endpoint URL.

3. Google MediaPipe or a high-level browser API to run AI models easily on audio/video. Right now most solutions for simple things like background blurring are just thin abstractions on top of MediaStreamTrackProcessor.

4. An optimized headless browser for cloud rendering. Too many terrible solutions at the moment use CEF or Chrome and then bolt on ffmpeg or GStreamer, Xvfb, and PulseAudio.

5. Plug-and-play pipelines in the cloud for video processing (like Zapier for video): plug any processing step in between source and sink without a convoluted mess of pushing audio and video around between different apps, either in-network or across the internet.
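For item 1, the client side of such a pipeline can already be sketched today: capture frames with MediaStreamTrackProcessor, compress them with a VideoEncoder, and push the encoded chunks over a WebTransport stream. This is only a rough sketch under my own assumptions; the length-prefixed framing (`packChunk`), the vp8/720p encoder config, and the ingest URL shape are all made up for illustration, not any standard wire format.

```typescript
// Browser globals assumed (recent Chromium); declared so the sketch
// also type-checks and loads outside the browser.
declare const WebTransport: any;
declare const VideoEncoder: any;
declare const MediaStreamTrackProcessor: any;

// Hypothetical framing: 8-byte timestamp, 1-byte key flag,
// 4-byte payload length (big-endian), then the encoded payload.
function packChunk(timestamp: number, isKey: boolean, data: Uint8Array): Uint8Array {
  const header = new DataView(new ArrayBuffer(13));
  header.setFloat64(0, timestamp);
  header.setUint8(8, isKey ? 1 : 0);
  header.setUint32(9, data.byteLength);
  const out = new Uint8Array(13 + data.byteLength);
  out.set(new Uint8Array(header.buffer), 0);
  out.set(data, 13);
  return out;
}

// Camera track -> VideoEncoder -> WebTransport unidirectional stream.
async function publish(track: any, url: string): Promise<void> {
  const wt = new WebTransport(url); // e.g. "https://ingest.example.com/pub" (made up)
  await wt.ready;
  const writer = (await wt.createUnidirectionalStream()).getWriter();

  const encoder = new VideoEncoder({
    output: async (chunk: any) => {
      const buf = new Uint8Array(chunk.byteLength);
      chunk.copyTo(buf);
      await writer.write(packChunk(chunk.timestamp, chunk.type === "key", buf));
    },
    error: (e: any) => console.error("encode error", e),
  });
  encoder.configure({ codec: "vp8", width: 1280, height: 720 });

  const reader = new MediaStreamTrackProcessor({ track }).readable.getReader();
  for (;;) {
    const { value: frame, done } = await reader.read();
    if (done) break;
    encoder.encode(frame);
    frame.close(); // release the frame promptly to avoid stalling capture
  }
}
```

The nice property versus WebRTC here is that the server side is just a QUIC endpoint reading your own framing, so the same client code works against any provider that speaks it.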
Hey! Just a rando here, but I'd be interested in hearing your opinion on where PeerTube does well against this wishlist and where it needs improvement.

https://joinpeertube.org

https://framablog.org/2023/11/28/peertube-v6-is-out-and-powered-by-your-ideas/
I see 1 and 2 as pulling in opposite directions. WebTransport+WebCodecs lets each individual service ship its own binary blobs, while WHIP+WHEP might see enough demand (OBS input) that locked-down services have to offer it.

What cloud rendering are you trying to do? My hope/goal is to drop the browser dependency completely.
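To make the WHIP point concrete: the entire provider-specific surface is one HTTPS endpoint. Signaling is a single POST of the SDP offer with `Content-Type: application/sdp`; the server answers with 201 Created, the answer SDP in the body, and a session resource in the Location header that you DELETE to hang up. A minimal sketch (the endpoint URL is a placeholder, and error handling is elided):

```typescript
// Browser global assumed; declared so the sketch loads outside the browser too.
declare const RTCPeerConnection: any;

// The WHIP session resource may come back as a relative Location header,
// so resolve it against the ingest endpoint.
function resolveSession(location: string, endpoint: string): string {
  return new URL(location, endpoint).toString();
}

// Publish a MediaStream to any WHIP ingest; returns a teardown function.
async function whipPublish(endpoint: string, stream: any): Promise<() => Promise<void>> {
  const pc = new RTCPeerConnection();
  for (const track of stream.getTracks()) pc.addTrack(track, stream);

  await pc.setLocalDescription(await pc.createOffer());
  const res = await fetch(endpoint, {
    method: "POST",
    headers: { "Content-Type": "application/sdp" },
    body: pc.localDescription.sdp,
  });
  if (res.status !== 201) throw new Error(`WHIP ingest failed: ${res.status}`);

  await pc.setRemoteDescription({ type: "answer", sdp: await res.text() });
  const session = resolveSession(res.headers.get("Location")!, endpoint);
  return async () => {
    await fetch(session, { method: "DELETE" }); // tear down the session resource
    pc.close();
  };
}
```

Switching providers really is just changing the `endpoint` argument, which is exactly the lock-in escape hatch item 2 is asking for.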