Aside from the Rust aspect (which is cool!), I can't believe we've come this far and still don't have low-latency video conferencing. Maybe I'm overly sensitive, but people talking over each other and the lack of conversational flow drive me crazy with things like Hangouts.
Nitpick: “audiophile-quality sound”, it seems, is becoming the new “military-grade encryption.”

I don’t have many other comments to make, other than that I am surprised rust-analyzer was only mentioned in passing.
I wish I read more things like this on HN.
"We wanted to know and understand every line of code being run on our hardware, and it should be designed for the exact hardware we wanted"
If this actually works, I am desperately keen to get my hands on it. If you have the capacity for high bandwidth, why not use it? Zoom’s model must work on whatever crappy broadband people have in their home office. If you have gigabit, it doesn’t seem to make use of that extra capacity to improve video quality.

As for sound, I don’t think audiophile quality is necessary...
I love Rust, but deciding to redesign/reimplement WebRTC after a week of frustration seems like a prime candidate for not-invented-here syndrome, with Rust as the justification. There is a reason WebRTC is as big as it is; it’s a complex problem to solve.

Regarding the premise of high latency in WebRTC: Google Stadia has ~160 ms round-trip latency at 4K from my MacBook to a data center, so it’s not like that’s unachievable.
After reading it, I'm still not entirely sure what's being done.

Is it live streaming, or is it the transport?

Are they doing video encoding (the audio encoding seems to be handled by that webrtc-audio thing)?

Have they chosen a progressive encoding format that compresses frames and pumps them out onto the wire as soon as they're done?

Is TCP or UDP involved, or an entirely new transport protocol?

Have I just missed all of those parts, or were they really missing amid all the Rust celebration?
If anybody is looking for a low-latency, high-bandwidth P2P video streaming solution, there is UltraGrid: https://github.com/CESNET/UltraGrid/wiki. It can do less than 80 ms of latency.
Gotta love a writeup with this line in it:

    like Brian's 1970s-era MacBook Pro

That's a writer who knows what it's like to read long (aka thorough) technical articles and not bore the readers to death.

Great article!
> We just enforce rustfmt.

After interacting with both rustfmt and go fmt, I have concluded that .editorconfig solves a problem that shouldn't need to exist. We went through the ordeal of defining our C# coding standards where I work and, let me tell you, people (myself included) care very deeply about their way of structuring code. And it's a bloody waste of their time.

Having the language designers say, "here is how our language should be structured," is a breath of fresh air.
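For what it's worth, enforcing this in CI is a one-liner. A minimal sketch (the --check flag makes rustfmt fail on any diff instead of rewriting files):

    # Fail the build if any file isn't rustfmt-clean.
    cargo fmt --all -- --check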
My WebRTC projects haven't suffered that much from latency. The biggest source of delay for me is usually encoding video: I've had to limit streams to 720p and 25 fps to reduce the time spent CPU-encoding a VP8 stream. There are also bandwidth considerations (real-time encoding = significantly less compression), but the end result is slightly less than 200 ms one-way latency (including input lag from the mouse, 15 ms network latency, and display lag) without any special settings.

All I'm doing is feeding an ffmpeg stream to Kurento and letting it broadcast via WebRTC, roughly as sketched below. This is not a web conferencing application, and it is also not using WebRTC via P2P. It's closer to conventional live streaming with a sane amount of latency (compared to the up to 30 s of latency you commonly see on Twitch).

Of course, I personally would prefer it if the latency could be brought down even further. 100 ms or lower is the holy grail for me, and that only appears to be doable with codecs that aren't supported by WebRTC. However, people don't want to install apps just for my little service, and I certainly won't encode every stream via several codecs just for the tiny minority of the user base that actually ends up using the app.
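The ffmpeg side of a pipeline like that might look roughly as follows. This is a sketch, not the commenter's actual command; the capture device, bitrate, and the RTP destination (a hypothetical Kurento RtpEndpoint on port 5004) are assumptions:

    # Capture 720p/25fps, encode VP8 tuned for speed over compression,
    # and push RTP toward a hypothetical Kurento RtpEndpoint.
    ffmpeg -f v4l2 -framerate 25 -video_size 1280x720 -i /dev/video0 \
      -c:v libvpx -deadline realtime -cpu-used 8 -b:v 2M -an \
      -f rtp rtp://127.0.0.1:5004

The -deadline realtime / -cpu-used pair is what trades compression efficiency for encode speed.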
Very cool from a tech standpoint.

From a product point of view, I find it interesting that the illustrations/concept videos for these things always show people interacting very close to the wall, e.g. playing chess or sitting around a table:

https://tonari.no/static/media/family.48218197.svg

But in practice, people tend to keep their distance from it. E.g. the pictures of this setup tend to show people clustered in their own group on each side of the wall, with a solid 2-3 meters between them and the wall:

https://blog.tonari.no/images/ea56c74d-a55d-4183-9a7b-d697954c5159-tonari-frontier-2.png.optimized.jpg

It makes sense: it's awkward to be close to a large solid (emissive) surface, and humans instinctively get closer to their in-group when faced with an out-group. I wonder how the system could be designed to encourage participants to come closer, if there is an advantage to that.
Why exactly do existing video streaming solutions use such small amounts of bandwidth and have terrible quality as a result? Does anyone have a deep dive into why this is the case? It seems that making better use of available bandwidth would be a killer feature.

Even over wifi, speedtest shows 4 ms ping and 100 Mbit/s up and down on my internet connection, but Zoom, FaceTime, and others never use more than about 0.8 Mbit/s for a video stream, and the resulting quality of audio and video is... understandably poor.

Latency, too, totally feels like a software problem, perhaps with too many layers of abstraction: 60 fps → 16 ms for the camera, ~10 ms for encoding with NVENC or equivalents, 35 ms measured one-way latency from my laptop to my parents 4000 km away, ~10 ms decode, 16 ms frame delay = 87 ms one way. Maybe I'm asking too much of non-realtime systems (I'm used to RTOS, extensive use of DMA, zero-copy network drivers, etc.), but it seems there is a lot of room to improve.
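Summing that budget makes the point explicit (same figures as above):

    # camera + encode + network + decode + display, in ms
    echo $(( 16 + 10 + 35 + 10 + 16 ))   # prints 87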
The bottleneck is not on the CPU. I'm afraid this company may have wasted their time trying to reinvent WebRTC. If you really want realtime video, I think the best approach is a custom codec on CUDA, or better yet custom hardware (an FPGA). You can only go so far on general-purpose hardware before you hit a wall and get Zoom/WebEx quality.
This is welcome news.

I have been itching to convert a small headshot video stream (think under 100x100 px) to audio, stream it over Mumble, and then convert it back to video, just to see what the latency is like. It would obviously be a big undertaking, but not as big as this, methinks.
"We wanted to know and understand every line of code being run on our hardware, and it should be designed for the exact hardware we wanted."<p>This rings very true for every high-performance thing I've ever worked on, from games to trading systems.
Any suggestions for an effective group video conferencing tool for use on a local network (Ethernet)? Either self-hosted or online, just for personal use to talk with others.
"A week of struggling with WebRTC’s nearly 750,000 LoC behemoth of a codebase revealed just how painful a single small change could be — how hard it was to test, and feel truly safe, with the code you were dealing with."<p>I <i>totally</i> feel you. It's impressive what the WebRTC implementation has achieved, but it's just not pleasant at all to work with it.
130 ms is a world better than 500 ms and a most welcome improvement, but it is still terrible.

Latency happens throughout the whole stack; unfortunately, much of it would need to be fixed outside this project to achieve any further significant improvement. The operating system, firmware, and black-box hardware are other non-negligible sources of latency. Everything adds up.
This is amazing! The first thing that popped into my mind on seeing the life-sized "portal" was the farcaster portals from the sci-fi novel Hyperion:

https://hyperioncantos.fandom.com/wiki/Farcaster
I wonder how it compares with Apple FaceTime on two new MacBooks with Ethernet connections on both sides. They actually work on reducing latency and pushing high-res video if your connection supports it.
    for crate in $(ls */Cargo.toml | xargs dirname); do
      (cd "$crate" && cargo build)
    done

Why do this instead of

    cargo build --workspace

Is it so you can time the individual crates?
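If it is for per-crate timings, a small sketch like this would do it (assuming each directory name matches its package name, which Cargo doesn't guarantee):

    # Hypothetical: time each crate's build separately.
    for crate in $(ls */Cargo.toml | xargs dirname); do
      echo "== $crate =="
      time cargo build -p "$crate"
    done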
"we truly don't believe we could have achieved these numbers with this level of stability without Rust"<p>Oh please. This is just rust sensationalism. People don't truly believe rust is faster than C do they?
I wonder how much bandwidth this uses. The less bandwidth it uses, the higher the latency, because of compression. It's much easier to get low-latency video when you have large (gigabit+) links.