I've recently been tasked with finding a live video solution for an industrial device. In my case, I want to display video from a camera on a local LCD and simultaneously allow it to be live streamed over the web. By web, I mean that the most likely location of the client is on the same LAN, but this is not guaranteed. I figured this has to be a completely solved problem by now.

Anyway, I've tried many of the recent protocols. I was really hoping that HLS would work, because it's so simple. For example, I can use gstreamer's "hlssink" to generate the files and basically deliver video with a one-line shell script and any webserver. But the 7-second best-case latency is unacceptable. I really want 1 second or better.

I looked at MPEG-DASH: it seems equivalent to HLS. Why would I use it when all of the MPEG-DASH examples fall back on HLS?

I looked at WebRTC, but I'm too nervous to build a product around the few sample client/server codebases I can find on GitHub. They are not fully baked, and then I'm really depending on a non-standard solution.

I looked at Flash: but of course it's not desirable these days.

So the solution that works for me happens to be the oldest: Motion JPEG, where I have to give up good video compression (MPEG). I get below 1 second of latency with no coding (use ffmpeg + ffserver). Luckily Internet Explorer is dead enough that I don't have to worry about its lack of support. It works everywhere else, including Microsoft Edge. MJPEG is not great in that the latency can grow if the client can't keep up; I think WebRTC is likely better there.

Conclusion: here we are in 2019, and the best low-latency video delivery protocol is from the mid-90s. It's nuts. I'm open to suggestions in case I've missed anything.
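For reference, a minimal sketch of both pipelines (assumptions: a V4L2 camera at /dev/video0, a web root at /var/www/hls, and an ffmpeg new enough to have the mpjpeg muxer and -listen; note ffserver was dropped in FFmpeg 4.0, so the second line serves the stream from ffmpeg itself):

    # HLS via gstreamer's hlssink: one line plus any static webserver.
    gst-launch-1.0 v4l2src device=/dev/video0 ! videoconvert \
      ! x264enc tune=zerolatency bitrate=2000 ! h264parse ! mpegtsmux \
      ! hlssink location=/var/www/hls/segment%05d.ts \
        playlist-location=/var/www/hls/playlist.m3u8 target-duration=2

    # MJPEG over HTTP straight from ffmpeg: the mpjpeg muxer emits
    # multipart JPEG that browsers render natively in an <img> tag.
    ffmpeg -f v4l2 -i /dev/video0 -f mpjpeg -listen 1 http://0.0.0.0:8080/stream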
The major criticism the author has is the requirement of HTTP/2 push for ALHLS, which many CDNs don't support. While I agree it is a valid criticism, I am glad Apple is forcing the CDNs to support push. Without the 800 lb gorilla pushing everyone to upgrade, we would still be using pens on touchscreens.

I am not a fan of Apple obsoleting features that people love and use. But I always support it when Apple forces everyone to upgrade, because friction from existing providers is what keeps things slow and old. Once Apple requires X, everyone just sighs and updates their code, and 12 months later we are better off for it.

That being said, I agree with the author's disappointment that Apple mostly ignored LHLS instead of building upon it. Chunked encoding does sound better.
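As an aside, checking whether an edge even negotiates HTTP/2 is a one-liner (hypothetical host; this only verifies the protocol version, since push support isn't observable from a plain curl):

    # Prints the negotiated HTTP version, e.g. "2".
    curl -sI --http2 -o /dev/null -w '%{http_version}\n' "https://cdn.example.com/"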
It's ironic that "live streaming" has gotten worse since it was invented in the 1930s. Up until TV went digital, the delay on analog TV was just the speed-of-light transmission time plus a little bit for broadcasting equipment. It was so small it was imperceptible. If you had a portable TV at the live event, you just heard a slight echo.

Now the best we can do is over 1 second, and closer to 3 seconds for something like satellite TV, where everything is under the broadcaster's control from end to end.

I suppose this is the tradeoff we make for using more generalized equipment that has much broader worldwide access than analog TV.
This title is unnecessarily inflammatory, with the intent of winning our sympathy for the position presented.

The technical writeup in this post is spot-on, though. I prefer less drama with my bias, but I'm very glad I read this.
Looks like research as far back as 2014 has pointed to some big gains from HTTP/2 push: https://dl.acm.org/citation.cfm?id=2578277
> A Partial Segment must be completely available for download at the full speed of the link to the client at the time it is added to the playlist.

So with this, you cannot have a manifest that points to future chunks (e.g., covering the next 24 hours of a live stream) and delay processing of the HTTP request until the chunk becomes available, i.e., HTTP long polling applied to chunks.

> On the surface, LHLS maintains the traditional HLS paradigm, polling for playlist updates, and then grabbing segments, however, because of the ability to stream a segment back as it's being encoded, you actually don’t have to reload the playlist that often, while in ALHLS, you’ll still be polling the playlist many times a second looking for new parts to be available, even if they’re then pushed to you off the back of the manifest request.

Which could be avoided if Apple didn't enforce availability "at the full speed" of the link the moment a part appears in the manifest (long polling of chunks).

LHLS doesn't have this issue, as the manifest file itself is streamed via chunked responses (a streaming manifest file), so the design holds together there.

> For the time being at least, you’ll have to get your application (and thus your low latency implementation) tested by Apple to get into the app store, signaled by using a special identifier in your application’s manifest.

And this makes me wonder about the implementability of the first and second points above in ALHLS. Maybe the current "implementation" is compatible in practice, but not with the spec itself.
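To make the two patterns concrete (hypothetical URLs; the _HLS_msn/_HLS_part delivery directives are from Apple's spec, the rest is a sketch):

    # ALHLS: re-request the playlist, blocking until the next part exists
    # (the server holds the response; once a part is listed, it must then
    # be downloadable at full link speed).
    curl -s "https://example.com/live/media.m3u8?_HLS_msn=42&_HLS_part=3"

    # Long polling the chunk itself (the pattern the rule forbids): the
    # client requests a future part early and the server holds the request,
    # streaming bytes as the encoder produces them.
    curl -s --max-time 10 -o part42.3.m4s "https://example.com/live/part42.3.m4s"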
Apple low latency test stream I set up, if useful (uses a CDN):
https://alhls-switchmedia.global.ssl.fastly.net/lhls/master.m3u8
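Quick ways to poke at it (standard tools; a player without the low-latency extensions should still play it, just at regular HLS latency):

    # Fetch the master playlist:
    curl -s "https://alhls-switchmedia.global.ssl.fastly.net/lhls/master.m3u8"

    # Play with a non-low-latency client:
    ffplay "https://alhls-switchmedia.global.ssl.fastly.net/lhls/master.m3u8"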
> measuring the performance of a blocking playlist fetch along with a segment load doesn’t give you an accurate measurement, and you can’t use your playlist download performance as a proxy.

I don’t see why this would be the case. If you measure from the time the last bit of the playlist is returned to the time the last bit of the video segment is pushed to the client, you’ll be able to estimate bandwidth accurately.
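Something like this sketch of the estimate (hypothetical $BASE and segment name; a plain sequential GET here, since curl's CLI can't drive an H2 push): throughput is roughly segment bytes divided by the time between the first and last byte of the response body.

    # size_download / (time_total - time_starttransfer) ~ transfer bandwidth
    curl -s -o /dev/null \
      -w '%{size_download} %{time_starttransfer} %{time_total}\n' \
      "$BASE/segment_042.m4s" |
      awk '{ printf "estimated bandwidth: %.0f bytes/s\n", $1 / ($3 - $2) }'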
As usual, Apple pushes NIH instead of supporting DASH, which is the common standard. And they also tried to sabotage adoption of the latter by refusing to support MSE (Media Source Extensions) on the client side in iOS Safari, which is needed for handling DASH.