Note that there are a lot of tunings you may need, depending on your latency and picture-quality tolerances. I would recommend following FFmpeg's streaming guide [0].<p>If you are trying to stream desktop, camera, and microphone to the browser, I would recommend pion's mediadevices package [1].<p>[0] - <a href="https://trac.ffmpeg.org/wiki/StreamingGuide" rel="nofollow">https://trac.ffmpeg.org/wiki/StreamingGuide</a><p>[1] - <a href="https://github.com/pion/mediadevices" rel="nofollow">https://github.com/pion/mediadevices</a>
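To give a flavor of the tradeoff, a sketch of a desktop capture pipeline (not from the guide itself; the display, bitrate, and UDP destination are placeholders). `-preset ultrafast -tune zerolatency` trades compression efficiency for encode latency; a slower preset favors picture quality instead:<p>ffmpeg -f x11grab -framerate 30 -i :0.0 -c:v libx264 -preset ultrafast -tune zerolatency -b:v 2M -f mpegts udp://127.0.0.1:1234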
My all-time question about FFmpeg is what all those timestamp-correction flags and synchronization options are for:<p>* -fflags +genpts, +igndts, +ignidx<p>* -vsync<p>* -copyts<p>* -use_wallclock_as_timestamps 1<p>* And more that you find even when you thought you had seen every flag that might be related.<p>FFmpeg's docs are a strange beast: they cover a lot of topics, but are extremely shallow in most of them, so the overall quality ends up being pretty poor. It's like the kind of frowned-upon code comment such as "ignidx ignores the index; genpts generates PTS". No surprises there... but no real explanation, either.<p>What I'd love is a real, technical explanation of the consequences of each flag, and more importantly, the kinds of scenarios where it would make a desirable difference.<p>Especially for the case of recording live video that comes over an unreliable connection (RTP through UDP) <i>and storing it as-is (no transcoding whatsoever)</i>: what is the best, or recommended, set of flags that the FFmpeg authors would suggest? Given that packets can get lost, timestamps can get garbled, UDP packets can be reordered in the network, or any combination of funny stuff can happen.<p>For now I've sort of settled on genpts+igndts plus use_wallclock_as_timestamps, but that comes from intuition and simple tests, not from actual evidence or technical documentation of each flag.
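For reference, the invocation I've converged on looks roughly like this (a sketch; stream.sdp and recording.mkv are placeholders, and the -protocol_whitelist part is just what RTP-over-UDP input via an SDP file typically requires):<p>ffmpeg -protocol_whitelist file,udp,rtp -fflags +genpts+igndts -use_wallclock_as_timestamps 1 -i stream.sdp -c copy recording.mkv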
For `-f h264`, `-bsf:v h264_mp4toannexb` is not needed: with ffmpeg 4.0 or later it is inserted automatically when required.<p>For lower latency, specify a short GOP size, e.g. `-g 50`.
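A minimal sketch of an encoding pipeline with both adjustments (the capture device and bitrate are made up; when encoding with libx264 the output is Annex B already, so no bitstream filter is involved):<p>ffmpeg -f v4l2 -i /dev/video0 -c:v libx264 -g 50 -b:v 2M -f h264 -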
I personally use this project to proxy an IP camera's RTSP stream over WebSockets as fragmented MP4 - <a href="https://github.com/deepch/RTSPtoWSMP4f" rel="nofollow">https://github.com/deepch/RTSPtoWSMP4f</a><p>I'm not affiliated with the project; it's just really performant and reliable.
Nice stuff; I did something similar with ffmpeg and pion.<p>It was for audio, going from WebRTC to ffmpeg; I was streaming a group chat directly to S3.<p>It mostly worked; the only problem I ran into was syncing issues when a user had a spotty connection. The solution seemed to involve using RTMP to synchronize, but I didn't have a chance to go down that rabbit hole.
Hopefully WHIP takes off. It's a standardized protocol that would let other tools easily interface with WebRTC ingest.<p><a href="https://www.meetecho.com/blog/whip-janus/" rel="nofollow">https://www.meetecho.com/blog/whip-janus/</a><p><a href="https://millicast.medium.com/whip-the-magic-bullet-for-webrtc-media-ingest-57c2b98fb285" rel="nofollow">https://millicast.medium.com/whip-the-magic-bullet-for-webrt...</a>
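The protocol itself is basically just an HTTP POST of an SDP offer; per the draft, the server replies 201 Created with the SDP answer in the body and a Location header for the session resource. A sketch (the endpoint URL and offer.sdp are placeholders):<p>curl -X POST -H 'Content-Type: application/sdp' --data-binary @offer.sdp https://example.com/whip/room1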
I did something similar for Mac a while back [0]. I never really developed it much further because of the latency issues. Since it was for surveillance cameras, that was a showstopper.<p>[0] <a href="https://github.com/RiftValleySoftware/RVS_MediaServer" rel="nofollow">https://github.com/RiftValleySoftware/RVS_MediaServer</a>
To the author: if you really want to be permissive about what others can do with your software, an MIT, BSD, or Apache 2 license (the last being more complete, in that it even includes a patent grant) seems to be more widely recognized and better tested than the Unlicense. Unless you chose that license for some solid reason, I'd suggest considering a switch to one of the better-regarded licenses.<p>* <a href="https://softwareengineering.stackexchange.com/questions/147111/what-is-wrong-with-the-unlicense" rel="nofollow">https://softwareengineering.stackexchange.com/questions/1471...</a><p>* <a href="https://news.ycombinator.com/item?id=3610208" rel="nofollow">https://news.ycombinator.com/item?id=3610208</a>
I was just looking for something to do this, but couldn't find much. I need to serve up about 1000 cameras to both HLS (for the public) and WebRTC (for low-latency/PTZ admin use). Today we do it with paid packages, but I was exploring just using ffmpeg + nginx. HLS is easy enough, but since WebRTC is not HTTP, it needs its own piece. Anyone have ideas on this? I'm familiar with Wowza and Ant. Any other open-source utilities that do RTSP to both HLS/WebRTC?
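For the HLS leg, this is roughly what I had in mind, with nginx just serving the output directory (camera URL and paths are placeholders, and it assumes the cameras already emit H.264 so stream copy works):<p>ffmpeg -rtsp_transport tcp -i rtsp://camera1/stream -c copy -f hls -hls_time 2 -hls_list_size 5 -hls_flags delete_segments /var/www/hls/cam1.m3u8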
Since we're on this topic, I want to ask a question:<p>How do I play video files stored on my VPS to a Chromecast?<p>I want my mom to be able to watch a video from her TV, but I can't upload it to YouTube due to copyrighted content (yes, even if you set it unlisted, YouTube will block it).
[help request]<p>I created a commercial product, <i>Video Hub App</i>, and have been trying for a year to get video streaming from a PC to an iPhone working (through a PWA, not a dedicated iOS app), with zero success. I could get the video stream to play on a separate laptop through Chrome, but iOS Safari kicks my ass.<p>Does anyone have suggestions / ideas?<p><a href="https://github.com/whyboris/Video-Hub-App" rel="nofollow">https://github.com/whyboris/Video-Hub-App</a><p><a href="https://github.com/whyboris/Video-Hub-App-remote" rel="nofollow">https://github.com/whyboris/Video-Hub-App-remote</a>
Since there are probably some people experienced with ffmpeg here: is it possible to do image zooms with ffmpeg that go deeper than zoom factor 10?<p>I can zoom up to factor 10 like this:<p>ffmpeg -i someimage.jpg -vf "zoompan=z='10-on/100':d=1000:x='iw/2-(iw/zoom/2)':y='ih/2-(ih/zoom/2)':s=1920x1437" zoom.mp4<p>But everything above a zoom of 10 seems to fail. Is there a hard limit in the code for some reason? Some way to overcome this?<p>Or is there another nice Linux or online tool for zooming into images?
As someone completely new to Go, how do I run this? I have Go installed, but I can't seem to get any of the sample commands to work. I pulled the repo, cd'd into the directory, and ran the Go sample command provided in the source, but the terminal just hangs and blinks with no output.
This is what I got for:<p>go run . -rtbufsize 100M -f dshow -i video="Integrated Webcam" -pix_fmt yuv420p -c:v libx264 -bsf:v h264_mp4toannexb -b:v 2M -max_delay 0 -bf 0 - < SDP<p>Connection State has changed failed<p>Peer Connection State has changed: failed<p>Peer Connection has gone to failed exiting
Would using gstreamer instead of ffmpeg offer better or worse performance? (Less CPU usage on the sender side?) If anyone has experience with this setup, I’d love to know.