>And finally, we encounter a large issue without a good solution. In encoded video, a key frame is a frame that contains all the visual information needed to render itself without referencing any other frame. Key frames are much larger than normal frames and contribute significantly to the bitrate, so ideally there would be as few of them as possible. However, when a new user starts consuming a stream, they need at least one key frame before they can view the video. WebRTC solves this problem using the RTP Control Protocol (RTCP): when a new user consumes a stream, they send a Full Intra Request (FIR) to the producer, and when the producer receives this request, it inserts a key frame into the stream. This keeps the bitrate low while ensuring all users can view the stream. FFmpeg does not support RTCP, which means that with the default settings its output won't be viewable when consumed mid-stream, at least until the next key frame arrives. Therefore, the parameter -force_key_frames expr:gte(t,n_forced*4) is needed, which produces a key frame every 4 seconds.<p>In case someone was wondering why it was a bad idea.
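For reference, a minimal sketch of what such an invocation might look like (the input file, codec choice, and RTP destination here are hypothetical, not from the quoted article):

```shell
# Hypothetical sketch: re-encode a source and force a key frame every 4 seconds.
# The gte(t,n_forced*4) expression tells the encoder to emit forced key frame
# number n once the timestamp t reaches n*4 seconds. Quoting the expression
# protects the parentheses from the shell.
ffmpeg -i input.mp4 \
  -c:v libx264 \
  -force_key_frames "expr:gte(t,n_forced*4)" \
  -an \
  -f rtp rtp://127.0.0.1:5004
```

Lowering the multiplier trades bitrate for faster join times, since a mid-stream consumer waits on average half the key-frame interval before it can decode.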