I'd love to see something like this as a plugin for OBS. I've been using OBS a lot lately, thanks to all the video conferencing we're all growing to love, since it's got basic color correction/manual controls for my webcam feed.<p>It's got the option for "real" chromakey, but like the author, I don't have a green screen, Amazon isn't scheduling any deliveries for another month, and I don't feel like a trip to the fabric store would count as "essential" travel (especially if it's only so I can screw around with stupid backgrounds).<p>I tried a few different sheets/blankets I had at home, but none are a suitable color or uniform/matte enough to work well, even with proper lighting. I admit this is a total non-issue and only something I want to play around with, but it would be fun nonetheless.
Good article, thanks. BTW, this is the method Jitsi Meet uses; they also use BodyPix. <a href="https://github.com/jitsi/jitsi-meet/blob/master/react/features/stream-effects/blur/JitsiStreamBlurEffect.js" rel="nofollow">https://github.com/jitsi/jitsi-meet/blob/master/react/featur...</a>
Found something similar yesterday, which is basically deepfake for avatars - <a href="https://github.com/alievk/avatarify" rel="nofollow">https://github.com/alievk/avatarify</a>
Since it looks like your webcam is mounted to a stationary PC and you're only really writing this for yourself, wouldn't it be a lot easier to just subtract out a static picture of your background from the feed?
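A minimal sketch of that idea in NumPy (the threshold value is arbitrary; a real pipeline would also want some blurring/morphology to clean up sensor noise):

```python
import numpy as np

def composite_static_background(frame, background, virtual_bg, threshold=30):
    """Replace pixels that match a stored shot of the empty room.

    frame, background, virtual_bg: HxWx3 uint8 arrays of the same size.
    threshold: per-channel difference (0-255) below which a pixel is
    treated as unchanged background. Picked arbitrarily here; lighting
    flicker and auto-exposure make this fiddly to tune in practice.
    """
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    mask = diff.max(axis=2) > threshold            # True where something changed
    return np.where(mask[..., None], frame, virtual_bg).astype(np.uint8)
```

Camera noise and auto-exposure drift tend to make naive subtraction flaky, which is presumably why people reach for segmentation models instead, but for a fixed camera and stable lighting it can be surprisingly serviceable.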
I'm working on something very similar right now, so it's great to see that this is actually possible with currently available open-source resources. I'm trying to re-create Skype's background blur feature, but as something I can use in OBS (obsproject.com), so I can apply it just to my webcam without messing up the other stream elements for online lectures.
I've been using the DeepLab model from the TensorFlow research repo to do segmentation, but BodyPix seems even better suited for the job. If it performs any better, this might be the break I was looking for...
This is pretty awesome work, but I just wanted to point out that Zoom doesn't actually require a green screen. If you uncheck the "I have a green screen" button, choosing your own virtual background still looks really good, although of course you're not going to get any crazy effects like this script adds.
Using video loopback opens up some great creative possibilities for fun with video conferencing.<p>There's one thing I'd love to achieve though, which doesn't seem possible on a Linux desktop (specifically Kubuntu)...<p>I want to be able to use the loopback as the source for screen share instead of webcam; i.e. to use the loopback as the conference presentation.<p>Has anyone got any ideas how to achieve this? Most conference solutions on Linux don't seem to support either 'share this window' or 'share this screen region'; it seems to be the whole desktop or nothing.
Okay, now please use your skills with AI and virtual webcams to create a script that just generates a picture of me that nods at the right moments during a Zoom call ;)
Really good write-up of a flexible approach.<p>If you just want an easy green screen, <a href="https://obsproject.com/" rel="nofollow">https://obsproject.com/</a> has a very good chromakey filter and a v4l2loopback plugin <a href="https://github.com/CatxFish/obs-v4l2sink" rel="nofollow">https://github.com/CatxFish/obs-v4l2sink</a>
PyTorch has DeepLab available on its hub (I'm sure TF has something similar).<p>It's a couple of lines to use: <a href="https://pytorch.org/hub/pytorch_vision_deeplabv3_resnet101/" rel="nofollow">https://pytorch.org/hub/pytorch_vision_deeplabv3_resnet101/</a>
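For completeness, those couple of lines look roughly like this, plus a small helper that turns the model output into a person mask (class index 15 is "person" in the Pascal VOC label set this model is trained on; the commented-out frame handling is a sketch, not a full pipeline):

```python
import numpy as np

PERSON_CLASS = 15  # "person" in the Pascal VOC label set used by DeepLab

def person_mask(logits):
    """logits: (num_classes, H, W) array from the model's 'out' head.
    Returns a boolean (H, W) mask, True where 'person' is the top class."""
    return logits.argmax(axis=0) == PERSON_CLASS

if __name__ == "__main__":
    import torch
    from torchvision import transforms

    model = torch.hub.load('pytorch/vision', 'deeplabv3_resnet101',
                           pretrained=True)
    model.eval()

    # Standard ImageNet normalization, per the hub page.
    preprocess = transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])
    # frame: an HxWx3 RGB uint8 image from your webcam, e.g. via OpenCV
    # batch = preprocess(frame).unsqueeze(0)
    # with torch.no_grad():
    #     logits = model(batch)['out'][0].numpy()
    # mask = person_mask(logits)
```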
Fascinating write-up, Ben; who would have known that you were a genius with image processing as well as running containers :-) Love the gory details, and I didn't know about pyfakewebcam either.<p>Do you have a recorded video showing how quickly it can process a stream?
Great read!
I am curious, then, why the Linux client doesn't support this, if all it takes is sending our webcam stream off to be processed server-side?<p>P.S. What happens when they add e2ee to the webcam stream?
Brilliant!<p>Now, I know that there are many companies that force people to be on a live video feed, and that many people don't really like it.<p>How about recording a 3-minute clip and playing it in an infinite loop, creating a fake feed (remember Keanu Reeves' Speed?), so that people still appear to be on camera but can actually get things done? A mask on the face would be a simple addition to avoid detection. As the saying goes, modern problems require modern solutions!
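A sketch of how the loop itself could work with OpenCV and pyfakewebcam (the clip name and device path are made up, and a whole clip in RAM is memory-hungry; a ping-pong index avoids the jump cut at the loop seam):

```python
def pingpong_index(i, n):
    """Map a monotonically increasing frame counter onto 0..n-1 and back,
    so the clip plays forward then backward and never visibly jumps."""
    if n == 1:
        return 0
    period = 2 * n - 2
    j = i % period
    return j if j < n else period - j

if __name__ == "__main__":
    import time
    import cv2
    import pyfakewebcam

    cap = cv2.VideoCapture('me_nodding.mp4')       # hypothetical 3-min clip
    frames = []
    while True:
        ok, f = cap.read()
        if not ok:
            break
        frames.append(cv2.cvtColor(f, cv2.COLOR_BGR2RGB))

    h, w = frames[0].shape[:2]
    fake = pyfakewebcam.FakeWebcam('/dev/video20', w, h)  # v4l2loopback device
    i = 0
    while True:
        fake.schedule_frame(frames[pingpong_index(i, len(frames))])
        time.sleep(1 / 30)                          # assume ~30 fps playback
        i += 1
```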
Thanks for sharing! I have been working on the exact same project with TensorFlow &amp; BodyPix; really helpful to compare notes &amp; see the pyfakewebcam approach!
Very interesting.<p>The magic is pyfakewebcam and v4l2loopback. I was looking for a way to turn myself into a potato on Teams; the bit I was missing was how to create a virtual webcam.
Pretty cool write-up of how this can be replicated with OpenCV and off-the-shelf libraries like BodyPix. I imagine Zoom is doing something like this too.