It would be interesting to see how far you could get using deepfakes as a method for video call compression.<p>Train a model locally ahead of time and upload it to a server, then whenever you have a call scheduled the model is downloaded in advance by the other participants.<p>Now, instead of having to send video data, you only have to send a representation of the facial movements so that the recipients can render it on their end. When the tech is a little further along, it should be possible to get good quality video using only a fraction of the bandwidth.
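To get a rough sense of the bandwidth involved, here is a minimal sketch of what a per-frame payload could look like (assuming a dlib-style 68-point landmark tracker, which is just one possible choice; the tracker and the renderer are the hard parts and are left out):<p><pre><code>  import struct

  # Hypothetical per-frame payload: instead of a compressed video frame, send only
  # the coordinates of the tracked facial keypoints and let the receiver render them.
  NUM_KEYPOINTS = 68  # dlib-style 68-point landmark layout (an assumption, not a requirement)

  def encode_keypoints(keypoints):
      """Pack (x, y) keypoint pairs as 32-bit floats for transmission."""
      flat = [coord for point in keypoints for coord in point]
      return struct.pack("%df" % len(flat), *flat)

  # Dummy keypoints standing in for the output of whatever face tracker you use.
  keypoints = [(0.5, 0.5)] * NUM_KEYPOINTS
  payload = encode_keypoints(keypoints)
  print(len(payload), "bytes per frame")  # 544 bytes, vs. tens of kilobytes for a JPEG frame
</code></pre>
Even with timestamps and some framing overhead on top, that is orders of magnitude less data than sending the frames themselves.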
The actual GitHub repo for this is called Avatarify: <a href="https://github.com/alievk/avatarify" rel="nofollow">https://github.com/alievk/avatarify</a><p>The original repo that Avatarify is based on is called First Order Model: <a href="https://github.com/AliaksandrSiarohin/first-order-model" rel="nofollow">https://github.com/AliaksandrSiarohin/first-order-model</a><p>Short video demonstrating First Order Model: <a href="https://www.youtube.com/watch?v=mUfJOQKdtAk" rel="nofollow">https://www.youtube.com/watch?v=mUfJOQKdtAk</a>
Voice cloning has already been done in real time as well.<p><a href="https://github.com/CorentinJ/Real-Time-Voice-Cloning" rel="nofollow">https://github.com/CorentinJ/Real-Time-Voice-Cloning</a>
I ran this on Ubuntu 18.04. It took a little work to get around a small bug that will be squashed when the v4l2loopback library gets officially rebuilt, but here is a solution:
<a href="https://github.com/alievk/avatarify/issues/37#issuecomment-614503547" rel="nofollow">https://github.com/alievk/avatarify/issues/37#issuecomment-6...</a><p>Anyway, on a 4930K at 4.5 GHz I am seeing reasonable performance (~30 fps) but minimal CUDA utilization (Pascal Titan).
The biggest issue is that you need a well-lit, stationary face for it to properly map the features of the JPG you are using as a substitute for your face. The JPG also needs clearly identifiable features. Even then, the range of facial expression is subdued (for example, closing your eyes is not properly processed).<p>I seem to recall software from about 10 years ago where you drew line segments over corresponding features on two images and the JPG was then mapped onto the video. It was more accurate and expressive than this is, but it did require time to set up.
The way the mouth and eyes move while the rest of the face stays pretty static is actually really charming to me.<p>It betrays that it's a fake, which I think makes it land better as a fun joke.
This is the best application of deep fakes I have seen! If someone were selling deep fakes of Star Trek/Star Wars characters, they would sell like hot cakes among the crowd here, except that too many Kirks and Picards might be seen in meetings.
Sounds like it would be worth a few minutes of fun on a daily status call during these work-from-home days of quarantine. So far the goofiest it’s gotten was someone using a virtual webcam that allowed for a green screen and a looping video of the Max Headroom background... Which reminds me, though slightly off topic: is there virtual webcam software whose sole point is to keep a log of all of the applications using the built-in camera or mic? That might be useful.
I’m sure we will see more and more of this. One question is, when these get good enough, how will people be able to tell whether they’re looking at a deep fake or a real human? Will we have captchas for videos soon? <i>sigh</i>
I seriously wonder how this will affect online dating. Not that I've dated in quite a few years, but even if I wanted to, I wouldn't go back, because last time I did, the proliferation of obnoxious Instagram filters and photoshopping made the experience unenjoyable. Fake people aren't appealing. I would bet that the ease with which the average person can deep fake will only make matters much worse. There will always be a demand for Tinder, Match, Bumble, etc., but they will be used strictly for hookups. (I know some will say that's what they are already for, but people have always made that argument about every dating app, so I can't take that opinion very seriously.) Actual dating will have to either go back to being more in-person or require a third party to handle photography.
All the new mobile banks seem to do their ID verification/KYC using video selfies. It's interesting to consider whether deepfakes are being used in the wild to fool these systems and commit fraud.
I am just waiting for someone to build a deep_nude_realtime_zoom plugin so I can finally tell people that we should take digital security, privacy and identity seriously.
Hey guys! I'm one of the founders of Impressions, the first mobile deepfake app. Try it out and give us your thoughts. Right now it's out for iOS, but Android is on the way. Here's our website: <a href="https://impressions.app" rel="nofollow">https://impressions.app</a>
I wanted to play with this, but ran into an error upon starting a Miniconda prompt after a fresh install of Miniconda3 on Win7:<p><pre><code> ModuleNotFoundError: No module named 'conda'
</code></pre>
It's triggered by this line in conda-script.py:<p><pre><code> from conda.cli import main
</code></pre>
Same thing happened when I tried installing Anaconda instead. Any suggestions?<p>----<p>EDIT: To get this to work, I had to remove a PYTHONHOME environment variable lingering from an old (but still valid) Python install (see <a href="https://github.com/conda/conda/pull/9239" rel="nofollow">https://github.com/conda/conda/pull/9239</a>), and <i>set CONDA_DLL_SEARCH_MODIFICATION_ENABLE=1</i> to avoid mkl_intel_thread.dll issues.<p>Leaving my comment here for anyone else who gets stumped.
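If you want to check whether the same stale variable is lurking on your machine before touching anything, a couple of plain-Python lines (nothing conda-specific, just a diagnostic) will show you:<p><pre><code>  import os

  # A lingering PYTHONHOME from another Python install is what broke conda for me.
  print("PYTHONHOME =", os.environ.get("PYTHONHOME", "(not set)"))
  print("CONDA_DLL_SEARCH_MODIFICATION_ENABLE =",
        os.environ.get("CONDA_DLL_SEARCH_MODIFICATION_ENABLE", "(not set)"))
</code></pre>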
OK, so how about an app that does the opposite of this, and can prove that the video/audio data hasn't been tampered with since it was created? I don't think it would need much more than public/private key cryptography, and a blockchain ledger to record every time the data was transmitted from one user to another. Thoughts?
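The integrity half of that is pretty standard public-key signing. Here is a minimal sketch in Python using Ed25519 from the third-party <i>cryptography</i> package (the device doing the recording would hold the private key; the ledger part is left out):<p><pre><code>  import hashlib
  from cryptography.hazmat.primitives.asymmetric import ed25519

  # Signing side: the capture device hashes the recorded video and signs the digest,
  # so any later edit to the bytes invalidates the signature.
  private_key = ed25519.Ed25519PrivateKey.generate()
  public_key = private_key.public_key()

  video_bytes = b"raw video data goes here"   # placeholder for the actual capture
  digest = hashlib.sha256(video_bytes).digest()
  signature = private_key.sign(digest)

  # Verification side: anyone holding the public key can check integrity.
  # verify() raises cryptography.exceptions.InvalidSignature if the bytes were altered.
  public_key.verify(signature, hashlib.sha256(video_bytes).digest())
  print("signature checks out")
</code></pre>
The hard part is everything around it: establishing that the public key really belongs to an untampered capture device, which is where the ledger (or any trusted registry) would have to come in.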
I don’t understand why everyone thinks they have to be on video for every single meeting. I never turn on my webcam. It makes no difference to the outcome of the call.
From the site's text:<p>> <i>and plenty of Americans and Russian's willing to use whatever means necessary to ensure Trump gets re-elected</i><p>That is not what the Russian attacks are about.<p>In fact, the maker of the site seems to have fallen for the attacks himself.<p>The attacks are about <i>dividing</i> society. Radicalizing people towards the left and the right. The "Trump haters" are just the same as the "Trump fanatics".<p>The author, by implying that the "bad guys" are just one side, is playing into the hands of the Russian trolls. It's exactly what the attacks are trying to achieve.