Hi, I've built the demo. Unfortunately, it's running on a single GPU and can only be used by a few users concurrently. For a better real-time experience, you would need a dedicated machine.<p>You can get the source and instructions for running it here:
<a href="https://github.com/radames/Real-Time-Latent-Consistency-Model">https://github.com/radames/Real-Time-Latent-Consistency-Mode...</a>
See video here:
<a href="https://twitter.com/radamar/status/1718783886413709542" rel="nofollow noreferrer">https://twitter.com/radamar/status/1718783886413709542</a> or on the GitHub readme.<p>FYI, this is made possible by a new technique, latent consistency models: <a href="https://latent-consistency-models.github.io" rel="nofollow noreferrer">https://latent-consistency-models.github.io</a>, which works by fine-tuning an existing model so it can generate images in just a few denoising steps. The authors will soon publish the training script, and then we'll see all the cool image models running at this speed. I'm excited about this! I can see many interesting experiments and projects emerging from it.
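<p>If you want a feel for what few-step LCM inference looks like, here's a rough sketch using the diffusers library. Note that the model id SimianLuo/LCM_Dreamshaper_v7 and the exact arguments are my assumptions based on the public LCM release, not the code this demo actually runs:<p>
  # Minimal sketch of few-step LCM inference with diffusers.
  # NOTE: model id and arguments are assumptions from the public
  # LCM release, not this demo's actual code.
  import torch
  from diffusers import DiffusionPipeline

  pipe = DiffusionPipeline.from_pretrained(
      "SimianLuo/LCM_Dreamshaper_v7",  # assumed LCM checkpoint
      torch_dtype=torch.float16,
  )
  pipe.to("cuda")

  # LCMs need only ~4 denoising steps instead of the usual 25-50,
  # which is what makes near-real-time generation feasible.
  image = pipe(
      prompt="a photo of an astronaut riding a horse",
      num_inference_steps=4,
      guidance_scale=8.0,
  ).images[0]
  image.save("lcm_out.png")
<p>The real-time demo streams frames through a pipeline like this in a loop; the few-step sampling is what keeps latency low enough for interactive use.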
Well. Everything in AI seems to be going faster than I thought, even after accounting for my assumption that things will go faster than I expect.<p>This will be great for creating truly interesting avatars during video calls, instead of the simplistic fitting of facial landmarks to a 3D model (if the temporal consistency can be fixed).
Is there a non-live demo somewhere? I don't think Hugging Face's max of 4 concurrent users is going to handle the HN crowd. Still curious what it looks like in action, though.