This has problems you don't usually see with current systems. It's produced human characters with one thick leg and one thin leg, three legs of different sizes, three arms.

It can do humans in passive poses, but ask for an action shot and it botches it badly.
It needs more training data on how bodies move. Maybe load it up with stills from dance, martial arts, and sports.
The most interesting aspect of this model is that it is very training-efficient: https://pixart-alpha.github.io/

It also uses the same idea as DALL-E 3: training the model on synthetic captions.
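For illustration, here's a rough sketch of the synthetic-captioning idea using an off-the-shelf captioner from Hugging Face. BLIP is used purely as a stand-in; this is not the authors' exact captioning pipeline, just the general recaptioning pattern:

    # Sketch: re-caption training images with an off-the-shelf captioner
    # (a stand-in for the dense auto-captioning approach described by PixArt-alpha).
    from PIL import Image
    from transformers import BlipProcessor, BlipForConditionalGeneration

    processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-large")
    model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-large")

    def synthetic_caption(path: str) -> str:
        image = Image.open(path).convert("RGB")
        inputs = processor(images=image, return_tensors="pt")
        out = model.generate(**inputs, max_new_tokens=60)
        return processor.decode(out[0], skip_special_tokens=True)

    # These synthetic captions would then replace noisy scraped alt-text
    # as the text targets during diffusion training.
    print(synthetic_caption("example.jpg"))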
The source code license is AGPL-3.0. Perfect for these kinds of models: https://github.com/PixArt-alpha/PixArt-alpha
From their GitHub:

> This integration allows running the pipeline with a batch size of 4 under 11 GBs of GPU VRAM. GPU VRAM consumption under 10 GB will soon be supported, too. Stay tuned.
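For context, a minimal sketch of what running that integration looks like, assuming the diffusers PixArtAlphaPipeline and the PixArt-alpha/PixArt-XL-2-1024-MS checkpoint referenced in their repo. The CPU-offload call is the usual diffusers memory-saving knob, not necessarily the exact trick behind the 11 GB figure:

    # Sketch: running PixArt-alpha through the diffusers integration.
    # Model ID and memory settings are assumptions based on their README.
    import torch
    from diffusers import PixArtAlphaPipeline

    pipe = PixArtAlphaPipeline.from_pretrained(
        "PixArt-alpha/PixArt-XL-2-1024-MS",
        torch_dtype=torch.float16,
    )
    pipe.enable_model_cpu_offload()  # standard diffusers option to reduce VRAM use

    prompts = ["a corgi surfing a wave at sunset"] * 4  # batch of 4, as in the quoted claim
    images = pipe(prompt=prompts, num_inference_steps=20).images
    for i, img in enumerate(images):
        img.save(f"pixart_{i}.png")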
I think it's a bit disingenuous to claim such improvements in training efficiency when they rely on:

- Existing models for data pseudo-labelling
- ImageNet pretraining
- A frozen text encoder
- A frozen image encoder