
PixArt-α: A New Open-Source Text-to-Image Model Challenging SDXL and DALL·E 3

77 points by liuxiaopai over 1 year ago

9 comments

Animats over 1 year ago
This has problems usually not seen with current systems. It's produced human characters with one thick leg and one thin leg. Three legs of different sizes. Three arms.

It can do humans in passive poses, but ask for an action shot and it botches it badly. It needs more training data on how bodies move. Maybe load it up with stills from dance, martial arts, and sports.
GaggiX over 1 year ago
The most interesting aspect of this model is that it is very training-efficient: https://pixart-alpha.github.io/

It also borrows DALL·E 3's idea of training the model on synthetic captions.
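For context, the PixArt-α paper describes generating these dense synthetic captions by running a vision-language model (LLaVA) over the training images. A minimal Python sketch of the idea follows; the checkpoint id and prompt format are illustrative assumptions, not the paper's actual pipeline:

    # Sketch: dense pseudo-captioning with a vision-language model.
    # The checkpoint and prompt below are assumptions for illustration;
    # they are not taken from the PixArt-alpha training code.
    import torch
    from PIL import Image
    from transformers import AutoProcessor, LlavaForConditionalGeneration

    model_id = "llava-hf/llava-1.5-7b-hf"  # assumed captioner checkpoint
    processor = AutoProcessor.from_pretrained(model_id)
    model = LlavaForConditionalGeneration.from_pretrained(
        model_id, torch_dtype=torch.float16, device_map="auto"
    )

    def pseudo_caption(image_path):
        # Ask the VLM for a dense, information-rich description,
        # which then replaces the image's original sparse caption.
        image = Image.open(image_path)
        prompt = "USER: <image>\nDescribe this image in one dense, detailed sentence. ASSISTANT:"
        inputs = processor(text=prompt, images=image, return_tensors="pt").to(model.device)
        out = model.generate(**inputs, max_new_tokens=96)
        return processor.decode(out[0], skip_special_tokens=True)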
ShamelessC over 1 year ago
Why name it PixArt when it covers a broader range of media than simply pixel art? Super confusing.
krasin over 1 year ago
The source code is licensed under AGPL-3.0. Perfect for these kinds of models: https://github.com/PixArt-alpha/PixArt-alpha
gigel82 over 1 year ago
From their GitHub:

> This integration allows running the pipeline with a batch size of 4 under 11 GBs of GPU VRAM. GPU VRAM consumption under 10 GB will soon be supported, too. Stay tuned.
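For context, here is a minimal sketch of what that integration looks like from the diffusers side. The checkpoint id matches the project's Hugging Face release, but the offload call and memory behavior are assumptions based on the README, not measured numbers:

    # Sketch: running the PixArt-alpha diffusers pipeline with CPU offload.
    # Checkpoint id and memory characteristics are assumptions, not measurements.
    import torch
    from diffusers import DiffusionPipeline

    pipe = DiffusionPipeline.from_pretrained(
        "PixArt-alpha/PixArt-XL-2-1024-MS", torch_dtype=torch.float16
    )
    pipe.enable_model_cpu_offload()  # keep idle submodules on CPU to lower peak VRAM

    # Batch of 4 prompts, matching the quoted batch-size-4 scenario.
    prompts = ["A small cactus with a happy face in the Sahara desert"] * 4
    images = pipe(prompt=prompts).images
    images[0].save("sample.png")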
ilaksh over 1 year ago
Seems to have pretty good understanding and performance.
camdenlock over 1 year ago
This appears to be work sponsored by Huawei.
andromeduck over 1 year ago
Thought this was going to be a new optical sensor series :(
philmitchell47 over 1 year ago
I think it's maybe a bit disingenuous to claim such improvements in training efficiency when they rely on:

- Existing models for data pseudo-labelling
- ImageNet pretraining
- A frozen text encoder
- A frozen image encoder