Better link: https://blackforestlabs.ai/announcing-flux-1-1-pro-and-the-bfl-api/
Flux is so frustrating to me. Really good prompt adherence, a strong ability to keep track of multiple parts of a scene; it's technically very impressive. However, it seems to have had no training on art-art. I can't get it to generate even something that looks like Degas, for instance, and I can't fine-tune a painterly art style of any sort into Flux dev. I get that there was backlash from working, living artists at SD, and I can therefore imagine that the BFL team has decided not to train on art, but it's a real loss, both in terms of human knowledge of, say, composition and emotion, and in terms of style diversity.

For goodness' sake, the Met in New York has a massive trove of open, CC0-type licensed art. Dear BFL, please ease up a bit on this and add some art-art to your models; they will be better as a result.
Pretty smart model. Here's one I made: https://replicate.com/p/6ez0x8xqvsrga0cjadg8m7bah0
"state of the art" has become such tired marketing jargon.<p>"our most advanced and efficient model yet"<p>"a significant step forward in our mission to empower creators"<p>I get it, you can't sell things if you don't market them, and you can't make a living making things if you don't sell them, but it's exhausting.
Far more interesting will be when Pony Diffusion V7 launches.

No one in the image space wants to admit it, but well over half of your user base wants to generate hardcore NSFW with your models, and they mostly don't care about any other capabilities.
Ah, that was one short gravy train even by modern tech company standards. Really wish the space was more competitive and open so it wouldn't just be one company at the top locking their models behind APIs.
It doesn't get piano keyboards right, but it's the first image generator I've tried that sometimes gets "someone playing accordion" mostly right.

When I ask for a man playing accordion, it's usually a somewhat flawed piano accordion, but if I ask for a woman playing accordion, it's usually a button accordion. I've also seen a few that are half-button, half-piano monstrosities.

Also, if I ask for "someone playing accordion", it's always a woman.
I'm running Asahi Linux on a 32GB M1 Pro. Any chance of being able to run text-to-image models locally? I've had some success with LLMs, but only the smaller models. No idea where to start with images, everything seems geared towards msft+nvda.
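Not an answer from the thread, just a minimal sketch of one place to start, assuming the Hugging Face diffusers library and a small Stable Diffusion checkpoint running on CPU (the model choice and speed expectations are assumptions; untested on Asahi):

    # CPU-only text-to-image sketch with Hugging Face diffusers.
    # Assumes: pip install torch diffusers transformers accelerate
    # A small SD checkpoint is far more realistic than Flux on 32 GB without CUDA;
    # expect on the order of a minute or more per image on CPU.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "stabilityai/stable-diffusion-2-1-base",  # any small SD model should work
        torch_dtype=torch.float32,                # stay in fp32 on CPU
    )
    pipe = pipe.to("cpu")

    image = pipe(
        "a watercolor painting of a lighthouse at dusk",
        num_inference_steps=25,
        guidance_scale=7.5,
    ).images[0]
    image.save("lighthouse.png")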
I'm worried about what happens when more people find out about Ideogram.

There are a lot of things that don't appear in Elo scores. For one, they will not reflect that you cannot prompt women's faces in Flux. We can only speculate why.
I think Flux is better than SDXL and DALL-E. I tried the models from here: https://apps.apple.com/us/app/art-x-a-i-art-generator-aiart/id1644315225
I've been playing with Flux Dev, and it's such a big step forward from Stable Diffusion and all the other generative AIs that could run on consumer GPUs.

I just tried this Flux 1.1 Pro page (prompt: "A sad Macintosh user who is upset because his computer can't play games") and was very impressed by the detail and "understanding" this model has.
I asked for a simple scene and it drew in the exact same AI girl that every text-to-image model wants to draw: same face, same hair, so generic that a Google reverse image search pulls up thousands of near-identical results. No variety of output at all.
I really enjoy this service. It's promising for UI design: the pages of my advocacy website were bootstrapped with it, and it's quite good for developers without much design ability.

Ironically, I'm afraid to name the website and will keep it unknown here. My account could be suspended because of it; it had already reached -1 karma, and I'd rather keep the account alive.
The generated images look impressive of course but I can't help but be mildly amused by the fact that the prompt for the second example image insists strongly that the image should say 1.1:

> ... photo with the text "FLUX 1.1 [Pro]", ..., must say "1.1", ...

...And of course, it does not.
Sorry to be a noob, but how does this relate to fastflux.ai, which seems to work great and creates an image in less than a second? Is this a new model on a slower host?
In case you want to try it out without hassling with the API, I've set up a free tool that lets you use it on WhatsApp: https://instatools.ai/products/fluxprovisions