Sharing our reference application that we built using Stable Diffusion and Segment Anything.<p>Stable Diffusion + Segment Anything - <a href="https://www.editanything.ai/" rel="nofollow">https://www.editanything.ai/</a> (try out the app!)<p>We believe chaining different models can lead to impressive user experiences, and as an AI product owner you can really differentiate yourself by using several models in creative ways.<p><a href="https://github.com/fal-ai/edit-anything-app">https://github.com/fal-ai/edit-anything-app</a><p>The repo includes the Python code for model inference as well as the JavaScript code for the application itself. I believe it would make a great reference implementation for people trying to build their own AI apps.<p>I made a short video explaining the application: <a href="https://youtu.be/ob_WOogJn_A" rel="nofollow">https://youtu.be/ob_WOogJn_A</a><p>If there is interest, I'd love to do a video walkthrough of the codebase as well!
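For anyone curious about the core chain, here is a minimal Python sketch (not taken from the repo): segment the object the user clicked on with SAM, then inpaint the masked region with Stable Diffusion. It assumes the segment-anything and diffusers packages; the checkpoint path, model id, click coordinates, and prompt are placeholders, not necessarily what the app actually uses.<p><pre><code>import numpy as np
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline
from segment_anything import SamPredictor, sam_model_registry

# 1) Segment: get a mask for the object the user clicked on.
image = Image.open("input.png").convert("RGB").resize((512, 512))
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")  # placeholder checkpoint path
predictor = SamPredictor(sam)
predictor.set_image(np.array(image))
masks, scores, _ = predictor.predict(
    point_coords=np.array([[256, 256]]),  # (x, y) of the user's click
    point_labels=np.array([1]),           # 1 = foreground point
    multimask_output=True,
)
# Take the highest-scoring mask and convert it to a grayscale PIL image.
mask = Image.fromarray((masks[scores.argmax()] * 255).astype(np.uint8))

# 2) Edit: regenerate the masked region from a text prompt.
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")
result = pipe(prompt="a golden retriever", image=image, mask_image=mask).images[0]
result.save("edited.png")</code></pre>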
Generative AI art definitely isn't done. I'm actually working on a feature-based UI, currently DALL-E 2 only, but I plan to add Stable Diffusion backends too. <a href="https://inventai.xyz" rel="nofollow">https://inventai.xyz</a>
Sorta slow; I almost gave up while testing it.<p>Neat chain of tech. What would be really novel is if the UI and user experience were really smooth. Overall a neat demonstration.