Hi guys,<p>I noticed many iPhone users can't access the new AI emoji features because they have older devices. So I spent 48 hours building an accessible alternative using open-source image models.<p>The system uses a fine-tuned Flux model (trained with <a href="https://github.com/kijai/ComfyUI-FluxTrainer">https://github.com/kijai/ComfyUI-FluxTrainer</a>) to generate custom emojis from text. It runs efficiently enough to be served through a basic web interface - no device requirements or OS updates needed.<p>Just type the emoji you want (like "happy pizza doing a backflip"), and it generates a vector emoji with a transparent background in about 5 seconds. It works on any device with a web browser, including older iPhones.<p>iOS and Android versions are coming, and I would love your feedback on the tool. I'd also love to hear about anything else interesting you'd like to see in it.<p>Cheers,
//TT
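P.S. For anyone curious about the moving pieces: here's a rough, hypothetical sketch of how a pipeline like this could be wired up with Hugging Face diffusers' FluxPipeline plus rembg for the transparent background. The model ID, LoRA file name, and settings are illustrative assumptions, not necessarily what's running behind the site:

    # Hypothetical sketch: text prompt -> emoji-style PNG with a transparent background.
    # Assumes diffusers' FluxPipeline, an emoji-style LoRA ("emoji-lora.safetensors"),
    # and rembg for background removal - all of these are assumptions.
    import torch
    from diffusers import FluxPipeline
    from rembg import remove

    pipe = FluxPipeline.from_pretrained(
        "black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16
    ).to("cuda")
    pipe.load_lora_weights("emoji-lora.safetensors")  # fine-tuned emoji style (assumed file name)

    image = pipe(
        "happy pizza doing a backflip, emoji style, flat colors, centered",
        height=512,
        width=512,
        num_inference_steps=4,   # schnell is distilled for few steps, which keeps latency low
        guidance_scale=0.0,      # schnell does not use classifier-free guidance
        max_sequence_length=256,
    ).images[0]

    remove(image).save("emoji.png")  # strip the background so the PNG has an alpha channel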
Looks cool, although I personally stopped at the login prompt.
I would probably also tone down the continuous generic pop-up announcing that someone generated something. It appears too frequently and is too repetitive and generic to be trustworthy.<p>That being said, I like the use case, and it seems like nice work for a short amount of time. I have a couple of questions I'm curious about:<p>1. Can you elaborate a bit more on your fine-tuning process? Did you "just" feed the model a bunch of regular emojis? Have you considered using any RLHF/DPO approaches?<p>2. You mention you generate a vector emoji. As far as I know, the Flux model only generates bitmaps - how do you handle that conversion?
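(My naive guess for that step would be running the raster output through an open-source tracer such as vtracer - roughly the hypothetical sketch below, though I have no idea whether that's what you actually do. The file names and options are made up.)

    # Hypothetical raster -> SVG tracing step, assuming the vtracer Python bindings.
    import vtracer

    vtracer.convert_image_to_svg_py(
        "emoji.png",        # transparent PNG produced by the image model
        "emoji.svg",        # traced vector output
        colormode="color",  # trace flat color regions as separate paths
    )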