I've been working on training this small vision language model for the past month, and I'm excited to release the first prototype today! It combines SigLIP as the image encoder with Phi-1.5 as the text model, and was trained on the LLaVA-1.5 training dataset.

It runs reasonably fast on CPU with ~8GB of RAM in full 32-bit precision. There's plenty of room to speed it up and reduce memory consumption by quantizing the model (rough sketch at the end of this post).

I posted a video of it running on my M2 MacBook Air (on CPU, not MPS, so performance should be comparable on other hardware) on Twitter to demonstrate inference speed: https://twitter.com/vikhyatk/status/1740910503323734448
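
On the quantization point, here's a minimal sketch of what that could look like with PyTorch's dynamic int8 quantization, assuming the checkpoint loads as a regular torch.nn.Module (load_vlm is a placeholder, not part of the release):

    import torch

    # Hypothetical loader -- substitute however the released checkpoint is
    # actually loaded; load_vlm is a placeholder name, not the real API.
    from model import load_vlm

    model = load_vlm().eval()  # full fp32 weights, ~8GB of RAM

    # Dynamic quantization stores Linear weights as int8 and dequantizes on
    # the fly at inference time, cutting weight memory for those layers ~4x.
    quantized = torch.quantization.quantize_dynamic(
        model,
        {torch.nn.Linear},  # quantize only the linear layers
        dtype=torch.qint8,
    )

Since most of the parameters sit in linear layers, this alone should shrink the fp32 footprint substantially; static or 4-bit quantization could go further at the cost of a calibration step.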