We were frustrated by the slowness of the DeepSeek app and website, so we decided to build a local-first version of it on top of our Basic database and sync tech. We chose the DeepSeek R1 Distill Llama 70B model hosted on Groq because we felt it had the best balance of speed and accuracy.<p>This is an experimental open-source project that we threw together as quickly as we could, so please bear with us through any janky UI and bugs you run into (we're happy to fix them as you point them out). That said, we've personally started using this instead of the other models and chat interfaces, simply because of how fast everything is.<p>We hope this can be a pleasant addition to your workflow.