100% bootstrapped new startup. It lets you fine-tune Mistral-7B and SDXL. In particular, for LLM fine-tuning we implemented a data-prep pipeline that turns websites/PDFs/doc files into question-answer pairs for training the small LLM, using a big LLM to generate the pairs.<p>It includes a GPU scheduler that does fine-grained GPU memory scheduling (Kubernetes can only allocate whole GPUs; we schedule per GB of GPU memory to pack both inference and fine-tuning jobs into the same fleet), fitting model instances into GPU memory to optimally trade off user-facing latency against GPU memory utilization.<p>It's a pretty simple stack: a control plane plus a fat container that runs anywhere you can get hold of a GPU (e.g. RunPod).<p>Architecture: <a href="https://docs.helix.ml/docs/architecture" rel="nofollow noreferrer">https://docs.helix.ml/docs/architecture</a><p>Demo walkthrough showing the runner dashboard:
<a href="https://docs.helix.ml/docs/overview" rel="nofollow noreferrer">https://docs.helix.ml/docs/overview</a><p>Run it yourself: <a href="https://docs.helix.ml/docs/controlplane" rel="nofollow noreferrer">https://docs.helix.ml/docs/controlplane</a><p>Discord: <a href="https://discord.gg/VJftd844GE" rel="nofollow noreferrer">https://discord.gg/VJftd844GE</a><p>Please roast me!
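To make the data-prep idea concrete, here is a minimal sketch of the doc-to-QA-pair flow described above: chunk the extracted text, build a prompt per chunk for the big LLM, and parse "Q: ... A: ..." lines out of its response. The function names (`chunk_text`, `make_qa_prompt`, `parse_qa`) and the exact prompt/output format are illustrative assumptions, not Helix's actual API.

```python
# Hedged sketch of a doc -> question-answer-pair pipeline (not Helix's
# real implementation): chunk text, prompt a large LLM per chunk, and
# parse its Q/A lines into training examples for the small LLM.

def chunk_text(text, max_chars=1000):
    """Split text into word-aligned chunks of at most max_chars characters."""
    words, chunks, cur = text.split(), [], []
    for w in words:
        if cur and sum(len(x) + 1 for x in cur) + len(w) > max_chars:
            chunks.append(" ".join(cur))
            cur = []
        cur.append(w)
    if cur:
        chunks.append(" ".join(cur))
    return chunks

def make_qa_prompt(chunk):
    """Prompt asking the big LLM for one 'Q: ... A: ...' pair per line."""
    return ("Generate question-answer pairs about the following text, "
            "one per line in the form 'Q: ... A: ...':\n\n" + chunk)

def parse_qa(llm_output):
    """Parse 'Q: ... A: ...' lines into {question, answer} dicts."""
    pairs = []
    for line in llm_output.splitlines():
        if line.startswith("Q:") and " A: " in line:
            q, a = line[2:].split(" A: ", 1)
            pairs.append({"question": q.strip(), "answer": a.strip()})
    return pairs

# Example: parse a response the big LLM might return for one chunk.
sample = "Q: What does Helix fine-tune? A: Mistral-7B and SDXL."
print(parse_qa(sample))
```

The resulting question-answer dicts would then be written out in whatever format the fine-tuning job expects (e.g. JSONL).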
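The per-GB scheduling idea can also be sketched briefly. This is a toy greedy first-fit-decreasing packer, an assumption about the general approach rather than Helix's actual scheduler: each job declares how many GB of GPU memory it needs, and jobs are placed onto whichever GPU still has enough free memory, so inference and fine-tuning workloads can share one fleet.

```python
# Toy sketch (assumption, not Helix's real scheduler): pack model
# instances onto GPUs by free memory in GB, instead of allocating
# whole GPUs the way Kubernetes does.

def pack(jobs, gpus):
    """jobs: list of (name, mem_gb); gpus: dict gpu_id -> free_gb.
    Returns a placement dict name -> gpu_id (None if nothing fits)."""
    placement = {}
    # Place the largest jobs first to reduce fragmentation.
    for name, mem in sorted(jobs, key=lambda j: -j[1]):
        target = next((g for g, free in gpus.items() if free >= mem), None)
        if target is not None:
            gpus[target] -= mem  # reserve the memory on that GPU
        placement[name] = target
    return placement

# Two 24 GB GPUs shared by inference and fine-tuning jobs.
gpus = {"gpu0": 24, "gpu1": 24}
jobs = [("mistral-7b-infer", 16), ("sdxl-infer", 20), ("mistral-7b-finetune", 8)]
print(pack(jobs, gpus))
```

A real scheduler would also weigh user-facing latency (e.g. keeping hot inference models resident) against packing density, as the comment above describes.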
This is huge. Luke I think you have a winner here, this is great. Can't wait to try it over the holidays.<p>If I can be cheeky, be sure to repost over the coming days at different hours – you're likely to spawn more traffic that way =)
Some resources:<p>Demo: <a href="https://www.youtube.com/watch?v=Ym4nPSzfer0" rel="nofollow noreferrer">https://www.youtube.com/watch?v=Ym4nPSzfer0</a><p>About Helix<p>Helix is a generative AI platform that you can run on our cloud or deploy in your own data center or cloud account. It provides an easy-to-use interface to open-source AI that's accessible to everyone.<p>Under the hood, it uses the best open-source models and includes a GPU scheduler that fits model instances into GPU memory to optimally trade off user-facing latency against GPU memory utilization.<p>If you think this is cool, please vote for us on <a href="https://www.producthunt.com/posts/helix-5" rel="nofollow noreferrer">https://www.producthunt.com/posts/helix-5</a> today.<p>Docs: <a href="https://docs.helix.ml/docs/overview" rel="nofollow noreferrer">https://docs.helix.ml/docs/overview</a><p>Architecture: <a href="https://docs.helix.ml/docs/architecture" rel="nofollow noreferrer">https://docs.helix.ml/docs/architecture</a><p>Things to try with LLM fine-tuning using Helix:<p>- <a href="https://docs.helix.ml/docs/papers" rel="nofollow noreferrer">https://docs.helix.ml/docs/papers</a><p>- <a href="https://docs.helix.ml/docs/engaging-content" rel="nofollow noreferrer">https://docs.helix.ml/docs/engaging-content</a><p>- <a href="https://docs.helix.ml/docs/insights-data" rel="nofollow noreferrer">https://docs.helix.ml/docs/insights-data</a><p>- <a href="https://docs.helix.ml/docs/website-content" rel="nofollow noreferrer">https://docs.helix.ml/docs/website-content</a><p>Sample sessions for SDXL:<p>- <a href="https://app.tryhelix.ai/session/e1b50789-a209-46c8-aa60-4d097af1aa8b" rel="nofollow noreferrer">https://app.tryhelix.ai/session/e1b50789-a209-46c8-aa60-4d09...</a><p>- <a href="https://app.tryhelix.ai/session/cc6004cd-111b-48ae-9a8c-d651f3ed45c8" rel="nofollow noreferrer">https://app.tryhelix.ai/session/cc6004cd-111b-48ae-9a8c-d651...</a><p>- <a 
href="https://app.tryhelix.ai/session/d50db369-4ffa-4a49-88dd-1cff05fee947" rel="nofollow noreferrer">https://app.tryhelix.ai/session/d50db369-4ffa-4a49-88dd-1cff...</a>