
UI for fine tuning Mistral and SDXL, GPU mem/latency optimization

2 points, by lewq, over 1 year ago

1 comment

lewq, over 1 year ago
100% bootstrapped new startup with source available on GitHub. It lets you fine-tune Mistral-7B and SDXL with a nice UI. In particular, for the LLM fine-tuning we implemented a dataprep pipeline that turns websites/PDFs/doc files into question-answer pairs for training the small LLM using a big LLM.

It includes a GPU scheduler that does fine-grained GPU memory scheduling (Kubernetes can only allocate whole GPUs; we schedule per GB of GPU memory to pack both inference and fine-tuning jobs into the same fleet), fitting model instances into GPU memory to optimally trade off user-facing latency against VRAM utilization.

It's a pretty simple stack: a control plane plus a fat container that runs anywhere you can get hold of a GPU (e.g. RunPod).

Architecture: https://docs.helix.ml/docs/architecture

Demo walkthrough showing the runner dashboard: https://docs.helix.ml/docs/overview

Run it yourself: https://docs.helix.ml/docs/controlplane

Roast me!
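The dataprep step (documents in, question-answer training pairs out) could be sketched roughly like this. This is an illustrative assumption, not Helix's actual implementation: `ask_llm`, the chunk sizes, and the prompt format are all hypothetical, with `ask_llm(prompt) -> str` standing in for any chat-completion client you wire up.

```python
import json

def chunk(text, size=1500, overlap=200):
    """Split raw document text into overlapping character chunks so each
    prompt to the big LLM stays within a manageable context size."""
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap
    return chunks

def qa_pairs(text, ask_llm, pairs_per_chunk=3):
    """Turn scraped website/PDF/doc text into question-answer rows for
    fine-tuning a small LLM, by asking a big LLM to write the pairs.
    `ask_llm(prompt) -> str` is a caller-supplied adapter (hypothetical)."""
    rows = []
    for c in chunk(text):
        prompt = (
            f"Write {pairs_per_chunk} question-answer pairs as JSON "
            f'[{{"question": "...", "answer": "..."}}] about the text:\n{c}'
        )
        # Assumes the model returns well-formed JSON; production code
        # would validate and retry on parse errors.
        rows.extend(json.loads(ask_llm(prompt)))
    return rows
```

Each returned row can then be formatted into the chat template of the small model being fine-tuned.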
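The per-GB packing idea can be sketched as a simple bin-packing pass: sort jobs by VRAM need and place each on the GPU with the most free memory that still fits it. This is a minimal sketch under assumed names (`GPU`, `schedule`, the job tuples); Helix's real scheduler is surely more involved (preemption, locality, model cache reuse), but the core contrast with Kubernetes' whole-GPU granularity is visible here.

```python
from dataclasses import dataclass, field

@dataclass
class GPU:
    name: str
    total_gb: int
    jobs: list = field(default_factory=list)  # (job_name, vram_gb)

    @property
    def free_gb(self) -> int:
        return self.total_gb - sum(gb for _, gb in self.jobs)

def schedule(gpus, jobs):
    """First-fit-decreasing packing of jobs (name, vram_gb) onto GPUs by
    free VRAM in GB, so inference and fine-tuning jobs share one fleet
    instead of each claiming a whole device."""
    placements = {}
    for name, gb in sorted(jobs, key=lambda j: j[1], reverse=True):
        candidates = [g for g in gpus if g.free_gb >= gb]
        if not candidates:
            placements[name] = None  # no capacity: queue or scale out
            continue
        gpu = max(candidates, key=lambda g: g.free_gb)
        gpu.jobs.append((name, gb))
        placements[name] = gpu.name
    return placements
```

With an 80 GB card, this lets a 40 GB fine-tuning job and a couple of 16-24 GB inference replicas coexist on one device, which whole-GPU scheduling would forbid.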