
Ask HN: Do You Need a CLI Tool for Output Schema Fine-Tuning of Open Source LLM

1 point, by cyrusradfar, about 1 year ago
Trying to gauge whether to just solve my problem or release the solution.

Who's deploying open-source models and would like a simpler way to fine-tune the models out of HuggingFace / Ollama?

I was going to build a tool for myself because I have a lot of agent fine-tuning (and re-tuning) to do. If there's interest, I can share my code / learnings.

The minimum most of my agents need beyond the 'base' models is training on an output format schema so they're more consistent.

I want the process to be simple enough that I could put it reliably in a build & deployment pipeline.

Vision of how it'll work:

(setup)
0. `pip install [newlibrary]` & run a setup command on that library

(day-to-day usage)
1. `[newlib] create training_manifest.yaml` (or JSON, feedback welcome)
2. `[newlib] tune model-name` (name in manifest)
3. `[newlib] verify model-name`

When we verify, we should be able to see the improvement in results on a set of verification tests.

I'll be using Ollama behind the scenes; we should eventually be able to push and pull our new models to a store.
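For illustration, here is a minimal sketch of what the `verify` step might look like, assuming a `training_manifest.yaml` that declares a model name, a JSON Schema for outputs, and a list of verification prompts. The manifest keys, file layout, and `verify` function are hypothetical; the only real APIs used are the `ollama`, `pyyaml`, and `jsonschema` Python packages.

```python
# Hypothetical sketch of the "verify" step: check how many of a tuned model's
# outputs conform to the output schema declared in the training manifest.
# Manifest keys (model_name, output_schema, verification_prompts) are assumptions.
import json

import yaml                                        # pip install pyyaml
from jsonschema import validate, ValidationError   # pip install jsonschema
import ollama                                      # pip install ollama


def verify(manifest_path: str) -> None:
    with open(manifest_path) as f:
        manifest = yaml.safe_load(f)

    model = manifest["model_name"]           # e.g. a tuned model registered in Ollama
    schema = manifest["output_schema"]        # JSON Schema the outputs must follow
    prompts = manifest["verification_prompts"]

    passed = 0
    for prompt in prompts:
        reply = ollama.generate(model=model, prompt=prompt)["response"]
        try:
            validate(instance=json.loads(reply), schema=schema)
            passed += 1
        except (json.JSONDecodeError, ValidationError):
            pass  # output was not valid JSON or did not match the schema

    print(f"{passed}/{len(prompts)} outputs matched the schema")


if __name__ == "__main__":
    verify("training_manifest.yaml")
```

Running the same verification set before and after tuning would give the before/after comparison the post describes, and the exit status could gate a build & deployment pipeline.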

1 comment

nbbaier, about 1 year ago
Sounds neat!