Trying to gauge whether to just solve my problem or release the solution.

Who's deploying open-source models and would like a simpler way to fine-tune the models coming out of HuggingFace / Ollama?

I was going to build a tool for myself because I have a lot of agent fine-tuning (and re-tuning) to do. If there's interest, I can share my code / learnings.

The minimum most of my agents need beyond the 'base' models is training on an output format schema, so they're more consistent.

I want the process to be simple enough that I could reliably put it in a build & deployment pipeline.
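To make the schema point concrete, the kind of training pair I mean looks something like this (a hypothetical example in chat-style JSONL, shown pretty-printed; the `messages` layout is the common instruct-tuning convention, not anything this tool defines):

```json
{"messages": [
  {"role": "system",
   "content": "Respond ONLY with JSON matching {\"intent\": string, \"confidence\": number}."},
  {"role": "user", "content": "Please cancel my subscription."},
  {"role": "assistant", "content": "{\"intent\": \"cancel_subscription\", \"confidence\": 0.97}"}
]}
```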
Vision of how it'll work:

(setup)

0. `pip install [newlib]` & run a setup command on that library
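In shell terms, something like this (both the package name and the setup subcommand are placeholders, not settled):

```
pip install [newlib]
[newlib] setup   # assumed: e.g. check for a local Ollama install and pull the base models
```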
(day-to-day usage)

1. `[newlib] create training_manifest.yaml` (or JSON, feedback welcome; a sketch of a possible manifest is at the end)

2. `[newlib] tune model-name` (the name from the manifest)

3. `[newlib] verify model-name`

When we verify, we should be able to see the improvement in results on a set of verification tests.

I'll be using Ollama behind the scenes; eventually, we should be able to push and pull our new models to a store.
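Here's a sketch of what the manifest might look like. Every key name below is an assumption about the eventual design, not settled syntax:

```yaml
# training_manifest.yaml -- hypothetical sketch, all keys illustrative
models:
  - name: order-extractor            # the model-name passed to tune/verify
    base: llama3.1:8b                # base model, pulled via Ollama
    training_data: data/orders.jsonl # schema-conformant example pairs
    output_schema: schemas/order.json
    verification:
      tests: tests/orders.jsonl      # held-out prompts with expected outputs
      metric: schema_pass_rate       # e.g. % of replies that validate against the schema
```

The idea is that `[newlib] verify order-extractor` would run the held-out tests against both the base and the tuned model, so the before/after improvement is visible and a regression can fail a CI step.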