Hi,<p>During the recent explosion of LLMs, I've been playing around with Llama 2 7B on my Mac and have wanted to fine-tune the model.<p>I've built a service that lets you upload a JSON/JSONL file, and it'll automatically fine-tune Llama, Mistral, or any other model from Hugging Face on that dataset.<p>You don't have to "burn" money trying to get fine-tuning scripts to work, or fall back to GPT-3.5. You just upload your dataset, let the model fine-tune, and then download the weights to run locally.<p>I personally used the service to fine-tune Code Llama on Python docstring comments. I then converted the model to GGUF (for use with llama.cpp), and I can now generate Python docstrings without sending my code to OpenAI (it works surprisingly well).<p>I hope this helps some people. I'm manually onboarding users and am interested in any feedback you might have.
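<p>If anyone's wondering what a fine-tuning dataset looks like, JSONL is just one JSON object per line. Here's a minimal sketch (the prompt/completion field names are illustrative, not necessarily the exact schema a given trainer expects):<p><pre><code>import json

# Illustrative instruction-tuning examples for a docstring task.
# The "prompt"/"completion" keys are a common convention; check the
# schema your fine-tuning tool actually expects.
examples = [
    {"prompt": "Write a docstring for: def add(a, b): return a + b",
     "completion": '"""Return the sum of a and b."""'},
    {"prompt": "Write a docstring for: def is_even(n): return n % 2 == 0",
     "completion": '"""Return True if n is even, else False."""'},
]

# Write one JSON object per line -- that's all JSONL is.
with open("train.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

# Round-trip check: every line parses back to the original record.
with open("train.jsonl") as f:
    loaded = [json.loads(line) for line in f]
assert loaded == examples
</code></pre><p>A few hundred rows like this is often enough to noticeably steer a 7B model on a narrow task like docstring generation.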