It's easy to misunderstand claims of running LLMs locally, as if anyone can write the next ChatGPT on their laptop.

Even though fine-tuning is a type of training, it is not the hard part, so one solution is to communicate more clearly and always call fine-tuning "fine-tuning". There are a lot of new people wanting to get into the field, and clarity in your claims will help us out.

Thanks
Well, if the LLM is closed and proprietary, there is no insight into how the training data is even used. It's just a black box we have to use blindly and 'hope' the designers are using a blend of fine-tuning and better training data.