What has me excited about Llama: I've built some tools that I think would make sense to offer at an affordable "lifetime price," but they currently rely on the OpenAI API / GPT-4, and I can't bring myself to sell lifetime memberships to something with an ongoing cost. Lately I've been considering building Electron apps with Code Llama embedded, targeted at Apple Silicon devices. With this stack I wouldn't incur any ongoing costs, and these utilities could exist for a one-time fee.
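For anyone curious what that stack might look like, here's a minimal sketch using node-llama-cpp, a llama.cpp binding that runs on Metal on Apple Silicon. The model path is a placeholder and the API shown is roughly node-llama-cpp v3's; treat it as a sketch, not a drop-in implementation:

    // Sketch: run a local GGUF model from an Electron main process.
    // node-llama-cpp wraps llama.cpp, so inference runs on-device
    // with no API key and no per-request cost.
    import { getLlama, LlamaChatSession } from "node-llama-cpp";

    const llama = await getLlama();
    const model = await llama.loadModel({
      modelPath: "models/codellama-7b.Q4_K_M.gguf", // placeholder path to a quantized model
    });
    const context = await model.createContext();
    const session = new LlamaChatSession({
      contextSequence: context.getSequence(),
    });

    const answer = await session.prompt("Explain what this regex does: ^\\d{3}-\\d{4}$");
    console.log(answer);

A 7B model quantized to 4 bits fits comfortably in the unified memory of most recent Macs, which is what makes the zero-marginal-cost idea plausible.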
This is really cool, nice work!

Quick question: what would the cost of inference be, at scale, for a fine-tuned GPT-3.5 versus a fine-tuned Llama 2? Surely that's another factor that should be considered here, right?
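The shape of that comparison is per-token API billing vs. amortized GPU time. A back-of-envelope sketch, where every constant is a made-up placeholder rather than a real price:

    // Back-of-envelope only: all constants are hypothetical placeholders.
    const apiCostPer1kTokens = 0.012;  // assumed fine-tuned-3.5 price, $/1k tokens
    const gpuHourlyRate = 2.0;         // assumed rented-GPU price, $/hr
    const tokensPerSecond = 300;       // assumed batched Llama 2 throughput

    const selfHostPer1kTokens = (gpuHourlyRate / (tokensPerSecond * 3600)) * 1000;
    console.log(selfHostPer1kTokens.toFixed(5)); // ~0.00185 with these numbers

The caveat is that the self-hosted figure only holds if the GPU stays saturated; at low utilization the hourly rate dominates and the API can come out cheaper.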
I'm curious about the terminology for the "functional representation" dataset.

Is this a well-defined term? I've been thinking about similar approaches for getting more structured propositional knowledge into and out of LLMs, and the examples in the ViGGO dataset are the closest thing so far to someone thinking the same way I am.

However, Google doesn't turn up many results that use the term this way. I'd love any further resources or information on the topic.
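For reference, the ViGGO entries pair a dialogue act wrapping slot-value pairs with a natural-language realization, something like this (paraphrased from memory, not an exact record):

    give_opinion(name[SpellForce 3], rating[poor], genres[real-time strategy, role-playing], player_perspective[bird view])
    -> "I think SpellForce 3 is one of the worst games I've ever played..."

The closest established term I've found for this kind of input in the data-to-text literature is "meaning representation" (MR), which may be a more productive search term than "functional representation."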
I've been struggling to find a good dataset for fine-tuning. Most of the ones that exist were purpose-made for fine-tuning or training some model already.

Does anyone have tips for creating good datasets for fine-tuning on specific workloads?
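For concreteness, the artifact you end up needing is usually just a JSONL file of input/output pairs in your task's own format. A sketch of building one, where the field names and examples are hypothetical and should match whatever your training script expects:

    // Sketch: serialize hand-collected (or model-drafted, then human-corrected)
    // examples from your actual workload into a JSONL fine-tuning set.
    import { writeFileSync } from "node:fs";

    type Example = { instruction: string; input: string; output: string };

    const examples: Example[] = [
      {
        instruction: "Extract the meaning representation from the sentence.",
        input: "Dirt: Showdown is a racing game released in 2012.",
        output: "inform(name[Dirt: Showdown], release_year[2012], genres[driving/racing])",
      },
      // ...more examples, ideally drawn from the real inputs your tool sees
    ];

    writeFileSync(
      "train.jsonl",
      examples.map((e) => JSON.stringify(e)).join("\n") + "\n",
    );

The hard part is sourcing the pairs, not the serialization; inputs pulled from your actual workload tend to beat generic instruction sets for a narrow task.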