Does it work...? The examples given at the bottom of the post were pretty great, but could easily have been cherry-picked. I'd be curious to see how it performs against standard benchmarks.

But I love the thought here. I didn't realize the instruction tuning for GPT came from only about 40 people. It really puts into perspective how easily a motivated large organization could bring its employees to bear on something like this, and I'm grateful that Databricks has done it and is sharing it here.

I wish I understood how LLMs work a little better. This is a neat piece of the puzzle I wasn't fully aware of. My mental model now is that LLMs work with three "layers" of inputs:

* The base many-billion or even trillion-parameter model, trained on a huge corpus of text, which is essentially how it learns to use language as I/O.

* The instruction tuning, on just tens of thousands of inputs, which gives the raw model further guidance. This is a sort of transfer learning, maybe? Further training on top of a big model?

* The prompt itself, which provides additional inputs and context to shape how the response should look.

I had been thinking of LLMs only in terms of the first layer (the base model) and the last (the prompt), and assumed you could get progressively more sophisticated with the prompt "context" to get LLMs tailor-made for your particular use case.

But actually, there's a decent chunk of space to explore in instruction tuning? Say you wanted an LLM to help lawyers with case law, to keep it from hallucinating quite as much and to make it more detailed and useful. Is that something that would fit in the middle layer?
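One way to see how the middle and bottom layers meet: at inference time, the user's prompt gets wrapped in the same template the model was instruction-tuned on. A minimal sketch, assuming an Alpaca-style template (the exact wording varies by model; this is illustrative, not any specific model's format):

```python
# Sketch of how the "three layers" compose at inference time.
# Layer 1 (the base model) is fixed; layer 2 determined the template
# below during tuning; layer 3 is the instruction/context we fill in.

def build_prompt(instruction: str, context: str = "") -> str:
    """Wrap a user request in the template the model was instruction-tuned on."""
    if context:
        return (
            "Below is an instruction that describes a task, paired with "
            "input that provides further context.\n\n"
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{context}\n\n"
            "### Response:\n"
        )
    return (
        "Below is an instruction that describes a task.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )

prompt = build_prompt(
    "Summarize the ticket below in one sentence.",
    "Customer reports login failures since the last deploy.",
)
print(prompt)
```

The model then just continues the text after "### Response:", which is why the same base model behaves so differently once it has seen a few tens of thousands of examples in this shape.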
Could a "legal AI startup" tackle that problem by starting with a big open-source base model, proprietarily tuning it on tens of thousands of legal questions and answers, and then sharing that model with law firms, with maybe a customer-support rep at each firm doing the final tweaking via the prompt context? Is that how this all fits together?

I found the examples here of digesting Databricks info and customer support tickets really interesting. How exactly would large companies like Databricks tailor LLMs to their particular use cases and data?
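If the pieces do fit together that way, the startup's middle-layer work would mostly be curating instruction/response pairs and running a supervised fine-tuning pass over them. A sketch of just the data-prep step, under the assumption of a prompt/completion JSONL format that many SFT trainers accept (the legal example pair and file name are made up):

```python
import json

# Hypothetical legal Q&A pairs; a real dataset would need tens of
# thousands of these, written or vetted by domain experts.
examples = [
    {
        "instruction": "What does 'statute of limitations' mean?",
        "response": "It is the deadline for filing a lawsuit, which "
                    "varies by claim type and jurisdiction.",
    },
]

def to_training_record(ex: dict) -> dict:
    """Render one Q&A pair into a prompt/completion training record."""
    prompt = f"### Instruction:\n{ex['instruction']}\n\n### Response:\n"
    return {"prompt": prompt, "completion": ex["response"]}

# Write one JSON record per line, the usual shape for fine-tuning corpora.
with open("legal_sft.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(to_training_record(ex)) + "\n")
```

The actual fine-tuning run over this file is then conceptually the same further-training step the post describes, just on a domain-specific corpus instead of general instructions.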