The price of intelligence is dropping to near zero... An actually-open 72b model that benchmarks near GPT-4 is insane (Mistral Medium is still closed-source / closed-weights, even if an early version of it leaked as miqu). Even the 14b model easily outperforms gpt-3.5-turbo, and given how useful gpt-3.5-turbo finetunes are (they generally outperform non-finetuned GPT-4 on the task they've been finetuned for), I'd imagine the 14b model will prove pretty useful too as a super-low-cost finetuning target. Not to mention that the 14b and especially 7b models are runnable locally on consumer GPUs...
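
For context on the "runnable locally" point, here's a rough sketch of loading a ~7b open-weights model in 4-bit on a single consumer GPU via Hugging Face transformers + bitsandbytes. The repo id below is a placeholder, not the actual model name, and exact VRAM usage will vary with context length and quantization settings:

```python
# Minimal sketch: run a ~7b open-weights model on a consumer GPU
# using 4-bit quantization (transformers + bitsandbytes).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "some-org/some-7b-model"  # placeholder -- substitute the real repo id

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # roughly 4-5 GB of VRAM for 7b weights
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",                      # spills layers to CPU if VRAM is tight
)

inputs = tokenizer("The price of intelligence is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```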