The most interesting thing about this is the way it was trained using synthetic data, which is described in quite a bit of detail in the technical report: <a href="https://arxiv.org/abs/2412.08905" rel="nofollow">https://arxiv.org/abs/2412.08905</a><p>Microsoft haven't officially released the weights yet but there are unofficial GGUFs up on Hugging Face already. I tried this one: <a href="https://huggingface.co/matteogeniaccio/phi-4/tree/main" rel="nofollow">https://huggingface.co/matteogeniaccio/phi-4/tree/main</a><p>I got it working with my LLM tool like this:<p><pre><code> llm install llm-gguf
llm gguf download-model https://huggingface.co/matteogeniaccio/phi-4/resolve/main/phi-4-Q4_K_M.gguf
llm chat -m gguf/phi-4-Q4_K_M
</code></pre>
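Once the model has downloaded you can also run one-off prompts instead of an interactive chat. A quick sketch (the prompts here are just examples):<p><pre><code> llm -m gguf/phi-4-Q4_K_M 'Write a Python function that merges two sorted lists'
 llm -c 'Now add type hints'
 </code></pre>
The -c flag continues the most recent conversation, so you can iterate on a response without restarting the chat.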
Here are some initial transcripts: <a href="https://gist.github.com/simonw/0235fd9f8c7809d0ae078495dd630b67" rel="nofollow">https://gist.github.com/simonw/0235fd9f8c7809d0ae078495dd630...</a><p>More of my notes on Phi-4 here: <a href="https://simonwillison.net/2024/Dec/15/phi-4-technical-report/" rel="nofollow">https://simonwillison.net/2024/Dec/15/phi-4-technical-report...</a>
For prompt adherence it still fails on tasks that Gemma2 27b nails every time. I haven't been impressed with any of the Phi family of models. The large context is very nice, though Gemma2 plays very well with self-extend.
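For anyone who hasn't tried it, self-extend is exposed through llama.cpp's group-attention flags. A rough sketch (flag names are from memory and the GGUF path is hypothetical, so check --help on your build):<p><pre><code> # extend Gemma 2's 8k training context roughly 4x via self-extend:
 # --grp-attn-n is the extension factor, --grp-attn-w the group window
 # (--grp-attn-w must be a multiple of --grp-attn-n)
 llama-cli -m gemma-2-27b-it-Q4_K_M.gguf -c 32768 \
   --grp-attn-n 4 --grp-attn-w 2048 \
   -p "Summarize the following document: ..."
 </code></pre>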
Looks like it punches way above its weight(s).<p>How far are we from running a GPT-3/GPT-4 level LLM on regular consumer hardware, like a MacBook Pro?
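Back-of-envelope, assuming ~4.5 bits per weight for a Q4_K_M quant (the exact ratio varies by quant type):<p><pre><code> # weights-only memory for a 14B model; KV cache and
 # runtime overhead add a few more GB on top
 python3 -c "print(14e9 * 4.5 / 8 / 1e9)"  # ≈ 7.9 GB
 </code></pre>
So a quantized 14B model already fits on a 16GB MacBook Pro; the open question is capability, not memory.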
Looks like someone converted it for Ollama use already: <a href="https://ollama.com/vanilj/Phi-4">https://ollama.com/vanilj/Phi-4</a>
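If you already have Ollama installed, running it should be a one-liner (untested; the model tag is taken from that page):<p><pre><code> ollama run vanilj/Phi-4
 </code></pre>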
I really like the ~3B param version of Phi-3. It wasn't very powerful and overused memory, but it was surprisingly strong for such a small model.<p>I'm not sure how I can be impressed by a 14B Phi-4. That isn't really small any more, and I doubt it will be significantly better than Llama 3 or Mistral at this point. Maybe I'll be wrong about that, but I don't have high hopes.
Where have I been? What is a “small” language model? Wikipedia just talks about LLMs. Is this a sort of spectrum? Are there medium language models? Or is it a more nuanced classifier?
Model releases without comprehensive coverage of benchmarks make me deeply skeptical.<p>The worst was the GPT-4o update in November: basically a two-liner on what it is better at, while in reality it regressed on multiple benchmarks.<p>Here we just get MMLU, which is widely known to be saturated, and since they trained on synthetic data we have no idea how much "weight" was given to MMLU-like training data.<p>Benchmarks are not perfect, but they give me context to build upon.
---<p>edit: the benchmarks are covered in the paper: <a href="https://arxiv.org/pdf/2412.08905" rel="nofollow">https://arxiv.org/pdf/2412.08905</a>
I'm not too excited by the Phi-4 benchmark results; it's #BenchmarkInflation.<p>Microsoft Research just dropped Phi-4 14B, an open-source model that's turning heads. It claims to rival Llama 3.3 70B with a fraction of the parameters (5x fewer, to be exact).<p>What's the secret? Synthetic data.
-> Higher quality, less misinformation, more diversity<p>The Phi models always have great benchmark scores, but they always disappoint me in real-world use cases.<p>The Phi series is famous for being trained on benchmarks.<p>I tried again with #phi4 through Ollama, but it's still not satisfactory.<p>To me, at the moment, IFEval is the most important LLM benchmark.<p>But look at Microsoft's smart business strategy:<p>have unlimited access to GPT-4
prompt it to generate 30B tokens
train a 1B parameter model
call it phi-1
show benchmarks beating models 10x the size
never release the data
never detail how to generate the data (this time they described it, but only at a very high level)
claim victory over small models