TechEcho

New Phi-3.5 Models from Microsoft, including new MoE

25 points by thecal 9 months ago

3 comments

thecal 9 months ago
Mini: https://huggingface.co/microsoft/Phi-3.5-mini-instruct

Large MoE with impressive benchmarks: https://huggingface.co/microsoft/Phi-3.5-MoE-instruct

Vision: https://huggingface.co/microsoft/Phi-3.5-vision-instruct
pseudosavant 9 months ago
Does anyone have an idea what the output token limit is? I only see mention of the 128k token context window, but I bet the output limit is 4k tokens.
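[Editor's note: for open-weight models run locally, there is generally no separate hard output cap of the kind hosted APIs impose; prompt and output tokens share the context window, and the caller picks the generation budget (e.g. `max_new_tokens` in Hugging Face `transformers`). A minimal sketch of that accounting, using a hypothetical helper name and Phi-3.5's advertised 128k context length:]

```python
# Hypothetical illustration: prompt tokens and generated tokens share one
# context window, so the output budget is whatever the prompt leaves over.
CONTEXT_WINDOW = 128_000  # Phi-3.5's advertised context length

def max_output_tokens(prompt_tokens: int, context_window: int = CONTEXT_WINDOW) -> int:
    """Upper bound on new tokens once the prompt occupies part of the window."""
    if prompt_tokens >= context_window:
        raise ValueError("prompt already fills the context window")
    return context_window - prompt_tokens

# A 100k-token prompt would leave a 28k-token output budget.
print(max_output_tokens(100_000))  # -> 28000
```

Hosted deployments (e.g. Azure AI) may still enforce their own, smaller output limits on top of this.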
hurrdurr57 9 months ago
The Phi models always seem to do really well on benchmarks, but in real-world performance they fall way behind competing models.