
Llama 3.2: Revolutionizing edge AI and vision with open, customizable models

924 points by nmwnmw, 8 months ago

46 comments

simonw, 8 months ago

I'm absolutely amazed at how capable the new 1B model is, considering it's just a 1.3GB download (for the Ollama GGUF version).

I tried running a full codebase through it (since it can handle 128,000 tokens) and asking it to summarize the code - it did a surprisingly decent job, incomplete but still unbelievable for a model that tiny: https://gist.github.com/simonw/64c5f5b111fe473999144932bef4218b

More of my notes here: https://simonwillison.net/2024/Sep/25/llama-32/

I've been trying out the larger image models using the versions hosted on https://lmarena.ai/ - navigate to "Direct Chat" and you can select them from the dropdown and upload images to run prompts.
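For anyone who wants to reproduce this kind of long-context summarization locally, a minimal sketch using the Ollama Python client might look like the following (the model tag assumes you have pulled `llama3.2:1b`, and the file path is just an example):

```python
# Minimal sketch: summarize a source file with the 1B model via Ollama.
# Assumes `ollama pull llama3.2:1b` has been run and `pip install ollama`.
import ollama

with open("my_module.py") as f:          # hypothetical file to summarize
    source = f.read()

response = ollama.chat(
    model="llama3.2:1b",
    messages=[
        {"role": "user",
         "content": "Summarize what this code does:\n\n" + source},
    ],
)
print(response["message"]["content"])
```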

opdahl, 8 months ago

I'm blown away by just how open the Llama team at Meta is. It's nice to see that they are not only giving access to the models but are also open about how they built them. I don't know how the future is going to go in terms of models, but I sure am grateful that Meta has taken this position and is pushing for more openness.

a_wild_dandan, 8 months ago

"The Llama jumped over the ______!" (Fence? River? Wall? Synagogue?)

With 1-hot encoding, the answer is "wall", with 100% probability. Oh, you gave plausibility to "fence" too? WRONG! ENJOY MORE PENALTY, SCRUB!

I believe this unforgiving dynamic is why model distillation works well. The original teacher model had to learn via the "hot or cold" game on *text* answers. But when the child instead imitates the teacher's predictions, it learns *semantically rich* answers. That strikes me as vastly more compute-efficient. So to me, it makes sense why these Llama 3.2 edge models punch so far above their weight(s). But it still blows my mind thinking how far models have advanced from a year or two ago. Kudos to Meta for these releases.
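
To make the distillation point concrete, here is a minimal PyTorch-style sketch of the usual soft-label distillation objective. The temperature and mixing weight are illustrative assumptions, not anything from Meta's actual Llama 3.2 training recipe:

```python
# Sketch of knowledge distillation: the student matches the teacher's full
# next-token distribution (soft labels) instead of only the one-hot target.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, target_ids,
                      temperature=2.0, alpha=0.5):
    # Soft part: KL divergence between temperature-scaled distributions.
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    # Hard part: ordinary cross-entropy against the one-hot text targets.
    hard = F.cross_entropy(student_logits.view(-1, student_logits.size(-1)),
                           target_ids.view(-1))
    return alpha * soft + (1 - alpha) * hard
```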

alanzhuly, 8 months ago

Llama 3.2 3B feels a lot better than other models of the same size (e.g. Gemma 2, Phi-3.5-mini).

For anyone looking for a simple way to test Llama 3.2 3B locally with a UI: install nexa-sdk (https://github.com/NexaAI/nexa-sdk) and type in a terminal:

nexa run llama3.2 --streamlit

Disclaimer: I am from Nexa AI and nexa-sdk is open source. We'd love your feedback.

freedomben, 8 months ago

If anyone else is looking for the bigger models on Ollama and wondering where they are, the Ollama blog post answered that for me. They are "coming soon", so they just aren't ready quite yet [1]. I was a little worried when I couldn't find them, but it sounds like we just need to be patient.

[1]: https://ollama.com/blog/llama3.2

moffkalast, 8 months ago

I've just tested the 1B and 3B at Q8, some interesting bits:

- The 1B is extremely coherent (feels something like maybe Mistral 7B at 4 bits), and with flash attention and a 4-bit KV cache it only uses about 4.2 GB of VRAM for 128k context

- A Pi 5 runs the 1B at 8.4 tok/s; haven't tested the 3B yet, but it might need a lower quant to fit, and with 9T training tokens it'll probably degrade pretty badly

- The 3B is a certified Gemma-2-2B killer

Given that llama.cpp doesn't support any multimodality (they removed the old implementation), it might be a while before the 11B and 90B become runnable. Doesn't seem like they outperform Qwen2-VL at vision benchmarks though.

dhbradshaw, 8 months ago

Tried out the 3B on Ollama, asking questions in optics, bio, and Rust.

It's super fast, with a lot of knowledge, a large context, and great understanding. Really impressive model.

kingkongjaffa, 8 months ago

llama3.2:3b-instruct-q8_0 is performing better than 3.1 8b-q4 on my MacBook Pro M1. It's faster and the results are better. It answered a few riddles and thought experiments better despite being 3B vs 8B.

I just removed my install of 3.1-8b.

My ollama list is currently:

$ ollama list
NAME                            ID            SIZE    MODIFIED
llama3.2:3b-instruct-q8_0       e410b836fe61  3.4 GB  2 hours ago
gemma2:9b-instruct-q4_1         5bfc4cf059e2  6.0 GB  3 days ago
phi3.5:3.8b-mini-instruct-q8_0  8b50e8e1e216  4.1 GB  3 days ago
mxbai-embed-large:latest        468836162de7  669 MB  3 months ago

kgeist, 8 months ago

Tried the 1B model with the "think step by step" prompt.

It gets "which is larger: 9.11 or 9.9?" right if it manages to mention that decimals need to be compared first in its step-by-step thinking. If it skips mentioning decimals, then it says 9.11 is larger.

It gets the strawberry question wrong even after enumerating all the letters correctly, probably because it can't properly count.

JohnHammersley, 8 months ago

Ollama post: https://ollama.com/blog/llama3.2

getcrunk, 8 months ago

Still no 14/30B-parameter models since Llama 2. Seriously killing real usability for power users/DIY.

The 7/8B models are great for PoC and moving to the edge for minor use cases, but there's a big, empty gap until 70B that most people can't run.

The tin-foil hat in me says this is the compromise the powers that be have agreed to. Basically being "open" but practically gimped for the average Joe techie. Basically arms control.

arnaudsm, 8 months ago

Is there an up-to-date leaderboard with multiple LLM benchmarks?

LiveBench and LMSYS are weeks behind and sometimes refuse to add some major models. And press releases like this cherry-pick their benchmarks and ignore better models like Qwen2.5.

If it doesn't exist, I'm willing to create it.

gdiamos, 8 months ago

Llama 3.2 includes a 1B parameter model, which should give 8x higher throughput for data pipelines. In our experience, smaller models are just fine for simple tasks like reading paragraphs from PDF documents.
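
As an illustration of that kind of pipeline, here is a minimal sketch that feeds PDF page text to the 1B model through Ollama. The pypdf usage, file name, and prompt are my own assumptions about such a workflow, not the commenter's actual setup:

```python
# Sketch: use a small local model to pull out the key points of each PDF page.
# Assumes `pip install pypdf ollama` and a local `ollama pull llama3.2:1b`.
import ollama
from pypdf import PdfReader

reader = PdfReader("report.pdf")          # hypothetical input document
for page_number, page in enumerate(reader.pages, start=1):
    text = page.extract_text() or ""
    if not text.strip():
        continue
    reply = ollama.chat(
        model="llama3.2:1b",
        messages=[{"role": "user",
                   "content": "List the key facts in this passage:\n\n" + text}],
    )
    print(f"--- page {page_number} ---")
    print(reply["message"]["content"])
```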

kombine, 8 months ago

Are these models suitable for code assistance, as an alternative to Cursor or Copilot?

Ey7NFZ3P0nzAe, 8 months ago

Interesting that its scores are somewhat below Pixtral 12B: https://mistral.ai/news/pixtral-12b/

gunalx, 8 months ago

The 3B was pretty good at multilingual use (Norwegian): still a lot of gibberish at times, and way more sensitive than the 8B, but more usable than Gemma 2 2B, and fine at my standard "Python list sorter with args" question. But 90B Vision just refuses all my actually useful tasks, like helping recreate images in HTML, or doing anything useful with the image data other than describing it. I haven't gotten this stuck with 70B or OpenAI before. Insane number of refusals all the time.

resters, 8 months ago

This is great! Does anyone know if the Llama models are trained to do function calling like the OpenAI models are? And/or are there any function-calling training datasets?
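
The Llama 3.1/3.2 instruct models do advertise tool-use support. A minimal sketch of how that is commonly exercised through the Ollama Python client is below; the tool schema and weather example are made up for illustration, and how reliably the small 1B/3B models emit well-formed tool calls is something you would need to verify yourself:

```python
# Sketch: exposing one tool to a local Llama 3.2 model via Ollama's chat API.
import ollama

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",                      # hypothetical tool
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = ollama.chat(
    model="llama3.2",
    messages=[{"role": "user", "content": "What's the weather in Oslo?"}],
    tools=tools,
)
# If the model decided to use the tool, the parsed call appears under
# "tool_calls" in the returned message.
print(response["message"])
```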

l5870uoo9y, 8 months ago

> These models are enabled on day one for Qualcomm and MediaTek hardware and optimized for Arm processors.

Do they require a GPU, or can they be deployed on a VPS with a dedicated CPU?

chriskanan, 8 months ago

The assessments of visual capability really need to be more robust. They are still using datasets like VQAv2, which, while providing some insight, have many issues. There are many newer datasets that serve as much more robust tests and are less prone to being affected by linguistic bias.

I'd like to see more head-to-head comparisons with community-created multimodal LLMs, as done in these papers:

https://arxiv.org/abs/2408.05334

https://arxiv.org/abs/2408.03326

I look forward to reading the technical report, once it's available. I couldn't find a link to one yet.

sgt, 8 months ago
Anyone on HN running models on their own local machines, like smaller Llama models or such? Or something else?

404mm, 8 months ago

Can anyone recommend a web UI client for Ollama?

xrd, 8 months ago

I'm currently fighting with a FastAPI Python app deployed to Render. It's interesting because I'm struggling to see how I encode the image and send it using curl. Their example sends directly from the browser and uses a data URI.

But this is relevant because I'm curious how this new model allows image inputs. Do you paste a base64 image into the prompt?

It feels like these models can start not only providing the text generation backend, but also start to replace the infrastructure for the API as well.

Can you input images without something in front of it like Open WebUI?
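
To answer the encoding part generically: local servers such as Ollama accept the raw image as a base64 string in a JSON field rather than pasted into the prompt text. A minimal sketch against Ollama's REST API follows; the vision model tag is an assumption (the Llama 3.2 vision models were not yet available in Ollama at the time of this thread), and the image path is a placeholder:

```python
# Sketch: send a base64-encoded image to a local Ollama server with a prompt.
import base64
import json
import urllib.request

with open("photo.jpg", "rb") as f:                 # hypothetical image file
    image_b64 = base64.b64encode(f.read()).decode("ascii")

payload = json.dumps({
    "model": "llama3.2-vision",                    # assumed vision model tag
    "prompt": "Describe this image.",
    "images": [image_b64],                         # base64 goes here, not in the prompt
    "stream": False,
}).encode("utf-8")

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```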

josephernest, 8 months ago

Can it run with llama-cpp-python? If so, where can we find and download the GGUF files? Are they distributed directly by Meta, or are they converted to GGUF format by third parties?
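
The text-only 1B/3B models do run under llama-cpp-python once you have a GGUF; Meta ships the original weights, and the GGUF conversions on Hugging Face are third-party. A minimal sketch, where the repo and file names are examples of community conversions rather than official Meta artifacts:

```python
# Sketch: load a community GGUF conversion of Llama 3.2 3B with llama-cpp-python.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="bartowski/Llama-3.2-3B-Instruct-GGUF",   # third-party conversion (example)
    filename="*Q8_0.gguf",                            # pick a quantization by glob
    n_ctx=8192,
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain GGUF in one sentence."}]
)
print(out["choices"][0]["message"]["content"])
```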

thimabi, 8 months ago

Does anyone know how these models fare in terms of multilingual real-world usage? I've used previous iterations of Llama models and they all seemed to be lacking in that regard.

aussieguy1234, 8 months ago

When using meta.ai, it's able to generate images as well as understand them. Has image generation also been open-sourced, or is this just a GPT-4o-style ability to see images?

desireco42, 8 months ago

I have to say that, running this model locally, I was pleasantly surprised by how well it ran. It doesn't use that many resources and produces decent output, comparable to ChatGPT. It's not quite at OpenAI's level, but for a lot of tasks, since it doesn't burden the computer, the local model is good enough.

Next I want to try using Aider with it and see how that works.

GaggiX, 8 months ago

The 90B seems to perform pretty weakly on visual tasks compared to Qwen2-VL-72B (https://huggingface.co/Qwen/Qwen2-VL-72B-Instruct), or am I missing something?

notpublic, 8 months ago

Llama-3.2-11B-Vision-Instruct does an excellent job extracting/answering questions from screenshots. It is even able to answer questions based on information buried inside a flowchart. How is this even possible??
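
For anyone who wants to poke at this themselves, the usage pattern from the Hugging Face model card looks roughly like the sketch below. The screenshot path and question are placeholders, and it assumes transformers ≥ 4.45 plus access to the gated model repo:

```python
# Sketch: ask the 11B vision model a question about a screenshot via transformers.
import torch
from PIL import Image
from transformers import AutoProcessor, MllamaForConditionalGeneration

model_id = "meta-llama/Llama-3.2-11B-Vision-Instruct"
model = MllamaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

image = Image.open("screenshot.png")                # hypothetical screenshot
messages = [{"role": "user", "content": [
    {"type": "image"},
    {"type": "text", "text": "What does the flowchart say happens after step 2?"},
]}]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(image, prompt, add_special_tokens=False,
                   return_tensors="pt").to(model.device)

output = model.generate(**inputs, max_new_tokens=128)
print(processor.decode(output[0], skip_special_tokens=True))
```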

sk11001, 8 months ago

Can one of these models be run on a single machine? What specs do you need?

bombi, 8 months ago
Is Termux enough to run the 1B model on Android?

oulipo, 8 months ago

Can the 3B run on an M1 MacBook? It seems to hog all the memory. The 1B runs fine.

bottlepalm, 8 months ago
What mobile devices can the smaller models run on? iPhone, Android?

troupo, 8 months ago

It's hallucinating so badly, it's kinda hilarious.

Literally everything about the quote below is wrong. The problem with these "amazing ground-breaking models" is that you never know which knowledge they actually possess.

--- start quote ---

>>> what is the last version of Ash Framework for Elixir you know

My training data goes up to December 2023, and according to the Ash Framework releases, the last stable version that I'm familiar with is:

* *Ash 2.14*: Released on April 8, 2021.

However, it's worth noting that Ash has been merged into Elixir itself, and most of its features have been integrated into the Elixir language and standard libraries.

In fact, Ash was one of the first frameworks to be included in Elixir, and it played a significant role in shaping the language. Many of Ash's core concepts, such as `defstruct/1` and `defmodule/1`, were later adopted by Elixir itself.

...

Keep in mind that Ash is no longer actively maintained as a separate framework, but its legacy continues to influence Elixir and its community.

--- end quote ---

gdiamos, 8 months ago

Do inference frameworks like vLLM support vision?

stogot, 8 months ago

Surprised there's no mention of audio?

ofermend, 8 months ago

Great release. Models just added to the Hallucination Leaderboard: https://github.com/vectara/hallucination-leaderboard

TL;DR:
* 90B-Vision: 4.3% hallucination rate
* 11B-Vision: 5.5% hallucination rate

dharma1, 8 months ago

Are these better than Qwen at codegen?

taytus, 8 months ago

meta.ai is still running on 3.1.

84adam, 8 months ago
excited for this

sva_, 8 months ago

Curious about the multimodal model's architecture. But alas, when I try to request access:

> Llama 3.2 Multimodal is not available in your region.

It sounds like they input the continuous output of an image encoder into a transformer, similar to Transfusion [0]? Does someone know where to find more details?

Edit:

> Regarding the licensing terms, Llama 3.2 comes with a very similar license to Llama 3.1, with one key difference in the acceptable use policy: any individual domiciled in, or a company with a principal place of business in, the European Union is not being granted the license rights to use multimodal models included in Llama 3.2. [1]

What a bummer.

0. https://www.arxiv.org/abs/2408.11039

1. https://huggingface.co/blog/llama32#llama-32-license-changes-sorry-eu-

minimaxir, 8 months ago

Off topic/meta, but the Llama 3.2 news topic received many, many HN submissions and upvotes but never made it to the front page: the fact that it's on the front page now indicates that moderators intervened to rescue it: https://news.ycombinator.com/from?site=meta.com (showdead on)

If there's an algorithmic penalty against the news for whatever reason, that may be a flaw in the HN ranking algorithm.

nmwnmw, 8 months ago

- Llama 3.2 introduces small vision LLMs (11B and 90B parameters) and lightweight text-only models (1B and 3B) for edge/mobile devices, with the smaller models supporting 128K token context.

- The 11B and 90B vision models are competitive with leading closed models like Claude 3 Haiku on image understanding tasks, while being open and customizable.

- Llama 3.2 comes with official Llama Stack distributions to simplify deployment across environments (cloud, on-prem, edge), including support for RAG and safety features.

- The lightweight 1B and 3B models are optimized for on-device use cases like summarization and instruction following.

monkfish328, 8 months ago

Zuckerberg has never liked having Android/iOS as gatekeepers, i.e. "platforms", for his apps.

He's hoping to control AI as the next platform through which users interact with apps. Free AI is then fine if the surplus value created by not having a gatekeeper to his apps exceeds the cost of the free AI.

That's the strategy. No values here - just strategy, folks.

TheAceOfHearts, 8 months ago

I still can't access the hosted model at meta.ai from Puerto Rico, despite us being U.S. citizens. I don't know what Meta has against us.

Could someone try giving the 90B model this word-search problem [0] and tell me how it performs? So far, with every model I've tried, none has ever managed to find a single word correctly.

[0] https://imgur.com/i9Ps1v6

alexcpn, 8 months ago

In Kung Fu Panda there's a line where the Panda says "I love KungFuuuuuuuu". I don't normally talk like this, but when I saw this release (and started using it), I felt like yelling "I like Metaaaaa" - or is it LLAMMMAAA, or is it open source, or is it this cool ecosystem which gives such value for free...

404mm, 8 months ago

Newbie question: what size model would be needed to have the skills of a 10x software engineer but no general human knowledge (i.e., no need to know how to make a pizza or sequence your DNA)? Is there such a model?