Fast and Portable Llama2 Inference on the Heterogeneous Edge

313 points by 3Sophons over 1 year ago

25 comments

oersted over 1 year ago
I'm all for Rust and WASM, but if you look at the code it's just 150 lines of a basic Rust command-line script. All the heavy lifting is done by a single line passing the model to the WASI-NN backend, which in this case is provided by the WasmEdge runtime, which incidentally is C++, not Rust.

Rust is bringing zero advantage here really, the backend could be called from Python or anything else.
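For reference, that hand-off is roughly the single call below; a minimal sketch assuming the wasmedge-wasi-nn bindings, where the builder and method names are approximations of that crate's API rather than verified signatures:

```rust
// Sketch only: assumes the wasmedge-wasi-nn crate; treat the exact names
// (GraphBuilder, GraphEncoding::Ggml, build_from_cache) as approximations.
use wasmedge_wasi_nn::{ExecutionTarget, GraphBuilder, GraphEncoding};

fn main() {
    // This single call is where the heavy lifting happens: the gguf model,
    // preloaded by the runtime under the alias "default", is handed to the
    // GGML backend, i.e. llama.cpp compiled into the WasmEdge plugin.
    let graph = GraphBuilder::new(GraphEncoding::Ggml, ExecutionTarget::AUTO)
        .build_from_cache("default")
        .expect("failed to load model via wasi-nn");

    // Everything after this point is shuttling bytes in and out; the
    // inference itself runs in the host plugin, not in Rust or WASM.
    let _ctx = graph.init_execution_context().expect("no execution context");
}
```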
ed over 1 year ago
Whoa! Great work. To other folks checking it out, it still requires downloading the weights, which are pretty large. But they essentially made a fully portable, no-dependency llama.cpp, in 2MB.

If you're an app developer this might be the easiest way to package an inference engine in a distributable file (the weights are already portable and can be downloaded on-demand — the inference engine is really the part you want to lock down).
FL33TW00D over 1 year ago
This is just wrapping llama.cpp, right? I'm sorry, but I'm pretty tired of projects wrapping x.cpp.

I've been developing a Rust + WebGPU ML framework for the past 6 months. I've learned quickly how impressive the work by GG is.

It's early stages, but you can check it out here: https://www.ratchet.sh/ and https://github.com/FL33TW00D/whisper-turbo
wokwokwok over 1 year ago
Mmm…

The wasi-nn proposal that this relies on (https://github.com/WebAssembly/wasi-nn) boils down to sending arbitrary chunks to some vendor implementation. The API is literally: set input, compute, get output.

…and that is totally non-portable.

The reason *this* works is that it relies on the abstraction already implemented in llama.cpp that allows it to take a gguf model and map it to multiple hardware targets, which you can see has been lifted as-is into WasmEdge here: https://github.com/WasmEdge/WasmEdge/tree/master/plugins/wasi_nn/thirdparty/ggml

So..

> Developers can refer to this project to write their machine learning application in a high-level language using the bindings, compile it to WebAssembly, and run it with a WebAssembly runtime that supports the wasi-nn proposal, such as WasmEdge.

Is total rubbish; no, you can't.

This isn't portable.

It's not sandboxed.

It's not a HAL.

If you have a wasm binary you *might* be able to run it *if* the version of the runtime you're using *happens* to implement the specific ggml backend you need, which it probably doesn't… because there's literally no requirement for it to do so.

…and if you do, you're just calling the llama.cpp ggml code, so it's as safe as that library is…

There's a lot of "so portable" and "such rust" talk in this article which really seems misplaced; this doesn't seem to have the benefits of either of those two things.

Let's imagine you have some new hardware with a WASI runtime on it. Can you run your model on it? Does it have GPU support?

Well, it turns out the answer is "go and see if llama.cpp compiles on that platform with GPU support, and if the runtime you're using happens to have a ggml plugin in it and happens to have a copy of that version of ggml vendored in it, and if not, then no".

..at which point, wtf are you even using WASI for?

Cross-platform GPU support *is* hard, but this… I dunno. It seems absolutely ridiculous.

Imagine if WebGPU was just "post some binary chunk to the GPU and maybe it'll draw something or whatever if it's the right binary chunk for the current hardware."

That's what this is.
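To make the "set input, compute, get output" shape concrete, the whole guest-side surface of the proposal is roughly the three calls below; a sketch in the style of the wasmedge-wasi-nn bindings, with the type and method names treated as assumptions:

```rust
// Sketch of the wasi-nn call sequence described above; method names follow
// the wasmedge-wasi-nn bindings and are approximations, not verified.
use wasmedge_wasi_nn::{GraphExecutionContext, TensorType};

fn run_once(ctx: &mut GraphExecutionContext, prompt: &str) -> String {
    // "Set input": an opaque byte blob. Whether the backend can interpret it
    // depends entirely on which vendor implementation the runtime ships.
    ctx.set_input(0, TensorType::U8, &[prompt.len()], prompt.as_bytes())
        .expect("set_input failed");

    // "Compute": the actual work happens in the host plugin (llama.cpp/ggml),
    // outside the wasm module itself.
    ctx.compute().expect("compute failed");

    // "Get output": another opaque byte blob copied back into guest memory.
    let mut out = vec![0u8; 4096];
    let n = ctx.get_output(0, &mut out).expect("get_output failed");
    String::from_utf8_lossy(&out[..n]).into_owned()
}
```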
reidjs over 1 year ago
Can I run this offline on my iPhone? That would be like having basic internet search regardless of reception. Could come in handy when camping
behnamoh over 1 year ago
The way things are going, we'll see more efficient and faster methods to run the transformer arch on the edge, but I'm afraid we're approaching the limit, because you can't just rust your way out of the VRAM requirements, which are the main bottleneck in loading large-enough models. One might say "small models are getting better, look at Mistral vs. llama 2", but small models are also approaching their capacity (there's only so much you can put in 7b parameters).

I don't know man, this approach to AI doesn't "feel" like it'll lead to AGI—it's too inefficient.
anentropic over 1 year ago
> the Mac OS build of the GGML plugin uses the Metal API to run the inference workload on M1/M2/M3's built-in neural processing engines

I don't think that's accurate (someone please correct me...)

GGML's use of the Metal API means it runs on the M1/2/3 *GPU* and not the neural engine.

Which is all good, but for the sake of being pedantic...
nigma over 1 year ago
I hate this kind of clickbait marketing suggesting the project delivers 1/100 of the size or 100x-35000x the speed of other solutions because it uses a different language for a wrapper around the core library, while completely neglecting the tooling and community expertise built around other solutions.

First of all, the project is based on llama.cpp [1], which does the heavy work of loading and running multi-GB model files on GPU/CPU, and the inference speed is not limited by the wrapper choice (there are other wrappers in Go, Python, Node, Rust, etc., or one can use llama.cpp directly). The size of the binary is also not that important when common quantized model files are often in the range of 5GB-40GB and require a beefy GPU or a motherboard with 16-64GB of RAM.

[1] https://github.com/ggerganov/llama.cpp
hnarayanan over 1 year ago
If a large part of the size is essentially the trained weights of a model, how can one reduce the size by orders of magnitude (without losing any accuracy)?
diimdeep over 1 year ago
I do not see the point of using this instead of using llama.cpp directly.
est over 1 year ago
> The core Rust source code is very simple. It is only 40 lines of code. The Rust program manages the user input, tracks the conversation history, transforms the text into the llama2's chat template, and runs the inference operations using the WASI NN API.

TL;DR: a 2MB executable that reads stdin and calls WASI-NN
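The shape of that 40-line program is roughly the following; a sketch covering only the stdin loop and the llama2 chat template, with the model call replaced by a hypothetical placeholder (the real program makes the wasi-nn calls shown earlier in the thread):

```rust
use std::io::{self, BufRead, Write};

// Build a llama2-chat-style prompt from the running conversation.
// The [INST]/<<SYS>> markers are the standard llama-2-chat template.
fn build_prompt(system: &str, history: &[(String, String)], user: &str) -> String {
    let mut prompt = format!("<s>[INST] <<SYS>>\n{system}\n<</SYS>>\n\n");
    for (q, a) in history {
        prompt.push_str(&format!("{q} [/INST] {a} </s><s>[INST] "));
    }
    prompt.push_str(&format!("{user} [/INST]"));
    prompt
}

fn main() {
    let system = "You are a helpful assistant.";
    let mut history: Vec<(String, String)> = Vec::new();
    let stdin = io::stdin();

    print!("[You]: ");
    io::stdout().flush().unwrap();
    for line in stdin.lock().lines() {
        let user = line.unwrap();
        let prompt = build_prompt(system, &history, &user);

        // Hypothetical stand-in for the wasi-nn set_input/compute/get_output
        // sequence; the actual inference happens in the runtime's GGML plugin.
        let answer = format!("(model reply to {} bytes of prompt)", prompt.len());

        println!("[Bot]: {answer}");
        history.push((user, answer));
        print!("[You]: ");
        io::stdout().flush().unwrap();
    }
}
```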
hedgehog over 1 year ago
It looks like this is Rust for the application, wrapped around a WASM port of llama.cpp that in turn uses an implementation of WASI-NN for the actual NN compute. It would be interesting to see how this compares to TFLite, the new stuff in the PyTorch ecosystem, etc.
danielEM over 1 year ago
I'm getting lost in all that.

I'm using llama.cpp and mlc-llm, both on my 2-year-old mobile Ryzen APU with 64GB of RAM. The first does not use the GPU at all; I tried plenty of options and nothing worked, but llama 34B runs: painfully slow, but it does work. The second runs on top of Vulkan, and while I didn't take any precise measurements, its limit looks like 32GB of RAM (so no llama 34B); it does offload the CPU, but unfortunately performance seems similar to the CPU (that's my perception, I didn't take measurements here either).

So... will I get any benefits from switching to the Rust/WebAssembly version???
anon23432343 over 1 year ago
So you need 2MB for sending an API call to the edge?

Okaayyyy...
dkga over 1 year ago
Very cool, but unless I missed it, could someone please explain why not just compile a native Rust application? Is the Wasm part needed for the GPU acceleration (whatever the user's GPU is)?
thih9 over 1 year ago
> the binary application (only 2MB) is completely portable across devices with heterogeneous hardware accelerators.

What does "heterogeneous hardware accelerators" mean in practice?
gvand over 1 year ago
The binary size is not really important in this case; llama.cpp should not be that far from this. What matters, as we all know, is how much GPU memory we need.
rowanG077 over 1 year ago
I don't think you can call anything wasm efficient.
rjzzleep over 1 year ago
Is there any detailed info on how a 4090 + Ryzen 7840 compares to any of the new Apple offerings with 64GB or more unified RAM?
antirez over 1 year ago
Linkbait at its finest. But it's true that the Python AI stack sucks big time.
syrusakbary over 1 year ago
Congrats on the work... it's an impressive demo!

It may be worth researching adding support for it to the Wasmer WebAssembly runtime [1]. (Note: I work at Wasmer!)

[1] https://wasmer.io/
classified over 1 year ago
How is it still fast if it was compiled to WASM?
tomalbrc over 1 year ago
> No wonder Elon Musk said that Rust is the language of AGI.

What.
bugglebeetle over 1 year ago
Wow, this is a “holy shit” moment for Rust in AI applications if this works as described. Also, so long Mojo!

EDIT:

Looks like I'm wrong, but I appreciate getting schooled by all the HNers with low-level expertise. Lots to go and learn about now.
jasonjmcghee over 1 year ago
Confused about the title rewrite from "Fast and Portable Llama2 Inference on the Heterogeneous Edge", which more clearly communicates what this article is about: a wasm version of llama.cpp.

I feel like editorializing to highlight the fact that it's 2MB and runs on a Mac misses some of the core aspects of the project and write-up.