
LLaMA now goes faster on CPUs

1372 points by lawrencechen about 1 year ago

45 comments

speps about 1 year ago
Regarding this bit at the end:

> I learned how to write math kernels by renting Vast VMs and watching Gautham Venkatasubramanian and mrdomino develop CUDA kernels in a tmux session. They've been focusing on solving a much more important challenge for llamafile, which is helping it not have a mandatory dependency on the cuBLAS

If I'm reading this right, they're trying to rewrite cuBLAS within CUDA itself. I'm guessing the next step would be removing the CUDA dependency and going directly with Vulkan or Metal compute shaders. Am I correct?
bottlepalm about 1 year ago
I think it's a good idea for everyone to download an LLM and be able to run it locally, even if your hardware only meets the minimum requirements, as a pseudo-backup of a large chunk of human knowledge.
marshallward about 1 year ago
There is an implication here that the Fortran implementation of `SGEMM` is somehow inadequate. But any modern Fortran compiler will quite easily apply the AVX and FMA optimizations presented here without any additional changes. Both GNU and Intel make these substitutions with the correct flags.

The unrolling optimization is also just another flag away (`-funroll-all-loops`). The Intel compiler will even do this without prompting. In fact, it appears to do only a modest 2x unroll on my machine, suggesting that the extreme unroll in this article would have been overkill.

Parallelization is certainly a lot to ask of Fortran 77 source, but there is little stopping you from adding OpenMP statements to the `SGEMM` function. In fact, modern Fortran even offers its own parallelization constructs if you're willing to go there.

Which is to say: let's not belittle this old Fortran 77 function. Yes, it is old, and it does not even resemble modern Fortran. But the whole point of Fortran is to free the developer from these platform-specific details and hand the job off to the compiler. If you don't like that approach, then you're welcome to go to C or C++. But this little block of Fortran code is already capable of doing just about everything in this article.
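To make the point concrete, here is a minimal sketch (mine, not the article's kernel and not the BLAS `SGEMM` interface): a plain triple-loop single-precision GEMM in C++ that leaves vectorization and unrolling to the compiler, with one OpenMP directive of the kind the comment suggests adding to the Fortran source. Assumed build flags: `g++ -O3 -march=native -funroll-loops -fopenmp`.

```cpp
// Minimal sketch: C += alpha * A * B, row-major, relying on the compiler for
// AVX/FMA vectorization and unrolling rather than hand-written intrinsics.
#include <cstddef>

void sgemm_naive(std::size_t m, std::size_t n, std::size_t k, float alpha,
                 const float *A, std::size_t lda,
                 const float *B, std::size_t ldb,
                 float *C, std::size_t ldc) {
  // One OpenMP directive parallelizes the row loop, much as the comment
  // suggests doing for the Fortran 77 routine. Rows of C are independent.
  #pragma omp parallel for
  for (std::size_t i = 0; i < m; ++i)
    for (std::size_t p = 0; p < k; ++p) {
      const float a = alpha * A[i * lda + p];  // hoisted scalar; the compiler
      for (std::size_t j = 0; j < n; ++j)      // turns this inner loop into
        C[i * ldc + j] += a * B[p * ldb + j];  // vectorized FMA code
    }
}
```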
ajtulloch about 1 year ago
- https://www.cs.utexas.edu/users/flame/laff/pfhp/index.html (e.g. here: https://www.cs.utexas.edu/users/flame/laff/pfhp/week2-blocking-for-registers.html)

- https://gist.github.com/nadavrot/5b35d44e8ba3dd718e595e40184d03f0

might be of interest
TimPC about 1 year ago
Strange title. On my first read of the title I thought the author was arguing the model is now faster on CPU than GPU. It would be much nicer if they had titled this something closer to "Performance Improvement for LLaMa on CPU".
aaronscott about 1 year ago
> I like to define my subroutines using a modern language like C++, which goes 47 gigaflops. This means C++ is three orders of magnitude faster than Python. That's twenty years of progress per Moore's law.

This is great. I love the idea of measuring performance differences in "years of Moore's law."

Twenty years puts the delta in an easy-to-understand framework.
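As a quick sanity check of that framing (my own arithmetic, not from the article; the Python GFLOPS figure below is an assumption implied by "three orders of magnitude"):

```cpp
// A 1000x speedup is about log2(1000) ~ 10 doublings; at roughly one doubling
// every two years, that is ~20 years of Moore's law.
#include <cmath>
#include <cstdio>

int main() {
  const double speedup   = 47.0 / 0.047;         // 47 GFLOPS vs ~0.047 GFLOPS (assumed Python figure)
  const double doublings = std::log2(speedup);   // ~10 doublings
  const double years     = doublings * 2.0;      // ~2 years per doubling
  std::printf("%.0fx ~ %.1f doublings ~ %.0f years of Moore's law\n",
              speedup, doublings, years);
}
```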
wokwokwok about 1 year ago
> You don't need a large computer to run a large language model

While running TinyLlama does indeed count as running a language model, I'm skeptical that its capabilities match what most people would consider a baseline requirement to be useful.

Running a 10-parameter model is also "technically" running an LM, and I could do it by hand with a piece of paper.

That doesn't mean "you don't need a computer to run an LM"...

I'm not sure where an LM becomes an LLM, but... I personally think it's more about capability than parameter count.

I don't *realllly* believe you can do a lot of useful LLM work on a Pi.
tiffanyh about 1 year ago
Pixar uses CPUs...

I wonder if we'll end up in a situation like rendered movies, where the big studios like Pixar use CPUs (not GPUs) to render their films due to the cost/perf (and access to larger amounts of RAM).

https://news.ycombinator.com/item?id=25616372
ein0p about 1 year ago
As someone who has tried to beat MKL-DNN, and was unsuccessful at doing so even for constrained matrix sizes, I'm curious how they pulled off such a massive improvement.

But as someone who routinely estimates picojoules per FLOP at $DAY_JOB: there's simply no way this is energy efficient. That is not even physically possible with a CPU.
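For readers unfamiliar with the metric, this is roughly how such an estimate goes (a sketch; the package power is my assumption, and the GFLOPS value is the 810 GFLOPS figure cited elsewhere in this thread, not a measurement of the article's setup):

```cpp
// Picojoules per FLOP: 1 W = 1 J/s, so J/FLOP = watts / FLOPS, scaled to pJ.
#include <cstdio>

int main() {
  const double watts  = 150.0;  // assumed CPU package power under load
  const double gflops = 810.0;  // sustained GFLOPS figure quoted in the thread
  const double pj_per_flop = watts / (gflops * 1e9) * 1e12;
  std::printf("~%.0f pJ/FLOP at %.0f W and %.0f GFLOPS\n",
              pj_per_flop, watts, gflops);
}
```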
AbuAssar about 1 year ago
Regarding AMD Zen 4 with AVX-512:

"Here we see that, despite only being twice the price, the 7995WX x86 ISA offers 7x more raw compute power than the M2 Ultra ARM ISA, and nearly the same token generation speed, which is likely thanks to its 384mb L3 cache. When I bought this chip, I had to expand support in llama.cpp for bfloat16 and AVX512 before I could fully test its capabilities. My work means you can now run LLaMA 2.8x faster on Zen4 than you could before."
pama about 1 year ago
Super nice story on the matmul optimization that gave 810 gflops for 512x512. Thanks for the write-up and the contributions to llama.cpp and the community more broadly.
saagarjha about 1 year ago
> One important thing to know if you're considering buying a Mac Studio is that, like the Windows Executive, XNU does a really good job keeping your desktop stable, and that means protecting your system from you. It takes me 45 seconds on Mac Studio to compile the Cosmo monorepo, due to all these safety features; but if I fork bombed it, I'd be surprised if Netflix skipped a single frame.

Clearly nobody actually tried this, because on XNU if you fork bomb the system it reliably goes down every single time. There are no "safety features" here, just extra overhead when spawning processes.
none_to_remain about 1 year ago
From the example: "--temp 0 turns off the random number generator (we don't want improvisation for a spam filter)"

I've been thinking for a while about how many applications of LLMs need this adjustment and aren't getting it.
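For context, this is roughly what a temperature setting does at sampling time (an illustrative sketch, not llama.cpp's actual sampler; `sample_token` is a hypothetical helper): with temperature 0 the sampler degenerates to a greedy argmax, so a fixed prompt always yields the same output.

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <random>
#include <vector>

// Pick the next token id from raw logits. temp <= 0 means deterministic greedy
// decoding; otherwise sample from the temperature-scaled softmax distribution.
int sample_token(const std::vector<float>& logits, float temp, std::mt19937& rng) {
  if (temp <= 0.0f) {
    // "Turns off the random number generator": always take the most likely token.
    return static_cast<int>(std::max_element(logits.begin(), logits.end()) - logits.begin());
  }
  const float max_logit = *std::max_element(logits.begin(), logits.end());
  std::vector<double> weights(logits.size());
  for (std::size_t i = 0; i < logits.size(); ++i)
    weights[i] = std::exp((logits[i] - max_logit) / temp);  // softmax, scaled by 1/temp
  std::discrete_distribution<int> dist(weights.begin(), weights.end());
  return dist(rng);
}
```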
jongjong about 1 year ago
That's interesting, because I built a simple ANN library, was playing around with GPU acceleration, and came to a similar conclusion as this article.

To be fair, my ANN library was faster (up to 2x) with GPU acceleration in some scenarios where the ANN was shallow (as opposed to deep, with many hidden layers). I thought the marginal gain may have been because, the way it's set up in my library, it has to load all the values into the GPU from RAM for each pass of forward and back propagation in each layer during training. I believe there is a way to allocate memory on the GPU chip itself, but it's a lot more challenging to do, especially in a modular, fully portable way (which was one of the goals of my library).

But anyway, even the 2x best-case figure seemed disappointing. In my mind, I expected to see at least a 10x speed improvement... And I was surprised that the CPU version was actually slightly faster in the scenario I was testing at the time, which was a relatively deep network. It makes sense, since the different layers cannot be parallelized: the input of one layer depends on the output of the previous layer... So the more layers you have, the more serial bottlenecks you have, and the less you can benefit from GPU acceleration... And unfortunately, deep networks also happen to be the ones that tend to perform best for a lot of use cases.
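A tiny illustration of that serial dependency (my own sketch, not the commenter's library): each layer consumes the previous layer's output, so however well a single matvec parallelizes, the layers themselves must run one after another.

```cpp
#include <cstddef>
#include <vector>

struct Layer {
  std::vector<std::vector<float>> W;  // weights, shape [out][in]
  std::vector<float> forward(const std::vector<float>& x) const {
    std::vector<float> y(W.size(), 0.0f);
    for (std::size_t o = 0; o < W.size(); ++o) {  // this matvec parallelizes well...
      for (std::size_t i = 0; i < x.size(); ++i) y[o] += W[o][i] * x[i];
      y[o] = y[o] > 0.0f ? y[o] : 0.0f;           // ReLU
    }
    return y;
  }
};

std::vector<float> forward_all(const std::vector<Layer>& net, std::vector<float> x) {
  for (const Layer& layer : net)  // ...but this loop over layers is inherently sequential
    x = layer.forward(x);
  return x;
}
```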
kiratp about 1 year ago
It's fascinating to me that, coming up on a year since Sapphire Rapids became available in the public cloud, developers are still targeting AVX-512 when they should be targeting VNNI and AMX.

https://github.com/ggerganov/llama.cpp/issues/2555
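For the curious, this is the kind of instruction "targeting VNNI" refers to (a hedged sketch, not llama.cpp code; `dot_u8s8` is my own helper): AVX-512 VNNI's `vpdpbusd` fuses the widen, multiply, and accumulate steps of an int8 dot product, which is the core operation of quantized inference.

```cpp
// Build with e.g.: g++ -O3 -mavx512f -mavx512vnni vnni_dot.cpp -c
#include <immintrin.h>
#include <cstddef>
#include <cstdint>

// Dot product of n bytes (a unsigned, b signed); assumes n is a multiple of 64.
int32_t dot_u8s8(const uint8_t* a, const int8_t* b, std::size_t n) {
  __m512i acc = _mm512_setzero_si512();
  for (std::size_t i = 0; i < n; i += 64) {
    const __m512i va = _mm512_loadu_si512(a + i);
    const __m512i vb = _mm512_loadu_si512(b + i);
    // Multiplies 4 adjacent u8*s8 pairs per 32-bit lane and adds them into acc.
    acc = _mm512_dpbusd_epi32(acc, va, vb);
  }
  return _mm512_reduce_add_epi32(acc);  // horizontal sum of the 16 lanes
}
```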
politelemon about 1 year ago
This is great work. I've always thought it would be great if running LLMs could be commoditized for regular average-Joe hardware. I had thought that llamafile was like a Dockerfile for llama.cpp, but it looks like that's a mistake?

Will definitely be giving this a try.
aniijbod about 1 year ago
A way of thinking about what's inside any of the top LLMs right now: even if they never learn another single fact, even if they get ridiculously out of date as a result, even if they are even more riddled with errors and prone to biases than we know them to be, even if they are as prone to hallucinations as we know they are and never develop the capacity to cure themselves of this, they are still more knowledgeable, and capable of more reasoned responses to more questions, despite their capacity for error, than any single human being that has ever lived.
kristianp about 1 year ago
Nice to see such speedups for CPUs. Are these changes available as a branch or pull request in llama.cpp itself? I'd like to make use of them in that form if possible (as I'm used to using that).
s_Hogg about 1 year ago
I'd pay good money to watch jart in conversation with Carmack.
miki123211 about 1 year ago
If I'm reading the post correctly, llamafile is faster than llama.cpp, despite the author upstreaming some of the changes. What's the reason for this?
mijoharas about 1 year ago
Has Justine written anywhere about her disassembly setup?

> I configured Emacs so I can push a button, and the disassembly for the C++ code I'm working on will pop up on the screen in a few milliseconds.

I assume it's something project-specific rather than being able to get the disassembly for an arbitrary section of code or something?

It seems very handy, so I'd love to see the implementation (I couldn't find anything googling).
hrkfmud50k about 1 year ago
> It's clearly optimal since my CPU is listed as only being capable of going 780 gigaflops

780 GFLOPS is the iGPU spec. Is this a valid comparison?

https://nanoreview.net/en/cpu/intel-core-i9-14900k
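One way to sanity-check that is to compute the CPU's own theoretical FP32 peak from first principles. The sketch below does this with assumed core counts, FMA throughput, and all-core clocks for an i9-14900K (all of these figures are my assumptions, not vendor specifications); the point is only that the CPU peak follows from cores × FMA units × SIMD lanes × 2 FLOPs per FMA × clock, independently of the iGPU's rating.

```cpp
#include <cstdio>

int main() {
  struct Part { const char* name; double cores, fma_units, lanes, ghz; };
  const Part parts[] = {
    {"P-cores (AVX2)", 8, 2, 8, 5.0},   // assumed all-core clock and 2x 256-bit FMA
    {"E-cores (AVX2)", 16, 1, 8, 4.0},  // assumed equivalent 256-bit FMA throughput
  };
  double total = 0.0;
  for (const Part& p : parts) {
    const double gflops = p.cores * p.fma_units * p.lanes * 2 * p.ghz;
    std::printf("%-15s ~%6.0f GFLOPS\n", p.name, gflops);
    total += gflops;
  }
  std::printf("total           ~%6.0f GFLOPS (vs. the 780 GFLOPS iGPU figure)\n", total);
}
```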
moffkalast about 1 year ago
> the Raspberry Pi

Odd how there were no Mistral 7B benchmarks for the Pi 5 in that table (I doubt anyone is seriously considering using TinyLlama for anything at all), so I went to re-test it myself on the Pi 5 8G.

llamafile 0.7: 52 predicted, 150 cached, 430ms per token, 2.32 tokens per second

llama.cpp + OpenBLAS: 36 predicted, 124 cached, 381ms per token, 2.62 tokens per second

It does seem to inch closer to the speed you get with BLAS acceleration, which is quite impressive, but in practical terms the Pi 5 is so heavily limited by its memory-throughput bottleneck that it saturates the required compute with 3 threads already. So while fancy kernels will make it more efficient, they won't really save you from that fundamental bandwidth limit. The Pi Foundation messed up going with a 32-bit memory bus, simple as.
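The bandwidth ceiling described here can be estimated directly (a sketch; the bandwidth and model-size figures are my assumptions for a Pi 5 and a 4-bit 7B model, not measurements): token generation has to stream essentially all of the weights once per generated token, so tokens/s is bounded by bandwidth divided by model size.

```cpp
#include <cstdio>

int main() {
  const double bandwidth_gb_s = 17.0;  // assumed Pi 5 LPDDR4X on a 32-bit bus
  const double model_gb       = 4.1;   // assumed size of a Q4-quantized 7B model
  std::printf("upper bound: ~%.1f tokens/s\n", bandwidth_gb_s / model_gb);
  // ~4 tokens/s; the measured 2.3-2.6 tokens/s sits under this roof, so faster
  // kernels alone cannot push far past it.
}
```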
isusmelj about 1 year ago
Is there an overview somewhere of the progress we've made on the software side for training and inference of LLMs? It feels like we've squeezed 10-100x more out of the hardware since LLaMA appeared. This crazy progress will probably saturate though as we reach theoretical limits, no?
1-6 about 1 year ago
The question is, how much of an improvement does this give over a GPU or an ASIC?
bee_rider about 1 year ago
Is it easy to find where the matvecs are in LLaMA (if you are someone who is curious and wants to poke around at the "engine" without understanding the "transmission," so to speak)? I was hoping to mess around with this for Stable Diffusion, but it seemed like they were buried under quite a few layers of indirection. Which is entirely reasonable; the goal is to ship software, not satisfy people who'd just want to poke things and see what happens, haha.
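For orientation, here is where the matvecs sit in a LLaMA-style decoder layer, written as compact C++ (my own illustrative structure and naming, not llama.cpp's internals; residuals, normalization, and the attention mix are elided so the seven weight matrices stand out). During single-token generation the activations are a vector, so each of these is exactly the matrix-vector kernel the article optimizes.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

using Vec = std::vector<float>;
using Mat = std::vector<Vec>;  // row-major: Mat[out][in]

// The kernel in question: one matrix-vector product.
Vec matvec(const Mat& W, const Vec& x) {
  Vec y(W.size(), 0.0f);
  for (std::size_t o = 0; o < W.size(); ++o)
    for (std::size_t i = 0; i < x.size(); ++i) y[o] += W[o][i] * x[i];
  return y;
}

Vec silu(Vec v) { for (float& f : v) f = f / (1.0f + std::exp(-f)); return v; }

struct Layer { Mat Wq, Wk, Wv, Wo, Wgate, Wup, Wdown; };  // the seven weight matrices

Vec decoder_layer_matvecs(const Layer& L, const Vec& x) {
  Vec q = matvec(L.Wq, x), k = matvec(L.Wk, x), v = matvec(L.Wv, x);  // QKV projections
  (void)q; (void)k;                              // consumed by the elided attention step
  Vec attn_out = matvec(L.Wo, v);                // attention output projection
  Vec gate = silu(matvec(L.Wgate, attn_out));    // feed-forward gate projection
  Vec up   = matvec(L.Wup, attn_out);            // feed-forward up projection
  for (std::size_t i = 0; i < gate.size(); ++i) gate[i] *= up[i];
  return matvec(L.Wdown, gate);                  // feed-forward down projection
}
```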
column about 1 year ago
Unfortunately BitDefender (corporate) blocks llamafile as ransomware ("atc.heur.crypt") and it seems there is no workaround. :(
Ono-Sendai about 1 year ago
Multithreading support in llama.cpp is probably still pretty busted, assuming it uses the same underlying NN inference code as whisper.cpp: https://github.com/ggerganov/whisper.cpp/issues/200#issuecomment-1484025515
rbnsl about 1 year ago
Definitely wild that we're in the timeline where you can run a 1.1B-parameter model on a Raspberry Pi, but it's still tough to justify, because the 1.1B is kind of useless compared to the beefier models. Sick for home builds/hobbyists though; I might want to get one of the new Pis just to try this out.
DrNosferatu about 1 year ago
Any performance benchmark against Intel's 'IPEX-LLM' [0] or others?

[0] https://github.com/intel-analytics/ipex-llm
yieldcrv about 1 year ago
Note: this is "goes faster on CPUs than before", not "faster than GPUs".
Dobiasd about 1 year ago
Are there any benchmarks on the performance of these new matrix-multiplication kernels compared to the Eigen library (ideally for float32)?
discordance about 1 year ago
"As for disk speed, dd if=/dev/zero of=/tmp/output bs=128k count=50k; rm -f /tmp/output reports 1.6 GB/s which is 3.6x slower than my Mac Studio, and 3x slower than my Intel (which has the same M.2 stick). I'm told that Intel and Apple are just better at this, but I wish I understood why."

Can anyone here answer why this is?
arendtio about 1 year ago
Does someone else see llamafile using Wine on Linux?

Edit: after the download I did a simple chmod +x llava-v1.5-7b-q4.llamafile; ./llava-v1.5-7b-q4.llamafile
seangrogg about 1 year ago
Mmm, I wonder how well this would work on a mobile device. Maybe I'll try grabbing my Ubuntu Touch here in a sec...
m3kw9 about 1 year ago
So is Nvidia in trouble now because Intel can be used instead for faster/cheaper inference?
6r17 about 1 year ago
Today being today, I must ask: has anyone actually tried this?
JohnnyHerz about 1 year ago
Awesomeness. Thank you for sharing!
tubs about 1 year ago
The RAM is not on the CPU on a Mac. It's in the same can, but it's still regular DDR DIMMs.
aimonster2 about 1 year ago
Posted too early.
wtallis about 1 year ago
I know this post is focused specifically on CPU performance, but the section on the performance of the Mac Studio seems to be deliberately avoiding directly mentioning that machine's GPU, let alone benchmarking against it. I think it would have been interesting to see a straightforward comparison of what compute performance and memory bandwidth (as measured by the prompt-processing and token-generation speeds, respectively) are achievable with reasonable optimization effort on the CPU vs. the GPU when they're attached to the same memory subsystem.
4bpp about 1 year ago
It would be good to see some independent verification of this claim. HN has previously [1] fallen for a claim by the same author to have reduced llama.cpp memory usage for a dense model way below the size of the model, which should have failed a basic smell test and indeed was debunked shortly after. Justine Tunney appears to enjoy extreme superstar status here, and it's hard to overstate the degree of social pressure that needed to be overcome at the time for the skeptical position to reach fixation (to begin with, what other LLM developments even hit upvote numbers like the ~1300 there or the 712 here at the time of writing?).

[1] https://news.ycombinator.com/item?id=35393284
pknerd about 1 year ago
So, can I now run it on my 2015 MacBook with 8GB of RAM?
sublimefire about 1 year ago
Re: funding

My friend suggested nominating Justine for open-source contributions in an internal Microsoft programme (the winner takes $10k). They did not even want to add her to the potential list of nominees, because her software is not used at MSFT. It speaks volumes about the corporate culture and shows what they really think about OSS support.
tomp about 1 year ago
TL;DR: unroll the outer two loops of the matrix multiplication.
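A hedged sketch of what that summary means in practice (not the article's exact kernel): unrolling the two outer loops lets each inner iteration keep a small tile of C in registers and reuse every loaded element of A and B several times, instead of re-loading them for each multiply-add.

```cpp
#include <cstddef>

// C += A * B, row-major; assumes m and n are multiples of 4 for brevity.
void matmul_tiled(std::size_t m, std::size_t n, std::size_t k,
                  const float* A, const float* B, float* C) {
  for (std::size_t i = 0; i < m; i += 4)        // outer loop unrolled by 4
    for (std::size_t j = 0; j < n; j += 4) {    // outer loop unrolled by 4
      float c[4][4] = {};                       // 4x4 tile of C held in registers
      for (std::size_t p = 0; p < k; ++p)
        for (std::size_t di = 0; di < 4; ++di)
          for (std::size_t dj = 0; dj < 4; ++dj)
            c[di][dj] += A[(i + di) * k + p] * B[p * n + (j + dj)];
      for (std::size_t di = 0; di < 4; ++di)    // write the finished tile back
        for (std::size_t dj = 0; dj < 4; ++dj)
          C[(i + di) * n + (j + dj)] += c[di][dj];
    }
}
```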