
Llama.cpp guide – Running LLMs locally on any hardware, from scratch

368 points by zarekr, 6 months ago

18 comments

smcleod, 6 months ago
Neat to see more folks writing blogs on their experiences. This does, however, seem like an over-complicated method of building llama.cpp.

Assuming you want to do this interactively (at least for the first time), you should only need to run:

    ccmake .

Toggle the parameters your hardware supports or wants (e.g. CUDA if you're using Nvidia, Metal if you're using Apple, etc.), press 'c' (configure) then 'g' (generate), then:

    cmake --build . -j $(expr $(nproc) / 2)

Done.

If you want to move the binaries into your PATH, you could then optionally run cmake --install .
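
For a non-interactive build, the same configuration can be passed straight on the command line. A minimal sketch, assuming a recent llama.cpp checkout where the CUDA backend flag is named GGML_CUDA (older releases used a different flag name, e.g. LLAMA_CUBLAS):

    # configure an out-of-tree build with the CUDA backend enabled
    cmake -S . -B build -DGGML_CUDA=ON -DCMAKE_BUILD_TYPE=Release

    # compile with half the available cores
    cmake --build build -j $(expr $(nproc) / 2)

    # binaries end up in build/bin (llama-cli, llama-server, ...)
    ./build/bin/llama-cli --version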

marcodiego, 6 months ago
First time I heard about Llama.cpp, I got it to run on my computer. Now, my computer: a Dell laptop from 2013 with 8 GB RAM and an i5 processor, no dedicated graphics card. Since I wasn't using an MGLRU-enabled kernel, it took a looong time to start, but it wasn't OOM-killed. Considering my amount of RAM was just the minimum required, I tried one of the smallest available models.

Impressively, it worked. It was slow to spit out tokens, at a rate of around a word every 1 to 5 seconds, and it was able to correctly answer "What was the biggest planet in the solar system", but it quickly hallucinated, talking about moons that it called "Jupterians", while I expected it to talk about the Galilean moons.

Nevertheless, LLMs really impressed me, and as soon as I get my hands on better hardware I'll try to run other, bigger models locally in the hope that I'll finally have a personal "oracle" able to quickly answer most questions I throw at it and help me write code and other fun things. Of course, I'll have to check its answers before using them, but the current state seems impressive enough for me, especially QwQ.

Is anyone running smaller experiments who can talk about their results? Is it already possible to have something like an open-source copilot running locally?

wing-_-nuts, 6 months ago
Llama.cpp is one of those projects that I *want* to install, but I always just wind up installing kobold.cpp because it's simply *miles* better with UX.

superkuh, 6 months ago
I'd say avoid pulling in all the Python and containers required, and just download the GGUF from the Hugging Face website directly in a browser rather than doing it programmatically. That sidesteps a lot of this project's complexity, since nothing about llama.cpp requires those heavy deps or abstractions.
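
A minimal sketch of that workflow, assuming llama-cli has already been built and reusing the example GGUF from a comment further down (any GGUF from Hugging Face works the same way):

    # fetch one quantized GGUF file directly, no Python tooling involved
    wget --continue https://huggingface.co/TheBloke/dolphin-2.2.1-mistral-7B-GGUF/resolve/main/dolphin-2.2.1-mistral-7b.Q5_K_M.gguf

    # point llama.cpp at the downloaded file
    ./llama-cli -m dolphin-2.2.1-mistral-7b.Q5_K_M.gguf -p "Once upon a time" -n 128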

arendtio, 6 months ago
I tried building and using llama.cpp multiple times, and after a while I got so frustrated with the frequently broken build process that I switched to ollama with the following script:

    #!/bin/sh
    export OLLAMA_MODELS="/mnt/ai-models/ollama/"

    printf 'Starting the server now.\n'
    ollama serve >/dev/null 2>&1 &
    serverPid="$!"

    printf 'Starting the client (might take a moment (~3min) after a fresh boot).\n'
    ollama run llama3.2 2>/dev/null

    printf 'Stopping the server now.\n'
    kill "$serverPid"

And it just works :-)

dmezzetti, 6 months ago
Seeing a lot of Ollama vs. running llama.cpp directly talk here. I agree that setting up llama.cpp with CUDA isn't always the easiest. But there is a cost to running all inference over HTTP: local in-process inference will be faster. Perhaps that doesn't matter in some cases, but it's worth noting.

I find that PyTorch is easier to get up and running. For quantization, AWQ models work and it's just a "pip install" away.
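
To make the trade-off concrete, a rough sketch using llama.cpp's own binaries (the model path is a placeholder): the first command runs inference in-process, the second serves the same model over HTTP, adding a round trip plus JSON serialization to every request.

    # in-process: load the model and generate inside a single process
    ./llama-cli -m model.gguf -p "Hello" -n 64

    # over HTTP: start a server, then query it from any client
    ./llama-server -m model.gguf --port 8080 &
    curl http://localhost:8080/v1/chat/completions \
      -H "Content-Type: application/json" \
      -d '{"messages": [{"role": "user", "content": "Hello"}]}'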

slavik81, 6 months ago
FYI, if you're on Ubuntu 24.04, it's easy to build llama.cpp with AMD ROCm GPU acceleration. Debian enabled support for a wider variety of hardware than is available in the official AMD packages, so this should work for nearly all discrete AMD GPUs from Vega onward (with the exception of MI300, because Ubuntu 24.04 shipped with ROCm 5):

    sudo apt -y install git wget hipcc libhipblas-dev librocblas-dev cmake build-essential

    # add yourself to the video and render groups
    sudo usermod -aG video,render $USER

    # reboot to apply the group changes

    # download a model
    wget --continue -O dolphin-2.2.1-mistral-7b.Q5_K_M.gguf \
        https://huggingface.co/TheBloke/dolphin-2.2.1-mistral-7B-GGUF/resolve/main/dolphin-2.2.1-mistral-7b.Q5_K_M.gguf?download=true

    # build llama.cpp
    git clone https://github.com/ggerganov/llama.cpp.git
    cd llama.cpp
    git checkout b3267
    HIPCXX=clang++-17 cmake -S. -Bbuild \
        -DGGML_HIPBLAS=ON \
        -DCMAKE_HIP_ARCHITECTURES="gfx803;gfx900;gfx906;gfx908;gfx90a;gfx1010;gfx1030;gfx1100;gfx1101;gfx1102" \
        -DCMAKE_BUILD_TYPE=Release
    make -j8 -C build

    # run llama.cpp
    build/bin/llama-cli -ngl 32 --color -c 2048 \
        --temp 0.7 --repeat_penalty 1.1 -n -1 \
        -m ../dolphin-2.2.1-mistral-7b.Q5_K_M.gguf \
        --prompt "Once upon a time"

I think this will also work on Rembrandt, Renoir, and Cezanne integrated GPUs with Linux 6.10 or newer, so you might be able to install the HWE kernel to get it working on that hardware.

With that said, users with CDNA 2 or RDNA 3 GPUs should probably use the official AMD ROCm packages instead of the built-in Ubuntu packages, as there are performance improvements for those architectures in newer versions of rocBLAS.

HarHarVeryFunny, 6 months ago
What are the limitations on which LLMs (specific transformer variants, etc.) llama.cpp can run? Does it require the input model/weights to be in some self-describing format like ONNX that supports different model architectures as long as they are built out of specific module/layer types, or does it more narrowly only support transformer models parameterized by depth, width, etc.?
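
For what it's worth, GGUF is self-describing to a degree: each file embeds its architecture name and hyperparameters as metadata, and llama.cpp only runs the architectures it has explicit implementations for. A small sketch for inspecting that metadata, assuming the gguf package from PyPI (which, as far as I know, ships a gguf-dump utility) and a local model file:

    # install llama.cpp's GGUF tooling
    pip install gguf

    # dump the embedded metadata: architecture, layer count,
    # embedding width, tokenizer, quantization type, etc.
    gguf-dump dolphin-2.2.1-mistral-7b.Q5_K_M.gguf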

nobodyandproud, 6 months ago
This was nice. I took the road less traveled and tried building on Windows and AMD.

Spoiler: Vulkan with MSYS2 was indeed the easiest to get up and running.

I actually tried w64devkit first, and it worked properly for llama-server, but there were inexplicable plug-in problems with llama-bench.

Edit: I tried w64devkit before I read this write-up and was left wondering what to try next, so the timing was perfect.
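
For reference, a minimal sketch of a Vulkan build (assuming the Vulkan SDK, shaderc, and CMake are available in the MSYS2 environment; the backend flag name has changed between llama.cpp releases):

    # configure with the Vulkan backend enabled
    cmake -S . -B build -DGGML_VULKAN=ON -DCMAKE_BUILD_TYPE=Release

    # build llama-cli, llama-server, llama-bench, ...
    cmake --build build --config Release -j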

smcleod, 6 months ago
Somewhat related: on several occasions I've come across the claim that "Ollama is just a llama.cpp wrapper", which is inaccurate and completely misses the point. I am sharing my response here to avoid repeating myself.

With llama.cpp running on a machine, how do you connect your LLM clients to it and request that a model gets loaded with a given set of parameters and templates?

... you can't, because llama.cpp is the inference engine, and its bundled llama-server binary only provides relatively basic server functionality; it's really more of a demo/example or MVP.

llama.cpp is all configured at the time you run the binary, and you manually provide it command-line args for the one specific model and configuration you start it with.

Ollama provides a server and client for interfacing and packaging models, such as:

- Hot loading models (e.g. when you request a model from your client, Ollama will load it on demand).
- Automatic model parallelisation.
- Automatic model concurrency.
- Automatic memory calculations for layer and GPU/CPU placement.
- Layered model configuration (basically Docker images for models).
- Templating and distribution of model parameters and templates in a container image.
- A near feature-complete OpenAI-compatible API, as well as its native API that supports more advanced features such as model hot loading, context management, etc.
- Native libraries for common languages.
- Official container images for hosting.
- A client/server model for running remote or local inference servers with either Ollama or OpenAI-compatible clients.
- Support for both official and self-hosted model and template repositories.
- Support for multi-modal / vision LLMs, something that llama.cpp is not focusing on providing currently.
- Support for serving safetensors models, as well as running and creating models directly from their Hugging Face model ID.

In addition to the llama.cpp engine, Ollama is working on adding additional model backends (e.g. things like exl2, awq, etc.).

Ollama is not "better" or "worse" than llama.cpp, because it's an entirely different tool.
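
To illustrate the hot-loading point above, a minimal sketch against a locally running Ollama server (default port 11434; the model name is only an example): the model is pulled into memory on first request, with no restart or command-line reconfiguration.

    # request a completion; Ollama loads the named model on demand
    curl http://localhost:11434/api/generate -d '{
      "model": "llama3.2",
      "prompt": "Why is the sky blue?",
      "stream": false
    }'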

notadoc, 6 months ago
Ollama is so easy; what's the benefit of llama.cpp?

marcantonio, 6 months ago
I set up llama.cpp last week on my M3. It was fairly simple via Homebrew. However, I constantly get tags like <|im_start|> in the output. Is there a way to filter them out with llama-server? It seems like a major usability issue if you want to use llama.cpp by itself (with the web interface).

ollama didn't have the issue, but it's less configurable.
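
Those tokens usually appear when the model's chat template isn't being applied. A hedged sketch of the usual workaround, assuming a GGUF whose built-in template is missing or wrong (the model path is a placeholder, and flag behaviour varies between llama.cpp versions):

    # explicitly select a chat template, e.g. ChatML, so special tokens
    # like <|im_start|> are consumed rather than printed verbatim
    ./llama-server -m model.gguf --chat-template chatml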

secondcoming, 6 months ago
I just gave this a shot on my laptop and it works reasonably well considering it has no discrete GPU.<p>One thing I’m unsure of is how to pick a model. I downloaded the 7B one from Huggingface, but how is anyone supposed to know what these models are for, or if they’re any good?

varispeed, 6 months ago
I use ChatGPT and Claude daily, but I can't see a use case for why I would use an LLM outside of these services.

What do you use Llama.cpp for?

I get that you can ask it a question in natural language and it will spit out a sort of answer, but what would you do with it? What do you ask it?

inLewOf, 6 months ago
Re: the temperature config option: I've found it useful for trying to generate something akin to a sampling-based confidence score for chat completions (e.g., set the temperature a bit high, run the model a few times, and calculate the distribution of responses). Otherwise I haven't figured out a good way to get confidence scores in llama.cpp (I've been tracking this git request to get log_probs: https://github.com/ggerganov/llama.cpp/issues/6423)
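
A rough sketch of that sampling-based approach with plain llama-cli (model path and prompt are placeholders): sample the same prompt several times at a higher temperature and look at how often the answers agree.

    # five samples with some randomness; consistent answers suggest
    # higher confidence, divergent answers suggest lower confidence
    for i in 1 2 3 4 5; do
        ./llama-cli -m model.gguf --temp 1.0 -n 32 \
            -p "What is the biggest planet in the solar system? Answer in one word." \
            2>/dev/null
    done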

niek_pas, 6 months ago
Can someone tell me what the advantages are of doing this over using, e.g., the ChatGPT web interface? Is it just a privacy thing?

NoZZz, 6 months ago
You can also just download LM Studio for free; it works out of the box.

nothrowaways, 6 months ago
There are many open-source alternatives to LM Studio that work just as well.