
Nvidia on NixOS WSL – Ollama up 24/7 on your gaming PC

95 points by fangpenlin, about 1 month ago

18 comments

amiantos about 1 month ago
I agree with the other commenters: this post does not explain why you would not just run Ollama or Koboldcpp on Windows. What exactly makes running Ollama inside virtualized NixOS in WSL better than running natively?

If it's just the novelty of it or some ideological reason, that's fine, but it should be explained in the blog post before someone thinks this is a sane and logical way to run Ollama on a gaming PC.
elwebmaster about 1 month ago
I would dare guess the author just doesn't know there is a perfectly functional Windows-native Ollama release. I was doing the same thing until I realized it makes no sense, because I can just install Ollama on Windows and then connect to it from within WSL.
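For reference, the connection this comment describes is just an environment variable: Ollama's CLI and most clients honor `OLLAMA_HOST`, and the server's default port is 11434. A minimal NixOS-WSL sketch, with the caveat that the host address is an assumption: `localhost` only reaches the Windows side under WSL2's mirrored networking mode; otherwise substitute the Windows host's IP:

```nix
# Sketch: point WSL-side tools at an Ollama server running natively on
# Windows. Assumes WSL2 mirrored networking, where localhost reaches the
# host; on classic NAT networking, use the Windows host's IP instead.
{
  environment.sessionVariables = {
    OLLAMA_HOST = "http://localhost:11434"; # Ollama's default port
  };
}
```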
vunderba about 1 month ago
As others have already pointed out, if you're going to run Ollama on Windows anyway, why not use the native build? And if you want to use WSL, I'd suggest something like LocalAI, which gives you a lot more control and supports additional formats (GGML, GGUF, GPTQ, ONNX, etc.).

https://github.com/mudler/LocalAI
razemio about 1 month ago
Just to answer the question of why not use Windows: here are some reasons the author might have used Nix instead:

- reproducible (with minor adjustments, even on non-WSL systems)

- if you are used to Nix, not much beats it in terms of stability, maintainability, upgradability, and fun (?)

- additional services are typically easier to set up, like tailscale-acl (used by the author), which uses Pulumi under the hood; see the sketch after this list

- despite some downsides (disk speed was an issue when I used it), WSL is surprisingly capable
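To make the "services are easier" point concrete, here is a minimal sketch of enabling a service like Tailscale declaratively on NixOS. tailscale-acl itself is a separate tool; this only shows the base service, and the firewall line is a common companion setting rather than something the comment spells out:

```nix
# Minimal sketch: declaratively enable Tailscale on NixOS. One option
# creates the daemon, user, and systemd unit; a rebuild applies it all.
{ config, ... }:
{
  services.tailscale.enable = true;

  # Let Tailscale's WireGuard traffic through the firewall.
  networking.firewall.allowedUDPPorts = [ config.services.tailscale.port ];
}
```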
magicalhippo about 1 month ago
Given that Ollama runs quite fine on Windows if you have NVIDIA, why such a complicated setup?

It would make more sense for AMD, I suppose, where Ollama's Windows support is lacking compared to Linux.

That said, these are neat tricks, useful for other stuff as well.
rrix2 about 1 month ago
it's unclear to me what you gain running ollama in WSL like this compared to switching to a productive native operating system [like NixOS] or just installing the Windows release of ollama and quietly forgetting about it.

i use NixOS-WSL at work to have the same emacs configuration as on my laptop, and that's fine, except the Windows filesystem performance makes me want to throw the whole system in a dumpster. but on my home gaming machine i have some games that only run on Windows, so i just ran ollama's Windows installer, which works with my GPU and installs an autostart entry.

these days the Windows box sits in a dark corner on my network with Tailscale (again, just the Windows install), also running Sunshine to start Steam games from my laptop.
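For anyone curious what a NixOS-WSL setup like the one mentioned above looks like, here is a minimal flake sketch. The module and options come from the nix-community/NixOS-WSL project; the nixpkgs branch and username are placeholders, not from this thread:

```nix
# flake.nix — minimal NixOS-on-WSL sketch using nix-community/NixOS-WSL.
# Branch and username are placeholders.
{
  inputs = {
    nixpkgs.url = "github:NixOS/nixpkgs/nixos-24.11";
    nixos-wsl.url = "github:nix-community/NixOS-WSL";
  };

  outputs = { self, nixpkgs, nixos-wsl, ... }: {
    nixosConfigurations.wsl = nixpkgs.lib.nixosSystem {
      system = "x86_64-linux";
      modules = [
        nixos-wsl.nixosModules.default
        {
          wsl.enable = true;         # mark this system as a WSL guest
          wsl.defaultUser = "nixos"; # placeholder login name
          system.stateVersion = "24.11";
        }
      ];
    };
  };
}
```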
emsign about 1 month ago
Running models at home seems like a waste of money while cloud inference is currently being heavily subsidized by dumb money.
Havoc about 1 month ago
Just did the opposite: decided it's time for Linux on the desktop. Better for programming. Better for AI.

The bet is that I can get most games to work on it; that was the sticking point. (Thanks to Valve, I think it'll work out.)
Carrok about 1 month ago
This is great work; it solves the exact problem I too am having. Now I just need to upgrade my 12-year-old GPUs to something that can run an LLM.
IHLayman about 1 month ago
I love the idea of this flake to run Ollama even on Windows, but just pointing people to your _everything_ flake is going to confuse them and make running Ollama on Nix look harder than it is.

If you are using a system-controlling Nix (nix-darwin, NixOS…), it's as easy as `services.ollama.enable = true`, perhaps adding `.acceleration = "cuda"` to force GPU usage or `.host = "0.0.0.0"` to allow connections to Ollama that are not local to your system. In a home-manager situation it is even easier: just include `pkgs.ollama` in your `home.packages`, with an `.override {}` for the same options above. That should be it, really.

I will say that if you have a more complex NixOS setup that patches the kernel, or you can't lean on Cachix for some reason, the ollama package takes a long time to compile. My setup at home runs on a 3950X Threadripper, and when Ollama compiles it uses all the cores at 99% for about 16 minutes.
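Spelled out as a module, the suggestion above is only a few lines. A minimal sketch using the `services.ollama` options from nixpkgs:

```nix
# Minimal sketch of the NixOS Ollama module described in the comment.
{
  services.ollama = {
    enable = true;
    acceleration = "cuda"; # force NVIDIA GPU acceleration
    host = "0.0.0.0";      # accept connections from other machines
  };
}
```

The home-manager variant passes the same choice through the package instead, e.g. `pkgs.ollama.override { acceleration = "cuda"; }` in `home.packages`, as the comment notes.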
mertleee about 1 month ago
Remove Windows, and this is amazing.
itissid about 1 month ago
I've been running Ollama and DeepSeek in a container on TrueNAS k8s for several months. It's hooked up to my Continue extension in VS Code. I also mix in cloud-hosted "dumb" models for other tasks like code completion; Ollama DeepSeek is reserved for heavier chat and code tasks.

It's fast as hell, though you will need at least two GPUs to divide between Ollama and anything else (display/game/Proxmox) that needs one.
pipyakas about 1 month ago
Nvidia never fixed their sysmem fallback policy for WSL2, though; running on WSL2 rather than native Windows just spells so many performance problems when VRAM overflows.
shmerl about 1 month ago
A gaming PC can run Linux to begin with.
WD-42 about 1 month ago
Quoting the article:

> I refused to manage a separate Ubuntu box that would need reconfiguring from scratch.

Immediately followed by:

> After hacking away at it for a number of weeks

Hmmm
rob_c about 1 month ago
Oh dear god, just use containers, or a proper OS, rather than that disk-chewing monstrosity...
DeathArrow about 1 month ago
Ollama also runs on Windows.
dopa42365 about 1 month ago
> gaming PC

> LLM

stinky