Ask HN: What is your local LLM setup?

10 points by anditherobot 7 months ago

6 comments

sandwichsphinx 7 months ago
For local large language models, my current setup is Ollama running on my M1 Mac Mini with 8GB of RAM, using whatever SOTA 8B model comes out. I used to have a more powerful workstation I built in 2016 with three GTX 1070s, but the capacitors were falling off, and I could not justify replacing it when Claude and ChatGPT subscriptions are more than enough for me. I plan on building a new dedicated workstation as soon as the first-mover disadvantage comes down. Today's hardware is still too early and too expensive to warrant any significant personal investment, in my opinion.
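For reference, a minimal sketch of how a setup like this can be queried from Python, assuming Ollama's default local API on port 11434 and an illustrative 8B model tag (llama3.1:8b):

```python
import requests

# Query a local Ollama server; assumes `ollama pull llama3.1:8b` has already
# been run and the Ollama service is listening on its default port 11434.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3.1:8b",  # illustrative 8B model tag
        "prompt": "Summarize the trade-offs of running LLMs locally.",
        "stream": False,         # return one JSON response instead of a stream
    },
    timeout=300,
)
resp.raise_for_status()
print(resp.json()["response"])
```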
ActorNightly 7 months ago
At work, we have access to AWS Bedrock, so we use that.

At home, I did the math, and it's cheaper for me to buy credits for OpenAI and use GPT-4 than to invest in graphics cards. I use maybe 5 dollars a month, max.
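As a rough sketch of that pay-as-you-go approach, a call with the official OpenAI Python SDK might look like the following, assuming an OPENAI_API_KEY environment variable is set (model name and prompt are illustrative):

```python
from openai import OpenAI

# The client reads OPENAI_API_KEY from the environment; each request is
# billed against prepaid credits, so light usage stays at a few dollars/month.
client = OpenAI()

completion = client.chat.completions.create(
    model="gpt-4",  # illustrative; the comment refers to GPT-4-class models
    messages=[
        {"role": "user", "content": "Explain memory bandwidth vs. compute for LLM inference."}
    ],
)
print(completion.choices[0].message.content)
```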
roosgit 7 months ago
I have a separate PC that I access through SSH. I recently bought a GPU for it; before that I was running it on CPU alone.

- B550MH motherboard
- Ryzen 3 4100 CPU
- 32GB (2x16) RAM cranked up to 3200MHz (prompt generation is memory bound)
- 256GB M.2 NVMe (helps with loading models faster)
- Nvidia 3060 12GB

Software-wise, I use llamafile because on the CPU it's 10-20% faster at prompt processing than llama.cpp.

Performance with "Meta-Llama-3.1-8B-Instruct-Q4_K_M.gguf":

CPU-only: 23.47 t/s (processing), 8.73 t/s (generation)
GPU: 941.5 t/s (processing), 29.4 t/s (generation)
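A rough sketch of how throughput numbers like these can be measured, assuming llamafile (or llama.cpp) is running in server mode with its OpenAI-compatible endpoint on localhost:8080 (port, path, and model name are assumptions, not taken from the comment):

```python
import time
import requests

# Crude tokens/sec measurement against a local llamafile/llama.cpp server.
# Assumes the server exposes the OpenAI-compatible chat endpoint below.
URL = "http://localhost:8080/v1/chat/completions"

start = time.time()
resp = requests.post(
    URL,
    json={
        "model": "Meta-Llama-3.1-8B-Instruct-Q4_K_M",  # typically ignored by single-model servers
        "messages": [{"role": "user", "content": "Write a paragraph about GPUs."}],
        "max_tokens": 256,
    },
    timeout=600,
)
resp.raise_for_status()
elapsed = time.time() - start

# Note: this lumps prompt processing and generation together, so it
# understates pure generation speed compared with the numbers above.
completion_tokens = resp.json()["usage"]["completion_tokens"]
print(f"{completion_tokens} tokens in {elapsed:.1f}s -> {completion_tokens / elapsed:.1f} t/s")
```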
lysace 7 months ago
Is anyone doing a local Copilot? What's your setup? Is it competitive with GitHub Copilot?

I just realized that my 32 GB Mac M2 Max Studio is pretty good at running relatively large models using Ollama. And there's the Continue.dev VS Code plugin that can use it, but I feel that the suggested defaults aren't very optimal for this config.
[Comment #41848012 not loaded]
p1esk 7 months ago
8xA6000
talldayo 7 months ago
RTX 3070