
Ask HN: I have $5k to spend on a local AI machine, what should I get?

17 points by siltcakes 3 months ago
I would like to run local models as large and as fast as possible for around $5,000 USD. Is an Apple machine the best choice with their shared memory or is there a particular GPU that would be more cost effective? Thanks!

11 comments

billconan 3 months ago
I guess I would buy Nvidia DIGITS: https://www.nvidia.com/en-us/project-digits/
jotux 3 months ago
Not precisely what you were looking for, but this was going around yesterday: https://rasim.pro/blog/how-to-install-deepseek-r1-locally-full-6k-hardware-software-guide/
cameron_b 3 months ago
Unpopular but highly promising way to go if training is on your mind: 4x 7900 XTX cards, plus the nuts and bolts to feed them, could be a high point in price per GB of GPU memory. There are folks using ROCm with that setup to put up some interesting numbers in terms of wall-clock time and power required per training run.
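A back-of-the-envelope sketch of that price-per-VRAM argument. The card prices below are illustrative assumptions for used-market pricing, not quotes; only the 24 GB per card figure is a hardware spec:

```python
def cost_per_gb(total_price_usd, num_cards, vram_gb_per_card):
    """Dollars per GB of aggregate GPU memory across a multi-card build."""
    return total_price_usd / (num_cards * vram_gb_per_card)

# 4x Radeon RX 7900 XTX: 24 GB VRAM each, 96 GB total.
# Assumed price of $900/card is illustrative.
amd_build = cost_per_gb(total_price_usd=4 * 900, num_cards=4, vram_gb_per_card=24)

# 2x used RTX 3090: 24 GB each, 48 GB total. Assumed $800/card.
nvidia_build = cost_per_gb(total_price_usd=2 * 800, num_cards=2, vram_gb_per_card=24)

print(f"7900 XTX build: ${amd_build:.2f}/GB")   # $37.50/GB at the assumed price
print(f"used 3090 build: ${nvidia_build:.2f}/GB")
```

The aggregate-VRAM framing only holds for workloads that shard cleanly across cards; it ignores interconnect bandwidth, which matters for training.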
monroewalker 3 months ago
This Reddit comment mentioned this site with used servers:

https://pcserverandparts.com/

https://www.reddit.com/r/LocalLLaMA/comments/1i8rujw/comment/m93j7xd/?utm_source=share&utm_medium=mweb3x&utm_name=mweb3xcss&utm_term=1&utm_content=share_button
vunderba 3 months ago
Be more specific - AI is a very broad field.

Nvidia GPUs have the best inference speed (particularly around SDXL, Hunyuan, Flux, etc.), but unless you're buying several used 3090s SLI-style, you're going to have to split larger LLM GGUFs across main memory and GPU. I'm excluding the RTX 5090 since two of them (plus tax) would basically blow your budget out of the water.

With Apple I *think* you can get up to 192GB of shared memory, allowing for very large LLMs.

Another thing is your experience. Unless you want to shell out even more money, you'll likely have to build the PC. It's not hard, but it's definitely more work than just grabbing a Mac Studio from the nearest Apple Store.
PaulHoule 3 months ago
I just got a Mac Mini with maximum specs (can't believe how small the box it came in was!) and that's not a bad choice. As you say, it has the advantage of handling large models. I think the 5090 will outperform it in terms of FLOPS, but it only comes with 32GB compared to the 64GB you can get on an M4 mini. The 5090 itself will be $2,000 (if you can get it at that price) compared to the $2,500 maxed-out M4 mini. You'll probably spend at least $1k for the rest of a PC worthy of the 5090 card.
lulznews 3 months ago
More info needed, but for $5k you can get a near-maxed MacBook Pro that should be able to handle most models, perform pretty well, and also serve as your laptop, so it's not a pure AI box. Then if you have heavier needs, go to the cloud.
eatenbyagrue 3 months ago
Here's a nice-looking DeepSeek build: https://x.com/carrigmat/status/1884244369907278106?mx=2
giardini 3 months ago
https://www.amazon.com/Yassk-Fortune-Telling-Floating-Answers/dp/B0B6VXJMMW

AI in the palm of your hand! Best deal evarrr!
mikewarot 3 months ago
I used to think GPUs were the way to go, but now my goal is to get a used server with a terabyte of RAM so I can run the full-size DeepSeek R1.
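For scale, a rough estimate of what full-size R1 needs in weights alone, using simple bytes-per-parameter arithmetic. Real GGUF files carry extra metadata, and serving needs additional room for the KV cache, so treat these as lower bounds:

```python
def model_ram_gb(n_params_billion, bits_per_param):
    """Approximate resident size of model weights alone, in GB."""
    return n_params_billion * 1e9 * bits_per_param / 8 / 1e9

# DeepSeek R1 has roughly 671B total parameters.
for bits in (16, 8, 4):
    print(f"{bits}-bit: ~{model_ram_gb(671, bits):.0f} GB")
```

At 16-bit the weights alone exceed a terabyte, which is why these builds target 8-bit or 4-bit quantizations; a 1 TB box covers those with room left for context.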
throwaway519 3 months ago
Why get an Apple? Even the keyboard lacks required keys for development. They're purely tech bro poser machines.