
Ask HN: Cheaper or similar setup like Asus ROG G16 for local LLM development?

3 points | by hedgehog0 | about 1 year ago
Hello everyone,

I'm a math graduate student in Germany, and recently I've become interested in developing local and/or web apps with LLMs. I have a 12-year-old MacBook Pro, so I'm thinking about buying something new.

I have searched relevant keywords here, and the "universal suggestion" seems to be that one should use a laptop to access GPUs in the cloud, instead of running training and/or inference on the laptop itself.

Someone mentioned that the [ASUS ROG G16](https://www.amazon.de/Anti-Glare-Display-i9-13980HX-Windows-Keyboard/dp/B0BZTJKZ5L/) or the G14/G15 can be a good local setup for running small models. I can probably afford this, but it's still slightly more expensive than I had expected.

Given that a 3060 is around 300 euros, I was wondering whether a cheaper solution would be to build a PC myself. If so, how much do you think it would cost? I'll probably move to a new place in the fall semester, so I would like something portable, or at least not too heavy, if possible.

Thank you very much for your time!

1 comment

FlyingAvatar | about 1 year ago
Running models on an M1 Mac with 32 GB+ of RAM works very well: the CPU and GPU share memory, so you can run some really significant models with it.

Earlier this year, I also went down the path of looking into building a machine with dual 3090s. Doing it for <$1,000 is fairly challenging once you add the case, motherboard, CPU, RAM, etc.

What I ended up doing was getting a used rackmount server capable of handling dual GPUs, plus two NVIDIA Tesla P40s.

Examples: https://www.ebay.com/itm/284514545745?itmmeta=01HRJZX097EGBPF60J0VTJJVXY https://www.ebay.com/itm/145655400112?itmmeta=01HRJZXK512Y3N26A8N2YDPX2X

The total here was ~$600, and there was essentially no effort in building/assembling the machine, except that I needed to order some Molex power adapters, which were cheap.

The server is definitely compact, but it can get LOUD under heavy load, so that might be a consideration.

It's probably not the right machine for training models, but it runs inference on GGUF (using ollama) quite well. I have been running Mixtral at zippy token rates and smaller models even faster.
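
Not from the original comment, but for anyone wondering what "running inference on GGUF using ollama" looks like in practice, here is a minimal Python sketch against Ollama's local REST API. It assumes an Ollama server is running on the default port 11434 and that the mixtral model has already been pulled; swap in whatever GGUF model you actually have.

```python
# Minimal sketch: query a local Ollama server that serves a GGUF model (e.g. Mixtral).
# Assumes `ollama serve` is running on the default port 11434 and that the model
# has already been pulled, e.g. with `ollama pull mixtral`.
import json
import urllib.request

def generate(prompt: str, model: str = "mixtral") -> str:
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(generate("In one sentence, what is a GGUF model file?"))
```

With `stream` set to `False`, the server returns a single JSON object whose `response` field holds the full completion; leaving streaming on would instead return one JSON object per generated chunk.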