Ask HN: Running Local LLMs on MacBook Pro M1 Max?

2 points by hhimanshu about 1 month ago
Hello community,

I'm experimenting with running local LLMs and would love to learn from those of you who've spent more time in this space. I'm particularly interested in performance tuning, reasoning capabilities, and enabling agent-style workflows using local models.

My setup:

• MacBook Pro (2021)
• Chip: Apple M1 Max (10-core CPU: 8 performance + 2 efficiency cores)
• GPU: Apple M1 Max (24-core GPU)
• Memory: 64GB LPDDR5 (Hynix)
• Main display: 3024x1964 Liquid Retina XDR
• External display: Dell S2721QS (3840x2160)
• Model Identifier: MacBookPro18,4

I use LM Studio, but even with quantized 7B models, my machine often slows down or hangs during inference. I'm hoping to get better performance and explore more advanced use cases.

My questions:

1. What local LLM software are you using successfully on Apple Silicon?
2. How do you improve inference speed? Any success with specific quantizations, runtimes, or swap/virtual memory tricks? (The first sketch below shows the kind of run I'm comparing against.)
3. What reasoning models have worked well for you, especially for:
   • Coding
   • Understanding specific domains (e.g., health insurance, legal, science, finance)?
4. Which local models support agent-style workflows, including:
   • Function calling / tool use
   • JSON or structured output (see the second sketch below for what I mean)
   • Multi-step planning and reasoning
5. Could you share your setup, resources, or references that helped you get started or scale up?
6. If you've built a local inference box (outside your main machine), how did you approach it? DIY build advice, parts lists, or tutorials would be hugely appreciated!

I'm learning and iterating, and would love to go deeper into running capable models offline. Any guidance would be super helpful. Thanks so much in advance!
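To make question 2 concrete, here is roughly the kind of run I'm comparing LM Studio against: a minimal sketch of 4-bit quantized inference with mlx-lm (Apple's MLX runtime), which executes on the M1 Max GPU via Metal. The model name is just an example from the mlx-community hub, not a recommendation, and parameter names may differ across mlx-lm versions:

    # pip install mlx-lm
    from mlx_lm import load, generate

    # Download (or reuse from cache) a 4-bit quantized model converted for MLX.
    # Example model name; substitute whatever you actually run.
    model, tokenizer = load("mlx-community/Mistral-7B-Instruct-v0.3-4bit")

    # verbose=True prints tokens/sec, which makes runtime comparisons easy.
    text = generate(
        model,
        tokenizer,
        prompt="Explain KV caching in one paragraph.",
        max_tokens=200,
        verbose=True,
    )
    print(text)

A 4-bit 7B model should fit comfortably in 64GB of unified memory, so a comparison like this would at least tell me whether my slowdowns come from the model or from the app around it.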
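And for question 4, this is the sort of structured-output workflow I have in mind: LM Studio exposes an OpenAI-compatible local server (by default at http://localhost:1234/v1), so the standard openai Python client can talk to whatever model is loaded. The model name below is a placeholder, and JSON-mode support depends on the server version and model:

    # pip install openai
    from openai import OpenAI

    # LM Studio's local server; the API key is unused but required by the client.
    client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

    resp = client.chat.completions.create(
        model="local-model",  # placeholder; LM Studio routes to the loaded model
        messages=[
            {"role": "system", "content": "Reply with a single JSON object only."},
            {"role": "user", "content": "Give the title and year of the first Alien film."},
        ],
        response_format={"type": "json_object"},  # JSON mode, where supported
    )
    print(resp.choices[0].message.content)

What I'd love to hear is which local models actually honor this kind of constraint reliably, and whether people instead reach for grammar-constrained decoding.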

no comments