
Ask HN: How can I experiment with LLMs with an old machine?

5 points | by hedgehog0 | 8 months ago
Dear all,

Recently I purchased "[Build a Large Language Model (From Scratch)](https://www.manning.com/books/build-a-large-language-model-from-scratch)" by Sebastian Raschka, so that I could learn more about how to build and/or fine-tune an LLM, and even develop some applications with them. I have also been skimming and reading on this sub for several months, and have seen many interesting developments that I would like to follow and experiment with.

However, there is a problem: the machine I have is a very old MacBook Pro from 2011, and I probably won't be able to afford a new one until I'm in graduate school next year. So I was wondering: other than getting a new machine, what other (online/cloud) alternatives or options could I use to experiment with LLMs?

Many thanks!

3 comments

LargoLasskhyfv | 8 months ago
Make yourself comfortable with

https://blogs.oracle.com/database/post/freedom-to-build-announcing-oracle-cloud-free-tier-with-new-always-free-services-and-always-free-oracle-autonomous-database

https://gist.github.com/rssnyder/51e3cfedd730e7dd5f4a816143b25dbd

https://www.reddit.com/r/oraclecloud/

or any other offer.

Deploy some minimal Linux on them, or use what's offered.

Plus optionally, if you don't want to instantly start coding from first principles/scratch, make use of established and excellent solutions, like

https://future.mozilla.org/builders/news_insights/introducing-llamafile/

https://ai-guide.future.mozilla.org/content/running-llms-locally/

https://github.com/mozilla-Ocho/llamafile

https://justine.lol/matmul/

and parallelize them with

https://github.com/b4rtaz/distributed-llama

Obviously this needs some knowledge of the command line, so get a good terminal emulator like

https://iterm2.com/

Mend, bend, rend that stuff and see what works how and why, and what not.

Edit: Optionally, if you really want to go low-level, use some debugger like

https://justine.lol/blinkenlights/

for 'toy installations' of the smallest models.

'Toy' because that doesn't fully support the CPU instructions which are used in production.

Could still help conceptually.
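[Editor's note] To make the llamafile suggestion above concrete, here is a minimal Python sketch of querying a llamafile running in server mode from a client machine. It assumes a llamafile has already been started locally (or on one of the free cloud instances mentioned) and is listening on llama.cpp's default port 8080 with its OpenAI-compatible chat endpoint; the model name and prompt are placeholders, not details from the thread.

```python
# Minimal sketch: query a locally running llamafile server.
# Assumes the server is listening on the default port 8080 and exposes
# the OpenAI-compatible /v1/chat/completions endpoint.
import json
import urllib.request

payload = {
    "model": "local-model",  # placeholder; llamafile serves whatever model it was packaged with
    "messages": [
        {"role": "user", "content": "Explain attention in one paragraph."}
    ],
    "temperature": 0.7,
}

req = urllib.request.Request(
    "http://localhost:8080/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

# Send the request and print the assistant's reply.
with urllib.request.urlopen(req) as resp:
    reply = json.loads(resp.read())

print(reply["choices"][0]["message"]["content"])
```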
roosgit | 8 months ago
I've never used it, but I think Google Colab has a free plan.

As another option, you can rent a machine with a decent GPU on vast.ai. An Nvidia 3090 can be rented for about $0.20/hr.
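[Editor's note] If you go the Colab or vast.ai route, the first thing worth checking is that the rented GPU is actually visible to your framework. A minimal sketch, assuming PyTorch is available in the environment (it is preinstalled on Colab); nothing here is specific to the thread:

```python
# Minimal sketch: confirm the GPU on a Colab / vast.ai instance is usable.
import torch

if torch.cuda.is_available():
    device = torch.device("cuda")
    name = torch.cuda.get_device_name(0)
    total_gb = torch.cuda.get_device_properties(0).total_memory / 1024**3
    print(f"GPU: {name}, {total_gb:.1f} GiB VRAM")
else:
    device = torch.device("cpu")
    print("No GPU visible; falling back to CPU")

# Quick sanity check that tensors actually run on the chosen device.
x = torch.randn(1024, 1024, device=device)
y = x @ x
print(y.shape, y.device)
```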
tarun_anand | 8 months ago
I am on a 2019 Mac and finding it difficult too.

Best bet would be to start with a small language model?
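[Editor's note] Following the "start small" suggestion, here is a hedged sketch of generating text with a small model entirely on CPU using Hugging Face transformers. The model choice (distilgpt2) is an illustrative assumption, not something recommended in the thread; any small causal LM works the same way.

```python
# Minimal sketch: run a small language model on CPU only.
# Assumes `pip install transformers torch` has been done beforehand.
from transformers import pipeline

# device=-1 forces CPU; distilgpt2 (~82M parameters) is an illustrative choice.
generator = pipeline("text-generation", model="distilgpt2", device=-1)

out = generator(
    "Large language models can run on modest hardware if",
    max_new_tokens=40,
    do_sample=True,
    temperature=0.8,
)
print(out[0]["generated_text"])
```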