Dear all,<p>Recently I purchased "[Build a Large Language Model (From Scratch)](https://www.manning.com/books/build-a-large-language-model-from-scratch)" by Sebastian Raschka, so that I could learn more about how to build and/or fine-tune an LLM, and even develop some applications with them. I have also been skimming and reading on this sub for several months, and have witnessed many interesting developments that I would like to follow and experiment with.<p>However, there is a problem: my machine is a very old MacBook Pro from 2011, and I probably won't be able to afford a new one until I'm in graduate school next year. So I was wondering: other than getting a new machine, what (online/cloud) alternatives and/or options could I use to experiment with LLMs?<p>Many thanks!
Make yourself comfortable with<p><a href="https://blogs.oracle.com/database/post/freedom-to-build-announcing-oracle-cloud-free-tier-with-new-always-free-services-and-always-free-oracle-autonomous-database" rel="nofollow">https://blogs.oracle.com/database/post/freedom-to-build-announcing-oracle-cloud-free-tier-with-new-always-free-services-and-always-free-oracle-autonomous-database</a><p><a href="https://gist.github.com/rssnyder/51e3cfedd730e7dd5f4a816143b25dbd" rel="nofollow">https://gist.github.com/rssnyder/51e3cfedd730e7dd5f4a816143b25dbd</a><p><a href="https://www.reddit.com/r/oraclecloud/" rel="nofollow">https://www.reddit.com/r/oraclecloud/</a><p>or any other free-tier offer.<p>Deploy some minimal Linux on them, or use what's offered.<p>Optionally, if you don't want to start coding from first principles/scratch right away, make use of established and excellent solutions like<p><a href="https://future.mozilla.org/builders/news_insights/introducing-llamafile/" rel="nofollow">https://future.mozilla.org/builders/news_insights/introducing-llamafile/</a><p><a href="https://ai-guide.future.mozilla.org/content/running-llms-locally/" rel="nofollow">https://ai-guide.future.mozilla.org/content/running-llms-locally/</a><p><a href="https://github.com/mozilla-Ocho/llamafile">https://github.com/mozilla-Ocho/llamafile</a><p><a href="https://justine.lol/matmul/" rel="nofollow">https://justine.lol/matmul/</a><p>and parallelize them across machines with<p><a href="https://github.com/b4rtaz/distributed-llama">https://github.com/b4rtaz/distributed-llama</a><p>Obviously this requires some knowledge of the command line, so get a good terminal emulator like<p><a href="https://iterm2.com/" rel="nofollow">https://iterm2.com/</a><p>Mend, bend, and rend that stuff, and see what works, how and why, and what doesn't.<p>Edit: Optionally, if you really want to go low-level, step through 'toy installations' of the smallest models with a debugger like<p><a href="https://justine.lol/blinkenlights/" rel="nofollow">https://justine.lol/blinkenlights/</a><p>'Toy' because it doesn't fully support the CPU instructions which are used in production.<p>It could still help conceptually.
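The llamafile route above boils down to a few commands. A minimal sketch; the model file name is an example taken from the project's README and may have changed, so treat it as a placeholder for whatever .llamafile you pick:

```shell
#!/bin/sh
# Each .llamafile bundles model weights plus a portable runtime in a
# single executable that runs on macOS, Linux, and more.
MODEL=llava-v1.5-7b-q4.llamafile
URL="https://huggingface.co/Mozilla/llava-v1.5-7b-llamafile/resolve/main/$MODEL"

# Download once; -L follows redirects.
[ -f "$MODEL" ] || curl -L -o "$MODEL" "$URL"

if [ -f "$MODEL" ]; then
  chmod +x "$MODEL"
  # With no arguments it starts a local chat web UI; --help lists the
  # command-line options instead of launching the server.
  ./"$MODEL" --help
else
  echo "download failed; check your network or pick another .llamafile"
fi
```

The same binary is meant to run unchanged on your laptop and on a free cloud VM, so you can prototype locally and move up when you need more memory or cores.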
I've never used it, but I think Google Colab has a free plan.<p>As another option, you can rent a machine with a decent GPU on vast.ai. An NVIDIA RTX 3090 can be rented for about $0.20/hr.
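Whichever route you take, the first thing to check on the instance is which GPU you actually got, and the hourly rate makes rough budgeting easy. A sketch (the ~$0.20/hr figure is from above and varies with host and demand; in a Colab notebook, prefix shell commands with "!"):

```shell
# Confirm a GPU is attached and see its name and VRAM:
nvidia-smi || echo "no NVIDIA GPU visible on this machine"

# Back-of-envelope budgeting at ~$0.20/hr for a rented RTX 3090:
RATE_CENTS=20      # price per GPU-hour, in cents
BUDGET_CENTS=2000  # a $20 budget
echo "$((BUDGET_CENTS / RATE_CENTS)) GPU-hours for \$20"  # prints "100 GPU-hours for $20"
```

Rented instances are ephemeral, so save checkpoints and outputs off the machine before the session ends.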