I would like to run an LLM locally so I can be absolutely sure that the data I send to it is private. Does anyone have experience doing this with the latest Mac Mini? Any insights will be very much appreciated. Thanks.
I ran a 13GB llamafile (<a href="https://simonwillison.net/2023/Nov/29/llamafile/" rel="nofollow">https://simonwillison.net/2023/Nov/29/llamafile/</a>) on an M3 without issues, with output at about 40 tokens per second.
I bought a Mac Mini M2 last year to play around with some personal projects. In my tests, LM Studio ran Mixtral models with pretty good throughput, and OpenAI's Whisper models handled transcription fine as well.<p>I do recommend upgrading the RAM, though: 8GB is barely enough as is, so get at least 16GB. (I don't recommend upgrading the SSD; thanks to Thunderbolt 4 you can attach a fast external SSD for half of what Apple charges for internal storage.)
Download LM Studio and you're done. Depending on how much RAM you have, you can run different models; check out the Mixtral 8x7B ones for generally good results. <a href="https://lmstudio.ai/" rel="nofollow">https://lmstudio.ai/</a>
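Once a model is loaded, LM Studio can also expose a local server that speaks the OpenAI chat-completions API (port 1234 by default), so everything stays on your machine. A minimal sketch using only the standard library; the model name and prompt here are placeholders, and the server must already be running for the request to succeed:

```python
import json
import urllib.request

# LM Studio's local server defaults to port 1234 and mimics the
# OpenAI /v1/chat/completions endpoint. No data leaves the machine.
BASE_URL = "http://localhost:1234/v1/chat/completions"


def build_request(prompt, model="mixtral-8x7b-instruct", temperature=0.7):
    """Construct an OpenAI-style chat-completion payload (model name is an example)."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
    }


def ask(prompt):
    """Send the prompt to the local server and return the model's reply text."""
    payload = json.dumps(build_request(prompt)).encode()
    req = urllib.request.Request(
        BASE_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["choices"][0]["message"]["content"]


if __name__ == "__main__":
    print(ask("In one sentence, why run an LLM locally?"))
```

The same client code works unchanged against any other OpenAI-compatible local runner (llamafile's server mode, for instance), so you can swap backends without touching your scripts.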