Not sure what to do next. I'm building a home setup to test various LLMs and gain hands-on experience with custom-built gear. I want to run Llama 3.1, Hermes 3, and Qwen 2.5 Coder.<p>My specs:<p>- Intel Core i7 13700<p>- RTX 4090<p>- 1TB NVMe SSD<p>Any other options I should be aware of? And how much did you invest in your current setup?
Check out <a href="https://www.reddit.com/r/LocalLLaMA/" rel="nofollow">https://www.reddit.com/r/LocalLLaMA/</a> if you want to socialise with other people doing this and see their setups, etc :)
<p><pre><code> > Not sure what to do next.
</code></pre>
What do you mean? Set up the models themselves on your machine, and do whatever you wanted to do with them. If you hit any bottlenecks along the way, address them then.
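Since the question names specific models and a 24 GB card, a quick back-of-envelope check of what fits in VRAM can guide which quantizations to try. This is a rough sketch only: the bytes-per-parameter figures and the flat 2 GB allowance for KV cache and runtime overhead are assumptions, not measured numbers.

```python
# Back-of-envelope VRAM estimate for running a local LLM on a single GPU.
# Assumed figures (not measured): 24 GB card (RTX 4090), ~8B parameters
# for Llama 3.1 8B, and ~2 GB flat overhead for KV cache and runtime.

def fits_in_vram(params_billion, bytes_per_param, vram_gb=24.0, overhead_gb=2.0):
    """Return (estimated_gb, fits) for model weights at a given quantization."""
    weights_gb = params_billion * bytes_per_param  # 1B params * 1 byte ~ 1 GB
    total_gb = weights_gb + overhead_gb
    return total_gb, total_gb <= vram_gb

# Rough bytes-per-parameter for common formats (approximate):
for label, bpp in [("FP16", 2.0), ("Q8_0", 1.0), ("Q4_K_M", 0.55)]:
    total, ok = fits_in_vram(8.0, bpp)
    print(f"Llama 3.1 8B {label}: ~{total:.1f} GB -> {'fits' if ok else 'too big'}")
```

By this estimate an 8B model fits comfortably at FP16, while a 70B model would need aggressive quantization plus CPU offload; actual usage also grows with context length, so treat these as lower bounds.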