Until not too long ago I assumed that self-hosting an LLM would come at an outrageous cost. I have a bunch of problems with LLMs in general. The major one is that all of them (even OpenAI's) will produce output that gives you a great sense of confidence, only for reality to slap you across the face later: for anything involving serious reasoning, chances are the response you got was largely bullshit. The second is that I don't entirely trust these companies with my data, be it OpenAI, Microsoft, GitHub or any other.<p>That said, a while ago there was this[1] thread on here which helped me snatch a brand new, unboxed P40 for peanuts. Really, the cost was 2 or 3 jars of good quality peanut butter. Sadly it's still collecting dust: although my workstation can accommodate it, cooling is a bit of an issue. I 3D printed a bunch of hacky vents but haven't had the time to put it all together.<p>The reason I went down this road was phi-3, which blew me away with how powerful yet compact it is. Again, I wouldn't trust it with anything big, but I have been using it to sift through piles of raw, unstructured text and extract data from it, and it's honestly done wonders. Overall, depending on your budget and your goal, running an LLM in your home lab is a very appealing idea.<p>[1] <a href="https://news.ycombinator.com/item?id=39477848">https://news.ycombinator.com/item?id=39477848</a>
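<p>For the curious, the extraction workflow can be sketched roughly like this. This assumes phi-3 is served locally by Ollama; the endpoint, model tag, and the name/email schema are just illustrative, and the salvage step exists because small models sometimes wrap their JSON in prose or code fences:

```python
import json
import urllib.request


def extract_people(raw_text: str,
                   url: str = "http://localhost:11434/api/generate") -> list:
    """Ask a locally hosted phi-3 (via Ollama) to pull structured
    records out of raw, unstructured text. Schema is illustrative."""
    prompt = (
        "Extract every person mentioned in the text below as a JSON array "
        'of objects with keys "name" and "email". Output only JSON.\n\n'
        + raw_text
    )
    payload = json.dumps({
        "model": "phi3",      # whatever tag you pulled locally
        "prompt": prompt,
        "format": "json",     # nudge Ollama toward valid JSON output
        "stream": False,
    }).encode()
    req = urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return parse_model_json(body["response"])


def parse_model_json(text: str) -> list:
    """Salvage the JSON array even if the model added chatter around it."""
    start = text.find("[")
    end = text.rfind("]")
    if start == -1 or end == -1:
        raise ValueError("no JSON array in model output")
    return json.loads(text[start:end + 1])
```

The nice part of running this against a local model is that you can hammer it with thousands of documents without watching an API bill, and none of the raw text leaves your machine.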