Hey HN,

I've recently started running local LLMs, and one problem I ran into was inconsistent information about whether a given model will fit in a certain amount of VRAM.

I created a simple calculator that determines whether a model can run on your hardware. It also tells you how much VRAM you need at different quantization levels.

It doesn't work with all models yet, but I'm working on building a more stable dataset to pull from.

Feedback is appreciated!
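
For the curious, here is a rough sketch of the kind of back-of-the-envelope estimate involved (not necessarily the exact formula the calculator uses): weights take roughly params × bits/8 bytes, plus some headroom for the KV cache and runtime overhead. The overhead factor below is an illustrative assumption.

    # Rough, illustrative VRAM estimate -- not necessarily what the calculator does.
    def estimate_vram_gb(params_billion, quant_bits, overhead_factor=1.2):
        """Estimate VRAM needed to run a model at a given quantization.

        params_billion: model size in billions of parameters (e.g. 7 for a 7B model)
        quant_bits: bits per weight (16 for fp16, 8 for Q8, 4 for Q4, ...)
        overhead_factor: rough multiplier for KV cache, activations, and runtime overhead
        """
        weight_gb = params_billion * 1e9 * quant_bits / 8 / 1e9  # bytes -> GB
        return weight_gb * overhead_factor

    # Example: a 7B model at 4-bit quantization
    print(f"{estimate_vram_gb(7, 4):.1f} GB")  # ~4.2 GB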