I noticed that it's now practical to run LLMs offline on a personal computer, but every solution I found required installing dependencies on the host, so I built a containerized solution that makes it easy to swap out the model in use: <a href="https://github.com/paolo-g/uillem">https://github.com/paolo-g/uillem</a>
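
To give a feel for the general pattern (this is a rough sketch, not uillem's actual interface; the image name, port, and model tag are all assumptions for illustration), the idea is that the inference runtime lives in a container and the model is just a parameter, so swapping models never touches the host. Something like this with Docker's Python SDK:

  # Hypothetical sketch, NOT uillem's actual setup: the image, port,
  # and model below are placeholder assumptions.
  import docker

  MODEL = "llama3"  # swapping models means changing this one value

  client = docker.from_env()
  runtime = client.containers.run(
      "ollama/ollama",                      # assumed containerized LLM runtime
      detach=True,
      name="local-llm",
      ports={"11434/tcp": 11434},           # expose the runtime's HTTP API
      volumes={"llm-models": {"bind": "/root/.ollama", "mode": "rw"}},
  )
  runtime.exec_run(["ollama", "pull", MODEL])  # fetch weights into the volume

Because the weights live in a named volume rather than being baked into the image, trying a different model is a pull rather than a rebuild, and the host itself never accumulates dependencies.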