Hello Hacker News, Yagil here — founder and original creator of LM Studio (now built by a team of 6!). I had the initial idea to build LM Studio after seeing the OG LLaMA weights ‘leak’ (<a href="https://github.com/meta-llama/llama/pull/73/files">https://github.com/meta-llama/llama/pull/73/files</a>) and then later trying to run some TheBloke quants during the heady early days of ggerganov/llama.cpp. In my notes LM Studio was first “Napster for LLMs,” which evolved later to “GarageBand for LLMs”.<p>What LM Studio is today is an IDE / explorer for local LLMs, with a focus on format universality (e.g. GGUF) and data portability (you can go to the file explorer and edit everything). The main aim is to give you an accessible way to work with LLMs and make them useful for your purposes.<p>Folks point out that the product is not open source. However, I think we facilitate distribution and usage of openly available AI and empower many people to partake in it, while protecting (in my mind) the business viability of the company. LM Studio is free for personal experimentation, and we ask businesses to get in touch to buy a business license.<p>At the end of the day, LM Studio is intended to be an easy yet powerful tool for doing things with AI without giving up personal sovereignty over your data. Our computers are super capable machines, and everything that can happen locally w/o the internet, should. The app has no telemetry whatsoever (you’re welcome to monitor network connections yourself) and it can operate offline after you download or sideload some models.<p>0.3.0 is a huge release for us. We added (naïve) RAG, internationalization, UI themes, and set up foundations for major releases to come.
Everything underneath the UI layer is now built using our SDK which is open source (Apache 2.0): <a href="https://github.com/lmstudio-ai/lmstudio.js">https://github.com/lmstudio-ai/lmstudio.js</a>. Check out specifics under packages/.<p>Cheers!<p>-Yagil
In some brief testing, I discovered that the same models (Llama 3 7B and one more I can't remember) are running MUCH slower in LM Studio than in Ollama on my MacBook Air M1 2020.<p>Has anyone found the same thing, or was that a fluke and I should try LM Studio again?
I originally started out with LM Studio, which was pretty nice, but I ended up switching to Ollama since I only want to use one app to manage all the large model downloads, and there are many more tools and plugins that integrate with Ollama, e.g. in IDEs and text editors.
I never could get anything local working a few years ago, and someone on Reddit told me about LM Studio, and I finally managed to "run an AI" on my machine. Really cool, and now I'm tinkering with it using the built-in HTTP server.
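For anyone else tinkering with the built-in server: LM Studio exposes an OpenAI-compatible HTTP API on localhost. Here's a minimal Python sketch using only the standard library, assuming the server's common default port (1234) and the `/v1/chat/completions` route — check the server tab in the app for your actual address before running it:

```python
import json
import urllib.request

# Assumption: LM Studio's local server is running on its default port (1234)
# and serves the OpenAI-compatible /v1/chat/completions route.
ENDPOINT = "http://localhost:1234/v1/chat/completions"

def build_chat_request(user_message: str, temperature: float = 0.7) -> dict:
    """Build an OpenAI-style chat-completion payload for the local server."""
    return {
        "messages": [{"role": "user", "content": user_message}],
        "temperature": temperature,
    }

def ask(message: str) -> str:
    """Send one chat turn to the local server and return the reply text."""
    req = urllib.request.Request(
        ENDPOINT,
        data=json.dumps(build_chat_request(message)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # The response mirrors OpenAI's schema: first choice, message content.
    return body["choices"][0]["message"]["content"]

# Example (only works while the server is running with a model loaded):
# print(ask("Say hello in one word."))
```

Because the API mirrors OpenAI's schema, most existing OpenAI client code can be pointed at the local server with just a base-URL change.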
LM Studio is great, although I wish recommended prompts were part of the data of each LLM. I probably just don't know enough, but I feel like I get a hunk of magic data and then I'm mostly on my own.<p>Similarly with images: LLMs and ML in general feel like the DOS and config.sys and autoexec.bat and QEMM days.
Does anyone know if there's a changelog/release notes available for <i>all</i> historical versions of this? This is one of those programs with the annoying habit of surfacing only the list of changes in the most recent version, and their release cadence is such that there are some 3 to 5 updates between the times I run it, and then I have no idea what changed.
I LOVE LM Studio. It's super convenient for testing model capabilities, and the OpenAI-compatible server makes it really easy to spin up a server and test. My typical process is to load a model in LM Studio, test it, and when I'm happy with the settings, move to vLLM.
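That workflow is smooth because both backends speak the same OpenAI-style API: when moving from LM Studio to vLLM, the client code stays identical and only the base URL changes. A small sketch of that idea — the ports below are the common defaults and are assumptions, so adjust them to your own setup:

```python
# Both LM Studio and vLLM serve an OpenAI-compatible API, so the same client
# code works against either; only the base URL differs per backend.
# Ports are the usual defaults (assumptions; check your own config).
BACKENDS = {
    "lmstudio": "http://localhost:1234/v1",
    "vllm": "http://localhost:8000/v1",
}

def chat_completions_url(backend: str) -> str:
    """Return the chat-completions endpoint for the chosen backend."""
    return BACKENDS[backend] + "/chat/completions"
```

With the official `openai` Python client you'd pass the base URL via its `base_url` parameter and keep the rest of the test script unchanged when switching backends.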
Yesterday I wanted to find a conversation snippet in ChatGPT of a conversation I had maybe 1 or 2 weeks ago. Searching for a single keyword would have been enough to find it.<p>How is it possible that there's still no way to search through your conversations?
Question for everyone: I am using the MLX version of Flux to generate really good images from text on my M2 Mac, but I don’t have an easy setup for doing text + base image to a new image. I want to be able to use base images of my family and put them on Mount Everest, etc.<p>Does anyone have a recommendation?<p>For context: I have almost ten years experience with deep learning, but I want something easy to set up in my home M2 Mac, or Google Colab would be OK.
Cool. It's a bit weird that the Windows download is 32-bit; it should be 64-bit by default, and there's no need for a 32-bit Windows version at all.
Been using LM Studio for months on Windows. It's so easy to use: simple install, just search for the LLM from Hugging Face and it downloads and just works. I don't need to set up a Python environment in conda; it's way easier for people to play and enjoy. It's what I tell people who want to start enjoying LLMs without the hassle.
I filed a GitHub issue two weeks ago about a bug that was enough for me to put it down for a bit, and there hasn't even been a response. Their development velocity seems incredible, though. I'm not sure what to make of it.
If you're hopping between these products instead of learning and understanding how inference works under the hood, and familiarizing yourself with the leading open source projects (e.g. llama.cpp), you are doing yourself a great disservice.