Ollama is getting some crazy vendor lock-in. Ollama exposes an OpenAI-compatible API, llama.cpp has an OpenAI-compatible API, and the various local LLM proxies all support the OpenAI API. Yet people insist on tying their products to the Ollama-specific API.

Hopefully, going forward, if you implement the full Ollama API you will at least also implement some subset of the OpenAI API, so non-Ollama tooling can work with these cool projects.
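To illustrate the point, here's a minimal sketch of what "just target the OpenAI API" looks like in practice, assuming Ollama's documented OpenAI-compatible endpoint on its default port (11434) and the official openai Python client; the model name "llama3" is just a placeholder for whatever you've pulled locally:

```python
# Minimal sketch: point the standard OpenAI client at Ollama's
# OpenAI-compatible endpoint instead of coding against the Ollama API.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # Ollama's OpenAI-compatible API
    api_key="ollama",  # required by the client, ignored by Ollama
)

response = client.chat.completions.create(
    model="llama3",  # any locally pulled model name (placeholder)
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```

The same client code should work unchanged against llama.cpp's server or any other OpenAI-compatible backend by swapping the base_url, which is exactly why targeting that API avoids the lock-in.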
> no need to run servers

> In order to use oterm you will need to have the Ollama server running

These two statements in the README read as mutually exclusive, which confused me.