Are these libraries for connecting to an Ollama service that the user has already installed, or do they work without the user installing anything? Sorry for not checking the code, but maybe someone here has the same question.

I looked at using Ollama when I started making FreeChat [0] but couldn't figure out a way to make it work without asking the user to install it first (I think I asked in your Discord at the time). I wanted FreeChat to be a 1-click install from the Mac App Store, so I ended up bundling the llama.cpp server instead, which it runs on localhost for inference. At some point I'd love to swap it out for Ollama and take advantage of all the cool model-pulling stuff you guys have done; I just need it to be embeddable.

My ideal setup would be importing an Ollama package in Swift which would start the server if the user doesn't already have it running. I know this is just JS and Python to start, but a dev can dream :)

Either way, congrats on the release!

[0]: https://github.com/psugihara/FreeChat
I posted about the Python library a few hours after release. Great experience.
Easy, fast, and it works well.

I created a Gist with a quick-and-dirty way of generating a dataset for fine-tuning the Mistral model using the Instruction Format on a given topic: https://gist.github.com/ivanfioravanti/bcacc48ef68b02e9b7a4034161824287
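Not the Gist itself, but roughly the shape of it as a minimal sketch with the Python library; the topic, model, and output file here are placeholders:

```python
# Sketch: ask a local Mistral for instruction/response pairs on a topic and
# append them to a JSONL file for later fine-tuning.
import json
import ollama

TOPIC = "home espresso brewing"  # placeholder topic

for _ in range(10):
    prompt = (
        f"Write one question a user might ask about {TOPIC}, then answer it. "
        "Reply as JSON with the keys 'instruction' and 'response'."
    )
    result = ollama.generate(model="mistral", prompt=prompt, format="json")
    pair = json.loads(result["response"])
    with open("dataset.jsonl", "a") as f:
        f.write(json.dumps(pair) + "\n")
```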
An off-topic question: is there such a thing as a "small-ish language model"? A model that you could simply give instructions / "capabilities" to, and that a user can interact with. Almost Siri-level intelligence.

Imagine you have an API endpoint where you can set the level of some lights, and you give the chat a system prompt explaining how to build the JSON body of the request, and the user can prompt it with stuff like "Turn off all the lights" or "Make it bright in the bedroom" etc.

How low could the memory consumption of such a model be? We don't need to store who the first Kaiser of Germany was, "just" enough to kinda map human speech onto available APIs.
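A sketch of what I mean, using the new Python library; the model name, JSON schema, and light API endpoint are all made up:

```python
# Sketch: a small local model mapping a spoken-style command onto a JSON body
# for a (hypothetical) lights API.
import json
import ollama

SYSTEM = (
    "You control lights over a local HTTP API. Reply ONLY with JSON of the "
    'form {"room": "<room name or \'all\'>", "brightness": <0 to 100>}.'
)

reply = ollama.chat(
    model="phi",  # placeholder: any small instruct model pulled via `ollama pull`
    messages=[
        {"role": "system", "content": SYSTEM},
        {"role": "user", "content": "Make it bright in the bedroom"},
    ],
    format="json",
)
body = json.loads(reply["message"]["content"])
print(body)  # hopefully something like {"room": "bedroom", "brightness": 100}
# requests.post("http://lights.local/api/state", json=body)  # hypothetical endpoint
```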
Not directly related to what Ollama aims to achieve, but I'll ask nevertheless.

Local LLMs are great! But they would be more useful once we can _easily_ throw our own data at them to use as a reference or even as a source of truth. This is where they open doors that a closed system like OpenAI cannot: I'm never going to upload some data to ChatGPT for them to train on.

Could Ollama make it easier and standardize the way to add documents to local LLMs?

I'm not talking about uploading one image or model and asking a question about it. I'm referring to pointing it at a repository of 1000 text files and asking the LLM questions based on their contents.
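Today I'd have to hand-roll something like the sketch below; standardizing that is what I'm asking for. Model names and the folder path are placeholders, and a real setup would chunk files and cache the embeddings:

```python
# Sketch: embed every text file with a local embedding model, pick the files
# closest to the question, and stuff them into the prompt as context.
import glob
import ollama


def embed(text):
    return ollama.embeddings(model="nomic-embed-text", prompt=text)["embedding"]


def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return dot / norm


docs = []
for path in glob.glob("notes/*.txt"):
    text = open(path, encoding="utf-8").read()
    docs.append((path, text, embed(text)))

question = "What did we decide about the Q3 budget?"
q = embed(question)
top = sorted(docs, key=lambda d: cosine(q, d[2]), reverse=True)[:3]

context = "\n\n".join(text for _, text, _ in top)
answer = ollama.chat(
    model="llama2",
    messages=[{"role": "user", "content": f"{context}\n\nQuestion: {question}"}],
)
print(answer["message"]["content"])
```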
Used ollama as part of a bash pipeline for a tiny throwaway app.

It blocks until there is something on the mic, then sends the WAV to whisper.cpp, which then sends it to llama, which picks out a structured "remind me" object from it, which gets saved to a text file.
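The llama step looks roughly like this if you redo it with the Python library instead of curl in the pipeline; the schema and model name are placeholders, not what I actually used:

```python
# Sketch: turn a whisper.cpp transcript into a structured "remind me" object
# and append it to a text file.
import ollama

transcript = "remind me to call the dentist tomorrow at nine"

reply = ollama.chat(
    model="llama2",
    messages=[{
        "role": "user",
        "content": (
            "Extract a reminder from this transcript as JSON with the keys "
            f"'task' and 'when':\n{transcript}"
        ),
    }],
    format="json",
)

with open("reminders.txt", "a") as f:
    f.write(reply["message"]["content"].strip() + "\n")
```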
I love the ollama project. Having a local llm running as a service makes sense to me. It works really well for my use.<p>I’ll give this Python library a try. I’ve been wanting to try some fine tuning with LLMs in the loop experiments.
Noob question, and probably being asked in the wrong place.
Is there any way to find out the minimum system requirements for running ollama run with different models?
I posted about my awesome experiences using Ollama a few months ago: https://news.ycombinator.com/item?id=37662915. Ollama is definitely the easiest way to run LLMs locally, and that means it's the best building block for applications that need to use inference. It's like how Docker made it so any application can execute something kinda portably, kinda safely on any machine. With Ollama, any application can run LLM inference on any machine.

Since that post, we shipped experimental support in our product for Ollama-based local inference. We had to write our own client in TypeScript but will probably be able to switch to this instead.
So cool! I have been using Ollama for weeks now and I just love it! It's the easiest way to run local LLMs; we are actually embedding them into our product right now and are super excited about it!
I used this half a year ago, love the UX but it was not possible to accelerate the workloads using an AMD GPU. How's the support for AMD GPUs under Ollama today?
I'm a huge fan of Ollama. Really like how easy it makes local LLM + neovim: https://github.com/David-Kunz/gen.nvim
This should make it easier to integrate with things like Vanna.ai, which was on HN recently.

There are a bunch of methods that need to be implemented for it to work, but then the usual OpenAI bits can be switched out for anything else; see e.g. the code stub in https://vanna.ai/docs/bigquery-other-llm-vannadb.html

Looking forward to more remixes for other tools too.
Why does this feel like an exercise in the high-priesting of coding? Shouldn't a Python library have everything necessary and work out of the box?
What I hate about ollama is that it makes server configuration a PITA. ollama relies on llama.cpp to run GGUF models, but while llama.cpp can keep the model in memory using `mlock` (helpful to reduce inference times), ollama simply won't let you do that:

https://github.com/ollama/ollama/issues/1536

Not to mention, they hide all the server configs in favor of their own "sane defaults".
I love Ollama's simplicity for downloading and consuming different models with its REST API. I've never used it in a "production" environment; does anyone know how Ollama performs there, or is it better to move to something like vLLM for that?
API-wise, it looks very similar to the OpenAI Python SDK, but not quite the same. I was hoping I could swap out one client for another. Can anyone confirm they're intentionally using an incompatible interface?
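For example, as far as I can tell the calls end up looking like this (same messages list, different call and response shapes):

```python
# Ollama library: module-level function, plain dict back
import ollama

r1 = ollama.chat(model="llama2", messages=[{"role": "user", "content": "hi"}])
print(r1["message"]["content"])

# OpenAI SDK (v1): client object, typed response
from openai import OpenAI

client = OpenAI()
r2 = client.chat.completions.create(
    model="gpt-3.5-turbo", messages=[{"role": "user", "content": "hi"}]
)
print(r2.choices[0].message.content)
```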
I love ollama. The engine underneath is llama.cpp, and they have the first version of self-extend about to be merged into main, so with any luck it will be available in ollama soon too!
Is anyone using this as an API behind a multi-user web application? Or does it need to be fed off of a message queue or something to basically keep it single-threaded?
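Something like the sketch below is what I mean by keeping it single-threaded: a queue in front and one worker draining it and calling Ollama. Purely illustrative; the model name and prompts are placeholders.

```python
# Sketch: serialize concurrent requests through an asyncio queue so only one
# Ollama call is in flight at a time.
import asyncio
import ollama


async def worker(queue: asyncio.Queue) -> None:
    while True:
        prompt, fut = await queue.get()
        try:
            # run the blocking client call in a thread so the event loop stays responsive
            resp = await asyncio.to_thread(ollama.generate, model="llama2", prompt=prompt)
            fut.set_result(resp["response"])
        except Exception as exc:
            fut.set_exception(exc)
        finally:
            queue.task_done()


async def ask(queue: asyncio.Queue, prompt: str) -> str:
    fut = asyncio.get_running_loop().create_future()
    await queue.put((prompt, fut))
    return await fut


async def main() -> None:
    queue: asyncio.Queue = asyncio.Queue()
    asyncio.create_task(worker(queue))
    # two "users" asking at once; the worker serves them one at a time
    answers = await asyncio.gather(ask(queue, "hello"), ask(queue, "tell me a joke"))
    print(answers)


asyncio.run(main())
```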
ollama feels like llama.cpp with extra undesired complexities. It feels like the former project is desperately trying to differentiate and monetize, while the latter is where all the things that matter happen.
If you're using TypeScript I highly recommend modelfusion: https://modelfusion.dev/guide/

It is far more robust, integrates with any LLM, local or hosted, and supports multi-modal models, retries, structure parsing using Zod, and more.
What is the benefit?

Ollama already exposes a REST API that you can query from whatever language (or, you know, just using curl), so why do I want to use Python or JS?
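For comparison, here is the same non-streaming chat call both ways; as far as I can tell the library is a thin wrapper over the same local endpoint (the model name is just an example):

```python
# Raw REST call against the local Ollama server
import requests

r = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "llama2",
        "messages": [{"role": "user", "content": "hi"}],
        "stream": False,
    },
)
print(r.json()["message"]["content"])

# Same thing via the Python library
import ollama

resp = ollama.chat(model="llama2", messages=[{"role": "user", "content": "hi"}])
print(resp["message"]["content"])
```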
There is also an Elixir library: https://overbring.com/blog/2024-01-14-ollamex-ollama-api-embeddings/
The Rust+Wasm stack provides a strong alternative to Python in AI inference.

* Lightweight. Total runtime size is 30MB, as opposed to 4GB for Python and 350MB for Ollama.
* Fast. Full native speed on GPUs.
* Portable. Single cross-platform binary on different CPUs, GPUs and OSes.
* Secure. Sandboxed and isolated execution on untrusted devices.
* Modern languages for inference apps.
* Container-ready. Supported in Docker, containerd, Podman, and Kubernetes.
* OpenAI compatible. Integrates seamlessly into the OpenAI tooling ecosystem.

Give it a try: https://www.secondstate.io/articles/wasm-runtime-agi/
I wish JS libraries would stop using default exports. They are not ergonomic as soon as you want to export one more thing from your package, which includes types, so all but the most trivial packages require multiple exports.

Just use a sensibly named export; you were going to write a "how to use" code snippet for the top of your readme anyway.

It also means that all of the code snippets your users send you will be immediately sensible, even without them including their import statements (assuming they don't use "as" renaming, which only makes sense when there are conflicts anyway).