RamaLama

163 points by johlo 4 months ago

14 comments

eigenvalue 4 months ago
This is the point of it: https://github.com/ggerganov/llama.cpp/pull/11016#issuecomment-2599740463
mckirk 4 months ago
This looks great!

While we're at it, is there already some kind of standardized local storage location/scheme for LLM models? If not, this project could be a great place to set an example that others can follow, if they want. I've been playing with different runtimes (Ollama, vLLM) over the last few days, and I really would have appreciated better interoperability in terms of shared model storage, instead of everybody defaulting to downloading everything all over again.
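A shared scheme could be as simple as one well-known directory that every runtime checks before downloading. A minimal sketch of that idea, assuming a hypothetical `LLM_MODEL_HOME` environment variable and a flat layout (nothing these runtimes actually agree on today):

```python
import os
from pathlib import Path

# Hypothetical shared cache: one well-known directory that llama.cpp,
# vLLM, Ollama, etc. could all resolve models from instead of keeping
# their own private copies.
SHARED_CACHE = Path(os.environ.get("LLM_MODEL_HOME",
                                   Path.home() / ".cache" / "llm-models"))

def resolve_model(name: str) -> Path:
    """Return the local path for a model without downloading anything."""
    path = SHARED_CACHE / name
    if not path.exists():
        raise FileNotFoundError(
            f"{name} not in {SHARED_CACHE}; download it once "
            "and every runtime can share it")
    return path
```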
pzo 4 months ago
To make AI really boring, all these projects need to be more approachable to non-tech-savvy people, e.g. with some minimal GUI for searching, listing, deleting, and installing AI models. I wish this or Ollama could work more as an invisible dependency manager for AI models. Right now every app that wants STT like Whisper bundles the model inside, so users waste storage and have to wait for big downloads. We had similar problems with static libraries and then moved to dynamic linking.

I wish an app could declare a model as a dependency and, on install, download it only if it isn't already available locally. It could also check whether Ollama is installed and bootstrap it only if it doesn't already exist on the drive, maybe with a nice interface for the user to confirm the download and some nice onboarding.
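A sketch of what that check-before-download step could look like; the store location, function name, and checksum handling here are hypothetical, just to illustrate the dependency-manager idea:

```python
import hashlib
import urllib.request
from pathlib import Path

# Hypothetical shared store; a real AI-model dependency manager would
# standardize this location so apps stop bundling their own copies.
MODEL_STORE = Path.home() / ".cache" / "shared-models"

def ensure_model(name: str, url: str, sha256: str) -> Path:
    """Download a model only if a verified copy isn't already present."""
    target = MODEL_STORE / name
    if target.exists():
        h = hashlib.sha256()
        with target.open("rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        if h.hexdigest() == sha256:
            return target  # already installed locally, nothing to fetch
    MODEL_STORE.mkdir(parents=True, exist_ok=True)
    print(f"Fetching {name} (one-time download, shared by every app)...")
    urllib.request.urlretrieve(url, target)
    return target
```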
rhatdan 4 months ago
One of my primary goals for RamaLama was to allow users to move AI models into containers so they can be stored in OCI registries. I believe there is going to be a proliferation of "private" models, and eventually "private" RAG data. (I'm working heavily on RAG support in RamaLama now.)

Once you have private models and RAG data, I believe you will want to run them on edge devices and in Kubernetes clusters. Getting the AI models and data into OCI content would let us take advantage of content signing, trust, and mirroring, and would make running AI in production easier.

It would also allow users to block access to outside "untrusted" AI models on the internet, so companies could restrict themselves to "trusted" AI.

Since companies already have OCI registries, it makes sense to store your AI models and content in the same location.
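To see why OCI registries bring signing, trust, and mirroring along, note that OCI content is addressed by digest: a registry entry is a manifest whose layers reference blobs by their sha256. A simplified, illustrative sketch of wrapping a model file that way (not RamaLama's actual packaging format; the media type and file name are placeholders):

```python
import hashlib
import json
from pathlib import Path

def describe_blob(path: Path, media_type: str) -> dict:
    """Build an OCI-style descriptor: the blob is addressed by its digest."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return {"mediaType": media_type,
            "digest": f"sha256:{h.hexdigest()}",
            "size": path.stat().st_size}

# Signing and mirroring tools operate on these digests, which is why
# storing models as OCI content gets you that machinery for free.
manifest = {
    "schemaVersion": 2,
    "layers": [describe_blob(Path("model.gguf"),  # placeholder model file
                             "application/octet-stream")],
}
print(json.dumps(manifest, indent=2))
```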
jerrygenser 4 months ago
122 points 2 hours ago, yet this is currently #38 and not on the front page.

Strange. At the same time I see numerous items on the front page posted 2 hours ago or earlier with fewer points.

I'm willing to take a reputation hit on this meta post. I wonder why this got demoted so quickly from the front page despite people clearly voting on it, and whether it has anything to do with being backed by YC.

I sincerely hope it's just my misunderstanding of the HN algorithm, though.
guerrilla 4 months ago
> Running in containers eliminates the need for users to configure the host system for AI.

When is that a problem?

Based on the linked issue in eigenvalue's comment [1], this seems like a very good thing. It sounds like ollama is up to no good and this is a good drop-in replacement. What is the deeper problem being solved here, though, about configuring the host? I've not run into any such issue.

1. https://news.ycombinator.com/item?id=42888129
2mlWQbCK 4 months ago
What benefit does Ollama (or RamaLama) offer over plain llama.cpp or llamafile? The only thing I understand is the automatic downloading of models behind the scenes, but a big reason for me to use local models at all is that I want to know exactly which files I use and keep them sorted and backed up properly, so a tool that automatically downloads models and dumps them in some cache directory just sounds annoying.
baron-bourbon 4 months ago
Does this provide an Ollama-compatible API endpoint? I've got at least one other project running that only supports Ollama's API or OpenAI's hosted solution (i.e. the API endpoint isn't configurable to point at llama.cpp and friends).
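For comparison, llama.cpp's bundled llama-server speaks the OpenAI chat-completions format, so clients that let you override the base URL can already target it; whether this project exposes Ollama's own API is the open question. A sketch with a configurable endpoint (the port and model name are placeholders):

```python
import json
import urllib.request

# Point BASE_URL at whatever OpenAI-compatible server you run locally
# (llama.cpp's llama-server exposes /v1/chat/completions, for example).
BASE_URL = "http://localhost:8080/v1"

req = urllib.request.Request(
    f"{BASE_URL}/chat/completions",
    data=json.dumps({
        "model": "local-model",  # placeholder; many local servers ignore this
        "messages": [{"role": "user", "content": "Hello!"}],
    }).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp)["choices"][0]["message"]["content"])
```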
glitchc 4 months ago
Great, finally an alternative to ollama's convenience.
Y_Y 4 months ago
So it's a replacement for Ollama?

The killer features of Ollama for me right now are the nice library of quantized models and the ability to automatically start and stop serving models in response to incoming requests and timeouts. The first seems to be solved by reusing the Ollama models, but from my cursory look I can't tell whether the on-demand serving is possible.
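That start-on-demand, stop-on-idle behavior is essentially a small supervisor in front of the server process. A rough sketch of the pattern (the command, port, and timeout are arbitrary placeholders; this is not how Ollama or RamaLama actually implement it):

```python
import subprocess
import threading
import time

IDLE_TIMEOUT = 300  # seconds without a request before the model is unloaded
SERVER_CMD = ["llama-server", "-m", "model.gguf", "--port", "8080"]  # placeholder

class OnDemandServer:
    def __init__(self):
        self.proc = None
        self.last_used = 0.0
        self.lock = threading.Lock()
        threading.Thread(target=self._reaper, daemon=True).start()

    def handle_request(self):
        """Called on every incoming request: start the server if needed."""
        with self.lock:
            if self.proc is None or self.proc.poll() is not None:
                self.proc = subprocess.Popen(SERVER_CMD)  # load model on demand
            self.last_used = time.monotonic()
        # ...forward the request to the running server here...

    def _reaper(self):
        """Stop the server once it has sat idle past the timeout."""
        while True:
            time.sleep(10)
            with self.lock:
                idle = time.monotonic() - self.last_used
                if self.proc and self.proc.poll() is None and idle > IDLE_TIMEOUT:
                    self.proc.terminate()
                    self.proc = None
```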
ecurtin 4 months ago
I am doing a short talk on this tomorrow at FOSDEM:

https://fosdem.org/2025/schedule/event/fosdem-2025-4486-ramalama-making-working-with-ai-models-boring/
wsintra2022 4 months ago
I’m using openwebui, can this replace ollama in my setup?
n144q 4 months ago
It seems that all instructions are based on Mac/Linux? Can someone confirm this works smoothly on Windows?
esafak 4 months ago
Is this useful? Can someone help me see the value add here?