
RamaLama

163 points by johlo 4 months ago

14 comments

eigenvalue 4 months ago
This is the point of it: https://github.com/ggerganov/llama.cpp/pull/11016#issuecomment-2599740463
mckirk 4 months ago
This looks great!

While we're at it, is there already some kind of standardized local storage location/scheme for LLM models? If not, this project could be a great place to set an example that others can follow, if they want. I've been playing with different runtimes (Ollama, vLLM) over the last few days, and I really would have appreciated better interoperability in terms of shared model storage, instead of everybody defaulting to downloading everything all over again.
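A minimal sketch of what such cross-runtime model lookup could look like. The first two paths are the default cache locations Ollama and Hugging Face use today (note that Ollama names its blobs by digest, so a real resolver would have to read its manifests rather than match filenames); the shared directory and the model name are hypothetical:

```python
from pathlib import Path

# Places a model might already live: Ollama's blob store, the Hugging Face
# hub cache, and a hypothetical shared directory runtimes could agree on.
KNOWN_MODEL_DIRS = [
    Path.home() / ".ollama" / "models" / "blobs",
    Path.home() / ".cache" / "huggingface" / "hub",
    Path.home() / ".local" / "share" / "models",  # hypothetical shared store
]

def find_local_model(filename: str) -> Path | None:
    """Return the first locally cached copy of a model file, if any."""
    for base in KNOWN_MODEL_DIRS:
        if not base.is_dir():
            continue
        for candidate in base.rglob(filename):
            return candidate
    return None

if __name__ == "__main__":
    hit = find_local_model("mistral-7b-instruct.Q4_K_M.gguf")
    print(hit or "not cached anywhere -- would have to download")
```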
pzo 4 months ago
To make AI really boring, all these projects need to be more approachable to non-tech-savvy people, e.g. with some minimal GUI for searching, listing, deleting, and installing AI models. I wish this or Ollama could work more as an invisible AI-model dependency manager. Right now every app that wants STT like Whisper bundles its own copy of the model. Users waste storage and have to wait for big downloads. We had similar problems with static libraries and then moved to dynamic linking.

I wish an app could declare a model as a dependency, and on install it would download the model only if it isn't already available locally. It would also check whether Ollama is installed and only bootstrap it if it doesn't already exist on the drive. Maybe with some nice interface for the user to confirm the download, and nice onboarding.
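A sketch of that "invisible dependency manager" flow, under stated assumptions: the manifest format, download URL, and store path are all invented for illustration, and a real tool would verify digests before trusting a download:

```python
import urllib.request
from pathlib import Path

# Hypothetical shared store and model-dependency manifest an installer reads.
MODEL_STORE = Path.home() / ".local" / "share" / "models"
DEPENDENCIES = {
    # model file -> download URL (both invented for illustration)
    "whisper-small.gguf": "https://example.com/models/whisper-small.gguf",
}

def install_model_deps() -> None:
    MODEL_STORE.mkdir(parents=True, exist_ok=True)
    for name, url in DEPENDENCIES.items():
        target = MODEL_STORE / name
        if target.exists():
            print(f"{name}: already present, skipping")  # the dynamic-linking win
            continue
        if input(f"Download {name} from {url}? [y/N] ").strip().lower() != "y":
            print(f"{name}: skipped by user")
            continue
        urllib.request.urlretrieve(url, target)
        print(f"{name}: installed to {target}")

if __name__ == "__main__":
    install_model_deps()
```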
rhatdan 4 months ago
One of my primary goals for RamaLama was to allow users to move AI models into containers, so they can be stored in OCI registries. I believe there is going to be a proliferation of "private" models, and eventually "private" RAG data. (I'm working heavily on RAG support in RamaLama now.)

Once you have private models and RAG data, I believe you will want to run them on edge devices and in Kubernetes clusters. Getting the AI models and data into OCI content would allow us to take advantage of content signing, trust, and mirroring, and would make running AI in production easier.

It also allows users to block access to outside "untrusted" AI models stored on the internet, so companies can use only "trusted" AI.

Since companies already have OCI registries, it makes sense to store your AI models and content in the same location.
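The "trusted AI only" piece of that argument can be sketched independently of any registry machinery: check a model's OCI reference against an allowlist of approved registries before pulling. The registry names below are placeholders, and a production setup would pair this with actual signature verification (e.g. cosign) rather than string matching alone:

```python
# Hypothetical policy check: only pull models from approved OCI registries.
TRUSTED_REGISTRIES = ("registry.internal.example.com", "quay.io/myorg")

def is_trusted(oci_ref: str) -> bool:
    """oci_ref looks like 'oci://registry/repo/model:tag'."""
    ref = oci_ref.removeprefix("oci://")
    return ref.startswith(TRUSTED_REGISTRIES)

for ref in ("oci://quay.io/myorg/granite:7b",
            "oci://random-site.net/models/unvetted:latest"):
    verdict = "allow" if is_trusted(ref) else "block"
    print(f"{verdict}: {ref}")
```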
jerrygenser 4 months ago
122 points 2 hours ago, yet this is currently #38 and not on the front page.

Strange. At the same time I see numerous items on the front page posted 2 hours ago or older with fewer points.

I'm willing to take a reputation hit on this meta post. I wonder why this got demoted so quickly from the front page despite people clearly voting on it. I wonder if it has anything to do with being backed by YC.

I sincerely hope it's just my misunderstanding of the HN algorithm, though.
guerrilla 4 months ago
> Running in containers eliminates the need for users to configure the host system for AI.

When is that a problem?

Based on the linked issue in eigenvalue's comment [1], this seems like a very good thing. It sounds like Ollama is up to no good and this is a good drop-in replacement. What is the deeper problem being solved here, though, about configuring the host? I've not run into any such issue.

1. https://news.ycombinator.com/item?id=42888129
2mlWQbCK 4 months ago
What benefit does Ollama (or RamaLama) offer over plain llama.cpp or llamafile? The only thing I understand is that there is automatic downloading of models behind the scenes, but a big reason for me to use local models at all is that I want to know exactly which files I use and keep them sorted and backed up properly, so a tool that automatically downloads models and dumps them in some cache directory just sounds annoying.
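One way to keep that control even alongside auto-downloading tools is to treat the model directory like any other data you back up and keep a digest manifest. This sketch uses only the Python standard library; the directory path is an assumption (use wherever you actually keep your GGUF files):

```python
import hashlib
from pathlib import Path

MODEL_DIR = Path.home() / "models"  # assumed location of your model files

def sha256_of(path: Path, chunk: int = 1 << 20) -> str:
    """Hash a file in chunks so large models don't need to fit in memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

# Print a digest manifest you can store next to your backups and re-verify.
for model in sorted(MODEL_DIR.glob("*.gguf")):
    print(f"{sha256_of(model)}  {model.name}")
```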
baron-bourbon 4 months ago
Does this provide an Ollama-compatible API endpoint? I've got at least one other project running that only supports Ollama's API or OpenAI's hosted solution (i.e. the API endpoint isn't configurable to use llama.cpp and friends).
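For context, Ollama's native generate endpoint has the shape below. This matches Ollama's documented API; whether RamaLama exposes the same routes, and on which port, is exactly the question being asked:

```python
import json
import urllib.request

# Ollama's generate endpoint; a compatible server would accept the same shape.
URL = "http://localhost:11434/api/generate"
payload = {
    "model": "llama3",                        # any locally available model tag
    "prompt": "Say hello in one sentence.",
    "stream": False,                          # single JSON object, not chunks
}

req = urllib.request.Request(
    URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```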
glitchc 4 months ago
Great, finally an alternative to Ollama's convenience.
Y_Y 4 months ago
So it's a replacement for Ollama?

The killer features of Ollama for me right now are the nice library of quantized models and the ability to automatically start and stop serving models in response to incoming requests and timeouts. The first seems to be solved by reusing the Ollama models, but I can't tell from a cursory look whether the second is possible.
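The start/stop-on-demand behavior being asked about is easy to sketch in isolation: a supervisor launches the backend on the first request and terminates it after an idle timeout. The serve command is a placeholder (llama.cpp's llama-server is shown), and this illustrates the lifecycle only, not anything RamaLama actually implements:

```python
import subprocess
import threading
import time

# Placeholder command for whatever serves the model.
SERVE_CMD = ["llama-server", "-m", "model.gguf", "--port", "8080"]
IDLE_TIMEOUT = 300  # seconds without requests before shutting the server down

class OnDemandServer:
    def __init__(self) -> None:
        self.proc: subprocess.Popen | None = None
        self.last_used = 0.0
        self.lock = threading.Lock()
        threading.Thread(target=self._reaper, daemon=True).start()

    def ensure_running(self) -> None:
        """Call on every incoming request before proxying it to the backend."""
        with self.lock:
            self.last_used = time.monotonic()
            if self.proc is None or self.proc.poll() is not None:
                self.proc = subprocess.Popen(SERVE_CMD)

    def _reaper(self) -> None:
        """Periodically stop the backend once it has been idle long enough."""
        while True:
            time.sleep(30)
            with self.lock:
                idle = time.monotonic() - self.last_used
                if self.proc and self.proc.poll() is None and idle > IDLE_TIMEOUT:
                    self.proc.terminate()  # stop serving until the next request
                    self.proc = None
```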
ecurtin 4 months ago
I am doing a short talk on this tomorrow at FOSDEM: https://fosdem.org/2025/schedule/event/fosdem-2025-4486-ramalama-making-working-with-ai-models-boring/
wsintra2022 4 months ago
I’m using openwebui, can this replace ollama in my setup?
n144q 4 months ago
It seems that all instructions are based on Mac/Linux? Can someone confirm this works smoothly on Windows?
esafak 4 months ago
Is this useful? Can someone help me see the value add here?