
Ollama now supports tool calling with popular models in local LLM

81 points by thor-rodrigues 9 months ago

6 comments

koinedad · 9 months ago
Pretty sweet to get to run models locally and have more advanced usages like tool calling, excited to try it out
Comment #41297267 not loaded
gavmor · 9 months ago
Where is `get_current_weather` implemented?

> Tool responses can be provided via messages with the `tool` role.
Comment #41291918 not loaded
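For context on gavmor's question: in the flow the post describes, `get_current_weather` is implemented by the caller, not by Ollama. The model only returns a structured tool call; the caller runs the function and sends the result back as a message with the `tool` role, as the quoted line says. A minimal sketch of that round trip against a local Ollama server (the model name and the weather stub are illustrative, not from the thread):

```python
# Minimal sketch of the tool-calling round trip against a local Ollama server.
# Assumes Ollama is running on localhost:11434 with a tool-capable model pulled;
# the model name and get_current_weather stub are illustrative placeholders.
import json
import requests

OLLAMA_CHAT = "http://localhost:11434/api/chat"
MODEL = "llama3.1"  # assumption: any model labeled for tool use

def get_current_weather(city: str) -> str:
    # Implemented by the caller, not by Ollama -- stubbed out here.
    return json.dumps({"city": city, "temperature_c": 21})

tools = [{
    "type": "function",
    "function": {
        "name": "get_current_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

messages = [{"role": "user", "content": "What is the weather in Toronto?"}]

# Turn 1: the model replies with a structured tool call instead of running anything.
reply = requests.post(OLLAMA_CHAT, json={
    "model": MODEL, "messages": messages, "tools": tools, "stream": False,
}).json()["message"]
messages.append(reply)

# Turn 2: execute the requested function locally and hand the result back
# via a message with the `tool` role, as the quoted line describes.
for call in reply.get("tool_calls", []):
    result = get_current_weather(**call["function"]["arguments"])
    messages.append({"role": "tool", "content": result})

final = requests.post(OLLAMA_CHAT, json={
    "model": MODEL, "messages": messages, "tools": tools, "stream": False,
}).json()
print(final["message"]["content"])
```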
fancy_pantser · 9 months ago
I see Command-R+ but not Command-R marked for tool use. The model is geared for it, much easier to fit on commodity hardware like 4090s, and Ollama's own description for it even includes tool use. I think it's just not labeled for some reason. It works really well with the provided ollama-python package and other tools that already brought function calling capabilities via Ollama's API.

https://ollama.com/library/command-r
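The ollama-python path this comment mentions looks roughly like the sketch below; it assumes `pip install ollama` and a pulled `command-r` model, and the exact response shape may differ between package versions:

```python
# Rough sketch of the same call through the ollama-python package.
# Assumes `pip install ollama` and `ollama pull command-r`; the exact response
# shape (dict vs. typed object) can vary between package versions.
import ollama

response = ollama.chat(
    model="command-r",
    messages=[{"role": "user", "content": "What is the weather in Toronto?"}],
    tools=[{
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "Get the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }],
)

# Whether tool_calls comes back populated is what the "marked for tool use"
# label is really about: the model has to emit the structured call.
for call in response["message"].get("tool_calls") or []:
    print(call["function"]["name"], call["function"]["arguments"])
```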
codeisawesome · 9 months ago
How does this compare to Agent Zero (frdel/agent-zero on GitHub)? Seems that provides similar functionality and uses docker for running the scripts / code generated.
Comment #41292249 not loaded
hm-nah · 9 months ago
The first thing I think of when anyone mentions agent-like “tool use” is:

- Is the environment that the tools are run from sandboxed?

I’m unclear on when/how/why you’d want an LLM executing code on your machine or in a non-sandboxed environment.

Anyone care to enlighten?
Comment #41292453 not loaded
Comment #41292530 not loaded
Comment #41293781 not loaded
Comment #41292712 not loaded
Comment #41292437 not loaded
Comment #41292432 not loaded
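On hm-nah's question: the tool-calling API itself never executes anything; the model only names a function and its arguments, and the calling code decides whether and how to run it. When a tool does execute generated code, one common precaution is a throwaway container. A rough sketch of that idea, assuming a local Docker daemon and the python:3.12-slim image (the limits are illustrative):

```python
# Illustrative only: run model-requested code inside a short-lived Docker
# container rather than the host Python process. Assumes a local Docker daemon
# and the python:3.12-slim image; the resource limits are arbitrary examples.
import subprocess

def run_sandboxed(code: str, timeout_s: int = 10) -> str:
    result = subprocess.run(
        [
            "docker", "run", "--rm",
            "--network", "none",   # no network access from inside the sandbox
            "--memory", "256m",    # cap memory
            "--cpus", "0.5",       # cap CPU
            "python:3.12-slim",
            "python", "-c", code,
        ],
        capture_output=True,
        text=True,
        timeout=timeout_s,
    )
    return result.stdout if result.returncode == 0 else result.stderr

# A tool handler would pass untrusted, model-generated code here
# instead of exec()-ing it on the host.
print(run_sandboxed("print(2 + 2)"))
```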
SV_BubbleTime · 9 months ago
My guess, since programmer blog post writing (plus autism?) assumes “Everyone already knows everything about my project because I do!”…

Is this to the effect of running a local LLM that reads your prompt and then decides which correct/specialized LLM to hand it off to? If that is the case, isn’t it going to mean a lot of latency to switch models back and forth, since most people usually run the single largest model that will fit on their GPU?
Comment #41292160 not loaded
Comment #41294837 not loaded
Comment #41292422 not loaded
Comment #41292142 not loaded