
I built an open source Computer-use framework that uses Local LLMs with Ollama

4 points by powerawq103846 about 2 months ago

1 comment

powerawq103846 about 2 months ago
We just launched stable support for local LLMs with Ollama in Agent today - our Computer-use agent framework on macOS. Repo here: https://github.com/trycua/cua

Run models like Gemma3, Phi4, Qwen2.5, Llama3.1, and more supported by Ollama, keeping your data completely private - no cloud required.

With local LLMs, Agent combines UI grounding and pixel detection (via Pyautogen OmniParser) for accurate computer control, all inside Cua's sandboxed environment for security.

Privacy-conscious? Local models through Ollama run entirely on your machine while keeping the same agent capabilities, backed by MPS on Apple Silicon.

Simple to set up: just `pip install "cua-agent[all]"` and connect to your local Ollama models. Check out the examples in our repo: https://github.com/trycua/cua
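For anyone trying this, here is a minimal sketch (not from the post) for sanity-checking that Ollama is actually serving a model locally before wiring it into the agent. It uses Ollama's documented REST API on its default port 11434; the model name is just an example of one you have already pulled:

```python
# Verify a local Ollama model responds before connecting the agent to it.
# Assumes Ollama is running on its default endpoint (localhost:11434)
# and the model has been pulled (e.g. `ollama pull llama3.1`).
import requests

OLLAMA_URL = "http://localhost:11434/api/chat"  # Ollama's default chat endpoint
MODEL = "llama3.1"  # any locally pulled model: gemma3, phi4, qwen2.5, ...

response = requests.post(
    OLLAMA_URL,
    json={
        "model": MODEL,
        "messages": [{"role": "user", "content": "Reply with the word 'ready'."}],
        "stream": False,  # return one JSON object instead of a token stream
    },
    timeout=120,
)
response.raise_for_status()
# With stream=False, the reply is under message.content in the JSON body.
print(response.json()["message"]["content"])  # generated entirely on-device
```

If this prints a reply, the data never left your machine, and the same model name can be handed to the agent's Ollama configuration (see the repo's examples for the exact setup).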