
科技回声 (TechEcho)

A tech news platform built with Next.js, providing global technology news and discussion.


© 2025 科技回声 (TechEcho). All rights reserved.

Show HN: Arch-Function: 3B parameter LLM that beats GPT-4o on function calling

5 points | by sparacha | 7 months ago
Hi HN!

My name is Salman Paracha. I am the Founder/CEO of Katanemo - the organization behind the open source Arch GW (an intelligent gateway for prompts - https://github.com/katanemo/arch). Today, we are making the (SOTA) LLMs engineered in Arch GW for function-calling scenarios available under an OSS license that borrows from Llama's community license.

What is function calling? Function calling lets developers personalize apps by invoking application-specific operations via user prompts. This covers any predefined functions or APIs you want to expose to perform tasks, gather information, or manipulate data. With function calling, you can support agentic workflows tailored to domain-specific use cases - from updating insurance claims to creating ad campaigns. Arch-Function analyzes prompts, extracts critical information from them, engages in lightweight conversations with the user to gather any missing parameters, and makes API calls so that you can focus on writing business logic.

Arch-Function is an auto-regressive model that, when run on NVIDIA A100 GPUs using vLLM, offers throughput of ~1900 output tokens per second at an output-token price of $0.10/M tokens. That is ~12x faster and 44x cheaper than GPT-4o.
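The flow described above - extract parameters from a prompt, ask the user for anything missing, then call the function - can be sketched roughly as follows. The tool schema, function name, and dispatch logic here are illustrative assumptions in the common JSON-Schema function-calling style, not Arch-Function's actual interface:

```python
import json

# Hypothetical tool schema (illustrative only, not Arch-Function's real API).
TOOLS = [{
    "name": "update_insurance_claim",
    "description": "Update the status of an insurance claim.",
    "parameters": {
        "type": "object",
        "properties": {
            "claim_id": {"type": "string"},
            "status": {"type": "string", "enum": ["open", "approved", "denied"]},
        },
        "required": ["claim_id", "status"],
    },
}]

def execute_tool_call(raw_model_output: str) -> str:
    """Parse a model's function-call output and dispatch it,
    prompting for any required parameters the model could not extract."""
    call = json.loads(raw_model_output)
    schema = next(t for t in TOOLS if t["name"] == call["name"])
    args = call.get("arguments", {})
    missing = [p for p in schema["parameters"]["required"] if p not in args]
    if missing:
        # In practice the model itself generates this follow-up question.
        return f"Please provide: {', '.join(missing)}"
    # All required parameters present: invoke the application-specific operation.
    return f"Claim {args['claim_id']} set to {args['status']}"

# Simulated model output where both required parameters were extracted:
print(execute_tool_call(json.dumps({
    "name": "update_insurance_claim",
    "arguments": {"claim_id": "C-1042", "status": "approved"},
})))
```

A gateway like Arch GW would sit between the user prompt and this dispatch step, with the model producing the JSON call and the application supplying the actual function body.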

No comments yet.
