Hey HN,<p>We've been hacking on LLMs for a while and kept running into a specific problem with distributed tool calling.<p>The Problem<p>When building AI agents or LLM automations in distributed environments, you typically:<p>- Need to build APIs for your distributed tools<p>- Require load balancers in front of tool replicas (e.g. in k8s environments)<p>- Must refactor long-running tools to fit within HTTP timeout constraints<p>Our Solution: AgentRPC<p>AgentRPC addresses these challenges by turning any function into a long-polling consumer of a distributed message queue. Each consumer registers with a centralized server, which:<p>- Monitors its health<p>- Maintains context about its function schemas<p>Features<p>The AgentRPC SDKs provide:<p>- A unified MCP-compatible server<p>- Tool definitions in an OpenAI SDK-compatible format<p>The AgentRPC server handles:<p>- Load balancing<p>- Automatic failover<p>- Observability<p>Because tool calls flow through an async HTTP-based API, they can run well beyond normal HTTP timeout limits.<p>We currently support TypeScript, Go, and Python natively, with more SDKs in development.<p>Check us out: <a href="https://agentrpc.com/" rel="nofollow">https://agentrpc.com/</a><p>We're still early, but keen to hear any feedback!
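<p>For anyone curious what this looks like in practice, here's a minimal sketch of registering a function as a tool with the TypeScript SDK (simplified for this post; see the docs for the exact API):<p>
  import { AgentRPC } from "agentrpc";
  import { z } from "zod";

  const rpc = new AgentRPC({ apiSecret: process.env.AGENTRPC_API_SECRET });

  // Registering turns the function into a long-polling consumer:
  // no inbound port, ingress, or load balancer needed on this machine.
  rpc.register({
    name: "getWeather",
    schema: z.object({ location: z.string() }),
    handler: async ({ location }) => {
      // Call your internal service here; this can run for minutes
      // without hitting an HTTP timeout.
      return { location, forecast: "sunny" };
    },
  });

  await rpc.listen();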
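<p>And a sketch of the agent side, consuming those tools through the OpenAI-compatible surface (again simplified; helper names may differ slightly from the docs):<p>
  import OpenAI from "openai";
  import { AgentRPC } from "agentrpc";

  const rpc = new AgentRPC({ apiSecret: process.env.AGENTRPC_API_SECRET });
  const openai = new OpenAI();

  // Fetch OpenAI-format tool definitions for the registered functions.
  const tools = await rpc.OpenAI.getTools();

  const completion = await openai.chat.completions.create({
    model: "gpt-4o",
    messages: [{ role: "user", content: "What's the weather in Melbourne?" }],
    tools,
  });

  // Each tool call is executed via the async AgentRPC API, so it can
  // outlive an ordinary HTTP request.
  for (const call of completion.choices[0].message.tool_calls ?? []) {
    const result = await rpc.OpenAI.executeTool(call);
    console.log(result);
  }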