Hi HN! My name is Adil Hafeez, and I am the Co-Founder at Katanemo and the lead developer behind Arch - an open source project that helps developers build generative AI apps faster. Previously I worked on Envoy at Lyft.<p>Engineered with purpose-built LLMs, Arch handles the critical but undifferentiated tasks of handling and processing prompts: detecting and rejecting jailbreak attempts, intelligently calling “backend” APIs to fulfill the user’s request represented in a prompt, routing to and offering disaster recovery between upstream LLMs, and managing the observability of prompts and LLM interactions in a centralized way - all outside your business logic.<p>Here are some key details of the project:<p>* Built on top of Envoy and written in Rust. It runs alongside application servers and uses Envoy's proven HTTP management and scalability features to handle traffic related to prompts and LLMs.<p>* Function calling for fast agentic and RAG apps. Engineered with purpose-built LLMs to handle fast, cost-effective, and accurate prompt-based tasks like function/API calling and parameter extraction from prompts.<p>* Prompt guardrails to prevent jailbreak attempts and ensure safe user interactions without writing a single line of code.<p>* Manages LLM calls, offering smart retries, automatic cutover, and resilient upstream connections for continuous availability.<p>* Uses the W3C Trace Context standard to enable complete request tracing across applications, ensuring compatibility with observability tools, and provides metrics on latency, token usage, and error rates to help optimize AI application performance.<p>This is our first release, and we would love to build alongside the community.
We are just getting started on reinventing what we can do at the networking layer for prompts.<p>Do check it out on GitHub at <a href="https://github.com/katanemo/arch/">https://github.com/katanemo/arch/</a>.<p>Please leave a comment or feedback here and I will be happy to answer!
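For readers unfamiliar with the tracing standard mentioned above: W3C Trace Context propagates request identity via a `traceparent` header of the form `version-traceid-parentid-flags`. The sketch below is not Arch's actual code, just a minimal illustration in Python of what generating and validating that header involves:

```python
import re
import secrets

def make_traceparent() -> str:
    """Build a W3C Trace Context 'traceparent' header value:
    version (00) - 16-byte trace-id - 8-byte parent-id - flags (01 = sampled)."""
    trace_id = secrets.token_hex(16)   # 32 lowercase hex chars; must not be all zeros
    parent_id = secrets.token_hex(8)   # 16 lowercase hex chars
    return f"00-{trace_id}-{parent_id}-01"

_TRACEPARENT_RE = re.compile(
    r"^(?P<version>[0-9a-f]{2})-"
    r"(?P<trace_id>[0-9a-f]{32})-"
    r"(?P<parent_id>[0-9a-f]{16})-"
    r"(?P<flags>[0-9a-f]{2})$"
)

def parse_traceparent(header: str) -> dict:
    """Split a traceparent header into its fields; raise ValueError if malformed."""
    m = _TRACEPARENT_RE.match(header)
    if m is None or m.group("trace_id") == "0" * 32:
        raise ValueError(f"invalid traceparent: {header!r}")
    return m.groupdict()
```

A proxy that forwards the same trace-id on every hop (minting a fresh parent-id for its own span) lets any W3C-compatible observability backend stitch a prompt's path through the gateway and upstream LLMs into a single trace.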
Hi, I'm curious how preventing jailbreaks protects the <i>user</i>?<p>> Prompt guardrails to prevent jailbreak attempts and ensure safe user interactions [...]
Lately, I have seen a few gateways around LLMs - namely OpenRouter, portkey.ai, etc.<p>My key question is: who would be the ideal customer for a proxy or gateway like this? Why couldn't it be an extension or plugin of existing LBs, proxies, etc.?
Tetrate and Bloomberg want to contribute their code to Envoy to create "Envoy AI Gateway", similarly to how there is an "Envoy Gateway" spec. Do you see this as being complementary or competitive with your work?<p><a href="https://tetrate.io/press/tetrate-and-bloomberg-collaborate-on-community-led-open-standard-for-ai-gateways-built-on-cncfs-envoy-gateway-project/" rel="nofollow">https://tetrate.io/press/tetrate-and-bloomberg-collaborate-o...</a>
Hey HN - my name is Salman and I am Adil’s Co-Founder. Would love to hear your feedback. Here is a link to our public roadmap; please let us know if there are things you’d like us to work on first<p><a href="https://github.com/orgs/katanemo/projects/1">https://github.com/orgs/katanemo/projects/1</a>
Envoy is legendary in (dev)ops circles, but I don't understand what it lends to the AI space. I feel like building a separate backend service that runs behind envoy would make more sense but that's just me.
Offtopic technical note: I've created a new post for this because the previous one (<a href="https://news.ycombinator.com/item?id=41801315">https://news.ycombinator.com/item?id=41801315</a>) was old enough to fall out of the ranked stories on HN.<p>We picked it for the second-chance pool (<a href="https://news.ycombinator.com/item?id=26998308">https://news.ycombinator.com/item?id=26998308</a>) when it was already several days old, and by the time the thread got going, it basically got evicted from cache. This is a manual workaround to correct that. Sorry all!<p>I've moved the comments from the other thread hither, which is why most of them are hours older than the current submission is.