TechEcho

Tanuki: Alignment-as-Code for LLM Applications

8 points by JackHopkins over 1 year ago

1 comment

JackHopkins over 1 year ago

Hey everyone,

I'm a main contributor of Tanuki (formerly MonkeyPatch).

The purpose of Tanuki is to reduce the time to ship your LLM projects, so you can focus on building what your users want instead of MLOps.

You define patched functions in Python using a decorator, and the execution of the function is delegated to an LLM, with type coercion on the response.

Automatic distillation is performed in the background, which can reduce the cost and latency of your functions by up to 10x without compromising accuracy.

The real magic feature, however, is how you can implement *alignment-as-code*, in which you use Python's `assert` syntax to declare the desired behaviour of your LLM functions. As this is managed in code, and is subject to code review and the standard software lifecycle, it becomes much clearer to understand how an LLM feature is meant to behave.

Any feedback is much appreciated! Thanks.
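To make the pattern above concrete, here is a minimal, self-contained sketch of the idea: a decorator delegates a function's execution to an "LLM", coerces the reply against the annotated return type, and plain `assert` statements pin down the desired behaviour. This is not Tanuki's actual implementation; `fake_llm` and `patched` are hypothetical stand-ins so the example runs without a model backend.

```python
from typing import Literal, get_type_hints

def fake_llm(prompt: str) -> str:
    # Stand-in for a real model call; a trivial heuristic keeps the
    # example runnable offline.
    return "Good" if "love" in prompt.lower() else "Bad"

def patched(fn):
    """Delegate the function body to the 'LLM', then coerce the raw
    string reply against the function's annotated return type."""
    ret = get_type_hints(fn).get("return")
    allowed = getattr(ret, "__args__", None)  # values of a Literal[...] type

    def wrapper(*args, **kwargs):
        prompt = f"{fn.__doc__}\nInput: {args} {kwargs}"
        raw = fake_llm(prompt)
        # Type coercion: reject replies outside the declared Literal values.
        if allowed is not None and raw not in allowed:
            raise ValueError(f"LLM reply {raw!r} not in {allowed}")
        return raw

    return wrapper

@patched
def classify_sentiment(message: str) -> Literal["Good", "Bad"]:
    """Classify the sentiment of the message as Good or Bad."""

# Alignment-as-code: asserts declare expected behaviour and live in
# version control, subject to ordinary code review.
assert classify_sentiment("I love this library") == "Good"
assert classify_sentiment("This is broken") == "Bad"
```

The asserts double as regression tests: if the underlying model (or a distilled replacement) drifts, the alignment suite fails in CI rather than in production.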
Hey everyone,<p>I&#x27;m a main contributor of Tanuki (formerly MonkeyPatch).<p>The purpose of Tanuki is to reduce the time to ship your LLM projects, so you can focus on building what your users want instead of MLOps.<p>You define patched functions in Python using a decorator, and the execution of the function is delegated to an LLM, with type-coercion on the response.<p>Automatic distillation is performed in the background, which can reduce the cost and latency of your functions by up to 10x without compromising accuracy.<p>The real magic feature, however, is how you can implement <i>alignment-as-code</i>, in which you can use Python&#x27;s `assert` syntax to declare the desired behaviour of your LLM functions. As this is managed in code, and is subject to code-review and the standard software-lifecycle, it becomes much clearer to understand how an LLM feature is meant to behave.<p>Any feedback is much appreciated! Thanks.