Hi HN, Jack here! I'm one of the creators of MonkeyPatch, an easy tool that helps you build LLM-powered functions and apps that get cheaper and faster the more you use them.

For example, if you need to classify PDFs, extract product feedback from tweets, or auto-generate synthetic data, you can spin up an LLM-powered Python function in <5 minutes to power your application. Unlike existing LLM clients, these functions generate well-typed outputs with guardrails to mitigate unexpected behavior.

After about 200-300 calls, these functions begin to get cheaper and faster. We've seen 8-10x reductions in cost and latency in some use cases! This happens via progressive knowledge distillation: MonkeyPatch incrementally fine-tunes smaller, cheaper models in the background, tests them against the constraints defined by the developer, and retains the smallest model that meets the accuracy requirements, which typically has significantly lower cost and latency.

As an LLM researcher, I kept getting asked by startups and friends to build specific LLM features that they could embed into their applications. I realized that most developers have to either 1) use existing low-level LLM clients (GPT-4/Claude), which can be unreliable, untyped, and pricey, or 2) pore through LangChain documentation for days to build something.

We built MonkeyPatch to make it easy for developers to inject LLM-powered functions into their code and create tests to ensure they behave as intended. Our goal is to help developers easily build apps and functions without worrying about reliability, cost, and latency, while following software engineering best practices.

We're only available in Python currently but are actively working on a TypeScript version. The repo has all the instructions you need to get up and running in a few minutes.

The world of LLMs is changing by the day, so we're not 100% sure how MonkeyPatch will evolve. For now, I'm just excited to share what we've been working on with the HN community. Would love to know what you guys think!

Open-source repo: https://github.com/monkeypatch/monkeypatch.py

Sample use-cases: https://github.com/monkeypatch/monkeypatch.py/tree/master/examples

Benchmarks: https://github.com/monkeypatch/monkeypatch.py#scaling-and-finetuning
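To give a concrete feel for the workflow, here's a minimal sketch modelled on the repo's examples. The import path and decorator names are taken from the README at the time of writing and may differ from the current API, so treat it as illustrative rather than canonical:

    from typing import Literal, Optional

    # Import path per the repo's README; it may change as the project evolves.
    from monkey_patch.monkey import Monkey as monkey

    @monkey.patch
    def classify_sentiment(msg: str) -> Optional[Literal["Good", "Bad"]]:
        """Classify the message as Good or Bad, or return None if it is neutral/unclear."""

    @monkey.align
    def align_classify_sentiment():
        """Assert-style examples that pin down the intended behavior and double as tests."""
        assert classify_sentiment("I love this product") == "Good"
        assert classify_sentiment("This is terrible") == "Bad"
        assert classify_sentiment("What time is it?") is None

The type hints are what give you the well-typed outputs mentioned above, and the align assertions are the constraints the distilled models are tested against.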
Nice! If I were to write a test for invariant aspects of the function (e.g., it produces valid JSON), will the system guarantee that those invariants are fulfilled? I suppose naively you could just do this by calling over and over and 'telling off' the model if it didn't get it right.
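Something like this sketch, where call_llm is a hypothetical stand-in for whatever client you're using (not a claim about how MonkeyPatch enforces invariants internally):

    import json

    def call_until_valid_json(call_llm, prompt, max_retries=3):
        """Naive retry loop: keep re-asking the model until it returns parseable JSON."""
        message = prompt
        for _ in range(max_retries):
            raw = call_llm(message)
            try:
                return json.loads(raw)  # invariant check: output must parse as JSON
            except json.JSONDecodeError as err:
                # 'Tell off' the model by feeding the failure back into the next attempt.
                message = (
                    f"{prompt}\n\nYour previous reply was not valid JSON ({err}). "
                    "Respond with valid JSON only."
                )
        raise ValueError(f"No valid JSON after {max_retries} attempts")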
This is like calling a python package "ListComprehension", that loops through a list and calls OpenAI's API on each item. Confusing and unproductive.
Hey Jack! Thanks for sharing this. The incremental fine-tuning of smaller and cheaper models for cost reduction is definitely a really interesting differentiator.
I had a few questions about the reliability of the LLM-powered functions MonkeyPatch facilitates and about the testing process. How does MonkeyPatch ensure the reliability of the functions it helps developers create, and do the tests provide sufficient confidence that outputs stay consistent? If the tests fall short of a 100% guarantee, how does MonkeyPatch address the same concerns that have historically made testing LLMs hard? Thanks.
"Monkey patching" is a specific programming term that people have been using for decades. What would possess someone to name a programming tool "MonkeyPatch" when the tool doesn't even have anything to do with patching?
Slightly tangential: is it unfair/unreasonable to judge a project by its name? It's hard not to interpret this project's name as the result of poor judgement. Is that sufficient cause to write off the project entirely? That may seem a tad dramatic but I feel that it's a fairly strong signal for how little effort I need to put into evaluating it.
Not including "pass" in a function definition in Python makes the code not compilable, and if we're using VSCode, PyCharm, etc. our IDEs will complain about this whenever the code is viewed. Is this an intentional design decision?
Using tests to align your model seems neat. How reliable is it? Won't models still hallucinate from time to time? How do you think about performance monitoring/management?
Gave it a shot, quite impressed.
I am implementing the Bedrock interface (OpenAI API access is limited from my location). Looks promising.
I'll check out fine-tuning with Bedrock, but I'm not sure whether that's possible or not.
Appreciate your work.
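In case it helps anyone else wiring up Bedrock, here's a minimal sketch of invoking a Claude model through boto3's bedrock-runtime client; the model ID, prompt format, and response keys below are for the Claude v2 text API and will differ for other Bedrock models:

    import json
    import boto3

    # Minimal Bedrock call; request/response shapes vary by model provider.
    bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

    body = json.dumps({
        "prompt": "\n\nHuman: Classify this tweet's sentiment: 'Love the new update!'\n\nAssistant:",
        "max_tokens_to_sample": 200,
    })

    response = bedrock.invoke_model(
        modelId="anthropic.claude-v2",  # swap for whichever Bedrock model you have access to
        body=body,
        contentType="application/json",
        accept="application/json",
    )

    print(json.loads(response["body"].read())["completion"])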