I read about ingesting PDFs in Google Gemini here on HN last week [1]. This, together with some other thoughts I had on AI [2], made me want to create a summarize-PDF command-line utility: point it at a PDF and get a summary back. I prefer to write CLI tools in Rust since the binaries are fast, small, and easy to distribute via `cargo install` (or `cargo binstall`). That posed a problem: I wanted to use a cloud provider since my system doesn't have enough RAM for most state-of-the-art models, but at the same time I didn't want to force users of the tool onto the same API provider as me.<p>That's why I created the `transformrs` Rust library [3]. The name combines "transform", which is essentially what AI models do, with "Rust"; it is also a nod to the transformer architecture.<p>To allow for proper testing in CI, I considered making a mock endpoint to run my tests against. However, AI APIs are so cheap nowadays that I figured I could just as well run the tests against the actual endpoints, and that is what currently happens. Locally and in CI I have set up 5 different API keys and run multiple tests against all of these endpoints, so that's about 30 requests every time I change a file or push to CI (the tests run in about 10 seconds). It's incredible how stable these endpoints have become. 18 tests check whether the prompt "This is a test. Please respond with 'hello world'" actually gets "hello world" back, and so far it does >99% of the time. CI is far less flaky than I expected.<p>In any case, I hope people will find the library useful.
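To make the stability check concrete, here is a rough sketch of its shape. This is not the transformrs API; the model name is a placeholder, and the JSON body just mirrors the common OpenAI-compatible /chat/completions format. Only the request-building and reply-checking steps are shown; the actual HTTP call to each provider is omitted.

```rust
// Hypothetical sketch of the "respond with 'hello world'" stability test.
// NOT the transformrs API; "example-model" is a placeholder.

// Build the JSON body sent to an OpenAI-compatible /chat/completions endpoint.
fn request_body(model: &str) -> String {
    format!(
        r#"{{"model":"{model}","messages":[{{"role":"user","content":"This is a test. Please respond with 'hello world'"}}]}}"#
    )
}

// The check each test performs on the model's reply text.
fn passes(reply: &str) -> bool {
    reply.to_lowercase().contains("hello world")
}

fn main() {
    let body = request_body("example-model");
    assert!(body.contains("example-model"));

    // In the real tests, `reply` would come back from one of the 5 providers.
    assert!(passes("Hello world!"));
    assert!(!passes("Goodbye."));
    println!("ok");
}
```

Running this shape of check against every configured provider on each push is what keeps the suite at ~30 requests per run.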
I'm curious to hear feedback and suggestions.<p>[1]: <a href="https://news.ycombinator.com/item?id=42952605">https://news.ycombinator.com/item?id=42952605</a><p>[2]: <a href="https://huijzer.xyz/posts/ai-learning-rate/" rel="nofollow">https://huijzer.xyz/posts/ai-learning-rate/</a><p>[3]: <a href="https://github.com/transformrs/transformrs">https://github.com/transformrs/transformrs</a>