Hi!<p>I've found myself repeatedly writing little scripts to make bulk calls to LLMs for various tasks. For example, running some analysis over a large list of records.<p>There are a few "gotchas" to doing this. For example, some service providers impose rate limits, and some models won't reliably return JSON even when you ask for it.<p>So I've written a command for this.<p>What I've tried to do here is let the user split prompts and configuration however they see fit.<p>For example, you can have a single prompt file that includes the API key, rate limit, and other settings all together, or break these up into multiple files, keep some parts local, or override parameters.<p>This makes it easy to share settings between activities while keeping prompts in simple, committable files of narrow scope.<p>I hope this can be of use to someone. Thanks for reading.
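To make the layering idea concrete, here's a minimal sketch in Python of how such a merge might behave. All file roles and key names here are hypothetical illustrations, not the tool's actual format:

```python
# Hypothetical layered configuration: later sources override earlier ones.
shared = {"model": "gpt-4o", "rate_limit": 10, "format": "json"}  # committed, shared file
local = {"api_key": "sk-..."}  # kept out of version control
overrides = {"rate_limit": 2}  # per-run override

# Shallow merge; keys from later dicts win.
config = {**shared, **local, **overrides}
print(config["rate_limit"])  # the per-run override wins
print(config["model"])       # inherited from the shared file
```

This way the shared file can be committed, the API key stays local, and a one-off run can still tweak a single parameter.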