Hi HN,

I'm pleased to share Promptspot, an open-source (Apache License 2.0) project that helps automate testing of large language model (LLM) prompts against an array of input data.

Modern LLMs offer an enormous amount of leverage if you "teach the bot to fish", i.e. prompt the model with both a "system prompt" (which typically changes infrequently) and a dynamic input: application state, search results, recent activity, user profile data, and so on.

Existing playgrounds and prompt management systems often lack the rigor and flexibility this dynamic approach requires. As more teams adopt the pattern, I hope Promptspot can become a useful tool for testing and monitoring prompts and for centralizing them and their input data; there's a rough sketch of the kind of test loop I mean at the end of this post.

Promptspot currently supports OpenAI's text-davinci-003, and I hope to add support for more models soon. Contributions welcome!
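
To make the pattern concrete, here's a minimal sketch of running one static system prompt against an array of dynamic inputs. It assumes the pre-1.0 openai Python client; the prompt text and test cases are invented for illustration, and this is not Promptspot's API:

    import os
    import openai

    openai.api_key = os.environ["OPENAI_API_KEY"]

    # Static "system prompt": rarely changes.
    SYSTEM_PROMPT = "Summarize the user's recent account activity in one sentence."

    # Dynamic inputs: application state, recent activity, profile data, etc.
    # (made-up examples for illustration)
    test_inputs = [
        "alice: 3 failed logins, 1 password reset",
        "bob: uploaded 12 files, shared 2 folders",
    ]

    for dynamic_input in test_inputs:
        # Combine the fixed system prompt with the per-case input.
        prompt = f"{SYSTEM_PROMPT}\n\nInput: {dynamic_input}\n\nSummary:"
        resp = openai.Completion.create(
            model="text-davinci-003",
            prompt=prompt,
            max_tokens=64,
            temperature=0,
        )
        print(dynamic_input, "->", resp.choices[0].text.strip())

The idea behind Promptspot is to replace ad-hoc loops like this with a central place to define prompts and input cases, run them, and review the results.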