Let me preface this by linking to simonw’s latest prompt injection blog post. [0]<p>Geiger does not <i>solve</i> prompt injection, but it is biased towards false positives and the false positive rate can be curtailed by experimenting with the `task` parameter. It is a stop-gap measure that‘s meant to be used <i>right now</i> for services that are exposing the LLM to untrusted potentially-poisoned post-GPT information such as raw web searches. The injection test set I use is as wide as all public injections. There‘s some secret ingredient as well but it‘s not anything groundbreaking and it can be replicated independently with enough effort.<p>This is as simple as possible. There’s as little JavaScript as possible on the website to prevent abuse and there’s no JavaScript at all in the app. Payments are handled by Stripe.<p>Try it out and let me know what you think. Do get in touch if you find it doesn’t work for you or if you need anything specific.<p>[0] <a href="https://simonwillison.net/2023/May/2/prompt-injection-explained/" rel="nofollow">https://simonwillison.net/2023/May/2/prompt-injection-explai...</a>