Hey HN, we are a small team from Europe working on agent security, and we have just released Invariant Guardrails, our open-source system for enforcing contextual security in AI agents and MCP-powered applications.<p>Guardrails acts as a transparent layer between your LLM/MCP server and your agent. It lets you define deterministic rules that block risky behavior: secret leakage, unsafe tool use, PII exposure, malicious code patterns, jailbreaks, loops, and more.<p>Rules are written in a Python-inspired DSL that enables contextual logic like the example below. The idea has its origins in OPA/Rego, i.e. policy languages used for authorization.<p><pre><code>raise "PII leakage in email" if:
    (out: ToolOutput) -> (call: ToolCall)
    any(pii(out.content))
    call is tool:send_email({ to: "^(?!.*@ourcompany.com$).*$" })
</code></pre>
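The rule above flags any trace in which PII from a tool result flows into a send_email call addressed outside ourcompany.com. To make the matching concrete, here is the kind of agent trace it would fire on, written as OpenAI-style chat messages (the exact trace schema is simplified for illustration and is an assumption here, not the canonical Guardrails format):<p><pre><code># Illustrative trace: a tool result containing PII is followed by a
# send_email tool call whose recipient is NOT an @ourcompany.com address.
trace = [
    {"role": "user", "content": "Look up the customer record and email a summary to our partner."},
    {   # ToolOutput: contains PII returned by a CRM lookup tool
        "role": "tool",
        "tool_call_id": "call_1",
        "content": "Customer: Jane Doe, phone +1 555 0100, outstanding balance $2,300",
    },
    {   # ToolCall: send_email to an external address -> both conditions hold, the rule fires
        "role": "assistant",
        "content": "",
        "tool_calls": [{
            "id": "call_2",
            "type": "function",
            "function": {
                "name": "send_email",
                "arguments": "{\"to\": \"partner@external.org\", \"body\": \"Customer: Jane Doe, +1 555 0100\"}",
            },
        }],
    },
]
</code></pre>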
It’s fast (low-latency, pipelined execution), supports both hosted and local deployments, and integrates via simple proxies, so your agent code stays unchanged (see the integration sketch at the end of this post).<p>Let us know what you think. We’ve found it quite helpful for MCP debugging and security analysis so far. Happy to answer questions!<p>Docs: <a href="https://explorer.invariantlabs.ai/docs" rel="nofollow">https://explorer.invariantlabs.ai/docs</a><p>Repo: <a href="https://github.com/invariantlabs-ai/invariant">https://github.com/invariantlabs-ai/invariant</a><p>Blog post: <a href="https://invariantlabs.ai/blog/guardrails" rel="nofollow">https://invariantlabs.ai/blog/guardrails</a><p>Playground: <a href="https://explorer.invariantlabs.ai/playground" rel="nofollow">https://explorer.invariantlabs.ai/playground</a>
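<p>For reference, here is roughly what the proxy integration looks like from the agent side, as a minimal sketch assuming an OpenAI-compatible gateway endpoint; the URL below is a placeholder, not the actual Guardrails endpoint. The only change to existing agent code is the client's base_url.<p><pre><code># Minimal sketch: route LLM traffic through a guardrailing proxy by changing
# the base_url of a standard OpenAI client. GUARDRAILS_PROXY_URL is a
# hypothetical placeholder; use the endpoint your Guardrails deployment exposes.
from openai import OpenAI

GUARDRAILS_PROXY_URL = "http://localhost:8005/v1"  # placeholder endpoint

client = OpenAI(
    base_url=GUARDRAILS_PROXY_URL,  # the proxy sits between the agent and the upstream LLM and enforces your rules
    api_key="sk-...",               # your regular upstream API key
)

# Agent code stays otherwise unchanged.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Summarize the customer record and email it."}],
)
print(response.choices[0].message.content)
</code></pre>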