We just released plan-lint – a tiny static-analysis tool that catches "obvious-stupid" failures in AI agent plans before they reach runtime.<p>GitHub repo -> <a href="https://github.com/cirbuk/plan-lint">https://github.com/cirbuk/plan-lint</a><p>There's also a companion post on how we handle safety with a 4-step safety stack (“No Safe Words”) → <a href="https://mercurialsolo.substack.com/p/no-safe-words" rel="nofollow">https://mercurialsolo.substack.com/p/no-safe-words</a><p>Why?<p>Agents now emit machine-readable JSON/DSL plans. Most prod incidents (loops, privilege spikes, raw secrets) could have been caught by scanning those plans offline, yet everyone focuses on runtime guardrails.
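For concreteness, here's a toy plan of the kind we mean – the field names are illustrative only, not plan-lint's actual schema:<p><pre><code>{
  "plan_id": "refund-042",
  "steps": [
    {"id": 1, "tool": "db.query",  "args": {"sql": "SELECT * FROM users"}},
    {"id": 2, "tool": "http.post", "args": {"url": "https://api.example.com/refund",
                                            "api_key": "sk-live-..."}},
    {"id": 3, "tool": "goto",      "args": {"step": 1}}
  ]
}
</code></pre><p>Step 2 embeds a raw secret and step 3 jumps back to step 1 – both are detectable before a single token is spent.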
What it does<p>* Schema + policy validation (JSONSchema / YAML / OPA)<p>* Data-flow + taint checks for secrets & PII (sketched below)<p>* Loop detection (graph cycle, also sketched below)<p>* Risk score 0-1 with a configurable fail threshold<p>* Plugin rules via entry_points
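The taint and loop checks are simple enough to sketch. This is a minimal stand-alone version over the toy plan shape above – the idea, not the library's internals:<p><pre><code>import json
import re

# Tokens that look like raw credentials in plan arguments
# (illustrative; a real taint check tracks data flow, not just patterns).
SECRET_RE = re.compile(r"sk-live-|AKIA|-----BEGIN [A-Z ]*PRIVATE KEY")

def find_secret_taint(plan):
    """Flag any step whose serialized args contain a secret-looking token."""
    for step in plan["steps"]:
        if SECRET_RE.search(json.dumps(step.get("args", {}))):
            yield f"step {step['id']}: raw secret in args"

def find_loops(plan):
    """Detect cycles in the step graph via depth-first search (three-color)."""
    steps = plan["steps"]
    edges = {s["id"]: [] for s in steps}
    for i, s in enumerate(steps):
        if s["tool"] == "goto":               # explicit jump
            edges[s["id"]].append(s["args"]["step"])
        elif i + 1 < len(steps):              # implicit fall-through
            edges[s["id"]].append(steps[i + 1]["id"])
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {n: WHITE for n in edges}
    def dfs(n):
        color[n] = GRAY
        for m in edges.get(n, []):
            if color.get(m) == GRAY:          # back-edge: we found a cycle
                yield f"cycle through step {m}"
            elif color.get(m) == WHITE:
                yield from dfs(m)
        color[n] = BLACK
    for n in edges:
        if color[n] == WHITE:
            yield from dfs(n)

if __name__ == "__main__":
    plan = json.load(open("plan.json"))
    for finding in (*find_secret_taint(plan), *find_loops(plan)):
        print("FAIL:", finding)
</code></pre><p>On the toy plan above this prints one FAIL for the embedded api_key and one for the 3 → 1 cycle.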
Runs in <50 ms for 100-step plans, zero token cost.
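Custom rules hook in as Python entry points. Here's a sketch of a third-party rule package – the entry-point group name and rule signature are assumptions for illustration, not the documented API:<p><pre><code># my_rules.py – hypothetical third-party rule module.
# Declared in the package's pyproject.toml along these lines
# (group name assumed; check the repo for the real one):
#
#   [project.entry-points."plan_lint.rules"]
#   budget_cap = "my_rules:check_budget"

def check_budget(plan, max_usd=100.0):
    """Fail any step whose declared spend exceeds a hard cap."""
    findings = []
    for step in plan.get("steps", []):
        amount = step.get("args", {}).get("amount_usd", 0)
        if amount > max_usd:
            findings.append(
                f"step {step['id']}: spend {amount} exceeds cap {max_usd}"
            )
    return findings
</code></pre>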
How are you dealing with safety (budget overruns, token leaks) when deploying agents in prod with tool access?