Using LLMs for formal specs / formal modeling makes a lot of sense to me. If an LLM can do the work of going from informal English-language specs to TLA+ / Dafny / etc., then it can hook into a very mature ecosystem of automated proof tools.

I'm picturing it something like this:

1. A human developer says, "if a user isn't authenticated, they shouldn't be able to place an order."

2. The LLM takes this, plus its knowledge of the codebase, and turns it into a formal spec -- e.g., "there is no code path where User.is_authenticated is false and Orders.place() is called."

3. Existing code analysis tools confirm the property or find a counterexample.
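Step 3, on a toy model, can be sketched as a brute-force reachability check over system states, in the spirit of what TLC (TLA+'s model checker) does. Everything below (the state shape, the transition names, the guard) is a hypothetical illustration, not the output of any real tool:

```python
from collections import deque

def transitions(state, guarded=True):
    """Yield successor states. State is (is_authenticated, unauth_order_placed)."""
    authed, violated = state
    yield (True, violated)                      # login
    yield (False, violated)                     # logout
    if not guarded or authed:                   # place_order; the guard encodes the spec
        yield (authed, violated or not authed)  # ordering while unauthenticated records a violation

def check(invariant, guarded=True, init=(False, False)):
    """BFS over reachable states; return a path to a violating state, or None if the invariant holds."""
    seen = {init}
    queue = deque([(init, (init,))])
    while queue:
        state, path = queue.popleft()
        if not invariant(state):
            return path
        for nxt in transitions(state, guarded):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + (nxt,)))
    return None

# The spec from step 2: no order is ever placed by an unauthenticated user.
no_unauth_orders = lambda state: not state[1]

print(check(no_unauth_orders, guarded=True))   # None: the guarded model satisfies the spec
print(check(no_unauth_orders, guarded=False))  # a short path ending in a violating state
```

The point is the division of labor: the LLM's job ends once the invariant (`no_unauth_orders` here) is written down; the exhaustive search for counterexamples is mechanical and already well understood.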