Terrific!<p>I took the liberty of translating your module to TLA+, my formal tool of choice (recently, I've been learning Lean, which is similar to Coq, but I find it much harder than TLA+). I tried to stick to your naming and style (which deviate somewhat from TLA+'s idiomatic styling), and, as you can see, the result is extremely similar, but I guess that when I try writing the proofs, some changes to the definitions would need to be made.<p>One major difference is that proofs in TLA+ are completely declarative. You just state the target and list the axioms, lemmas and definitions that need to be used or expanded. Usually, the proof requires breaking the theorem apart into multiple intermediate steps (I've sketched what that looks like right after this snippet), but in the case of the proof you listed, TLA+ is able to find the proof completely automatically:<p><pre><code> LEMMA supremumUniqueness ≜
   ∀ P ∈ SUBSET T : ∀ x1, x2 ∈ T : supremum(P, x1) ∧ supremum(P, x2) ⇒ x1 = x2
PROOF BY antisym DEF supremum (* we tell the prover to use the antisym axiom and expand the definition of supremum *)
</code></pre>
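For the curious, here's roughly what the same lemma looks like when broken into explicit steps. The SUFFICES/QED skeleton is standard TLAPS; the particular decomposition is just an illustrative sketch I haven't run through the prover, and the lemma name is mine, to avoid clashing with the one in the module:<p><pre><code> LEMMA supremumUniquenessStepped ≜
   ∀ P ∈ SUBSET T : ∀ x1, x2 ∈ T : supremum(P, x1) ∧ supremum(P, x2) ⇒ x1 = x2
 PROOF
   <1> SUFFICES ASSUME NEW P ∈ SUBSET T, NEW x1 ∈ T, NEW x2 ∈ T,
                       supremum(P, x1), supremum(P, x2)
                PROVE  x1 = x2
       OBVIOUS
   <1>1. x1 ≼ x2 (* x2 is an upper bound of P, and x1 is below every upper bound *)
       BY DEF supremum
   <1>2. x2 ≼ x1 (* the symmetric argument *)
       BY DEF supremum
   <1> QED
       BY <1>1, <1>2, antisym
</code></pre>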
The natDiff lemma doesn't even need a hand-written proof, as it's deduced automatically from the built-in axioms/theorems:<p><pre><code> LEMMA natDiff ≜ ∀ n1, n2 ∈ Nat : ∃ n3 ∈ Nat : n1 = n2 + n3 ∨ n2 = n1 + n3
PROOF OBVIOUS (* this is automatically verified using just built-in axioms/theorems *)
</code></pre>
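As an aside, if OBVIOUS ever fails on an arithmetic goal like this one, TLAPS also lets you name a specific backend (this assumes the module also EXTENDS TLAPS, which is what defines the backend pragmas); something like:<p><pre><code> LEMMA natDiff ≜ ∀ n1, n2 ∈ Nat : ∃ n3 ∈ Nat : n1 = n2 + n3 ∨ n2 = n1 + n3
 PROOF BY SMT (* hand the goal to the SMT backend instead of the default provers *)
</code></pre>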
Another difference is that TLA+ is untyped (which makes the notation more similar to ordinary math), but, as you can see, this doesn't make much of a difference. The only things that differ from ordinary math notation are that function application uses square brackets (parentheses are used for operator substitution; operators are different from functions, but that's a subtlety; you can think of operators as polymorphic functions or as macros), set comprehension uses a colon instead of a vertical bar, a colon is also used in lieu of parentheses after quantifiers, and aligned lists of connectives (conjunctions and disjunctions) are read as if there were parentheses surrounding each aligned clause. Also, `SUBSET T` means the powerset of T. (There's a tiny scratch example of this notation after the module.)<p>Here's the module (without proofs, except for the one above):<p><pre><code> ------------------------------- MODULE Kleene -------------------------------
EXTENDS Naturals
(*
Assumption: Let (T, leq) be a partially ordered set, or poset. A poset is
a set with a binary relation which is reflexive, transitive, and
antisymmetric.
*)
CONSTANT T
CONSTANT _ ≼ _
AXIOM refl ≜ ∀ x ∈ T : x ≼ x
AXIOM trans ≜ ∀ x, y, z ∈ T : x ≼ y ∧ y ≼ z ⇒ x ≼ z
AXIOM antisym ≜ ∀ x, y ∈ T : x ≼ y ∧ y ≼ x ⇒ x = y
(*
A supremum of a subset of T is a least element of T which is greater than
or equal to every element in the subset. This is also called a join or least
upper bound.
*)
supremum(P, x1) ≜ ∧ x1 ∈ T
                  ∧ ∀ x2 ∈ P : x2 ≼ x1
                  ∧ ∀ x3 ∈ T : (∀ x2 ∈ P : x2 ≼ x3) ⇒ x1 ≼ x3
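(* E.g., if T were {0, 1, 2} ordered by ≤, then supremum({0, 1}, 1) would hold. *)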
(*
A directed subset of T is a non-empty subset of T such that any two elements
in the subset have an upper bound in the subset.
*)
directed(P) ≜ ∧ P ≠ {}
              ∧ ∀ x1, x2 ∈ P : ∃ x3 ∈ P : x1 ≼ x3 ∧ x2 ≼ x3
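(* E.g., any non-empty totally ordered subset (chain) of T, such as the Kleene chain below, is directed. *)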
(*
Assumption: Let the partial order be directed-complete. That means every
directed subset has a supremum.
*)
AXIOM directedComplete ≜ ∀ P ∈ SUBSET T : directed(P) ⇒ ∃ x : supremum(P, x)
(*
Assumption: Let T have a least element called bottom. This makes our partial
order a pointed directed-complete partial order.
*)
CONSTANT bottom
AXIOM bottomLeast ≜ bottom ∈ T ∧ ∀ x ∈ T : bottom ≼ x
(*
A monotone function is one which preserves order. We only need to consider
functions for which the domain and codomain are identical and have the same
order relation, but this need not be the case for monotone functions in
general.
*)
monotone(f) ≜ ∀ x1, x2 ∈ DOMAIN f : x1 ≼ x2 ⇒ f[x1] ≼ f[x2]
(*
A function is Scott-continuous if it preserves suprema of directed subsets.
We only need to consider functions for which the domain and codomain are
identical and have the same order relation, but this need not be the case for
continuous functions in general.
*)
Range(f) ≜ { f[x] : x ∈ DOMAIN f }
continuous(f) ≜
  ∀ P ∈ SUBSET T : ∀ x1 ∈ T :
    directed(P) ∧ supremum(P, x1) ⇒ supremum({ f[x2] : x2 ∈ P }, f[x1])
(* This function performs iterated application of a function to bottom. *)
RECURSIVE approx(_, _)
approx(f, n) ≜ IF n = 0 THEN bottom ELSE f[approx(f, n-1)]
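(* For example, approx(f, 3) = f[f[f[bottom]]]. *)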
(* We will need this simple lemma about pairs of natural numbers. *)
LEMMA natDiff ≜ ∀ n1, n2 ∈ Nat : ∃ n3 ∈ Nat : n1 = n2 + n3 ∨ n2 = n1 + n3
(* The supremum of a subset of T, if it exists, is unique. *)
LEMMA supremumUniqueness ≜
  ∀ P ∈ SUBSET T : ∀ x1, x2 ∈ T : supremum(P, x1) ∧ supremum(P, x2) ⇒ x1 = x2
PROOF BY antisym DEF supremum
(* Scott-continuity implies monotonicity. *)
LEMMA continuousImpliesMonotone ≜ ∀ f : continuous(f) ⇒ monotone(f)
(*
Iterated applications of a monotone function f to bottom form an ω-chain,
which means they are a totally ordered subset of T. This ω-chain is called
the ascending Kleene chain of f.
*)
LEMMA omegaChain ≜
  ∀ f : ∀ n, m ∈ Nat :
    monotone(f) ⇒
      approx(f, n) ≼ approx(f, m) ∨ approx(f, m) ≼ approx(f, n)
(* The ascending Kleene chain of f is directed. *)
LEMMA kleeneChainDirected ≜
  ∀ f : monotone(f) ⇒ directed({ approx(f, n) : n ∈ Nat })
(*
The Kleene fixed-point theorem states that the least fixed-point of a Scott-
continuous function over a pointed directed-complete partial order exists and
is the supremum of the ascending Kleene chain.
*)
THEOREM kleene ≜
  ∀ f : continuous(f) ⇒
    ∃ x1 ∈ T :
      ∧ supremum({ approx(f, n) : n ∈ Nat }, x1)
      ∧ f[x1] = x1
      ∧ ∀ x2 ∈ T : f[x2] = x2 ⇒ x1 ≼ x2
=============================================================================</code></pre>
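And since I promised a scratch example of the notation: here's a tiny snippet (not part of the module; the names and the set 0..3 are made up for illustration, assuming Naturals is in scope) showing square-bracket function application, set comprehension with a colon, and SUBSET as powerset:<p><pre><code> square ≜ [ n ∈ 0..3 ↦ n * n ]       \* a function; DOMAIN square = 0..3, square[2] = 4
 squares ≜ { square[n] : n ∈ 0..3 }  \* set comprehension; equals {0, 1, 4, 9}
 smallSets ≜ SUBSET {1, 2}           \* powerset; equals {{}, {1}, {2}, {1, 2}}
</code></pre>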