That reminds me of: <a href="https://en.wikipedia.org/wiki/Dialogical_logic" rel="nofollow">https://en.wikipedia.org/wiki/Dialogical_logic</a><p>also known as "game semantics", popularized by Hintikka, where the two players are "me" and "Nature", and they alternately work through the pieces of a quantified mathematical proposition.<p>Each time Nature finds a counterexample, she wins; if "me" manages to select the right OR branches and to produce witnesses for the \exists quantifiers, "me" wins.
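As a toy illustration of how such an evaluation game plays out (my own sketch: the nested-tuple encoding, the finite domain, and the name verifier_wins are all invented here, nothing standard), "me" moves at OR and \exists nodes while Nature moves at AND and \forall nodes:<p><pre><code>
# Sketch of a Hintikka-style evaluation game over a finite domain.
# "me" (the verifier) chooses at OR / exists nodes; Nature chooses at
# AND / forall nodes. The formula encoding is ad hoc, for illustration only.

DOMAIN = range(4)

def verifier_wins(formula, env=None):
    """True iff "me" has a winning strategy for this formula."""
    env = env or {}
    tag = formula[0]
    if tag == "atom":                 # ("atom", predicate, variable_names)
        _, pred, names = formula
        return pred(*(env[n] for n in names))
    if tag == "or":                   # "me" picks a disjunct
        return any(verifier_wins(f, env) for f in formula[1:])
    if tag == "and":                  # Nature picks a conjunct
        return all(verifier_wins(f, env) for f in formula[1:])
    if tag == "exists":               # "me" supplies a witness
        _, var, body = formula
        return any(verifier_wins(body, {**env, var: d}) for d in DOMAIN)
    if tag == "forall":               # Nature hunts for a counterexample
        _, var, body = formula
        return all(verifier_wins(body, {**env, var: d}) for d in DOMAIN)
    raise ValueError(f"unknown node: {tag}")

# "for every x there is a bigger y" fails on a finite domain (Nature plays x = 3),
# while "there is a y at least as big as every x" holds ("me" plays y = 3).
f1 = ("forall", "x", ("exists", "y", ("atom", lambda x, y: y > x, ("x", "y"))))
f2 = ("exists", "y", ("forall", "x", ("atom", lambda x, y: y >= x, ("x", "y"))))
print(verifier_wins(f1))  # False
print(verifier_wins(f2))  # True
</code></pre>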
I recently ran into the concept of a "variational inequality". A variational inequality expresses the optimality conditions for a problem in terms of some operator F: x is a solution if, for every feasible y, dot(F(x), y - x) >= 0.<p>Regular convex optimization is a special case of a variational inequality, if you define F(x) to be the gradient of the objective: when every direction is feasible, the inequality can only hold if the gradient is zero, which is the usual "set the gradient equal to zero" condition. But one can also define F(x) for a saddle-point problem, or even in some cases a general-sum Nash equilibrium.<p>Anyway, I don't know very much about this topic, but it was a mind-expanding idea to run into, since it generalizes the connection between optimization and game-theoretic equilibria, and it might be of interest to people who also find this work interesting.<p><a href="https://en.wikipedia.org/wiki/Variational_inequality" rel="nofollow">https://en.wikipedia.org/wiki/Variational_inequality</a>
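A minimal numerical sanity check of that condition (my own toy setup, not from the article: a convex quadratic over a box constraint, solved approximately with projected gradient descent):<p><pre><code>
# Check the variational inequality  dot(F(x*), y - x*) >= 0  with F = grad f,
# for f(x) = ||A x - b||^2 minimized over the box 0 <= x <= 1.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))
b = rng.standard_normal(5)

def F(x):                              # F(x) = gradient of the objective
    return 2 * A.T @ (A @ x - b)

# Projected gradient descent onto [0, 1]^3 to approximate the solution x*.
x = np.full(3, 0.5)
for _ in range(5000):
    x = np.clip(x - 0.01 * F(x), 0.0, 1.0)

# Sample feasible points y and evaluate dot(F(x*), y - x*) for each one.
ys = rng.uniform(0.0, 1.0, size=(10_000, 3))
gaps = ys @ F(x) - F(x) @ x
print(gaps.min())                      # >= 0, up to solver tolerance
</code></pre><p>Any tiny negative values in the output just reflect that projected gradient descent only reaches x* approximately.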