I recently ran into the concept of a "variational inequality". A variational inequality states the optimality conditions for a problem in terms of some operator F and a feasible set K: x is a solution if, for all y in K, dot(F(x), y - x) >= 0.

Regular convex optimization is a special case: take F(x) to be the gradient of the objective, and when the feasible set is the whole space the condition collapses to "set the gradient equal to zero". But one can also define F(x) for a saddle-point (min-max) problem, or even, in some cases, for a general-sum Nash equilibrium.

Anyway, I don't know very much about this topic, but it was a mind-expanding idea for me to run into, in terms of generalizing the connections between optimization and game-theoretic equilibria, and it might be of interest to people who also find this work interesting.

https://en.wikipedia.org/wiki/Variational_inequality
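
In case it helps, here's a rough numpy sketch (my own, not from the article) of what F looks like in the two cases: a box-constrained convex problem solved by projected gradient, and a bilinear saddle point solved with an extragradient step, which is one of the standard VI methods. The specific objectives, step sizes, and iteration counts are just placeholders I picked for illustration.

    import numpy as np

    rng = np.random.default_rng(0)

    # --- Example 1: constrained convex minimization as a VI ---------------
    # minimize f(x) = ||x - c||^2 over the box [0, 1]^n.
    # Here F(x) = grad f(x) = 2 * (x - c), and the feasible set K is the box.
    c = np.array([0.3, 1.7, -0.4])        # unconstrained minimizer, partly outside the box
    F_convex = lambda x: 2.0 * (x - c)
    project_box = lambda x: np.clip(x, 0.0, 1.0)

    # Projected gradient descent solves this VI.
    x = np.zeros_like(c)
    for _ in range(2000):
        x = project_box(x - 0.1 * F_convex(x))

    # Sanity check of the VI condition: dot(F(x), y - x) >= 0 for feasible y.
    ys = rng.uniform(0.0, 1.0, size=(1000, c.size))
    print("min VI residual over random feasible y (should be >= ~0):",
          np.min((ys - x) @ F_convex(x)))
    print("solution (expect [0.3, 1.0, 0.0]):", x)

    # --- Example 2: a bilinear saddle point as a VI ------------------------
    # min_x max_y  L(x, y) = x * y.  The equilibrium is (0, 0).
    # Stack z = (x, y) and take F(z) = (dL/dx, -dL/dy) = (y, -x).
    # F is not the gradient of any single objective here, but the same VI
    # condition still characterizes the saddle point.
    def F_saddle(z):
        x, y = z
        return np.array([y, -x])

    # Plain gradient descent/ascent spirals outward on this problem, so use
    # the extragradient step: look ahead along F, then update from the
    # lookahead point.
    z = np.array([1.0, 1.0])
    eta = 0.2
    for _ in range(2000):
        z_half = z - eta * F_saddle(z)     # lookahead step
        z = z - eta * F_saddle(z_half)     # corrected step
    print("saddle point (should be ~[0, 0]):", z)

The thing I found appealing is that the interface is the same in both examples, an operator F plus a projection onto the feasible set, even though F in the second case isn't the gradient of anything.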