"A mathematical optimization problem, or just optimization problem, has the form

    minimize    f_0(x)
    subject to  f_i(x) ≤ b_i,  i = 1, ..., m.        (1.1)

Here the vector x = (x_1, ..., x_n) is the optimization variable of the problem, the function f_0 : R^n → R is the objective function, the functions f_i : R^n → R, i = 1, ..., m, are the (inequality) constraint functions, and the constants b_1, ..., b_m are the limits, or bounds, for the constraints. A vector x* is called optimal, or a solution of the problem (1.1), if it has the smallest objective value among all vectors that satisfy the constraints: for any z with f_1(z) ≤ b_1, ..., f_m(z) ≤ b_m, we have f_0(z) ≥ f_0(x*). We generally consider families or classes of optimization problems, characterized by particular forms of the objective and constraint functions. As an important example, the optimization problem (1.1) is called a linear program if..."

Boy, and that's just the opening paragraph of the introduction.

Exactly what arcane, elite-math prerequisites are necessary to even remotely understand this?
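For what it's worth, the definition in (1.1) says less than the notation suggests: "optimal" just means the feasible point with the smallest objective value. A minimal sketch in Python, using a made-up toy problem (the objective and constraints here are my own illustration, not from the book), spells it out:

```python
# Toy (hypothetical) instance of problem (1.1):
#   minimize   f0(x) = x1 + x2
#   subject to f1(x) = -x1            <= 0    (i.e. x1 >= 0)
#              f2(x) = -x2            <= 0    (i.e. x2 >= 0)
#              f3(x) = -(x1 + 2*x2)   <= -2   (i.e. x1 + 2*x2 >= 2)

def f0(x):
    return x[0] + x[1]

# Each constraint is a pair (f_i, b_i), matching the f_i(x) <= b_i form.
constraints = [
    (lambda x: -x[0], 0.0),
    (lambda x: -x[1], 0.0),
    (lambda x: -(x[0] + 2 * x[1]), -2.0),
]

def feasible(x):
    return all(fi(x) <= bi for fi, bi in constraints)

# Brute-force search over a grid -- nothing like how real solvers work,
# but it matches the book's definition of "optimal" word for word:
# among all vectors z that satisfy the constraints, pick the one
# with the smallest objective value f0.
candidates = [(i / 10, j / 10) for i in range(31) for j in range(31)]
x_star = min((x for x in candidates if feasible(x)), key=f0)
# Here x_star = (0.0, 1.0) with f0(x_star) = 1.0.
```

Whether that makes the prerequisites less arcane is another question, but the opening paragraph is mostly naming conventions, not deep mathematics.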