Not too impressed with the course.<p>My qualifications: I got started in optimization
scheduling the fleet at FedEx. In the words of
FedEx founder, COB, and CEO F. Smith, my work "Solved
the most important problem facing the start of
Federal Express" and the output from my software was
"An amazing document".<p>Later my Ph.D. research was in optimization and, in
part, what I'd wished I'd known while at FedEx.<p>Since my Ph.D., there was a problem in 'resource allocation' that became a 0-1 integer linear programming problem with 40,000 constraints and 600,000 variables. I derived some math, wrote some software, and in 905 seconds on a 90 MHz PC got a feasible solution within 0.025% of optimality.<p>And there have been other problems.<p>For the video, a good first suggestion for an
attack on a knapsack problem is via dynamic
programming.<p>By a wide margin, the most important topic in
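To make the dynamic-programming suggestion concrete, here is a minimal sketch for the 0-1 knapsack problem (the function and names are mine, just for illustration):

```python
def knapsack(values, weights, capacity):
    # best[c] = max total value achievable with total weight <= c.
    best = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        # Scan capacities downward so each item is taken at most once.
        for c in range(capacity, w - 1, -1):
            best[c] = max(best[c], best[c - w] + v)
    return best[capacity]
```

The running time is O(n * capacity) -- pseudo-polynomial, which is often fine in practice when the capacity is modest, even though the problem is NP-hard in general.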
optimization is the simplex algorithm for linear
programming. While the algorithm directly solves
many problems in practice, it is the most important
tool in solving optimization problems that are not
linear programming. There, usually, linear programming serves as a local linear approximation in an iterative scheme of successively better approximations.<p>Much of the reason for the power of linear
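For readers who have not seen the simplex algorithm, here is a toy dense-tableau version (a sketch only, my own code) for problems in the form maximize c.x subject to A.x <= b, x >= 0, with b >= 0 so the slack variables give a starting basic feasible solution; serious codes use the revised simplex method with sparse factorizations.

```python
def simplex(c, A, b):
    # Maximize c.x subject to A.x <= b, x >= 0, assuming b >= 0.
    m, n = len(A), len(c)
    # Tableau rows: [A | I (slacks) | rhs]; last row is the negated objective.
    T = [A[i][:] + [1.0 if j == i else 0.0 for j in range(m)] + [b[i]]
         for i in range(m)]
    T.append([-ci for ci in c] + [0.0] * (m + 1))
    basis = list(range(n, n + m))          # slacks start in the basis
    while True:
        # Bland's rule: entering column = lowest index with a
        # negative reduced cost (rules out cycling).
        col = next((j for j in range(n + m) if T[-1][j] < -1e-9), None)
        if col is None:
            break                           # optimal
        # Ratio test chooses the leaving row.
        rows = [(T[i][-1] / T[i][col], basis[i], i)
                for i in range(m) if T[i][col] > 1e-9]
        if not rows:
            raise ValueError("unbounded")
        _, _, row = min(rows)
        basis[row] = col
        # Pivot: normalize the row, then clear the column elsewhere.
        p = T[row][col]
        T[row] = [t / p for t in T[row]]
        for i in range(m + 1):
            if i != row and T[i][col]:
                f = T[i][col]
                T[i] = [a - f * r for a, r in zip(T[i], T[row])]
    x = [0.0] * n
    for i, bi in enumerate(basis):
        if bi < n:
            x[bi] = T[i][-1]
    return T[-1][-1], x
```

E.g., maximize 3x + 5y subject to x <= 4, 2y <= 12, 3x + 2y <= 18 gives the optimum 36 at (2, 6).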
programming as a tool in larger iterative schemes
is the several ways one can take a linear programming
problem and an optimal solution for it, make some
changes in the problem, start with the optimal
solution for the old problem, and get an optimal
solution for the new problem very quickly.<p>Good starts in linear programming include:<p>Mokhtar S. Bazaraa and John J. Jarvis, 'Linear
Programming and Network Flows', ISBN 0-471-06015-1,
John Wiley and Sons.<p>Vasek Chvatal, 'Linear Programming', ISBN
0-7167-1587-2, W. H. Freeman, New York.<p>Yes, the min-cost capacitated (each arc has a
maximum flow it can carry) network flow problem, a
special case of which is the transshipment problem
(find the least-cost way to ship a single good from factories to warehouses; also the source of the Kantorovich Nobel prize in economics), is a linear programming problem, and there the simplex algorithm
simplifies and becomes the network simplex
algorithm. Curiously, and of importance for fast
computation, in the network simplex algorithm a
basic feasible solution corresponds to a spanning
tree in the network. For that problem, don't miss
the W. Cunningham idea of a 'strongly feasible
basis' or the more recent polynomial algorithm of D.
Bertsekas.<p>A significant fraction of real problems in
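The network simplex algorithm itself is more than a comment will hold, but here is a short successive-shortest-path solver (my own sketch, a different algorithm for the same min-cost capacitated flow problem); note that with integer capacities every augmentation, and hence the final flow, is integral:

```python
def min_cost_flow(n, edges, s, t, want):
    # edges: list of (tail, head, capacity, unit_cost).
    # Successive cheapest augmenting paths via Bellman-Ford on the
    # residual network; arcs are stored in forward/reverse pairs so
    # e ^ 1 is the reverse of arc e.
    to, cap, cost, adj = [], [], [], [[] for _ in range(n)]
    for u, v, c, w in edges:
        adj[u].append(len(to)); to.append(v); cap.append(c); cost.append(w)
        adj[v].append(len(to)); to.append(u); cap.append(0); cost.append(-w)
    sent = total = 0
    while sent < want:
        INF = float("inf")
        dist, prev = [INF] * n, [-1] * n
        dist[s] = 0
        for _ in range(n - 1):              # Bellman-Ford
            for u in range(n):
                if dist[u] == INF:
                    continue
                for e in adj[u]:
                    if cap[e] > 0 and dist[u] + cost[e] < dist[to[e]]:
                        dist[to[e]] = dist[u] + cost[e]
                        prev[to[e]] = e
        if dist[t] == INF:
            break                           # no augmenting path is left
        push, v = want - sent, t
        while v != s:                       # bottleneck on the path
            push = min(push, cap[prev[v]]); v = to[prev[v] ^ 1]
        v = t
        while v != s:                       # apply the augmentation
            e = prev[v]; cap[e] -= push; cap[e ^ 1] += push; v = to[e ^ 1]
        sent += push
        total += push * dist[t]
    return sent, total
```

On a tiny transshipment instance, e.g. arcs (0,1,2,1), (0,2,2,3), (1,2,1,1), (1,3,1,4), (2,3,2,2), shipping 3 units from node 0 to node 3 costs 14, and all arc flows come out as integers with no roundoff concerns.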
combinatorial and discrete optimization can be
solved as min-cost capacitated network flow
problems via the network simplex algorithm. A big
reason is, if the flow capacities are all integers
and if one starts with an integer basic feasible solution, then the simplex algorithm maintains integer basic feasible solutions for no extra work and, with the usual assumptions, produces an optimal integer basic feasible solution. So this is a case of linear programming where one gets integer linear programming for no extra effort; really, since nearly all the arithmetic is just on integers, one need not be concerned about roundoff error and gets the solution for less effort.<p>Yes, in combinatorial and discrete optimization,
along with the rest of optimization, linear algebra
helps.<p>Curiously, one of the more powerful approaches to
combinatorial optimization is a technique from
nonlinear programming called Lagrangian relaxation,
and there commonly the main computational step is
linear programming. With that technique, some
nonlinear duality theory can yield some bounds on
how far one is from an optimal solution; such
duality theory is what yielded the bound of 0.025%
above.<p>For combinatorial optimization there is, say,<p>George L. Nemhauser and Laurence A. Wolsey, 'Integer
and Combinatorial Optimization', ISBN 0-471-35943-2,
John Wiley & Sons, Inc., New York.<p>William J. Cook, William H. Cunningham, William R.
Pulleyblank, and Alexander Schrijver, 'Combinatorial
Optimization', ISBN 0-471-55894-X, John Wiley &
Sons, Inc., New York.<p>In the OP, the emphasis on NP-complete and
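The Lagrangian relaxation bound described above can be sketched on the knapsack problem (names and instance are mine, for illustration): relaxing the capacity constraint with a multiplier lam >= 0 makes the maximization separate item by item, and every lam gives an upper bound on the true optimum.

```python
def dual_bound(values, weights, capacity, lam):
    # Relax the capacity constraint with multiplier lam >= 0:
    #   max sum(v_i x_i) - lam * (sum(w_i x_i) - capacity), x_i in {0, 1},
    # which separates: take item i exactly when v_i - lam * w_i > 0.
    return lam * capacity + sum(max(0.0, v - lam * w)
                                for v, w in zip(values, weights))

def best_dual_bound(values, weights, capacity):
    # The dual function is convex piecewise linear in lam with
    # breakpoints at the ratios v_i / w_i, so checking those
    # (plus lam = 0) minimizes it exactly.
    cands = [0.0] + [v / w for v, w in zip(values, weights)]
    return min(dual_bound(values, weights, capacity, lam) for lam in cands)
```

For values (60, 100, 120), weights (10, 20, 30), and capacity 50, the best bound is 240 while the true optimum is 220; the bound certifies that no feasible solution can beat 240, which is exactly the kind of guarantee behind the 0.025% figure above.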
algorithms that are worst-case exponential is a bit exaggerated. E.g., as in the work of Klee and Minty, the simplex algorithm is worst-case exponential, but in practice its expected running time is a low-order polynomial -- for why, there was some good work by K.-H. Borgwardt.<p>For integer, combinatorial, and discrete
optimization, I have yet to see a claim that getting
approximately optimal solutions is NP-complete. In particular, if in a real problem one has saved $10 million and has a bound saying one can't save more than another $2000, then why spend millions trying to save the last tiny fraction of the last penny and guarantee to have done so? If one is willing to sacrifice the last penny, then it is not so clear (so far at least to me) that one really is being forced into exponential algorithms, either average or worst case.<p>The claim in the OP that combinatorial and discrete
optimization are part of computer science looks like
a case of 'academic theft'! Such optimization has
long been regarded as part of applied math and, there,
part of operations research. The material has been
of interest in, say, departments of mathematical
sciences. Some parts of optimization, e.g., control
theory, have been regarded as parts of electrical
engineering. At times it does appear that academic
computer science wants to regard every topic that
makes heavy use of computing as part of computer
science! It looks like computer science has gotten
tired of just quicksort, AVL trees, BNF, YACC, and
RDBMS! :-)!<p>The claims in the OP about the importance of
optimization in the economy are a bit overenthusiastic. Maybe the importance would, could, and should be there, and maybe in Australia it is there, but my experience in the US is that there are
essentially no significant non-academic career
opportunities in optimization. E.g., around the year 2000, I had a nice, long talk with one of
the leading US headhunters in technical fields, and
he finally confessed that in the previous year in
all of the US there had been exactly one recruiting
request in operations research. He "packed it in"
(from the movie 'The Sting', i.e., gave up) and went
to law school!<p>Here is a good future -- although a wildly long
shot -- for integer, combinatorial, and discrete
optimization: At present, all but quite simple
problems in such optimization are solved by some
experts attacking each problem one at a time,
exploiting special structure of each problem, trying
various techniques, and, hopefully, finally getting
some good results. It's custom, one-off, bench-scale, expert, creative hand work and often,
really, some applied research.<p>In addition, there have long been programs that can
be used to pose or enter problems in integer,
combinatorial, and discrete optimization suitable
for solution by a 'solver'. The problem is, since
the programs are general purpose, the 'solvers' to
be used are also general purpose and, without the
hand work, too often in practice are not powerful
enough.<p>Presumably a polynomial algorithm of low degree that
showed that P = NP could be the core of a general
purpose solver powerful enough to make solving such
optimization problems easy and routine.<p>Lacking such a solver, in practice patience with
such optimization has long been a bit thin.<p>So, what the heck happened? In much of physics,
say, planetary motion, we can write down some
equations the real system must solve. If the
equations have a unique solution, then that solution
must be the right one for the real system. At times
since early in WWII, such physics was of enormous
value for at least the US DoD.<p>So, given such an equation, solve it. This
technique often worked for, say, initial value
problems for relatively simple ordinary differential
equations. Alas, for the partial differential
equations of fluid flow, the Navier-Stokes
equations, the situation was usually a bit
challenging.<p>Also in problems for the US DoD, e.g., 'logistics'
or how to move a large army across a large ocean in
least time within capabilities of existing
resources, one could write down some equations and other
mathematical specifications that a 'best' solution
had to satisfy. So, then, to say how to move the
army, just 'solve' the equations, etc. This work
was called 'operations research', and at times the
US DoD was very enthusiastic about it. With enough
US Federal Government research grants to academics,
parts of academics got enthusiastic about it. And
so did Bell Labs.<p>But there was no law of the universe that said that
finding solutions to such mathematical
specifications would be easy. Yes, Navier-Stokes is
another example of difficulty. Now, in
cryptography, so is the fundamental theorem of
arithmetic, that is, factoring a positive integer
into a product of prime numbers.<p>The Clay Mathematics Institute has more than one prize of $1 million available for progress on such things.<p>Net, it does very much appear that for now building a popular Web site and selling ads on it is a much easier way to make money! And, at
times, in such work some applied math, possibly new,
can be an advantage. But, clearly for now, the
grand goals of integer, combinatorial, and discrete
optimization are too difficult for 'prime time' in
practice in mainstream business.<p>Or: solving problem A can make a lot of money if we have a lot of applied math and computing. So, let's attack
problem A. Alas, that much applied math and
computing, or even just part of the computing, may
be much more valuable for solving other problems!