If anyone is interested in learning more about MIP and other discrete optimization topics, there is a great MOOC on Coursera (link: <a href="https://www.coursera.org/learn/discrete-optimization/" rel="nofollow">https://www.coursera.org/learn/discrete-optimization/</a>). The lectures are quite engaging, delivered by the enthusiastic and slightly eccentric Prof. Van Hentenryck. Also, there are no stupid quizzes, just 5 problems (with 6-8 instances each, ranging from comparatively small to huge) that you need to solve (in any programming language) in order to pass. The only drawback is the significant investment of time it requires.
In my experience, every sufficiently complicated model becomes non-linear in some respect. For example, you might want to work with margins for time slots that are non-linear but smooth.<p>So, even though I've worked with constrained linear programming in the past, I tend to prefer meta-heuristic algorithms such as simulated annealing or tabu search. Although these might not provide the 'best' solution, they provide a wider range of modelling tools.<p>To elaborate on a use case: let's say we want to plan a high-school roster. Teachers might have a maximum of 8 hours of work per day, but if we make a hard cut-off at that limit, we might miss an interesting 8:15 schedule that gives a teacher more time for a proper lunch. If, furthermore, that schedule does not break the 40 hr./week rule, it might be workable. Modelling the limit as a soft penalty also gives the search algorithm a usable gradient, so the search space becomes smoother and easier to navigate. Finally, if we return several top solutions, we give a human planner information on how the problem is (over-)constrained.
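A minimal simulated-annealing sketch of this idea in Python (the roster data, the 15-minute move size, and the cooling schedule are all made up for illustration): instead of a hard 8-hour cutoff, the cost penalizes the overrun quadratically, so an 8:15 day is only slightly worse than an 8:00 day.

```python
import math
import random

random.seed(0)

MAX_DAILY = 8.0  # soft limit on teaching hours per day (illustrative)

def cost(hours):
    # Quadratic penalty on the overrun: no cliff at exactly 8 hours,
    # which gives the search a usable gradient.
    return sum(max(0.0, h - MAX_DAILY) ** 2 for h in hours)

def neighbor(hours):
    # Move 15 minutes of work from one teacher-day to another.
    h = list(hours)
    i, j = random.sample(range(len(h)), 2)
    h[i] -= 0.25
    h[j] += 0.25
    return h

def anneal(hours, steps=5000, t0=1.0):
    cur, cur_cost = hours, cost(hours)
    best, best_cost = cur, cur_cost
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-9  # linear cooling schedule
        cand = neighbor(cur)
        cand_cost = cost(cand)
        # Always accept improvements; accept worse moves with a probability
        # that shrinks as the temperature drops.
        if cand_cost < cur_cost or random.random() < math.exp((cur_cost - cand_cost) / t):
            cur, cur_cost = cand, cand_cost
            if cur_cost < best_cost:
                best, best_cost = cur, cur_cost
    return best, best_cost

start = [10.0, 9.0, 6.0, 5.0]  # two overloaded teacher-days
best, best_cost = anneal(start)
```

On this toy instance the annealer drives the overtime penalty down by shifting work onto the underloaded days; a real roster would add more soft terms (the 40 hr./week rule, room clashes, lunch breaks) to the same cost function.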
Hi all, author here. I'm very humbled that this made it to the front page! A few things I want to say:<p>1) Most of this work is a simplified copy of the papers I linked to. Special thanks to James Bookbinder and his team at the University of Waterloo.<p>2) I'm an OR novice, and this is my first optimization project. I feel now that I can roughly model the flight assignment domain, but what the solvers do is a mystery to me, and when I read about Lagrangian relaxation and column generation, I'm lost. Fortunately, I haven't needed those techniques in this project.<p>3) The mathematical model, when written down, looks mystifying even to me, the author, and that's unfortunate. My goal in writing this post was to reduce the mystery and explain the model in plain English (plus math notation), but in the end I'm afraid it still looks eye-glazingly complicated. Data science can be all too happy to cloak itself in mystery by writing down what are actually pretty basic equations. I don't have a good solution to this.<p>Most of all, I got such joy out of building the model one constraint at a time and seeing the solver follow my directions and spit out optimal solutions. I couldn't believe it worked. It was like I had discovered electricity. Lastly, much credit goes to Flexport leadership for allowing me, an OR novice, to embark on this optimization project.
I love MIP. It's super useful.<p>I like to think of MIP as a framework that meets halfway between a natural modeling language and a natural solution language. The reason it's good is that the framework naturally lends itself to abusing linear programming as a subroutine, which can cut out a large amount of the search space. And you tend to find you can encode many problems without too much difficulty with just a handful of tricks.<p>You can encode a great deal into MIP: piecewise-linearized versions of non-linear problems, or logical constraints.<p>Some interesting applications I've seen or played with:<p>- Verifying neural nets <a href="https://github.com/vtjeng/MIPVerify.jl" rel="nofollow">https://github.com/vtjeng/MIPVerify.jl</a><p>- Solving the classical Ising model <a href="http://www.philipzucker.com/solving-the-ising-model-using-a-mixed-integer-linear-program-solver-gurobi/" rel="nofollow">http://www.philipzucker.com/solving-the-ising-model-using-a-...</a><p>- Global robot-arm kinematics <a href="http://www.philipzucker.com/2d-robot-arm-inverse-kinematics-using-mixed-integer-programming-in-cvxpy/" rel="nofollow">http://www.philipzucker.com/2d-robot-arm-inverse-kinematics-...</a><p>- Model predictive control with collision constraints <a href="http://www.philipzucker.com/flappy-bird-as-a-mixed-integer-program/" rel="nofollow">http://www.philipzucker.com/flappy-bird-as-a-mixed-integer-p...</a> See Russ Tedrake's group <a href="https://groups.csail.mit.edu/locomotion/pubs.shtml" rel="nofollow">https://groups.csail.mit.edu/locomotion/pubs.shtml</a>
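One of those tricks is the big-M encoding of a logical constraint. A hypothetical sketch using SciPy's `milp` (the problem data and the value of M are made up): enforce "either x &le; 3 or y &le; 3" with a binary indicator z, while maximizing x + y.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Maximize x + y subject to (x <= 3) OR (y <= 3), with 0 <= x, y <= 10.
# A binary z selects which disjunct is enforced (big-M trick):
#   x <= 3 + M*z        (binding when z = 0)
#   y <= 3 + M*(1 - z)  (binding when z = 1)
M = 100.0
c = np.array([-1.0, -1.0, 0.0])   # milp minimizes, so negate x + y

A = np.array([[1.0, 0.0, -M],     # x - M*z <= 3
              [0.0, 1.0,  M]])    # y + M*z <= 3 + M
cons = LinearConstraint(A, ub=[3.0, 3.0 + M])
bounds = Bounds(lb=[0, 0, 0], ub=[10, 10, 1])
integrality = np.array([0, 0, 1])  # only z is restricted to integers

res = milp(c, constraints=cons, integrality=integrality, bounds=bounds)
# Either disjunct allows the other variable to reach its bound of 10,
# so the optimal x + y is 10 + 3 = 13.
```

M only needs to be large enough to make the relaxed constraint vacuous (here anything &ge; 7 works); huge M values weaken the LP relaxation, which is exactly the kind of detail the solver's performance hinges on.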
MIP is NP-complete; you can reduce any problem in NP to it. But there is a large and interesting set of MIP problems where the continuous relaxation to ordinary linear optimization ("linear programming" in the jargon, although that's as misleading a term as "analog computer" or "military intelligence") gives you enough interesting information about the MIP problem to solve it enormously more efficiently than you can with a generic SAT solver.<p>One of the notes in Dercuano is an overview of the field I put together last year: <a href="http://canonical.org/~kragen/dercuano-20191230.tar.gz" rel="nofollow">http://canonical.org/~kragen/dercuano-20191230.tar.gz</a>, file notes/linear-optimization-landscape.html. It seems that the best free-software solver is COIN-OR CBC, and the best free-software modeling language is GMPL, aka GNU MathProg, which is compatible with the popular proprietary linear-optimization modeling language AMPL and is distributed as part of GLPK, a somewhat weaker solver; it's much easier to get GLPK solving a problem than CBC, but the relevant incantations are in the note. Some of the proprietary solvers are much better; most of them are available on the NEOS Server.<p>I didn't evaluate embedded DSLs like Pyomo in any depth, unfortunately.<p>I'm somewhat disappointed with the Flexport post, which I feel is badly formatted, uses needlessly obscure notation, and never gets around to actually writing down a model in MathProg or Pyomo or anything similar. But I guess my own note is only a little better, and the Flexport article at least gives a MIP model of a nontrivial problem.
(There are many more such examples in the GNU MathProg distribution, and MIPLIB has a wealth of extremely nontrivial ones.)<p>Mathematical optimization in general (optimization in the sense of minimizing a possibly constrained function, not in the sense of making code run faster) amounts to programming at a higher level; I think it was Norvig who described it as "the ultimate in agile software development", because you basically just write the tests. And linear optimization is the best-developed subfield of mathematical optimization: linear solvers can solve enormously larger problems than more general solvers can.<p>(There's also an inferior PDF rendering of Dercuano for cellphones that can't handle tarballs: <a href="http://canonical.org/~kragen/dercuano.20191230.pdf" rel="nofollow">http://canonical.org/~kragen/dercuano.20191230.pdf</a> )
It's pretty cool that these guys get to put 1950s math to work in real life. In my experience, I was never able to sell this idea to anyone, based on the (potentially justified) excuse that it was completely unmaintainable without a math guy around.<p>Edit: I can personally attest that potential bootstrappers would find fertile ground making a service that does this stuff for transportation companies. I know of many who basically don't try to solve this problem because they don't have the talent and don't trust their ability to maintain something bought from a consultant. They need a service they can offload it to.
We're seeing here the debate over model-based vs. method-based approaches to optimization. My argument for the model-based approach is reflected in these two sets of slides:<p><a href="https://ampl.com/MEETINGS/TALKS/2019_09_Bolzano_Model-Based.pdf" rel="nofollow">https://ampl.com/MEETINGS/TALKS/2019_09_Bolzano_Model-Based....</a>
<a href="https://ampl.com/MEETINGS/TALKS/2018_08_Lille_Tutorial1.pdf" rel="nofollow">https://ampl.com/MEETINGS/TALKS/2018_08_Lille_Tutorial1.pdf</a><p>A recording of the first presentation will be available soon. The first uses a simpler example, but both were prepared for audiences not committed to the model-based approach. Their purpose was to familiarize people with model-based optimization and the circumstances in which it has clear advantages.
They did a poor job of explaining what integer programming is. Here's the Wikipedia intro:<p>"An integer programming problem is a mathematical optimization or feasibility program in which some or all of the variables are restricted to be integers."<p><a href="https://en.wikipedia.org/wiki/Integer_programming" rel="nofollow">https://en.wikipedia.org/wiki/Integer_programming</a>
CVX solves a relaxed problem trivially in MATLAB, and then you can heuristically round to integers. <a href="http://stanford.edu/class/ee364b/lectures.html" rel="nofollow">http://stanford.edu/class/ee364b/lectures.html</a> (under the "l1 convex cardinality" lectures)
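A minimal relax-and-round sketch of the same idea in Python (SciPy's `linprog` standing in for CVX/MATLAB; the toy problem data is made up):

```python
import numpy as np
from scipy.optimize import linprog

# Toy problem: minimize -x0 - x1 (i.e. maximize x0 + x1) subject to
#   2*x0 + x1 <= 4,  x0 + 2*x1 <= 4,  0 <= x <= 3,  x integer.
c = np.array([-1.0, -1.0])
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
b = np.array([4.0, 4.0])

# 1. Solve the continuous (LP) relaxation; its optimum is fractional,
#    at x = (4/3, 4/3).
rel = linprog(c, A_ub=A, b_ub=b, bounds=[(0, 3), (0, 3)])

# 2. Heuristically round the relaxed solution to the nearest integers.
x_int = np.round(rel.x)

# 3. Check feasibility of the rounded point before accepting it.
feasible = bool(np.all(A @ x_int <= b + 1e-9))
```

Rounding the relaxation is only a heuristic: the rounded point can violate a constraint (a repair step would go after the feasibility check), and even a feasible rounding can be far from the true integer optimum, which is why branch-and-bound solvers exist.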
Oh, lucky you, your optimization problem is linear. I do something similar, but we have to use heuristics because our problem can't be modelled as a linear one.