Linear programming duality

The conventional statement of linear programming duality is completely inscrutable.

  • Primal: maximise b^T x subject to Ax \leq c and x \geq 0.
  • Dual: minimise c^T y subject to A^T y \geq b and y \geq 0.

If either problem has a finite optimum then so does the other, and the optima agree.
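Here is a sketch (not from the original statement; the matrix and vectors are made up for illustration) of what the theorem buys you on a tiny instance. By weak duality, any primal-feasible value is at most any dual-feasible value, so exhibiting feasible x and y with equal objectives certifies that both are optimal.

```python
# Primal: max b^T x s.t. Ax <= c, x >= 0.
# Dual:   min c^T y s.t. A^T y >= b, y >= 0.
A = [[1, 2],
     [3, 1]]
b = [3, 5]   # primal objective / dual right-hand side
c = [4, 6]   # primal right-hand side / dual objective

x = [1.6, 1.2]  # candidate primal solution
y = [2.4, 0.2]  # candidate dual solution

# Primal feasibility: Ax <= c and x >= 0.
assert all(sum(A[i][j] * x[j] for j in range(2)) <= c[i] + 1e-9 for i in range(2))
assert all(xj >= 0 for xj in x)

# Dual feasibility: A^T y >= b and y >= 0.
assert all(sum(A[i][j] * y[i] for i in range(2)) >= b[j] - 1e-9 for j in range(2))
assert all(yi >= 0 for yi in y)

primal_value = sum(b[j] * x[j] for j in range(2))
dual_value = sum(c[i] * y[i] for i in range(2))
# Both objectives equal 10.8, so each solution certifies the other's optimality.
```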

But I do understand concrete examples.  Suppose we want to pack the maximum number of vertex-disjoint copies of a graph F into a graph G.  In the fractional relaxation, we want to assign each copy of F a weight \lambda_F \in [0,1] such that the total weight of the copies of F meeting each vertex is at most 1, and the sum of all the weights is as large as possible.  Formally, we want to

maximise \sum \lambda_F subject to \sum_{F \ni v} \lambda_F \leq 1 and \lambda_F \geq 0,

which dualises to

minimise \sum \mu_v subject to \sum_{v \in F} \mu_v \geq 1 and \mu_v \geq 0.

That is, we want to weight vertices as cheaply as possible so that every copy of F contains at least 1 (fractional) vertex.
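To make the pair of programs concrete, here is a sketch (my own small example, not from the text) with F a triangle and G the complete graph K4. Rather than calling an LP solver, it exhibits feasible solutions to both problems and uses weak duality as the certificate: giving every triangle weight 1/3 is primal-feasible, giving every vertex weight 1/3 is dual-feasible, and both objectives equal 4/3, so both are optimal.

```python
from itertools import combinations

vertices = range(4)
triangles = list(combinations(vertices, 3))  # the four triangles of K4

lam = {F: 1/3 for F in triangles}  # candidate lambda_F
mu = {v: 1/3 for v in vertices}    # candidate mu_v

# Primal feasibility: the triangles through each vertex have total weight <= 1.
assert all(sum(lam[F] for F in triangles if v in F) <= 1 + 1e-9 for v in vertices)

# Dual feasibility: each triangle contains total vertex weight >= 1.
assert all(sum(mu[v] for v in F) >= 1 - 1e-9 for F in triangles)

primal_value = sum(lam.values())  # 4/3
dual_value = sum(mu.values())     # 4/3: equal, so both solutions are optimal
```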

To get from the primal to the dual, all we had to do was change a max to a min, swap the variables indexed by F for variables indexed by v, and flip one inequality.  This is so easy that I never get it wrong when I’m writing on paper or a board!  But I thought for years that I didn’t understand linear programming duality.

(There are some features of this problem that make things particularly easy: the vectors b and c in the conventional statement both have all their entries equal to 1, and the matrix A is 0/1-valued.  This is very often the case for problems coming from combinatorics.  It also matters that I chose not to make explicit that the inequalities should hold for every v (or F, as appropriate).)

Returning to the general statement, I think I’d be happier with

  • Primal: maximise \sum_j b_j x_j subject to \sum_j A_{ij}x_j \leq c_i and x \geq 0.
  • Dual: minimise \sum_i c_i y_i subject to \sum_i A_{ij} y_i \geq b_j and y \geq 0.

My real objection might be to matrix transposes and a tendency to use notation for matrix multiplication just because it’s there.  In this setting a matrix is just a function that takes arguments of two different types (v and F or, if you must, i and j), and I’d rather label the types explicitly than rely on an arbitrary convention.