
3. The Karush-Kuhn-Tucker Conditions

Theorem 8.7. Consider the linear programming problem:

(8.24)
\[
P : \begin{cases}
\max \; cx \\
\text{s.t. } Ax \leq b \\
\phantom{\text{s.t. }} x \geq 0
\end{cases}
\]

with A ∈ R^{m×n}, b ∈ R^m and (row vector) c ∈ R^n. Then x^* ∈ R^n is an optimal solution² to P if and only if there exist (row) vectors w^* ∈ R^m and v^* ∈ R^n so that:

(8.25)
\[
\left.\begin{aligned}
Ax^* &\leq b \\
x^* &\geq 0
\end{aligned}\right\} \quad \text{Primal Feasibility}
\]

(8.26)
\[
\left.\begin{aligned}
w^*A - v^* &= c \\
w^* &\geq 0 \\
v^* &\geq 0
\end{aligned}\right\} \quad \text{Dual Feasibility}
\]

(8.27)
\[
\left.\begin{aligned}
w^*(Ax^* - b) &= 0 \\
v^* x^* &= 0
\end{aligned}\right\} \quad \text{Complementary Slackness}
\]
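To make the three conditions concrete, here is a minimal numerical check of Theorem 8.7 in Python on a small made-up instance of P; the data A, b, c and the candidate vectors x_star, w_star, v_star are illustrative assumptions and do not come from the notes.

import numpy as np

# Hypothetical instance of P: max c x  subject to  A x <= b, x >= 0.
A = np.array([[1.0, 0.0],
              [0.0, 2.0],
              [3.0, 2.0]])
b = np.array([4.0, 12.0, 18.0])
c = np.array([3.0, 5.0])              # the row vector c

# Candidate primal and dual solutions to test against the theorem.
x_star = np.array([2.0, 6.0])         # candidate optimal solution x*
w_star = np.array([0.0, 1.5, 1.0])    # one multiplier per row of A x <= b
v_star = w_star @ A - c               # dual feasibility forces v* = w* A - c

tol = 1e-9
primal = np.all(A @ x_star <= b + tol) and np.all(x_star >= -tol)
dual = np.all(w_star >= -tol) and np.all(v_star >= -tol)
slackness = abs(w_star @ (A @ x_star - b)) <= tol and abs(v_star @ x_star) <= tol

print(primal, dual, slackness)        # all True, so x* = (2, 6) is optimal here

Because all three checks pass, the theorem certifies the candidate point without re-solving the problem; this is how w^* and v^* act as an optimality certificate.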

Remark 8.8. The vectors w^* and v^* are sometimes called dual variables, for reasons that will become clear in the next chapter. They are also sometimes called Lagrange multipliers. You may have encountered Lagrange multipliers in your Math 230 or Math 231 class; these are the same kind of variables, except applied to linear optimization problems. There is one element in the dual variable vector w^* for each constraint of the form Ax ≤ b and one element in the dual variable vector v^* for each constraint of the form x ≥ 0.

Proof. Suppose that x^* is an optimal solution to Problem P. Consider only the binding constraints at x^*. For simplicity, write the constraints x ≥ 0 as −x ≤ 0. Then we can form a new system of equations of the form:

(8.28)
\[
\begin{bmatrix} A_E \\ E \end{bmatrix} x = b_E
\]

where E is a matrix of negative identity matrix rows corresponding to the variables x_k that are equal to zero. That is, if x_k = 0, then −x_k = 0 and the negative identity matrix row −e_k^T will appear in E, where −e_k^T ∈ R^{1×n}. Here b_E are the right hand sides of the binding constraints. Let:

\[
M = \begin{bmatrix} A_E \\ E \end{bmatrix}
\]
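The construction of M and b_E can be mirrored in code. The sketch below gathers the binding rows of Ax ≤ b together with a row −e_k^T for every x_k = 0 at a given feasible point; the helper name binding_system, the tolerance, and the evaluation point are illustrative assumptions, not part of the notes.

import numpy as np

def binding_system(A, b, x, tol=1e-9):
    # Rows of A x <= b that hold with equality at x.
    tight = np.abs(b - A @ x) <= tol
    A_E, b_top = A[tight], b[tight]
    # A row -e_k^T (with right hand side 0) for every x_k = 0.
    E = -np.eye(x.size)[np.abs(x) <= tol]
    M = np.vstack([A_E, E])
    b_E = np.concatenate([b_top, np.zeros(E.shape[0])])
    return M, b_E

A = np.array([[1.0, 0.0], [0.0, 2.0], [3.0, 2.0]])
b = np.array([4.0, 12.0, 18.0])

# At the feasible point x = (0, 6) of the earlier instance, the row 2 x2 <= 12
# binds and x1 = 0, so M stacks that row of A with -e_1^T and b_E = (12, 0).
M, b_E = binding_system(A, b, np.array([0.0, 6.0]))
print(M)
print(b_E)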

The fact that x^* is optimal implies that there is no improving direction d at the point x^*. That is, there is no d so that Md ≤ 0 and c^T d > 0. Otherwise, by moving in this direction we could find a new point x̂ = x^* + λd (with λ sufficiently small) so that:

\[
M\hat{x} = Mx^* + \lambda M d \leq b_E
\]
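The "no improving direction" claim can also be probed numerically. This sketch maximizes c^T d over directions satisfying Md ≤ 0 using scipy.optimize.linprog; the box −1 ≤ d_i ≤ 1 is an illustrative assumption added only to keep the auxiliary problem bounded, since only the sign of the optimal value matters.

import numpy as np
from scipy.optimize import linprog

# Binding-constraint matrix M at the candidate x* = (2, 6) of the instance used
# above: the rows 2 x2 <= 12 and 3 x1 + 2 x2 <= 18 bind and no x_k equals 0.
M = np.array([[0.0, 2.0],
              [3.0, 2.0]])
c = np.array([3.0, 5.0])

# Search for an improving direction: maximize c^T d subject to M d <= 0.
# linprog minimizes, so pass -c; the bounds keep d inside the unit box.
res = linprog(-c, A_ub=M, b_ub=np.zeros(2), bounds=[(-1.0, 1.0)] * 2, method="highs")

print(-res.fun)   # 0.0 (up to rounding): no d with M d <= 0 and c^T d > 0 exists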

² Thanks to Rich Benjamin for pointing out the fact I was missing ". . . is an optimal solution. . . "

