
BASIC CONCEPTS 89

and observing that $\nabla_u L(x,u) \le 0$ simply repeats the constraints $g_i(x) \le 0\ \forall i$ of our original program (8.1), the Kuhn–Tucker conditions now read as

$$
\left.\begin{aligned}
\nabla_x L(\hat x,\hat u) &= 0,\\
\nabla_u L(\hat x,\hat u) &\le 0,\\
\hat u^{\mathrm T}\nabla_u L(\hat x,\hat u) &= 0,\\
\hat u &\ge 0.
\end{aligned}\right\}\tag{8.10}
$$
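The four conditions in (8.10) can be verified mechanically at a candidate point. As a minimal numerical sketch, assume the illustrative convex program $\min\, x_1^2 + x_2^2$ subject to $g(x) = 1 - x_1 - x_2 \le 0$ (this example, its solution $\hat x = (1/2, 1/2)$, and the multiplier $\hat u = 1$ are assumptions for illustration, not taken from the text):

```python
import numpy as np

# Illustrative convex program: min x1^2 + x2^2  s.t.  g(x) = 1 - x1 - x2 <= 0.
# Candidate Kuhn-Tucker point (assumed for this sketch):
x_hat = np.array([0.5, 0.5])
u_hat = np.array([1.0])

def grad_f(x):   # gradient of the objective f(x) = x1^2 + x2^2
    return 2.0 * x

def g(x):        # constraint values, g(x) <= 0 required
    return np.array([1.0 - x[0] - x[1]])

def grad_g(x):   # Jacobian of the constraints (one row per constraint)
    return np.array([[-1.0, -1.0]])

# The four conditions of (8.10):
grad_x_L = grad_f(x_hat) + grad_g(x_hat).T @ u_hat  # nabla_x L(x_hat, u_hat) = 0
grad_u_L = g(x_hat)                                 # nabla_u L(x_hat, u_hat) <= 0
comp     = float(u_hat @ g(x_hat))                  # u_hat^T nabla_u L(x_hat, u_hat) = 0

assert np.allclose(grad_x_L, 0.0)   # stationarity in x
assert np.all(grad_u_L <= 1e-12)    # primal feasibility
assert abs(comp) < 1e-12            # complementarity
assert np.all(u_hat >= 0.0)         # dual feasibility
```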

Assume now that the functions $f, g_i,\ i = 1,\dots,m$, are convex. Then for any fixed $u \ge 0$ the Lagrange function is obviously convex in $x$. For $(\hat x,\hat u)$ satisfying the Kuhn–Tucker conditions, it follows by Proposition 1.21 that for any arbitrary $x$

$$L(x,\hat u) - L(\hat x,\hat u) \ge (x-\hat x)^{\mathrm T}\nabla_x L(\hat x,\hat u) = 0$$

and hence

$$L(\hat x,\hat u) \le L(x,\hat u)\quad \forall x \in \mathbb{R}^n.$$

On the other hand, since $\nabla_u L(\hat x,\hat u) \le 0$ is equivalent to $g_i(\hat x) \le 0\ \forall i$, and the Kuhn–Tucker conditions assert that $\hat u^{\mathrm T}\nabla_u L(\hat x,\hat u) = \sum_{i=1}^{m}\hat u_i g_i(\hat x) = 0$, it follows that

$$L(\hat x,u) \le L(\hat x,\hat u)\quad \forall u \ge 0.$$

Hence we have the following.

Proposition 1.25 Given that the functions $f, g_i,\ i = 1,\dots,m$, in problem (8.1) are convex, any Kuhn–Tucker point, i.e. any pair $(\hat x,\hat u)$ satisfying the Kuhn–Tucker conditions, is a saddle point of the Lagrange function, i.e. it satisfies

$$\forall u \ge 0\quad L(\hat x,u) \le L(\hat x,\hat u) \le L(x,\hat u)\quad \forall x \in \mathbb{R}^n.$$

Furthermore, it follows by the complementarity conditions that

$$L(\hat x,\hat u) = f(\hat x).$$
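The saddle-point inequalities can be checked by random sampling. A minimal sketch, again assuming the illustrative problem $\min\, x_1^2 + x_2^2$ subject to $1 - x_1 - x_2 \le 0$ with the (assumed) Kuhn–Tucker point $\hat x = (1/2, 1/2)$, $\hat u = 1$:

```python
import numpy as np

rng = np.random.default_rng(0)

def L(x, u):
    # Lagrange function of the assumed illustrative problem:
    # L(x, u) = f(x) + u * g(x) with f(x) = x1^2 + x2^2, g(x) = 1 - x1 - x2.
    return x[0]**2 + x[1]**2 + u * (1.0 - x[0] - x[1])

x_hat = np.array([0.5, 0.5])
u_hat = 1.0

# Sample many (x, u) pairs and confirm both saddle-point inequalities.
for _ in range(1000):
    x = rng.normal(size=2) * 3.0        # arbitrary x in R^2
    u = rng.uniform(0.0, 5.0)           # arbitrary u >= 0
    assert L(x_hat, u) <= L(x_hat, u_hat) + 1e-12   # L(x_hat, u) <= L(x_hat, u_hat)
    assert L(x_hat, u_hat) <= L(x, u_hat) + 1e-12   # L(x_hat, u_hat) <= L(x, u_hat)

# Complementarity gives L(x_hat, u_hat) = f(x_hat):
assert abs(L(x_hat, u_hat) - (x_hat[0]**2 + x_hat[1]**2)) < 1e-12
```

For this example the left inequality holds with equality, since $g(\hat x) = 0$ makes $L(\hat x, u)$ constant in $u$.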

It is an easy exercise to show that any saddle point $(\hat x,\hat u)$, with $\hat u \ge 0$, of the Lagrange function satisfies the Kuhn–Tucker conditions (8.10). Therefore, if we knew the right multiplier vector $\hat u$ in advance, solving the constrained optimization problem (8.1) would be equivalent to solving the unconstrained optimization problem $\min_{x\in\mathbb{R}^n} L(x,\hat u)$. This observation can be seen as the basic motivation for the development of a class of solution techniques known in the literature as Lagrangian methods.
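The observation above can be illustrated directly: with the multiplier fixed, any unconstrained method applies. A minimal sketch using plain gradient descent on $L(\cdot,\hat u)$, under the same assumed example ($f(x) = x_1^2 + x_2^2$, $g(x) = 1 - x_1 - x_2$, $\hat u = 1$); the step length $1/4$ and starting point are likewise illustrative choices:

```python
import numpy as np

# Assumed example: L(x, u_hat) = x1^2 + x2^2 + u_hat * (1 - x1 - x2), u_hat = 1.
# With u_hat known, minimizing L(., u_hat) is an UNconstrained problem, so
# plain gradient descent recovers the constrained minimizer x_hat = (1/2, 1/2).
u_hat = 1.0
x = np.array([5.0, -3.0])      # arbitrary starting point

for _ in range(200):
    grad = 2.0 * x + u_hat * np.array([-1.0, -1.0])  # nabla_x L(x, u_hat)
    x = x - 0.25 * grad                              # fixed step length 1/4

assert np.allclose(x, [0.5, 0.5])
```

The practical difficulty, of course, is that $\hat u$ is not known in advance; Lagrangian methods therefore iterate on the multiplier estimate as well.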

1.8.2 Solution Techniques<br />

When solving stochastic programs, we need to use known procedures from both linear and nonlinear programming, or at least adopt their underlying
