
98 STOCHASTIC PROGRAMMING

(presumably unique) solution x̂ of problem (8.1) some constraint i_0 ∈ I is active, i.e. g_{i_0}(x̂) = 0. For any fixed r > 0, minimization of (8.13) will not allow us to approach the solution x̂, since obviously, by the definition of a barrier function, this would drive the new objective F_{rs} to +∞. Hence it seems reasonable to drive the parameter r downwards to zero as well.

With

B_1 := {x | g_i(x) ≤ 0, i ∈ I},   B_2 := {x | g_i(x) ≤ 0, i ∈ J}

we have B = B_1 ∩ B_2, and for r > 0 we may expect finite values of F_{rs} only for x ∈ B_1^0 := {x | g_i(x) < 0, i ∈ I}. We close this short presentation of general penalty methods with a statement showing that, under mild assumptions, a method of this type can be controlled so as to deliver the behaviour we want.

Proposition 1.26  Let f, g_i, i = 1, ···, m, be convex and assume that

B_1^0 ∩ B_2 ≠ ∅

and that B = B_1 ∩ B_2 is bounded. Then for {r_k} and {s_k} strictly monotone sequences decreasing to zero there exists an index k_0 such that for all k ≥ k_0 the modified objective function F_{r_k s_k} attains its (free) minimum at some point x^{(k)} with x^{(k)} ∈ B_1^0.

The sequence {x^{(k)} | k ≥ k_0} is bounded, and any of its accumulation points is a solution of the original problem (8.1). With γ the optimal value of (8.1), the following relations hold:

lim_{k→∞} f(x^{(k)}) = γ,

lim_{k→∞} r_k ∑_{i∈I} ϕ(g_i(x^{(k)})) = 0,

lim_{k→∞} (1/s_k) ∑_{i∈J} ψ(g_i(x^{(k)})) = 0.
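The mechanism of Proposition 1.26 can be sketched on a tiny one-dimensional instance. The problem below (min (x−2)² subject to x − 1 ≤ 0, with solution x* = 1 and the constraint active there), the barrier ϕ(y) = −1/y, and the ternary-search inner solver are all our own illustrative choices, not prescribed by the text; the point is only that driving r_k down to zero lets the free minimizers x^{(k)} of F_{r_k} approach x* from inside B_1^0.

```python
# Barrier-method sketch for: min (x - 2)^2  s.t.  g(x) = x - 1 <= 0.
# The solution is x* = 1, where the constraint is active, so for any fixed
# r > 0 the barrier keeps the free minimizer strictly inside {x | g(x) < 0};
# only as r_k -> 0 do the iterates x^(k) approach x*.

def F(x, r):
    g = x - 1.0                               # constraint function, need g < 0
    return (x - 2.0) ** 2 + r * (-1.0 / g)    # f(x) + r * phi(g(x)), phi(y) = -1/y

def minimize_free(r, lo=-5.0, hi=1.0 - 1e-12, iters=200):
    # F(., r) is convex on the interior of the feasible set,
    # so a simple ternary search finds its free minimum.
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if F(m1, r) < F(m2, r):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2.0

r = 1.0
for k in range(30):        # r_k strictly decreasing to zero
    x = minimize_free(r)
    r *= 0.5

print(round(x, 3))         # -> 1.0: the iterates approach the solution x* = 1
```

Note that every iterate satisfies x^{(k)} < 1, i.e. x^{(k)} ∈ B_1^0, exactly as the proposition asserts; the barrier term r_k · ϕ(g(x^{(k)})) tends to 0 along the sequence.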

1.8.2.4 Lagrangian methods

As mentioned at the end of Section 1.8.1, knowledge of the proper multiplier vector û in the Lagrange function L(x, u) = f(x) + ∑_{i=1}^{m} u_i g_i(x) for problem (8.1) would allow us to solve the free optimization problem

min_{x ∈ IR^n} L(x, û)

instead of the constrained problem

min f(x)
s.t. g_i(x) ≤ 0,  i = 1, ···, m.
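A minimal sketch of this idea, on a toy problem of our own choosing: for min (x−2)² subject to g(x) = x − 1 ≤ 0, the KKT stationarity condition 2(x*−2) + û = 0 at the solution x* = 1 gives û = 2, and minimizing L(x, û) freely then recovers the constrained solution.

```python
# With the proper multiplier u_hat known, the free minimum of
# L(x, u) = f(x) + u * g(x) coincides with the constrained solution.
# Toy problem (our choice): min (x - 2)^2  s.t.  g(x) = x - 1 <= 0,
# solution x* = 1; KKT stationarity at x* yields u_hat = 2.

def f(x):
    return (x - 2.0) ** 2

def g(x):
    return x - 1.0

u_hat = 2.0

def L(x):
    return f(x) + u_hat * g(x)

# Minimize the free problem by plain gradient descent (L is smooth, convex).
x = 0.0
for _ in range(2000):
    grad = 2.0 * (x - 2.0) + u_hat   # dL/dx
    x -= 0.1 * grad

print(round(x, 6))   # -> 1.0, the solution of the constrained problem
```

The difficulty in practice, of course, is that û is not known in advance; this is what the dual and augmented-Lagrangian schemes discussed in the surrounding text are designed to address.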
