

be treated computationally. In particular, we shall verify that, under our assumptions, a program with joint chance constraints becomes a convex program, and that programs with separate chance constraints may be reformulated as deterministic convex programs amenable to standard nonlinear programming algorithms.

4.1 Joint Chance Constrained Problems

Let us concentrate on the particular stochastic linear program

\[
\left.
\begin{aligned}
\min\ & c^T x \\
\text{s.t.}\ & P(\{\xi \mid Tx \ge \xi\}) \ge \alpha, \\
& Dx = d, \\
& x \ge 0
\end{aligned}
\ \right\}
\qquad (1.1)
\]

For this problem we know from Propositions 1.5–1.7 in Section 1.6 that if the distribution function F is quasi-concave then the feasible set B(α) is a closed convex set.
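For orientation, the set in question can be restated under the notation of (1.1), with F denoting the distribution function of ξ̃ (a restatement of the feasible set, not a new definition):

\[
B(\alpha) = \{\, x \mid F(Tx) \ge \alpha,\ Dx = d,\ x \ge 0 \,\},
\qquad F(Tx) = P(\{\xi \mid Tx \ge \xi\}).
\]

If F is quasi-concave, its upper level set {y | F(y) ≥ α} is convex, the preimage of that set under the linear map x ↦ Tx is again convex, and intersecting with the polyhedron {x | Dx = d, x ≥ 0} preserves convexity; this is the content of the cited propositions.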

Under the assumption that ξ̃ has a (multivariate) normal distribution, we know that F is even log-concave. We therefore have a smooth convex program. For this particular case there have been attempts to adapt penalty and cutting-plane methods to solve (1.1). Further, variants of the reduced gradient method as sketched in Section 1.8.2 have been designed.

These approaches all attempt to avoid the “exact” numerical integration associated with the evaluation of F(Tx) = P({ξ | Tx ≥ ξ}) and its gradient ∇ₓF(Tx) by relaxing the probabilistic constraint

\[
P(\{\xi \mid Tx \ge \xi\}) \ge \alpha.
\]
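To make concrete why this evaluation is expensive, here is a minimal sketch, not the PROCON implementation, of computing F(Tx) for a normally distributed ξ̃ by numerical integration of the multivariate normal distribution function, together with a finite-difference approximation of ∇ₓF(Tx); the matrix T, the mean μ, the covariance Σ, the point x, and the level α below are illustrative assumptions:

```python
import numpy as np
from scipy.stats import multivariate_normal

# Illustrative data (assumptions for this sketch, not taken from the text):
# xi ~ N(mu, Sigma) in R^2, and T maps x in R^3 into R^2.
T = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0]])
mu = np.array([1.0, 1.0])
Sigma = np.array([[1.0, 0.3],
                  [0.3, 1.0]])
xi = multivariate_normal(mean=mu, cov=Sigma)

def G(x):
    # G(x) = P({xi | Tx >= xi}) = F(Tx): the multivariate normal
    # distribution function evaluated at Tx. SciPy computes this by
    # numerical integration, which is exactly the "exact" evaluation
    # the methods mentioned above try to avoid.
    return xi.cdf(T @ x)

def grad_G(x, h=1e-5):
    # Forward-difference approximation of the gradient of G at x;
    # every component costs one additional integration.
    g0 = G(x)
    return np.array([(G(x + h * e) - g0) / h for e in np.eye(x.size)])

x = np.array([1.5, 0.5, 1.0])
alpha = 0.9
print("G(x) =", G(x), "feasible for alpha = 0.9:", G(x) >= alpha)
print("finite-difference grad G(x) =", grad_G(x))
```

Each evaluation of G requires one multidimensional integration, and the forward-difference gradient needs one more integration per component of x, which is why the methods above try to get by with relaxed or approximate evaluations of the constraint.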

To see how this may be realized, let us briefly sketch one iteration of the reduced gradient method’s variant implemented in PROCON, a computer program for minimizing a function under PRObabilistic CONstraints.

With the notation

\[
G(x) := P(\{\xi \mid Tx \ge \xi\}),
\]

let x be feasible in

\[
\left.
\begin{aligned}
\min\ & c^T x \\
\text{s.t.}\ & G(x) \ge \alpha, \\
& Dx = d, \\
& x \ge 0,
\end{aligned}
\ \right\}
\qquad (1.2)
\]

and, assuming D to have full row rank, let D be partitioned as D = (B, N) into basic and nonbasic parts and accordingly partition x^T = (y^T, z^T), c^T =
