
Rutcor Research Report

Uniform Quasi-Concavity in Probabilistic Constrained Stochastic Programming

András Prékopa (a)    Kunikazu Yoda (b)    Munevver Mine Subasi (c)

RRR 6-2010, April 2010

RUTCOR
Rutgers Center for Operations Research
Rutgers University
640 Bartholomew Road
Piscataway, New Jersey 08854-8003
Telephone: 732-445-3804
Telefax: 732-445-5472
Email: rrr@rutcor.rutgers.edu
http://rutcor.rutgers.edu/~rrr

(a) RUTCOR, Rutgers Center for Operations Research, 640 Bartholomew Road, Piscataway, NJ 08854-8003; prekopa@rutcor.rutgers.edu
(b) RUTCOR, Rutgers Center for Operations Research, 640 Bartholomew Road, Piscataway, NJ 08854-8003; kyoda@rutcor.rutgers.edu
(c) RUTCOR, Rutgers Center for Operations Research, 640 Bartholomew Road, Piscataway, NJ 08854-8003; msub@rutcor.rutgers.edu


Rutcor Research Report
RRR 6-2010, April 2010

Uniform Quasi-Concavity in Probabilistic Constrained Stochastic Programming

András Prékopa    Kunikazu Yoda    Munevver Mine Subasi

Abstract. A probabilistic constrained stochastic programming problem is considered, where the underlying problem has linear constraints with a random technology matrix. The rows of the matrix are assumed to be stochastically independent and normally distributed. The convexity of the problem requires the quasi-concavity of the constraining function, which is ensured if the factors are uniformly quasi-concave. In this paper a necessary and sufficient condition is given for that property to hold. It is also shown, through numerical examples, that such a special problem still has practical application in optimal portfolio construction.

Acknowledgements: This research was supported by National Science Foundation Grant CMMI-0856663.



1 Introduction

The stochastic programming problem, termed programming under probabilistic constraint, can be formulated in the following way:

    minimize    f(x)                                                        (1.1)
    subject to  h_0(x) = P(g_i(x, ξ) ≥ 0, i = 1, ..., r) ≥ p,
                h_i(x) ≥ 0,  i = 1, ..., m,

where x ∈ R^n, ξ ∈ R^q, f(x), g_i(x, y), i = 1, ..., r, h_i(x), i = 1, ..., m are some functions and p is a fixed large probability, e.g., p = 0.8, 0.9, 0.95, 0.99. In many applications the stochastic constraints have the form ξ − Tx ≥ 0 and the probabilistic constraint specializes as

    h_0(x) = P(Tx ≤ ξ) ≥ p.                                                 (1.2)

For the case of a continuously distributed random vector ξ, general theorems are available to ensure the convexity of the set determined by the probabilistic constraint in (1.1). For example, if g_i, i = 1, ..., r are concave or at least quasi-concave in all variables and ξ has a logconcave p.d.f., then the function h_0(x) is logconcave and the set {x | h_0(x) ≥ p} is convex (see, e.g., [12, 14]). This implies that if ξ has the above-mentioned property, then the set determined by the constraint (1.2) is convex. Many applications of the model with probabilistic constraint (1.2) have been carried out for the cases of some special continuous multivariate distributions such as normal, gamma, and Dirichlet, and problem solving packages have been developed [14, 3, 6, 15].

The solution of problems where ξ in (1.2) is a discrete random vector is more recent. The key concept here is that of a p-efficient point, introduced in [11] and further developed and used in [2, 4, 8, 13].

For the case of a random T in the constraint (1.2), few results are known. The earliest papers dealing with a random matrix T in the probabilistic constraint are [7, 9]. In these papers, however, there is only one stochastic constraint, and establishing the convexity of the set {x | P(Tx ≤ ξ) ≥ p} is relatively easy (see the proof of Lemma 2.2).

The first paper where convexity theorems are presented for the set of feasible solutions when the random matrix T has more than one row is [10]. If T has more than one row, then even if the rows are independent, it is not easy to ensure the convexity of the set {x | P(Tx ≤ d) ≥ p}. The papers [12] and [5] can also be mentioned, where progress in this direction has been made. The difficulty is that products or sums of quasi-concave functions are not quasi-concave in general. We briefly recall the main results of the paper [10] (see also [12], pp. 312–314).

Theorem 1.1. Let ξ be constant and T a random matrix with independent, normally distributed rows (or columns) such that their covariance matrices are constant multiples of each other. Then h(x) = P(Tx ≤ ξ) is a quasi-concave function on the set {x | h(x) ≥ 1/2}.

We <strong>in</strong>troduce a special class of quasi-concave functions.



Definition 1.1 (Uniformly quasi-concave functions). Let h_1(x), ..., h_r(x) be quasi-concave functions on a convex set E ⊂ R^n. We say that they are uniformly quasi-concave if for any x, y ∈ E either

    min(h_i(x), h_i(y)) = h_i(x),  i = 1, ..., r,

or

    min(h_i(x), h_i(y)) = h_i(y),  i = 1, ..., r.

Obviously, the sum of uniformly quasi-concave functions on the same set is also quasi-concave, and if the functions are also nonnegative, then the same holds for their product as well. The latter property is used in the next section, where we prove our main result.
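The ordering property in Definition 1.1 is easy to test numerically on sampled point pairs. Below is a minimal sketch (not part of the original report) that checks, for assumed example functions, whether a family of quasi-concave functions orders sampled points uniformly; the functions and the sampling box are illustrative choices.

```python
# Minimal sketch: check the uniform quasi-concavity ordering of Definition 1.1
# on randomly sampled point pairs.  The functions below are assumed examples:
# h_i(x) = phi_i(c^T x) with increasing phi_i and a common linear form c^T x,
# which makes them uniformly quasi-concave on R^2.
import numpy as np

c = np.array([1.0, 2.0])                 # common linear form (assumption)
hs = [lambda x: -np.exp(-c @ x),         # increasing transforms of c^T x
      lambda x: np.arctan(c @ x),
      lambda x: c @ x]

rng = np.random.default_rng(1)
for _ in range(1000):
    x, y = rng.uniform(-1, 1, 2), rng.uniform(-1, 1, 2)
    d = np.array([h(x) - h(y) for h in hs])
    # Either every h_i prefers x or every h_i prefers y (ties allowed).
    assert (d >= 0).all() or (d <= 0).all()
print("uniform ordering held on all sampled pairs")
```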

In this paper we look at probabilistic constraints of the type

    P(Tx ≤ b) ≥ p,                                                          (1.3)

where T is a random matrix that has independent, normally distributed rows and b is a constant vector. The constraining function in (1.3) is the product of special quasi-concave functions, and we show that the uniform quasi-concavity of the factors implies that the covariance matrices of the rows are constant multiples of each other. Sections 2 and 3 are devoted to this. In Section 4 we show that this very special type of probabilistic constraint is still applicable to solve portfolio optimization problems. We present some numerical results in this respect.

2 Preliminary Results

First we provide a necessary condition for continuously differentiable and uniformly quasi-concave functions h_1(x), ..., h_r(x) on an open convex set.

Lemma 2.1. If h_1(x), ..., h_r(x) are continuously differentiable and uniformly quasi-concave on an open convex set E, then any nonzero gradients ∇h_i(x), ∇h_j(x) are positive multiples of each other, i.e., for any i, j ∈ {1, ..., r}, there exists a positive-valued function α_ij(x) = 1/α_ji(x) > 0 defined on E_ij = {x ∈ E | ∇h_i(x) ≠ 0, ∇h_j(x) ≠ 0} = E_ji such that for all x ∈ E_ij we have

    ∇h_i(x) = α_ij(x) ∇h_j(x).                                              (2.1)

Proof. We show that (2.1) holds for all x ∈ E_ij by contradiction. Suppose that for some x ∈ E_ij we cannot find an α_ij(x) > 0 satisfying (2.1). Without loss of generality we assume that i = 1, j = 2.

By Farkas' lemma, exactly one of the following two systems has a solution:

    (i)  ∇h_2(x)^T d ≤ 0,  ∇h_1(x)^T d > 0;
    (ii) ∇h_1(x) = λ ∇h_2(x),  λ ≥ 0.



First, note that since ∇h_1(x) ≠ 0 and ∇h_2(x) ≠ 0, λ = 0 cannot be a solution of (ii). Also, λ > 0 cannot be a solution of (ii), otherwise we can define α_12(x) = λ > 0. Hence (i) has a solution d_1. Similarly, since ∇h_2(x) = α_21(x)∇h_1(x) does not hold for any defined value of α_21(x) = 1/α_12(x) > 0 by the assumption, (i) with 1 and 2 interchanged has a solution d_2. So we have

    ∇h_2(x)^T d_1 ≤ 0,  ∇h_1(x)^T d_1 > 0,
    ∇h_1(x)^T d_2 ≤ 0,  ∇h_2(x)^T d_2 > 0.

Let d := d_1 − d_2. Then it follows that

    ∇h_1(x)^T d > 0,  ∇h_2(x)^T d < 0.                                      (2.2)

Note that d ≠ 0. By the use of finite Taylor series expansions we can write

    h_1(x + εd) = h_1(x) + (∇h_1(x)^T d) ε + o(ε),                          (2.3)
    h_2(x + εd) = h_2(x) + (∇h_2(x)^T d) ε + o(ε).                          (2.4)

Since E_12 is an open set, we can select ε > 0 small enough so that y := x + εd ∈ E_12 satisfies y ≠ x, h_1(y) > h_1(x), and h_2(y) < h_2(x). Hence h_1(x), ..., h_r(x) are not uniformly quasi-concave, which is a contradiction.

For r = 1, let us consider the function

    h(x) = P(Tx ≤ b),                                                       (2.5)

where T is a random row vector and b is a constant. The following lemma was first proved by Kataoka [7] and van de Panne and Popp [9]. See also Prékopa [12].

Lemma 2.2 ([7, 9]). If T has normal distribution, then the function h(x) is quasi-concave on the set

    {x | P(Tx ≤ b) ≥ 1/2}.

Proof (from [14], pp. 284–285). We prove the equivalent statement: for any p ≥ 1/2 the set

    {x | P(Tx ≤ b) ≥ p}                                                     (2.6)

is convex.

Let Φ(t) denote the c.d.f. of the one-dimensional standard normal distribution and let µ = E(T^T) and C = E((T^T − µ)(T^T − µ)^T) denote the mean vector and the covariance matrix of T, respectively. For any x such that x^T C x > 0,

    h(x) = P(Tx ≤ b)
         = P( (T − µ^T)x / √(x^T C x) ≤ (b − µ^T x) / √(x^T C x) )
         = Φ( (b − µ^T x) / √(x^T C x) ) ≥ p

is equivalent to

    µ^T x + Φ^{-1}(p) √(x^T C x) ≤ b.                                       (2.7)

Since Φ^{-1}(p) ≥ 0 and √(x^T C x) is a convex function of x on R^n, it follows that inequality (2.7) determines a convex set.

For any x such that x^T C x = 0, we have Tx = µ^T x with probability 1. Since p > 0 it follows that

    h(x) = P(Tx ≤ b) = P(µ^T x ≤ b) ≥ p

is equivalent to

    µ^T x ≤ b.                                                              (2.8)

The set of x determined by (2.8) is convex.
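For illustration, inequality (2.7) is a second-order cone constraint in x (and in b, if b is treated as a variable), so single-constraint problems of this kind can be handled by standard conic solvers. The following is a minimal sketch, not part of the original report, that minimizes b subject to (2.7) over a simplex; it assumes the cvxpy and scipy libraries and uses illustrative data for µ and C.

```python
# Minimal sketch: constraint (2.7), mu^T x + Phi^{-1}(p) sqrt(x^T C x) <= b,
# written as a second-order cone constraint.  Data are illustrative assumptions.
import numpy as np
import cvxpy as cp
from scipy.stats import norm

p = 0.95
mu = np.array([-0.5, -0.3, -0.1])            # assumed mean vector
C = np.array([[0.04, 0.01, 0.00],
              [0.01, 0.09, 0.02],
              [0.00, 0.02, 0.16]])           # assumed covariance matrix
L = np.linalg.cholesky(C)                    # C = L L^T, so sqrt(x^T C x) = ||L^T x||_2

x = cp.Variable(3, nonneg=True)
b = cp.Variable()
constraints = [mu @ x + norm.ppf(p) * cp.norm(L.T @ x, 2) <= b,   # (2.7)
               cp.sum(x) == 1]
cp.Problem(cp.Minimize(b), constraints).solve()
print(x.value, b.value)
```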

Let r be an arbitrary positive integer and introduce the functions

    h_i(x) = P(T_i x ≤ b_i),  i = 1, ..., r,                                (2.9)

where each row vector T_i, i = 1, ..., r, has a normal distribution with mean vector µ_i = E(T_i^T) and covariance matrix C_i = E((T_i^T − µ_i)(T_i^T − µ_i)^T), and b = (b_1, ..., b_r)^T is constant.

Suppose b_i > 0, i = 1, ..., r. Let us define a set E by the following requirements:

    E is convex;
    E ⊃ B ⊃ {0} for some open set B;                                        (2.10)
    each h_i(x), i = 1, ..., r, is quasi-concave on E.

One example of such an E is

    E = ⋂_{i=1}^{r} {x | h_i(x) ≥ 1/2}.                                     (2.11)

Note that by Lemma 2.2, h_i(x) is quasi-concave on the convex set E_i = {x | h_i(x) ≥ 1/2} and that for a sufficiently small open ball B_ε(0) = {x | ‖x‖ < ε} around the origin, h_i(x) ≥ 1/2 for all x ∈ B_ε(0), thus E_i ⊃ B_ε(0). Also note that the intersection of convex sets is a convex set. If the rows T_1, ..., T_r of T are independent and h_1(x), ..., h_r(x) are uniformly quasi-concave, then

    h(x) = P(T_i x ≤ b_i, i = 1, ..., r) = ∏_{i=1}^{r} P(T_i x ≤ b_i) = h_1(x) ··· h_r(x)

is quasi-concave on E.

Suppose b_i > 0 and C_i is positive definite for all i ∈ {1, ..., r}. Then

    h_i(x) = Φ( (b_i − µ_i^T x) / √(x^T C_i x) )   for x ≠ 0,
    h_i(0) = P(0 ≤ b_i) = 1.                                                (2.12)

Since

    lim_{x→0} h_i(x) = lim_{t→∞} Φ(t) = 1 = h_i(0),

h_i(x) is continuous at x = 0. Let us calculate the gradient of h_i(x) for x ∈ int(E) \ {0}:

    ∇h_i(x) = ∇Φ( (b_i − µ_i^T x) / √(x^T C_i x) )
            = φ( (b_i − µ_i^T x) / √(x^T C_i x) ) ∇[ (b_i − µ_i^T x) / √(x^T C_i x) ]
            = φ( (b_i − µ_i^T x) / √(x^T C_i x) ) · [ −√(x^T C_i x) µ_i − (b_i − µ_i^T x) C_i x / √(x^T C_i x) ] / (x^T C_i x)
            = −φ( (b_i − µ_i^T x) / √(x^T C_i x) ) · [ (x^T C_i x) µ_i + (b_i − µ_i^T x) C_i x ] / (x^T C_i x)^{3/2},    (2.13)

where φ(t) is the p.d.f. of the one-dimensional standard normal distribution,

    φ(t) = (1/√(2π)) exp(−t²/2).

For any fixed x ≠ 0, we have

    lim_{ε↓0} ∇h_i(εx) = − lim_{ε↓0} φ( b_i / (ε√(x^T C_i x)) − µ_i^T x / √(x^T C_i x) )
                          · { [ (x^T C_i x) µ_i − (µ_i^T x) C_i x ] / (ε (x^T C_i x)^{3/2}) + b_i C_i x / (ε² (x^T C_i x)^{3/2}) }
                       = 0,

since b_i > 0 and therefore the factor φ(·) tends to 0 faster than any power of 1/ε grows. Hence lim_{x→0} ∇h_i(x) = 0 and ∇h_i(x) is continuous at x = 0. Therefore h_1(x), ..., h_r(x) are continuously differentiable on the open convex set int(E).
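Formulas (2.12) and (2.13) are straightforward to verify numerically. The following is a minimal sketch, not part of the original report, that compares the gradient formula (2.13) with central finite differences; the data µ_i, C_i, b_i are illustrative assumptions.

```python
# Minimal sketch: finite-difference check of the gradient formula (2.13).
import numpy as np
from scipy.stats import norm

mu = np.array([-0.5, -0.2])                    # assumed mean vector mu_i
C = np.array([[0.30, 0.05], [0.05, 0.20]])     # assumed positive definite C_i
b = 1.0                                        # assumed b_i > 0

def h(x):
    # h_i(x) = Phi((b_i - mu_i^T x) / sqrt(x^T C_i x)), formula (2.12), x != 0
    return norm.cdf((b - mu @ x) / np.sqrt(x @ C @ x))

def grad_h(x):
    # formula (2.13)
    q = x @ C @ x
    z = (b - mu @ x) / np.sqrt(q)
    return -norm.pdf(z) * (q * mu + (b - mu @ x) * (C @ x)) / q**1.5

x, eps = np.array([0.4, 0.6]), 1e-6
fd = np.array([(h(x + eps * e) - h(x - eps * e)) / (2 * eps) for e in np.eye(2)])
print(grad_h(x))
print(fd)                                      # the two vectors should agree closely
```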

3 The Main Result

In what follows we make use of the following theorem [1] from linear algebra:

Theorem 3.1 (Simultaneous Diagonalization of Two Matrices). Given two real symmetric matrices A and B, with A positive definite, there exists a nonsingular matrix U such that

    U^T A U = I,   U^T B U = diag(λ_1, λ_2, ..., λ_n).                      (3.1)
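In practice, a matrix U with the property (3.1) can be computed from the generalized symmetric eigenproblem Bv = λAv. Below is a minimal sketch, not part of the original report, using scipy's generalized eigensolver on illustrative matrices.

```python
# Minimal sketch: simultaneous diagonalization as in Theorem 3.1 via the
# generalized eigenproblem B v = lambda A v.  scipy.linalg.eigh(B, A) returns
# eigenvectors normalized so that U^T A U = I, and then U^T B U = diag(lambda).
import numpy as np
from scipy.linalg import eigh

A = np.array([[2.0, 0.5], [0.5, 1.0]])   # real symmetric, positive definite (assumed)
B = np.array([[1.0, -0.3], [-0.3, 4.0]]) # real symmetric (assumed)

lam, U = eigh(B, A)                      # generalized eigenvalues and eigenvectors
print(np.round(U.T @ A @ U, 10))         # ~ identity matrix
print(np.round(U.T @ B @ U, 10))         # ~ diag(lam)
```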

In the next theorem we present our main result.

Theorem 3.2. Suppose b_i > 0 and C_i is positive definite for i ∈ {1, ..., r}. The functions h_1(x), ..., h_r(x) defined by (2.9) (in this case, by (2.12)) are uniformly quasi-concave on a convex set E satisfying (2.10) if and only if each C_i is a constant multiple of a covariance matrix C and

    µ_1/b_1 = ··· = µ_r/b_r.

Proof. Sufficiency (⇐) is obvious, so we only show necessity (⇒). It is enough to show that C_1, C_2 are constant multiples of each other and that µ_1/b_1 = µ_2/b_2 for r ≥ 2. Each h_i(x) is continuously differentiable on the open convex set int(E). From (2.13) we have for x ≠ 0

    x^T ∇h_i(x) = −φ( (b_i − µ_i^T x) / √(x^T C_i x) ) · b_i / √(x^T C_i x) < 0.

Thus ∇h_i(x) ≠ 0 for x ≠ 0. We know that lim_{x→0} ∇h_i(x) = 0. Let E′ := int(E) \ {0}. Then E′ = {x ∈ int(E) | ∇h_i(x) ≠ 0, i ∈ {1, ..., r}}. From Lemma 2.1 and (2.13), there is a positive function α_12(x) > 0 such that for all x ∈ E′ we have

    (x^T C_1 x) µ_1 + (b_1 − µ_1^T x) C_1 x = α_12(x) { (x^T C_2 x) µ_2 + (b_2 − µ_2^T x) C_2 x }.   (3.2)

For small ε > 0 and x ∈ E′, let us replace x with εx ∈ E′ in (3.2) and divide both sides of the equation by ε:

    ε (x^T C_1 x) µ_1 + (b_1 − ε µ_1^T x) C_1 x = α_12(εx) { ε (x^T C_2 x) µ_2 + (b_2 − ε µ_2^T x) C_2 x }.   (3.3)

Taking the limit of both sides of (3.3) as ε → 0 we obtain

    b_1 C_1 x = ( lim_{ε→0} α_12(εx) ) b_2 C_2 x.                           (3.4)

Since 0 < x^T C_1 x < ∞ and 0 < x^T C_2 x < ∞ for x ∈ E′, the limit

    lim_{ε→0} α_12(εx) = (b_1 x^T C_1 x) / (b_2 x^T C_2 x) =: α′_12(x)      (3.5)

exists and 0 < α′_12(x) < ∞. Thus we have

    b_1 C_1 x = α′_12(x) b_2 C_2 x   for x ∈ E′.                            (3.6)



Since C_1 and C_2 are symmetric and C_2 is positive definite, from Theorem 3.1 there is a nonsingular matrix U such that

    U^T C_1 U = D,   U^T C_2 U = I,

where D = diag(λ_1, ..., λ_n) is a diagonal matrix. Let y := U^{-1} x and F := {U^{-1} x | x ∈ E′}. Since U is nonsingular, F ∪ {0} is a neighborhood of the origin and 0 ∉ F.

For all y ∈ F we have, by multiplying (3.6) by U^T from the left,

    b_1 D y = α′_12(Uy) b_2 y,

which implies that

    b_1 (λ_1 y_1, ..., λ_n y_n)^T = α′_12(Uy) b_2 (y_1, ..., y_n)^T,

hence

    0 < α′_12(x) = α′_12(Uy) = b_1 λ_1 / b_2 = ··· = b_1 λ_n / b_2 =: α′_12

is constant. Therefore we have from (3.6)

    C_1 = α′_12 (b_2 / b_1) C_2.                                            (3.7)

Let us plug (3.7) into (3.2):

    (x^T C_2 x) ( α′_12 b_2 µ_1 − α_12(x) b_1 µ_2 )
        + { (α′_12 − α_12(x)) b_1 b_2 − ( α′_12 b_2 µ_1 − α_12(x) b_1 µ_2 )^T x } C_2 x = 0.   (3.8)

Multiplying (3.8) by x^T from the left we obtain

    { α′_12 − α_12(x) } b_1 b_2 x^T C_2 x = 0  ⟹  α_12(x) = α′_12.          (3.9)

If we substitute (3.9) into (3.8), we get

    (x^T C_2 x) (b_2 µ_1 − b_1 µ_2) = ( (b_2 µ_1 − b_1 µ_2)^T x ) C_2 x.    (3.10)

Let us introduce w := U^T (b_2 µ_1 − b_1 µ_2). Since x = Uy, multiplying (3.10) by U^T from the left gives

    (y^T y) w = (y^T w) y,
    i.e.,  (w_1, ..., w_n)^T = ( (y_1 w_1 + ··· + y_n w_n) / (y_1² + ··· + y_n²) ) (y_1, ..., y_n)^T.   (3.11)

Since (3.11) holds for y = ε[0, 1, ..., 1]^T, ..., y = ε[1, ..., 1, 0]^T ∈ F for some small ε > 0, it follows that

    w_1 = 0, ..., w_n = 0  ⟹  w = 0  ⟹  µ_1/b_1 = µ_2/b_2.                  (3.12)
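The characterization in Theorem 3.2 is easy to probe numerically. Below is a minimal sketch, not part of the original report, that constructs functions h_i satisfying the condition of the theorem (C_i proportional to a common C, all µ_i/b_i equal) with illustrative data, and verifies the uniform ordering of Definition 1.1 on sampled point pairs.

```python
# Minimal sketch: with C_i = c_i * C and mu_i = b_i * m (so mu_i / b_i is the
# same vector for all i), every h_i(x) = Phi((b_i - mu_i^T x)/sqrt(x^T C_i x))
# is an increasing transform of the same quantity (1 - m^T x)/sqrt(x^T C x),
# so all h_i order any pair of nonzero points the same way.  Data are assumed.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
C = np.array([[0.20, 0.03, 0.00],
              [0.03, 0.15, 0.02],
              [0.00, 0.02, 0.10]])            # common positive definite matrix
m = np.array([-0.4, -0.2, -0.1])              # common ratio mu_i / b_i
b = np.array([1.0, 1.5, 2.0, 0.8])            # b_i > 0
c = np.array([1.0, 2.5, 0.7, 1.3])            # positive scale factors
mus = [bi * m for bi in b]
Cs = [ci * C for ci in c]

def h(i, x):
    return norm.cdf((b[i] - mus[i] @ x) / np.sqrt(x @ Cs[i] @ x))

for _ in range(1000):
    x, y = rng.uniform(0.1, 1.0, 3), rng.uniform(0.1, 1.0, 3)
    d = np.array([h(i, x) - h(i, y) for i in range(len(b))])
    assert (d >= 0).all() or (d <= 0).all()   # uniform ordering holds
print("uniform quasi-concavity ordering held on all sampled pairs")
```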



4 Application in Portfolio Optimization

In this section we look at a probabilistic constrained stochastic programming problem, where the probabilistic constraint is of type (1.2). We assume that T has independent, normally distributed rows and that the factors in the product ∏_{k=1}^{K} P(T_k x ≤ b_k) are uniformly quasi-concave. The problem is special, but it can still be applied, e.g., in portfolio optimization.

Consider n assets and K consecutive periods. Let us introduce the following notation for k = 1, ..., K:

    T_k : random loss of the assets during the k-th period,
    µ_k = E[T_k^T] : expected loss,
    C_k = E[(T_k^T − µ_k)(T_k^T − µ_k)^T] : covariance matrix of T_k.

We assume that T_k, k = 1, ..., K, are independent and normally distributed random vectors and that µ_k ≤ 0, k = 1, ..., K. We also assume that the time window of the K periods is relatively short and a linear trend for the expectations prevails. Formally, our assumptions are:

    µ_1 = µ and µ_{k+1} = α µ_k,  k = 1, ..., K − 1,                        (4.1)
    C_1 = C and C_{k+1} = α² C_k,  k = 1, ..., K − 1.                       (4.2)

For the first period, we consider the portfolio optimization problem formulated by Kataoka [7]:

(Problem 1):
    minimize    b
    subject to  Φ( (b − µ^T x) / √(x^T C x) ) ≥ p
                Σ_{j=1}^{n} x_j = 1
                x_j ≥ 0  for j = 1, ..., n.

For the k-th period (k ∈ {2, ..., K}), we consider the following problem:

(Problem k):
    minimize    b_1
    subject to  ∏_{i=1}^{k} Φ( (b_i − µ_i^T x) / √(x^T C_i x) ) ≥ p
                Σ_{j=1}^{n} x_j = 1
                b_{i+1} = α b_i  for i = 1, ..., k − 1
                x_j ≥ 0  for j = 1, ..., n
                b_1 ≥ 0.



A related model is presented in [17], where individual probabilistic constraints are taken for more than one part of the distribution.

By Theorem 3.2 the functions h_1(x), ..., h_K(x) defined by (2.12) are uniformly quasi-concave on the convex set E := ∩_{k=1}^{K} {x | h_k(x) ≥ 1/2} = ∩_{k=1}^{K} {x | b_k ≥ µ_k^T x} = {x | b_K ≥ µ_K^T x}, and hence h(x) = ∏_{k=1}^{K} h_k(x) is quasi-concave on E. Since the set {x | h(x) ≥ p, x ∈ E} is convex, the set of feasible solutions of (Problem k) is convex.
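Although the feasible set is convex, the product of normal c.d.f. terms is not directly expressible in standard conic form, so in practice (Problem k) can be handed to a general nonlinear solver. The following is a minimal sketch, not part of the original report and not necessarily the method used by the authors, that solves a small instance with scipy under the assumptions (4.1)–(4.2); µ, C and the parameter values are illustrative.

```python
# Minimal sketch: solving a small instance of (Problem k) with a general
# nonlinear solver.  Decision variables are (x_1, ..., x_n, b_1).
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

alpha, p, k = 1.01, 0.95, 3
mu = np.array([-0.5, -0.3, -0.1])             # assumed first-period expected losses
C = np.array([[0.04, 0.01, 0.00],
              [0.01, 0.09, 0.02],
              [0.00, 0.02, 0.16]])            # assumed first-period covariance
n = len(mu)
mus = [alpha**i * mu for i in range(k)]       # (4.1): mu_{i+1} = alpha * mu_i
Cs = [alpha**(2 * i) * C for i in range(k)]   # (4.2): C_{i+1} = alpha^2 * C_i

def chance(z):
    x, b1 = z[:n], z[n]
    bs = [alpha**i * b1 for i in range(k)]    # b_{i+1} = alpha * b_i
    prod = np.prod([norm.cdf((bs[i] - mus[i] @ x) / np.sqrt(x @ Cs[i] @ x))
                    for i in range(k)])
    return prod - p                           # probabilistic constraint: prod >= p

cons = [{'type': 'eq', 'fun': lambda z: np.sum(z[:n]) - 1},
        {'type': 'ineq', 'fun': chance}]
bounds = [(0, None)] * n + [(0, None)]        # x_j >= 0 and b_1 >= 0
z0 = np.r_[np.full(n, 1.0 / n), 1.0]
res = minimize(lambda z: z[n], z0, method='SLSQP', bounds=bounds, constraints=cons)
print(res.x[:n], res.x[n])                    # portfolio weights and optimal b_1
```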

Below we present a numerical example for the application of the above model. We take the initial expectations and covariance matrix from historical data and then generate the corresponding values for the later periods in accordance with the assumptions formulated in the model.

Numerical Example.

The assets "Dow, S&P500, Nasdaq, NYSECI, 10YrBond" are obtained from Yahoo! Finance (http://finance.yahoo.com) and the assets "Oil, Gold, Silver, EUR/USD" are obtained from Dukascopy (http://www.dukascopy.com).

We consider the expected values and the covariance matrix of the daily losses of the nine assets in May 2009. The data are shown in Table 1 and Table 2.

    Gold    Silver  Nasdaq  S&P500  Oil     EUR/USD  10YrBond  Dow     NYSECI
    -1.253  -3.008  -0.149  -0.711  -1.379  -0.82    -1.052    -0.546  -1.069

Table 1: Expected losses in May 2009

           Gold    Silver  Nasdaq  S&P500  Oil     EUR/USD  10YrBond  Dow     NYSECI
Gold       5.159   7.228   -1.437  1.492   3.989   2.764    -5.25     1.198   2.231
Silver     7.228   19.441  -0.785  6.454   10.143  4.94     -5.198    5.343   9.061
Nasdaq     -1.437  -0.785  15.084  11.202  1.562   0.974    -0.767    9.754   12.424
S&P500     1.492   6.454   11.202  16.238  10.709  4.223    -4.735    14.794  20.058
Oil        3.989   10.143  1.562   10.709  21.249  4.087    -5.719    10.043  15.451
EUR/USD    2.764   4.94    0.974   4.223   4.087   4.375    -3.255    3.764   5.996
10YrBond   -5.25   -5.198  -0.767  -4.735  -5.719  -3.255   38.003    -4.564  -4.928
Dow        1.198   5.343   9.754   14.794  10.043  3.764    -4.564    13.981  18.446
NYSECI     2.231   9.061   12.424  20.058  15.451  5.996    -4.928    18.446  25.706

Table 2: Covariance Matrix in May 2009

We assume that in the consecutive periods the expected losses are multiplied by α = 1.01 (a 1% increase) and the covariance matrix by α² = (1.01)². The weights of the nine assets obtained by solving (Problem k), k = 1, ..., 5, are given in Table 3.



             Gold    Silver  Nasdaq  S&P500  Oil     EUR/USD  10YrBond  Dow  NYSECI
(Problem 1)  0.5246  0.0422  0.1267  0       0.0102  0.1433   0.1531    0    0
(Problem 2)  0.5350  0       0.1342  0       0.0103  0.1718   0.1487    0    0
(Problem 3)  0.5266  0       0.1379  0       0.0076  0.1804   0.1475    0    0
(Problem 4)  0.5218  0       0.1399  0       0.0063  0.1849   0.1471    0    0
(Problem 5)  0.5188  0       0.1414  0       0.0053  0.1878   0.1467    0    0

Table 3: Weights of the nine assets, May 2009

References

[1] R. E. Bellman, Introduction to Matrix Analysis, Society for Industrial and Applied Mathematics (1987) 58–59.

[2] P. Beraldi and A. Ruszczyński, A Branch and Bound Method for Stochastic Integer Problems under Probabilistic Constraints, Optimization Methods and Software, 17, 3 (2002) 359–382.

[3] I. Deák, Solving stochastic programming problems by successive regression approximations – numerical results, Dynamic Stochastic Optimization, Lecture Notes in Economics and Mathematical Systems, Springer (2003) 209–224.

[4] D. Dentcheva, A. Prékopa and A. Ruszczyński, Concavity and efficient points of discrete distributions in probabilistic programming, Mathematical Programming, 89, 1 (2000) 55–77.

[5] R. Henrion and C. Strugarek, Convexity of chance constraints with independent random variables, Computational Optimization and Applications, 41, 2 (2008) 263–276.

[6] P. Kall and J. Mayer, Stochastic Linear Programming: Models, Theory, and Computation, Springer, 2005.

[7] S. Kataoka, A Stochastic Programming Model, Econometrica 31 (1963) 181–196.

[8] J. Luedtke, S. Ahmed and G. L. Nemhauser, An integer programming approach for linear programs with probabilistic constraints, Mathematical Programming, 122, 2 (2010) 247–272.

[9] C. van de Panne and W. Popp, Minimum Cost Cattle Feed under Probabilistic Protein Constraints, Management Science, 9, 3 (1963) 405–430.

[10] A. Prékopa, Programming under Probabilistic Constraints with a Random Technology Matrix, Mathematische Operationsforschung und Statistik, Ser. Optimization 5 (1974) 109–116.

[11] A. Prékopa, Dual method for the solution of a one-stage stochastic programming problem with random RHS obeying a discrete probability distribution, ZOR – Methods and Models of Operations Research, 34 (1990) 441–461.

[12] A. Prékopa, Stochastic Programming, Kluwer Academic Publishers, Dordrecht, Boston, 1995.

[13] A. Prékopa, B. Vizvári and T. Badics, Programming under probabilistic constraint with discrete random variable, New Trends in Mathematical Programming, F. Giannessi, T. Rapcsák and S. Komlósi (eds.), Kluwer Academic Publishers (1998) 235–255.

[14] A. Prékopa, Probabilistic Programming, in: Stochastic Programming, Handbooks in Operations Research and Management Science, A. Ruszczyński and A. Shapiro (eds.), Elsevier, 10 (2003) 207–351.

[15] T. Szántai, A computer code for solution of probabilistic constrained stochastic programming problems, Numerical Techniques for Stochastic Optimization, Y. Ermoliev and R. J.-B. Wets (eds.), Springer, New York (1988) 229–235.

[16] S. Uryasev, Probabilistic Constrained Optimization: Methodology and Applications, Kluwer Academic Publishers, Boston, 2000.

[17] K. Yoda and A. Prékopa, Optimal portfolio selection based on multiple Value at Risk constraints, RUTCOR Research Report, Rutgers Center for Operations Research, 2010, to appear.
