
A.2 Random Vectors

The probability density of X is then found from

f(x1,...,xn) = ∂ⁿF(x1,...,xn) / (∂x1 ··· ∂xn).
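As a quick sanity check of this relation (the particular CDF and the use of sympy are my own illustration, not part of the text), differentiating a product-form bivariate CDF recovers the corresponding joint density:

```python
# Illustrative sketch only: recover a joint density by differentiating a
# joint CDF.  Here F(x, y) = (1 - e^(-x))(1 - e^(-y)) for x, y > 0, the CDF
# of two independent unit-rate exponential components (my chosen example).
import sympy as sp

x, y = sp.symbols("x y", positive=True)
F = (1 - sp.exp(-x)) * (1 - sp.exp(-y))   # joint CDF on x, y > 0

f = sp.diff(F, x, y)                      # f = ∂²F / (∂x ∂y)
print(sp.simplify(f))                     # exp(-x - y), the product of the
                                          # two exponential marginal densities
```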

The random vector X is said to be discrete if there exist real-valued vectors x0, x1,... and a probability mass function p(xj) = P[X = xj] such that

∑_{j=0}^∞ p(xj) = 1.
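For instance (a hypothetical example, not taken from the text), a random vector in R² supported on the points xj = (j, j + 1) with geometric weights has a probability mass function summing to 1:

```python
# Hypothetical example: values x0, x1, ... in R^2 with p(xj) = 2^-(j+1).
import numpy as np

xs = [np.array([j, j + 1]) for j in range(60)]      # the points x0, x1, ...
p = np.array([2.0 ** -(j + 1) for j in range(60)])  # p(xj) = 2^-(j+1)
print(p.sum())                                      # ≈ 1 (series truncated at j = 59)
```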

The expectation of a function g of a random vector X is defined by

E(g(X)) = ∫ g(x) dF(x) = ∫ g(x1,...,xn) dF(x1,...,xn),

where

∫ g(x1,...,xn) dF(x1,...,xn)
  = ∫ ··· ∫ g(x1,...,xn) f(x1,...,xn) dx1 ··· dxn,   in the continuous case,
  = ∑_{j1} ··· ∑_{jn} g(xj1,...,xjn) p(xj1,...,xjn),   in the discrete case,

and g is any function such that E|g(X)| < ∞.
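Both cases can be checked numerically. The following sketch uses examples of my own choosing (independent standard normal components with g(x1, x2) = x1² + x2², and two fair 0/1 coin flips with g(x1, x2) = x1 + x2), not examples from the text:

```python
# Illustrative sketch only (examples chosen for this note, not from the text).
import numpy as np
from scipy import integrate
from scipy.stats import norm

# Continuous case: X = (X1, X2) with independent N(0,1) components and
# g(x1, x2) = x1^2 + x2^2, so E g(X) = Var(X1) + Var(X2) = 2.
f = lambda x1, x2: norm.pdf(x1) * norm.pdf(x2)            # joint density
g = lambda x1, x2: x1 ** 2 + x2 ** 2
# dblquad integrates func(y, x); here y = x2 (inner) and x = x1 (outer).
val, _ = integrate.dblquad(lambda x2, x1: g(x1, x2) * f(x1, x2),
                           -np.inf, np.inf, -np.inf, np.inf)
print(val)                                                # ≈ 2.0

# Discrete case: two independent fair 0/1 coin flips, g(x1, x2) = x1 + x2,
# so E g(X) = sum_{j1} sum_{j2} g(xj1, xj2) p(xj1, xj2) = 1.
support = [0, 1]
p = {(a, b): 0.25 for a in support for b in support}      # joint pmf
print(sum((a + b) * p[a, b] for a in support for b in support))   # 1.0
```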

The random variables X1,...,Xn are said to be independent if

P[X1 ≤ x1,...,Xn ≤ xn] = P[X1 ≤ x1] ··· P[Xn ≤ xn],

i.e.,

F(x1,...,xn) = FX1(x1) ··· FXn(xn)

for all real numbers x1,...,xn. In the continuous and discrete cases, independence is equivalent to the factorization of the joint density function or probability mass function into the product of the respective marginal densities or mass functions, i.e.,

f(x1,...,xn) = fX1(x1) ··· fXn(xn)   (A.2.2)

or

p(x1,...,xn) = pX1(x1) ··· pXn(xn).   (A.2.3)
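A small numerical sketch of (A.2.3), with joint probability mass functions of my own construction rather than from the text: the joint pmf of a discrete pair factors into the outer product of its marginals exactly when the components are independent.

```python
# Illustrative sketch only (the two joint pmfs below are made up for this note).
import numpy as np

# Joint pmf of (X1, X2) on {0, 1} x {0, 1}; rows index x1, columns index x2.
joint_indep = np.array([[0.06, 0.14],     # marginals (0.2, 0.8) and (0.3, 0.7)
                        [0.24, 0.56]])
joint_dep   = np.array([[0.20, 0.00],     # same marginals, but not independent
                        [0.10, 0.70]])

for joint in (joint_indep, joint_dep):
    p1 = joint.sum(axis=1)                       # marginal pmf of X1
    p2 = joint.sum(axis=0)                       # marginal pmf of X2
    print(np.allclose(joint, np.outer(p1, p2)))  # True, then False
```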

For two random vectors X = (X1,...,Xn)′ and Y = (Y1,...,Ym)′ with joint density function fX,Y, the conditional density of Y given X = x is

fY|X(y|x) = fX,Y(x, y) / fX(x),   if fX(x) > 0,

and

fY|X(y|x) = fY(y),   if fX(x) = 0.
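As a worked illustration (the joint density f(x, y) = x + y on the unit square is my own example, not from the text), the conditional density can be computed symbolically and checked to integrate to 1 in y:

```python
# Illustrative sketch only: f_{X,Y}(x, y) = x + y on [0, 1]^2 is a made-up example.
import sympy as sp

x, y = sp.symbols("x y", nonnegative=True)
f_xy = x + y                                   # joint density on the unit square

f_x = sp.integrate(f_xy, (y, 0, 1))            # marginal of X: x + 1/2
f_y_given_x = f_xy / f_x                       # f_{Y|X}(y|x) = (x + y) / (x + 1/2)

# A conditional density must integrate to 1 over y for each x with f_X(x) > 0.
print(sp.simplify(sp.integrate(f_y_given_x, (y, 0, 1))))   # 1
```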
