

Some examples of commonly encountered random variables are as follows:

• Gaussian ($N(\mu, \sigma^2)$):
$$f(x) = \frac{1}{\sqrt{2\pi}\,\sigma} e^{-(x-\mu)^2/2\sigma^2}.$$

• Exponential:
$$F(x) = 1 - e^{-\lambda x}, \qquad p(x) = \lambda e^{-\lambda x}.$$

• Uniform on $[a, b]$ ($U([a, b])$):
$$F(x) = \frac{x - a}{b - a}, \qquad p(x) = \frac{1}{b - a}, \qquad x \in [a, b].$$

• Binomial ($B(n, p)$):
$$p(k) = \binom{n}{k} p^k (1 - p)^{n-k}.$$

If $n = 1$, we also call a binomial random variable a Bernoulli random variable.
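A minimal numerical sketch of these four distributions, drawing samples with NumPy; the parameter values of $\mu, \sigma, \lambda, a, b, n, p$ below are arbitrary choices for illustration, and the empirical means should match the theoretical ones ($\mu$, $1/\lambda$, $(a+b)/2$, $np$) up to sampling error.

```python
# Sketch: sample each distribution listed above and compare empirical vs. theoretical means.
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

mu, sigma = 1.0, 2.0     # Gaussian N(mu, sigma^2), mean mu
lam = 0.5                # Exponential with rate lambda, mean 1/lambda
a, b = -1.0, 3.0         # Uniform on [a, b], mean (a + b)/2
n, p = 10, 0.3           # Binomial B(n, p), mean n*p

samples = {
    "Gaussian":    (rng.normal(mu, sigma, N),      mu),
    "Exponential": (rng.exponential(1.0 / lam, N), 1.0 / lam),
    "Uniform":     (rng.uniform(a, b, N),          (a + b) / 2),
    "Binomial":    (rng.binomial(n, p, N),         n * p),
}

for name, (x, mean_theory) in samples.items():
    print(f"{name:12s} empirical mean {x.mean():7.4f}  theoretical mean {mean_theory:7.4f}")
```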

1.3.3 Independence and Conditional Probability

Consider $A, B \in \mathcal{B}(X)$ such that $P(B) > 0$. The quantity
$$P(A|B) = \frac{P(A \cap B)}{P(B)}$$
is called the conditional probability of event $A$ given $B$. This is itself a probability measure. If
$$P(A|B) = P(A),$$
then $A, B$ are said to be independent events.
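For example, roll a fair die and let $A = \{\text{the outcome is even}\}$ and $B = \{\text{the outcome is greater than } 3\}$. Then $P(A \cap B) = P(\{4, 6\}) = 1/3$ and $P(B) = 1/2$, so $P(A|B) = (1/3)/(1/2) = 2/3$, whereas $P(A) = 1/2$; hence $A$ and $B$ are not independent.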

A countable collection of events $\{A_n\}$ is independent if, for any finite sub-collection $A_{i_1}, A_{i_2}, \ldots, A_{i_m}$, we have that
$$P(A_{i_1}, A_{i_2}, \ldots, A_{i_m}) = P(A_{i_1}) P(A_{i_2}) \cdots P(A_{i_m}).$$
Here, we use the notation $P(A, B) = P(A \cap B)$. A sequence of events is said to be pairwise independent if, for any pair $(A_m, A_n)$ with $m \neq n$, $P(A_m, A_n) = P(A_m) P(A_n)$.

Pairwise independence is weaker than independence.<br />
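A minimal sketch of the classical two-coin illustration of this fact, written here in Python as an assumed example language: with $A$ = {first toss is heads}, $B$ = {second toss is heads}, and $C$ = {the two tosses agree}, every pair of events satisfies the product rule, but the triple does not.

```python
# Exact enumeration over two fair coin tosses (four equally likely outcomes).
from itertools import product

outcomes = list(product("HT", repeat=2))

def prob(event):
    """Probability of an event under the uniform measure on the four outcomes."""
    return sum(1 for w in outcomes if event(w)) / len(outcomes)

A = lambda w: w[0] == "H"      # first toss is heads
B = lambda w: w[1] == "H"      # second toss is heads
C = lambda w: w[0] == w[1]     # the two tosses agree

# Every pair factorizes: each joint probability is 1/4 = (1/2)*(1/2).
for X, Y, name in [(A, B, "A,B"), (A, C, "A,C"), (B, C, "B,C")]:
    print(name, prob(lambda w: X(w) and Y(w)), "=", prob(X) * prob(Y))

# The triple does not: P(A, B, C) = 1/4, while P(A)P(B)P(C) = 1/8.
print(prob(lambda w: A(w) and B(w) and C(w)), prob(A) * prob(B) * prob(C))
```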

1.4 Markov Chains

One can define a sequence of random variables as a single random variable living in the product set; that is, we can consider $\{x_1, x_2, \ldots, x_N\}$ as an individual random variable $X$ which is an $(E \times E \times \cdots \times E)$-valued random variable in the measurable space $(E \times E \times \cdots \times E, \mathcal{B}(E \times E \times \cdots \times E))$, where now the fields are to be defined on the product set. In this case, the probability measure induced by a random sequence is defined on the $\sigma$-field $\mathcal{B}(E \times E \times \cdots \times E)$. If the probability measure for this sequence is such that
$$P(x_{k+1} \in A_{k+1} \mid x_k, x_{k-1}, \ldots, x_0) = P(x_{k+1} \in A_{k+1} \mid x_k) \quad P\text{-a.s.},$$
then $\{x_k\}$ is said to be a Markov chain. Thus, for a Markov chain, the current state is sufficient to predict the future; past variables are not needed.

As an example, consider the following linear system:<br />

$$x_{t+1} = a x_t + w_t,$$
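A minimal simulation sketch of such a chain, assuming for illustration that $a = 0.9$ and that $w_t$ is i.i.d. $N(0, 1)$ noise; neither choice is specified above.

```python
# Sketch: simulate x_{t+1} = a*x_t + w_t with assumed a = 0.9 and
# i.i.d. standard Gaussian noise w_t (illustrative choices).
import numpy as np

rng = np.random.default_rng(1)
a, T = 0.9, 50
x = np.zeros(T + 1)            # x[0] = 0 taken as the initial state
for t in range(T):
    # The next state depends only on the current state x[t] and fresh,
    # independent noise, so {x_t} is a Markov chain.
    x[t + 1] = a * x[t] + rng.normal()

print(x[:5])
```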
