
Chapter 12 Probability Measures

Proof Suppose j ∈ Γ. Let f_1, f_2, ... be the sequence of simple functions converging pointwise to X_j as constructed in the proof of 2.89. The Dominated Convergence Theorem (3.31) implies that EX_j = lim_{n→∞} Ef_n. Because of how each f_n is constructed, each Ef_n depends only on n and the numbers P(c ≤ X_j < d) for c < d. However,

\[
P(c \le X_j < d) = \lim_{m\to\infty} \Bigl( P\bigl(X_j \le d - \tfrac{1}{m}\bigr) - P\bigl(X_j \le c - \tfrac{1}{m}\bigr) \Bigr)
\]

for c < d. Because {X_k}_{k∈Γ} is an identically distributed family, the numbers on the right side above are independent of j. Thus EX_j = EX_k for all j, k ∈ Γ.

Apply the result from the paragraph above to the identically distributed family {X_k^2}_{k∈Γ} and use 12.20 to conclude that σ(X_j) = σ(X_k) for all j, k ∈ Γ.
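To spell out that last step (assuming 12.20 denotes the variance identity σ(X)^2 = E(X^2) − (EX)^2): equal expectations for the family {X_k}_{k∈Γ} and for the family {X_k^2}_{k∈Γ} give
\[
\sigma(X_j)^2 = E(X_j^2) - (EX_j)^2 = E(X_k^2) - (EX_k)^2 = \sigma(X_k)^2
\]
for all j, k ∈ Γ, and taking square roots yields the claim.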

The next result has the nicely intuitive interpretation that if we repeat a random process many times, then the probability that the average of our results differs from our expected average by more than any fixed positive number ε has limit 0 as we increase the number of repetitions of the process.

12.38 Weak Law of Large Numbers

Suppose (Ω, F, P) is a probability space and {X_k}_{k∈Z^+} is an i.i.d. family of random variables in L^2(P), each with expectation μ. Then
\[
\lim_{n\to\infty} P\Bigl( \Bigl| \frac{1}{n}\sum_{k=1}^{n} X_k - \mu \Bigr| \ge \varepsilon \Bigr) = 0
\]
for all ε > 0.

Proof Because the random variables {X_k}_{k∈Z^+} all have the same expectation and same standard deviation, by 12.37 there exist μ ∈ R and s ∈ [0, ∞) such that
\[
EX_k = \mu \quad\text{and}\quad \sigma(X_k) = s
\]
for all k ∈ Z^+. Thus

12.39
\[
E\Bigl(\frac{1}{n}\sum_{k=1}^{n} X_k\Bigr) = \mu
\quad\text{and}\quad
\sigma^2\Bigl(\frac{1}{n}\sum_{k=1}^{n} X_k\Bigr)
= \frac{1}{n^2}\,\sigma^2\Bigl(\sum_{k=1}^{n} X_k\Bigr)
= \frac{s^2}{n},
\]
where the last equality follows from 12.22 (this is where we use the independent part of the hypothesis).
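In more detail, assuming 12.22 is the additivity of variance over finitely many independent random variables, the second computation in 12.39 unwinds as
\[
\sigma^2\Bigl(\frac{1}{n}\sum_{k=1}^{n} X_k\Bigr)
= \frac{1}{n^2}\,\sigma^2\Bigl(\sum_{k=1}^{n} X_k\Bigr)
= \frac{1}{n^2}\sum_{k=1}^{n} \sigma^2(X_k)
= \frac{n s^2}{n^2}
= \frac{s^2}{n}.
\]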

Now suppose ε > 0. In the special case where s = 0, all the X_k are almost surely equal to the same constant function and the desired result clearly holds. Thus we assume s > 0. Let t = √n ε/s and apply Chebyshev's inequality (12.21) with this value of t to the random variable (1/n) ∑_{k=1}^{n} X_k, using 12.39 to get

\[
P\Bigl( \Bigl| \frac{1}{n}\sum_{k=1}^{n} X_k - \mu \Bigr| \ge \varepsilon \Bigr) \le \frac{s^2}{n\varepsilon^2}.
\]

Taking the limit as n → ∞ of both sides of the inequality above gives the desired result.
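For a concrete sense of the rate this bound gives (an illustrative example, not part of the original proof): if each X_k takes the values 0 and 1 with probability 1/2 each, then μ = 1/2 and s = 1/2, so the inequality above becomes
\[
P\Bigl( \Bigl| \frac{1}{n}\sum_{k=1}^{n} X_k - \tfrac{1}{2} \Bigr| \ge 0.01 \Bigr) \le \frac{(1/2)^2}{n\,(0.01)^2} = \frac{2500}{n}.
\]
Thus after n = 250000 tosses of a fair coin, the probability that the observed proportion of heads differs from 1/2 by at least 0.01 is at most 0.01.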

Measure, Integration & Real Analysis, by Sheldon Axler
