
140 1 Probability Theory

5. Let U ∼ U(0, 1) and associate the ordering (≺, ≼, ∼, ≻, ≽) with Lebesgue measure on [0, 1]. Then for any interval I ⊆ [0, 1], either A ≻ I, A ∼ I, or A ≺ I.

These axioms define a linear, or total, ordering on A (Exercise 1.89).
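A minimal sketch, not from the text, of how item 5 plays out: for events that are finite unions of disjoint intervals in [0, 1], the ordering induced by Lebesgue measure (total length) compares any such event A with any interval I, so exactly one of A ≻ I, A ∼ I, A ≺ I holds. The interval representation and tolerance are my assumptions.

```python
def lebesgue_measure(intervals):
    """Total length of a list of disjoint intervals [(a, b), ...]."""
    return sum(b - a for a, b in intervals)

def compare(A, I, tol=1e-9):
    """Return '>', '~', or '<' for A ≻ I, A ∼ I, or A ≺ I under Lebesgue measure."""
    mA, mI = lebesgue_measure(A), lebesgue_measure(I)
    if mA > mI + tol:
        return ">"
    if mA < mI - tol:
        return "<"
    return "~"

A = [(0.1, 0.3), (0.6, 0.7)]        # an event of measure 0.3
print(compare(A, [(0.0, 0.25)]))    # A ≻ I
print(compare(A, [(0.0, 0.3)]))     # A ∼ I
```

The tolerance absorbs floating-point error so that events of equal measure compare as equivalent, which is what the trichotomy in item 5 requires.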

Given these axioms for a “coherent” ordering on Ω, we can define a probability measure P on A by P(A) ≤ P(B) iff A ≼ B, and so on. It can be shown that such a measure exists, satisfies the Kolmogorov axioms, and is unique.

At first it was thought that the first four axioms were sufficient to define a probability measure satisfying Kolmogorov’s axioms, but Kraft et al. (1959) exhibited an example showing that more is required.

A good exposition of this approach based on coherency that includes a proof of the existence and uniqueness of the probability measure is given by DeGroot (1970).

Define probability from expectations of random variables

Although the measurable spaces of Sections 1.1.1 and 0.1 (beginning on page 686) do not necessarily consist of real numbers, we defined real-valued functions (random variables) that are the basis of further development of probability theory. From the axioms characterizing probability (or, equivalently, from the definition of the concept of a probability measure), we developed expectation and various unifying objects such as distributions of random variables.

An alternative approach to developing a probability theory can begin with a sample space and random variables defined on it. (Recall that our definition of random variables did not require a definition of probability.) From this beginning, we can base a development of probability theory on expectation, rather than on a probability measure as we have done in this chapter. (This would be somewhat similar to our development of conditional probability from conditional expectation in Section 1.5.)

In this approach we could define expectation in the usual way, as an integral, or we could go even further and define it in terms of characterizing properties. We characterize an expectation operator E on a random variable X (and X1 and X2) by four axioms:

1. If X ≥ 0, then E(X) ≥ 0.
2. If c is a constant in IR, then E(cX1 + X2) = cE(X1) + E(X2).
3. E(1) = 1.
4. If a sequence of random variables {Xn} increases monotonically to the limit X, then E(X) = lim_{n→∞} E(Xn).
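On a finite sample space the four axioms can be checked by direct computation. The following is a hypothetical sketch, not from the text; the sample space, probability assignment, and random variables below are my own assumptions.

```python
omega = ["a", "b", "c"]
p = {"a": 0.5, "b": 0.3, "c": 0.2}          # an assumed probability assignment

def E(X):
    """Discrete expectation of X, given as a dict mapping ω to X(ω)."""
    return sum(X[w] * p[w] for w in omega)

X1 = {"a": 1.0, "b": 2.0, "c": 3.0}
X2 = {"a": 0.5, "b": -1.0, "c": 4.0}
c = 2.0

# Axiom 1: a nonnegative random variable has nonnegative expectation.
assert E(X1) >= 0
# Axiom 2: linearity, E(cX1 + X2) = cE(X1) + E(X2).
lhs = E({w: c * X1[w] + X2[w] for w in omega})
assert abs(lhs - (c * E(X1) + E(X2))) < 1e-12
# Axiom 3: the expectation of the constant 1 is 1.
assert abs(E({w: 1.0 for w in omega}) - 1.0) < 1e-12
# Axiom 4: Xn = min(X1, n) increases monotonically to X1; here E(Xn)
# reaches E(X1) once n exceeds the largest value of X1.
assert E({w: min(X1[w], 3.0) for w in omega}) == E(X1)
```

On a finite space axiom 4 is trivially satisfied, since the limit is attained; it has real force only for infinite sequences.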

(In these axioms we have assumed a scalar-valued random variable, although with some modifications we could have developed the axioms in terms of random variables in IR^d.) From these axioms, after defining the probability of a set as

Pr(A) = E(I_A(ω)),
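The identification of probability with the expectation of an indicator can be illustrated by Monte Carlo. This is a hedged sketch; the event A = (0.2, 0.5), the seed, and the sample size are my choices, not the text's. With U ∼ U(0, 1), Pr(A) = E(I_A(U)), so the sample mean of the indicator estimates Pr(A), which here equals the Lebesgue measure 0.3 of A.

```python
import random

random.seed(42)                      # fixed seed for reproducibility

def indicator_A(u):
    """I_A(u) for the assumed event A = (0.2, 0.5)."""
    return 1.0 if 0.2 < u < 0.5 else 0.0

n = 100_000
estimate = sum(indicator_A(random.random()) for _ in range(n)) / n
print(estimate)                      # close to 0.3
```

The estimate converges to Pr(A) at the usual Monte Carlo rate of O(n^{-1/2}).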

Theory of Statistics ©2000–2013 James E. Gentle
