
1.1 Some Important Probability Facts

Random Variables and Probability Distributions

Notice that a random variable is defined in terms only of a measurable space (Ω, F) and a measurable space defined on the reals (X, B^d). No associated probability measure is necessary for the definition, but for meaningful applications of a random variable, we need some probability measure. For a random variable X defined on (Ω, F) in the probability space (Ω, F, P), the probability measure of X is P ◦ X^{-1}. (This is a pushforward measure; see page 706. In Exercise 1.9, you are asked to show that it is a probability measure.)
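As a simple illustration (not part of the text), take Ω = {H, T} with F = 2^Ω, P({H}) = P({T}) = 1/2, and let X(H) = 1 and X(T) = 0. Then for any Borel set B,

(P ◦ X^{-1})(B) = P({ω : X(ω) ∈ B}),

so, for instance, (P ◦ X^{-1})({1}) = P({H}) = 1/2 and (P ◦ X^{-1})([0, 1]) = P(Ω) = 1; the pushforward measure assigns probabilities directly to sets of real numbers.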

A probability space is also called a population, a probability distribution, a distribution, or a law. The probability measure itself is the final component that makes a measurable space a probability space, so we associate the distribution most closely with the measure. Thus, “P” may be used to denote both a population and the associated probability measure. We use this notational convention throughout this book.

For a given random variable X, a probability distribution determines Pr(X ∈ B) for B ⊆ X. The underlying probability measure P of course determines Pr(X^{-1} ∈ A) for A ∈ F.

Quantiles<br />

Because the values of random variables are real, we can define various special values that would have no meaning in an abstract sample space. As we develop more structure on a probability space characterized by a random variable, we will define a number of special values relating to the random variable. Without any further structure, at this point we can define a useful value of a random variable that just relies on the ordering of the real numbers.

For the random variable X ∈ IR and given π ∈ ]0, 1[, the quantity x_π defined as

x_π = inf{x, s.t. Pr(X ≤ x) ≥ π}     (1.6)

is called the π quantile of X.
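As a brief worked example (not in the original), let X be a Bernoulli random variable with Pr(X = 1) = 0.3, so that Pr(X ≤ x) = 0 for x < 0, Pr(X ≤ x) = 0.7 for 0 ≤ x < 1, and Pr(X ≤ x) = 1 for x ≥ 1. Then x_{0.5} = inf{x, s.t. Pr(X ≤ x) ≥ 0.5} = 0, while x_{0.8} = 1. The infimum in equation (1.6) resolves the jumps in Pr(X ≤ x): every π in ]0, 0.7] yields the quantile 0, and every π in ]0.7, 1[ yields 1.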

For the random variable X ∈ IR^d, there are two ways we can interpret the quantiles. If the probability associated with the quantile, π, is a scalar, then the quantile is a level curve or contour in X ∈ IR^d. Such a quantile is obviously much more complicated, and hence, less useful, than a quantile in a univariate distribution. If π is a d-vector, then the definition in equation (1.6) applies to each element of X and the quantile is a point in IR^d.
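The following Python sketch (not from the text; the function names and the use of the empirical distribution are illustrative assumptions) shows equation (1.6) applied to a sample, and its element-wise application when π is a d-vector.

import numpy as np

def quantile_inf(sample, pi):
    # x_pi = inf{x : Pr(X <= x) >= pi}, equation (1.6), for the empirical
    # distribution of the sample: Pr(X <= x_(k)) = k/n at the k-th order
    # statistic x_(k), so the infimum is the smallest such order statistic.
    x = np.sort(np.asarray(sample))
    n = len(x)
    k = int(np.ceil(pi * n))      # smallest k with k/n >= pi
    return x[k - 1]               # k-th order statistic (0-indexed)

def quantile_vector(sample, pis):
    # For a d-vector pi, equation (1.6) applies element-wise and the
    # quantile is a point in IR^d.
    s = np.asarray(sample)
    return np.array([quantile_inf(s[:, j], p) for j, p in enumerate(pis)])

# Element-wise (0.5, 0.9) quantile of a bivariate sample.
rng = np.random.default_rng(0)
xy = rng.normal(size=(1000, 2))
print(quantile_vector(xy, [0.5, 0.9]))   # roughly (0, 1.28) for N(0,1) margins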

Multiple Random Variables on the Same Probability Space<br />

If two random variables X and Y have the same distribution, we write


X =d Y.     (1.7)
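For example, if X and Y are two different random variables each with a standard normal distribution, then X =d Y even though X(ω) and Y(ω) may differ for every ω; equality in distribution concerns only the induced probability measures, not the mappings themselves.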
