
The measure is based on a convex function φ of a term similar to the “odds”.

Definition 0.1.46 (φ-divergence)
Let P be absolutely continuous with respect to Q and let φ be a convex function. The quantity
\[
d(P, Q) = \int_{\mathbb{R}} \phi\!\left(\frac{\mathrm{d}P}{\mathrm{d}Q}\right)\,\mathrm{d}Q, \tag{0.1.81}
\]
if it exists, is called the φ-divergence from Q to P.

The φ-divergence is also called the f-divergence.

The φ-divergence is in general not a metric because it is not symmetric. One function is taken as the base from which the other function is measured. The expression often has a more familiar form if both P and Q are dominated by Lebesgue measure and we write p = dP and q = dQ. The Hellinger distance given in equation (0.1.80) is a φ-divergence that is a metric. The Matusita distance is the square root of a φ-divergence with φ(t) = (√t − 1)².
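To make the Matusita case explicit, substituting φ(t) = (√t − 1)² with t = p(x)/q(x) into equation (0.1.81) gives
\[
\int_{\mathbb{R}} \left(\sqrt{\frac{p(x)}{q(x)}} - 1\right)^{2} q(x)\,\mathrm{d}x
= \int_{\mathbb{R}} \left(\sqrt{p(x)} - \sqrt{q(x)}\right)^{2}\mathrm{d}x,
\]
and the Matusita distance is the square root of this quantity.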

Another specific instance of φ-divergence is the Kullback-Leibler measure.

Definition 0.1.47 (Kullback-Leibler measure)
Let P be absolutely continuous with respect to Q, and let p = dP and q = dQ. Then
\[
\int_{\mathbb{R}} p(x)\log\!\left(\frac{p(x)}{q(x)}\right)\mathrm{d}x \tag{0.1.82}
\]
is called the Kullback-Leibler measure of the difference of p and q.

The Kullback-Leibler measure is not a metric.
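To see that the Kullback-Leibler measure fits Definition 0.1.46, take φ(t) = t log t (which is convex) in equation (0.1.81):
\[
\int_{\mathbb{R}} \frac{p(x)}{q(x)}\log\!\left(\frac{p(x)}{q(x)}\right) q(x)\,\mathrm{d}x
= \int_{\mathbb{R}} p(x)\log\!\left(\frac{p(x)}{q(x)}\right)\mathrm{d}x,
\]
which is expression (0.1.82). Interchanging the roles of p and q generally yields a different value, reflecting the lack of symmetry noted above.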

Various forms of φ-divergence are used in goodness-of-fit analyses. The Pearson chi-squared discrepancy measure, for example, has φ(t) = (t − 1)²:
\[
\int_{\mathbb{R}} \frac{(q(x) - p(x))^{2}}{q(x)}\,\mathrm{d}x. \tag{0.1.83}
\]
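To see how expression (0.1.83) arises from Definition 0.1.46, substitute φ(t) = (t − 1)² with t = p(x)/q(x) and integrate against q:
\[
\int_{\mathbb{R}} \left(\frac{p(x)}{q(x)} - 1\right)^{2} q(x)\,\mathrm{d}x
= \int_{\mathbb{R}} \frac{(q(x) - p(x))^{2}}{q(x)}\,\mathrm{d}x.
\]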

See the discussion beginning on page 593 for other applications in which two probability distributions are compared.
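As a purely illustrative numerical sketch (the two normal densities, the grid, and the Riemann-sum approximation below are assumptions, not from the text), the following computes the Kullback-Leibler measure (0.1.82) and the Pearson chi-squared discrepancy (0.1.83) for two densities, and shows the asymmetry of the Kullback-Leibler measure by reversing the roles of p and q.

```python
import numpy as np

def normal_pdf(x, mu, sigma):
    """Density of the N(mu, sigma^2) distribution."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

# Grid wide enough that both tails are negligible; illustrative choices only.
x = np.linspace(-10.0, 10.0, 200001)
dx = x[1] - x[0]

p = normal_pdf(x, 0.0, 1.0)   # density being measured
q = normal_pdf(x, 0.5, 1.2)   # base density

# Kullback-Leibler measure (0.1.82): integral of p(x) log(p(x)/q(x)) dx.
kl_pq = np.sum(p * np.log(p / q)) * dx

# Reversing p and q gives a different value; the measure is not symmetric.
kl_qp = np.sum(q * np.log(q / p)) * dx

# Pearson chi-squared discrepancy (0.1.83): integral of (q(x) - p(x))^2 / q(x) dx.
pearson = np.sum((q - p) ** 2 / q) * dx

print(f"KL(p, q) = {kl_pq:.4f}")
print(f"KL(q, p) = {kl_qp:.4f}")
print(f"Pearson  = {pearson:.4f}")
```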

0.1.9.6 Convergence of Functions

We have defined almost everywhere convergence of measurable functions in general measure spaces (see page 718). We will now define two additional types of convergence of measurable functions in normed linear measure spaces. Here we will restrict our attention to real-valued functions over real domains, but the ideas are more general.

The first is convergence in Lp.
