
Appendix B. Useful Inequalities in Probability

Corollary B.5.1.5 (Hammersley-Chapman-Robbins inequality)
Let X be a random variable in IR^d with PDF p(x; θ) and let E_θ(T(X)) = g(θ). Let µ be a fixed measure on X ⊆ IR^d such that p(x; θ) ≪ µ. Now define S(θ) such that

p(x; θ) > 0 a.e. x ∈ S(θ)
p(x; θ) = 0 a.e. x ∉ S(θ).

Then

\[
\mathrm{V}(T(X)) \;\geq\; \sup_{t \,\ni\, S(\theta)\supseteq S(\theta+t)}
\frac{\bigl(g(\theta+t)-g(\theta)\bigr)^2}
{\mathrm{E}_\theta\!\left(\left(\dfrac{p(X;\theta+t)}{p(X;\theta)}-1\right)^{\!2}\right)}.
\qquad (B.24)
\]

Proof. This inequality follows from the covariance inequality, by first considering the case of an arbitrary t such that g(θ + t) ≠ g(θ).
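As a simple illustration of how sharp the bound can be, consider X ∼ Bernoulli(θ) with 0 < θ < 1 and T(X) = X, so that g(θ) = θ. For any t ≠ 0 with 0 < θ + t < 1 (so that S(θ) = S(θ + t) = {0, 1}), a direct calculation gives

\[
\mathrm{E}_\theta\!\left(\left(\frac{p(X;\theta+t)}{p(X;\theta)}-1\right)^{\!2}\right)
= \theta\left(\frac{t}{\theta}\right)^{\!2} + (1-\theta)\left(\frac{t}{1-\theta}\right)^{\!2}
= \frac{t^2}{\theta(1-\theta)},
\]

so the ratio in (B.24) equals θ(1 − θ) for every such t, and the bound θ(1 − θ) = V(X) is attained.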

Corollary B.5.1.6 (Kshirsagar inequality)

Corollary B.5.1.7 (information inequality)
Subject to some “regularity conditions” (see Section 2.3), if X has PDF p(x; θ), then

\[
\mathrm{V}(T(X)) \;\geq\;
\frac{\left(\dfrac{\partial \mathrm{E}(T(X))}{\partial\theta}\right)^{\!2}}
{\mathrm{E}_\theta\!\left(\left(\dfrac{\partial \log p(X;\theta)}{\partial\theta}\right)^{\!2}\right)}.
\qquad (B.25)
\]

The denominator of the quantity on the right side of the inequality is called the Fisher information, or just the information. Notice the similarity of this inequality to the Hammersley-Chapman-Robbins inequality, although the information inequality requires more conditions.

Under the regularity conditions, which basically allow the interchange of integration and differentiation, the information inequality follows immediately from the covariance inequality.
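For example, suppose X ∼ N(µ, σ²) with σ² known, so that θ = µ. Then

\[
\frac{\partial \log p(X;\mu)}{\partial\mu} = \frac{X-\mu}{\sigma^2},
\qquad
\mathrm{E}_\mu\!\left(\left(\frac{\partial \log p(X;\mu)}{\partial\mu}\right)^{\!2}\right) = \frac{1}{\sigma^2},
\]

and for T(X) = X, for which ∂E(T(X))/∂µ = 1, inequality (B.25) gives V(X) ≥ σ², with equality.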

We defer the multivariate form of this inequality to Section 3.1.3. Our main interest will be in its application in unbiased estimation, in Section 5.1. If T(X) is an unbiased estimator of a differentiable function g(θ), the right side of the inequality together with derivatives of g(θ) forms the Cramér-Rao lower bound, inequality (3.39), and the Bhattacharyya lower bound, inequality (5.29).
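In the simplest case, if T(X) is unbiased for g(θ), then E_θ(T(X)) = g(θ) and, under the regularity conditions, ∂E(T(X))/∂θ = g′(θ), so that (B.25) becomes

\[
\mathrm{V}(T(X)) \;\geq\;
\frac{\bigl(g'(\theta)\bigr)^2}
{\mathrm{E}_\theta\!\left(\left(\dfrac{\partial \log p(X;\theta)}{\partial\theta}\right)^{\!2}\right)},
\]

the scalar form of the Cramér-Rao lower bound.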

Theorem B.5.2 (Minkowski’s inequality)
For 1 ≤ p,

\[
\bigl(\mathrm{E}(|X+Y|^p)\bigr)^{1/p} \;\leq\; \bigl(\mathrm{E}(|X|^p)\bigr)^{1/p} + \bigl(\mathrm{E}(|Y|^p)\bigr)^{1/p}.
\qquad (B.26)
\]

This is a triangle inequality for L_p norms and related functions.
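For p = 2, for instance, applying the inequality to the centered variables X − E(X) and Y − E(Y) gives

\[
\sqrt{\mathrm{V}(X+Y)} \;\leq\; \sqrt{\mathrm{V}(X)} + \sqrt{\mathrm{V}(Y)},
\]

that is, the standard deviation is subadditive.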

Proof. First, observe the truth for p = 1 using the triangle inequality for the absolute value, |x + y| ≤ |x| + |y|, giving E(|X + Y|) ≤ E(|X|) + E(|Y|).

