
8.3.2 Pointwise Properties of Function Estimators

The statistical properties of an estimator of a function at a given point are analogous to the usual statistical properties of an estimator of a scalar parameter. These properties involve expectations or other properties of random variables. In the following, when we write an expectation, $\mathrm{E}(\cdot)$, or a variance, $\mathrm{V}(\cdot)$, the expectations are usually taken with respect to the (unknown) distribution of the underlying random variable. Occasionally, we may explicitly indicate the distribution by writing, for example, $\mathrm{E}_p(\cdot)$, where $p$ is the density of the random variable with respect to which the expectation is taken.

Bias

The bias of the estimator of a function value at the point $x$ is
$$
\mathrm{E}\big(\hat{f}(x)\big) - f(x).
$$

If this bias is zero, we would say that the estimator is unbiased at the point $x$. If the estimator is unbiased at every point $x$ in the domain of $f$, we say that the estimator is pointwise unbiased. Obviously, in order for $\hat{f}(\cdot)$ to be pointwise unbiased, it must be defined over the full domain of $f$.
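To make the definition concrete, here is a minimal Monte Carlo sketch (not from the text) that approximates the pointwise bias of a Gaussian kernel density estimator, taking $f$ to be the standard normal density; the sample size, bandwidth, evaluation point, and the helper `kde_at` are all illustrative choices.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)
n, h, x0 = 100, 0.4, 0.0   # sample size, bandwidth, evaluation point (illustrative)
n_rep = 5000               # Monte Carlo replications

def kde_at(sample, x, h):
    """Gaussian kernel density estimate at a single point:
    f_hat(x) = (1/(n*h)) * sum_i K((x - X_i)/h)."""
    return norm.pdf((x - sample) / h).sum() / (len(sample) * h)

# Replicate the experiment: draw a fresh sample, evaluate f_hat at x0.
estimates = np.array([kde_at(rng.standard_normal(n), x0, h)
                      for _ in range(n_rep)])

# Monte Carlo approximation of the pointwise bias E(f_hat(x0)) - f(x0);
# for a fixed bandwidth the kernel estimator is biased, so this is not zero.
print(f"approximate bias at x = {x0}: {estimates.mean() - norm.pdf(x0):.5f}")
```

At $x_0 = 0$ the smoothing flattens the peak of the density, so the estimated bias comes out negative; this illustrates that a kernel estimator with fixed bandwidth is not pointwise unbiased.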

Variance

The variance of the estimator at the point $x$ is
$$
\mathrm{V}\big(\hat{f}(x)\big) = \mathrm{E}\Big(\big(\hat{f}(x) - \mathrm{E}\big(\hat{f}(x)\big)\big)^2\Big).
$$

Estimators with small variance are generally more desirable, and an optimal estimator is often taken as the one with smallest variance among a class of unbiased estimators.
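Under the same illustrative setup as the bias sketch above, the following sketch approximates this pointwise variance for a few sample sizes; it shows the variance shrinking as $n$ grows (the specific values are again hypothetical).

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
h, x0, n_rep = 0.4, 0.0, 2000   # bandwidth, evaluation point, replications

def kde_at(sample, x, h):
    """Gaussian kernel density estimate at a single point x."""
    return norm.pdf((x - sample) / h).sum() / (len(sample) * h)

# Approximate V(f_hat(x0)) = E((f_hat(x0) - E(f_hat(x0)))^2) for growing n.
for n in (50, 200, 800):
    est = np.array([kde_at(rng.standard_normal(n), x0, h)
                    for _ in range(n_rep)])
    print(f"n = {n:4d}   approximate V(f_hat({x0})) = {est.var():.6f}")
```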

Mean Squared Error

The mean squared error, MSE, at the point $x$ is
$$
\mathrm{MSE}\big(\hat{f}(x)\big) = \mathrm{E}\Big(\big(\hat{f}(x) - f(x)\big)^2\Big). \tag{8.7}
$$

The mean squared error is the sum of the variance and the square of the bias:
$$
\begin{aligned}
\mathrm{MSE}\big(\hat{f}(x)\big) &= \mathrm{E}\Big(\big(\hat{f}(x)\big)^2\Big) - 2f(x)\,\mathrm{E}\big(\hat{f}(x)\big) + \big(f(x)\big)^2 \\
&= \mathrm{V}\big(\hat{f}(x)\big) + \Big(\mathrm{E}\big(\hat{f}(x)\big) - f(x)\Big)^2,
\end{aligned} \tag{8.8}
$$
where the second equality follows by adding and subtracting $\big(\mathrm{E}(\hat{f}(x))\big)^2$ and regrouping.
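As a numerical check on decomposition (8.8), the following sketch (same illustrative KDE setup as above) computes the Monte Carlo analogs of both sides; since the sample moments satisfy the same algebraic identity, the two numbers agree up to rounding.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
n, h, x0, n_rep = 100, 0.4, 0.0, 5000   # illustrative values, as before

def kde_at(sample, x, h):
    """Gaussian kernel density estimate at a single point x."""
    return norm.pdf((x - sample) / h).sum() / (len(sample) * h)

est = np.array([kde_at(rng.standard_normal(n), x0, h) for _ in range(n_rep)])
f_x0 = norm.pdf(x0)                 # true density value f(x0)

mse = ((est - f_x0) ** 2).mean()    # Monte Carlo analog of (8.7)
decomp = est.var() + (est.mean() - f_x0) ** 2   # V(f_hat(x0)) + bias^2
print(f"MSE = {mse:.6f},  V + bias^2 = {decomp:.6f}")   # equal up to rounding
```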
