Theory of Statistics - George Mason University

8.3 Nonparametric Estimation of Functions

Another approach to function estimation is to represent the function of interest as a linear combination of basis functions, that is, to represent the function in a series expansion. The basis functions are generally chosen to be orthogonal over the domain of interest, and the observed data are used to estimate the coefficients in the series.

It is often more practical to estimate the function value at a given point. (Of course, if we can estimate the function at any given point, we can effectively have an estimate at all points.) One way of forming an estimate of a function at a given point is to take the average at that point of a filtering function that is evaluated in the vicinity of each data point. The filtering function is called a kernel, and the result of this approach is called a kernel estimator.
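As a concrete instance of this idea, here is a minimal sketch of a kernel estimator of a probability density at a point: the average over the data of a kernel centered at each data point. The Gaussian kernel, the bandwidth parameter, and all names are illustrative choices, not from the text.

```python
# Sketch of a kernel estimator: average, over the data points, of a
# filtering function (kernel) evaluated in the vicinity of each point.
# The Gaussian kernel and fixed bandwidth are illustrative assumptions.
import math

def kernel_density_estimate(x, data, bandwidth):
    """Average of a scaled Gaussian kernel centered at each data point."""
    total = 0.0
    for xi in data:
        u = (x - xi) / bandwidth
        total += math.exp(-0.5 * u * u) / math.sqrt(2.0 * math.pi)
    return total / (len(data) * bandwidth)

# Usage: estimate the density at 0 from a tiny symmetric dataset.
data = [-1.0, 0.0, 1.0]
est = kernel_density_estimate(0.0, data, bandwidth=1.0)
```

Because the estimate at any point is a simple average, evaluating it on a grid of points gives an estimate of the whole function, as the parenthetical remark above notes.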

In the estimation of functions, we must be concerned about the properties of the estimators at specific points and also about properties over the full domain. Global properties over the full domain are often defined in terms of integrals or in terms of suprema or infima.
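For instance, two standard global error criteria of these two kinds (stated here for illustration, not taken from the text) are the integrated squared error and the supremum of the absolute error:

$$\int_D \big(\hat{f}(x) - f(x)\big)^2\,\mathrm{d}x \qquad \text{and} \qquad \sup_{x\in D}\,\big|\hat{f}(x) - f(x)\big|.$$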

Function Decomposition and Estimation of the Coefficients in an Orthogonal Expansion

We first do a PDF decomposition of the function of interest with the probability density function, p,
$$f(x) = g(x)\,p(x). \tag{8.3}$$

We have
$$c_k = \langle f, q_k\rangle = \int_D q_k(x)\,g(x)\,p(x)\,\mathrm{d}x = \mathrm{E}\big(q_k(X)\,g(X)\big), \tag{8.4}$$
where X is a random variable whose probability density function is p.

If we can obtain a random sample, $x_1, \ldots, x_n$, from the distribution with density p, the $c_k$ can be unbiasedly estimated by
$$\hat{c}_k = \frac{1}{n}\sum_{i=1}^{n} q_k(x_i)\,g(x_i).$$
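A minimal numerical sketch of this coefficient estimator follows. The specific setup is an illustrative assumption, not from the text: the target is f(x) = x² on [−1, 1], the sampling density is uniform, p(x) = 1/2, so g = f/p = 2x², and the basis consists of orthonormal Legendre polynomials.

```python
# Monte Carlo estimation of the coefficients c_k = E(q_k(X) g(X)).
# Illustrative assumptions: f(x) = x**2 on [-1, 1], p(x) = 1/2 (uniform),
# hence g(x) = 2*x**2; q_k = orthonormal Legendre polynomials.
import numpy as np

def q(k, x):
    """Orthonormal Legendre polynomial of degree k on [-1, 1]."""
    coef = np.zeros(k + 1)
    coef[k] = 1.0
    return np.sqrt((2 * k + 1) / 2) * np.polynomial.legendre.legval(x, coef)

rng = np.random.default_rng(0)
xs = rng.uniform(-1.0, 1.0, size=100_000)    # random sample from p
g_vals = 2 * xs**2                           # g evaluated at the sample

# c_hat[k] = (1/n) * sum_i q_k(x_i) g(x_i)
c_hat = [float(np.mean(q(k, xs) * g_vals)) for k in range(4)]
```

For this choice of f, the exact coefficients are c₀ = √2/3 and c₁ = c₃ = 0, so the sample averages should be close to those values; each ĉ_k is a mean of i.i.d. terms and hence unbiased, as stated above.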

The series estimator of the function for all x therefore is
$$\hat{f}(x) = \frac{1}{n}\sum_{k=0}^{j}\sum_{i=1}^{n} q_k(x_i)\,g(x_i)\,q_k(x) \tag{8.5}$$
for some truncation point j.
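The truncated series estimator can be sketched as follows, under the same illustrative assumptions as before (orthonormal Legendre basis on [−1, 1], uniform sampling density p(x) = 1/2, and target f(x) = x², so g(x) = 2x²); all names and parameter choices are hypothetical.

```python
# Sketch of the truncated series estimator: for each k up to the
# truncation point j, the estimated coefficient (a sample mean) is
# multiplied by the basis function evaluated at x, and the terms summed.
import numpy as np

def q(k, x):
    """Orthonormal Legendre polynomial of degree k on [-1, 1]."""
    coef = np.zeros(k + 1)
    coef[k] = 1.0
    return np.sqrt((2 * k + 1) / 2) * np.polynomial.legendre.legval(x, coef)

def series_estimate(x, sample, g_vals, j):
    """f_hat(x) = (1/n) sum_{k=0}^{j} sum_{i=1}^{n} q_k(x_i) g(x_i) q_k(x)."""
    return sum(np.mean(q(k, sample) * g_vals) * q(k, x) for k in range(j + 1))

rng = np.random.default_rng(1)
sample = rng.uniform(-1.0, 1.0, size=50_000)   # sample from uniform p
g_vals = 2 * sample**2                          # g(x) = 2*x**2

f_hat_half = float(series_estimate(0.5, sample, g_vals, j=4))
```

Since f(0.5) = 0.25 and x² lies in the span of the first few Legendre polynomials, f_hat_half should be close to 0.25 up to Monte Carlo error; the truncation point j controls the usual trade-off between approximation error and estimation variance.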

The random sample, $x_1, \ldots, x_n$, may be an observed dataset, or it may be the output of a random number generator.

Theory of Statistics © 2000–2013 James E. Gentle
