
Appendix

The appendix first develops regularity conditions for a general density weighted closed form estimator, and then applies the results to the weighted and two-stage least squares estimators introduced in this paper. Throughout this section $\|\cdot\|$ will denote the Euclidean norm, i.e. for a matrix $A$ with components $\{a_{ij}\}$, $\|A\| = \left(\sum_{i,j} a_{ij}^2\right)^{1/2}$. $\|\cdot\|_\infty$ will denote the sup norm over the regressor support, e.g. $\|I[\tau_n(x) > 0](\hat{f}^* - f^*)\|_\infty = \sup_x I[\tau_n(x) > 0]\,|\hat{f}^*(x) - f^*(x)|$.
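For concreteness, both norms are straightforward to compute numerically. The following is a minimal NumPy sketch; the grid, the trimming indicator, and the density arrays are illustrative placeholders rather than objects defined in the paper.

```python
import numpy as np

# Euclidean (Frobenius) norm ||A|| = (sum_{i,j} a_ij^2)^(1/2)
A = np.array([[3.0, 4.0],
              [0.0, 12.0]])
frobenius = np.sqrt((A ** 2).sum())          # same as np.linalg.norm(A); here 13.0

# Trimmed sup norm ||I[tau_n(x) > 0](f_hat - f*)||_inf, approximated on a grid
x_grid = np.linspace(-3.0, 3.0, 601)
f_star = np.exp(-0.5 * x_grid ** 2) / np.sqrt(2.0 * np.pi)  # stand-in for f*
f_hat = f_star + 0.01 * np.cos(x_grid)                      # stand-in for a kernel estimate
trim = (np.abs(x_grid) <= 2.5)                              # I[tau_n(x) > 0]
sup_norm = np.max(trim * np.abs(f_hat - f_star))
```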

A Theorem for Density Weighted Estimators<br />

In this section, we establish the asymptotic properties of a general density weighted estimator. The estimator is defined as a function of the data, a preliminary root-n consistent estimator of a finite dimensional nuisance parameter (denoted by $\kappa_0$), and a preliminary estimator of the underlying conditional density function using kernel estimation. Here, we let $\Xi_0 \in \mathbb{R}^k$ denote the parameter of interest in the general setting, defined here as

$$\Xi_0 = E\left[\frac{\ell_i}{f^*_i}\right] \tag{A.1}$$

with $\ell_i$, $f^*_i$ denoting $\ell(y_i, v_i, x_i, z_i, \kappa_0)$ and $f^*(v_i|z_i)$ respectively. For any other possible value of the nuisance parameter, $\kappa$, we will let $\ell_i(\kappa)$ denote $\ell(y_i, v_i, x_i, z_i, \kappa)$. We define the estimator as a sample analog to the above equation:

$$\hat{\Xi} = \frac{1}{n}\sum_{i=1}^{n} \tau_{ni}\,\frac{\hat{\ell}_i}{\hat{f}^*_i} \tag{A.2}$$

with $\hat{\ell}_i$, $\hat{f}^*_i$ denoting $\ell_i(\hat{\kappa})$ and $\hat{f}^*(v_i|z_i)$ respectively; $\tau_{ni}$ denotes the trimming function as before, and $\hat{\kappa}$ denotes an estimator of $\kappa_0$.
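To fix ideas, the sample analog (A.2) can be computed as in the following sketch. This is a minimal Python implementation under assumed choices: a Gaussian product kernel for $\hat{f}^*(v_i|z_i)$, a simple density-quantile trimming rule standing in for $\tau_{ni}$, and an array `ell_hat` standing in for $\hat{\ell}_i = \ell_i(\hat{\kappa})$ (taken scalar-valued here; the vector-valued case averages componentwise). None of these specific choices are taken from the paper.

```python
import numpy as np

def kernel_cond_density(v, z, h_v, h_z):
    """Kernel estimate of f*(v_i | z_i) at each sample point: a Gaussian
    product-kernel estimate of the joint density of (v, z) divided by the
    corresponding estimate of the marginal density of z. This is one standard
    construction, not necessarily the paper's exact choice."""
    kv = np.exp(-0.5 * ((v[:, None] - v[None, :]) / h_v) ** 2) / (h_v * np.sqrt(2.0 * np.pi))
    dz = (z[:, None, :] - z[None, :, :]) / h_z
    kz = np.prod(np.exp(-0.5 * dz ** 2) / (h_z * np.sqrt(2.0 * np.pi)), axis=2)
    joint = (kv * kz).mean(axis=1)    # estimate of the joint density f(v_i, z_i)
    marginal = kz.mean(axis=1)        # estimate of the marginal density f(z_i)
    return joint / marginal

def xi_hat(ell_hat, v, z, h_v=0.5, h_z=0.5, trim_quantile=0.01):
    """Sample analog (A.2): average of tau_ni * ell_hat_i / f*_hat_i, with
    tau_ni an indicator that drops the smallest estimated densities."""
    f_hat = kernel_cond_density(v, z, h_v, h_z)
    tau = (f_hat > np.quantile(f_hat, trim_quantile)).astype(float)
    return np.mean(tau * ell_hat / f_hat)
```

With a length-$n$ array `v`, an $n \times \dim(z)$ array `z`, and a length-$n$ array `ell_hat`, a call such as `xi_hat(ell_hat, v, z)` returns the scalar estimate; the bandwidths and the trimming quantile are left as tuning parameters.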

We will assume throughout this section that $\hat{\kappa}$ has an asymptotically linear representation. Letting the random variables $y_i$, $v_i$, and the random vectors $z_i$, $x_i$ be as defined previously, we express the representation as:

$$\hat{\kappa} - \kappa_0 = \frac{1}{n}\sum_{i=1}^{n} \psi_i + o_p(n^{-1/2}) \tag{A.3}$$

where $\psi_i$ denotes $\psi(y_i, x_i, v_i, z_i)$ and satisfies $E[\psi_i] = 0$ and $E[\|\psi_i\|^2] < \infty$.
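As a simple illustration of such a representation (not taken from the paper), suppose the nuisance parameter were a population mean $\kappa_0 = E[w_i]$ estimated by the sample mean $\hat{\kappa} = \frac{1}{n}\sum_{i=1}^{n} w_i$. Then

$$\hat{\kappa} - \kappa_0 = \frac{1}{n}\sum_{i=1}^{n} (w_i - \kappa_0),$$

so (A.3) holds exactly with $\psi_i = w_i - \kappa_0$ and a remainder that is identically zero; $E[\psi_i] = 0$ by construction and $E[\|\psi_i\|^2] < \infty$ whenever $w_i$ has finite second moments. Standard root-$n$ consistent estimators such as ordinary least squares typically admit such a representation with $\psi_i$ given by their influence function.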

The estimator of the parameter of interest also involves an estimator of the underlying conditional density function $f^*(v_i|z_i)$. We assume that a kernel estimator is used to estimate this function,
