Estimation and Inference of Discontinuity in Density

type of derivation. Second, the empirical likelihood confidence sets CS_G and CS do not require the local linear (or polynomial) estimators f̂_l and f̂_r. Thus, even if f̂_l or f̂_r yields negative estimates in finite samples, CS_G and CS are well defined. Third, the empirical likelihood test statistics are invariant to the formulation of the nonlinear null hypotheses. For example, to test the continuity of the density, we may specify the null hypothesis as H_0: log f_l = log f_r, H̃_0: f_l = f_r, H̄_0: f_l/f_r = 1, etc. For these hypotheses, the empirical likelihood test statistics are identical (i.e., ℓ_G(0) or ℓ(0)). On the other hand, the Wald test statistic is not invariant to the formulation of the null hypothesis and may yield opposite conclusions in finite samples (see Gregory and Veall, 1985, for examples).

3 Simulations

In this section we study the finite-sample behavior of the aforementioned methods using simulations. First we focus on the point estimators, i.e., the local linear binning estimator θ̂_G and the local (log linear) likelihood estimator θ̂ for θ_0. For comparison, we also consider the local constant binning estimator θ̃_G and the local (log constant) likelihood estimator θ̃. For the kernel function K, we use the triangular kernel K(a) = max{0, 1 - |a|}.
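The local linear binning idea behind an estimator of this kind can be sketched in a few lines: bin the data into a normalized histogram and fit a kernel-weighted linear regression on each side of the candidate point c, taking the intercepts as the boundary density estimates. The sketch below is illustrative only, not the authors' implementation; the function names, the bin alignment, and the choice to estimate the discontinuity on the log scale are assumptions made for the example.

```python
import numpy as np

def triangular_kernel(a):
    # K(a) = max{0, 1 - |a|}, the kernel used in the simulations.
    return np.maximum(0.0, 1.0 - np.abs(a))

def local_linear_boundary(midpoints, heights, c, h, side):
    # Weighted least squares of histogram heights on (midpoint - c),
    # using only bins on one side of c; the intercept estimates the
    # one-sided density limit at c.
    mask = midpoints < c if side == "left" else midpoints >= c
    x = midpoints[mask] - c
    y = heights[mask]
    w = triangular_kernel(x / h)
    keep = w > 0
    x, y, w = x[keep], y[keep], w[keep]
    X = np.column_stack([np.ones_like(x), x])
    WX = X * w[:, None]
    beta = np.linalg.solve(X.T @ WX, WX.T @ y)  # (X'WX) beta = X'Wy
    return beta[0]

def binning_discontinuity(data, c, h, b):
    # Histogram with bin size b, aligned so that c falls on a bin edge.
    lo = c - b * np.ceil((c - data.min()) / b)
    hi = c + b * np.ceil((data.max() - c) / b)
    edges = np.arange(lo, hi + b / 2, b)
    counts, edges = np.histogram(data, bins=edges)
    midpoints = (edges[:-1] + edges[1:]) / 2
    heights = counts / (len(data) * b)  # normalized histogram heights
    fl = local_linear_boundary(midpoints, heights, c, h, "left")
    fr = local_linear_boundary(midpoints, heights, c, h, "right")
    # Log-density discontinuity; note fl or fr can be negative in
    # finite samples, which is the issue the empirical likelihood
    # confidence sets avoid.
    return np.log(fr) - np.log(fl)
```

For a continuous density the estimate should be close to zero; for example, `binning_discontinuity(rng.normal(12, 3**0.5, 1000), c=13, h=2, b=0.2)` returns a value near 0.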
For the bandwidth h, we consider both fixed bandwidths h = 1, 2, 3, 4 and data-dependent bandwidths h = δh_dd, where h_dd is the data-dependent bandwidth used by McCrary (2008) and δ = 1.5^k for k = -1, 0, 1, 2. For the bin size b to implement θ̂_G and θ̃_G, we employ a data-dependent method suggested by McCrary (2008). The data are generated from the normal distribution N(12, 3) (following McCrary, 2008) and the Student's t distribution 12 + (3/√5)t(5). Both distributions have the same mean and variance. The sample size is n = 1000 and the suspected discontinuity point is c = 13. Since the above densities are continuous, the true value is θ_0 = 0. The biases, variances and mean square errors (MSEs) of the above estimators are reported in Tables 1 and 2.

Among the four estimators, θ̂ performs best in terms of MSE. Its MSE is slightly smaller than that of its competitor, θ̂_G, when a small bandwidth is used, but is significantly smaller when the bandwidth is relatively large. The dominance of θ̂ mainly comes from its superior bias performance on boundaries, while its variance is comparable with that of θ̂_G. The local constant estimators θ̃_G and θ̃ generally have smaller variances than θ̂_G and θ̂, but have much larger biases and thus larger MSEs. All four estimators are generally biased downwards.
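The two simulation designs can be checked numerically. Here N(12, 3) is read as mean 12 and variance 3; this is an assumption, but it is the parameterization consistent with the scaled t(5) design, since Var(t(5)) = 5/3 and hence (3/√5)·t(5) has variance (9/5)·(5/3) = 3:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000  # large sample, to verify the moments rather than mimic n = 1000

# Design 1: normal with mean 12 and (assumed) variance 3.
x_norm = rng.normal(12.0, np.sqrt(3.0), n)

# Design 2: shifted and scaled Student's t with 5 degrees of freedom,
# matched to the same mean 12 and variance 3 but with heavier tails.
x_t = 12.0 + (3.0 / np.sqrt(5.0)) * rng.standard_t(5, n)

for name, x in [("normal", x_norm), ("t(5)", x_t)]:
    print(f"{name}: mean = {x.mean():.2f}, var = {x.var():.2f}")
```

Both samples should report a mean near 12 and a variance near 3, matching the statement in the text that the two designs share the same first two moments.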
On the other hand, a preliminary simulation indicates that these estimators are generally biased upwards if the suspected discontinuity point is on the left side of the peak, e.g., c = 11. The typical bias-variance trade-off in bandwidth selection is also observed: the biases are larger and the variances are smaller as the bandwidth increases. Data generated from a heavier-tailed distribution increase the biases of the four estimators significantly and affect the variances only slightly. Again, θ̂ appears to have smaller MSEs than the other estimators.

Next we look at the tests for (dis)continuity in the density function. We consider a general set-up of mixtures of normal distributions. Suppose that the random variable X is drawn from truncated
