
Subsampling estimates of the Lasso distribution.


46 Subsampling

and
\[
\hat{L}_{n,b}\bigl(c_U(1-\alpha)+\varepsilon\bigr) \xrightarrow{P} J\bigl(c_U(1-\alpha)+\varepsilon,\, P\bigr).
\]
Hence, the sets
\[
\bigl\{\hat{L}_{n,b}(c_L(1-\alpha)-\varepsilon) < 1-\alpha \le \hat{L}_{n,b}(c_U(1-\alpha)+\varepsilon)\bigr\}
\subseteq
\bigl\{c_L(1-\alpha)-\varepsilon < \hat{L}_{n,b}^{-1}(1-\alpha) \le c_U(1-\alpha)+\varepsilon\bigr\}
\]
have probability tending to one as $n \to \infty$. It follows that
\[
P\bigl(\tau_n(\hat{\theta}_n - \theta(P)) \le \hat{c}_{n,b}(1-\alpha)\bigr) \le J_n\bigl(c_U(1-\alpha)+\varepsilon,\, P\bigr) + o(1)
\]
and
\[
P\bigl(\tau_n(\hat{\theta}_n - \theta(P)) \le \hat{c}_{n,b}(1-\alpha)\bigr) \ge J_n\bigl(c_L(1-\alpha)-\varepsilon,\, P\bigr) + o(1).
\]
Letting $n$ tend to infinity first, then $\varepsilon$ tend to zero, yields, together with the Portmanteau theorem, the inequalities.

Finally, (iv) can be proved similarly to (i) and (ii), using the Borel–Cantelli lemma.

Remark.<br />

(i) Note that point (iii) also holds for the root $U_{n,b}$. Indeed, the proof for $\hat{L}_{n,b}(\cdot)$ rests solely on the convergence in probability of $\hat{L}_{n,b}(x)$ to $J(x, P)$ for every continuity point $x$ of $J(\cdot, P)$. As seen in the proof of (i), this is a property shared by $U_{n,b}(\cdot)$ as well, and without even requiring $\tau_b/\tau_n \to 0$: the assumption $b/n \to 0$ is sufficient. Obviously, the price to pay is larger confidence intervals.

(ii) The conclusion of point (iii) can also be stated for two-sided confidence intervals, with the obvious changes in the assumptions.

In the regular situation where $\tau_n = \sqrt{n}$, the choice $b = n^{\delta}$ for some $0 < \delta < 1$ satisfies the conditions of Theorem 5.1.0.12.
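As a concrete illustration of this recipe, the following sketch constructs a one-sided subsampling confidence bound for a univariate parameter in the regular case τ_n = √n, with subsample size b = n^δ. The function names and the choice δ = 0.7 are our own illustrative assumptions, not part of the text; the mean is used only as a simple stand-in for θ(P).

```python
import numpy as np

def subsampling_lower_bound(x, estimator, alpha=0.05, delta=0.7,
                            n_sub=1000, seed=None):
    """Illustrative sketch (not the text's formal construction):
    approximate the 1 - alpha quantile of tau_n * (theta_hat_n - theta(P))
    by recomputing the root tau_b * (theta_hat_b - theta_hat_n) on
    subsamples of size b = n**delta, drawn WITHOUT replacement,
    so that b/n -> 0 and tau_b/tau_n -> 0 when tau_n = sqrt(n)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x)
    n = len(x)
    theta_hat = estimator(x)
    b = max(2, int(n ** delta))            # b = n^delta, 0 < delta < 1
    roots = np.empty(n_sub)
    for i in range(n_sub):
        idx = rng.choice(n, size=b, replace=False)
        roots[i] = np.sqrt(b) * (estimator(x[idx]) - theta_hat)
    c_hat = np.quantile(roots, 1 - alpha)  # estimated 1 - alpha quantile
    return theta_hat - c_hat / np.sqrt(n)  # invert the root

# Toy usage: lower confidence bound for a population mean of 1.0
rng = np.random.default_rng(0)
x = rng.normal(loc=1.0, scale=2.0, size=2000)
lb = subsampling_lower_bound(x, np.mean, alpha=0.05, seed=0)
```

The key design point mirrors the theorem's conditions: subsampling draws genuine size-b samples from P (unlike the bootstrap's size-n resamples), which is what makes the root's subsample distribution mimic J_b and, with b/n → 0, deliver the asymptotics above.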

In view of our goal, constructing confidence intervals for Lasso estimates, the message conveyed by Theorem 5.1.0.12 is that when the $1-\alpha$ quantile happens to be a discontinuity point (and this can indeed happen if the corresponding parameter is equal to zero, cf. Theorem 3.2.1.1), the subsampling confidence interval asymptotically carries an error which is, in the worst case, equal to the jump height at that quantile. However, as we will see in the next section, this conclusion is too pessimistic: it turns out that some form of uniform convergence is what we need to achieve consistency.
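To see how such a jump can arise when the true parameter is zero, the following toy simulation uses soft thresholding as a stand-in for the Lasso in a one-parameter Gaussian model (this simplification, the penalty scaling, and all names are our illustrative assumptions, not the setting of Theorem 3.2.1.1). With a penalty that survives in the limit, the limiting law of the rescaled estimator places an atom at 0, so its distribution function $J(\cdot, P)$ jumps there.

```python
import numpy as np

def soft_threshold(z, lam):
    """Soft-thresholding operator, the scalar analogue of the Lasso map."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

# Toy limit experiment (assumed setup): true coefficient theta = 0,
# root tau_n * (theta_hat_n - 0) with tau_n = sqrt(n), and a penalty
# lambda_n proportional to sqrt(n) so the threshold lam0 persists
# in the limit. The limiting root is then soft_threshold(Z, lam0)
# for a standard normal Z, which has a point mass at 0.
rng = np.random.default_rng(1)
lam0, reps = 1.0, 20000
z = rng.normal(size=reps)            # draws from the limiting Gaussian
limit_root = soft_threshold(z, lam0)
p_atom = np.mean(limit_root == 0.0)  # empirical jump height of J at 0
```

Here `p_atom` estimates $P(|Z| \le \lambda_0) \approx 0.68$ for $\lambda_0 = 1$: a sizeable jump, and exactly the kind of discontinuity at which the worst-case quantile error of the previous paragraph is attained.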

5.2 Uniform consistency for quantile approximation

The present section focuses on the use of subsampling for the construction of confidence intervals only, in contrast to the previous one, where the estimation of the distribution function in a uniform sense was also considered. We will see that achieving asymptotically valid or conservative confidence intervals is possible if the distribution functions satisfy some uniformity or monotonicity condition in the limit.

All results of this section, apart from the Dvoretzky–Kiefer–Wolfowitz inequality, are due to Romano and Shaikh (2010), who stated their results in a uniform sense for a family of probability measures; we follow their exposition.
