Doctoral Thesis, Université Paris 6, Speciality ...

4.2. Main result

(cf. Chapter 5 for more details on this problem). The asymptotically exact estimator for $\tilde\beta > 1$ has the form
\[
\hat f_n(t) = \int_{\mathbb{R}^d} \sum_{j=1}^{d} \frac{1}{h_j}\, K_{\tilde\beta}\!\left(\frac{u_j - t_j}{h_j}\right) dY_u,
\]
where $h_j$ is defined as previously, with the new constant $C_{ad}$ and a new kernel
\[
K_{\tilde\beta}(t) = \frac{f_{\tilde\beta}(t)}{\int f_{\tilde\beta}(s)\,ds}.
\]

2. We suppose that we have observations $Y_t$ for $t \in \mathbb{R}^d$. We could have obtained the same result for observations $Y_t$ with $t \in [0,1]^d$ by modifying the kernel $K_{\tilde\beta}$ on the boundary, as in Chapter 2 and Chapter 3.

3. Our result is for estimation in the Gaussian white noise additive model. We could have obtained a similar result for the additive regression model with fixed design:
\[
Y_i = f(l_i) + \xi_i, \qquad i = 1, \dots, n,
\]
where $(l_1, \dots, l_n)$ is an equidistant grid on $[0,1]^d$ and the function $f$ is of the form
\[
f(x) = \mu + \sum_{j=1}^{d} f_j(x_j),
\]
with $\mu \in \mathbb{R}$ and the $f_j$ such that $\int f_j(t)\,dt = 0$ and $f_j \in \Sigma(\beta_j, L_j)$. For this model, the minimax rate of convergence and the exact constant are the same as in Theorem 4.1, and an asymptotically exact estimator can be chosen as a Nadaraya-Watson estimator defined for $t \in [0,1]^d$ by
\[
\hat f_n(t) = \frac{\sum_{i=1}^{n} Y_i\, K_{\tilde\beta}\!\left(\frac{l_i - t}{h}\right)}{\sum_{i=1}^{n} K_{\tilde\beta}\!\left(\frac{l_i - t}{h}\right)},
\]
where the notation $t/s$, for two vectors $t = (t_1, \dots, t_d)$ and $s = (s_1, \dots, s_d)$, denotes the vector $(t_1/s_1, \dots, t_d/s_d)$, $h = (h_1, \dots, h_d)$ with the $h_j$ defined by (4.6), and the kernel $K_{\tilde\beta}$ is the same as in (4.7), modified on the boundary (a numerical sketch of this estimator is given after this section).

For the additive regression model with random design, the exact constant and the asymptotically exact estimator are almost the same, but they additionally depend on the minimum of the design density (as in Chapter 2).

4. For adaptive estimation, we conjecture that the Lepski method provides the exact adaptive asymptotics for the loss function $w(x) = x^p$ (cf. Chapter 6); a sketch of the generic selection rule also follows this section. In the Lepski method, for estimation in sup-norm of Hölder functions (cf. Lepski (1992)), the goal is to estimate a function $f \in \Sigma(\beta, L)$ knowing only that $\beta \in B$, where $B$ is a subset of $\mathbb{R}_+$. For each smoothness $\beta \in B$, we denote by $\psi_n(\beta)$ the minimax rate of convergence.
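The following Python sketch makes the fixed-design Nadaraya-Watson estimator of remark 3 concrete on a toy grid. It is only a numerical illustration under stated assumptions, not the thesis's procedure: the Gaussian kernel `k_gauss`, the bandwidth values in `h`, and the choice of a coordinatewise sum for the multivariate kernel all stand in for the kernel $K_{\tilde\beta}$ of (4.7) and the bandwidths of (4.6), which are not reproduced here.

```python
import numpy as np

def k_gauss(u):
    # Univariate stand-in kernel; the thesis uses the optimal kernel
    # K_{beta~} of (4.7), which is not reproduced here.
    return np.exp(-0.5 * u ** 2) / np.sqrt(2.0 * np.pi)

def nw_additive(t, design, y, h):
    """Nadaraya-Watson estimate of f at a point t in [0,1]^d.

    design : (n, d) array of grid points l_i
    y      : (n,) array of observations Y_i = f(l_i) + xi_i
    h      : (d,) array of bandwidths (the thesis takes h_j from (4.6))

    The multivariate kernel is taken to be the sum of univariate
    kernels over coordinates, mirroring the additive kernel of the
    white-noise estimator above (an assumption about K_{beta~}).
    """
    u = (design - t) / h           # componentwise (l_i - t)/h
    w = k_gauss(u).sum(axis=1)     # kernel weight for each observation
    return float(np.dot(w, y) / np.sum(w))

# Toy usage with d = 2 and an additive signal whose components
# integrate to zero on [0, 1], as required of the f_j.
rng = np.random.default_rng(0)
m = 50                             # grid points per axis, n = m^2
axis = (np.arange(m) + 0.5) / m
g1, g2 = np.meshgrid(axis, axis, indexing="ij")
design = np.column_stack([g1.ravel(), g2.ravel()])
signal = (1.0 + np.sin(2 * np.pi * design[:, 0])
              + np.cos(2 * np.pi * design[:, 1]))
y = signal + 0.3 * rng.standard_normal(signal.shape)
h = np.array([0.08, 0.08])         # illustrative bandwidths, not (4.6)
print(nw_additive(np.array([0.3, 0.7]), design, y, h))
```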
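For remark 4, the Lepski method can be summarized as follows: given a family of estimators indexed by bandwidths (equivalently, candidate smoothness values) ordered from largest to smallest, it retains the smallest bandwidth whose estimate stays within a prescribed sup-norm distance of every estimate built with a larger bandwidth. The sketch below implements this generic selection rule only; the threshold values are illustrative assumptions and do not reproduce the exact adaptive constants conjectured in connection with Chapter 6.

```python
import numpy as np

def lepski_select(estimates, thresholds):
    """Generic Lepski selection rule (illustrative only).

    estimates  : list of arrays, the estimator evaluated on a common
                 grid, ordered from largest to smallest bandwidth
    thresholds : array of admissible sup-norm deviations, one per
                 bandwidth, same order (these play the role of the
                 rates psi_n(beta); their exact form is an assumption)

    Returns the index of the smallest bandwidth whose estimate stays
    within the threshold of every estimate with a larger bandwidth.
    """
    chosen = 0
    for k in range(1, len(estimates)):
        compatible = all(
            np.max(np.abs(estimates[k] - estimates[j])) <= thresholds[j]
            for j in range(k)
        )
        if not compatible:
            break
        chosen = k
    return chosen

# Toy usage: three "estimates" of the same curve on a common grid;
# the last one deviates too much and is rejected, so index 1 wins.
grid = np.linspace(0.0, 1.0, 101)
truth = np.sin(2 * np.pi * grid)
estimates = [truth + eps for eps in (0.05, 0.02, 0.5)]
print(lepski_select(estimates, np.array([0.1, 0.1, 0.1])))  # -> 1
```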
