Stein's method, Malliavin calculus and infinite-dimensional Gaussian
(i) For every $1 \le i \le d$, $F_i^{(n)}$ converges in distribution to a centered Gaussian random variable with variance $C(i,i)$.

(ii) For every $1 \le i \le d$, $E\big[(F_i^{(n)})^4\big] \to 3\,C(i,i)^2$.

(iii) For every $1 \le i \le d$ and every $1 \le r \le q_i - 1$, $\big\| f_i^{(n)} \otimes_r f_i^{(n)} \big\|_{H^{\otimes 2(q_i - r)}} \to 0$.

(iv) The vector $F^{(n)}$ converges in distribution to a $d$-dimensional Gaussian vector $N_d(0, C)$.

Moreover, if $C(i,j) = \delta_{ij}$, where $\delta_{ij}$ is the Kronecker symbol, then any one of conditions (i)–(iv) above is equivalent to the following:

(v) For every $1 \le i \le d$, $\big\| DF_i^{(n)} \big\|_H^2 \overset{L^2}{\longrightarrow} q_i$.
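Condition (ii) can be read as a moment-matching requirement: it asks that the fourth moment of $F_i^{(n)}$ converge to the fourth moment of the Gaussian limit appearing in (i). This follows from the standard even-moment formula for a centered Gaussian variable:

```latex
% For N \sim \mathscr{N}(0,\sigma^2), the even moments are
% E[N^{2m}] = (2m-1)!!\,\sigma^{2m}; in particular, with m = 2,
E\big[N^4\big] \;=\; 3\,\sigma^4 ,
% so, taking \sigma^2 = C(i,i), the target value in condition (ii) is
E\big[N^4\big] \;=\; 3\,C(i,i)^2 .
```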
Remark. The crucial implication in the statement of Theorem 10.2 is (i) $\Rightarrow$ (iv), yielding that, for random vectors composed of chaotic random variables and verifying the asymptotic covariance condition (10.14), componentwise convergence in distribution towards a Gaussian vector always implies joint convergence. This fact is extremely useful for applications: see for instance [3], [33], [48], [54] and [55].
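As a numerical illustration (not taken from the text), one can check condition (ii) for a single component living in the second Wiener chaos: for $F_n = (2n)^{-1/2}\sum_{k \le n}(X_k^2 - 1)$ with $X_k$ i.i.d. standard Gaussian, the variance is $1$ and the fourth moment should approach $3$. A minimal Monte Carlo sketch, with the sample sizes chosen only for speed:

```python
import numpy as np

# Simulate F_n = (2n)^{-1/2} * sum_{k<=n} (X_k^2 - 1), a unit-variance
# element of the second Wiener chaos; by the fourth-moment condition (ii)
# its fourth moment should be close to 3 for large n.
rng = np.random.default_rng(0)
n, nsim, chunk = 500, 50_000, 5_000
f = np.empty(nsim)
for start in range(0, nsim, chunk):  # generate in chunks to bound memory
    x = rng.standard_normal((chunk, n))
    f[start:start + chunk] = (x**2 - 1).sum(axis=1) / np.sqrt(2 * n)

print(f.var())         # close to 1, the asymptotic variance
print(np.mean(f**4))   # close to 3, the Gaussian fourth moment
```

The same experiment repeated for a vector of such components would illustrate the remark above: once each component is asymptotically Gaussian and the empirical covariances stabilize, joint Gaussianity comes for free.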
We conclude this section by pointing out the remarkable fact that, for vectors of multiple Wiener-Itô integrals of arbitrary length, the Wasserstein distance metrizes the weak convergence towards a Gaussian vector with positive definite covariance. Once again, this result is not trivial, since the topology induced by the Wasserstein distance is stronger than the topology of weak convergence.
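For concreteness, recall the dual form of the Wasserstein distance between the laws of random vectors $X$ and $Y$:

```latex
d_W(X, Y) \;=\; \sup_{\|h\|_{\mathrm{Lip}} \le 1}
  \bigl| \, E[h(X)] - E[h(Y)] \, \bigr| .
```

The supremum runs over all $1$-Lipschitz test functions, not merely bounded continuous ones, which is why convergence in $d_W$ is in general strictly stronger than weak convergence (it additionally forces convergence of first absolute moments).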
Proposition 10.3 (See [60]) Fix $d \ge 2$, let $C$ be a positive definite $d \times d$ symmetric matrix, and let $1 \le q_1 \le \ldots \le q_d$. Consider vectors
$$F^{(n)} = \big(F_1^{(n)}, \ldots, F_d^{(n)}\big) = \big(I_{q_1}(f_1^{(n)}), \ldots, I_{q_d}(f_d^{(n)})\big), \quad n \ge 1,$$
with $f_i^{(n)} \in H^{\odot q_i}$ for every $i = 1, \ldots, d$. Assume moreover that $F^{(n)}$ satisfies condition (10.14).
Then, as $n \to \infty$, the following three conditions are equivalent:

(a) $d_W(F^{(n)}, Z) \to 0$.

(b) For every $1 \le i \le d$, $q_i^{-1}\big\| DF_i^{(n)} \big\|_H^2 \overset{L^2}{\longrightarrow} C(i,i)$ and, for every $1 \le i \ne j \le d$,
$$\big\langle DF_i^{(n)}, -DL^{-1}F_j^{(n)} \big\rangle_H = q_j^{-1}\big\langle DF_i^{(n)}, DF_j^{(n)} \big\rangle_H \overset{L^2}{\longrightarrow} C(i,j).$$

(c) $F^{(n)}$ converges in distribution to $Z \sim N_d(0, C)$.
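The equality inside condition (b) is worth spelling out. Since $F_j^{(n)} = I_{q_j}(f_j^{(n)})$ lives in the $q_j$-th Wiener chaos, it is an eigenfunction of the Ornstein-Uhlenbeck generator $L$ (with $D$ and $L$ as defined earlier in these notes), and the inverse operator acts by a scalar, producing the factor $q_j^{-1}$:

```latex
L\, I_{q_j}(f) = -q_j\, I_{q_j}(f)
  \;\Longrightarrow\;
  -L^{-1} F_j^{(n)} = q_j^{-1} F_j^{(n)},
\qquad\text{hence}\qquad
\big\langle DF_i^{(n)}, -DL^{-1}F_j^{(n)} \big\rangle_H
  = q_j^{-1}\big\langle DF_i^{(n)}, DF_j^{(n)} \big\rangle_H .
```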
Proof. Since convergence in the Wasserstein distance implies convergence in distribution, the implication (a) $\Rightarrow$ (c) is trivial. The implication (b) $\Rightarrow$ (a) is a consequence of relation (10.10). Now assume that (c) is verified, that is, $F^{(n)}$ converges in law to $Z \sim N_d(0, C)$ as $n$ goes to infinity. By Theorem 10.2 we have that, for any $i \in \{1, \ldots, d\}$ and $r \in \{1, \ldots, q_i - 1\}$,
$$\big\| f_i^{(n)} \otimes_r f_i^{(n)} \big\|_{H^{\otimes 2(q_i - r)}} \underset{n \to \infty}{\longrightarrow} 0.$$
By combining Corollary 10.2 with Lemma 10.1, one therefore easily deduces that, since (10.14) holds, condition (b) must necessarily be satisfied.