Stein's method, Malliavin calculus and infinite-dimensional Gaussian
8.3 Multi-dimensional Stein's Lemma: a Malliavin calculus approach
We start by introducing some useful norms over classes of real-valued matrices.
Definition 8.1 (i) The Hilbert-Schmidt inner product and the Hilbert-Schmidt norm on the class of $d \times d$ real matrices, denoted respectively by $\langle \cdot, \cdot \rangle_{H.S.}$ and $\| \cdot \|_{H.S.}$, are defined as follows: for every pair of matrices $A$ and $B$, $\langle A, B \rangle_{H.S.} := \mathrm{Tr}(AB^{T})$ and $\|A\|_{H.S.} := \sqrt{\langle A, A \rangle_{H.S.}}$.
(ii) The operator norm of a $d \times d$ matrix $A$ over $\mathbb{R}$ is given by $\|A\|_{op} := \sup_{\|x\|_{\mathbb{R}^d} = 1} \|Ax\|_{\mathbb{R}^d}$.
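Both norms are easy to evaluate numerically, which can help in keeping the two apart. The following sketch (not part of the original notes; it assumes numpy is available) computes the Hilbert-Schmidt inner product via the trace formula and the operator norm as the largest singular value.

```python
import numpy as np

def hs_inner(A, B):
    """Hilbert-Schmidt inner product <A, B>_{H.S.} = Tr(A B^T)."""
    return np.trace(A @ B.T)

def hs_norm(A):
    """Hilbert-Schmidt norm sqrt(<A, A>_{H.S.}) (the Frobenius norm)."""
    return np.sqrt(hs_inner(A, A))

def op_norm(A):
    """Operator norm sup_{|x|=1} |Ax|, i.e. the largest singular value."""
    return np.linalg.norm(A, ord=2)

# Illustrative diagonal matrix: ||A||_{H.S.} = sqrt(9 + 16) = 5, ||A||_op = 4
A = np.array([[3.0, 0.0],
              [0.0, 4.0]])
```

For a diagonal matrix the two norms are the Euclidean norm and the maximum of the diagonal entries, respectively, which makes the example easy to check by hand.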
Remark. According to the notation just introduced, we can rewrite the differential characterization of the generator $L$, as given in (7.29), as follows: for every smooth
$$F = f(X(h_1), \ldots, X(h_d)),$$
one has that
$$LF = \langle C, \mathrm{Hess}\, f(Z) \rangle_{H.S.} - \langle Z, \nabla f(Z) \rangle_{\mathbb{R}^d}, \qquad (8.18)$$
where $\mathrm{Hess}\, f$ is the Hessian matrix of $f$, $Z = (X(h_1), \ldots, X(h_d))$, and $C = \{C(i,j) : 1 \le i, j \le d\}$ is the covariance matrix given by $C(i,j) = \langle h_i, h_j \rangle_{H}$.
Given a $d \times d$ positive definite symmetric matrix $C$, we use the notation $N_d(0, C)$ to indicate the law of a $d$-dimensional Gaussian vector with zero mean and covariance $C$. The following result is the $d$-dimensional counterpart of Stein's Lemma 8.1. Here, we provide a proof (which is taken from [60]) that is almost completely based on Malliavin calculus.
Lemma 8.3 Fix an integer $d \ge 2$ and let $C = \{C(i,j) : i, j = 1, \ldots, d\}$ be a $d \times d$ positive definite symmetric real matrix.
(i) Let $Y$ be a random variable with values in $\mathbb{R}^d$. Then $Y \sim N_d(0, C)$ if and only if, for every twice differentiable function $f : \mathbb{R}^d \to \mathbb{R}$ such that $E|\langle C, \mathrm{Hess}\, f(Y) \rangle_{H.S.} - \langle Y, \nabla f(Y) \rangle_{\mathbb{R}^d}| < \infty$, it holds that
$$E[\langle Y, \nabla f(Y) \rangle_{\mathbb{R}^d} - \langle C, \mathrm{Hess}\, f(Y) \rangle_{H.S.}] = 0. \qquad (8.19)$$
(ii) Consider a Gaussian random vector $Z \sim N_d(0, C)$. Let $g : \mathbb{R}^d \to \mathbb{R}$ belong to $C^2(\mathbb{R}^d)$ with bounded first and second derivatives. Then, the function $U_0 g$ defined by
$$U_0 g(x) := \int_0^1 \frac{1}{2t}\, E[g(\sqrt{t}\, x + \sqrt{1-t}\, Z) - g(Z)]\, dt$$
is a solution to the following differential equation (with unknown function $f$):
$$g(x) - E[g(Z)] = \langle x, \nabla f(x) \rangle_{\mathbb{R}^d} - \langle C, \mathrm{Hess}\, f(x) \rangle_{H.S.}, \qquad x \in \mathbb{R}^d. \qquad (8.20)$$
Moreover, one has that
$$\sup_{x \in \mathbb{R}^d} \| \mathrm{Hess}\, U_0 g(x) \|_{H.S.} \le \|C^{-1}\|_{op}\, \|C\|_{op}^{1/2}\, \|g\|_{\mathrm{Lip}}. \qquad (8.21)$$
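The integral representation of $U_0 g$ can also be checked numerically. For $g(x) = \cos(\langle a, x \rangle)$ the inner Gaussian expectation is explicit, $E[g(\sqrt{t}\,x + \sqrt{1-t}\,Z)] = \cos(\sqrt{t}\,\langle a, x \rangle)\, e^{-(1-t)\sigma^2/2}$ with $\sigma^2 = a^T C a$, and differentiating under the integral sign gives explicit integrands for the two terms on the right of (8.20). The sketch below (an illustration assuming numpy; the choices of $a$, $C$ and $x$ are arbitrary) evaluates everything by midpoint quadrature and confirms that $U_0 g$ solves the Stein equation at a given point.

```python
import numpy as np

C = np.array([[1.0, 0.3], [0.3, 1.0]])   # covariance matrix (illustrative)
a = np.array([1.0, 0.5])                  # g(x) = cos(<a, x>)
x = np.array([0.7, -1.2])                 # point at which (8.20) is checked
u = a @ x
s2 = a @ C @ a                            # Var(<a, Z>) for Z ~ N_2(0, C)

# Midpoint quadrature nodes on (0, 1); np.mean over them = integral over [0, 1].
n = 200_000
t = (np.arange(n) + 0.5) / n
w = np.exp(-(1.0 - t) * s2 / 2.0)         # Gaussian damping factor

# U_0 g(x) = int_0^1 (1/2t) [cos(sqrt(t) u) w(t) - exp(-s2/2)] dt
U0 = np.mean((np.cos(np.sqrt(t) * u) * w - np.exp(-s2 / 2.0)) / (2.0 * t))

# Differentiating under the integral sign:
#   <x, grad U_0 g(x)>       = -u * int sin(sqrt(t) u) / (2 sqrt(t)) * w dt
#   <C, Hess U_0 g(x)>_H.S.  = -(a^T C a) * int cos(sqrt(t) u) / 2 * w dt
x_grad = -u * np.mean(np.sin(np.sqrt(t) * u) / (2.0 * np.sqrt(t)) * w)
c_hess = -s2 * np.mean(np.cos(np.sqrt(t) * u) / 2.0 * w)

lhs = np.cos(u) - np.exp(-s2 / 2.0)       # g(x) - E[g(Z)]
rhs = x_grad - c_hess                      # right-hand side of (8.20)
# lhs and rhs agree up to quadrature error, as part (ii) asserts.
```

Note that the integrand of $U_0 g$ has a removable singularity at $t = 0$ (the bracket vanishes linearly in $t$), which is why the midpoint rule, whose nodes avoid the endpoints, behaves well here.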