Degenerate parabolic stochastic partial differential equations

M. Hofmanová, Stochastic Processes and their Applications 123 (2013) 4294–4336

and similarly for the second term. Let us define
\[
\Theta_\delta(\xi) = \int_{-\infty}^{\xi} \psi_\delta(\zeta)\,\mathrm{d}\zeta.
\]
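In particular, $\Theta_\delta' = \psi_\delta$, so, at least formally, the chain rule gives
\[
\nabla_y\,\Theta_\delta\big(u_1(x,s) - u_2(y,s)\big) = -\,\psi_\delta\big(u_1(x,s) - u_2(y,s)\big)\,\nabla_y u_2(y,s).
\]
This is the identity behind the rewriting of $H$ below, where the factor $\psi_\delta\,\nabla_y u_2$ is absorbed into $\Theta_\delta$ and the $y$-derivative is moved onto $\sigma(y)\,\varrho_\tau(x-y)$ by an integration by parts (with no boundary terms on the torus $\mathbb{T}^N$).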

Then we have $J = J_1 + J_2 + J_3$ with
\[
\begin{aligned}
J_1 &= -\,E\int_0^t\!\int_{(\mathbb{T}^N)^2} (\nabla_x u_1)^*\,\sigma(x)\sigma(x)\,(\nabla\varrho_\tau)(x-y)\,\Theta_\delta\big(u_1(x,s)-u_2(y,s)\big)\,\mathrm{d}x\,\mathrm{d}y\,\mathrm{d}s,\\
J_2 &= -\,E\int_0^t\!\int_{(\mathbb{T}^N)^2} (\nabla_y u_2)^*\,\sigma(y)\sigma(y)\,(\nabla\varrho_\tau)(x-y)\,\Theta_\delta\big(u_1(x,s)-u_2(y,s)\big)\,\mathrm{d}x\,\mathrm{d}y\,\mathrm{d}s,\\
J_3 &= -\,E\int_0^t\!\int_{(\mathbb{T}^N)^2} \Big(|\sigma(x)\nabla_x u_1|^2 + |\sigma(y)\nabla_y u_2|^2\Big)\,\varrho_\tau(x-y)\,\psi_\delta\big(u_1(x,s)-u_2(y,s)\big)\,\mathrm{d}x\,\mathrm{d}y\,\mathrm{d}s.
\end{aligned}
\]

Let
\[
H = E\int_0^t\!\int_{(\mathbb{T}^N)^2} (\nabla_x u_1)^*\,\sigma(x)\sigma(y)\,(\nabla_y u_2)\,\varrho_\tau(x-y)\,\psi_\delta\big(u_1(x,s)-u_2(y,s)\big)\,\mathrm{d}x\,\mathrm{d}y\,\mathrm{d}s.
\]

We intend to show that $J_1 = H + o(1)$, $J_2 = H + o(1)$, where $o(1) \to 0$ as $\tau \to 0$ uniformly in $\delta$, and consequently
\[
J = -\,E\int_0^t\!\int_{(\mathbb{T}^N)^2} \big|\sigma(x)\nabla_x u_1 - \sigma(y)\nabla_y u_2\big|^2\,\varrho_\tau(x-y)\,\psi_\delta\big(u_1(x,s)-u_2(y,s)\big)\,\mathrm{d}x\,\mathrm{d}y\,\mathrm{d}s + o(1) \leq o(1). \tag{21}
\]
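To see how the claims $J_1 = H + o(1)$ and $J_2 = H + o(1)$ yield (21), one may use the pointwise expansion (valid provided $\sigma$ is symmetric, e.g. when $\sigma$ denotes the symmetric square root of the diffusion matrix $A$, so that the cross term can be written as $(\nabla_x u_1)^*\sigma(x)\sigma(y)(\nabla_y u_2)$)
\[
\big|\sigma(x)\nabla_x u_1 - \sigma(y)\nabla_y u_2\big|^2
= |\sigma(x)\nabla_x u_1|^2 + |\sigma(y)\nabla_y u_2|^2 - 2\,(\nabla_x u_1)^*\,\sigma(x)\sigma(y)\,(\nabla_y u_2),
\]
so that $J = J_1 + J_2 + J_3 = 2H + J_3 + o(1)$ is precisely the right-hand side of (21); the final inequality holds because the integrand $|\sigma(x)\nabla_x u_1 - \sigma(y)\nabla_y u_2|^2\,\varrho_\tau\,\psi_\delta$ is nonnegative.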

We only prove the claim for $J_1$ since the case of $J_2$ is analogous. Let us define
\[
g(x,y,s) = (\nabla_x u_1)^*\,\sigma(x)\,\Theta_\delta\big(u_1(x,s) - u_2(y,s)\big).
\]

Here, we employ again the assumption (ii) in Definition 2.2. Recall that it gives us some regularity of the solution in the nondegeneracy zones of the diffusion matrix $A$ and hence $g \in L^2(\Omega \times \mathbb{T}^N_x \times \mathbb{T}^N_y \times [0,T])$. We have

\[
\begin{aligned}
J_1 &= -\,E\int_0^t\!\int_{(\mathbb{T}^N)^2} g(x,y,s)\,\big(\sigma(x)-\sigma(y)\big)\,(\nabla\varrho_\tau)(x-y)\,\mathrm{d}x\,\mathrm{d}y\,\mathrm{d}s\\
&\quad\; - E\int_0^t\!\int_{(\mathbb{T}^N)^2} g(x,y,s)\,\sigma(y)\,(\nabla\varrho_\tau)(x-y)\,\mathrm{d}x\,\mathrm{d}y\,\mathrm{d}s,\\[4pt]
H &= E\int_0^t\!\int_{(\mathbb{T}^N)^2} g(x,y,s)\,\mathrm{div}_y\big(\sigma(y)\,\varrho_\tau(x-y)\big)\,\mathrm{d}x\,\mathrm{d}y\,\mathrm{d}s\\
&= E\int_0^t\!\int_{(\mathbb{T}^N)^2} g(x,y,s)\,\big(\mathrm{div}\,\sigma\big)(y)\,\varrho_\tau(x-y)\,\mathrm{d}x\,\mathrm{d}y\,\mathrm{d}s\\
&\quad\; - E\int_0^t\!\int_{(\mathbb{T}^N)^2} g(x,y,s)\,\sigma(y)\,(\nabla\varrho_\tau)(x-y)\,\mathrm{d}x\,\mathrm{d}y\,\mathrm{d}s,
\end{aligned}
\]
where the divergence is applied row-wise to a matrix-valued function. Therefore, it is enough to show that the first terms in $J_1$ and $H$ have the same limit value as $\tau \to 0$.
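The second equality in the computation of $H$ is just the row-wise product rule: since $\nabla_y\big[\varrho_\tau(x-y)\big] = -(\nabla\varrho_\tau)(x-y)$,
\[
\mathrm{div}_y\big(\sigma(y)\,\varrho_\tau(x-y)\big) = \big(\mathrm{div}\,\sigma\big)(y)\,\varrho_\tau(x-y) - \sigma(y)\,(\nabla\varrho_\tau)(x-y),
\]
and the resulting second term coincides with the second term in the expression for $J_1$ above, so these two contributions cancel in the difference $J_1 - H$.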

For $H$, we obtain easily
\[
E\int_0^t\!\int_{(\mathbb{T}^N)^2} g(x,y,s)\,\big(\mathrm{div}\,\sigma\big)(y)\,\varrho_\tau(x-y)\,\mathrm{d}x\,\mathrm{d}y\,\mathrm{d}s
\]
