Lemma 3.12 (First Strang Lemma). Let Assumptions A1–A3 hold. Then, for almost all $\omega \in \Omega$,
\[
|u(\omega,\cdot) - \tilde u_h(\omega,\cdot)|_{H^1(D)} \le \inf_{v_h \in V_h} \left\{ \left(1 + \frac{a_{\max}(\omega)}{a_{\min}(\omega)}\right) |u(\omega,\cdot) - v_h|_{H^1(D)} + \frac{1}{a_{\min}(\omega)} \sup_{w_h \in V_h} \frac{\big|b_\omega(v_h, w_h) - \tilde b_\omega(v_h, w_h)\big|}{|w_h|_{H^1(D)}} \right\}.
\]

Proposition 3.13. Let Assumptions A1–A4 hold, for some $0 < t < 1$ and $p^* \in (0,\infty]$. Then, for all $p < p^*$, $s < t$ with $s \neq 1/2$ and $0 < h < 1$, we have
\[
\|u - \tilde u_h\|_{L^p(\Omega, H^1_0(D))} \lesssim C_{3.13}\, \|f\|_{L^{p^*}(\Omega, H^{s-1}(D))}\, h^s, \quad \text{where} \quad C_{3.13} := \left\| \frac{a_{\max}^{5/2}\, \|a\|_{C^t(D)}}{a_{\min}^{9/2}} \right\|_{L^q(\Omega)}
\]
with $q = \frac{p^* p}{p^* - p}$. If Assumptions A2 and A3 hold with $t = 1$, then
\[
\|u - \tilde u_h\|_{L^p(\Omega, H^1_0(D))} \lesssim C_{3.13}\, \|f\|_{L^{p^*}(\Omega, L^2(D))}\, h.
\]

Proof. We first note that, for all $w_h \in V_h$,
\[
\big|b_\omega(v_h, w_h) - \tilde b_\omega(v_h, w_h)\big| = \Big| \sum_{\tau \in \mathcal{T}_h} \int_\tau \big(a(\omega,x) - a(\omega,x_\tau)\big)\, \nabla v_h \cdot \nabla w_h \, dx \Big|
\le \sum_{\tau \in \mathcal{T}_h} \int_\tau \frac{|a(\omega,x) - a(\omega,x_\tau)|}{|x - x_\tau|^t}\, |x - x_\tau|^t\, |\nabla v_h \cdot \nabla w_h| \, dx
\le |a(\omega)|_{C^t(D)}\, h^t\, |v_h|_{H^1(D)}\, |w_h|_{H^1(D)}.
\]
Hence, it follows from Lemma 3.12 that, for almost all $\omega \in \Omega$,
\[
|u(\omega,\cdot) - \tilde u_h(\omega,\cdot)|_{H^1(D)} \le \inf_{v_h \in V_h} \left\{ \left(1 + \frac{a_{\max}(\omega)}{a_{\min}(\omega)}\right) |u(\omega,\cdot) - v_h|_{H^1(D)} + \frac{h^t\, |a(\omega)|_{C^t(D)}}{a_{\min}(\omega)}\, |v_h|_{H^1(D)} \right\}.
\]
Let us now make the particular choice $v_h := u_h(\omega,\cdot) \in V_h$, i.e. the solution of (3.8). Then it follows from (3.9) and (3.10), together with the fact that $h^t < h^s$ for any $s < t \le 1$ and $h < 1$, that
\[
|u(\omega,\cdot) - \tilde u_h(\omega,\cdot)|_{H^1(D)} \lesssim \left(1 + \frac{a_{\max}(\omega)}{a_{\min}(\omega)}\right) \left( C_{3.1}(\omega)\, C_{3.8}(\omega) + \frac{|a(\omega)|_{C^t(D)}}{a_{\min}(\omega)^2} \right) \|f(\omega,\cdot)\|_{H^{s-1}(D)}\, h^s.
\]
The result follows again via an application of Hölder's inequality.

Remark 3.14. To recover the $O(h^{2s})$ convergence for the $L^2$-error $\|u - \tilde u_h\|_{L^p(\Omega, L^2(D))}$ in the case of quadrature, we require additional regularity of the coefficient function $a$. If $a(\omega,\cdot)$ is at least $C^{2s}$, then we can again obtain $O(h^{2s})$ convergence even with quadrature, using duality as in Corollary 3.10. This is, for example, the case for lognormal random fields with Gaussian covariance kernel, where $a(\omega,\cdot) \in C^\infty(D)$.
In the context of the exponential covariance kernel, the $L^2$-convergence rate is always bounded by $O(h^{1/2-\delta})$, due to the lack of regularity in $a$.
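The element-wise quadrature estimate in the proof of Proposition 3.13 can be checked numerically. The following is a minimal 1-d sketch, not from the paper: the coefficient $a(x) = 2 + |x - 0.4|^t$ (which has Hölder seminorm $|a|_{C^t} \le 1$), the random piecewise-linear test functions and all grid parameters are illustrative assumptions. It compares the bilinear form with the exact coefficient against the one with $a$ frozen at the element midpoints, and confirms the bound $|b_\omega(v_h,w_h) - \tilde b_\omega(v_h,w_h)| \le |a|_{C^t}\, h^t\, |v_h|_{H^1} |w_h|_{H^1}$.

```python
import numpy as np

def quad_perturbation(n, t=0.5, seed=0):
    """Normalised quadrature perturbation for P1 elements on [0, 1].

    Compares the bilinear form with the exact coefficient a(x) against
    the one with a frozen at the element midpoints x_tau, as in the
    proof of Proposition 3.13 (illustrative 1-d sketch).
    """
    rng = np.random.default_rng(seed)
    h = 1.0 / n
    nodes = np.linspace(0.0, 1.0, n + 1)
    a = lambda x: 2.0 + np.abs(x - 0.4) ** t       # a C^t coefficient, |a|_{C^t} <= 1
    # random piecewise-linear v_h, w_h with zero boundary values
    v = np.concatenate(([0.0], rng.standard_normal(n - 1), [0.0]))
    w = np.concatenate(([0.0], rng.standard_normal(n - 1), [0.0]))
    dv, dw = np.diff(v) / h, np.diff(w) / h        # constant gradients per element
    # "exact" element integrals of a via a fine midpoint rule (m points/element)
    m = 200
    fine = nodes[:-1, None] + (np.arange(m)[None, :] + 0.5) * (h / m)
    int_a = a(fine).sum(axis=1) * (h / m)
    mid = 0.5 * (nodes[:-1] + nodes[1:])
    b_exact = np.sum(dv * dw * int_a)              # b_omega(v_h, w_h)
    b_frozen = np.sum(dv * dw * a(mid) * h)        # b~_omega(v_h, w_h)
    semi_v = np.sqrt(h * np.sum(dv ** 2))          # |v_h|_{H^1}
    semi_w = np.sqrt(h * np.sum(dw ** 2))
    return abs(b_exact - b_frozen) / (semi_v * semi_w)

for n in (16, 64, 256):
    # theory: normalised perturbation <= |a|_{C^t} * h^t  (here |a|_{C^t} <= 1)
    print(n, quad_perturbation(n), (1.0 / n) ** 0.5)
```

The observed perturbation typically decays faster than the worst-case rate $h^t$, since the frozen-coefficient error only dominates on elements near the point where $a$ is genuinely non-smooth.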
3.4 Truncated Fields

Combining the results from Sections 2.4 and 3.2, we can also estimate the error in the case of truncated KL expansions of log-normal coefficients, which we will use in the numerical experiments in Section 5. These results improve the error estimate given in [5] for log-normal coefficients that do not admit full regularity, e.g. those with exponential covariance kernel. The novelty with respect to [5] is the improved bound for the finite element error, leading to a bound that is uniform with respect to the number of terms in the KL expansion. We will assume here that $f(\omega,\cdot) \in L^2(D)$.

Let $a$ be a log-normal field as defined in Section 2.4 via a KL expansion, and let $a_K$ be the expansion truncated after $K$ terms. Now, for almost all $\omega \in \Omega$, let $u_{K,h}(\omega,\cdot)$ denote the unique function in $V_h$ that satisfies
\[
b_{K,\omega}(u_{K,h}(\omega,\cdot), v_h) = L_\omega(v_h), \quad \text{for all } v_h \in V_h, \tag{3.11}
\]
where $b_{K,\omega}(u,v) := \int_D a_K(\omega,x)\, \nabla u \cdot \nabla v \, dx$. Then we have the following result.

Theorem 3.15. Let $f \in L^{p^*}(\Omega, L^2(D))$, for some $p^* \in (0,\infty]$. Then, in the Gaussian covariance case, there exists a $c_1 > 0$ such that, for any $p < p^*$,
\[
\|u - u_{K,h}\|_{L^p(\Omega, H^1_0(D))} \lesssim \Big( h + K^{\frac{d-1}{2d}} \exp\big(-c_1 K^{1/d}\big) \Big)\, \|f\|_{L^{p^*}(\Omega, L^2(D))}.
\]
In the case of the 1-norm exponential covariance, for any $p < p^*$, $0 < \rho < 1$ and $0 < s < 1/2$,
\[
\|u - u_{K,h}\|_{L^p(\Omega, H^1_0(D))} \le C_{3.15} \Big( h^s + K^{\frac{\rho-1}{2}} \Big)\, \|f\|_{L^{p^*}(\Omega, L^2(D))},
\]
where $C_{3.15} := \min\{\eta(K)\, h^{1-s},\, C_{3.9}\}$ and $\eta(K)$ is an exponential function of $K$ given in [5, Proposition 6.3].

Proof. It follows from Proposition 2.6 that $a_K$ satisfies Assumptions A1–A2. Therefore we can apply Theorem 3.9 to bound $\|u_K - u_{K,h}\|_{L^2(\Omega, H^1_0(D))}$. The result then follows from Theorem 2.7, Proposition 2.8 and the triangle inequality.

In contrast to [5, Proposition 6.3], this bound is uniform in $K$.
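A discrete analogue of the truncated expansion used in (3.11) is easy to simulate. The sketch below is illustrative and not from the paper: the grid size, correlation length and variance are assumed values, and the discrete KL modes come from an eigendecomposition of the 1-norm exponential covariance matrix on the grid. It compares the $K$-term lognormal field $a_K$ with the full expansion for one fixed sample.

```python
import numpy as np

# Discrete KL expansion of g = log a on a 1-d grid with 1-norm exponential
# covariance; grid size, correlation length and variance are illustrative.
n, corr_len, sigma2 = 400, 0.3, 1.0
x = np.linspace(0.0, 1.0, n)
C = sigma2 * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)

eigvals, eigvecs = np.linalg.eigh(C)                 # ascending order
eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]   # sort descending
eigvals = np.clip(eigvals, 0.0, None)                # guard tiny negative round-off

rng = np.random.default_rng(1)
xi = rng.standard_normal(n)                          # one sample of the KL coordinates

def a_K(K):
    """Lognormal field exp(g_K) from the K-term truncated expansion."""
    g = eigvecs[:, :K] @ (np.sqrt(eigvals[:K]) * xi[:K])
    return np.exp(g)

a_full = a_K(n)
for K in (5, 20, 80):
    rel_err = np.max(np.abs(a_K(K) - a_full)) / np.max(a_full)
    print(K, rel_err)
```

For the exponential kernel the eigenvalues decay only algebraically, so the truncation error in the field decreases slowly in $K$, which is consistent with the $K^{(\rho-1)/2}$ term in Theorem 3.15 and with the large $K$ needed in the experiments of Section 5.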
Note that for small values of $K$, the constant $C_{3.15}$ will actually be of order $O(h^{1-s})$, leading to a linear dependence on $h$ also in the exponential covariance case (as stated in [5, Proposition 6.3]). For larger values of $K$ this will not be the case, and the lower regularity of $u_K$ will affect the convergence rate with respect to $h$.

Remark 3.16. As we will see in Section 5, in the exponential covariance case, for correlation lengths smaller than the diameter of the domain, a relatively large number $K$ of KL modes is necessary to achieve even 10% accuracy. The new uniform error bound in Theorem 3.15 is therefore also crucial for the analysis of stochastic Galerkin and stochastic collocation methods, such as the one given in [5].

4 Convergence Analysis of Multilevel Monte Carlo Methods

We will now apply this new finite element error analysis to give a rigorous bound on the cost of the multilevel Monte Carlo method applied to (2.1), for general random fields satisfying Assumptions A1–A3, and to establish its superiority over the classical Monte Carlo method. This builds on the recent paper [7]. We start by briefly recalling the classical Monte Carlo (MC) and