Bath Institute For Complex Systems - ENS de Cachan - Antenne de ...
multilevel Monte Carlo (MLMC) algorithms for PDEs with random coefficients, together with the main results on their performance. For a more detailed description of the methods, we refer the reader to [7] and the references therein.

In the Monte Carlo framework, we are usually interested in finding the expected value of some functional $Q = G(u)$ of the solution $u$ to our model problem (2.1). Since $u$ is not easily accessible, $Q$ is often approximated by the quantity $Q_h := G(u_h)$, where $u_h$ denotes the finite element solution on a sufficiently fine spatial grid $\mathcal{T}_h$. Thus, to estimate $\mathbb{E}[Q] := \|Q\|_{L^1(\Omega)}$, we compute approximations (or estimators) $\hat{Q}_h$ to $\mathbb{E}[Q_h]$, and quantify the accuracy of our approximations via the root mean square error (RMSE)
$$ e(\hat{Q}_h) := \Big( \mathbb{E}\big[ (\hat{Q}_h - \mathbb{E}[Q])^2 \big] \Big)^{1/2}. $$
The computational cost $C_\varepsilon(\hat{Q}_h)$ of our estimator is then quantified by the number of floating point operations that are needed to achieve a RMSE of $e(\hat{Q}_h) \le \varepsilon$. This will be referred to as the $\varepsilon$-cost.

The classical Monte Carlo (MC) estimator for $\mathbb{E}[Q_h]$ is
$$ \hat{Q}^{\mathrm{MC}}_{h,N} := \frac{1}{N} \sum_{i=1}^{N} Q_h(\omega^{(i)}), \qquad (4.1) $$
where $Q_h(\omega^{(i)})$ is the $i$th sample of $Q_h$ and $N$ independent samples are computed in total.

There are two sources of error in the estimator (4.1): the approximation of $Q$ by $Q_h$, which is related to the spatial discretisation, and the sampling error due to replacing the expected value by a finite sample average. This becomes clear when expanding the mean square error (MSE) and using the fact that for Monte Carlo $\mathbb{E}[\hat{Q}^{\mathrm{MC}}_{h,N}] = \mathbb{E}[Q_h]$ and $\mathbb{V}[\hat{Q}^{\mathrm{MC}}_{h,N}] = N^{-1}\,\mathbb{V}[Q_h]$, where $\mathbb{V}[X] := \mathbb{E}[(X - \mathbb{E}[X])^2]$ denotes the variance of the random variable $X : \Omega \to \mathbb{R}$. We get
$$ e(\hat{Q}^{\mathrm{MC}}_{h,N})^2 = N^{-1}\,\mathbb{V}[Q_h] + \big( \mathbb{E}[Q_h - Q] \big)^2. \qquad (4.2) $$
A sufficient condition to achieve a RMSE of $\varepsilon$ with this estimator is that both of these terms are less than $\varepsilon^2/2$.
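As an illustration, the estimator (4.1) and the error split (4.2) can be sketched in Python on a toy surrogate. The function `sample_Q_h` and its $O(h)$ bias are hypothetical stand-ins for an actual PDE solve and functional evaluation, not part of the text:

```python
import numpy as np

def sample_Q_h(h, rng):
    # Hypothetical stand-in for one sample Q_h(omega^(i)): a real
    # implementation would solve the PDE with a random coefficient on a
    # mesh of width h and evaluate the functional G. Here we use a toy
    # surrogate exp(Z), Z ~ N(0,1), plus a deterministic O(h) bias.
    return np.exp(rng.normal()) + h

def mc_estimator(h, N, seed=0):
    # Classical MC estimator (4.1): average of N i.i.d. samples of Q_h.
    rng = np.random.default_rng(seed)
    samples = np.array([sample_Q_h(h, rng) for _ in range(N)])
    # Return the estimate and its estimated sampling variance V[Q_h]/N,
    # i.e. the first term of the MSE expansion (4.2).
    return samples.mean(), samples.var(ddof=1) / N

q_hat, sampling_var = mc_estimator(h=0.01, N=10_000)
# For this surrogate, E[Q] = exp(1/2) and E[Q_h - Q] = h, so the MSE (4.2)
# is approximately V[Q_h]/N + h^2.
```

In this toy setting both error terms are visible directly: halving the bias term requires halving $h$, while halving the sampling term requires four times as many samples.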
For the first term, this is achieved by choosing a large enough number of samples, $N = O(\varepsilon^{-2})$. For the second term, we need to choose a fine enough finite element mesh $\mathcal{T}_h$, such that $\mathbb{E}[Q_h - Q] = O(\varepsilon)$.

The main idea of the MLMC estimator is very simple. We sample not just from one approximation $Q_h$ of $Q$, but from several. Linearity of the expectation operator implies that
$$ \mathbb{E}[Q_h] = \mathbb{E}[Q_{h_0}] + \sum_{l=1}^{L} \mathbb{E}[Q_{h_l} - Q_{h_{l-1}}], \qquad (4.3) $$
where $\{h_l\}_{l=0,\ldots,L}$ are the mesh widths of a sequence of increasingly fine triangulations $\mathcal{T}_{h_l}$ with $\mathcal{T}_h := \mathcal{T}_{h_L}$, the finest mesh, and $h_{l-1}/h_l \le M^*$, for all $l = 1, \ldots, L$. Hence, the expectation on the finest mesh is equal to the expectation on the coarsest mesh, plus a sum of corrections adding the difference in expectation between simulations on consecutive meshes. The multilevel idea is now to independently estimate each of these terms such that the overall variance is minimised for a fixed computational cost.

Setting for convenience $Y_0 := Q_{h_0}$ and $Y_l := Q_{h_l} - Q_{h_{l-1}}$, for $1 \le l \le L$, we define the MLMC estimator simply as
$$ \hat{Q}^{\mathrm{ML}}_{h,\{N_l\}} := \sum_{l=0}^{L} \hat{Y}^{\mathrm{MC}}_{l,N_l} = \sum_{l=0}^{L} \frac{1}{N_l} \sum_{i=1}^{N_l} Y_l(\omega^{(i)}), \qquad (4.4) $$
where, importantly, $Y_l(\omega^{(i)}) = Q_{h_l}(\omega^{(i)}) - Q_{h_{l-1}}(\omega^{(i)})$, i.e. the same sample is used on both meshes.
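The telescoping sum (4.3) and the coupled estimator (4.4) can be sketched on the same kind of toy surrogate. All names and the model $Q_h(\omega) = e^z + h\,z$ are hypothetical, chosen only so that $\mathbb{V}[Y_l]$ visibly decreases with the level:

```python
import numpy as np

def sample_Y_l(l, h0, M, rng):
    # One coupled sample for level l: the SAME random draw z (the sample
    # omega^(i)) is used on the fine and the coarse mesh, as in (4.4).
    # Toy surrogate (hypothetical): Q_h(omega) = exp(z) + h*z, h_l = h0/M^l.
    z = rng.normal()
    h_l = h0 / M**l
    if l == 0:
        return np.exp(z) + h_l * z                   # Y_0 := Q_{h_0}
    q_fine = np.exp(z) + h_l * z                     # Q_{h_l}(omega^(i))
    q_coarse = np.exp(z) + (h_l * M) * z             # Q_{h_{l-1}}(omega^(i))
    return q_fine - q_coarse                         # Y_l, small because exp(z) cancels

def mlmc_estimator(N_levels, h0=0.5, M=2, seed=0):
    # MLMC estimator (4.4): independent MC estimates of E[Y_l], summed.
    # Coarse levels get many cheap samples, fine levels only a few,
    # because V[Y_l] shrinks as consecutive meshes get closer.
    rng = np.random.default_rng(seed)
    return sum(
        np.mean([sample_Y_l(l, h0, M, rng) for _ in range(N_l)])
        for l, N_l in enumerate(N_levels)
    )

q_ml = mlmc_estimator(N_levels=[2000, 200, 50])  # L = 2, decreasing N_l
```

The essential point the sketch shows is the coupling: because $e^z$ cancels in $Y_l$, the corrections have variance $(h_{l-1}-h_l)^2$, so a few samples suffice on the expensive fine levels while most of the work goes into the cheap coarse level.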
