Discontinuous Galerkin methods Lecture 3 - Brown University

RMMC 2008

Discontinuous Galerkin methods

Lecture 3

Jan S Hesthaven
Brown University
Jan.Hesthaven@Brown.edu

8.4 Scattering about a vertical cylinder in a finite-width channel

A numerical study is carried out to both test the consistency of the imposed boundary conditions and investigate how the geometric representation of the domain may affect the computed solution. In a numerical model based on the linear Padé (2,2) rotational velocity version we set up a test for open-channel flow. We seek to model the scattering of an incident wave field which is propagating toward a bottom-mounted rigid cylinder positioned in the middle of a finite-width channel. Due to the symmetry, the solution is mathematically …

[Figure 8.12: Scattering of waves about a cylinder in a finite-width channel. Axis ticks and contour-level values omitted.]

Darmstadt International Workshop, October 2004 – p.79


A brief overview of what’s to come

• Lecture 1: Introduction, Motivation, History
• Lecture 2: Basic elements of DG-FEM
• Lecture 3: Linear systems and some theory
• Lecture 4: A bit more theory and discrete stability
• Lecture 5: Attention to implementations
• Lecture 6: Nonlinear problems and properties
• Lecture 7: Problems with discontinuities and shocks
• Lecture 8: Higher order/Global problems


Lecture 3

• Let’s briefly recall what we know
• Constructing fluxes for linear systems
• The local basis and Legendre polynomials
• Approximation theory on the interval


Let us recall

We already know a lot about the basic DG-FEM:

• Stability is provided by carefully choosing the numerical flux.
• Accuracy appears to be given by the local solution representation.
• We can utilize major advances on monotone schemes to design fluxes.
• The scheme generalizes with very few changes to very general problems -- multidimensional systems of conservation laws.


At least in principle -- but what can we actually prove?


2.4 Interlude on linear hyperbolic problems

But first, a bit more on fluxes. Let us briefly look a little more carefully at linear systems.

For linear systems, the construction of the upwind numerical flux is particularly simple and we will discuss this in a bit more detail. Important application areas include Maxwell’s equations and the equations of acoustics and elasticity.

To illustrate the basic approach, let us consider the two-dimensional system

Q(x) ∂u/∂t + ∇·F = Q(x) ∂u/∂t + ∂F1/∂x + ∂F2/∂y = 0,   (2.19)

where the flux is assumed to be given as

F = [F1, F2] = [A1(x)u, A2(x)u].

Furthermore, we will make the natural assumption that Q(x) is invertible and symmetric for all x ∈ Ω.

Prominent examples are

• Acoustics
• Electromagnetics
• Elasticity

In such cases we can derive exact upwind fluxes.


Linear systems and fluxes

To formulate the numerical flux, we will need an approximation of n̂·F utilizing information from both sides of the interface. Exactly how this information is combined should follow the dynamics of the equations.

Assume first that all coefficients vary smoothly; that is, Q(x) and the Ai(x) vary smoothly throughout Ω. In this case, we can rewrite Eq. (2.19) as

Q(x) ∂u/∂t + A1(x) ∂u/∂x + A2(x) ∂u/∂y + B(x)u = 0,

where B collects all low-order terms; for example, it vanishes if the Ai are constant. Since we are interested in the formulation of a flux along the normal, n̂, we will consider the operator

Π = (n̂x A1(x) + n̂y A2(x)).

Note in particular that

n̂·F = Πu.

The dynamics of the linear system can be understood by considering Q⁻¹Π. Now diagonalize this: let us assume that Q⁻¹Π can be diagonalized as

Q⁻¹Π = S Λ S⁻¹,

where the diagonal matrix, Λ, has purely real entries; that is, Eq. (2.19) is a strongly hyperbolic system [142]. We express this as

Λ = Λ⁺ + Λ⁻,

corresponding to the elements of Λ that have positive and negative signs, respectively. Thus, the nonzero elements of Λ⁻ correspond to those elements of the characteristic vector S⁻¹u where the direction of propagation is opposite the normal (i.e., they are incoming components). In contrast, those corresponding to Λ⁺ reflect components propagating along n̂ (i.e., leaving through the boundary). The basic picture is illustrated in Fig. 2.3, where we, for completeness, also illustrate a λ2 = 0 eigenvalue (i.e., a non-propagating mode).

With this basic understanding, it is clear that a numerical upwind flux can be defined as

(n̂·F)* = Q S (Λ⁺ S⁻¹ u⁻ + Λ⁻ S⁻¹ u⁺).

Fig. 2.3. Sketch of the characteristic wave speeds (λ1 < 0, λ2 = 0, λ3 > 0) at the boundary between two states, u⁻ and u⁺. The two intermediate states, u* and u**, are used to derive the upwind flux.
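The upwind flux (n̂·F)* = QS(Λ⁺S⁻¹u⁻ + Λ⁻S⁻¹u⁺) is easy to evaluate numerically. A minimal sketch, assuming a hypothetical 2×2 system with Q = I and Π = [[0,1],[1,0]] (wave speeds ±1); the trace values u⁻, u⁺ are made up for illustration:

```python
import numpy as np

# Sketch: upwind flux (n.F)* = Q S (Lam+ S^-1 u-  +  Lam- S^-1 u+)
# for a hypothetical 2x2 system with Q = I and Pi = n_x * A, A = [[0,1],[1,0]].
Q = np.eye(2)
A = np.array([[0.0, 1.0], [1.0, 0.0]])
nx = 1.0                      # outward normal of the "minus" element
Pi = nx * A

lam, S = np.linalg.eig(np.linalg.inv(Q) @ Pi)   # Q^-1 Pi = S Lam S^-1
Lp = np.diag(np.maximum(lam, 0.0))              # Lam+ : outgoing waves, use u-
Lm = np.diag(np.minimum(lam, 0.0))              # Lam- : incoming waves, use u+

um = np.array([1.0, 0.0])     # interior trace u-  (made-up values)
up = np.array([0.0, 0.0])     # exterior trace u+
Sinv = np.linalg.inv(S)
flux = Q @ S @ (Lp @ Sinv @ um + Lm @ Sinv @ up)

# Consistency: summing the two parts recovers n.F = Pi u when u- = u+.
assert np.allclose(Q @ S @ (Lp + Lm) @ Sinv, Pi)
print(flux)
```

Note that only the outgoing characteristics of u⁻ and the incoming characteristics of u⁺ contribute, which is exactly the upwinding principle stated above.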


Linear systems and fluxes

For non-smooth coefficients, it is a little more complex. For simplicity, we assume that we have only three entries in Λ, given as

λ1 = −λ, λ2 = 0, λ3 = λ,

with λ > 0; that is, the wave corresponding to λ1 is entering the domain, the wave corresponding to λ3 is leaving, and λ2 corresponds to a stationary wave, as illustrated in Fig. 2.3.

Following the well-developed theory of Riemann solvers [218, 303], we know that

∀i: −λi Q[u⁻ − u⁺] + [(Πu)⁻ − (Πu)⁺] = 0   (2.20)

must hold across each wave. This is also known as the Rankine-Hugoniot condition and is a simple consequence of conservation of u across the point of discontinuity. To appreciate this, consider the scalar wave equation

∂u/∂t + λ ∂u/∂x = 0, x ∈ [a, b].

Integrating u over the interval, we have

d/dt ∫_a^b u dx = −λ (u(b, t) − u(a, t)) = f(a, t) − f(b, t),

since f = λu. On the other hand, since the wave is propagating at a constant speed, λ, we also have

d/dt ∫_a^b u dx = d/dt [(λt − a)u⁻ + (b − λt)u⁺] = λ(u⁻ − u⁺).

Taking a → x⁻ and b → x⁺, we recover the jump conditions.


Linear systems and fluxes

Hence, by simple mass conservation, we achieve

−λ(u⁻ − u⁺) + (f⁻ − f⁺) = 0

for a → x⁻, b → x⁺. The generalization to Eq. (2.20) is now straightforward.

These are the Rankine-Hugoniot conditions. They must hold across each wave and can be used to connect the states across the interface.

To derive the proper numerical upwind flux for such cases, we need to be more careful. One should keep in mind that the much simpler local Lax-Friedrichs flux also works in this case, albeit most likely leading to more dissipation than if the upwind flux is used.


Linear systems and fluxes

So for the 3-wave problem, returning to Fig. 2.3, we have the system of equations

λQ⁻(u* − u⁻) + [(Πu)* − (Πu)⁻] = 0,
[(Πu)* − (Πu)**] = 0,
−λQ⁺(u** − u⁺) + [(Πu)** − (Πu)⁺] = 0,

where (u*, u**) represents the intermediate states. The numerical flux can then be obtained by realizing that

(n̂·F)* = (Πu)* = (Πu)**,

which one can attempt to express using (u⁻, u⁺) through the jump conditions above. This leads to an upwind flux for the general discontinuous case.

This approach is general and yields the exact upwind fluxes -- but requires that the system can be solved! To appreciate this approach, we consider a few examples.
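To see that the three-wave system indeed determines the flux, one can solve it numerically for a small case. A sketch under assumptions: Q⁻ = Q⁺ = I, Π = [[0,1],[1,0]] with λ = 1, and made-up trace values; the closed-form comparison uses the upwind splitting Π± = ½(Π ± |Π|), where |Π| = I for this particular Π:

```python
import numpy as np

# Sketch: solve the three-wave jump conditions for a hypothetical 2x2 system
# with Q- = Q+ = I and Pi = [[0,1],[1,0]] (speeds +-1), then read off the
# upwind flux (n.F)* = Pi u* = Pi u**.
Pi = np.array([[0.0, 1.0], [1.0, 0.0]])
lam = 1.0
um = np.array([1.0, 0.0])           # u-  (interior trace, made-up)
up = np.array([0.0, 2.0])           # u+  (exterior trace, made-up)

I2 = np.eye(2)
Z = np.zeros((2, 2))
# Unknowns x = [u*, u**]; stack the three 2x2 block equations:
#   lam Q-(u*  - u-) + Pi u*  - Pi u-  = 0
#   Pi u* - Pi u**                     = 0
#  -lam Q+(u** - u+) + Pi u** - Pi u+  = 0
Amat = np.block([[lam * I2 + Pi, Z],
                 [Pi,           -Pi],
                 [Z,  -lam * I2 + Pi]])
b = np.concatenate([(lam * I2 + Pi) @ um,
                    np.zeros(2),
                    (-lam * I2 + Pi) @ up])
x, *_ = np.linalg.lstsq(Amat, b, rcond=None)   # consistent system, exact solve
ustar = x[:2]
flux = Pi @ ustar

# Closed-form upwind flux for this system: Pi+ u- + Pi- u+, with |Pi| = I.
flux_upwind = 0.5 * (Pi + I2) @ um + 0.5 * (Pi - I2) @ up
assert np.allclose(flux, flux_upwind)
print(flux)
```

The overdetermined stacked system is consistent, so the least-squares solve returns the exact intermediate states and the flux agrees with the characteristic-splitting formula.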


Linear systems and fluxes -- an example

Example 2.5. Consider first the linear hyperbolic problem

∂q/∂t + A ∂q/∂x = ∂/∂t [u; v] + [[a(x), 0], [0, −a(x)]] ∂/∂x [u; v] = 0,

with a(x) being piecewise constant. For this simple equation, it is clear that u(x, t) propagates right while v(x, t) propagates left, and we could use this to form a simple upwind flux.

However, let us proceed using the Riemann jump conditions. If we introduce a± as the values of a(x) on the two sides of the interface, we recover the conditions

a⁻(q* − q⁻) + (Πq)* − (Πq)⁻ = 0,
−a⁺(q* − q⁺) + (Πq)* − (Πq)⁺ = 0,

where q* refers to the intermediate state, (Πq)* is the numerical flux along n̂, and

(Πq)± = n̂ · (Aq)± = n̂ · [[a±, 0], [0, −a±]] [u±; v±] = n̂ · [a±u±; −a±v±].

Solving this yields, after a bit of manipulation,

(Πq)* = 1/(a⁺ + a⁻) [a⁺(Πq)⁻ + a⁻(Πq)⁺ + a⁺a⁻(q⁻ − q⁺)],

which simplifies as

(Πq)* = (2a⁺a⁻)/(a⁺ + a⁻) ( n̂ · [{{u}}; −{{v}}] + ½ [[[u]]; [[v]]] ).

The numerical flux (Aq)* follows directly from the definition of (Πq)*. Observe that if a(x) is smooth (i.e., a⁻ = a⁺), then the numerical flux is simply upwinding. Furthermore, for the general case, this amounts to defining an intermediate wave speed, a*, as

a* = (2a⁻a⁺)/(a⁺ + a⁻),

which is the harmonic average.
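The "bit of manipulation" can be checked numerically by solving the two jump conditions directly. A minimal sketch, assuming n̂ = +1 and made-up values for a± and the traces q±:

```python
import numpy as np

# Sketch: recover the flux formula of Example 2.5 numerically. Take n = +1,
# piecewise-constant speeds a-, a+, and solve the two jump conditions
#   a-(q* - q-) + (Pq)* - (Pq)- = 0,   -a+(q* - q+) + (Pq)* - (Pq)+ = 0
# for the intermediate state q* and the numerical flux (Pq)*.
am, ap = 1.0, 3.0                     # a-, a+  (made-up values)
qm = np.array([1.0, -2.0])            # (u-, v-)
qp = np.array([0.5, 4.0])             # (u+, v+)
Pqm = np.array([am * qm[0], -am * qm[1]])   # (Pq)- = n.(A q)-, n = +1
Pqp = np.array([ap * qp[0], -ap * qp[1]])

# Subtracting the two conditions eliminates (Pq)* and gives q*;
# back-substitution into the first condition gives the flux.
qs = (am * qm + ap * qp + Pqm - Pqp) / (am + ap)
flux = Pqm - am * (qs - qm)

# Closed form from the text:
# (Pq)* = [a+ (Pq)- + a- (Pq)+ + a+ a- (q- - q+)] / (a+ + a-)
flux_formula = (ap * Pqm + am * Pqp + ap * am * (qm - qp)) / (ap + am)
assert np.allclose(flux, flux_formula)
print(flux)
```

For a⁻ = a⁺ the intermediate speed 2a⁻a⁺/(a⁻ + a⁺) reduces to a itself and the flux is plain upwinding, as noted above.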


Linear systems and fluxes -- an example

Let us also consider a slightly more complicated problem, originating from electromagnetics.

Example 2.6. Consider the one-dimensional Maxwell’s equations

[[ε(x), 0], [0, μ(x)]] ∂/∂t [E; H] + [[0, 1], [1, 0]] ∂/∂x [E; H] = 0.

Here (E, H) = (E(x, t), H(x, t)) represent the electric and magnetic fields, respectively, while ε and μ are the electric and magnetic material properties, known as permittivity and permeability, respectively.

To simplify the notation, let us write this as

Q ∂q/∂t + A ∂q/∂x = 0,

where

Q = [[ε(x), 0], [0, μ(x)]],  A = [[0, 1], [1, 0]],  q = [E; H],

reflect the spatially varying material coefficients, the one-dimensional rotation operator, and the vector of state variables, respectively.

The exact same approach leads to

H* = 1/{{Z}} ( {{ZH}} + ½[[E]] ),  E* = 1/{{Y}} ( {{YE}} + ½[[H]] ),

where

Z± = √(μ±/ε±) = (Y±)⁻¹

represents the impedance of the medium.

Now assume smooth materials. If we again consider the simplest case of a continuous medium, things simplify considerably as

H* = {{H}} + (Y/2)[[E]],  E* = {{E}} + (Z/2)[[H]],

which we recognize as the Lax-Friedrichs flux since

Y/ε = Z/μ = 1/√(εμ) = c

is the speed of light (i.e., the fastest wave speed in the medium). We have recovered the LF flux!
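The flux formulas above are straightforward to evaluate. A minimal numerical sketch with made-up material and field values on the two sides of an interface; the jump convention [[u]] = u⁻ − u⁺ used here is an assumption of this sketch:

```python
import numpy as np

# Sketch: upwind fluxes for 1-D Maxwell's equations across an interface,
# with hypothetical material values (eps, mu) and field traces on each side.
def avg(a, b):   # {{.}}
    return 0.5 * (a + b)

def jump(a, b):  # [[.]] = (.)- - (.)+  (sign convention assumed here)
    return a - b

em, ep = 1.0, 4.0            # permittivity eps-, eps+  (made-up)
mm, mp = 1.0, 1.0            # permeability mu-, mu+
Em, Ep = 2.0, -1.0           # field traces E-, E+
Hm, Hp = 0.5, 1.5            # field traces H-, H+

Zm, Zp = np.sqrt(mm / em), np.sqrt(mp / ep)   # impedances, Z = 1/Y
Ym, Yp = 1.0 / Zm, 1.0 / Zp

Hstar = (avg(Zm * Hm, Zp * Hp) + 0.5 * jump(Em, Ep)) / avg(Zm, Zp)
Estar = (avg(Ym * Em, Yp * Ep) + 0.5 * jump(Hm, Hp)) / avg(Ym, Yp)

# Smooth material (Z- = Z+ = Z): the flux reduces to Lax-Friedrichs,
# H* = {{H}} + (Y/2)[[E]].
Zs = 1.0
H_general = (avg(Zs * Hm, Zs * Hp) + 0.5 * jump(Em, Ep)) / Zs
H_lf = avg(Hm, Hp) + 0.5 / Zs * jump(Em, Ep)
assert np.isclose(H_general, H_lf)
print(Hstar, Estar)
```

The impedance-weighted averages automatically handle the material discontinuity; in the continuous-medium limit the weights drop out and the Lax-Friedrichs form appears.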


Let’s move on

At this point we have a good understanding of stability for linear problems -- through the flux. Let’s now look at accuracy in more detail.

Recall

In Chapter 2 we started out by assuming that one can represent the global solution as the direct sum of local piecewise polynomial solutions. Along an intersection between two elements, one has local information as well as information from the neighbor. Often we will refer to these intersections in an element as the trace of the element. In the schemes we discuss here, we will have two or more solutions or boundary values at the same physical location along the trace of the element.

The local solutions are assumed to be of the form

x ∈ Dᵏ = [x_l^k, x_r^k]: u_h^k(x, t) = Σ_{n=1}^{Np} û_n^k(t) ψ_n(x) = Σ_{i=1}^{Np} u_h^k(x_i^k, t) ℓ_i^k(x),

the modal basis and the nodal basis, respectively. Here, we have introduced two complementary expressions for the local solution. In the first, known as the modal form, we use a local polynomial basis, ψ_n(x). A simple example could be ψ_n(x) = x^{n−1}. In the alternative form, known as the nodal representation, we introduce N + 1 local grid points, x_i^k ∈ Dᵏ, and express the polynomial through the associated Lagrange polynomials, ℓ_i^k(x). The connection between these two forms is through the expansion coefficients, û_n^k. We return to a discussion of these choices in much more detail; for now it suffices to assume that we have chosen one of these representations.

The global solution u(x, t) is then assumed to be approximated by the piecewise polynomial approximation u_h(x, t),

u(x, t) ≈ u_h(x, t) = ⊕_{k=1}^{K} u_h^k(x, t).

The careful reader will note that this notation is a bit careless, as we do not address what exactly happens at the overlapping interfaces. However, a more careful definition does not add anything essential at this point and we will use this notation to reflect that the global solution is obtained by combining the K local solutions as defined by the scheme.

We define the local inner product and L²(Dᵏ) norm

(u, v)_{Dᵏ} = ∫_{Dᵏ} uv dx,  ‖u‖²_{Dᵏ} = (u, u)_{Dᵏ},

and the global broken inner product and norm

(u, v)_{Ω,h} = Σ_{k=1}^{K} (u, v)_{Dᵏ},  ‖u‖²_{Ω,h} = (u, u)_{Ω,h}.

This reflects that Ω is only approximated by the union of the Dᵏ (in higher dimensions they are d-dimensional simplexes), that is,

Ω ≈ Ω_h = ∪_{k=1}^{K} Dᵏ,

and we will not distinguish the two domains unless needed.
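The connection between the modal coefficients û_n and the nodal values can be made concrete through a (generalized) Vandermonde matrix with entries V_in = ψ_n(x_i). A small sketch, using the simple monomial basis ψ_n(r) = r^{n−1} and made-up coefficients for illustration:

```python
import numpy as np

# Sketch: the modal and nodal forms are connected by a Vandermonde matrix
# V with V_in = psi_n(r_i), so that u_nodal = V @ u_modal. Here we use the
# simple monomial basis psi_n(r) = r^(n-1) on the reference element [-1, 1].
Np = 4
r = np.linspace(-1.0, 1.0, Np)                 # nodal points (equidistant here)
V = np.vander(r, Np, increasing=True)          # V_in = r_i^(n-1)

u_modal = np.array([1.0, 2.0, 0.0, -1.0])      # coefficients u_hat_n (made-up)
u_nodal = V @ u_modal                          # values u_h(r_i)

# Going back: u_modal = V^-1 u_nodal (V is invertible for distinct nodes).
assert np.allclose(np.linalg.solve(V, u_nodal), u_modal)
print(u_nodal)
```

Because V is invertible for distinct nodes, either set of Np numbers determines the same local polynomial, which is exactly the equivalence of the two representations stated above.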


Local approximation

The results in Example 2.4 clearly demonstrate that the accuracy of the method is closely linked to the order of the local polynomial representation, and some care is warranted when choosing this.

To simplify matters, introduce the local affine mapping

x ∈ Dᵏ: x(r) = x_l^k + (1 + r)/2 h^k,  h^k = x_r^k − x_l^k,   (3.1)

with the reference variable r ∈ I = [−1, 1]. We consider local polynomial representations of the form

x ∈ Dᵏ: u_h^k(x(r), t) = Σ_{n=1}^{Np} û_n^k(t) ψ_n(r) = Σ_{i=1}^{Np} u_h^k(x_i^k, t) ℓ_i^k(r).

Let us first discuss the local modal expansion,

u_h(r) = Σ_{n=1}^{Np} û_n ψ_n(r),

where we have dropped the superscripts for element k and the explicit time dependence, t, for clarity of notation.

As a first choice, one could consider ψ_n(r) = r^{n−1} (i.e., the simple monomial basis). This leaves only the question of how to recover û_n. A natural way is by an L²-projection; that is, by requiring that

(u(r), ψ_m(r))_I = Σ_{n=1}^{Np} û_n (ψ_n(r), ψ_m(r))_I

for each of the Np basis functions ψ_n. We have introduced the inner product on the interval I as

(u, v)_I = ∫_{−1}^{1} uv dx.

This immediately yields

M û = u,

where

M_ij = (ψ_i, ψ_j)_I,  û = [û_1, . . . , û_{Np}]^T,  u_i = (u, ψ_i)_I,

leading to Np equations for the Np unknown expansion coefficients, û_i.


mial basis). This leaves only the question of how to recover ûn. A natural<br />

is by an L2-projection; that is by requiring that<br />

Local approximation

Consider the most straightforward choice, the monomial basis ψ_n(r) = r^{n−1},
and look at the condition number of M.

Table 3.1. Condition number of M based on the monomial basis for increasing
values of N.

N      2       4       8       16
κ(M)   1.4e+1  3.6e+2  3.1e+5  3.0e+11

This does not look like a robust choice for high N!

However, note that

M_ij = (1 + (−1)^{i+j}) / (i + j − 1),

which resembles a Hilbert matrix, known to be very poorly conditioned. If we
compute the condition number, κ(M), of M for increasing order of approximation,
N, we observe in Table 3.1 the very rapidly deteriorating conditioning.

Linear dependence for high N: the scaling M_ij ∝ (i + j − 1)^{−1} implies an
increasingly close to linear dependence of the basis as (i, j) increases,
resulting in the ill-conditioning. The impact of this is that, even at
moderately high order (e.g., N > 10), one cannot accurately compute û and
therefore not obtain a good polynomial representation, u_h.

A natural solution to this problem is to seek an orthonormal basis as a more
suitable and computationally stable approach. We can simply take the monomial
basis, r^n, and recover an orthonormal basis from this through an L2-based
Gram-Schmidt orthogonalization approach.
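The conditioning claim is easy to reproduce; a numpy sketch (the book's codes are MATLAB; the name `monomial_mass_matrix` is ours) that matches Table 3.1:

```python
import numpy as np

def monomial_mass_matrix(N):
    """Mass matrix M_ij = (psi_i, psi_j)_I for the monomial basis
    psi_n(r) = r**(n-1) on I = [-1, 1]:
    M_ij = (1 + (-1)**(i+j)) / (i + j - 1)."""
    Np = N + 1
    M = np.empty((Np, Np))
    for i in range(1, Np + 1):
        for j in range(1, Np + 1):
            M[i - 1, j - 1] = (1 + (-1) ** (i + j)) / (i + j - 1)
    return M
```

For N = 2 this gives κ(M) ≈ 1.4e+1, and the condition number grows explosively with N, as in Table 3.1.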


Local approximation

With this understanding, look for a better basis.
Idea: make M diagonal - an orthonormal basis.

This results in the orthonormal basis [296]

ψ_n(r) = P̃_{n−1}(r) = P_{n−1}(r) / √γ_{n−1},

where P_n(r) are the classic Legendre polynomials of order n and

γ_n = 2 / (2n + 1)

is the normalization. An easy way to compute this new basis is through the
recurrence

r P̃_n(r) = a_n P̃_{n−1}(r) + a_{n+1} P̃_{n+1}(r),  a_n = √( n² / ((2n + 1)(2n − 1)) ),

with

ψ_1(r) = P̃_0(r) = 1/√2,  ψ_2(r) = P̃_1(r) = √(3/2) r.
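The recurrence can be sketched in Python as follows (the book evaluates P̃_n with the MATLAB routine JacobiP.m; `normalized_legendre` is an illustrative stand-in):

```python
import numpy as np

def normalized_legendre(r, n):
    """Evaluate P~_n(r) = P_n(r)/sqrt(gamma_n), gamma_n = 2/(2n+1), via the
    recurrence r*P~_n = a_n P~_{n-1} + a_{n+1} P~_{n+1}, started from
    P~_0 = 1/sqrt(2) and P~_1 = sqrt(3/2) r."""
    r = np.asarray(r, dtype=float)
    a = lambda m: np.sqrt(m * m / ((2.0 * m + 1.0) * (2.0 * m - 1.0)))
    P0 = np.full_like(r, 1.0 / np.sqrt(2.0))
    if n == 0:
        return P0
    P1 = np.sqrt(3.0 / 2.0) * r
    for m in range(1, n):
        # P~_{m+1} = (r P~_m - a_m P~_{m-1}) / a_{m+1}
        P0, P1 = P1, (r * P1 - a(m) * P0) / a(m + 1)
    return P1
```

Orthonormality, (P̃_m, P̃_n)_I = δ_mn, can be checked with a Gauss quadrature.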


The Legendre Polynomials

Appendix A discusses how to implement these recurrences and a number of
related library routines; P̃_n(r) can be computed using the routine JacobiP.m,
evaluated as

>> [P] = JacobiP(r,0,0,n);

With this orthonormal basis, the mass matrix, M, is the identity and the
problem of ill-conditioning is resolved. It leaves open the question of how to
compute û_n,

û_n = (u, ψ_n)_I = (u, P̃_{n−1})_I.

For a general function u, the evaluation of this inner product is nontrivial.
The only practical way is to approximate the integral with a sum. In
particular, one can use a Gaussian quadrature of the form

û_n ≃ Σ_{i=1}^{Np} u(r_i) P̃_{n−1}(r_i) w_i,

where r_i are the quadrature points and w_i are the quadrature weights. An
important property of such quadratures is that they are exact, provided u is
a polynomial of order 2Np − 1 [90]. The weights and nodes for the Gaussian
quadrature can be computed using JacobiGQ.m, as discussed in Appendix A. To
compute the Np = N + 1 nodes and weights (r, w) for an (2N+1)-th-order
accurate quadrature, one executes the command

>> [r,w] = JacobiGQ(0,0,N);
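numpy's `leggauss` provides the same Legendre-Gauss nodes and weights as `JacobiGQ(0,0,N)`; a small sketch of the exactness property:

```python
import numpy as np

# Np = N+1 Gauss-Legendre points integrate polynomials of degree up to
# 2*Np - 1 exactly; leggauss is a numpy stand-in for the book's JacobiGQ(0,0,N).
N = 4
r, w = np.polynomial.legendre.leggauss(N + 1)

# r^8 has degree 8 <= 2*5 - 1 = 9, so the quadrature reproduces
# the exact integral over [-1, 1], which is 2/9.
q = np.sum(w * r ** 8)
```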

For the purpose of later generalizations, we consider a slightly different
approach and define û_n such that the approximation is interpolatory; that is,
we instead require that

u(ξ_i) = Σ_{n=1}^{Np} û_n P̃_{n−1}(ξ_i),

where ξ_i represents a set of Np distinct grid points. Note that these need
not be associated with any quadrature points. We can now write

V û = u,

where

V_ij = P̃_{j−1}(ξ_i),  u_i = u(ξ_i).

V is a Vandermonde matrix.

This matrix, V, is recognized as a generalized Vandermonde matrix and will
play a pivotal role in all of the subsequent developments of the scheme. The
matrix establishes the connection between the modes, û, and the nodal values, u.
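A Python sketch of the Vandermonde construction (the book's script is Vandermonde1D.m in MATLAB; `vandermonde_1d` is our illustrative name):

```python
import numpy as np
from numpy.polynomial import legendre as L

def vandermonde_1d(N, xi):
    """Generalized Vandermonde matrix V_ij = P~_{j-1}(xi_i), built from the
    orthonormal Legendre basis P~_n = P_n * sqrt((2n+1)/2)."""
    xi = np.asarray(xi, dtype=float)
    V = np.empty((xi.size, N + 1))
    for j in range(N + 1):
        c = np.zeros(j + 1)
        c[j] = 1.0                                   # select P_j
        V[:, j] = L.legval(xi, c) * np.sqrt((2 * j + 1) / 2.0)
    return V
```

The modal-nodal connection is then `u = V @ uhat` and `uhat = np.linalg.solve(V, u)`.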


Since we have already determined that P̃_n is the optimal basis, we are left
with the need to choose the grid points, ξ_i, to define the Vandermonde matrix.
There is significant freedom in this choice, so let us try to develop some
intuition for a reasonable criterion.

We first recognize that if u_h is an interpolant (i.e., u(ξ_i) = u_h(ξ_i) at the
grid points, ξ_i), then we can write it as

u(r) ≃ u_h(r) = Σ_{i=1}^{Np} u(ξ_i) ℓ_i(r).

This immediately implies

‖u − u_h‖_∞ ≤ (1 + Λ) ‖u − u*‖_∞,  Λ = max_r Σ_{i=1}^{Np} |ℓ_i(r)|,

where ‖·‖_∞ is the usual maximum norm and u* represents the best approximating
polynomial of order N. Hence, the Lebesgue constant, Λ, indicates how far away
the interpolation may be from the best possible polynomial representation.
Note that Λ is determined solely by the grid points, ξ_i. To get an optimal
approximation, we should therefore aim to identify those points, ξ_i, that
minimize the Lebesgue constant.

To appreciate how this relates to the conditioning of V, recognize that, as a
consequence of the uniqueness of the polynomial interpolation, we have

V^T ℓ(r) = P̃(r),

where ℓ(r) = [ℓ_1(r), . . . , ℓ_Np(r)]^T and P̃(r) = [P̃_0(r), . . . , P̃_N(r)]^T. We are
interested in the particular solution, ℓ, which minimizes the Lebesgue
constant. If we recall Cramer's rule for solving linear systems of equations,

ℓ_i(r) = Det[V^T(:,1), V^T(:,2), . . . , P̃(r), V^T(:,i+1), . . . , V^T(:,Np)] / Det(V^T),

it suggests that it is reasonable to seek ξ_i such that the denominator (i.e.,
the determinant of V) is maximized.
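The relation V^T ℓ(r) = P̃(r) gives a direct way to evaluate the Lebesgue function numerically; a sketch assuming the orthonormal Legendre basis (`lebesgue_constant` is our name, not from the book's codes):

```python
import numpy as np
from numpy.polynomial import legendre as L

def lebesgue_constant(xi, nfine=2001):
    """Estimate Lambda = max_r sum_i |l_i(r)| on a fine grid by solving
    V^T l(r) = Ptilde(r) for the Lagrange polynomials l_i."""
    xi = np.asarray(xi, dtype=float)
    N = xi.size - 1

    def ptilde(r):
        # rows: normalized Legendre polynomials P~_0 ... P~_N evaluated at r
        out = np.empty((N + 1, r.size))
        for n in range(N + 1):
            c = np.zeros(n + 1)
            c[n] = 1.0
            out[n] = L.legval(r, c) * np.sqrt((2 * n + 1) / 2.0)
        return out

    V = ptilde(xi).T                          # V_ij = P~_{j-1}(xi_i)
    r = np.linspace(-1.0, 1.0, nfine)
    ell = np.linalg.solve(V.T, ptilde(r))     # row i holds l_i on the grid
    return np.abs(ell).sum(axis=0).max()
```

For the three points ξ = (−1, 0, 1) the classic value Λ = 1.25 is recovered.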

So we should choose the points to maximize the determinant. For this
one-dimensional case, the solution to this problem is known in a relatively
simple form as the Np zeros of [151, 159]

f(r) = (1 − r²) P̃′_N(r).

This problem has an exact solution: the zeros are closely related to the
normalized Legendre polynomials and are also known as the
Legendre-Gauss-Lobatto (LGL) quadrature points. Using the library routine
JacobiGL.m in Appendix A, these nodes can be computed as

>> [r] = JacobiGL(0,0,N);

Note: These happen to also be quadrature points, which we could use to
evaluate the inner products. However, they were only chosen to be good for
interpolation -- this is essential in 2D/3D.
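A numpy sketch of computing the LGL nodes as the zeros of (1 − r²)P̃′_N(r) (the book uses the MATLAB routine JacobiGL.m; `lgl_nodes` is an illustrative stand-in):

```python
import numpy as np
from numpy.polynomial import legendre as L

def lgl_nodes(N):
    """Legendre-Gauss-Lobatto nodes: the N+1 zeros of (1 - r^2) P'_N(r),
    i.e., the endpoints +-1 plus the interior zeros of P'_N."""
    c = np.zeros(N + 1)
    c[N] = 1.0                                 # coefficients of P_N
    interior = np.sort(L.legroots(L.legder(c)))  # zeros of P_N'
    return np.concatenate(([-1.0], interior, [1.0]))
```

For N = 2 this yields (−1, 0, 1); for N = 3 the interior nodes are ±√(1/5).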


Local approximation - an example

So does it really matter? -- consider the entries of V.

For the LGL nodes, the determinant of V continues to increase in size with N,
reflecting a well-behaved interpolation where ℓ_i(r) takes small values in
between the nodes. For the equidistant nodes, the situation is entirely
different. While the determinant has comparable values for small values of N,
the determinant for the equidistant nodes begins to decay exponentially fast
once N exceeds 16, caused by the close to linear dependence of the last
columns in V.

a) Equidistant points, monomial basis (bad points, bad basis):
1.00 −1.00 1.00 −1.00 1.00 −1.00 1.00
1.00 −0.67 0.44 −0.30 0.20 −0.13 0.09
1.00 −0.33 0.11 −0.04 0.01 −0.00 0.00
1.00 0.00 0.00 0.00 0.00 0.00 0.00
1.00 0.33 0.11 0.04 0.01 0.00 0.00
1.00 0.67 0.44 0.30 0.20 0.13 0.09
1.00 1.00 1.00 1.00 1.00 1.00 1.00

b) LGL points, monomial basis (good points, bad basis):
1.00 −1.00 1.00 −1.00 1.00 −1.00 1.00
1.00 −0.83 0.69 −0.57 0.48 −0.39 0.33
1.00 −0.47 0.22 −0.10 0.05 −0.02 0.01
1.00 0.00 0.00 0.00 0.00 0.00 0.00
1.00 0.47 0.22 0.10 0.05 0.02 0.01
1.00 0.83 0.69 0.57 0.48 0.39 0.33
1.00 1.00 1.00 1.00 1.00 1.00 1.00

c) Equidistant points, orthonormal basis (bad points, good basis):
0.71 −1.22 1.58 −1.87 2.12 −2.35 2.55
0.71 −0.82 0.26 0.49 −0.91 0.72 −0.04
0.71 −0.41 −0.53 0.76 0.03 −0.78 0.49
0.71 0.00 −0.79 0.00 0.80 0.00 −0.80
0.71 0.41 −0.53 −0.76 0.03 0.78 0.49
0.71 0.82 0.26 −0.49 −0.91 −0.72 −0.04
0.71 1.22 1.58 1.87 2.12 2.35 2.55

d) LGL points, orthonormal basis (good points, good basis):
0.71 −1.22 1.58 −1.87 2.12 −2.35 2.55
0.71 −1.02 0.84 −0.35 −0.28 0.81 −1.06
0.71 −0.57 −0.27 0.83 −0.50 −0.37 0.85
0.71 0.00 −0.79 0.00 0.80 0.00 −0.80
0.71 0.57 −0.27 −0.83 −0.50 0.37 0.85
0.71 1.02 0.84 0.35 −0.29 −0.81 −1.06
0.71 1.22 1.58 1.87 2.12 2.35 2.55

Fig. 3.1. Entries of V for N = 6 and different choices of the basis, ψ_n(r),
and evaluation points, ξ_i. For (a) and (c) we use equidistant points; (b) and
(d) are based on LGL points. Furthermore, (a) and (b) are based on V computed
using a simple monomial basis, ψ_n(r) = r^{n−1}, while (c) and (d) are based
on the orthonormal basis.


Local approximation - an example

It matters a great deal.

Fig. 3.2. Growth of the determinant of the Vandermonde matrix, V, when
computed based on LGL nodes and the simple equidistant nodes (determinant of
V, 10^0 to 10^25, against N = 2, . . . , 34).

The rapid decay of the determinant for the equidistant nodes manifests itself,
for the interpolation, as the well-known Runge phenomenon [90] for
interpolation at equidistant grids: it enables an exponentially fast growth of
ℓ_i(r) between the grid points, resulting in a very poorly behaved
interpolation. This is also reflected in the corresponding Lebesgue constant,
which grows like 2^{Np} for the equidistant nodes, while it grows like log Np
in the optimal case of the LGL nodes (see [151, 307] and references therein).

The main lesson here is that it is generally not a good idea to use
equidistant grid points when representing the local interpolating polynomial
solution. This point is particularly important when the interest is in the
development of methods performing robustly at higher values of N.

One can ask, however, whether there is anything special about the LGL nodes.
It is the basic structure of the nodes that really matters.

[Figure: determinant of V, 10^0 to 10^25, as a function of α for
N = 4, 8, 16, 24, 32.]
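The Runge phenomenon is easy to reproduce; a hedged sketch comparing polynomial interpolation of 1/(1 + 25x²) at equidistant and LGL nodes (the helper names are ours):

```python
import numpy as np
from numpy.polynomial import legendre as L

def interp_error(nodes, u, nfine=1001):
    """Max error of the degree-(len(nodes)-1) interpolant of u on [-1, 1]."""
    p = np.polynomial.Polynomial.fit(nodes, u(nodes), deg=nodes.size - 1)
    r = np.linspace(-1.0, 1.0, nfine)
    return np.max(np.abs(u(r) - p(r)))

u = lambda x: 1.0 / (1.0 + 25.0 * x * x)   # classic Runge function
N = 16
equi = np.linspace(-1.0, 1.0, N + 1)
c = np.zeros(N + 1)
c[N] = 1.0
lgl = np.concatenate(([-1.0], np.sort(L.legroots(L.legder(c))), [1.0]))

err_equi = interp_error(equi, u)           # large: Runge oscillations
err_lgl = interp_error(lgl, u)             # small: well-behaved nodes
```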


This highlights that it is the overall structure of the nodes rather than the
details of the individual node positions that is important; for example, one
could optimize these nodal sets for various applications.

Let's summarize this

To summarize matters, we have local approximations of the form

u(r) ≃ u_h(r) = Σ_{n=1}^{Np} û_n P̃_{n−1}(r) = Σ_{i=1}^{Np} u(r_i) ℓ_i(r),  (3.3)

where ξ_i = r_i are the Legendre-Gauss-Lobatto quadrature points. A central
component of this construction is the Vandermonde matrix, V, which establishes
the connections

u = V û,  V^T ℓ(r) = P̃(r),  V_ij = P̃_j(r_i).

By carefully choosing the orthonormal Legendre basis, P̃_n(r), and the nodal
points, r_i, we have ensured that V is a well-conditioned object and that the
resulting interpolation is well behaved. A script for initializing V is given
in Vandermonde1D.m.

This leads to a robust way of computing/evaluating a high-order polynomial
approximation -- but is it accurate?


A second look at approximation

We will need a little more notation.

Before we embark on a more rigorous discussion, we need to introduce some
additional notation. In particular, both global norms, defined on Ω, and
broken norms, defined over Ω_h as sums of K elements D^k, need to be
introduced.

Regular energy norms: For the solution, u(x,t), we define the global L2-norm
over the domain Ω ∈ R^d as

‖u‖²_Ω = ∫_Ω u² dx,

as well as the broken norms

‖u‖²_{Ω,h} = Σ_{k=1}^{K} ‖u‖²_{D^k},  ‖u‖²_{D^k} = ∫_{D^k} u² dx.

Sobolev norms: In a similar way, we define the associated Sobolev norms

‖u‖²_{Ω,q} = Σ_{|α|=0}^{q} ‖u^{(α)}‖²_Ω,
‖u‖²_{Ω,q,h} = Σ_{k=1}^{K} ‖u‖²_{D^k,q},  ‖u‖²_{D^k,q} = Σ_{|α|=0}^{q} ‖u^{(α)}‖²_{D^k},

where α is a multi-index of length d. For the one-dimensional case, most
often considered in this chapter, this norm is simply the L2-norm of the q-th
derivative. We will also define the space of functions, u ∈ H^q(Ω), as those
functions for which ‖u‖_{Ω,q} or ‖u‖_{Ω,q,h} is bounded.

Semi-norms: Finally, we will need the semi-norms

|u|²_{Ω,q,h} = Σ_{k=1}^{K} |u|²_{D^k,q},  |u|²_{D^k,q} = Σ_{|α|=q} ‖u^{(α)}‖²_{D^k}.
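The broken norm is computed element by element; a minimal sketch using the affine map (3.1) and Gauss quadrature (`broken_l2_norm` is our name, not from the book's codes):

```python
import numpy as np

def broken_l2_norm(u, a, b, K, n=8):
    """||u||_{Omega,h} with Omega = [a, b] split into K equal elements:
    the square of the norm is the sum of elementwise L2 norms, each
    evaluated with n-point Gauss-Legendre quadrature."""
    r, w = np.polynomial.legendre.leggauss(n)
    edges = np.linspace(a, b, K + 1)
    total = 0.0
    for k in range(K):
        xl, xr = edges[k], edges[k + 1]
        h = xr - xl
        x = xl + (1.0 + r) / 2.0 * h     # affine map (3.1)
        total += (h / 2.0) * np.sum(w * u(x) ** 2)
    return np.sqrt(total)
```

For u = sin on [0, π] this recovers √(π/2), independent of the number of elements K.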


Approximation theory

Recall: we assume the local solution to be of the form

x ∈ D^k = [x_l^k, x_r^k]:  u_h^k(x, t) = Σ_{n=1}^{Np} û_n^k(t) ψ_n(x) = Σ_{i=1}^{Np} u_h^k(x_i^k, t) ℓ_i^k(x).

Here, we have introduced two complementary expressions. In the first, known as
the modal form, we use a local polynomial basis, which could, for example, be
ψ_n(x) = x^{n−1}. In the alternative form, known as the nodal form, we
introduce N + 1 local grid points, x_i^k ∈ D^k, and express the polynomial
through the Lagrange polynomials, ℓ_i^k(x). The connection between these two
forms is through the expansion coefficients, û_n^k. We return to a discussion
of these choices in much more detail later; for now it suffices to assume that
we have chosen one of these representations.

The global solution u(x, t) is then assumed to be approximated by the
piecewise polynomial approximation u_h(x, t),

u(x, t) ≃ u_h(x, t) = ⊕_{k=1}^{K} u_h^k(x, t).

The careful reader will note that this notation is a bit careless, as we do
not address what exactly happens at the overlapping interfaces. However, a
more careful definition does not add anything essential at this point and we
will use this notation to reflect that the global solution is obtained by
combining the K local solutions as defined by the scheme. We define the broken
inner products

(u, v)_{Ω,h} = Σ_{k=1}^{K} (u, v)_{D^k},  ‖u‖²_{Ω,h} = (u, u)_{Ω,h},

which reflects that Ω is only approximated by the union of the D^k; that is,

Ω ≃ Ω_h = ∪_{k=1}^{K} D^k,

and we do not distinguish the two domains unless needed. Each element has
local information as well as information from the neighbor along an
intersection between two elements. Often we will refer to these intersections
in an element as the trace of the element; for the methods we discuss here, we
will have two or more solutions or boundary values at the same physical
location along the trace of the element.

The question is in what sense u_h approximates u. We have observed improved
accuracy in two ways:

• Increase K/decrease h
• Increase N


Approximation theory

Let us assume all elements have size h and consider one element.

For simplicity, we assume that all elements have length h, with x^k =
(x_r^k + x_l^k)/2 representing the cell center and r ∈ [−1, 1] the reference
coordinate. We begin by considering the standard interval and introduce the
new variable

v(r) = u(hr) = u(x);

that is, v is defined on the standard interval, I = [−1, 1], with x^k = 0 and
x ∈ [−h, h]. We discussed in Chapter 3 the advantage of using a local
orthonormal basis -- in this case, the normalized Legendre polynomials

P̃_n(r) = P_n(r) / √γ_n,  γ_n = 2 / (2n + 1).

Here, P_n(r) are the classic Legendre polynomials of order n. A key property
of these polynomials is that they satisfy the singular Sturm-Liouville problem

d/dr (1 − r²) d/dr P̃_n + n(n + 1) P̃_n = 0.  (4.2)

Prior to discussing interpolation, used throughout this text, we consider the
basic question of how well expansions of the form

v_h(r) = Σ_{n=0}^{N} v̂_n P̃_n(r)

approximate v ∈ L²(I). The properties of the L2-projection, where we utilize
the orthonormality of P̃_n(r), allow us to find v̂_n as

v̂_n = ∫_I v(r) P̃_n(r) dr.

Note that, for simplicity of the notation and to conform with standard
notation, we now have the sum running from 0 to N rather than from 1 to
N + 1 = Np, as used previously.


simplicity, we assume that all elements have length h (i.e., D k = x k + r<br />

˜Pn(r) = Pn(r)<br />

√ , γn =<br />

γn<br />

2<br />

2n +1 .<br />

simplicity, we assume that all elements have length h (i.e., D<br />

Approximation theory<br />

k = xk +<br />

where xk = 1<br />

2 (xkr +xk where x<br />

l ) represents the cell center and r ∈ [−1, 1] is the refere<br />

k = 1<br />

2 (xk r +xk l ) represents the cell center and r ∈ vh(r) [−1, 1] = is the reference ˆvn<br />

coordinate).<br />

n=0<br />

˜ Pn(r),<br />

assic Legendre polynomials of order n. A key property<br />

s that they satisfy a singular Sturm-Liouville problem<br />

coordinate). We begin by considering the standard interval and introduce the new vari-<br />

We able Let begin us assume by considering all elements the standard have interval size h and introduce consider the new va<br />

able<br />

v(r) =u(hr) =u(x);<br />

that is, v is defined on thev(r) standard =u(hr) interval, =u(x); I =[−1, 1], and xk represents v ∈ L<br />

4.3 Approximations by orthogonal polynomials and consistency 79<br />

= 0, x ∈<br />

2 (I). Note, that for simplicity of the n<br />
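The normalized basis and the projection coefficients above can be sketched numerically. This is a minimal illustration, assuming NumPy's `numpy.polynomial.legendre` module and Gauss-Legendre quadrature for the integral; the helper names `Pt` and `proj_coeffs` are ours, not from the text:

```python
import numpy as np
from numpy.polynomial import legendre as L

def Pt(n, r):
    """Normalized Legendre polynomial P~_n(r) = P_n(r)/sqrt(gamma_n)."""
    gamma = 2.0 / (2 * n + 1)
    c = np.zeros(n + 1)
    c[n] = 1.0                       # selects P_n in the Legendre basis
    return L.legval(r, c) / np.sqrt(gamma)

def proj_coeffs(v, N, nq=64):
    """Projection coefficients v~_n = int_I v(r) P~_n(r) dr by Gauss quadrature."""
    rq, wq = L.leggauss(nq)
    return np.array([np.sum(wq * v(rq) * Pt(n, rq)) for n in range(N + 1)])

v = lambda r: np.exp(np.sin(np.pi * r))
vt = proj_coeffs(v, 8)              # v~_0, ..., v~_8

# Orthonormality: projecting P~_3 itself returns the third unit vector.
e = proj_coeffs(lambda r: Pt(3, r), 5)
```

Since the basis is orthonormal, the projection of \tilde{P}_3 recovers a unit coefficient vector exactly (up to quadrature roundoff).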

An immediate consequence of the orthonormality of the basis is that

    \|v - v_h\|_I^2 = \sum_{n=N+1}^{\infty} |\tilde{v}_n|^2,

recognized as Parseval's identity. A basic result for this approximation error follows directly.

Theorem 4.1. Assume that v ∈ H^p(I) and that v_h represents a polynomial projection of order N. Then

    \|v - v_h\|_{I,q} \le N^{\rho - p} |v|_{I,p},

where

    \rho = \begin{cases} \frac{3q}{2}, & 0 \le q \le 1, \\ 2q - \frac{1}{2}, & q \ge 1, \end{cases}

and 0 ≤ q ≤ p.

Proof. We will just sketch the proof; the details can be found in [43]. Combining the definition of \tilde{v}_n and Eq. (4.2), we recover

    \|v - v_h\|_{I,0}^2 = \sum_{n=N+1}^{\infty} |\tilde{v}_n|^2 \le \sum_{n=N+1}^{\infty} \frac{(n-\sigma)!}{(n+\sigma)!}\, |v|_{I,\sigma}^2,

which, combined with Lemma 4.2 and σ = min(N + 1, p), gives the result.

A sharper result can be obtained by using

    \|v - v_h\|_{I,0} \le \left( \frac{(N+1-\sigma)!}{(N+1+\sigma)!} \right)^{1/2} |v|_{I,\sigma}.

A generalization of this is the following result.

Lemma 4.4. If v ∈ H^p(I), p ≥ 1, then

    \|v^{(q)} - v_h^{(q)}\|_{I,0} \le \left( \frac{(N+1-\sigma)!}{(N+1+\sigma-4q)!} \right)^{1/2} |v|_{I,\sigma},

where σ = min(N + 1, p) and q ≤ p.

The proof follows the one above and is omitted. Note that in the limit N ≫ p, the Stirling formula gives

    \|v^{(q)} - v_h^{(q)}\|_{I,0} \le N^{2q-p} |v|_{I,p},

in agreement with Theorem 4.1.
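Parseval's identity for the projection error can be checked numerically. A minimal sketch, assuming NumPy and Gauss-Legendre quadrature; the resolution `M` and truncation order `N` are illustrative choices of ours:

```python
import numpy as np
from numpy.polynomial import legendre as L

v = lambda r: np.exp(np.sin(np.pi * r))
M, N = 40, 10                        # modes kept for the "full" series / truncation
rq, wq = L.leggauss(200)             # quadrature on I = [-1, 1]
s = np.sqrt((2 * np.arange(M + 1) + 1) / 2.0)   # P~_n = s_n * P_n

# projection coefficients v~_n = int_I v P~_n dr, n = 0..M
P = np.array([L.legval(rq, np.eye(M + 1)[n]) for n in range(M + 1)])
vt = s * (P * (wq * v(rq))).sum(axis=1)

# Parseval: ||v - v_h||_I^2 equals the tail sum over the dropped modes n > N
vh = L.legval(rq, s[:N + 1] * vt[:N + 1])       # truncated expansion v_h
err2 = np.sum(wq * (v(rq) - vh) ** 2)
tail2 = np.sum(vt[N + 1:] ** 2)
```

For this analytic v the coefficients decay faster than any algebraic rate, so increasing N shrinks the tail (and hence the error) very quickly.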

A minor issue arises: these results are based on projections, yet we are using interpolations.


The above estimates are all related to projections. However, as we discussed in Chapter 3, we are often concerned with the interpolation of v, which is based on N + 1 points. The difference may be minor, but it is essential to appreciate it. Consider

    v_h(r) = \sum_{n=0}^{N} \hat{v}_n \tilde{P}_n(r), \qquad \tilde{v}_h(r) = \sum_{n=0}^{N} \tilde{v}_n \tilde{P}_n(r),

where v_h(r) is the usual approximation based on interpolation, with the nodal values related to the coefficients through v = V\hat{v}, and \tilde{v}_h(r) refers to the approximation based on projection.

To compare the two, note that by the interpolation property we have

    (V\hat{v})_i = v_h(r_i) = \sum_{n=0}^{\infty} \tilde{v}_n \tilde{P}_n(r_i) = \sum_{n=0}^{N} \tilde{v}_n \tilde{P}_n(r_i) + \sum_{n=N+1}^{\infty} \tilde{v}_n \tilde{P}_n(r_i),

from which we recover

    V\hat{v} = V\tilde{v} + \sum_{n=N+1}^{\infty} \tilde{v}_n \tilde{P}_n(r), \qquad r = (r_0, \ldots, r_N)^T.

This implies

    v_h(r) = \tilde{v}_h(r) + \tilde{P}^T(r) V^{-1} \sum_{n=N+1}^{\infty} \tilde{v}_n \tilde{P}_n(r).

Now, consider the additional term on the right-hand side,

    \tilde{P}^T(r) V^{-1} \sum_{n=N+1}^{\infty} \tilde{v}_n \tilde{P}_n(r) = \sum_{n=N+1}^{\infty} \tilde{v}_n \left( \tilde{P}^T(r) V^{-1} \tilde{P}_n(r) \right),

which is allowed provided v ∈ H^p(I), p > 1/2 [280]. Since

    \tilde{P}^T(r) V^{-1} \tilde{P}_n(r) = \sum_{l=0}^{N} \tilde{p}_l \tilde{P}_l(r), \qquad V\tilde{p} = \tilde{P}_n(r),

we can interpret the additional term as those high-order modes (recall n > N) that look like lower-order modes on the grid. It is exactly this phenomenon that is known as aliasing, and it is the fundamental difference between an interpolation and a projection.

[Figure: Aliasing, caused by interpolation of high-frequency unresolved modes on a finite grid; on the grid shown, the modes n = −2, n = 6, and n = 10 are indistinguishable.]

The final statement, the proof of which is technical and given in [27], yields the following.
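The identity relating interpolation and projection, and the aliasing term it exposes, can be verified numerically. A sketch assuming NumPy; we take Gauss-Legendre points as the N + 1 interpolation nodes (a choice of convenience, since any distinct nodes define V) and truncate the "infinite" series at a large cutoff M:

```python
import numpy as np
from numpy.polynomial import legendre as L

v = lambda r: np.exp(np.sin(np.pi * r))
N, M = 4, 60                          # interpolation order; cutoff for the infinite sum

def Pt(n, r):                         # normalized Legendre P~_n(r)
    return np.sqrt((2 * n + 1) / 2.0) * L.legval(r, np.eye(n + 1)[n])

r, _ = L.leggauss(N + 1)              # interpolation nodes r_0, ..., r_N
V = np.array([[Pt(n, ri) for n in range(N + 1)] for ri in r])   # V_in = P~_n(r_i)
vhat = np.linalg.solve(V, v(r))       # interpolation coefficients v^_n

rq, wq = L.leggauss(200)              # accurate quadrature for projection coefficients
vtil = np.array([np.sum(wq * v(rq) * Pt(n, rq)) for n in range(M + 1)])

# identity: V v^ = V v~ + sum_{n>N} v~_n P~_n(r); the extra term is the aliasing
lhs = V @ vhat
rhs = V @ vtil[:N + 1] + sum(vtil[n] * Pt(n, r) for n in range(N + 1, M + 1))
```

The difference between `vhat` and `vtil[:N+1]` is exactly the aliasing contribution: high-order modes masquerading as low-order modes on the grid.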


This has some impact on the accuracy.

Theorem 4.5. Assume that v ∈ H^p(I), p > 1/2, and that v_h represents a polynomial interpolation of order N. Then

    \|v - v_h\|_{I,q} \le N^{2q - p + 1/2} |v|_{I,p},

where 0 ≤ q ≤ p.

Note in particular that the order of convergence can be up to one order lower due to the aliasing errors, but not worse than that.

Example 4.6. Let us revisit Example 3.2, where we considered the error when computing the discrete derivative. Using Theorem 4.5, we get a crude estimate as

    \|v' - D_r v_h\|_{I,0} \le \|v - v_h\|_{I,1} \le N^{5/2 - p} |v|_{I,p}.

For the first example, that is,

    v(r) = \exp(\sin(\pi r)),

the situation is different. Note that v^{(i)} ∈ H^i(I). In this case, the computations indicate

    \|(v^{(i)})' - D_r v_h^{(i)}\|_{I,0} \le C N^{1/2 - i},

which is considerably better than indicated by the general result. An improved estimate, valid only for interpolation at Gauss-Lobatto grid points, is [27]

    \|v' - D_r v_h\|_{I,0} \le N^{1 - p} |v|_{I,p},

which is much closer to the observed behavior but still suboptimal. This is, however, a sharp result for the general case, reflecting that special cases may show better results.

We return to the behavior for the general element of length h to recover the estimates for u(x) rather than v(r); that is, we also account for the cell size. We have the following bound:

Theorem 4.7. Assume that u ∈ H^p(D^k) and that u_h represents a piecewise polynomial approximation of order N. Then

    \|u - u_h\|_{\Omega,q,h} \le C h^{\sigma - q} |u|_{\Omega,\sigma,h},

for 0 ≤ q ≤ σ, and σ = min(N + 1, p).

Proof. Under the mapping between the element D^k and the reference interval I, the norms scale as

    \|u\|_{D^k,q}^2 = \sum_{p=0}^{q} |u|_{D^k,p}^2 = \sum_{p=0}^{q} h^{1-2p} |v|_{I,p}^2.

In a similar fashion, we have

    \|u - u_h\|_{D^k,q}^2 \le h^{1-2q} \|v - v_h\|_{I,q}^2 \le h^{1-2q} |v|_{I,\sigma}^2 = h^{2\sigma-2q} |u|_{D^k,\sigma}^2,

where we have used Lemma 4.4 and defined σ = min(N + 1, p). Summing over all elements yields the result.

For a more general grid where the element length is variable, it is natural to use h = max_k h^k (i.e., the maximum interval length). Combining this with Theorem 4.7 yields the main approximation result:

Theorem 4.8. Assume that u ∈ H^p(D^k), p > 1/2, and that u_h represents a piecewise polynomial interpolation of order N. Then

    \|u - u_h\|_{\Omega,q,h} \le C \frac{h^{\sigma-q}}{N^{p-2q-1/2}} |u|_{\Omega,\sigma,h},

for 0 ≤ q ≤ σ, and σ = min(N + 1, p).

This result gives the essence of the approximation properties; that is, it shows clearly under which conditions on u we can expect consistency and what convergence rate to expect, depending on the regularity of u and the norm in which the error is measured.
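The h-convergence predicted by Theorems 4.7 and 4.8 can be sketched with an elementwise L2 projection. Assumptions of ours: NumPy, equal elements, Gauss quadrature, and N = 2, for which the L2 error (q = 0) should decay as h^σ = h^{N+1} = h³:

```python
import numpy as np
from numpy.polynomial import legendre as L

def proj_error(u, a, b, K, N, nq=20):
    """L2 error of the elementwise L2 projection of u onto order-N polynomials
    on K equal elements of [a, b]."""
    rq, wq = L.leggauss(nq)
    s = np.sqrt((2 * np.arange(N + 1) + 1) / 2.0)   # P~_n = s_n * P_n
    xe = np.linspace(a, b, K + 1)
    err2 = 0.0
    for k in range(K):
        h = xe[k + 1] - xe[k]
        x = xe[k] + 0.5 * h * (rq + 1)              # quadrature nodes on D^k
        c = np.array([s[n] * np.sum(wq * u(x) * L.legval(rq, np.eye(N + 1)[n]))
                      for n in range(N + 1)])       # v~_n on the reference element
        uh = L.legval(rq, s * c)                    # projected polynomial at the nodes
        err2 += 0.5 * h * np.sum(wq * (u(x) - uh) ** 2)   # Jacobian dx/dr = h/2
    return np.sqrt(err2)

u = lambda x: np.sin(np.pi * x)
e1 = proj_error(u, 0.0, 2.0, 8, 2)
e2 = proj_error(u, 0.0, 2.0, 16, 2)   # halving h
rate = np.log2(e1 / e2)               # observed order, approaching N + 1 = 3
```

This illustrates the two routes to convergence listed in the summary: refine the grid (decrease h) at fixed N, or raise N at fixed h.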


Let's summarize.

Fluxes:
• For linear systems, we can derive exact upwind fluxes using Rankine-Hugoniot conditions.

Accuracy:
• Legendre polynomials are the right basis.
• We must use good interpolation points.
• Local accuracy depends on elementwise smoothness.
• Aliasing appears due to the grid but is under control.
• For smooth problems, we have a spectral method.
• Convergence can be recovered in two ways:
  • Increase N
  • Decrease h
