13.07.2015 Views

An introduction to semilinear elliptic equations - BCAM

An introduction to semilinear elliptic equations - BCAM

An introduction to semilinear elliptic equations - BCAM

SHOW MORE
SHOW LESS

Create successful ePaper yourself

Turn your PDF publications into a flip-book with our unique Google optimized e-Paper software.

ContentsIntroductionNotationvviiChapter 1. ODE methods 11.1. The case of the line 21.2. The case of the interval 71.3. The case of R N , N ≥ 2 141.4. The case of the ball of R N , N ≥ 2 28Chapter 2. Variational methods 312.1. Linear <strong>elliptic</strong> <strong>equations</strong> 312.2. C 1 functionals 352.3. Global minimization 392.4. Constrained minimization 472.5. The mountain pass theorem 552.6. Specific methods in R N 662.7. Study of a model case 75Chapter 3. Methods of super- and subsolutions 813.1. The maximum principles 813.2. The spectral decomposition of the Laplacian 843.3. The iteration method 893.4. The equation −△u = λg(u) 94Chapter 4. Regularity and qualitative properties 1014.1. Interior regularity for linear <strong>equations</strong> 1014.2. L p regularity for linear <strong>equations</strong> 1034.3. C 0 regularity for linear <strong>equations</strong> 1114.4. Bootstrap methods 113iii


viINTRODUCTIONtechniques that can be used <strong>to</strong> handle the case of unbounded domains,symmetrization and concentration-compactness.The third chapter is devoted <strong>to</strong> the method of super- and subsolutions.We first introduce the weak and strong maximum principles,and then an existence result based on an iteration technique.In the fourth chapter, we study some qualitative properties of thesolutions. We study the L p and C 0 regularity for the linear equation,and then the regularity for nonlinear <strong>equations</strong> by a bootstrapmethod. Finally, we study the symmetry properties of the solutionsby the moving planes technique.Of course, there are other important methods for the study of<strong>elliptic</strong> <strong>equations</strong>, in particular the degree theory and the bifurcationtheory. We did not study these methods because their most interestingapplications require the use of the C m,α regularity theory, whichwe could not afford <strong>to</strong> present in such an introduc<strong>to</strong>ry text. Theinterested reader might consult for example H. Brezis and L. Nirenberg[14].


viiiNOTATIONF the Fourier transform in R N , defined by 1∫Fu(ξ) = e −2πix·ξ u(x) dxR N ∫F = F −1 , given by Fv(x) = e 2πiξ·x v(ξ) dξR Nû= FuC c (Ω) the space of continuous functions Ω → R with compactsupportCc k (Ω) the space of functions of C k (Ω) with compact supportC b (Ω) the Banach space of continuous, bounded functions Ω →R, equipped with the <strong>to</strong>pology of uniform convergenceC(Ω) the space of continuous functions Ω → R. When Ω isbounded, C(Ω) is a Banach space when equipped withthe <strong>to</strong>pology of uniform convergenceC b,u (Ω) the Banach space of uniformly continuous and boundedfunctions Ω → R equipped with the <strong>to</strong>pology of uniformconvergenceCb,u m (Ω) the Banach space of functions u ∈ C b,u(Ω) such thatD α u ∈ C b,u (Ω), for every multi-index α such that |α| ≤m. Cb,u m (Ω) is equipped with the norm of W m,∞ (Ω)C m,α (Ω) for 0 ≤ α < 1, the Banach space of functions u ∈Cb,u m (Ω) such that‖u‖ C m,α|D β u(x) − D β u(y)|= ‖u‖ W m,∞ + supx,y∈Ω |x − y| α < ∞.|β|=mD(Ω) = Cc ∞ (Ω), the Fréchet space of C ∞ functions Ω → R(or Ω → C) compactly supported in Ω, equipped withthe <strong>to</strong>pology of uniform convergence of all derivativeson compact subsets of ΩC 0 (Ω) the closure of Cc ∞ (Ω) in L ∞ (Ω)C0 m (Ω) the closure of Cc ∞ (Ω) in W m,∞ (Ω)D ′ (Ω) the space of distributions on Ω, that is the <strong>to</strong>pologicaldual of D(Ω)1 with this definition of the Fourier transform, ‖Fu‖L 2 = ‖u‖ L 2, F(u ⋆ v) =FuFv and F(D α u) = (2πi) |α| ∏ Nj=1 xα jj Fu.


NOTATIONixp ′ the conjugate of p given by 1 p + 1 p ′ = 1L p (Ω)the Banach space of (classes ∫ of) measurable functionsu : Ω → R such that |u(x)| p dx < ∞ if 1 ≤ p < ∞,Ωor ess sup |u(x)| < ∞ if p = ∞. L p (Ω) is equipped withx∈Ωthe norm⎧( ⎨ ∫1dx)‖u‖ L p = Ω |u(x)|p pif p < ∞⎩ess sup x∈Ω |u(x)|, if p = ∞.L p loc (Ω) the set of measurable functions u : Ω → R such thatu |ω ∈ L p (ω) for all ω ⊂⊂ ΩW m,p (Ω) the space of (classes of) measurable functions u : Ω → Rsuch that D α u ∈ L p (Ω) in the sense of distributions, forevery multi-index α ∈ N N with |α| ≤ m. W m,p (Ω) is aBanach space when equipped with the norm ‖u‖ W m,p∑=|α|≤m ‖Dα u‖ L pW m,ploc(Ω) the set of measurable functions u : Ω → R such thatu |ω ∈ W m,p (ω) for all ω ⊂⊂ ΩW m,p0 (Ω) the closure of Cc ∞ (Ω) in W m,p (Ω)W −m,p′ (Ω) the <strong>to</strong>pological dual of W m,p0 (Ω)H m (Ω) = W m,2 (Ω). H m (Ω) is equipped with the equivalentnorm( ∑∫) 1‖u‖ H m = |D α u(x)| 2 2dx ,|α|≤mΩand H m (Ω) ∫is a Hilbert space for the scalar product(u, v) H m = R(u(x)v(x)) dxHloc m (Ω)m,2= WlocH0 m (Ω) = W m,20 (Ω)H −m (Ω) = W −m,2 (Ω)|u| m,p,Ω = ∑ |α|=m ‖Dα u‖ L p (Ω)Ω


CHAPTER 1ODE methodsConsider the problem{−△u = g(u) in Ω,u = 0 in ∂Ω,where Ω is the ballΩ = {x ∈ R N ; |x| < R},for some given 0 < R ≤ ∞. In the case R = ∞, the boundarycondition is unders<strong>to</strong>od as u(x) → 0 as |x| → ∞. Throughout thischapter, we assume that g : R → R is a locally Lipschitz continuousfunction. We look for nontrivial solutions, i.e. solutions u ≢ 0 (clearly,u ≡ 0 is a solution if and only if g(0) = 0). In this chapter, we studytheir existence by purely ODE methods.If N = 1, then the equation is simply the ordinary differentialequationu ′′ + g(u) = 0, −R < r < R,and the boundary condition becomes u(±R) = 0, or u(r) → 0 as r →±∞ in the case R = ∞. In Sections 1.1 and 1.2, we solve completelythe above problem. We give necessary and sufficient conditions on gso that there exists a solution, and we characterize all the solutions.In the case N ≥ 2, then one can also reduce the problem <strong>to</strong>an ordinary differential equation. Indeed, if we look for a radiallysymmetric solution u(x) = u(|x|), then the equation becomes theODEu ′′ + N − 1 u ′ + g(u) = 0, 0 < r < R,rand the boundary condition becomes u(R) = 0, or u(r) → 0 as r →∞ in the case R = ∞. The approach that we will use for solvingthis problem is the following. Given u 0 > 0, we solve the ordinary1


2 1. ODE METHODSdifferential equation with the initial values u(0) = u 0 , u ′ (0) = 0.There exists a unique solution, which is defined on a maximal interval[0, R 0 ). Next, we try <strong>to</strong> adjust the initial value u 0 in such a waythat R 0 > R and u(R) = 0 (R 0 = ∞ and limr→∞u(r) = 0 in the caseR = ∞). This is called the shooting method. In Sections 1.3 and 1.4,we give sufficient conditions on g for the existence of solutions. Wealso obtain some necessary conditions.1.1. The case of the lineWe begin with the simple case N = 1 and R = ∞. In other words,Ω = R. In this case, we do not need <strong>to</strong> impose radial symmetry (butwe will see that any solution is radially symmetric up <strong>to</strong> a translation).We consider the equationfor all x ∈ R, with the boundary conditionu ′′ + g(u) = 0, (1.1.1)lim u(x) = 0. (1.1.2)x→±∞We give a necessary and sufficient condition on g for the existenceof nontrivial solutions of (1.1.1)–(1.1.2). Moreover, we characterizeall solutions. We show that all solutions are derived from a uniquepositive, even one and a unique negative, even one (whenever theyexist) by translations.We begin by recalling some elementary properties of the equation(1.1.1).Remark 1.1.1. The following properties hold.(i) Given x 0 , u 0 , v 0 ∈ R, there exists a unique solution u of (1.1.1)such that u(x 0 ) = u 0 and u ′ (x 0 ) = v 0 , defined on a maximalinterval (a, b) for some −∞ ≤ a < x 0 < b ≤ ∞. In addition, ifa > −∞, then |u(x)| + |u ′ (x)| → ∞ as x ↓ a (similarly, |u(x)| +|u ′ (x)| → ∞ as x ↑ b if b < ∞). This is easily proved by solvingthe integral equationu(x) = u 0 + (x − x 0 )v 0 −∫ xx 0∫ sx 0g(u(σ)) dσ ds,on the interval (x 0 − α, x 0 + α) for some α > 0 sufficiently small(apply Banach’s fixed point theorem in C([x 0 − α, x 0 + α])), andthen by considering the maximal solution.


1.1. THE CASE OF THE LINE 3(ii) It follows in particular from uniqueness that if u satisfies (1.1.1)on some interval (a, b) and if u ′ (x 0 ) = 0 and g(u(x 0 )) = 0 forsome x 0 ∈ (a, b), then u ≡ u(x 0 ) on (a, b).(iii) If u satisfies (1.1.1) on some interval (a, b) and x 0 ∈ (a, b), then12 u′ (x) 2 + G(u(x)) = 1 2 u′ (x 0 ) 2 + G(u(x 0 )), (1.1.3)for all x ∈ (a, b), whereG(s) =∫ s0g(σ) dσ, (1.1.4)for s ∈ R. Indeed, multiplying the equation by u ′ , we obtaind{ 1dx 2 u′ (x) 2 + G(u(x))}= 0,for all x ∈ (a, b).(iv) Let x 0 ∈ R and h > 0. If u satisfies (1.1.1) on (x 0 − h, x 0 + h)and u ′ (x 0 ) = 0, then u is symmetric about x 0 , i.e. u(x 0 + s) ≡u(x 0 − s) for all 0 ≤ s < h. Indeed, let v(s) = u(x 0 + s) andw(s) = u(x 0 − s) for 0 ≤ s < h. Both v and w satisfy (1.1.1) on(−h, h) and we have v(0) = w(0) and v ′ (0) = w ′ (0), so that byuniqueness v ≡ w.(v) If u satisfies (1.1.1) on some interval (a, b) and u ′ has at least twodistinct zeroes x 0 , x 1 ∈ (a, b), then u exists on (−∞, +∞) and uis periodic with period 2|x 0 − x 1 |. This follows easily from (iv),since u is symmetric about both x 0 and x 1 .We next give some properties of possible solutions of (1.1.1)–(1.1.2).Lemma 1.1.2. If u ≢ 0 satisfies (1.1.1)–(1.1.2), then the followingproperties hold.(i) g(0) = 0.(ii) Either u > 0 on R or else u < 0 on R.(iii) u is symmetric about some x 0 ∈ R, and u ′ (x) ≠ 0 for all x ≠ x 0 .In particular, |u(x − x 0 )| is symmetric about 0, increasing forx < 0 and decreasing for x > 0.(iv) For all y ∈ R, u(· − y) satisfies (1.1.1)–(1.1.2).Proof. If g(0) ≠ 0, then u ′′ (x) has a nonzero limit as x → ±∞,so that u cannot have a finite limit. This proves (i). By (1.1.2),


4 1. ODE METHODSu cannot be periodic. Therefore, it follows from Remark 1.1.1 (v)and (iv) that u ′ has exactly one zero on R and is symmetric about thiszero. Properties (ii) and (iii) follow. Property (iv) is immediate. □By Lemma 1.1.2, we need only study the even, positive or negativesolutions (since any solution is a translation of an even positive ornegative one), and we must assume g(0) = 0. Our main result of thissection is the following.Theorem 1.1.3. Let g : R → R be locally Lipschitz continuouswith g(0) = 0. There exists a positive, even solution of (1.1.1)–(1.1.2)if and only if there exists u 0 > 0 such thatg(u 0 ) > 0, G(u 0 ) = 0 and G(u) < 0 for 0 < u < u 0 , (1.1.5)where G is defined by (1.1.4). In addition, such a solution is unique.Similarly, there exists a negative, even solution of (1.1.1)–(1.1.2) ifand only if there exists v 0 < 0 such thatg(v 0 ) < 0, G(v 0 ) = 0 and G(u) < 0 for v 0 < u < 0, (1.1.6)and such a solution is unique.Proof. We only prove the first statement, and we proceed in fivesteps.Step 1. Let x 0 ∈ R and let u ∈ C 2 ([x 0 , ∞)). If u(x) → l ∈ Rand u ′′ (x) → 0 as x → ∞, then u ′ (x) → 0. Indeed, we havefor s > x ≥ x 0 . Therefore,u(x + 1) − u(x) =u ′ (s) = u ′ (x) +∫ x+1x∫ sxu ′′ (σ) dσ,u ′ (s) ds = u ′ (x) +∫ x+1 ∫ sfrom which the conclusion follows immediately.Step 2. If u is even and satifies (1.1.1)–(1.1.2), thenxxu ′′ (σ) dσ ds,12 u′ (x) 2 + G(u(x)) = 0, (1.1.7)for all x ∈ R andG(u(0)) = 0. (1.1.8)Indeed, letting x 0 → ∞ in (1.1.3), and using Step 1 and (1.1.2), weobtain (1.1.7). (1.1.8) follows, since u ′ (0) = 0.


1.1. THE CASE OF THE LINE 5Step 3. If u is a positive, even solution of (1.1.1)–(1.1.2),then g satisfies (1.1.5) with u 0 = u(0). Indeed, we have G(u 0 ) = 0by (1.1.8). Since u ′ (x) ≠ 0 for x ≠ 0 (by Lemma 1.1.2 (iii)), it followsfrom (1.1.7) that G(u(x)) < 0 for all x ≠ 0, thus G(u) < 0 for all0 < u < u 0 . Finally, since u is decreasing for x > 0 we have u ′ (x) ≤ 0for all x ≥ 0. This implies that u ′′ (0) ≤ 0, i.e. g(u 0 ) ≥ 0. If g(u 0 ) = 0,then u ≡ u 0 by uniqueness, which is absurd by (1.1.2). Therefore, wemust have g(u 0 ) > 0.Step 4. If g satisfies (1.1.5), then the solution u of (1.1.1) withthe initial values u(0) = u 0 and u ′ (0) = 0 is even, decreasing for x > 0and satisfies (1.1.2). Indeed, since g(u 0 ) > 0, we have u ′′ (0) < 0.Thus u ′ (x) < 0 for x > 0 and small. u ′ cannot vanish while u remainspositive, for otherwise we would have by (1.1.7) G(u(x)) = 0 for somex such that 0 < u(x) < u 0 . This is ruled out by (1.1.5). Furthermore,u cannot vanish in finite time, for then we would have u(x) = 0 forsome x > 0 and thus u ′ (x) = 0 by (1.1.7), which would imply u ≡ 0(see Remark 1.1.1 (ii)). Therefore, u is positive and decreasing forx > 0, and thus has a limit l ∈ [0, u 0 ) as x → ∞. We show that l = 0.Since u ′′ (x) → g(l) as x → ∞, we must have g(l) = 0. By Step 1,we deduce that u ′ (x) → 0 as x → ∞. Letting x → ∞ in (1.1.7)(which holds, because of (1.1.3) and the assumption G(u 0 ) = 0), wefind G(l) = 0, thus l = 0. Finally, u is even by Remark 1.1.1 (iv).Step 5. Conclusion. The necessity of condition (1.1.5) followsfrom Step 3, and the existence of a solution follows from Step 4. Itthus remain <strong>to</strong> show uniqueness. Let u and ũ be two positive, evensolutions. We deduce from Step 3 that g satisfies (1.1.5) with bothu 0 = u(0) and u 0 = ũ(0). It easily follows that ũ(0) = u(0), thusũ(x) ≡ u(x).□Remark 1.1.4. If g is odd, then the statement of Theorem 1.1.3is simplified. There exists solution u ≢ 0 of (1.1.1)–(1.1.2) if andonly if (1.1.5) holds. In this case, there exists a unique positive, evensolution of (1.1.1)–(1.1.2), which is decreasing for x > 0. <strong>An</strong>y othersolution ũ of (1.1.1)–(1.1.2) has the form ũ(x) = εu(x − y) for ε = ±1and y ∈ R.Remark 1.1.5. Here are some applications of Theorem 1.1.3 andRemark 1.1.4.


6 1. ODE METHODS(i) Suppose g(u) = −λu for some λ ∈ R (linear case). Then there isno nontrivial solution of (1.1.1)–(1.1.2). Indeed, neither (1.1.5)nor (1.1.6) hold. One can see this directly by calculating allsolutions of the equation. If λ = 0, then all the solutions havethe form u(x) = a + bx for some a, b ∈ R. If λ > 0, then allthe solutions have the form u(x) = ae √ λx + be −√λx for somea, b ∈ R. If λ < 0, then all the solutions have the form u(x) =ae i√−λx + be −i√−λx for some a, b ∈ R.(ii) Suppose g(u) = −λu + µ|u| p−1 u for some λ, µ ∈ R and somep > 1. If λ ≤ 0 or if µ ≤ 0, then there is no nontrivial solutionof (1.1.1)–(1.1.2). If λ, µ > 0, then there is the solution( λ(p + 1)) 1 (p−1 p − 1 √ )) − 2p−1u(x) =cosh(λx .2µ2All other solutions have the form ũ(x) = εu(x − y) for ε = ±1and y ∈ R. We need only apply Remark 1.1.4.(iii) Suppose g(u) = −λu + µ|u| p−1 u − ν|u| q−1 u for some λ, µ, ν ∈ Rand some 1 < p < q. The situation is then much more complex.a) If λ < 0, then there is no nontrivial solution.b) If λ = 0, then the only case when there is a nontrivial solutionis when µ < 0 and ν < 0. In this case, there is theeven, positive decreasing solution u corresponding <strong>to</strong> the intialvalue u(0) = ((q + 1)µ/(p + 1)ν) 1q−p and u ′ (0) = 0. Allother solutions have the form ũ(x) = εu(x − y) for ε = ±1and y ∈ R.c) If λ > 0, µ ≤ 0 and ν ≥ 0, then there is no nontrivial solution.d) If λ > 0, µ > 0 and ν ≤ 0, then there is the even, positivedecreasing solution u corresponding <strong>to</strong> the intial value u 0 > 0given byµp + 1 up−1 0 − νq + 1 uq−1 0 = λ 2 . (1.1.9)All other solutions have the form ũ(x) = εu(x−y) for ε = ±1and y ∈ R.e) If λ > 0, µ > 0 and ν > 0, let u = ((q +1)(p−1)µ/(p+1)(q −1)ν) 1q−p . Ifµp + 1 up−1 −νq + 1 uq−1 ≤ λ 2 ,


1.2. THE CASE OF THE INTERVAL 7then there is no nontrivial solution. Ifµp + 1 up−1 −νq + 1 uq−1 > λ 2 ,then there is the even, positive decreasing solution u corresponding<strong>to</strong> the intial value u ∈ (0, u) given by (1.1.9). Allother solutions have the form ũ(x) = εu(x − y) for ε = ±1and y ∈ R.f) If λ > 0 and ν < 0, then there is the even, positive decreasingsolution u corresponding <strong>to</strong> the intial value u 0 > 0 givenby (1.1.9). All other solutions have the form ũ(x) = εu(x−y)for ε = ±1 and y ∈ R.1.2. The case of the intervalIn this section, we consider the case where Ω is a bounded interval,i.e. N = 1 and R < ∞. In other words, Ω = (−R, R). We consideragain the equation (1.1.1), but now with the boundary conditionu(−R) = u(R) = 0. (1.2.1)The situation is more complex than in the preceding section. Indeed,note first that the condition g(0) = 0 is not anymore necessary. Forexample, in the case g(u) = 4u − 2 and R = π, there is the solutionu(x) = sin 2 x. Also, there are necessary conditions involving not onlyg, but relations between g and R. For example, let g(u) = u. Since inthis case all solutions of (1.1.1) have the form u(x) = a sin(x + b), wesee that there is a nontrivial solution of (1.1.1)-(1.2.1) if and only ifR = kπ/2 for some positive integer k. Moreover, this example showsthat, as opposed <strong>to</strong> the case R = ∞, there is not uniqueness of positive(or negative) solutions up <strong>to</strong> translations.We give a necessary and sufficient condition on g for the existenceof nontrivial solutions of (1.1.1)-(1.2.1). Moreover, we characterizeall solutions. The characterization, however, is not as simple as inthe case R = ∞. In the case of odd nonlinearities, the situationis relatively simple, and we show that all solutions are derived frompositive solutions on smaller intervals by reflexion.We recall some simple properties of the equation (1.1.1) whichfollow from Remark 1.1.1.Remark 1.2.1. The following properties hold.


8 1. ODE METHODS(i) Suppose that u satisfies (1.1.1) on some interval (a, b), thatu(a) = u(b) = 0 and that u > 0 on (a, b). Then u is symmetricwith respect <strong>to</strong> (a + b)/2, i.e. u(x) ≡ u(a + b − x), andu ′ (x) > 0 for all a < x < (a + b)/2. Similarly, if u < 0 on (a, b),then u is symmetric with respect <strong>to</strong> (a + b)/2 and u ′ (x) < 0 forall a < x < (a + b)/2. Indeed, suppose that u ′ (x 0 ) = 0 for somex 0 ∈ (a, b). Then u is symmetric about x 0 , by Remark 1.1.1 (iv).If x 0 < (a + b)/2, we obtain in particular u(2x 0 − a) = u(a) = 0,which is absurd since u > 0 on (a, b) and 2x 0 −a ∈ (a, b). We obtainas well a contradiction if x 0 > (a+b)/2. Therefore, (a+b)/2is the only zero of u ′ on (a, b) and u is symmetric with respect<strong>to</strong> (a + b)/2. Since u > 0 on (a, b), we must then have u ′ (x) > 0for all a < x < (a + b)/2.(ii) Suppose again that u satisfies (1.1.1) on some interval (a, b), thatu(a) = u(b) = 0 and that u > 0 on (a, b). Then g((a + b)/2) > 0.If instead u < 0 on (a, b), then g((a+b)/2) < 0. Indeed, it followsfrom (i) that u achieves its maximum at (a+b)/2. In particular,0 ≤ u ′′ ((a+b)/2) = −g(u((a+b)/2)). Now, if g(u((a+b)/2)) = 0,then u ≡ u((a + b)/2) by uniqueness, which is absurd.Remark 1.2.2. In view of Remarks 1.1.1 and 1.2.1, we see thatany nontrivial solution u of (1.1.1)-(1.2.1) must have a specific form.More precisely, we can make the following observations.(i) u can be positive or negative, in which case u is even and |u(x)|is decreasing for x ∈ (0, R) (by Remark 1.2.1 (i)).(ii) If u is neither positive nor negative, then u ′ vanishes at leasttwice in Ω, so that u is the restriction <strong>to</strong> Ω of a periodic solutionin R (by Remark 1.1.1).(iii) Suppose u is neither positive nor negative and let τ > 0 be theminimal period of u. Set w(x) = u(−R + x), so that w(0) =w(τ) = 0. Two possibilities may occur.a) Either w > 0 (respectively w < 0) on (0, τ) (and thus w ′ (0) =w ′ (τ) = 0 because u is C 1 ). In this case, we clearly haveR = kτ for some integer k ≥ 1, and so u is obtained byperiodicity from a positive (respectively negative) solution(u itself) on the smaller interval (−R, −R + τ).b) Else, w vanishes in (0, τ), and then there exists σ ∈ (0, τ)such that w > 0 (respectively w < 0) on (0, σ), w is symmetricabout σ/2, w < 0 (respectively w > 0) on (σ, τ) and


1.2. THE CASE OF THE INTERVAL 9w is symmetric about (τ + σ)/2. In this case, u is obtainedfrom a positive solution and a negative solution on smallerintervals (u on (−R, −R + σ) and u on (−R + σ, −R + τ)).The derivatives of these solutions must agree at the endpoints(because u is C 1 ) and 2R = mσ + n(τ − σ), where m andn are positive integers such that n = m or n = m + 1 orn = m − 1. To verify this, we need only show that w takesboth positive and negative values in (0, τ) and that w vanishesonly once (the other conclusions then follow easily). Wefirst show that w takes values of both signs. Indeed, if forexample w ≥ 0 on (0, τ), then w vanishes at some τ 1 ∈ (0, τ)and w ′ (0) = w ′ (τ 1 ) = w ′ (τ) = 0. Then w is periodic of period2τ 1 and of period 2(τ − τ 1 ) by Remark 1.1.1 (v). Since τ isthe minimal period of w, we must have τ 1 = τ/2. Therefore,w ′ must vanish at some τ 2 ∈ (0, τ 1 ), and so w has the period2τ 2 < τ, which is absurd. Finally, suppose w vanishes twicein (0, τ). This implies that w ′ has three zeroes τ 1 < τ 2 < τ 3in (0, τ). By Remark 1.1.1 (v), w is periodic with the periods2(τ 2 − τ 1 ) and 2(τ 3 − τ 2 ). We must then have 2(τ 2 − τ 1 ) ≥ τand 2(τ 3 − τ 2 ) ≥ τ. It follows that τ 3 − τ 1 ≥ τ, which isabsurd.(iv) Assume g is odd. In particular, there is the trivial solutionu ≡ 0. Suppose u is neither positive nor negative, u ≢ 0 andlet τ > 0 be the minimal period of u. Then it follows from (iii)above that u(τ − x) = −u(x) for all x ∈ [0, τ]. Indeed, the firstpossibility of (iii) cannot occur since if u(0) = u ′ (0) = 0, thenu ≡ 0 by uniqueness (because g(0) = 0). Therefore, the secondpossibility occurs, but by oddness of g and uniqueness, we musthave σ = τ/2, and u(τ − x) = −u(x) for all x ∈ [0, τ]. In otherwords, u is obtained from a positive solution on (−R, −R + σ),with σ = R/2m for some positive integer m, which is extended<strong>to</strong> (−R, R) by successive reflexions.It follows from the above Remark 1.2.2 that the study of thegeneral nontrivial solution of (1.1.1)-(1.2.1) reduces <strong>to</strong> the study ofpositive and negative solutions (for possibly different values of R). Wenow give a necessary and sufficient condition for the existence of suchsolutions.


10 1. ODE METHODSTheorem 1.2.3. There exists a solution u > 0 of (1.1.1)-(1.2.1)if and only if there exists u 0 > 0 such that(i) g(u 0 ) > 0;(ii) G(u) < G(u 0 ) for all 0 < u < u 0 ;(iii) either∫G(u 0 ) > 0 or else G(u 0 ) = 0 and g(0) < 0;u0ds(iv) √ √2 G(u0 ) − G(s) = R.0In this case, u > 0 defined by∫ u0u(x)ds√ √ = |x|, (1.2.2)2 G(u0 ) − G(s)for all x ∈ Ω, satisfies (1.1.1)-(1.2.1). Moreover, any positive solutionhas the form (1.2.2) for some u 0 > 0 satisfying (i)–(ii).Similarly, there exists a solution u < 0 of (1.1.1)-(1.2.1) if andonly if there exists v 0 < 0 such that g(v 0 ) < 0, G(v 0 ) < G(v) for allv 0 < v < 0, g(0) > 0 if G(v 0 ) = 0, and∫ 0v 0In this case, u < 0 defined by∫ u(x)v 0ds√2√G(s) − G(v0 ) = R.ds√ √ = |x|, (1.2.3)2 G(s) − G(v0 )for all x ∈ Ω, satisfies (1.1.1)-(1.2.1). Moreover, any negative solutionhas the form (1.2.3) for some v 0 < 0 as above.Proof. We consider only the case of positive solutions, the othercase being similar. We proceed in two steps.Step 1. The conditions (i)–(iv) are necessary. Let u 0 = u(0).(i) follows from Remark 1.2.1 (ii). Since u ′ (0) = 0 by Remark 1.2.1 (i),it follows from (1.1.3) that12 u′ (x) 2 + G(u(x)) = G(u 0 ), (1.2.4)for all x ∈ (a, b). Since u ′ (x) ≠ 0 for all x ∈ (−R, R), x ≠ 0 (againby Remark 1.2.1 (i)), (1.2.4) implies (ii). It follows from (1.2.4) thatG(u 0 ) = u ′ (R) 2 /2 ≥ 0. Suppose now G(u 0 ) = 0. If g(0) > 0, then (ii)cannot hold, and if g(0) = 0, then u cannot vanish (by Theorem 1.1.3).


1.2. THE CASE OF THE INTERVAL 11Therefore, we must have g(0) < 0, which proves (iii).follows from (1.2.4) thatFinally, i<strong>to</strong>n (0, R). Therefore,u ′ (x) = − √ 2 √ G(u 0 ) − G(u(x)),dF (u(x)) = 1, wheredxF (y) =∫ u0yds√2√G(u0 ) − G(s) ;and so F (u(x)) = x, for x ∈ (0, R). (1.2.2) follows for x ∈ (0, R). Thecase x ∈ (−R, 0) follows by symmetry. Letting now x = R in (1.2.2),we obtain (iv).Step 2. Conclusion. Suppose (i)–(iv), and let u be definedby (1.2.2). It is easy <strong>to</strong> verify by a direct calculation that u satisfies(1.1.1) in Ω, and it follows from (iv) that u(±R) = 0. Finally, thefact that any solution has the form (1.2.2) for some u 0 > 0 satisfying(i)–(iv) follows from Step 1.□Remark 1.2.4. Note that in general there is not uniqueness ofpositive (or negative) solutions. For example, if R = π/2 and g(u) =u, then u(x) = a cos x is a positive solution for any a > 0. In general,any u 0 > 0 satisfying (i)–(iv) gives rise <strong>to</strong> a solution given by (1.2.2).Since u(0) = u 0 , two distinct values of u 0 give rise <strong>to</strong> two distinctsolutions. For some nonlinearities, however, there exists at most oneu 0 > 0 satisfying (i)–(iv) (see Remarks 1.2.5 and 1.2.6 below).We now apply the above results <strong>to</strong> some model cases.Remark 1.2.5. Consider g(u) = a + bu, a, b ∈ R.(i) If b = 0, then there exists a unique solution u of (1.1.1)-(1.2.1),which is given by u(x) = a(R 2 − x 2 )/2. This solution has thesign of a and is nontrivial iff a ≠ 0.(ii) If a = 0 and b > 0, then there is a nontrivial solution of (1.1.1)-(1.2.1) if and only if 2 √ bR = kπ for some positive integer k. Inthis case, any nontrivial solution u of (1.1.1)-(1.2.1) is given byu(x) = c sin ( √ b(x + R)) for some c ∈ R, c ≠ 0. In particular,the set of solutions is a one parameter family.(iii) If a = 0 and b ≤ 0, then the only solution of (1.1.1)-(1.2.1) isu ≡ 0.


12 1. ODE METHODS(iv)√If a ≠ 0 and b > 0, then several cases must be considered. IfbR = (π/2) + kπ for some nonnegative integer k, then thereis no solution of (1.1.1)-(1.2.1). If √ bR = kπ for some positiveinteger k, then there is a nontrivial solution of (1.1.1)-(1.2.1),and all solutions have the formu(x) = a (√cos ( bx))b cos ( √ bR) − 1 + c sin ( √ bx),for some c ∈ R. In particular, the set of solutions is a one parameterfamily. If c = 0, then u has constant sign and u ′ (−R) =u ′ (R) = 0. (If in addition k is even, then also u(0) = u ′ (0) = 0.)If √ c ≠ 0, then u takes √both positive and negative values. IfbR ≠ (π/2) + kπ and bR ≠ kπ for all nonnegative integersk, then there is a unique solution of (1.1.1)-(1.2.1) given by theabove formula with c = 0. Note that this solution has constantsign if √ bR ≤ π and changes sign otherwise.(v) If a ≠ 0 and b < 0, then there is a unique solution of (1.1.1)-(1.2.1) given byu(x) = a (1 − cosh (√ −bx))b cosh ( √ .−bR)Note that in particular u has constant sign (the sign of a) in Ω.Remark 1.2.6. Consider g(u) = au + b|u| p−1 u, with a, b ∈ R,b ≠ 0 and p > 1. Note that in this case, there is always the trivialsolution u ≡ 0. Note also that g is odd, so that by Remark 1.2.2 (iv)and Theorem 1.2.3, there is a solution of (1.1.1)-(1.2.1) every timethere exists u 0 > 0 and a positive integer m such that properties (i),(ii) and (iv) of Theorem 1.2.3 are satisfied and such that∫ u00ds√ √2 G(u0 ) − G(s) = r2m . (1.2.5)Here, G is given by G(u) = a 2 u2 +bp + 1 |u|p+1 .(i) If a ≤ 0 and b < 0, then there is no u 0 > 0 such that g(u 0 ) > 0.In particular, there is no nontrivial solution of (1.1.1)-(1.2.1).(ii) If a ≥ 0 and b > 0, then g > 0 and G is increasing on [0, ∞).Therefore, there is a pair ±u of nontrivial solutions of (1.1.1)-(1.2.1) every time there is u 0 > 0 and an integer m ≥ 1 such


1.2. THE CASE OF THE INTERVAL 13∫ u00that property (1.2.5) is satisfied. We haveds√2√G(u0 ) − G(s)=∫ 10dt√ √:= φ(u 0 ). (1.2.6)2a2 (1 − t2 ) + bp+1 up−1 0 (1 − t p+1 )It is clear that φ : [0, ∞) → (0, ∞) is decreasing, that φ(∞) = 0and thatφ(0) =∫ 10dt√ √2a2 (1 − t2 ) = π2 √ a(+∞ if a = 0),by using the change of variable t = sin θ. Therefore, givenany integer m > 2 √ aR/π, there exists a unique u 0 (k) suchthat (1.2.5) is satisfied. In particular, the set of nontrivial solutionsof (1.1.1)-(1.2.1) is a pair of sequences ±(u n ) n≥0 . Wesee that there exists a positive solution (which corresponds <strong>to</strong>m = 1) iff 2 √ aR < π.(iii) If a > 0 and b < 0, then both g and G are increasing on (0, u ∗ )with u ∗ = (−a/b) 1p−1 . On (u∗ , ∞), g is negative and G is decreasing.Therefore, the assumptions (i)–(iii) of Theorem 1.2.3are satisfied iff u 0 ∈ (0, u ∗ ). Therefore, there is a pair ±u of nontrivialsolutions of (1.1.1)-(1.2.1) every time there is u 0 ∈ (0, u ∗ )and an integer m ≥ 1 such that property (1.2.5) is satisfied.Note that for u 0 ∈ (0, u ∗ ), formula (1.2.6) holds, but since b < 0,φ is now increasing on (0, u ∗ ), φ(0) = π/2 √ a and φ(u ∗ ) = +∞.Therefore, there exists nontrivial solutions iff 2 √ aR > π, and inthis case, there exists a unique positive solution. Moreover, stillassuming 2 √ aR > π, the set of nontrivial solutions of (1.1.1)-(1.2.1) consists of l pairs of solutions, where l is the integer par<strong>to</strong>f 2 √ aR/π. Every pair of solution corresponds <strong>to</strong> some integerm ∈ {1, . . . , l} and u 0 ∈ (0, u ∗ ) defined by φ(u 0 ) = R/2m.(iv) If a < 0 and b > 0, then assumptions (i)–(iii) of Theorem 1.2.3are satisfied iff u 0 > u ∗ with u ∗ = (−a(p + 1)/2b) 1p−1 . Therefore,there is a pair ±u of nontrivial solutions of (1.1.1)-(1.2.1)every time there is u 0 > u ∗ and an integer m ≥ 1 such that property(1.2.5) is satisfied. Note that for u 0 > u ∗ , formula (1.2.6)holds, and that φ is decreasing on (u ∗ , ∞), φ(u ∗ ) = +∞ and


14 1. ODE METHODSφ(∞) = 0. Therefore, given any integer k > 2 √ aR/π, thereexists a unique u 0 (k) such that (1.2.5) is satisfied. In particular,the set of nontrivial solutions of (1.1.1)-(1.2.1) is a pair ofsequences ±(u n ) n≥0 . We see that there always exists a positivesolution (which corresponds <strong>to</strong> m = 1).1.3. The case of R N , N ≥ 2In this section, we look for radial solutions of the equation{−△u = g(u) in R N ,u(x) −→ |x|→∞ 0.As observed before, the equation for u(r) = u(|x|) becomes the ODEu ′′ + N − 1 u ′ + g(u) = 0, r > 0,rwith the boundary condition u(r) −→ 0. For simplicity, we considerr→∞the model caseg(u) = −λu + µ|u| p−1 u.(One can handle more general nonlinearities by the method we willuse, see McLeod, Troy and Weissler [38].) Therefore, we look forsolutions of the ODEu ′′ + N − 1 u ′ − λu + µ|u| p−1 u = 0, (1.3.1)rfor r > 0 such thatu(r) −→ 0. (1.3.2)r→∞Due <strong>to</strong> the presence of the nonau<strong>to</strong>nomous term (N − 1)u ′ /r in theequation (1.3.1), this problem turns out <strong>to</strong> be considerably more difficultthan in the one-dimensional case. On the other hand, it has aricher structure, in the sense that there are “more” solutions.We observe that, given u 0 > 0, there exists a unique, maximalsolution u ∈ C 2 ([0, R m )) of (1.3.1) with the initial conditions u(0) =u 0 and u ′ (0) = 0, with the blow up alternative that either R m = ∞ orelse |u(r)|+|u ′ (r)| → ∞ as r ↑ R m . To see this, we write the equationin the form(r N−1 u ′ (r)) ′ = λr N−1 (u(r) − µ|u(r)| p−1 u(r)), (1.3.3)thus, with the initial conditions,


1.3. THE CASE OF R N , N ≥ 2 15u(r) = u 0 +∫ r0∫ ss −(N−1) σ N−1 (u(σ) − µ|u(σ)| p−1 u(σ)) dσ ds. (1.3.4)0This last equation is solved by the usual fixed point method. Forr > 0, the equation is not anymore singular, so that the solution canbe extended by the usual method <strong>to</strong> a maximal solution which satisfiesthe blow up alternative.The nonau<strong>to</strong>nomous term in the equation introduces some dissipation.To see this, let u be a solution on some interval (a, b), with0 < a < b < ∞, and setE(u, r) = 1 2 u′ (r) 2 − λ 2 u(r)2 + µp + 1 |u(r)|p+1 . (1.3.5)Multiplying the equation by u ′ (r), we obtaindEdr = −N − 1 u ′ (r) 2 , (1.3.6)rso that E(u, r) is a decreasing quantity.Note that if µ > 0, there is a constant C depending only on p, µ, λsuch thatE(u, r) ≥ 1 2 (u′ (r) 2 + u(r) 2 ) − C.In particular, all the solutions of (1.3.1) exist for all r > 0 and staybounded as r → ∞.The first result of this section is the following.Theorem 1.3.1. Assume λ, µ > 0 and (N − 2)p < N + 2. Thereexists x 0 > 0 such that the solution u of (1.3.1) with the initial conditionsu(0) = x 0 and u ′ (0) = 0 is defined for all r > 0, is positive anddecreasing. Moreover, there exists C such thatfor all r > 0.u(r) 2 + u ′ (r) 2 ≤ Ce −2√ λr , (1.3.7)When N = 1 (see Section 1.1), there is only one radial solutionsuch that u(0) > 0 and u(r) → 0 as r → ∞. When N ≥ 2, thereare infinitely many such solutions. More precisely, there is at leas<strong>to</strong>ne such solution with any prescribed number of nodes, as shows thefollowing result.


16 1. ODE METHODSTheorem 1.3.2. Assume λ, µ > 0 and (N − 2)p < N + 2. Thereexists an increasing sequence (x n ) n≥0 of positive numbers such thatthe solution u n of (1.3.1) with the initial conditions u n (0) = x n andu ′ n(0) = 0 is defined for all r > 0, has exactly n nodes, and satisfiesfor some constant C the estimate (1.3.7).We use the method of McLeod, Troy and Weissler [38] <strong>to</strong> prove theabove results. The proof is rather long and relies on some preliminaryinformations on the <strong>equations</strong>, which we collect below.Proposition 1.3.3. If u is the solution of{u ′′ + N−1ru ′ + |u| p−1 u = 0,u(0) = 1, u ′ (0) = 0,(1.3.8)then the following properties hold.(i) If N ≥ 3 and (N − 2)p ≥ N + 2, then u(r) > 0 and u ′ (r) < 0for all r > 0. Moreover, u(r) → 0 as r → ∞.(ii) If (N−2)p < N+2, then u oscillates indefinitely. More precisely,for any r 0 ≥ 0 such that u(r 0 ) ≠ 0, there exists r 1 > r 0 suchthat u(r 0 )u(r 1 ) < 0.Proof. We note that u ′′ (0) < 0, so that u ′ (r) < 0 for r > 0 andsmall. Now, if u ′ would vanish while u remains positive, we wouldobtain u ′′ < 0 from the equation, which is absurd. So u ′ < 0 while uremains positive. Next, we deduce from the equation that( u′22 + |u|p+1p + 1) ′ N − 1 = − u ′2 , (1.3.9)r(r N−1 uu ′ ) ′ + r N−1 |u| p+1 = r N−1 u ′2 , (1.3.10)and( rN2 u′2 + rNp + 1 |u|p+1) ′ N − 2 + r N−1 u ′22= Np + 1 rN−1 |u| p+1 . (1.3.11)We first prove property (i). Assume by contradiction that u has afirst zero r 0 . By uniqueness, we have u ′ (r 0 ) ≠ 0. Integrating (1.3.10)and (1.3.11) on (0, r 0 ), we obtain∫ r00r N−1 u p+1 =∫ r00r N−1 u ′2 ,


1.3. THE CASE OF R N , N ≥ 2 17andand so,r0N 2 u′ (r 0 ) 2 + N − 2 ∫ r0r N−1 u ′2 =N ∫ r0r N−1 |u| p+1 ;2 0p + 1 0(0 < rN 0N2 u′ (r 0 ) 2 =p + 1 − N − 2 ) ∫ r 0r N−1 u ′2 ≤ 0,2 0which is absurd. This shows that u(r) > 0 (hence u ′ (r) < 0) forall r > 0. In particular, u(r) decreases <strong>to</strong> a limit l ≥ 0 as r → ∞.Since u ′ (r) is bounded by (1.3.9), we deduce from the equation thatu ′′ (r) → −l p , which implies that l = 0. This proves property (i)We now prove property (ii), and we first show that u must havea first zero. Indeed, suppose by contradiction that u(r) > 0 for allr > 0. It follows that u ′ (r) < 0 for all r > 0. Thus u has a limitl ≥ 0 as r → ∞. Note that by (1.3.6), u ′ is bounded, so that by theequation u ′′ (r) → −l p as r → ∞, which implies that l = 0. Observethatr N−1 u ′ (r) = −∫ r0s N−1 u(s) p .Since the right-hand side of the above identity is a negative, decreasingfunction, we havefor r ≥ 1, thusr N−1 u ′ (r) ≤ −c < 0,u ′ (r) ≤ u ′ (1) − c∫ r1s −(N−1) ds.If N = 2, we deduce that u ′ must vanish in finite time, which isabsurd. So we now suppose N ≥ 3. We haveTherefore,−r N−1 u ′ (r) =which implies that∫ r0∫ rs N−1 u p ≥ u(r) p s N−1 = rN N u(r)p .(1)r2 ′−(p − 1)u(r)p−1 ≥ 0,2N0u(r) ≤ Cr − 2p−1 . (1.3.12)


18 1. ODE METHODSBy the assumption on p, this implies that∫ ∞We now integrate (1.3.11) on (0, r):r N 2 u′ (r) 2 +0rNp + 1 u(r)p+1 + N − 22r N−1 u(r) p+1 < ∞. (1.3.13)∫ r0= Np + 1s N−1 u ′2∫ rLetting r → ∞ and applying (1.3.13), we deduce that∫ ∞00s N−1 u p+1 . (1.3.14)r N−1 u ′ (r) 2 < ∞. (1.3.15)It follows in particular from (1.3.13) and (1.3.15) that there existr n → ∞ such thatr N n ((u ′ (r n ) 2 + u(r n ) p+1 ) → 0.Letting r = r n in (1.3.14) and applying (1.3.13) and (1.3.15), wededuce by letting n → ∞N − 22∫ ∞0s N−1 u ′2 =Np + 1Finally, we integrate (1.3.10) on (0, r):r N−1 u(r)u ′ (r) +∫ r0∫ ∞0s N−1 u p+1 =s N−1 u p+1 . (1.3.16)∫ r0s N−1 u ′2 . (1.3.17)We observe that u(r n ) ≤ cr − Np+1n and that |u ′ (r n )| ≤ cr − N 2n . By theassumption on p, this implies that rn N−1 u(r n )u ′ (r n ) → 0. Lettingr = r n in (1.3.17) and letting n → ∞, we obtain∫ ∞0s N−1 u p+1 =∫ ∞0s N−1 u ′2 .Together with (1.3.16), this implies( N0 =p + 1 − N − 2 ) ∫ ∞r N−1 u ′2 > 0,2which is absurd.0


1.3. THE CASE OF R N , N ≥ 2 19In fact, with the previous argument, one shows as well that ifr ≥ 0 is such that u(r) ≠ 0 and u ′ (r) = 0, then there exists ρ > rsuch that u(ρ) = 0.To conclude, we need only show that if ρ > 0 is such that u(ρ) = 0,then there exists r > ρ such that u(r) ≠ 0 and u ′ (r) = 0. To see this,note that u ′ (ρ) ≠ 0 (for otherwise u ≡ 0 by uniqueness), and supposefor example that u ′ (ρ) > 0. If u ′ (r) > 0 for all r ≥ ρ, then (sinceu is bounded) u converges <strong>to</strong> some positive limit l as r → ∞; andso, by the equation, u ′′ (r) → −l p as r → ∞, which is absurd. Thiscompletes the proof.□Remark 1.3.4. Here are some comments on Proposition 1.3.3and its proof.(i) Property (ii) does not hold for singular solutions of (1.3.8). Indeed,for p > N/(N − 2), there is the (singular) solution( (N − 2)p − N) 1 (p−1 2) 2p−1u(r) =, (1.3.18)2(p − 1)rwhich is positive for all r > 0.(ii) The argument at the beginning of the proof of property (ii) showsthat any positive solution u of (1.3.8) on [R, ∞) (R ≥ 0) satisfiesthe estimate (1.3.12) for r large. This holds for any value of p.The explicit solutions (1.3.18) show that this estimate cannot beimproved in general.(iii) Let p > 1, N ≥ 3 and let u be a positive solution of (1.3.8) on(R, ∞) for some R > 0. If u(r) → 0 as r → ∞, then there existsc > 0 such thatu(r) ≥c , (1.3.19)rN−2 for all r ≥ R. Indeed, (r N−1 u ′ ) ′ = −r N−1 u p ≤ 0, so thatu ′ (r) ≤ R N−1 u ′ (R)r −(N−1) . Integrating on (r, ∞), we obtain(N − 2)r N−2 u(r) ≥ −R N−1 u ′ (R). Since u > 0 and u(r) → 0 asr → ∞, we may assume without loss of generality that u ′ (R) < 0and (1.3.19) follows.Corollary 1.3.5. Assume λ, µ > 0 and (N − 2)p < N + 2. Forany ρ > 0 and any n ∈ N, n ≥ 1, there exists M n,ρ such that ifx 0 > M n,ρ , then the solution u of (1.3.1) with the initial conditionsu(0) = x 0 and u ′ (0) = 0 has at least n zeroes on (0, ρ).


20 1. ODE METHODSProof. Changing u(r) <strong>to</strong> (µ/λ) 1p−1 u(λ− 1 2 r), we are reduced <strong>to</strong>the equationu ′′ + N − 1 u ′ − u + |u| p−1 u = 0. (1.3.20)rLet now R > 0 be such that the solution v of (1.3.8) has n zeroes on(0, R) (see Proposition 1.3.3).Let x > 0 and let u be the solution of (1.3.20) such that u(0) = x,u ′ (0) = 0. Setũ(r) = 1 x u (rx p−12so that {ũ ′′ + N−1rũ ′ − 1x p−1 ũ + |ũ| p−1 ũ = 0,ũ(0) = 1, ũ ′ (0) = 0.It is not difficult <strong>to</strong> show that ũ → v in C 1 ([0, R]) as x → ∞. Sincev ′ ≠ 0 whenever v = 0, this implies that for x large enough, sayx ≥ x n , ũ has n zeroes on (0, R). Coming back <strong>to</strong> u, this means thatu has n zeroes on (0, (R/x) p−12 ). The result follows with for exampleM n,ρ = max{x n , (R/ρ) 2p−1 }.□Lemma 1.3.6. For every c > 0, there exists α(c) > 0 with thefollowing property. If u is a solution of (1.3.1) and if E(u, R) =−c < 0 and u(R) > 0 for some R ≥ 0 (E is defined by (1.3.5)), thenu(r) ≥ α(c) for all r ≥ R.Proof. Let f(x) = µ|x| p+1 /(p + 1) − λx 2 /2 for x ∈ R, and let−m = min f < 0. One verifies easily that for every c ∈ (0, m) theequation f(x) = −c has two positive solutions 0 < α(c) ≤ β(c), andthat if f(x) ≤ −c, then x ∈ [−β(c), −α(c)] ∪ [α(c), β(c)]. It followsfrom (1.3.6) that f(u(r)) ≤ −c for all r ≥ R, from which the resultfollows immediately.□We are now in a position <strong>to</strong> prove Theorem 1.3.1.Proof of Theorem 1.3.1. Let),A 0 = {x > 0; u > 0 on (0, ∞)},where u is the solution of (1.3.1) with the initial values u(0) = x,u ′ (0) = 0.


1.3. THE CASE OF R N , N ≥ 2 21We claim that I = (0, (λ(p+1)/2µ) 1p−1 ) ⊂ A0 , so that A 0 ≠ ∅. Indeed,suppose x ∈ I. It follows that E(u, 0) < 0; and so, inf u(r) > 0r≥0by Lemma 1.3.6. On the other hand, A 0 ⊂ (0, M 1,1 ) by Corollary1.3.5. Therefore, we may consider x 0 = sup A 0 . We claim thatx 0 has the desired properties.Indeed, let u be the solution with initial value x 0 . We first notethat x 0 ∈ A 0 . Otherwise, u has a first zero at some r 0 > 0. Byuniqueness, u ′ (r 0 ) ≠ 0, so that u takes negative values. By continuousdependance, this is the case for solutions with initial values close <strong>to</strong>x 0 , which contradicts the property x 0 ∈ A 0 . On the other hand, wehave x 0 > (λ(p + 1)/2µ) 11p−1 > (λ/µ)p−1 . This implies that u ′′ (0) < 0,so that u ′ (r) < 0 for r > 0 and small. We claim that u ′ (r) cannotvanish. Otherwise, for some r 0 > 0, u(r 0 ) > 0, u ′ (r 0 ) = 0 andu ′′ (r 0 ) ≥ 0. This implies that u(r 0 ) ≤ (λ/µ) 1p−1 , which in turn impliesE(u, r 0 ) < 0. By continuous dependance, it follows that for v 0 close <strong>to</strong>x 0 , we have E(v, r 0 ) < 0, which implies that v 0 ∈ A 0 by Lemma 1.3.6.This contradicts again the property x 0 = sup A 0 . Thus u ′ (r) < 0 forall r > 0. Letm = inf u(r) = lim u(r) ≥ 0r≥0 r→∞We claim that m = 0. Indeed if m > 0, we deduce from the equationthat (since u ′ is bounded)u ′′ (r) −→r→∞λm − µm p .Thus, either m = 0 or else m = (λ/µ) 1p−1 . In this last case, sinceu ′ (r n ) → 0 for some sequence r n → ∞, we have lim inf E(u, r) < 0as r → ∞, which is again absurd by Lemma 1.3.6. Thus m = 0.The exponential decay now follows from the next lemma (see alsoProposition 4.4.9 for a more general result).□Lemma 1.3.7. Assume λ, µ > 0. If u is a solution of (1.3.1) on[r 0 , ∞) such that u(r) → 0 as r → ∞, then there exists a constant Csuch thatfor r ≥ r 0 .u(r) 2 + u ′ (r) 2 ≤ Ce −2√ λr ,


22 1. ODE METHODSProof. Let v(r) = (µ/λ) 1p−1 u(λ− 1 2 r), so that v is a solutionof (1.3.20). Setf(r) = v(r) 2 + v ′ (r) 2 − 2v(r)v ′ (r).We see easily that for r large enough v(r)v ′ (r) < 0, so that, by possiblychosing r 0 larger,f(r) ≥ v(r) 2 + v ′ (r) 2 , (1.3.21)for r ≥ r 0 . <strong>An</strong> elementary calculation shows thatf ′ 2(N − 1)(r) + 2f(r) = − (v ′2 − vv ′ ) + 2|v| p−1 (v 2 − vv ′ )r≤ 2|v| p−1 (v 2 − vv ′ ) ≤ 2|v| p−1 f.It follows thatf ′ (r)f(r) + 2 − 2|v|p−1 ≤ 0;and so, given r 0 sufficiently large,d(∫ rlog(f(r)) + 2r − 2 |v| p−1) ≤ 0.drr 0Since v is bounded, we first deduce that f(r) ≤ Ce −r . Applyingthe resulting estimate |v(r)| ≤ Ce − r 2 in the above inequality, we nowdeduce that f(r) ≤ Ce −2r . Using (1.3.21), we obtain the desiredestimate.□Finally, for the proof of Theorem 1.3.2, we will use the followinglemma.Lemma 1.3.8. Let n ∈ N, x > 0, and let u be the solutionof (1.3.1) with the initial conditions u(0) = x and u ′ (0) = 0. Assumethat u has exactly n zeroes on (0, ∞) and that u 2 + u ′2 → 0 asr → ∞. There exists ε > 0 such that if |x − y| ≤ ε, then the correspondingsolution v of (1.3.1) has at most n + 1 zeroes on (0, ∞).Proof. Assume for simplicity that λ = µ = 1. We first observethat E(u, r) > 0 for all r > 0 by Lemma 1.3.6. This implies thatif r > 0 is a zero of u ′ , then |u(r)| p−1 > (p + 1)/2 > 1, so thatu(r)u ′′ (r) < 0, by the equation. In particular, if r 2 > r 1 are twoconsecutive zeroes of u ′ , it follows that u(r 1 )u(r 2 ) < 0, so that u hasa zero in (r 1 , r 2 ). Therefore, since u has a finite number of zeroes, itfollows that u ′ has a finite number of zeroes.


1.3. THE CASE OF R N , N ≥ 2 23Let r ′ ≥ 0 be the largest zero of u ′ and assume, for example, thatu(r ′ ) > 0. In particular, u(r ′ ) > 1 and u is decreasing on [r ′ , ∞).Therefore, there exists a unique r 0 ∈ (r ′ , ∞) such that u(r 0 ) = 1, andwe have u ′ (r 0 ) < 0. By continuous dependance, there exists ε > 0such that if |x − y| ≤ ε, and if v is the solution of (1.3.1) with theinitial conditions v(0) = x, then the following properties hold.(i) There exists ρ 0 ∈ [r 0 − 1, r 0 + 1] such that v has exactly n zeroeson [0, ρ 0 ](ii) v(ρ 0 ) = 1 and v ′ (ρ 0 ) < 0.Therefore, we need only show that, by choosing ε possibly smaller,v has at most one zero on [ρ 0 , ∞). To see this, we suppose v hasa first zero ρ 1 > ρ 0 , and we show that if ε is small enough, thenv < 0 on (ρ 1 , ∞). Since v(ρ 1 ) = 0, we must have v ′ (ρ 1 ) < 0; andso, v ′ (r) < 0 for r − ρ 1 > 0 and small. Furthermore, it follows fromthe equation that v ′ cannot vanish while v > −1. Therefore, thereexist ρ 3 > ρ 3 > ρ 1 such that v ′ < 0 on [ρ 1 , ρ 3 ] and v(ρ 2 ) = −1/4,v(ρ 3 ) = −1/2. By Lemma 1.3.6, we obtain the desired result if weshow that E(v, ρ 3 ) < 0 provided ε is small enough. To see this, wefirst observe that, since u > 0 on [r ′ , ∞),Let∀M > 0, ∃ε ′ ∈ (0, ε) such that ρ 1 > M if |x − y| ≤ ε ′ .It follows from (1.3.6) thatddrf(x) = |x|p+1p + 1 − x22 .2(N − 1)E(v, r) + E(v, r) =r2(N − 1)f(v(r));rand so,ddr (r2(N−1) E(v, r)) = 2(N − 1)r 2N−3 f(v(r)).Integrating on (ρ 0 , ρ 3 ), we obtainρ 2(N−1)3 E(v, ρ 3 ) = ρ 2(N−1)0 E(v, ρ 0 ) + 2(N − 1)Note that (by continuous dependence)ρ 2(N−1)0 E(v, ρ 0 ) 2(N−1) ≤ C,∫ ρ3ρ 0r 2N−3 f(v(r)) dr.


24 1. ODE METHODSwith C independent of y ∈ (x−ε, x+ε). On the other hand, f(v(r)) ≤0 on (ρ 0 , ρ 3 ) since −1 ≤ v ≤ 1, and there exists a > 0 such thatf(θ) ≤ −a for θ ∈ (−1/4, −1/2). It follows thatρ 2(N−1)3 E(v, ρ 3 ) ≤ C − 2(N − 1)a∫ ρ3ρ 2r 2N−3 dr≤ C − 2(N − 1)aρ 2N−32 (ρ 3 − ρ 2 ).Since v ′ is bounded on (ρ 2 , ρ 3 ) independently of y such that |x − y| ≤ε ′ , it follows that ρ 3 − ρ 2 is bounded from below. Therefore, we seethat E(v, ρ 3 ) < 0 if ε is small enough, which completes the proof. □Proof of Theorem 1.3.2. LetA 1 = {x > x 0 ; u has exactly one zero on (0, ∞)}.By definition of x 0 and Lemma 1.3.8, we have A 1 ≠ ∅. In addition, itfollows from Corollary 1.3.5 that A 1 is bounded. Letx 1 = sup A 1 ,and let u 1 be the corresponding solution. By using the argument ofthe proof of Theorem 1.3.1, one shows easily that u 1 has the desiredproperties. Finally, one defines by inductionandA n+1 = {x > x n ; u has exactly n + 1 zeroes on (0, ∞},x n+1 = sup A n+1 ,and one show that the corresponding solution u n has the desired properties.□Remark 1.3.9. Here are some comments on the cases when theassumptions of Theorems 1.3.1 and 1.3.2 are not satisfied.(i) If λ, µ > 0 and (N − 2)p ≥ N + 2, then there does not exist anysolution u ≢ 0, u ∈ C 1 ([0, ∞)) of (1.3.1)-(1.3.2). Indeed, supposefor simplicity λ = µ = 1 and assume by contradiction thatthere is a solution u. Arguing as in the proof of Lemma 1.3.7, oneshows easily that u and u ′ must have exponential decay. Next,arguing as in the proof of Proposition 1.3.3, one shows that∫ ∞0s N−1 |u| p+1 =∫ ∞0s N−1 u ′2 +∫ ∞0s N−1 u 2 ,


1.3. THE CASE OF R N , N ≥ 2 25and∫N ∞s N−1 |u| p+1 = N − 2 ∫ ∞s N−1 u ′2 + N ∫ ∞s N−1 u 2 .p + 1 02 02 0It follows that( N − 20 < −N ) ∫ ∞∫ ∞s N−1 |u| p+1 = − s N−1 u 2 < 0,2 p + 1 00which is absurd.(ii) If λ > 0 and µ < 0, then there does not exist any solutionu ≢ 0, u ∈ C 1 ([0, ∞)) of (1.3.1)-(1.3.2). Indeed, suppose forexample λ = 1 and µ = −1 and assume by contradiction thatthere is a solution u. Since E(u, r) is decreasing and u → 0,we see that u ′ is bounded. It then follows the equation thatu ′′ → 0 as r → ∞; and so, u ′ → 0 (see Step 1 of the proof ofTheorem 1.1.3). Therefore, E(u, r) → 0 as r → ∞, and sinceE(u, r) is nonincreasing, we must have in particular E(u, 0) ≥ 0.This is absurd, since E(u, 0) = −u(0) 2 /2 − u(0) p+1 /(p + 1) < 0.(iii) If λ = 0 and µ < 0, then there does not exist any solutionu ≢ 0, u ∈ C 1 ([0, ∞)) of (1.3.1)-(1.3.2). This follows from theargument of (ii) above.(iv) If λ = 0, µ > 0 and (N − 2)p = N + 2, then for any x > 0 thesolution u of (1.3.1) such that u(0) = x is given byu(r) = x(1 + µx 4N−2N(N − 2) r2) − N−22.In particular, u(r) ≈ r −(N−2) as r → ∞. Note that u ∈L p+1 (R N ). In addition, u ∈ H 1 (R N ) if and only if N ≥ 5.(v) If λ = 0, µ > 0 and (N − 2)p > N + 2, then for any x > 0 thesolution u of (1.3.1) such that u(0) = x satisfies (1.3.2). (Thisfollows from Proposition 1.3.3.) However, u has a slow decayas r → ∞ in the sense that u ∉ L p+1 (R N ). Indeed, if u werein L p+1 (R N ), then arguing as in the proof of Proposition 1.3.3(starting with (1.3.13)) we would get <strong>to</strong> a contradiction. Alternatively,this follows from the lower estimate (1.3.19).(vi) If λ = 0, µ > 0 and (N −2)p < N +2, then for any x > 0 the solutionu of (1.3.1) such that u(0) = x satisfies (1.3.2). However, uhas a slow decay as r → ∞ in the sense that u ∉ L p+1 (R N ). Thislast property follows from the argument of (v) above. The propertyu(r) → 0 as r → ∞ is more delicate, and one can proceed as


26 1. ODE METHODSfollows. We show by contradiction that E(u, r) → 0 as r → ∞.Otherwise, since E(u, r) is nonincreasing, E(u, r) ↓ l > 0 asr → ∞. Let 0 < r 1 < r 2 ≤ . . . be the zeroes of u (see Proposition1.3.3). We deduce that u ′ (r n ) 2 → 2l as n → ∞. Considerthe solution ω of the equation ω ′′ +µ|ω| p−1 ω = 0 with the initialvalues ω(0) = 0, ω ′ (0) = √ 2l. ω is anti-periodic with minimalperiod 2τ for some τ > 0. By a continuous dependenceargument, one shows that r n+1 − r n → τ as n → ∞ and that|u(r n + ·) − ω(·) sign u ′ (r n )| → 0 in C 1 ([0, τ]). This implies thatr n ≤ 2nτ for n large and that∫ rn+1r nu ′ (r) 2 dr ≥ 1 2∫ τ0ω ′ (r) 2 dr ≥ δ > 0,for some δ > 0 and n large. It follows that∫ rn+1We deduce thatu ′ (r) 2r nr∫ ∞0dr ≥δ δ≥r n+1 2τ(n + 1) .u ′ (r) 2= +∞,rwhich yields a contradiction (see (1.3.6)).(vii) If λ < 0, then there does not exist any solution u of (1.3.1) withu ∈ L 2 (R N ). This result is delicate. It is proved in Ka<strong>to</strong> [27]in a more general setting (see also Agmon [2]). We follow herethe less general, but much simpler argument of Lopes [34]. Weconsider the case µ < 0, which is slightly more delicate, and weassume for example λ = µ = −1. Setting ϕ(r) = r N−12 u(r), wesee thatSettingϕ ′′ + ϕ =H(r) = 1 2 ϕ′2 + 1 2 ϕ2 −= 1 2 ϕ′2 + 1 2 ϕ2[ 1 −(N − 1)(N − 3)4r 2 ϕ + r − (N−1)(p−1)2 |ϕ| p−1 ϕ.(N − 1)(N − 3)8r 2 ϕ 2 − 1 (N−1)(p−1)r− 2 |ϕ| p+1p + 1(N − 1)(N − 3)8r 2− |u|p−1p + 1],


28 1. ODE METHODS∞, and has exactly n zeroes on [0, ∞). This uniqueness propertywas established for n = 0 only, and its proof is very delicate (seeKwong [29] and McLeod [37]). It implies in particular uniqueness,up <strong>to</strong> translations, of positive solutions of the equation −△u = g(u)in R N such that u(x) → 0 as |x| → ∞. Indeed, it was shown byGidas, Ni and Nirenberg [22] that any such solution is sphericallysymmetric about some point of R N .1.4. The case of the ball of R N , N ≥ 2In this section, we suppose that Ω = B R = {x ∈ R N ; |x| < R}and we look for radial solutions of the equation{−△u = g(u) in Ω,u = 0 on ∂Ω.The equation for u(r) = u(|x|) becomes the ODEu ′′ + N − 1 u ′ + g(u) = 0, 0 < r < R,rwith the boundary condition u(R) = 0.It turns out that for the study of such problems, variational methodsor super- and subsolutions methods give in many situations moregeneral results. (See Chapters 2 and 3) However, we present belowsome simple consequences of the results of Section 1.3.For simplicity, we consider the model caseg(u) = −λu + µ|u| p−1 u,and so we look for solutions of the ODEu ′′ + N − 1 u ′ − λu + µ|u| p−1 u = 0, (1.4.1)rfor 0 < r < R such thatu(R) = 0. (1.4.2)We first apply Proposition 1.3.3, and we obtain the following conclusions.(i) Suppose λ = 0, µ > 0 and (N − 2)p ≥ N + 2. Then for everyx > 0, the solution u of (1.4.1) with the initial conditionsu ′ (0) = 0 and u(0) = x does not satisfy (1.4.2). This followsfrom property (i) of Proposition 1.3.3. Indeed, if we denote


1.4. THE CASE OF THE BALL OF R N , N ≥ 2 29by u the solution corresponding <strong>to</strong> x = 1 and µ = 1, thenu(r) = xu(x p−12 r).(ii) Suppose λ = 0, µ > 0 and (N − 2)p < N + 2. Then for everyinteger n ≥ 0, there exists a unique x n > 0 such that the solutionu of (1.4.1) with the initial conditions u ′ (0) = 0 and u(0) = x nsatisfies (1.4.2) and has exactly n zeroes on (0, R). This followsfrom property (ii) of Proposition 1.3.3 and the formula u(r) =u 0 u(u p−120 r).(iii) Suppose λ, µ > 0 and (N − 2)p < N + 2. Then for every sufficientlylarge integer n, there exists x n > 0 such that the solutionu of (1.4.1) with the initial conditions u ′ (0) = 0 andu(0) = x n satisfies (1.4.2) and has exactly n zeroes on (0, R).Indeed, by scaling, we may assume without loss of generalitythat λ = µ = 1. Next, given any x > 0, it follows easily from theproof of Corollary 1.3.5 that the corresponding solution of (1.4.1)oscillates indefinitely. Moreover, it follows easily by continuousdependence that for any integer k ≥ 1 the k th zero of u dependscontinuously on x. The result now follows from Corollary 1.3.5.For results in the other cases, see Section 2.7.


CHAPTER 2Variational methodsIn this chapter, we present the fundamental variational methodsthat are useful for the resolution of nonlinear PDEs of <strong>elliptic</strong> type.The reader is referred <strong>to</strong> Kavian [28] and Brezis and Nirenberg [14]for a more complete account of variational methods.2.1. Linear <strong>elliptic</strong> <strong>equations</strong>This section is devoted <strong>to</strong> the basic results of existence of solutionsof linear <strong>elliptic</strong> <strong>equations</strong> of the form{−△u + au + λu = f in Ω,(2.1.1)u = 0 in ∂Ω.Here, a ∈ L ∞ (Ω), λ is a real parameter and, throughout this section,Ω is any domain of R N (not necessarily bounded nor smooth, unlessotherwise specified). We will study a weak formulation of the problem(2.1.1). Given u ∈ H 1 (Ω), it follows that −△u+au+λu ∈ H −1 (Ω)(by Proposition 5.1.21), so that the equation (2.1.1) makes sense inH −1 (Ω) for any f ∈ H −1 (Ω). Taking the H −1 − H01 duality produc<strong>to</strong>f the equation (2.1.1) with any v ∈ H0 1 (Ω), we obtain (by formula(5.1.5))∫∫∇u · ∇v +ΩΩ∫auv + λ uv = (f, v) H −1 ,H 1. (2.1.2)0ΩMoreover, the boundary condition can be interpreted (in a weak sense)as u ∈ H 1 0 (Ω). This motivates the following definition.A weak solution u of (2.1.1) is a function u ∈ H 1 0 (Ω) that satisfies(2.1.2) for every v ∈ H 1 0 (Ω). In other words, a weak solution uof (2.1.1) is a function u ∈ H 1 0 (Ω) such that −△u + au + λu = f inH −1 (Ω). We will often call a weak solution simply a solution.31


32 2. VARIATIONAL METHODSThe simplest <strong>to</strong>ol for the existence and uniqueness of weak solutionsof the equation (2.1.1) is Lax-Milgram’s lemma.Lemma 2.1.1 (Lax-Milgram). Let H be a Hilbert space and considera bilinear functional b : H × H → R. If there exist C < ∞ andα > 0 such that{|b(u, v)| ≤ C‖u‖ ‖v‖, for all (u, v) ∈ H × H (continuity),|b(u, u)| ≥ α‖u‖ 2 , for all u ∈ H (coerciveness),then, for every f ∈ H ⋆ (the dual space of H), the equationhas a unique solution u ∈ H.b(u, v) = (f, v) H ⋆ ,H for all v ∈ H, (2.1.3)Proof. By the Riesz-Fréchet theorem, there exists ϕ ∈ H suchthat(f, v) H⋆ ,H = (ϕ, v) H ,for all v ∈ H. Furthermore, for any given u ∈ H, the applicationv ↦→ b(u, v) defines an element of H ⋆ ; and so, by the Riesz-Fréchettheorem, there exists an element of H, which we denote by Au, suchthatb(u, v) = (Au, v) H ,for all v ∈ H. It is clear that A : H → H is a linear opera<strong>to</strong>r suchthat {‖Au‖ H ≤ C‖u‖ H ,(Au, u) H ≥ α‖u‖ 2 H ,for all u ∈ H. We see that (2.1.3) is equivalent <strong>to</strong> Au = ϕ. Givenρ > 0, this last equation is equivalent <strong>to</strong>u = T u, (2.1.4)where T u = u + ρϕ − ρAu. It is clear that T : H → H is continuous.Moreover, T u − T v = (u − v) − ρA(u − v); and so,‖T u − T v‖ 2 H = ‖u − v‖ 2 H + ρ 2 ‖A(u − v)‖ 2 H − 2ρ(A(u − v), u − v) H≤ (1 + ρ 2 C 2 − 2ρα)‖u − v‖ 2 H.Choosing ρ > 0 small enough so that 1 + ρ 2 C 2 − 2ρα < 1, T is a strictcontraction. By Banach’s fixed point theorem, we deduce that T has aunique fixed point u ∈ H, which is the unique solution of (2.1.4). □


2.1. LINEAR ELLIPTIC EQUATIONS 33In order <strong>to</strong> study the equation (2.1.1), we make the followingdefinition.Given a ∈ L ∞ (Ω), we setλ 1 (−∆ + a; Ω) ={ ∫ }inf (|∇u| 2 + au 2 ); u ∈ H0 1 (Ω), ‖u‖ L 2 = 1 . (2.1.5)ΩWhen there is no risk of confusion, we denote λ 1 (−∆ + a; Ω) byλ 1 (−∆ + a) or simply λ 1 .Remark 2.1.2. Note that λ 1 (−∆ + a; Ω) ≥ −‖a‖ L ∞. Moreover,it follows from (2.1.5) that∫ ∫∫|∇u| 2 + a|u| 2 ≥ λ 1 (−△ + a) |u| 2 , (2.1.6)ΩΩΩfor all u ∈ H0 1 (Ω).When Ω is bounded, we will see in Section 3.2 that λ 1 (−∆+a; Ω)is the first eigenvalue of −∆ + a in H 1 0 (Ω). In the general case, thereis the following useful inequality.Lemma 2.1.3. Let a ∈ L ∞ (Ω) and let λ 1 = λ 1 (−∆ + a; Ω) bedefined by (2.1.5). Consider λ > −λ 1 and set{λ + λ}1α = min 1,> 0, (2.1.7)1 + λ 1 + ‖a‖ L ∞by Remark 2.1.2. It follows that∫ ∫ ∫|∇u| 2 + au 2 + λΩfor all u ∈ H 1 0 (Ω).ΩΩu 2 ≥ α‖u‖ 2 H1, (2.1.8)Proof. We denote by Φ(u) the left-hand side of (2.1.8). It followsfrom (2.1.6) that, given any 0 ≤ ε ≤ 1,∫∫Φ(u) 2 ≥ ε (|∇u| 2 + a|u| 2 ) + ((1 − ε)λ 1 + λ) |u| 2ΩΩ∫∫≥ ε |∇u| 2 + ((1 − ε)λ 1 + λ − ε‖a‖ L ∞) |u| 2ΩΩ∫∫= ε |∇u| 2 + (λ + λ 1 − ε(λ 1 + ‖a‖ L ∞)) |u| 2 .The result follows by letting ε = α.ΩΩ□


Our main result of this section is the following existence and uniqueness result.

Theorem 2.1.4. Let a ∈ L^∞(Ω) and let λ_1 = λ_1(−△ + a; Ω) be defined by (2.1.5). If λ > −λ_1, then for every f ∈ H^{−1}(Ω), the equation (2.1.1) has a unique weak solution. In addition,

    α‖u‖_{H^1} ≤ ‖f‖_{H^{−1}} ≤ (1 + ‖a‖_{L^∞} + |λ|)‖u‖_{H^1},    (2.1.9)

where α is defined by (2.1.7). In particular, the mapping f ↦ u is an isomorphism H^{−1}(Ω) → H^1_0(Ω).

Proof. Let

    b(u, v) = ∫_Ω ∇u · ∇v + ∫_Ω a u v + λ ∫_Ω u v,

for u, v ∈ H^1_0(Ω). It is clear that b is continuous, and it follows from (2.1.8) that b is coercive. Existence and uniqueness now follow by applying Lax-Milgram's lemma in H = H^1_0(Ω) with b defined above. Next, we deduce from (2.1.8) that

    α‖u‖^2_{H^1} ≤ b(u, u) = (f, u)_{H^{−1},H^1_0} ≤ ‖f‖_{H^{−1}}‖u‖_{H^1},

from which we obtain the left-hand side of (2.1.9). Finally,

    ‖f‖_{H^{−1}} ≤ ‖△u‖_{H^{−1}} + ‖au‖_{H^{−1}} + |λ| ‖u‖_{H^{−1}} ≤ (1 + ‖a‖_{L^∞} + |λ|)‖u‖_{H^1},

which proves the right-hand side of (2.1.9). □

Remark 2.1.5. If a = 0, then λ_1 = λ_1(−△; Ω) depends only on Ω. λ_1 may equal 0 or be positive. The property λ_1 > 0 is equivalent to Poincaré's inequality. In particular, if Ω has finite measure, then λ_1 > 0 by Theorem 5.4.19. On the other hand, one verifies easily that if Ω = R^N, then λ_1 = 0 (take for example u_ε(x) = ε^{N/2}ϕ(εx) with ϕ ∈ C_c^∞(R^N), ϕ ≢ 0, and let ε ↓ 0). If Ω = R^N \ K, where K is a compact subset of R^N, a similar argument (translate u_ε in such a way that supp u_ε ⊂ Ω) shows that λ_1 = 0 as well.

Remark 2.1.6. The assumption λ > −λ_1 implies the existence of a solution of (2.1.1) for all f ∈ H^{−1}(Ω). However, this condition may be necessary or not, depending on Ω. Let us consider several examples to illustrate this fact.

(i) Suppose Ω is bounded. Let (λ_n)_{n≥1} be the sequence of eigenvalues of −△ + a in H^1_0(Ω) (see Section 3.2) and let (ϕ_n)_{n≥1} be a corresponding orthonormal system of eigenvectors.


Given f ∈ H^{−1}(Ω), we may write f = ∑_{n≥1} α_n ϕ_n with ∑ λ_n^{−1} |α_n|^2 < ∞.

2.2. C^1 functionals

Let X be a Banach space. A functional F ∈ C(X, R) is (Fréchet) differentiable at some point x ∈ X if there exists L ∈ X^⋆ such that

    (F(x + y) − F(x) − (L, y)_{X^⋆,X}) / ‖y‖_X → 0 as ‖y‖_X → 0.
Such an L is then unique; it is called the derivative of F at x and is denoted by F′(x). F ∈ C^1(X, R) if F is differentiable at all x ∈ X and if the mapping x ↦ F′(x) is continuous X → X^⋆.

There is a weaker notion of derivative, the Gâteaux derivative. A functional F ∈ C(X, R) is Gâteaux-differentiable at some point x ∈ X if there exists L ∈ X^⋆ such that

    (F(x + ty) − F(x))/t → (L, y)_{X^⋆,X} as t ↓ 0,

for all y ∈ X. Such an L is then unique; it is called the Gâteaux derivative of F at x and is denoted by F′(x). It is clear that if a functional is Fréchet-differentiable at some x ∈ X, then it is also Gâteaux-differentiable and both derivatives agree. On the other hand, there exist functionals that are Gâteaux-differentiable at some point where they are not Fréchet-differentiable. However, it is well known that if a functional F ∈ C(X, R) is Gâteaux-differentiable at every point x ∈ X, and if its Gâteaux derivative F′(x) is continuous X → X^⋆, then F ∈ C^1(X, R). In other words, in order to show that F is C^1, we need only show that F is Gâteaux-differentiable at every point x ∈ X, and that F′(x) is continuous X → X^⋆.

We now give several examples of functionals arising in PDEs which are C^1 in appropriate Banach spaces. In what follows, Ω is an arbitrary domain of R^N.

Consider a function g ∈ C(R, R), and assume that there exist 1 ≤ r < ∞ and a constant C such that

    |g(u)| ≤ C|u|^r,    (2.2.1)

for all u ∈ R. Setting

    G(u) = ∫_0^u g(s) ds,    (2.2.2)

it follows that |G(u)| ≤ (C/(r+1))|u|^{r+1}. Therefore, we may define

    J(u) = ∫_Ω G(u(x)) dx,    (2.2.3)

for all u ∈ L^{r+1}(Ω). Our first result is the following.

Proposition 2.2.1. Assume g ∈ C(R, R) satisfies (2.2.1) for some r ∈ [1, ∞), let G be defined by (2.2.2) and let J be defined
by (2.2.3). It follows that the mapping u ↦ g(u) is continuous from L^{r+1}(Ω) to L^{(r+1)/r}(Ω). Moreover, J ∈ C^1(L^{r+1}(Ω), R) and

    J′(u) = g(u),    (2.2.4)

for all u ∈ L^{r+1}(Ω).

Proof. It is clear that ‖g(u)‖_{L^{(r+1)/r}} ≤ C‖u‖^r_{L^{r+1}}, thus g maps L^{r+1}(Ω) to L^{(r+1)/r}(Ω). We now show that g is continuous. Assume by contradiction that u_n → u in L^{r+1}(Ω) as n → ∞ and that ‖g(u_n) − g(u)‖_{L^{(r+1)/r}} ≥ ε > 0. By possibly extracting a subsequence, we may assume that u_n → u a.e.; and so, g(u_n) → g(u) a.e. Furthermore, we may also assume that there exists f ∈ L^{r+1}(Ω) such that |u_n| ≤ f a.e. Applying (2.2.1) and the dominated convergence theorem, we deduce that g(u_n) → g(u) in L^{(r+1)/r}(Ω). Contradiction.

Consider now u, v ∈ L^{r+1}(Ω). Since g = G′, we see that

    (G(u + tv) − G(u))/t − g(u)v → 0 as t ↓ 0,

a.e. Note that by (2.2.1), |g(u)v| ≤ C|u|^r|v| ∈ L^1(Ω) and, for 0 < t < 1,

    |G(u + tv) − G(u)|/t ≤ (1/t)|∫_u^{u+tv} g(s) ds| ≤ C|v|(|u|^r + t^r|v|^r) ≤ C|v|(|u|^r + |v|^r) ∈ L^1(Ω).

By dominated convergence, we deduce that

    ∫_Ω |(G(u + tv) − G(u))/t − g(u)v| → 0 as t ↓ 0.

This means that J is Gâteaux-differentiable at u and that J′(u) = g(u). Since g is continuous L^{r+1}(Ω) → L^{(r+1)/r}(Ω), the result follows. □

Consider again a function g ∈ C(R, R), and assume now that there exist 1 ≤ r < ∞ and a constant C such that

    |g(u)| ≤ C(|u| + |u|^r),    (2.2.5)

for all u ∈ R. (Note that in particular g(0) = 0.) Consider G defined by (2.2.2) and, given h_1 ∈ H^{−1}(Ω) and h_2 ∈ L^{(r+1)/r}(Ω), let

    J(u) = (1/2)∫_Ω |∇u|^2 − ∫_Ω G(u)
           − (h_1, u)_{H^{−1},H^1_0} − (h_2, u)_{L^{(r+1)/r},L^{r+1}},    (2.2.6)

for u ∈ H^1_0(Ω) ∩ L^{r+1}(Ω). We note that G(u) ∈ L^1(Ω), so J is well defined. Let

    X = H^1_0(Ω) ∩ L^{r+1}(Ω),    (2.2.7)

and set

    ‖u‖_X = ‖u‖_{H^1} + ‖u‖_{L^{r+1}},    (2.2.8)

for u ∈ X. It follows immediately that X is a Banach space with the norm ‖·‖_X. One can show that X^⋆ = H^{−1}(Ω) + L^{(r+1)/r}(Ω), where the Banach space H^{−1}(Ω) + L^{(r+1)/r}(Ω) is defined appropriately (see Bergh and Löfström [10], Lemma 2.3.1 and Theorem 2.7.1). We will not use that property, whose proof is rather delicate, but we will use the simpler properties H^{−1}(Ω) ↪ X^⋆ and L^{(r+1)/r}(Ω) ↪ X^⋆. This is immediate since, given f ∈ H^{−1}(Ω), the mapping u ↦ (f, u)_{H^{−1},H^1_0} clearly defines an element of X^⋆. Furthermore, this defines an injection, because if (f, u)_{H^{−1},H^1_0} = 0 for all u ∈ X, then in particular (f, u)_{H^{−1},H^1_0} = 0 for all u ∈ C_c^∞(Ω). By density of C_c^∞(Ω) in H^1_0(Ω), we deduce f = 0. A similar argument shows that L^{(r+1)/r}(Ω) ↪ X^⋆.

Corollary 2.2.2. Assume that g ∈ C(R, R) satisfies (2.2.5) and let h_1 ∈ H^{−1}(Ω) and h_2 ∈ L^{(r+1)/r}(Ω). Let J be defined by (2.2.6) and let X be defined by (2.2.7)-(2.2.8). Then g is continuous X → X^⋆, J ∈ C^1(X, R) and

    J′(u) = −△u − g(u) − h_1 − h_2,    (2.2.9)

for all u ∈ X.

Proof. We first show that g is continuous X → X^⋆, and for that we split g in two parts. Namely, we set

    g(u) = g_1(u) + g_2(u),

where g_1(u) = g(u) for |u| ≤ 1 and g_1(u) = 0 for |u| ≥ 2. It follows immediately that |g_1(u)| ≤ C|u| and that |g_2(u)| ≤ C|u|^r, by possibly modifying the value of C. By Proposition 2.2.1, we see that the mapping u ↦ g_1(u) is continuous L^2(Ω) → L^2(Ω), hence H^1_0(Ω) → H^{−1}(Ω), hence X → X^⋆. As well, the mapping u ↦ g_2(u)
is continuous L^{r+1}(Ω) → L^{(r+1)/r}(Ω), hence X → X^⋆. Therefore, g = g_1 + g_2 is continuous X → X^⋆.

We now define

    ˜J(u) = (1/2)∫_Ω |∇u|^2,

so that ˜J ∈ C^1(H^1_0(Ω), R) ⊂ C^1(X, R) and ˜J′(u) = −△u (see Corollary 5.1.22). Next, let

    J_0(u) = (h_1, u)_{H^{−1},H^1_0} + (h_2, u)_{L^{(r+1)/r},L^{r+1}} := J_0^1(u) + J_0^2(u).

One verifies easily that J_0^1 ∈ C^1(H^1_0(Ω), R) and that (J_0^1)′(u) = h_1. Also, J_0^2 ∈ C^1(L^{r+1}(Ω), R) and (J_0^2)′(u) = h_2. Thus J_0 ∈ C^1(X, R) and J_0′(u) = h_1 + h_2. Finally, let

    J_l(u) = ∫_Ω G_l(u), for l = 1, 2, where G_l(u) = ∫_0^u g_l(s) ds.

The result now follows by applying Proposition 2.2.1 to the functionals J_l and writing J = ˜J − J_0 − J_1 − J_2. □

Corollary 2.2.3. Assume that g ∈ C(R, R) satisfies (2.2.5), with the additional assumption (N − 2)r ≤ N + 2, and let h ∈ H^{−1}(Ω). Let J be defined by (2.2.6) (with h_1 = h and h_2 = 0). Then g is continuous H^1_0(Ω) → H^{−1}(Ω), J ∈ C^1(H^1_0(Ω), R) and (2.2.9) holds for all u ∈ H^1_0(Ω).

Proof. Since H^1_0(Ω) ∩ L^{r+1}(Ω) = H^1_0(Ω) by Sobolev's embedding theorem, the result follows from Corollary 2.2.2. □

2.3. Global minimization

We begin by recalling some simple properties. Let X be a Banach space and consider a functional F ∈ C^1(X, R). A critical point of F is an element x ∈ X such that F′(x) = 0. If F achieves its minimum, i.e. if there exists x_0 ∈ X such that

    F(x_0) = inf_{x∈X} F(x),

then x_0 is a critical point of F. Indeed, if F′(x_0) ≠ 0, then there exists y ∈ X such that (F′(x_0), y)_{X^⋆,X} < 0. It follows from the definition of
the derivative that F(x_0 + ty) ≤ F(x_0) + (t/2)(F′(x_0), y)_{X^⋆,X} < F(x_0) for t > 0 small enough, which is absurd.

In this section, we will construct solutions of the equation

    −△u = g(u) + h in Ω,   u = 0 in ∂Ω,    (2.3.1)

by minimizing a functional J such that J′(u) = −△u − g(u) − h in an appropriate Banach space. Of course, this will require assumptions on g and h. We begin with the following result.

Theorem 2.3.1. Assume that g ∈ C(R, R) satisfies (2.2.5), with the additional assumption (N − 2)r ≤ N + 2. Let λ_1 = λ_1(−△) be defined by (2.1.5), and suppose further that

    G(u) ≤ −(λ/2)u^2,    (2.3.2)

for all u ∈ R, with λ > −λ_1. (Here, G is defined by (2.2.2).) Finally, let h ∈ H^{−1}(Ω) and let J be defined by (2.2.6) with h_1 = h and h_2 = 0 (so that J ∈ C^1(H^1_0(Ω), R) by Corollary 2.2.3). Then there exists u ∈ H^1_0(Ω) such that

    J(u) = inf_{v ∈ H^1_0(Ω)} J(v).

In particular, u is a weak solution of (2.3.1) in the sense that u ∈ H^1_0(Ω) and −△u = g(u) + h in H^{−1}(Ω).

For the proof of Theorem 2.3.1, we will use the following lemma.

Lemma 2.3.2. Let λ > −λ_1, where λ_1 = λ_1(−△) is defined by (2.1.5). Let h ∈ H^{−1}(Ω) and set

    Ψ(u) = (1/2)∫_Ω |∇u|^2 + (λ/2)∫_Ω u^2 + (h, u)_{H^{−1},H^1_0},

for all u ∈ H^1_0(Ω). If (u_n)_{n≥0} is a bounded sequence of H^1_0(Ω), then there exist a subsequence (u_{n_k})_{k≥0} and u ∈ H^1_0(Ω) such that

    Ψ(u) ≤ lim inf_{k→∞} Ψ(u_{n_k}),    (2.3.3)

and u_{n_k} → u a.e. in Ω as k → ∞.

Proof. Since (u_n)_{n≥0} is a bounded sequence of H^1_0(Ω), there exist u ∈ H^1_0(Ω) and a subsequence (u_{n_k})_{k≥0} such that u_{n_k} → u a.e.
in Ω as k → ∞ and u_{n_k} → u in L^2(Ω ∩ {|x| < R}) for all R > 0 (see Remark 5.5.6). For proving (2.3.3), we proceed in two steps.

Step 1. For every f ∈ H^{−1}(Ω),

    (f, u_{n_k})_{H^{−1},H^1_0} → (f, u)_{H^{−1},H^1_0} as k → ∞.

Indeed,

    (f, u_{n_k} − u)_{H^{−1},H^1_0} → 0 as k → ∞,

when f ∈ C_c(Ω), by local L^2 convergence. The result follows by density of C_c(Ω) in H^{−1}(Ω) (see Proposition 5.1.18).

Step 2. Conclusion. By Step 1, we need only show that if

    Φ(u) = ∫_Ω |∇u|^2 + λ ∫_Ω u^2,

then Φ(u) ≤ lim inf Φ(u_{n_k}) as k → ∞. Indeed, we have Φ(u) ≥ α‖u‖^2_{H^1} by (2.1.8). Since clearly Φ(u) ≤ max{1, λ}‖u‖^2_{H^1}, it follows that

    |||v||| = Φ(v)^{1/2}

defines an equivalent norm on H^1_0(Ω). We equip H^{−1}(Ω) with the corresponding dual norm |||·|||_⋆. (Note that this dual norm is equivalent to the original one and that, by definition, the duality product (·, ·)_{H^{−1},H^1_0} is unchanged.) We have

    |||u||| = sup{(f, u)_{H^{−1},H^1_0}; f ∈ H^{−1}(Ω), |||f|||_⋆ = 1}.

By Step 1,

    (f, u)_{H^{−1},H^1_0} = lim_{k→∞} (f, u_{n_k})_{H^{−1},H^1_0},

for every f ∈ H^{−1}(Ω). Since (f, u_{n_k})_{H^{−1},H^1_0} ≤ |||f|||_⋆ |||u_{n_k}|||, we deduce that

    (f, u)_{H^{−1},H^1_0} ≤ |||f|||_⋆ lim inf_{k→∞} |||u_{n_k}|||,

from which the result follows. □

Proof of Theorem 2.3.1. We first note that, by (2.3.2),

    J(u) ≥ (1/2)∫_Ω |∇u|^2 + (λ/2)∫_Ω u^2 − (h, u)_{H^{−1},H^1_0}.

By (2.1.8), this implies that

    J(u) ≥ (α/2)‖u‖^2_{H^1} − ‖h‖_{H^{−1}}‖u‖_{H^1} ≥ (α/4)‖u‖^2_{H^1} − (1/α)‖h‖^2_{H^{−1}},    (2.3.4)
for all u ∈ H^1_0(Ω), where α is defined by (2.1.7). It follows from (2.3.4) that J is bounded from below. Let

    m = inf_{v ∈ H^1_0(Ω)} J(v) > −∞,

and let (u_n)_{n≥0} ⊂ H^1_0(Ω) be a minimizing sequence. It follows in particular from (2.3.4) that (u_n)_{n≥0} is bounded in H^1_0(Ω). We now write J(u) = J_1(u) + J_2(u), where

    J_1(u) = (1/2)∫_Ω |∇u|^2 + (λ/2)∫_Ω u^2 − (h, u)_{H^{−1},H^1_0},

and

    J_2(u) = ∫_Ω (−G(u) − (λ/2)u^2).

Applying Lemma 2.3.2, we find that there exist u ∈ H^1_0(Ω) and a subsequence (u_{n_k})_{k≥0} such that u_{n_k} → u a.e. in Ω as k → ∞ and

    J_1(u) ≤ lim inf_{k→∞} J_1(u_{n_k}).

Since −G(t) − (λ/2)t^2 ≥ 0 by (2.3.2), it follows from Fatou's lemma that

    J_2(u) ≤ lim inf_{k→∞} J_2(u_{n_k});

and so, J(u) ≤ lim inf J(u_{n_k}) = m as k → ∞. Therefore, J(u) = m, which proves the first part of the result. Finally, we have J′(u) = 0, i.e. −△u = g(u) + h by Corollary 2.2.3. □

Remark 2.3.3. If Ω is bounded, then one can weaken the assumption (2.3.2). One may assume instead that (2.3.2) holds for |u| large enough. Indeed, we have then

    G(u) ≤ C − (λ/2)u^2,

for all u ∈ R and some constant C. The construction of the minimizing sequence is made as above, since one obtains instead of (2.3.4)

    J(u) ≥ (α/2)‖u‖^2_{H^1} − ‖h‖_{H^{−1}}‖u‖_{H^1} − C|Ω| ≥ (α/4)‖u‖^2_{H^1} − (1/α)‖h‖^2_{H^{−1}} − C|Ω|.

For the passage to the limit, we have G(u) ≤ µu^2 for µ large enough (since G(u) = O(u^2) near 0). Therefore, one can use the compact
embedding H^1_0(Ω) ↪ L^2(Ω) (Theorem 5.5.5) to pass to the limit in the negative part of J.

Remark 2.3.4. We give below some applications of Theorem 2.3.1 and Remark 2.3.3.

(i) If Ω is a bounded subset, then Theorem 2.3.1 (together with Remark 2.3.3 above) applies for example to the equation

    −△u + λu + a|u|^{p−1}u − b|u|^{q−1}u = f,

where f ∈ H^{−1}(Ω) (for example, f may be a constant), λ ∈ R, a > 0, b ∈ R and 1 < q < p ≤ (N + 2)/(N − 2).

(ii) When Ω is not bounded, Theorem 2.3.1 applies to the same equation with the additional restrictions λ > 0 and

    b < λ^{(p−q)/(p−1)} a^{(q−1)/(p−1)} (p − 1)(q + 1)(p − q)^{−(p−q)/(p−1)} ((q − 1)(p + 1))^{−(q−1)/(p−1)}.

In the examples of Remark 2.3.4 (i) and (ii), one can indeed remove the assumption p ≤ (N + 2)/(N − 2). More generally, one can remove this assumption in Theorem 2.3.1, provided one assumes a stronger upper bound on G. This is the object of the following result.

Theorem 2.3.5. Assume that g ∈ C(R, R) satisfies (2.2.5) for some r ≥ 1. Let λ_1 = λ_1(−△) be defined by (2.1.5), and suppose that G satisfies (2.3.2) for all u ∈ R, with λ > −λ_1. Suppose further that

    G(u) ≤ −a|u|^{r+1},    (2.3.5)

for all |u| ≥ M, where a > 0. Finally, let h_1 ∈ H^{−1}(Ω) and h_2 ∈ L^{(r+1)/r}(Ω) and let J be defined by (2.2.6) (so that J ∈ C^1(H^1_0(Ω) ∩ L^{r+1}(Ω), R) by Corollary 2.2.2). Then there exists u ∈ H^1_0(Ω) ∩ L^{r+1}(Ω) such that

    J(u) = inf_{v ∈ H^1_0(Ω) ∩ L^{r+1}(Ω)} J(v).

In particular, u is a weak solution of (2.3.1) with h = h_1 + h_2 in the sense that u ∈ H^1_0(Ω) ∩ L^{r+1}(Ω) and −△u = g(u) + h_1 + h_2 in (H^1_0(Ω) ∩ L^{r+1}(Ω))^⋆.

Proof. The proof is parallel to the proof of Theorem 2.3.1. We first observe that by (2.3.2) and (2.3.5) we have

    G(u) ≤ −(λ/2)u^2 − a|u|^{r+1},    (2.3.6)
for all u ∈ R, by possibly modifying a > 0 and λ > −λ_1. It follows from (2.3.6) that

    J(u) ≥ (1/2)∫_Ω |∇u|^2 + (λ/2)∫_Ω u^2 + a∫_Ω |u|^{r+1} − (h_1, u)_{H^{−1},H^1_0} − (h_2, u)_{L^{(r+1)/r},L^{r+1}}.

By (2.1.8), this implies that

    J(u) ≥ (α/2)‖u‖^2_{H^1} + a‖u‖^{r+1}_{L^{r+1}} − ‖h_1‖_{H^{−1}}‖u‖_{H^1} − ‖h_2‖_{L^{(r+1)/r}}‖u‖_{L^{r+1}},

so that

    J(u) ≥ (α/4)‖u‖^2_{H^1} + (a/2)‖u‖^{r+1}_{L^{r+1}} − (1/α)‖h_1‖^2_{H^{−1}} − (2^{1/r} r / (a^{1/r}(r + 1)^{(r+1)/r})) ‖h_2‖^{(r+1)/r}_{L^{(r+1)/r}},    (2.3.7)

for all u ∈ H^1_0(Ω) ∩ L^{r+1}(Ω), where α is defined by (2.1.7). It follows from (2.3.7) that J is bounded from below on H^1_0(Ω) ∩ L^{r+1}(Ω). Let

    m = inf_{v ∈ H^1_0 ∩ L^{r+1}} J(v) > −∞,

and let (u_n)_{n≥0} ⊂ H^1_0(Ω) ∩ L^{r+1}(Ω) be a minimizing sequence. It follows in particular from (2.3.7) that (u_n)_{n≥0} is bounded in H^1_0(Ω) ∩ L^{r+1}(Ω). We now write

    J(u) = J_1(u) + J_2(u) + J_3(u),

where

    J_1(u) = (1/2)∫_Ω |∇u|^2 + (λ/2)∫_Ω u^2 − (h_1, u)_{H^{−1},H^1_0},
    J_2(u) = ∫_Ω (−G(u) − (λ/2)u^2),

and

    J_3(u) = −(h_2, u)_{L^{(r+1)/r},L^{r+1}}.

Applying Lemma 2.3.2, we find that there exist u ∈ H^1_0(Ω) and a subsequence (u_{n_k})_{k≥0} such that u_{n_k} → u a.e. in Ω as k → ∞ and

    J_1(u) ≤ lim inf_{k→∞} J_1(u_{n_k}).
Since −G(t) − λt^2/2 ≥ 0 by (2.3.2), it follows from Fatou's lemma that

    J_2(u) ≤ lim inf_{k→∞} J_2(u_{n_k}).

Applying Corollary 5.5.2 and Lemma 5.5.3, we may also assume, after possibly extracting a subsequence, that

    (h_2, u_{n_k})_{L^{(r+1)/r},L^{r+1}} → (h_2, u)_{L^{(r+1)/r},L^{r+1}} as k → ∞;

and so, J(u) ≤ lim inf J(u_{n_k}) = m as k → ∞. Therefore, J(u) = m, which proves the first part of the result. Finally, we have J′(u) = 0, i.e. −△u = g(u) + h by Corollary 2.2.2. □

Remark 2.3.6. Here are some comments on Theorem 2.3.5.

(i) If Ω is bounded, then one does not need the assumption (2.3.2). (See Remark 2.3.3 for the necessary modifications to the proof.)
(ii) One may apply Theorem 2.3.5 (along with (i) above) to the examples of Remark 2.3.4, but without the restriction p ≤ (N + 2)/(N − 2).

Let us observe that the equation (2.3.1) may have one or several solutions, depending on g and h. For example, if h = 0 and g(0) = 0, then u = 0 is a trivial solution. It may happen that there are more solutions. In that case, we speak of nontrivial solutions. We give below two examples that illustrate the two different situations.

Theorem 2.3.7. Let g, λ and h be as in Theorem 2.3.1, and let u be the solution of (2.3.1) given by Theorem 2.3.1. If the mapping u ↦ g(u) + λu is nonincreasing, then u is the unique solution in H^1_0(Ω) of (2.3.1).

Proof. We write J(u) = J_0(u) + J_1(u) + J_2(u) with

    J_0(u) = (1/2)∫_Ω |∇u|^2 + (λ/2)∫_Ω u^2,
    J_1(u) = ∫_Ω (−G(u) − (λ/2)u^2),

and

    J_2(u) = −(h, u)_{H^{−1},H^1_0}.

We observe that, since λ > −λ_1, J_0 is strictly convex. (Indeed, if a(u, v) is a bilinear functional such that a(u, u) ≥ 0, then the mapping u ↦ a(u, u) is convex; and if a(u, u) > 0 for all u ≠ 0, then it is strictly
convex.) Furthermore, J_1 is convex because the mapping u ↦ −g(u) − λu is nondecreasing. Finally, J_2 is linear, thus convex. Therefore, J is strictly convex. Assume now u and v are two solutions, so that J′(u) = J′(v) = 0. It follows that (J′(u) − J′(v), u − v)_{H^{−1},H^1_0} = 0, and since J is strictly convex, this implies u = v. □

Remark 2.3.8. One shows similarly that, under the assumptions of Theorem 2.3.5, and if the mapping u ↦ g(u) + λu is nonincreasing, then the solution of (2.3.1) is unique in H^1_0(Ω) ∩ L^{r+1}(Ω). Note also that if λ = −λ_1, then the same conclusion holds, provided the mapping u ↦ g(u) + λu is decreasing. Indeed, in this case, J_0 is not strictly convex (but still convex), but J_1 is strictly convex.

The above results apply for example to the equation −△u + λu + a|u|^{p−1}u = h_1 + h_2, with λ > −λ_1, a > 0, p > 1, h_1 ∈ H^{−1}(Ω) and h_2 ∈ H^{−1}(Ω) ∩ L^{(p+1)/p}(Ω). In the case λ < −λ_1, the situation is quite different, as the following result shows.

Theorem 2.3.9. Let Ω be a bounded domain of R^N, and assume λ < −λ_1 where λ_1 = λ_1(−△) is defined by (2.1.5). Let a > 0 and p > 1. Then the equation

    −△u + λu + a|u|^{p−1}u = 0    (2.3.8)

has at least three distinct solutions 0, u and −u, where u ∈ H^1_0(Ω) ∩ L^{p+1}(Ω), u ≠ 0 and u ≥ 0.

Proof. It is clear that 0 is a solution. On the other hand, there is a solution that minimizes J(u) on H^1_0(Ω) ∩ L^{p+1}(Ω) (see Remark 2.3.6), where

    J(u) = (1/2)∫_Ω |∇u|^2 + (λ/2)∫_Ω u^2 + (a/(p + 1))∫_Ω |u|^{p+1}.

We first claim that we can find a solution that minimizes J and that is nonnegative. Indeed, remember that the minimizing solution is constructed by considering a minimizing sequence (u_n)_{n≥0}. Setting v_n = |u_n|, we have J(v_n) = J(u_n), so that (v_n)_{n≥0} is also a minimizing sequence, which produces a nonnegative solution. Since −u is a solution whenever u is a solution, it remains to show that the infimum of J is negative, so that this solution is not identically 0. Since λ < −λ_1, there exists ϕ ∈ H^1_0(Ω) such that ‖ϕ‖_{L^2} = 1 and ‖∇ϕ‖^2_{L^2} ∈ (λ_1, −λ). By density, there exists ϕ ∈ C_c^∞(Ω) such that
‖ϕ‖_{L^2} = 1 and ‖∇ϕ‖^2_{L^2} ∈ (λ_1, −λ). Set µ = ‖∇ϕ‖^2_{L^2}. Given t > 0, we have

    J(tϕ) = (t^2/2)(µ + λ) + (a t^{p+1}/(p + 1))∫_Ω |ϕ|^{p+1}.

Since µ + λ < 0, we have J(tϕ) < 0 for t small enough, thus inf J(u) < 0, which completes the proof. □

On the other hand, if a > 0 and λ ≥ −λ_1, the only solution of (2.3.8) is u = 0. Indeed, let u be a solution, and multiply the equation (2.3.8) by u. It follows that

    ∫_Ω |∇u|^2 + λ∫_Ω u^2 + a∫_Ω |u|^{p+1} = 0,

thus u = 0.

2.4. Constrained minimization

Consider the equation

    −△u + λu = a|u|^{p−1}u in Ω,   u = 0 in ∂Ω,    (2.4.1)

with λ > −λ_1 where λ_1 = λ_1(−△) is defined by (2.1.5), a > 0 and 1 < p < (N + 2)/(N − 2). A solution of (2.4.1) is a critical point of the functional

    E(u) = (1/2)∫_Ω |∇u|^2 + (λ/2)∫_Ω u^2 − (a/(p + 1))∫_Ω |u|^{p+1},    (2.4.2)

for u ∈ H^1_0(Ω). It is clear that u = 0 is a trivial solution. If we look for a nontrivial solution, we cannot apply the global minimization technique of the preceding section, because E is not bounded from below. (To see this, take u = tϕ with ϕ ∈ C_c^∞(Ω), ϕ ≠ 0, and let t → ∞.)

In this section, we will solve the equation (2.4.1) by minimizing

    (1/2)∫_Ω |∇u|^2 + (λ/2)∫_Ω u^2

on the set

    {u ∈ H^1_0(Ω) ∩ L^{p+1}(Ω); ∫_Ω |u|^{p+1} = 1},
i.e. we will solve a minimization problem with constraint. For that purpose, we need the notion of Lagrange multiplier.

Theorem 2.4.1 (Lagrange multipliers). Let X be a Banach space, let F, J ∈ C^1(X, R) and set

    M = {v ∈ X; F(v) = 0}.

Let S ⊂ M, S ≠ ∅, and suppose u_0 ∈ S satisfies

    J(u_0) = inf_{v∈S} J(v).

If F′(u_0) ≠ 0 and if M ∩ {x ∈ X; ‖x − u_0‖_X ≤ η} ⊂ S for some η > 0, then there exists a Lagrange multiplier λ ∈ R such that J′(u_0) = λF′(u_0).

Proof. Let f = J′(u_0) and g = F′(u_0). If f = 0, then λ = 0 is a Lagrange multiplier. Therefore, we may assume f ≠ 0. Note that by assumption, we also have g ≠ 0. We now proceed in two steps.

Step 1. g^{−1}(0) ⊂ f^{−1}(0). Set X_0 = g^{−1}(0). Since g ≠ 0, there exists w ∈ X such that g(w) = 1. Consider now the mapping φ : X_0 × R → R defined by φ(v, t) = F(u_0 + v + tw). We have φ(0, 0) = 0, ∂_tφ(0, 0) = g(w) = 1, ∂_vφ(0, 0) = g|_{X_0} = 0. By the implicit function theorem, there exist ε > 0 and a function t ∈ C^1(B_ε, R) such that t(0) = 0, t′(0) = 0, and φ(v, t(v)) = 0 for all v ∈ B_ε. Here, B_ε = {v ∈ X_0; ‖v‖_X < ε}. Therefore, F(u_0 + v + t(v)w) = 0 for all v ∈ B_ε, hence u_0 + v + t(v)w ∈ M for all v ∈ B_ε. By taking ε sufficiently small, we have u_0 + v + t(v)w ∈ S for all v ∈ B_ε, thus in particular

    J(u_0 + v + t(v)w) ≥ J(u_0),    (2.4.3)

for all v ∈ B_ε. Let now v ∈ g^{−1}(0), i.e. (F′(u_0), v)_{X^⋆,X} = 0. We need to show that v ∈ f^{−1}(0), i.e. (J′(u_0), v)_{X^⋆,X} = 0. Let

    ϕ(s) = J(u_0 + sv + t(sv)w) − J(u_0),

for |s| < ε‖v‖^{−1}_X. We have ϕ(0) = 0, and it follows from (2.4.3) that ϕ(s) ≥ 0. Therefore, ϕ′(0) = 0. Since

    ϕ′(0) = (J′(u_0), v + (t′(0), v)_{X^⋆,X} w)_{X^⋆,X} = (J′(u_0), v)_{X^⋆,X},

the result follows.

Step 2. Conclusion. Since g ≠ 0, there exists w ∈ X such that g(w) = 1. Set λ = f(w). Given any u ∈ X, we have

    g(u − g(u)w) = g(u) − g(u)g(w) = 0.


Therefore, u − g(u)w ∈ g^{−1}(0), so that by Step 1, u − g(u)w ∈ f^{−1}(0). It follows that f(u − g(u)w) = 0, i.e. f(u) = g(u)f(w) = λg(u). This means that f = λg, i.e. J′(u_0) = λF′(u_0). □

We now give an application of Theorem 2.4.1 to the resolution of the equation (2.4.1).

Theorem 2.4.2. Let Ω be a bounded domain of R^N. Suppose λ > −λ_1 where λ_1 = λ_1(−△) is defined by (2.1.5), a > 0 and 1 < p < (N + 2)/(N − 2). It follows that there exists a solution u ∈ H^1_0(Ω), u ≥ 0, u ≠ 0 of the equation (2.4.1).


Since v ≠ 0, we have J(v) > 0, and it follows that µ > 0. Finally, set u = (µ/a)^{1/(p−1)} v. It follows from (2.4.5) that u satisfies (2.4.1). This completes the proof. □

Remark 2.4.3. Here are some comments on Theorem 2.4.2.

(i) If, instead of the equation (2.4.1), we consider the equation −△u + λu = a|u|^{p−1}u + h, with h ≠ 0, then the existence problem is considerably more difficult, and only partial results are known. See Struwe [43], Bahri and Berestycki [6], Bahri and Lions [7], Bahri [5].
(ii) If we replace the nonlinearity |u|^{p−1}u by a nonhomogeneous one g(u) with the same behavior (for example, g(u) = |u|^{p−1}u + |u|^{q−1}u), then the method we used to prove existence does not apply, because of the scaling used at the very end of the proof (which uses the homogeneity). In this case, what we obtain is the existence of u ∈ H^1_0(Ω) and µ > 0 such that −△u + λu = µg(u). In order to solve equations of the type (2.4.1) with nonhomogeneous nonlinearities, we will apply the mountain pass theorem in the next section.
(iii) The assumption p < (N + 2)/(N − 2) may be essential or not, depending on the domain Ω. See Section 2.7.
(iv) The assumption λ > −λ_1 is not essential for the existence of a nontrivial solution u ∈ H^1_0(Ω) of the equation (2.4.1). (See for example Kavian [28], Example 8.7 of Chapter 3.) However, it is necessary for the existence of a nontrivial solution u ≥ 0. Indeed, suppose u ≥ 0 is a solution of (2.4.1). Multiplying the equation by ϕ_1, a positive eigenvector corresponding to the first eigenvalue of −△ in H^1_0(Ω) (see Section 3.2 below), we obtain

    (λ_1 + λ)∫_Ω uϕ_1 = a∫_Ω |u|^{p−1}uϕ_1.

Since ϕ_1 > 0, the right-hand side is positive, so the left-hand side must be positive, which implies λ > −λ_1. Note that even when λ ≤ −λ_1, we can apply the minimization technique of the proof of Theorem 2.4.2. It is clear that the minimizing sequence (v_n)_{n≥0} is bounded in H^1_0(Ω) (note that it is a priori bounded in L^2(Ω) since v_n ∈ S). Therefore, we obtain a solution v ≥ 0, v ≠ 0 of the equation (2.4.5). However, multiplying the equation (2.4.5) by ϕ_1, we see that µ = 0 if λ = −λ_1 and µ < 0
if λ < −λ_1. Therefore, the method applies, but it produces a solution of the equation (2.4.1) with a ≤ 0.

Solutions of minimal energy E (defined by (2.4.2)) may be important for some applications, because they tend to be "more stable", in some appropriate sense. However, we saw that the energy E is not bounded from below, so a solution cannot minimize the energy on the whole space H^1_0(Ω). There is still an appropriate notion of solution of minimal energy, the ground state. A ground state is a nontrivial solution of (2.4.1) which minimizes E among all nontrivial solutions of (2.4.1).

We will show below the existence of a ground state. We can use two arguments for that purpose:

– We can minimize E(u) on the set

    S = {u ∈ H^1_0(Ω); u ≠ 0 and ∫_Ω |∇u|^2 + λ∫_Ω u^2 = a∫_Ω |u|^{p+1}},

in order to construct in one step a solution of (2.4.1) which is a ground state. In addition, we obtain the existence of a ground state u ≥ 0.
– We can consider a minimizing sequence of nontrivial solutions of the equation (2.4.1) (which exists by Theorem 2.4.2) and show that some subsequence converges to a ground state.

We show below the existence of a ground state by using the first method. We will also prove a more general result in the following section (Theorem 2.5.8).

Theorem 2.4.4. Under the assumptions of Theorem 2.4.2, there exists a ground state u ≥ 0 of the equation (2.4.1).

Proof. Let

    F(u) = ∫_Ω |∇u|^2 + λ∫_Ω u^2 − a∫_Ω |u|^{p+1},

set

    M = {u ∈ H^1_0(Ω); F(u) = 0},   S = {u ∈ M; u ≠ 0},

and consider E defined by (2.4.2). Given any v ∈ H^1_0(Ω), v ≠ 0, we see that F(tv) = 0 for some t > 0. Thus S ≠ ∅. We proceed in four steps.


Step 1. (F′(v), v)_{H^{−1},H^1_0} < 0 and (E′(v), v)_{H^{−1},H^1_0} = 0 for all v ∈ S. Indeed,

    (F′(v), v)_{H^{−1},H^1_0} = (−2△v + 2λv − a(p + 1)|v|^{p−1}v, v)_{H^{−1},H^1_0}
        = 2∫_Ω |∇v|^2 + 2λ∫_Ω v^2 − a(p + 1)∫_Ω |v|^{p+1}
        = 2F(v) − a(p − 1)∫_Ω |v|^{p+1},

from which we deduce the first property. Since (E′(v), v)_{H^{−1},H^1_0} = F(v), the second property follows.

Step 2. There exists δ > 0 such that ‖v‖_{L^{p+1}} ≥ δ for all v ∈ S. Indeed, since F(v) = 0 and λ > −λ_1, there exists a constant C such that

    ‖v‖^2_{H^1} ≤ C‖v‖^{p+1}_{L^{p+1}},

for all v ∈ S. By Sobolev's inequality, we deduce that

    ‖v‖^2_{L^{p+1}} ≤ C‖v‖^{p+1}_{L^{p+1}},

from which the result follows.

Step 3. There exists u ∈ S, u ≥ 0, such that

    E(u) = inf_{v∈S} E(v) := m.    (2.4.6)

Indeed,

    E(v) = (1/2)F(v) + a(1/2 − 1/(p + 1))∫_Ω |v|^{p+1} = a(1/2 − 1/(p + 1))∫_Ω |v|^{p+1},    (2.4.7)

for all v ∈ S, so that m > 0 by Step 2. Furthermore, it follows from (2.4.6) and (2.4.7) that

    m = a(1/2 − 1/(p + 1)) inf_{v∈S} ∫_Ω |v|^{p+1}.    (2.4.8)

Let (u_n)_{n≥0} ⊂ S be a minimizing sequence for (2.4.6), hence for (2.4.8). Replacing u_n by |u_n|, we see that we may assume u_n ≥ 0. Since u_n ∈ S and (u_n)_{n≥0} is bounded in L^{p+1}(Ω) (hence in L^2(Ω)) by (2.4.8), we see that (u_n)_{n≥0} is bounded in H^1_0(Ω). Therefore (Theorem 5.5.5), there exist a subsequence, which we still denote by
(u_n)_{n≥0}, and u ∈ H^1_0(Ω), u ≥ 0, such that u_n → u in L^{p+1}(Ω) and ‖∇u‖_{L^2} ≤ lim inf ‖∇u_n‖_{L^2} as n → ∞. It follows that

    a(1/2 − 1/(p + 1))∫_Ω |u|^{p+1} = m,    (2.4.9)

and F(u) ≤ 0. We deduce in particular that there exists t ∈ (0, 1] such that F(tu) = 0, i.e. tu ∈ S. Therefore,

    m ≤ a(1/2 − 1/(p + 1))∫_Ω |tu|^{p+1} = t^{p+1} a(1/2 − 1/(p + 1))∫_Ω |u|^{p+1} = t^{p+1} m,

by (2.4.9). Since m > 0, this implies that t = 1. Therefore, u ∈ S and thus E(u) = m by (2.4.9) and (2.4.7).

Step 4. Conclusion. Let u be as in Step 3. By Step 1 we have F′(u) ≠ 0; and so, we may apply Theorem 2.4.1. It follows that there exists a Lagrange multiplier Λ ∈ R such that E′(u) = ΛF′(u). Since, by Step 1, (E′(u), u)_{H^{−1},H^1_0} = 0 and (F′(u), u)_{H^{−1},H^1_0} ≠ 0, we must have Λ = 0; and so u is a solution of the equation (2.4.1). It remains to show that E(v) ≥ E(u) for all solutions v ≠ 0 of (2.4.1). This is clear, since any solution v of (2.4.1) satisfies F(v) = 0, i.e. v ∈ S, and u minimizes E on S. □

We now establish the existence of nontrivial solutions of (2.4.1) in some domains for supercritical nonlinearities, i.e. for p ≥ (N + 2)/(N − 2).

Theorem 2.4.5. Assume N ≥ 2. Let 0 < R_0 < R_1 ≤ ∞ and let Ω be the annulus {x ∈ R^N; R_0 < |x| < R_1}. Suppose λ > −λ_1 where λ_1 = λ_1(−△) is defined by (2.1.5), a > 0 and p > 1. It follows that there exists a radially symmetric solution u ∈ H^1_0(Ω), u ≥ 0, u ≠ 0 of the equation (2.4.1).

Proof. Recall that if w ∈ H^1(R^N) is radially symmetric, then

    |w(x)| ≤ √2 |x|^{−(N−1)/2} ‖w‖^{1/2}_{L^2} ‖∇w‖^{1/2}_{L^2},    (2.4.10)

for a.a. x ∈ R^N (see (5.6.13)). We denote by W the subspace of H^1_0(Ω) of radially symmetric functions, so that W is a closed subspace of H^1_0(Ω). Given u ∈ W, let

    ũ(x) = u(x) if x ∈ Ω,   ũ(x) = 0 if x ∉ Ω.


It follows that ũ ∈ H^1(R^N). Since ũ is also radially symmetric, we may apply estimate (2.4.10) and we deduce that

    |u(x)| ≤ √2 |x|^{−(N−1)/2} ‖u‖^{1/2}_{L^2} ‖∇u‖^{1/2}_{L^2},    (2.4.11)

for a.a. x ∈ Ω. This implies in particular that W ↪ L^∞(Ω), thus W ↪ L^{p+1}(Ω). We now argue as in Theorem 2.4.2. Set

    F(u) = ∫_Ω |u|^{p+1} − 1,

and

    J(u) = (1/2)∫_Ω |∇u|^2 + (λ/2)∫_Ω u^2.

It follows that F, J ∈ C^1(W, R). (Apply Corollary 2.2.2 and the embedding W ↪ L^{p+1}(Ω).) Let

    M = S = {u ∈ W; F(u) = 0}.

We have F′(u) = |u|^{p−1}u ≠ 0 for all u ∈ S. We construct v ∈ S such that

    J(v) = inf_{w∈S} J(w).    (2.4.12)

Since J ≥ 0, we may consider a minimizing sequence (u_n)_{n≥0} ⊂ S, which is bounded in W (by (2.1.8)). Set now v_n = |u_n|. It follows that (v_n)_{n≥0} ⊂ S and is also a minimizing sequence. We now consider separately two cases.

Case 1: R_1 < ∞. There exist a subsequence, which we still denote by (v_n)_{n≥0}, and v ∈ W such that v_n → v in L^2(Ω) and ‖∇v‖_{L^2} ≤ lim inf ‖∇v_n‖_{L^2} as n → ∞ (Theorem 5.5.5). It follows that J(v) ≤ lim inf J(v_n). Furthermore, since W ↪ L^∞(Ω) and v_n → v in L^2(Ω), we deduce from Hölder's inequality that v_n → v in L^{p+1}(Ω). This implies that F(v) = 0; and so v ∈ S satisfies (2.4.12).

Case 2: R_1 = ∞. In this case, it follows from Remark 2.1.5 that λ_1 = 0, thus λ > 0. There exist a subsequence, which we still denote by (v_n)_{n≥0}, and v ∈ W such that v_n → v in L^r(Ω) as n → ∞ for all 2 < r < 2N/(N − 2). Furthermore, ‖v‖_{L^2} ≤ lim inf ‖v_n‖_{L^2} and ‖∇v‖_{L^2} ≤ lim inf ‖∇v_n‖_{L^2} as n → ∞. (The estimate (2.4.11) is essential for that compactness property, see Remark 5.6.5 and Lemma 5.5.3.) Since λ > 0, it follows that J(v) ≤ lim inf J(v_n). Furthermore, since W ↪ L^∞(Ω) and v_n → v in L^r(Ω) for 2 < r < 2N/(N − 2), we deduce from Hölder's inequality that v_n → v in L^{p+1}(Ω) as n → ∞. This implies that F(v) = 0; and so v ∈ S satisfies (2.4.12).


We see that in both cases, v satisfies (2.4.12). In addition, we have v ≥ 0 and, since v ∈ S, v ≠ 0. By Theorem 2.4.1, there exists a Lagrange multiplier µ ∈ R such that J′(v) = µF′(v), i.e.

    −△v + λv = µ|v|^{p−1}v.    (2.4.13)

Taking the H^{−1}−H^1_0 duality product of (2.4.13) with v, we obtain

    2J(v) = µ∫_Ω |v|^{p+1}.

Since v ≠ 0, we have J(v) > 0, and it follows that µ > 0. Finally, set u = (µ/a)^{1/(p−1)} v. It follows from (2.4.13) that u satisfies (2.4.1). This completes the proof. □

Remark 2.4.6. One can show that if N ≥ 2, λ > 0, a > 0 and 1 < p < (N + 2)/(N − 2), then there exists a radially symmetric solution u ∈ H^1(R^N), u ≥ 0, u ≠ 0 of the equation (2.4.1). The proof is the same as the proof of Theorem 2.4.5 (use Theorem 5.6.3 for passing to the limit). Note that the upper bound on p is essential by Pohožaev's identity (see Section 2.7 and in particular Lemma 2.7.1).

Remark 2.4.7. Note that one cannot obtain ground states for the equations considered in Theorem 2.4.5 and Remark 2.4.6 by adapting the argument that we used in the proof of Theorem 2.4.4 to the radial case. This would only prove the existence of a nontrivial solution that minimizes the energy among all nontrivial, radial solutions. We will obtain ground states by other methods (see Section 2.6).

2.5. The mountain pass theorem

In the preceding section, we established the existence of a nontrivial solution of the equation

    −△u + λu = f(u) in Ω,   u = 0 in ∂Ω,    (2.5.1)

in a bounded domain Ω, with λ > −λ_1, and for homogeneous nonlinearities of the form f(u) = a|u|^{p−1}u with a > 0 and p < (N + 2)/(N − 2). The homogeneity of f was essential for the method (constrained minimization). In this section, we will use the mountain pass theorem in order to establish existence of a nontrivial solution for nonhomogeneous nonlinearities.
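As a recap of the constrained-minimization method just discussed, here is a rough one-dimensional numerical sketch of the scheme used in the proofs of Theorems 2.4.2 and 2.4.5: minimize J over the constraint ∫|u|^{p+1} = 1, read off the multiplier from 2J(v) = µ∫|v|^{p+1}, and rescale. The rescaling in the last lines is exactly the step that uses homogeneity and fails for general f. The parameters and the fixed-point iteration below are illustrative choices, not taken from the text.

```python
import numpy as np

# Minimize J(v) = 1/2 int |v'|^2 + (lambda/2) int v^2 subject to int |v|^(p+1) = 1
# on Omega = (0,1), then rescale u = (mu/a)^(1/(p-1)) v, as in Section 2.4.

n = 200
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)
lam, a, p = 1.0, 1.0, 3.0

L = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2          # -d^2/dx^2, Dirichlet
A = L + lam * np.eye(n)                              # so that v.(A v) ~ 2 J(v)

def normalize(v):                                    # enforce int |v|^(p+1) = 1
    return v / (h * np.sum(np.abs(v) ** (p + 1))) ** (1.0 / (p + 1))

v = normalize(np.sin(np.pi * x))
for _ in range(200):                                 # heuristic fixed-point iteration
    v = normalize(np.linalg.solve(A, np.abs(v) ** (p - 1) * v))

mu = h * v @ (A @ v)                                 # = 2 J(v) at a constrained critical point
u = (mu / a) ** (1.0 / (p - 1)) * v                  # rescaled candidate solution of (2.4.1)

residual = A @ u - a * np.abs(u) ** (p - 1) * u      # -u'' + lam u - a|u|^(p-1)u
print(np.max(np.abs(residual)) / np.max(np.abs(A @ u)))   # small relative residual
```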


We begin by establishing the mountain pass theorem, more precisely one of its many versions. We first introduce the Palais-Smale condition.

Definition 2.5.1. Let X be a Banach space and J ∈ C^1(X, R). Given c ∈ R, we say that J satisfies the Palais-Smale condition at the level c (in brief, J satisfies (PS)_c) if the following holds: if there exists a sequence (u_n)_{n≥0} ⊂ X such that J(u_n) → c and J′(u_n) → 0 (in X^⋆) as n → ∞, then c is a critical value (i.e. there is u ∈ X such that J(u) = c and J′(u) = 0). We say that J satisfies the Palais-Smale condition (in brief, J satisfies (PS)) if J satisfies (PS)_c for all c ∈ R.

We will see later examples of functionals that satisfy the Palais-Smale condition. We are now in a position to state the mountain pass theorem of Ambrosetti and Rabinowitz (see [4]).

Theorem 2.5.2 (The mountain pass theorem). Let X be a Banach space, and let J ∈ C^1(X, R). Suppose that:

(i) J(0) = 0;
(ii) there exist ε, γ > 0 such that J(u) ≥ γ for ‖u‖ = ε;
(iii) there exists u_0 ∈ X such that ‖u_0‖ > ε and J(u_0) < γ.

Set A = {p ∈ C([0, 1], X); p(0) = 0, p(1) = u_0} and let

    c = inf_{p∈A} max_{t∈[0,1]} J(p(t)) ≥ γ.

If J satisfies (PS)_c, then c is a critical value of J.

Corollary 2.5.3. Let X be a Banach space and J ∈ C^1(X, R). Suppose that:

(i) J(0) = 0;
(ii) there exist ε, γ > 0 such that J(u) ≥ γ for ‖u‖ = ε;
(iii) there exists u_0 ∈ X such that ‖u_0‖ > ε and J(u_0) < γ.

If J satisfies (PS), then there exist c ≥ γ and u ∈ X such that J(u) = c and J′(u) = 0.

Corollary 2.5.3 is an immediate consequence of Theorem 2.5.2. For the proof of Theorem 2.5.2, we follow the argument of Brezis and Nirenberg [14], which is especially simple and elegant. We will use the following two results.
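Before turning to these two lemmas, it may help to see how the Palais-Smale condition can fail; the following one-dimensional example is standard and is not part of the text. Take X = R and J(x) = arctan x. Then

    J′(x) = 1/(1 + x^2) ≠ 0 for every x,   while J(n) → π/2 and J′(n) → 0 as n → ∞.

Thus (n)_{n≥0} is a Palais-Smale sequence at the level π/2, yet J has no critical point at all, so (PS)_{π/2} fails; the would-be critical point escapes to infinity. For every c ≠ ±π/2 there is no Palais-Smale sequence at the level c, so (PS)_c holds vacuously.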


Lemma 2.5.4 (Ekeland's principle [21]). Let (A, d) be a complete metric space and let ψ ∈ C(A, R) be bounded from below. If

    c = inf_{p∈A} ψ(p),

then for every ε > 0, there exists p_ε ∈ A such that

    c ≤ ψ(p_ε) ≤ c + ε,

and

    ψ(p) − ψ(p_ε) + εd(p, p_ε) ≥ 0,

for all p ∈ A.

Proof. Fix ε > 0. Let p_1 ∈ A satisfy

    c ≤ ψ(p_1) ≤ c + ε,

and set

    E_1 = {p ∈ A; ψ(p) − ψ(p_1) + εd(p, p_1) ≤ 0}.

It is clear that p_1 ∈ E_1, so that E_1 ≠ ∅. Set

    c_1 = inf_{p∈E_1} ψ(p) ∈ [c, ψ(p_1)].

Fix p_2 ∈ E_1 such that

    ψ(p_2) − c_1 ≤ (1/2)(ψ(p_1) − c_1)

(observe that such a p_2 exists: indeed, if ψ(p_1) = c_1, we take p_2 = p_1, and if ψ(p_1) − c_1 > 0, there exists a sequence (p_{1,l})_{l≥0} ⊂ E_1 such that ψ(p_{1,l}) → c_1 as l → ∞, so we take p_2 = p_{1,l} for some l large enough), and set

    E_2 = {p ∈ A; ψ(p) − ψ(p_2) + εd(p, p_2) ≤ 0}.

It is clear that p_2 ∈ E_2, so that E_2 ≠ ∅. Set

    c_2 = inf_{p∈E_2} ψ(p) ∈ [c_1, ψ(p_2)].

We claim that E_2 ⊂ E_1. Indeed, if p ∈ E_2, then

    ψ(p) − ψ(p_1) + εd(p, p_1) = [ψ(p) − ψ(p_2) + εd(p, p_2)] + [ψ(p_2) − ψ(p_1) + εd(p_2, p_1)] + ε[d(p, p_1) − d(p, p_2) − d(p_2, p_1)] ≤ 0.

Since E_2 ⊂ E_1, we see that c_2 ≥ c_1. By induction, we construct a sequence (p_n)_{n≥1} ⊂ A, a nonincreasing sequence (E_n)_{n≥1} of nonempty,
closed subsets of A, and a nondecreasing sequence (c_n)_{n≥1} of real numbers, c ≤ c_1 ≤ · · · ≤ c + ε. We have

    ψ(p_{n+1}) − c_n ≤ (1/2)(ψ(p_n) − c_n),

for all n ≥ 1. Since the sequence (c_n)_{n≥1} is nondecreasing, we deduce that

    ψ(p_{n+1}) − c_{n+1} ≤ (1/2)(ψ(p_n) − c_n);

and so,

    ψ(p_{n+1}) − c_{n+1} ≤ 2^{−n}(ψ(p_1) − c_1).

Furthermore, if p ∈ E_{n+1}, then by definition

    εd(p, p_{n+1}) ≤ ψ(p_{n+1}) − ψ(p) ≤ ψ(p_{n+1}) − c_{n+1} ≤ 2^{−n}(ψ(p_1) − c_1).

This means that the diameter of E_n converges to 0 as n → ∞. Since A is complete, it follows that ∩_{n≥1} E_n is reduced to a point, which we denote by p_ε. Given now p ∈ A, p ≠ p_ε, we have p ∉ E_n for n ≥ n_0; and so,

    ψ(p) − ψ(p_n) + εd(p, p_n) > 0,

for n ≥ n_0. Letting n → ∞, we obtain

    ψ(p) − ψ(p_ε) + εd(p, p_ε) ≥ 0.

Since p ≠ p_ε is arbitrary, the result follows. □

Lemma 2.5.5. Let X be a Banach space and let f ∈ C([0, 1], X^⋆). For every ε > 0, there exists v ∈ C([0, 1], X) such that

    ‖v(t)‖_X ≤ 1 and (f(t), v(t))_{X^⋆,X} ≥ ‖f(t)‖_{X^⋆} − ε,

for all t ∈ [0, 1].

Proof. Fix ε > 0. For every t ∈ [0, 1], there exists x_t ∈ X such that

    ‖x_t‖_X < 1,   (f(t), x_t)_{X^⋆,X} > ‖f(t)‖_{X^⋆} − ε.

By continuity, there exists δ(t) > 0 such that

    (f(s), x_t)_{X^⋆,X} > ‖f(s)‖_{X^⋆} − ε,

for all s ∈ [0, 1] such that |s − t| ≤ δ(t). In particular,

    [0, 1] ⊂ ∪_{t∈[0,1]} (t − δ(t), t + δ(t)),
and we deduce by compactness of [0, 1] that there exist an integer l ≥ 1 and (t_j)_{1≤j≤l} ⊂ [0, 1] such that

    [0, 1] ⊂ ∪_{1≤j≤l} I_j,

where I_j = [0, 1] ∩ (t_j − δ(t_j), t_j + δ(t_j)). If I_j = [0, 1] for some 1 ≤ j ≤ l, then we can take v(t) ≡ x_{t_j}. Otherwise, given any 1 ≤ j ≤ l, we set

    ρ_j(t) = dist(t, K_j),

where K_j = [0, 1] \ I_j, and

    ρ(t) = ∑_{j=1}^{l} ρ_j(t),

for all t ∈ [0, 1]. We observe that any t ∈ [0, 1] belongs to some I_j, so that ρ_j(t) > 0. In particular, ρ(t) > 0 for all t ∈ [0, 1]. Finally, set

    v(t) = (1/ρ(t)) ∑_{j=1}^{l} ρ_j(t) x_{t_j}.

We claim that v satisfies the conclusions of the lemma. Indeed,

    ‖v(t)‖_X ≤ (1/ρ(t)) ∑_{j=1}^{l} ρ_j(t)‖x_{t_j}‖_X ≤ 1.

In addition, note that if ρ_j(t) > 0, then t ∈ I_j; and so,

    (f(t), v(t))_{X^⋆,X} = (1/ρ(t)) ∑_{j=1}^{l} ρ_j(t)(f(t), x_{t_j})_{X^⋆,X} ≥ (1/ρ(t)) ∑_{j=1}^{l} ρ_j(t)(‖f(t)‖_{X^⋆} − ε) ≥ ‖f(t)‖_{X^⋆} − ε,

which completes the proof. □
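The partition-of-unity construction of Lemma 2.5.5 can be carried out explicitly in finite dimensions. The sketch below takes X = R^2 with the Euclidean norm and the dot product as duality pairing; the curve f, the points t_j, and the radius δ are illustrative choices (not from the text), tuned so that (f(s), x_{t_j}) > ‖f(s)‖ − ε whenever |s − t_j| < δ.

```python
import numpy as np

eps, delta = 0.2, 0.01
tj = np.linspace(0.0, 1.0, 101)          # spacing 0.01 < 2*delta: the I_j cover [0,1]

def f(t):                                # a continuous map [0,1] -> X* = R^2
    return np.array([np.cos(2 * np.pi * t) + 2.0, np.sin(2 * np.pi * t)])

# Near-norming elements x_t with |x_t| < 1 and (f(t), x_t) > |f(t)| - eps
xj = np.array([0.999 * f(t) / np.linalg.norm(f(t)) for t in tj])

def v(t):
    # rho_j(t) = dist(t, K_j), K_j = [0,1] \ I_j, I_j = [0,1] ∩ (t_j - delta, t_j + delta)
    rho = np.maximum(0.0, delta - np.abs(t - tj))
    return (rho @ xj) / np.sum(rho)

ok = True
for t in np.linspace(0.0, 1.0, 500):
    ft, vt = f(t), v(t)
    ok &= np.linalg.norm(vt) <= 1.0
    ok &= ft @ vt >= np.linalg.norm(ft) - eps
print(ok)                                # True: v(t) has both properties of Lemma 2.5.5
```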


Proof of Theorem 2.5.2. Let d(p, q) = ‖p − q‖_{C([0,1],X)} for all p, q ∈ A, and set

    ψ(p) = max_{t∈[0,1]} J(p(t)),

for p ∈ A. We note that (A, d) is a complete metric space and that ψ ∈ C(A, R). Therefore, we may apply Ekeland's principle and we see that for every ε > 0, there exists p_ε ∈ A such that

    c ≤ ψ(p_ε) ≤ c + ε,

and

    ψ(p) − ψ(p_ε) + εd(p, p_ε) ≥ 0,    (2.5.2)

for all p ∈ A. We claim that there exists t_ε ∈ (0, 1) such that

    c ≤ J(p_ε(t_ε)) ≤ c + ε,    (2.5.3)

and

    ‖J′(p_ε(t_ε))‖_{X^⋆} ≤ ε.    (2.5.4)

To see this, consider the set

    B_ε = {t ∈ [0, 1]; J(p_ε(t)) = ψ(p_ε)}.

We need only show that there exists t_ε ∈ B_ε such that

    ‖J′(p_ε(t_ε))‖_{X^⋆} ≤ 2ε.    (2.5.5)

Applying Lemma 2.5.5 with f(t) = J′(p_ε(t)), we obtain a function v_ε ∈ C([0, 1], X) such that

    ‖v_ε(t)‖_X ≤ 1,   (J′(p_ε(t)), v_ε(t))_{X^⋆,X} ≥ ‖J′(p_ε(t))‖_{X^⋆} − ε,    (2.5.6)

for all t ∈ [0, 1]. Since B_ε ⊂ (0, 1) (recall that J(p_ε(0)), J(p_ε(1)) < c), there exists a function α_ε ∈ C([0, 1], R) such that 0 ≤ α_ε ≤ 1, α_ε(0) = α_ε(1) = 0 and α_ε ≡ 1 on a neighborhood of B_ε. Given n ≥ 1, we let

    p(t) = p_ε(t) − (1/n)α_ε(t)v_ε(t)

in (2.5.2), and we obtain

    ψ(p_ε − n^{−1}α_ε v_ε) − ψ(p_ε) + ε/n ≥ 0.    (2.5.7)

We set

    B_{ε,n} = {t ∈ [0, 1]; J(p_ε(t) − n^{−1}α_ε(t)v_ε(t)) = ψ(p_ε − n^{−1}α_ε v_ε)},

and we observe that B_{ε,n} ≠ ∅ by definition of ψ. Consider a sequence (t_{ε,n})_{n≥1} with t_{ε,n} ∈ B_{ε,n}. There exist a subsequence, which we still denote by (t_{ε,n})_{n≥1}, and t_ε ∈ [0, 1] such that t_{ε,n} → t_ε as n → ∞. Note
that, since ψ is continuous, ψ(p_ε − n^{−1}α_ε v_ε) → ψ(p_ε) as n → ∞, so that

    J(p_ε(t_ε)) = lim_{n→∞} J(p_ε(t_{ε,n}) − n^{−1}α_ε(t_{ε,n})v_ε(t_{ε,n})) = lim_{n→∞} ψ(p_ε − n^{−1}α_ε v_ε) = ψ(p_ε).

We deduce that t_ε ∈ B_ε. Note that for n large enough, we have α_ε(t_{ε,n}) = 1 (because α_ε ≡ 1 on a neighborhood of B_ε, t_{ε,n} → t_ε and t_ε ∈ B_ε), so that

    J(p_ε(t_{ε,n}) − n^{−1}v_ε(t_{ε,n})) − J(p_ε(t_{ε,n})) = J(p_ε(t_{ε,n}) − n^{−1}α_ε(t_{ε,n})v_ε(t_{ε,n})) − J(p_ε(t_{ε,n})).

Therefore, since J(p_ε(t_{ε,n})) ≤ ψ(p_ε) and since t_{ε,n} ∈ B_{ε,n},

    J(p_ε(t_{ε,n}) − n^{−1}v_ε(t_{ε,n})) − J(p_ε(t_{ε,n}))
        ≥ J(p_ε(t_{ε,n}) − n^{−1}α_ε(t_{ε,n})v_ε(t_{ε,n})) − ψ(p_ε)
        = ψ(p_ε − n^{−1}α_ε v_ε) − ψ(p_ε) ≥ −ε/n,    (2.5.8)

by (2.5.7). On the other hand,

    J(p_ε(t_{ε,n}) − n^{−1}v_ε(t_{ε,n})) − J(p_ε(t_{ε,n}))
        = ∫_0^1 (d/ds) J(p_ε(t_{ε,n}) − sn^{−1}v_ε(t_{ε,n})) ds
        = −(1/n)∫_0^1 (J′(p_ε(t_{ε,n}) − sn^{−1}v_ε(t_{ε,n})), v_ε(t_{ε,n}))_{X^⋆,X} ds;

and so,

    J(p_ε(t_{ε,n}) − n^{−1}v_ε(t_{ε,n})) − J(p_ε(t_{ε,n})) + (1/n)(J′(p_ε(t_{ε,n})), v_ε(t_{ε,n}))_{X^⋆,X}
        = −(1/n)∫_0^1 (J′(p_ε(t_{ε,n}) − sn^{−1}v_ε(t_{ε,n})) − J′(p_ε(t_{ε,n})), v_ε(t_{ε,n}))_{X^⋆,X} ds.

Since ‖v_ε‖_X ≤ 1 and since J is C^1, it follows that the right-hand side of the above identity is o(n^{−1}). Therefore,

    J(p_ε(t_{ε,n}) − n^{−1}v_ε(t_{ε,n})) − J(p_ε(t_{ε,n})) = −(1/n)(J′(p_ε(t_{ε,n})), v_ε(t_{ε,n}))_{X^⋆,X} + o(n^{−1}).    (2.5.9)
We deduce from (2.5.8) and (2.5.9) that

    (J′(p_ε(t_{ε,n})), v_ε(t_{ε,n}))_{X^⋆,X} ≤ ε + o(1).

Using now (2.5.6), we see that

    ‖J′(p_ε(t_{ε,n}))‖_{X^⋆} ≤ 2ε + o(1).

Letting n → ∞, we obtain (2.5.5), which proves the claim (2.5.3)-(2.5.4). Finally, we let ε = 1/n for n ≥ 1, and we let u_n = p_ε(t_ε). It follows from (2.5.3)-(2.5.4) that J(u_n) → c and J′(u_n) → 0 as n → ∞. The result now follows by applying the condition (PS)_c. □

We now give some applications of the mountain pass theorem.

Theorem 2.5.6. Assume Ω is a bounded domain of R^N, and let λ > −λ_1 where λ_1 = λ_1(−△) is defined by (2.1.5). Let f ∈ C(R, R) satisfy f(0) = 0, and suppose there exist 1 < p < (N + 2)/(N − 2) (1 < p < ∞ if N = 1 or 2), ν < λ + λ_1 and θ > 2 such that

    |f(u)| ≤ C(1 + |u|^p) for all u ∈ R,
    F(u) ≤ (ν/2)u^2 for |u| small,
    0 < θF(u) ≤ uf(u) for |u| large,

where F(u) = ∫_0^u f(s) ds. It follows that there exists a solution u ∈ H^1_0(Ω), u ≠ 0, of the equation (2.5.1).

Proof. Set

    J(u) = (1/2)∫_Ω |∇u|^2 + (λ/2)∫_Ω u^2 − ∫_Ω F(u).    (2.5.10)

We will show, by applying the mountain pass theorem, that there exists a critical point u ∈ H^1_0(Ω) of J such that J(u) > 0 (and so, u ≠ 0). We proceed in two steps.

Step 1. J satisfies (PS). Suppose (u_n)_{n≥0} ⊂ H^1_0(Ω) satisfies J(u_n) → c ∈ R and J′(u_n) → 0 in H^{−1}(Ω) as n → ∞. Since J′(u_n) = −△u_n + λu_n − f(u_n), it follows that

    (J′(u_n), u_n)_{H^{−1},H^1_0} = ∫_Ω |∇u_n|^2 + λ∫_Ω u_n^2 − ∫_Ω u_n f(u_n);

and so,

    2J(u_n) − (J′(u_n), u_n)_{H^{−1},H^1_0} = ∫_Ω (u_n f(u_n) − 2F(u_n)).


Note that uf(u) ≥ θF(u) − C for all u ∈ R and some constant C. Therefore,

    2J(u_n) − (J′(u_n), u_n)_{H^{−1},H^1_0} ≥ (θ − 2)∫_Ω F(u_n) − C|Ω|.

We deduce that

    (θ − 2)∫_Ω F(u_n) ≤ 2J(u_n) + ‖J′(u_n)‖_{H^{−1}}‖u_n‖_{H^1} + C|Ω|.    (2.5.11)

It follows that there exists a constant C such that

    ∫_Ω F(u_n) ≤ C + C‖u_n‖_{H^1}.

Therefore, by (2.1.8),

    J(u_n) ≥ (α/2)‖u_n‖^2_{H^1} − C‖u_n‖_{H^1} − C,

with α given by (2.1.7); and so (u_n)_{n≥0} is bounded in H^1_0(Ω). We deduce (Theorem 5.5.5) that there exist a subsequence, which we still denote by (u_n)_{n≥0}, and u ∈ H^1_0(Ω) such that u_n → u in L^{p+1}(Ω) as n → ∞ and

    ∫_Ω ∇u_n · ∇ϕ → ∫_Ω ∇u · ∇ϕ as n → ∞,

for all ϕ ∈ H^1_0(Ω). Furthermore, we may also assume that there exists h ∈ L^{p+1}(Ω) such that |u_n| ≤ h a.e. in Ω. We deduce easily by dominated convergence and the growth assumption on f that f(u_n) → f(u) in L^{(p+1)/p}(Ω), hence in H^{−1}(Ω), as n → ∞. It follows that

    ∫_Ω f(u_n)ϕ → ∫_Ω f(u)ϕ as n → ∞,

for all ϕ ∈ H^1_0(Ω). Therefore,

    (−△u_n + λu_n − f(u_n), ϕ)_{H^{−1},H^1_0} → (−△u + λu − f(u), ϕ)_{H^{−1},H^1_0} as n → ∞.

Since −△u_n + λu_n − f(u_n) = J′(u_n) → 0, it follows that J′(u) = 0. It now remains to show that J(u) = c. It follows from what precedes that −△u_n + λu_n → −△u + λu in H^{−1}(Ω). By Theorem 2.1.4, this implies that u_n → u in H^1_0(Ω); and so, J(u) = lim J(u_n) = c as n → ∞.

Step 2. Conclusion. We have J(0) = 0. In addition, there exists a constant C such that F(u) ≤ (ν/2)u^2 + C|u|^{p+1} for all u ∈ R;
and so,

    ∫_Ω F(u) ≤ (ν/2)∫_Ω u^2 + C‖u‖^{p+1}_{H^1}.

Therefore,

    J(u) ≥ (1/2)∫_Ω |∇u|^2 + ((λ − ν)/2)∫_Ω u^2 − C‖u‖^{p+1}_{H^1}.

Since λ − ν > −λ_1, we deduce that there exists δ > 0 such that

    J(u) ≥ δ‖u‖^2_{H^1} − C‖u‖^{p+1}_{H^1}.

Therefore, setting ε = (δ/2C)^{1/(p−1)}, we have J(u) ≥ δε^2/2 > 0 for ‖u‖_{H^1} = ε. We claim that there exists u ∈ H^1_0(Ω) such that ‖u‖_{H^1} ≥ ε and J(u) < 0. Indeed, for s large, we have

    f(s)/F(s) ≥ θ/s;

and so, F(s) ≥ cs^θ for s large. Thus F(s) ≥ cs^θ − C for all s ≥ 0. Consider now ψ ∈ C_c^∞(Ω) such that ψ ≥ 0 and ψ ≠ 0, and t > 0. We have

    J(tψ) ≤ (t^2/2)(∫_Ω |∇ψ|^2 + λ∫_Ω ψ^2) + C|Ω| − ct^θ∫_Ω ψ^θ.    (2.5.12)

Therefore, J(tψ) < 0 for t large enough, which proves the claim. Since J satisfies (PS) by Step 1, it follows from what precedes that we may apply the mountain pass theorem, from which the result follows. □

Remark 2.5.7. Here are some comments on Theorem 2.5.6.

(i) We see that Theorem 2.5.6 applies to more general nonlinearities than Theorem 2.4.2, because it does not require homogeneity. On the other hand, we do not know if the nontrivial solution that we construct is nonnegative.
(ii) Note that the assumption λ > −λ_1 is not essential in Theorem 2.5.6. However, the proof in the general case requires a slightly stronger assumption on f (namely, we need F ≥ 0) and a more general version of the mountain pass theorem (see for example Kavian [28], Example 8.7 of Chapter 3).
(iii) Note that in Step 1 of the proof, we proved a slightly stronger property than (PS). We proved that if (u_n)_{n≥0} ⊂ H^1_0(Ω) satisfies J′(u_n) → 0 and J(u_n) → c ∈ R as n → ∞, then there exist a subsequence (u_{n_k})_{k>0} and u ∈ H^1_0(Ω) such that u_{n_k} → u in
H^1_0(Ω) as k → ∞ (and so, J(u) = c and J′(u) = 0). This property is sometimes used as the definition of the Palais-Smale condition.

We saw that the energy J is not bounded from below (see (2.5.12)), so a solution cannot minimize the energy on the whole space H^1_0(Ω). However, there is still the notion of ground state, as in the preceding section. A ground state is a nontrivial solution of (2.5.1) which minimizes J among all nontrivial solutions of (2.5.1). We now show the existence of a ground state, under slightly stronger assumptions on f than in Theorem 2.5.6.

Theorem 2.5.8. Assume Ω is a bounded domain of R^N, and let λ > −λ_1 where λ_1 = λ_1(−△) is defined by (2.1.5). Let f ∈ C(R, R) satisfy f(0) = 0, and suppose there exist 1 < p < (N + 2)/(N − 2) (1 < p < ∞ if N = 1 or 2), ν < λ + λ_1 and θ > 2 such that

    |f(u)| ≤ C(1 + |u|^p) for all u ∈ R,
    uf(u) ≤ νu^2 + C|u|^{p+1} for all u ∈ R,
    0 < θF(u) ≤ uf(u) for |u| large,

where F(u) = ∫_0^u f(s) ds. It follows that there exists a ground state of the equation (2.5.1).

Proof. Since f satisfies the assumptions of Theorem 2.5.6, there exists a nontrivial solution of (2.5.1). Let E ≠ ∅ be the set of nontrivial solutions of (2.5.1), and set

    m = inf_{v∈E} J(v).

If v ∈ E, then it follows from (2.5.11) that

    (θ − 2)∫_Ω F(v) ≤ 2J(v) + C|Ω|.

Since F > 0, we deduce that J(v) is bounded from below; and so, m > −∞. Let now (u_n)_{n≥0} be a minimizing sequence. Since J′(u_n) = 0 and J(u_n) → m ∈ R, it follows from Remark 2.5.7 (iii) that there exist a subsequence (u_{n_k})_{k>0} and u ∈ H^1_0(Ω) such that u_{n_k} → u in H^1_0(Ω) as k → ∞. In particular, J(u) = m and J′(u) = 0. Therefore, it only remains to show that u ≠ 0. Indeed, we have (J′(u_n), u_n)_{H^{−1},H^1_0} = 0,
i.e.

    ∫_Ω |∇u_n|^2 + λ∫_Ω u_n^2 = ∫_Ω u_n f(u_n) ≤ ν∫_Ω u_n^2 + C∫_Ω |u_n|^{p+1} ≤ ν∫_Ω u_n^2 + C‖u_n‖^{p+1}_{H^1};

and so,

    ∫_Ω |∇u_n|^2 + (λ − ν)∫_Ω u_n^2 ≤ C‖u_n‖^{p+1}_{H^1}.

Since λ − ν > −λ_1, we deduce that

    ‖u_n‖^2_{H^1} ≤ C‖u_n‖^{p+1}_{H^1};

and since u_n ≠ 0, we conclude that ‖u_n‖_{H^1} ≥ C^{−1/(p−1)}. It follows that ‖u‖_{H^1} ≥ C^{−1/(p−1)}, so that u ≠ 0. □

2.6. Specific methods in R^N

In this section, we consider the case Ω = R^N, and we study the existence of nontrivial solutions, and in particular of ground states, of the equation (2.5.1).

Under appropriate assumptions on f, we already obtained an existence result in Section 1.3. However, the method we applied fails if we consider a nonlinearity f that also depends on x in a non-radial way. Also, it does not show the existence of a ground state. We also obtained an existence result in the homogeneous case by a global minimization technique (see Remark 2.4.6), but we observed that the method does not apply to show the existence of a ground state (see Remark 2.4.7). Note also that we cannot apply the mountain pass theorem, since the associated functional does not satisfy the Palais-Smale condition, due to the lack of compactness of the embedding H^1(R^N) ↪ L^2(R^N).

We will show in this section the existence of a ground state, by solving a relevant constrained minimization problem, as in Berestycki and Lions [8]. The resolution of that problem will be an opportunity for introducing two different tools which allow one to circumvent the difficulties raised by the lack of compactness. Both tools apply to the situation we consider, but it may happen that for a given problem one tool applies but the other does not.
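To make the loss of compactness explicit (a standard observation, not spelled out in the text at this point): fix ϕ ∈ C_c^∞(R^N), ϕ ≢ 0, and set

    u_n(x) = ϕ(x − n e_1),   so that   ‖u_n‖_{H^1(R^N)} = ‖ϕ‖_{H^1(R^N)} and u_n ⇀ 0 weakly in H^1(R^N),

while ‖u_n‖_{L^2} = ‖ϕ‖_{L^2} for every n. Hence no subsequence converges strongly in L^2(R^N), and the embedding H^1(R^N) ↪ L^2(R^N) is not compact. Translation invariance is precisely the degree of freedom that the concentration-compactness and symmetrization arguments below have to control.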


Throughout this section, we assume that

    f ∈ C^1(R), f(0) = f′(0) = 0,    (2.6.1)

and that there exists 1 < p < (N + 2)/(N − 2) such that

    |f(u)| ≤ C(1 + |u|^p),    (2.6.2)

for all u ∈ R. We set

    F(u) = ∫_0^u f(s) ds,    (2.6.3)

and for some of the results we will assume that there exists u_0 ∈ R such that

    F(u_0) − (λ/2)u_0^2 > 0,    (2.6.4)

where λ > 0 is a given number. Finally, we set

    V(u) = ∫_{R^N} F(u) − (λ/2)∫_{R^N} u^2,    (2.6.5)

for u ∈ H^1(R^N). If f satisfies (2.6.1)-(2.6.2), then it follows from Corollary 2.2.2 that V ∈ C^1(H^1(R^N), R) and that V′(u) = f(u) − λu.

We recall that a ground state of (2.5.1) is a solution u ≠ 0, u ∈ H^1(R^N) of (2.5.1), such that J(u) ≤ J(v) for all solutions v ≠ 0, v ∈ H^1(R^N) of (2.5.1). Here, J is defined by (2.5.10). Our main result is the following.

Theorem 2.6.1. Let N ≥ 3, λ > 0 and assume (2.6.1), (2.6.2) and (2.6.4). It follows that there exists a ground state u of (2.5.1).

The proof of Theorem 2.6.1 consists in two steps. First, one reduces the existence of a ground state to the resolution of a constrained minimization problem (Proposition 2.6.2 below). Next, one solves the minimization problem (Proposition 2.6.4 below). As a matter of fact, we give two different proofs of Proposition 2.6.4, one based on the concentration-compactness principle of P.-L. Lions, the other (under slightly more restrictive assumptions on f) based on symmetrization. Note that we assume N ≥ 3 for simplicity. The case N = 1 is solved completely in Section 1.1, and the case N = 2 is more delicate (see for example Kavian [28], Théorème 5.1 p. 276).

We first reduce the existence of a ground state of (2.5.1) to the resolution of a constrained minimization problem.


Proposition 2.6.2. Suppose N ≥ 3. Assume (2.6.1)-(2.6.2), and let λ ∈ R. If there exists a solution ũ ∈ H^1(R^N) of the minimization problem

    V(ũ) = 1,   ∫_{R^N} |∇ũ|^2 = inf{∫_{R^N} |∇v|^2; v ∈ H^1(R^N), V(v) = 1} := m,    (2.6.6)

then there exists a ground state u of (2.5.1). More precisely, if ũ is a solution of (2.6.6), then u defined by u(x) = ũ(γx) with γ = (2N/((N − 2)m))^{1/2} is a ground state of (2.5.1).

Remark 2.6.3. Note that if ũ satisfies (2.6.6), then in particular V(ũ) = 1, so that ũ ≠ 0. It follows that ‖∇ũ‖_{L^2} > 0, so that γ is well defined.

Proof of Proposition 2.6.2. Let ũ be a solution of (2.6.6). It follows from Theorem 2.4.1 that there exists a Lagrange multiplier Λ ∈ R such that

    −△ũ = Λ(f(ũ) − λũ).    (2.6.7)

Indeed, we need only verify that V′(ũ) ≠ 0 in order to apply Theorem 2.4.1. Note that ũ ∈ L^∞(R^N) by standard regularity results (see e.g. Corollary 4.4.3). Set H(x) = F(x) − λx^2/2, so that H′(x) = f(x) − λx, V(u) = ∫_{R^N} H(u) and V′(u) = H′(u). Since ũ is bounded, we may modify the values of H(x) for x large without modifying V(ũ) nor V′(ũ). In particular, we may assume that H′ is bounded, so that H(ũ) ∈ H^1(R^N). Since V(ũ) ≠ 0, we must have H(ũ) ≢ 0, so that ∇H(ũ) ≢ 0 (see Proposition 5.1.11). Since ∇H(ũ) = H′(ũ)∇ũ, we see that H′(ũ) ≢ 0, which proves the claim; hence (2.6.7) is established.

Since ũ ∈ L^∞(R^N), we deduce from Pohožaev's identity (see Lemma 2.7.1) that

    ((N − 2)/2)∫_{R^N} |∇ũ|^2 = NΛV(ũ) = NΛ;

and so,

    Λ = ((N − 2)/(2N)) m.

Therefore, it follows from (2.6.7) that u defined by u(x) = ũ(γx) satisfies (2.5.1).


It remains to show that u is a ground state of (2.5.1). We observe that

    ∫_{R^N} |∇u|^2 = γ^{2−N}∫_{R^N} |∇ũ|^2 = γ^{2−N} m = ((N − 2)/(2N))^{(N−2)/2} m^{N/2}.    (2.6.8)

Suppose now that v ≠ 0 is another solution of (2.5.1). It follows from Pohožaev's identity that

    ((N − 2)/2)∫_{R^N} |∇v|^2 = NV(v);    (2.6.9)

and so

    V(v) = ((N − 2)/(2N))∫_{R^N} |∇v|^2.

Therefore, if we set v(x) = ṽ(µx) with

    µ = (((N − 2)/(2N))∫_{R^N} |∇v|^2)^{−1/N},

we see that V(ṽ) = µ^N V(v) = 1, so that

    ∫_{R^N} |∇ṽ|^2 ≥ m.

It follows that

    ∫_{R^N} |∇v|^2 = µ^{2−N}∫_{R^N} |∇ṽ|^2 ≥ µ^{2−N} m = (((N − 2)/(2N))∫_{R^N} |∇v|^2)^{(N−2)/N} m;

and so,

    ∫_{R^N} |∇v|^2 ≥ ((N − 2)/(2N))^{(N−2)/2} m^{N/2}.

Comparing with (2.6.8), we obtain

    ∫_{R^N} |∇v|^2 ≥ ∫_{R^N} |∇u|^2.    (2.6.10)

Finally, we observe that if w is any solution of (2.5.1), then it follows from Pohožaev's identity (2.6.9) that

    J(w) = (1/2)∫_{R^N} |∇w|^2 − V(w) = (1/N)∫_{R^N} |∇w|^2.

In particular, (2.6.10) implies J(v) ≥ J(u), which completes the proof. □

We now study the existence for the problem (2.6.6).
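As a side computation (elementary, but used twice in the proof above and again in (2.6.19) below), the two ingredients of (2.6.6) scale under dilations ϕ_µ(x) = ϕ(µx) as

    V(ϕ_µ) = ∫_{R^N} H(ϕ(µx)) dx = µ^{−N} V(ϕ),   ∫_{R^N} |∇ϕ_µ|^2 = µ^{2−N} ∫_{R^N} |∇ϕ|^2.

In particular, if V(ϕ) = c > 0 and µ = c^{1/N}, then V(ϕ_µ) = 1 and ∫|∇ϕ|^2 = c^{(N−2)/N}∫|∇ϕ_µ|^2 ≥ c^{(N−2)/N} m. Taking the infimum over such ϕ, and using the inverse dilation for the reverse inequality, gives the identity m_c = c^{(N−2)/N} m stated in (2.6.19) below (where the constraint level is denoted λ); the same scaling explains the choice of γ in Proposition 2.6.2.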


Proposition 2.6.4. Suppose N ≥ 3 and λ > 0. Assume (2.6.1)-(2.6.2) and (2.6.4). It follows that there exists a solution ũ ∈ H^1(R^N) of the minimization problem (2.6.6).

Proof. We proceed in five steps.

Step 1. {v ∈ H^1(R^N); V(v) = 1} ≠ ∅. Consider the function v defined by

    v(x) = u_0 if |x| < 1,   v(x) = 0 if |x| ≥ 1,

where u_0 is as in (2.6.4). It follows from (2.6.4) that

    ∫_{R^N} (F(v) − (λ/2)v^2) > 0.

By convolution of v with a smoothing sequence, we obtain a sequence (v_n)_{n≥0} ⊂ C_c^∞(R^N) such that v_n → v in L^{p+1}(R^N) ∩ L^2(R^N) as n → ∞. It follows that

    ∫_{R^N} (F(v_n) − (λ/2)v_n^2) → ∫_{R^N} (F(v) − (λ/2)v^2) as n → ∞.

Therefore, for n large enough, we have v_n ∈ H^1(R^N) and V(v_n) > 0. Fixing such an n and setting w(x) = v_n(µx) with µ = V(v_n)^{1/N}, we see that w ∈ H^1(R^N) and V(w) = 1.

Step 2. If (v_n)_{n≥0} is a minimizing sequence for the problem (2.6.6), then (v_n)_{n≥0} is bounded in H^1(R^N) and is bounded from below in L^2(R^N) and in L^{p+1}(R^N). We first observe that, by assumption, ‖∇v_n‖_{L^2} is bounded, and we estimate ‖v_n‖_{L^2}. Since V(v_n) = 1, we have

    ∫_{R^N} v_n^2 = (2/λ)(∫_{R^N} F(v_n) − 1) ≤ (2/λ)∫_{R^N} F(v_n).    (2.6.11)

On the other hand, it follows from (2.6.1)-(2.6.2) that for every ε > 0, there exists C_ε such that

    F(s) ≤ εs^2 + C_ε|s|^{p+1}.    (2.6.12)

Therefore, we deduce from (2.6.11) that

    ∫_{R^N} v_n^2 ≤ C∫_{R^N} |v_n|^{p+1}.    (2.6.13)


Since p + 1 < 2N/(N − 2), we deduce from Gagliardo-Nirenberg's inequality that

    ∫_{R^N} |v_n|^{p+1} ≤ C‖∇v_n‖^{N(p−1)/2}_{L^2} ‖v_n‖^{(N+2−p(N−2))/2}_{L^2},

so that (2.6.13) yields

    ‖v_n‖^{(N−2)(p−1)/2}_{L^2} ≤ C‖∇v_n‖^{N(p−1)/2}_{L^2},

which establishes the upper estimate of ‖v_n‖_{L^2}, hence of ‖v_n‖_{H^1}. We now prove the lower estimate of ‖v_n‖_{L^2}. We have

    1 = −(λ/2)∫_{R^N} v_n^2 + ∫_{R^N} F(v_n) ≤ C∫_{R^N} |v_n|^{p+1},

by (2.6.12). Applying again Gagliardo-Nirenberg's inequality, we obtain

    1 ≤ C‖∇v_n‖^{N(p−1)/2}_{L^2} ‖v_n‖^{(N+2−p(N−2))/2}_{L^2} ≤ C‖v_n‖^{(N+2−p(N−2))/2}_{L^2},

which proves the desired estimate. Finally, the lower estimate of ‖v_n‖_{L^{p+1}} follows from (2.6.13).

Step 3. m > 0. It is clear that m ≥ 0. Suppose now by contradiction m = 0 and consider a minimizing sequence (v_n)_{n≥0}. Since (v_n)_{n≥0} is bounded in L^2(R^N) and m = 0, we deduce from Gagliardo-Nirenberg's inequality that ‖v_n‖_{L^{p+1}} → 0 as n → ∞, which contradicts the lower estimate of Step 2.

Step 4. There exist a minimizing sequence (u_n)_{n≥0} for the problem (2.6.6) and u ∈ H^1(R^N) such that u_n → u in L^2(R^N) as n → ∞. Consider a minimizing sequence (v_n)_{n≥0}. It follows from Step 2 that (v_n)_{n≥0} is bounded in H^1(R^N) and bounded from below in L^2(R^N). Therefore, we may assume without loss of generality that ‖v_n‖_{L^2} → a > 0 as n → ∞. We now apply the concentration-compactness principle of P.-L. Lions, see Theorem 5.6.1. It follows that there exists a subsequence, which we still denote by (v_n)_{n≥0}, which satisfies one of the following properties.

(i) Compactness up to a translation: there exist u ∈ H^1(R^N) and a sequence (y_n)_{n≥0} ⊂ R^N such that v_n(· − y_n) → u in L^r(R^N) as n → ∞, for 2 ≤ r < 2N/(N − 2).
(ii) Vanishing: ‖v_n‖_{L^r} → 0 as n → ∞ for 2 < r < 2N/(N − 2).


72 2. VARIATIONAL METHODS(iii) Dicho<strong>to</strong>my: There exist 0 < µ < a and two bounded sequences(w n ) n≥0 and (z n ) n≥0 of H 1 (R N ) with compact support suchthat, as n → ∞,‖w n ‖ 2 L → µ, ‖z n‖ 2 2 L2 → a − µ, (2.6.14)dist (supp w n , supp z n ) → ∞, (2.6.15)‖v n − w n − z n ‖ L r → 0 for 2 ≤ r < 2N/(N − 2), (2.6.16)lim sup ‖∇w n ‖ 2 L + ‖∇z n‖ 2 2 L2 ≤ m. (2.6.17)We see that if (i) holds, then setting u n (·) = v n (· − y n ), (u n ) n≥0is also a minimizing sequence which is relatively compact in L 2 (R N ).Therefore, we need only rule out (ii) and (iii). Since (v n ) n≥0 isbounded from below in L p+1 (R N ) by Step 2, it follows that (ii) canno<strong>to</strong>ccur. We finally rule out (iii). It is convenient <strong>to</strong> introduce, forλ > 0,m λ = inf{ ∫ }|∇ϕ| 2 ; ϕ ∈ H 1 (R N ), V (ϕ) = λ . (2.6.18)R NIt follows easily from the scaling identity V (ϕ(µ·)) = µ −N V (ϕ(·)) thatm λ = λ N−2N m. (2.6.19)Since m > 0 (by Step 3), we deduce in particular thatm < m γ + m 1−γ , (2.6.20)for 0 < γ < 1. Assume now by contradiction that (iii) holds. Since w nand z n have disjoint support, we see that V (w n +z n ) = V (w n )+V (z n ).Also, it follows from (2.6.16) that V (v n ) − V (w n + z n ) → 0; and so,V (v n )−V (w n )−V (z n ) → 0. Since (w n ) n≥0 and (z n ) n≥0 are boundedin H 1 (R N ) and V (v n ) = 1, we may assume without loss of generalitythat there exists γ ∈ R such thatV (w n ) −→n→∞1 − γ and V (z n ) −→n→∞γ. (2.6.21)We consider separately the different possible values of γ. If γ < 0,then in particular, V (w n ) > 1 for n large. It follows in particularfrom (2.6.17) that ‖∇w n ‖ 2 L≤ m. Since on the other hand2‖∇w n ‖ 2 L≥ m 2 V (wn) > m by (2.6.19), we obtain a contradiction. Ifγ > 1, then we also get <strong>to</strong> a contradiction by considering the sequence(z n ) n≥0 . If γ = 0, then the argument of Step 2 shows that (w n ) n≥0


is bounded from below in $L^{p+1}(\mathbb{R}^N)$. Moreover, since $\|z_n\|_{L^2}^2 \to a - \mu > 0$ and $V(z_n) \to 0$, the same argument shows that $(z_n)_{n \geq 0}$ is also bounded from below in $L^{p+1}(\mathbb{R}^N)$, so that, by Gagliardo-Nirenberg's inequality, $\|\nabla z_n\|_{L^2}$ is bounded from below. We now deduce from (2.6.17) that $\limsup \|\nabla w_n\|_{L^2}^2 < m$. Since $V(w_n) \to 1$, we may assume by scaling that $V(w_n) = 1$, and we get to a contradiction with the definition of $m$. If $\gamma = 1$, then we also get to a contradiction by inverting the roles of the sequences $(w_n)_{n \geq 0}$ and $(z_n)_{n \geq 0}$. Finally, if $\gamma \in (0, 1)$, then we easily deduce from (2.6.17) and (2.6.21) that $m_\gamma + m_{1-\gamma} \leq m$, which contradicts (2.6.20).

Step 5. Conclusion. We apply Step 4. Since the minimizing sequence $(u_n)_{n \geq 0}$ is bounded in $H^1(\mathbb{R}^N)$ by Step 2, and since $u_n \to u$ in $L^2(\mathbb{R}^N)$, we deduce from Gagliardo-Nirenberg's inequality that $u_n \to u$ in $L^{p+1}(\mathbb{R}^N)$ as $n \to \infty$. In particular, $1 = V(u_n) \to V(u)$. In addition, it follows from (5.5.8) that $\|\nabla u\|_{L^2}^2 \leq \liminf \|\nabla u_n\|_{L^2}^2 = m$ as $n \to \infty$. Therefore, $u$ satisfies (2.6.6). □

We now give an alternative proof of Proposition 2.6.4, which is applicable when $f$ is odd. That alternative proof is based on the properties of the symmetric-decreasing rearrangement. (See for example Lieb and Loss [31]; Hardy, Littlewood and Pólya [24]. For a different approach, see Brock and Solynin [16].) Given a measurable set $E$ of $\mathbb{R}^N$, we denote by $E^*$ the ball of $\mathbb{R}^N$ centered at $0$ and such that $|E^*| = |E|$. Accordingly, we set
\[
\mathbf{1}^*_E = \mathbf{1}_{E^*}.
\]
Given now a measurable function $u : \mathbb{R}^N \to \mathbb{R}$ such that $|\{|u| > t\}| < \infty$ for all $t > 0$, we set
\[
u^*(x) = \int_0^\infty \mathbf{1}^*_{\{|u| > t\}}(x)\, dt, \tag{2.6.22}
\]
for all $x \in \mathbb{R}^N$. It is not difficult to show that $u^*$ is nonnegative, radially symmetric and nonincreasing. Moreover, $u^*$ has the same distribution function as $u$, i.e.
\[
|\{u^* \geq \lambda\}| = |\{|u| \geq \lambda\}|,
\]
for all $\lambda > 0$. It follows from the above identity that if $\varphi \in C(\mathbb{R})$ is nondecreasing and $\varphi(0) = 0$, then
\[
\int_{\mathbb{R}^N} \varphi(u^*) = \int_{\mathbb{R}^N} \varphi(|u|).
\]


74 2. VARIATIONAL METHODS(Integrate the function θ(λ, x) = 1 {φ(u(x))≥λ} on (0, ∞) × R N and applyFubini.) The assumption that φ is nondecreasing can be removedby writing φ = φ 1 −φ 2 , where φ 1 and φ 2 are nondecreasing. In particular,if H ∈ C(R) is even, H(0) = 0, and if H(u) ∈ L 1 (R N ), it followsthat H(u ∗ ) ∈ L 1 (R N ) and that∫H(u ∗ ) =R N ∫H(u).R N (2.6.23)One can show that if ∇u ∈ L p (R N ) for some 1 ≤ p < ∞, then∇u ∗ ∈ L p (R N ) and∫∫|∇u ∗ | p ≤ |∇u| p . (2.6.24)R N R NThis result, however, is more delicate. See Lieb [30] for a relativelysimple proof in the case p = 2. See Brock and Solynin [16] for a reallysimple proof in the general case, via polarization.Alternative proof of Proposition 2.6.4 when f is odd.We only give an alternative proof of Steps 4 and 5. Note that, sincef is odd, F is even. Consider a minimizing sequence (v n ) n≥0 ofthe problem (2.6.6), and let u n = vn. ∗ It follows from (2.6.23) thatV (u n ) = 1, and it follows from (2.6.24) that (u n ) n≥0 is also a minimizingsequence. Since u n is spherically symmetric, it follows fromTheorem 5.6.3 that there exist a subsequence, which we still denoteby (u n ) n≥0 , and u ∈ H 1 (R N ) such that u n → u in L r (R N ) as n → ∞,for every 2 < r < 2N/(N − 2). Note that for every ε > 0, thereexists C ε such that |F (x) − F (y)| ≤ ε|x − y| + C ε (|x| p + |y| p )|x − y|.Therefore, we see that ∫ ∫F (u n ) −→ F (u).R N n→∞R NSince also∫u 2 ≤ lim infR N n→∞∫u 2 n,R Nand∫|∇u| 2 ≤ lim infR N n→∞∫|∇u n | 2 ,R Nby (5.5.6) and (5.5.8), we see that V (u) ≥ 1 and ‖∇u‖ 2 L≤ m. It2now remains <strong>to</strong> show that V (u) = 1. Suppose by contradiction thatV (u) > 1, and set v(x) = u(µx) with µ = (V (u)) 1 N > 1. It follows


2.7. STUDY OF A MODEL CASE 75that V (v) = 1 and that ‖∇v‖ 2 L= µ −(N−2) ‖∇u‖ 2 2 L≤ µ −(N−2) m < m,2which contradicts the definition of m. This completes the proof. □2.7. Study of a model caseIn this section, we apply the results obtained in Chapters 1 and 2<strong>to</strong> a model case, and we discuss the optimality. For the study ofoptimality, the following results, known as Pohožaev’s identity, willbe useful. The first one concerns the case Ω = R N .Lemma 2.7.1 (Pohožaev’s identity). Let g ∈ C(R) and set G(u) =∫ u0 g(s) ds for all u ∈ R. If u ∈ L∞ loc (RN ) satisfiesin D ′ (R N ), then∫R N { N − 2provided G(u) ∈ L 1 (R N ) and ∇u ∈ L 2 (R N ).2− △u = g(u), (2.7.1)}|∇u| 2 − NG(u) = 0, (2.7.2)Proof. We use the argument of Berestycki and Lions [8], proofof Proposition 1, p. 320. It follows from local regularity (see Theorem4.4.5) that u ∈ W 2,ploc (RN ) ∩ C 1,αloc (RN ) for all 1 < p < ∞ and all0 ≤ α < 1. A long but straightforward calculation shows that, givenany x 0 ∈ R N ,{ N − 2}0 = [−△u − g(u)][(x − x 0 ) · ∇u] = − |∇u| 2 − NG(u) +2{( 1)}∇ ·2 |∇u|2 − G(u) (x − x 0 ) − ((x − x 0 ) · ∇u)∇u , (2.7.3)a.e. in R N . Choosing x 0 = 0 and integrating (2.7.3) on B R (the ballof R N of center 0 and radius R), we obtain∫ { N − 2}|∇u| 2 − NG(u)B R2∫ {( 1)}=∂B R2 |∇u|2 − G(u) x − (x · ∇u)∇u · ⃗n. (2.7.4)Since ∇u ∈ L 2 (R N ) and G(u) ∈ L 1 (R N ), we see that∫ { N − 2}|∇u| 2 − NG(u) −→B R2R→∞


76 2. VARIATIONAL METHODS∫{ N − 2R 2N}|∇u| 2 − NG(u) . (2.7.5)Moreover,∫ ∞ ∫∫(|∇u| 2 + |G(u)|) dσ dR = (|∇u| 2 + |G(u)|) < ∞,0 ∂B R R Nso that there exists a sequence R n → ∞ such thatR n∫∂B Rn(|∇u| 2 + |G(u)|) −→n→∞0. (2.7.6)We finally let R = R n in (2.7.4) and let n → ∞. It follows from (2.7.6)that the right-hand converges <strong>to</strong> 0 as n → ∞. Since the limit of theleft-hand side is given by (2.7.5), we obtain (2.7.2) in the limit. □For the case of a general domain, we have the following result.Lemma 2.7.2 (Pohožaev’s identity). Let g and G be as above.Let Ω be an open domain of R N with boundary of class C 1 . If u ∈H 2 (Ω) ∩ H0 1 (Ω) is such that g(u) ∈ H −1 (R N ) and G(u) ∈ L 1 (R N )and satisfies (2.7.1) in D ′ (Ω), then for any x 0 ∈ R N ,∫ { N − 2}|∇u| 2 − NG(u) + 1 ∫|∇u| 2 (x − x 0 ) · ⃗n = 0, (2.7.7)22Ω∂Ωwhere ⃗n denotes the outward unit normal.Proof. We give a formal argument, and we refer <strong>to</strong> Kavian [28],p. 253 for its justification. Integrating (2.7.2) on Ω, and using theproperty G(u) = 0 on ∂Ω, we obtain∫ { N − 2}0 = − |∇u| 2 − NG(u)Ω 2∫ { 1+2 |∇u|2 (x − x 0 ) − ((x − x 0 ) · ∇u)∇u}· ⃗n.∂ΩSince u = 0 on ∂Ω, we see that ∇u ‖ ⃗n, so that ∇u = (∇u · ⃗n)⃗n.Therefore,((x − x 0 ) · ∇u)∇u = (∇u · ⃗n) 2 ((x − x 0 ) · ⃗n)⃗n = |∇u| 2 ((x − x 0 ) · ⃗n)⃗n,and we obtain (2.7.7).□
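For a concrete check of Lemma 2.7.1, one can take $N = 3$ and the radial function $u(x) = (1+|x|^2)^{-1/2}$, which is a classical solution of $-\triangle u = 3u^5$ in $\mathbb{R}^3$ (so that $g(u) = 3u^5$, $G(u) = u^6/2$, $\nabla u \in L^2(\mathbb{R}^3)$ and $G(u) \in L^1(\mathbb{R}^3)$); this particular example is chosen here only for illustration. A short numerical sketch of the identity (2.7.2):

```python
import numpy as np
from scipy.integrate import quad

# Check of Pohozaev's identity (2.7.2) for N = 3 with the explicit classical
# solution u(x) = (1 + |x|^2)^{-1/2} of -Delta u = 3 u^5 on R^3, so that
# g(u) = 3 u^5 and G(u) = u^6 / 2 (an example chosen for this sketch).
N = 3
omega = 4 * np.pi                              # surface area of the unit sphere S^2

u  = lambda r: (1.0 + r**2) ** (-0.5)
du = lambda r: -r * (1.0 + r**2) ** (-1.5)     # radial derivative u'(r)
G  = lambda s: s**6 / 2.0

grad2 = omega * quad(lambda r: du(r) ** 2 * r ** (N - 1), 0, np.inf)[0]
intG  = omega * quad(lambda r: G(u(r)) * r ** (N - 1), 0, np.inf)[0]

print("(N-2)/2 * int |grad u|^2 =", (N - 2) / 2 * grad2)   # ~ 3*pi^2/8
print("N * int G(u)             =", N * intG)              # ~ 3*pi^2/8
```

Both sides evaluate to $3\pi^2/8 \approx 3.70$, as (2.7.2) predicts.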


2.7. STUDY OF A MODEL CASE 77We now consider the equation{−△u = −λu + µ|u| p−1 u in Ω,u = 0 on ∂Ω,(2.7.8)where p > 1 and λ, µ ∈ R.We will consider three examples of domains Ω.Case 1: Ω = R N . In this case, (2.7.2) has the form∫N − 2|∇u| 2 = − Nλ ∫u 2 + Nµ ∫|u| p+1 , (2.7.9)2 R 2N R p + 1N R Nprovided u ∈ H 1 (R N )∩L p+1 (R N ). In addition, multiplying the equationby u, we obtain∫|∇u| 2 = −λR N ∫u 2 + µR N ∫|u| p+1 ,R N (2.7.10)under the same assumptions on u.– Suppose first µ < 0.– If λ ≥ 0, then it follows immediately from (2.7.10) that the uniquesolution of (2.7.8) in H 1 (R N ) ∩ L p+1 (R N ) is u ≡ 0.– If λ < 0, then the unique solution of (2.7.8) in the space H 1 (R N )∩L p+1 (R N ) is u ≡ 0. This follows from the delicate result ofKa<strong>to</strong> [27] (see also Agmon [2]). See also Remark 1.3.9 (vii) forthe radial case and Remark 1.1.5 (ii) for the case N = 1.– Suppose now µ > 0.– If λ = 0, then it follows from (2.7.9)-(2.7.10) that( N − 2−2Np + 1) ∫ R N |∇u| 2 = 0.Therefore, if N = 1, 2, or if N ≥ 3 and p ≠ (N +2)/(N −2), thenthe unique solution of (2.7.8) in H 1 (R N ) ∩ L p+1 (R N ) is u ≡ 0.If N ≥ 5 and p = (N + 2)/(N − 2), then there is a (radiallysymmetric, positive) solution u ≠ 0 in H 1 (R N ) ∩ L p+1 (R N ) (seeRemark 1.3.9 (iv)).– If λ < 0, then the unique solution of (2.7.8) in the space H 1 (R N )∩L p+1 (R N ) is u ≡ 0. (See above.)– Suppose λ > 0. If N = 1, 2 or if N ≥ 3 and p < (N + 2)/(N − 2),then there is a solution u ∈ H 1 (R N ), u ≠ 0 of (2.7.8). Moreover,there is a positive, spherically symmetric solution. (See for exampleTheorem 1.3.1 for the case N ≥ 2 and Remark 1.1.5 (ii)


78 2. VARIATIONAL METHODSfor the case N = 1.) If N ≥ 3 and p ≥ (N + 2)/(N − 2), then itfollows from (2.7.9)-(2.7.10) that( N − 2−N ) ∫ ( N|∇u| 2 + λ2 p + 1 R 2 − N ) ∫ u 2 = 0,p + 1N R Nso that the unique solution of (2.7.8) in H 1 (R N ) ∩ L p+1 (R N ) isu ≡ 0.Case 2: Ω = {x ∈ R N ; |x| < 1}. In this case, (2.7.7) has theform (taking x 0 = 0)∫N − 2|∇u| 2 = − Nλ ∫u 22 Ω2 Ω+ Nµ ∫|u| p+1 − 1 ∫|∇u| 2 , (2.7.11)p + 12provided u ∈ H 2 (Ω) ∩ L p+1 (Ω). In addition, multiplying the equationby u, we obtain (2.7.10) under the same assumptions on u. Let λ 1 =λ 1 (−∆) be defined by (2.1.5).– Suppose first µ < 0.– If λ ≥ −λ 1 , then the unique solution of (2.7.8) in H 1 (Ω)∩L p+1 (Ω)is u ≡ 0. (See Remark 2.3.10.)– If λ < −λ 1 , then there is a solution u ≢ 0, u ∈ H 1 (Ω) ∩ L p+1 (Ω)of (2.7.8). Moreover, there is a positive, radially symmetric solution.(See Theorem 2.3.9 for the existence of a positive solutionand Theorem 4.5.1 for the symmetry.)– Suppose now µ > 0.– Suppose N ≤ 2 or N ≥ 3 and p < (N + 2)/(N − 2). If λ > −λ 1 ,then there is a solution u ≢ 0, u ∈ H 1 (Ω) ∩ L p+1 (Ω) of (2.7.8).Moreover, there is a positive, radially symmetric solution. (SeeTheorem 2.4.2 for the existence of a positive solution and Theorem4.5.1 for the symmetry.) If λ ≤ −λ 1 , then there is a solutionu ≢ 0, u ∈ H 1 (Ω) ∩ L p+1 (Ω) of (2.7.8). (See Remark 2.4.3 (iii).)These solutions are smooth, i.e. u ∈ C 0 (Ω), see Remark 4.4.4.– Suppose N ≥ 3 and p = (N + 2)/(N − 2). If λ ≥ 0, then theunique solution of (2.7.8) in H 2 (Ω) is u ≡ 0. Indeed, it followsfrom (2.7.10)-(2.7.11) thatNλ(p − 1)2(p + 1)∫ΩΩu 2 = − 1 2∫∂Ω∂Ω|∇u| 2 .


2.7. STUDY OF A MODEL CASE 79The conclusion follows if λ > 0. The case λ = 0 is more delicate:one observes that ∇u = 0 a.e. on ∂Ω, and this implies u ≡ 0(see Pohožaev [40]). In fact, one can show that a solution inH 1 (Ω) belongs <strong>to</strong> H 2 (Ω), so that the unique solution in H 1 (Ω)is u ≡ 0. If N ≥ 4 and −λ 1 < λ < 0, there is a positivesolution. This is a difficult result of Brezis and Nirenberg [15].If N = 3 and −λ 1 < λ < −λ 1 /4, then there is a positive solution(see [15]). If N = 3 and λ ≤ −λ 1 , there is a solution u ≠ 0(see Comte [20]). If N = 4, λ ≤ −λ 1 and λ ≠ −λ k for allk ≥ 1, then there is a solution u ≠ 0 (see Cerami, Solimini andStruwe [19]). If N ≥ 5 and λ ≤ −λ 1 , then there is a solutionu ≠ 0 (see [19]). The case N = 4 and λ = −λ k seems <strong>to</strong> beopen. The case N = 3 and λ ∈ [−λ 1 /4, 0) is a very challengingopen problem, which probably requires some new ideas. Notethat in this last case, it is known that there is no nontrivial radialsolution, and in particular, there is no positive solution (see Brezisand Nirenberg [15]).– Suppose N ≥ 3 and p > (N + 2)/(N − 2). If(2(p + 1))λ ≥ −λ 1 1 − ,N(p − 1)then the unique solution of (2.7.8) in H 2 (Ω) ∩ L p+1 (Ω) is u ≡ 0.Indeed, it follows from (2.7.10)-(2.7.11) that( N − 2−N2 p + 1) ∫ ∫|∇u| 2 Nλ(p − 1)+ u 2 = − 1 ∫|∇u| 2 ;Ω 2(p + 1) Ω 2 ∂Ωand so,[( N − 22−N )λ 1 +p + 1Nλ(p − 1)] ∫ u 2 ≤ − 1 ∫|∇u| 2 .2(p + 1) Ω 2 ∂ΩThe conclusion follows as above. If N = 3, then there existsλ ∗ ∈ (0, λ 1 ) such that for λ ∈ (−λ 1 , −λ ∗ ) there is a positive,radial solution (see Budd and Nurbury [17]). Also, one can showthat there is always a bifurcation branch starting from λ k , forevery k ≥ 1. The other cases seem <strong>to</strong> be open.Case 3: Ω = {x ∈ R N ; 1 < |x| < 2}.defined by (2.1.5).– Suppose first µ < 0.Let λ 1 = λ 1 (−∆) be


80 2. VARIATIONAL METHODS– If λ ≥ −λ 1 , then the unique solution of (2.7.8) in H 1 (Ω)∩L p+1 (Ω)is u ≡ 0. (See Remark 2.3.10.)– If λ < −λ 1 , then there is a solution u ≢ 0, u ∈ H 1 (Ω) ∩ L p+1 (Ω)of (2.7.8). Moreover, there is a positive, radially symmetric solution(See Theorem 2.3.9. In fact, Theorem 2.3.9 produces a positivesolution, but one can construct a radially symmetric solutionby minimizing on radially symmetric functions.) This solution issmooth, i.e. u ∈ C 0 (Ω), see Remark 4.4.4.– Suppose now µ > 0.– If λ > −λ 1 , then there is a solution u ≢ 0, u ∈ H 1 (Ω) ∩ L p+1 (Ω)of (2.7.8). Moreover, there is a positive, radially symmetric solution.(See Theorem 2.4.5 for the case N ≥ 2 and Remark 1.2.6for the case N = 1.) This solution is smooth, i.e. u ∈ C 0 (Ω), seeRemark 4.4.4.– If λ ≤ −λ 1 , then there is a spherically symmetric solution u ≢ 0,u ∈ H 1 (Ω) ∩ L p+1 (Ω) of (2.7.8). (See Kavian [28], Exemple 8.7p. 173. Note that the assumption p < (N + 2)/(N − 2) in [28]is not essential. It is used only for the verification of the Palais-Smale condition, which holds in the present case because of theembedding W ↩→ L ∞ (Ω), see Theorem 2.4.5.) This solution issmooth, i.e. u ∈ C 0 (Ω), see Remark 4.4.4.
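To illustrate Case 3 concretely: for $\mu > 0$ and $\lambda > -\lambda_1$, the positive radially symmetric solution of (2.7.8) on the annulus solves a two-point boundary value problem for the radial ODE, which can be approximated by shooting on the slope $u'(1)$. The sketch below is only a rough illustration; the values $N = 3$, $\lambda = \mu = 1$, $p = 3$ and the scanning range for the slope are assumptions made for this example.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

# Radial shooting for -Delta u = -lam*u + mu*|u|^(p-1) u on the annulus
# {1 < |x| < 2}, u = 0 on the boundary (illustrative parameters only).
N, lam, mu, p = 3, 1.0, 1.0, 3.0

def rhs(r, y):
    u, v = y                      # v = u'
    return [v, -(N - 1) / r * v + lam * u - mu * abs(u) ** (p - 1) * u]

def shoot(s):
    """u(2) for the solution with u(1) = 0, u'(1) = s; if u dips below zero
    before r = 2, stop and return a negative surrogate value."""
    hit = lambda r, y: y[0]
    hit.terminal, hit.direction = True, -1
    sol = solve_ivp(rhs, (1.0, 2.0), [0.0, s], events=hit,
                    rtol=1e-10, atol=1e-12, max_step=1e-2)
    if sol.t_events[0].size:
        return -(2.0 - sol.t_events[0][0])
    return sol.y[0, -1]

# scan for a sign change of the shooting map, then refine it
ss = np.linspace(0.5, 40.0, 80)
vals = [shoot(s) for s in ss]
i = next(k for k in range(len(ss) - 1) if vals[k] * vals[k + 1] < 0)
s_star = brentq(shoot, ss[i], ss[i + 1])
print("initial slope of the positive radial solution: u'(1) =", s_star)
```

A zero of the shooting map corresponds to a solution that is positive on $1 < r < 2$ and vanishes at both endpoints; if the scan finds no sign change, the bracketing range simply has to be enlarged.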


CHAPTER 3

Methods of super- and subsolutions

3.1. The maximum principles

Let $\Omega$ be an open subset of $\mathbb{R}^N$. We recall that a distribution $f \in H^{-1}(\Omega)$ is nonnegative (respectively, nonpositive), i.e. $f \geq 0$ (respectively, $f \leq 0$), if and only if $(f, \varphi)_{H^{-1}, H^1_0} \geq 0$ (respectively, $(f, \varphi)_{H^{-1}, H^1_0} \leq 0$) for all $\varphi \in H^1_0(\Omega)$, $\varphi \geq 0$. By density, this is equivalent to saying that $f \geq 0$ in $\mathcal{D}'(\Omega)$, i.e. that $(f, \varphi)_{\mathcal{D}', \mathcal{D}} \geq 0$ for all $\varphi \in C_c^\infty(\Omega)$, $\varphi \geq 0$. In particular, if $f \in L^1_{\rm loc}(\Omega) \cap H^{-1}(\Omega)$, then $f \geq 0$ in $H^{-1}(\Omega)$ if and only if $f \geq 0$ a.e. in $\Omega$. Therefore, the above definition is consistent with the usual one for functions.

We now can state the following weak form of the maximum principle.

Theorem 3.1.1. Consider a function $a \in L^\infty(\Omega)$, let $\lambda_1(-\triangle + a)$ be defined by (2.1.5) and let $\lambda > -\lambda_1(-\triangle + a)$. Suppose $u \in H^1(\Omega)$ satisfies $-\triangle u + au + \lambda u \geq 0$ (respectively, $\leq 0$) in $H^{-1}(\Omega)$. If $u^- \in H^1_0(\Omega)$ (respectively, $u^+ \in H^1_0(\Omega)$), then $u \geq 0$ (respectively, $u \leq 0$) almost everywhere in $\Omega$.

Proof. We prove the first part of the result, the second follows by changing $u$ to $-u$. Since $u^- \geq 0$, we see that
\[
(-\Delta u + au + \lambda u, -u^-)_{H^{-1}, H^1_0} \leq 0,
\]
which we rewrite, using formula (5.1.5), as
\[
\int_\Omega [\nabla u \cdot \nabla(-u^-) + au(-u^-) + \lambda u(-u^-)] \leq 0.
\]
This means (see Remark 5.3.4) that
\[
\int_\Omega [|\nabla (u^-)|^2 + a|u^-|^2 + \lambda|u^-|^2] \leq 0.
\]


82 3. METHODS OF SUPER- AND SUBSOLUTIONSSince λ > −λ 1 (−∆ + a), we deduce from (2.1.6) that u − = 0, i.e.u ≥ 0.□We now study the strong maximum principle. Our first result inthis direction is the following.Theorem 3.1.2. Suppose Ω ⊂ R N is a connected, open set. Letλ 1 (−△) be defined by (2.1.5) and let λ > −λ 1 (−△). Suppose u ∈H 1 (Ω) ∩ C(Ω) satisfies −△u + λu ≥ 0 (respectively, ≤ 0) in H −1 (Ω).If u − ∈ H 1 0 (Ω) (respectively, u + ∈ H 1 0 (Ω)) and if u ≢ 0, then u > 0(respectively, u < 0) in Ω.The proof of Theorem 3.1.2 is based on the following simplelemma.Lemma 3.1.3. Let 0 < ρ < R < ∞ and set ω = {ρ < |x| < R}.Let λ ∈ R and suppose β ≥ max{0, N−2} satisfies β(β−N+2) ≥ λR 2 .If v is defined by v(x) = |x| −β − R −β for ρ ≤ |x| ≤ R, then thefollowing properties hold.(i) v ∈ C ∞ (ω).(ii) v(x) = 0 if |x| = R.(iii) ρ −β > v(x) ≥ βR −(β+1) (R − |x|) if ρ ≤ |x| ≤ R.(iv) −∆v + λv ≤ 0 in ω.Proof. Properties (i), (ii) and (iii) are immediate. Next,−∆v + λv = −β(β − N + 2)|x| −(β+2) + λ|x| −β − λR −β≤ −β(β − N + 2)|x| −(β+2) + λ|x| −β ,and (iv) easily follows.□Proof of Theorem 3.1.2. We prove the first part of the result,the second follows by changing u <strong>to</strong> −u. We first note that byTheorem 3.1.1, u ≥ 0 a.e. in Ω. Since u ∈ C(Ω) and u ≢ 0, the setO = {x ∈ Ω; u(x) > 0},is a nonempty open subset of Ω. Ω being connected, we need onlyshow that O is a closed subset of Ω. Suppose (y n ) n≥0 ⊂ O andy n → y ∈ Ω as n → ∞. Let R > 0 be such that B(y, 2R) ⊂ Ω, andfix n 0 large enough so that |y − y n0 | < R. Since u(y n0 ) > 0, thereexist 0 < ρ < R and ε > 0 such that u(x) ≥ ε for |x − y n0 | = ρ.Set U = {ρ < |x − y n0 | < R} and let w(x) = u(x) − ερ β v(x − y n0 )


3.1. THE MAXIMUM PRINCIPLES 83for x ∈ U, where β and v are as in Lemma 3.1.3. It follows thatw ∈ H 1 (U) ∩ C(U). Moreover, −∆w + λw ≥ 0 by property (iv) ofLemma 3.1.3. Also, since u ≥ 0 in Ω, we deduce from property (ii) ofLemma 3.1.3 that w(x) ≥ 0 if |x−y n0 | = R. Furthermore, w(x) ≥ 0 if|x − y n0 | = ρ by property (iii) of Lemma 3.1.3 and because u(x) ≥ ε.Thus we may apply Theorem 3.1.1 and we deduce that w(x) ≥ 0 forx ∈ U. In particular, u(y) ≥ ερ β v(y − y n0 ) > 0 by property (iii) ofLemma 3.1.3, so that y ∈ O. Therefore, O is closed, which completesthe proof.□We now state a stronger version of the maximum principle, whichrequires a certain amount of regularity of the domain.Theorem 3.1.4. Let Ω ⊂ R N be a bounded, connected, open set.Assume there exist η, ν > 0 with the following property. For everyx ∈ Ω such that d(x, ∂Ω) ≤ η, there exists y ∈ Ω such that⎧⎪⎨ x ∈ B(y, η),B(y, η) ⊂ Ω,(3.1.1)⎪⎩η − |x − y| ≥ νd(x, ∂Ω).Let λ 1 (−△) be defined by (2.1.5) and let λ > −λ 1 (−△). Supposeu ∈ H 1 (Ω) ∩ C(Ω) satisfies −△u + λu ≥ 0 (respectively, ≤ 0) inH −1 (Ω). If u − ∈ H0 1 (Ω) (respectively, u + ∈ H0 1 (Ω)) and if u ≢ 0,then there exists µ > 0 such that u(x) ≥ µd(x, ∂Ω) (respectively,u(x) ≤ −µd(x, ∂Ω)) in Ω.Remark 3.1.5. The assumption (3.1.1) is satisfied if ∂Ω is ofclass C 2 . Indeed, let γ(z) denote the unit outwards normal <strong>to</strong> ∂Ω atz ∈ ∂Ω. Since Ω is bounded, ∂Ω is uniformly C 2 , so that there existsη > 0 such that B(z − ηγ(z), η) ⊂ Ω for every z ∈ ∂Ω. If x ∈ Ω andd(x, ∂Ω) ≤ η, let z ∈ ∂Ω be such that |x − z| = d(x, ∂Ω). It followsthat x − z is parallel <strong>to</strong> γ(y). Thus, if we set y = z − ηγ(z), we seethat x ∈ B(y, η), B(y, η) ⊂ Ω and η − |x − y| = |z − x| = d(x, ∂Ω).Proof of Theorem 3.1.4. We prove the first part of the result,the second follows by changing u <strong>to</strong> −u. Let 0 < ε ≤ η/2 and considerΩ ε = {x ∈ Ω; d(x, ∂Ω) ≥ ε}. We fix ε > 0 sufficiently small so thatΩ ε is a nonempty, compact subset of Ω. It follows from Theorem 3.1.2that there exists δ > 0 such thatu(x) ≥ δ for all x ∈ Ω ε . (3.1.2)


84 3. METHODS OF SUPER- AND SUBSOLUTIONSWe now consider x 0 ∈ Ω such that d(x 0 , ∂Ω) < ε, and we let y 0 ∈ Ωsatisfy (3.1.1). Since B(y 0 , η) ⊂ Ω and η ≥ 2ε, we see that d(z, ∂Ω) ≥ε for all z ∈ B(y 0 , η/2). It then follows from (3.1.2) thatu(z) ≥ δ for all z ∈ B(y 0 , η/2). (3.1.3)We let ρ = η/2, R = η and U = {ρ < |x − y 0 | < R}, so thatx 0 ∈ U. Let w(x) = u(x) − ερ β v(x − y 0 ) for x ∈ U, where β and vare as in Lemma 3.1.3. It follows that w ∈ H 1 (U) ∩ C(U). Moreover,−∆w + λw ≥ 0 by property (iv) of Lemma 3.1.3. Also, w(x) ≥ 0 if|x − y 0 | = R by property (ii) of Lemma 3.1.3 and because u ≥ 0 in Ω.Furthermore, w(x) ≥ 0 if |x−y 0 | = ρ by property (iii) of Lemma 3.1.3and (3.1.3). Thus we may apply Theorem 3.1.1 and we deduce thatw(x) ≥ 0 for x ∈ U. In particular,u(x 0 ) ≥ βερ β R −(β+1) (R − |x 0 − y 0 |) ≥ νβερ β R −(β+1) d(x 0 , ∂Ω),where the first inequality above follows from of Lemma 3.1.3 (iii) andthe second from (3.1.1). Since x 0 ∈ Ω \ Ω ε is arbitrary, we see thatthere exists µ > 0 such that u(x) ≥ µd(x, ∂Ω) for all x ∈ Ω \ Ω ε .On the other hand, (3.1.2) implies that there exists µ ′ > 0 such thatu(x) ≥ µ ′ d(x, ∂Ω) for all x ∈ Ω ε . This completes the proof. □3.2. The spectral decomposition of the LaplacianThroughout this section, we assume that Ω is bounded and connected,and we give some important properties concerning the spectraldecomposition of −△ + a where a ∈ L ∞ (Ω).Theorem 3.2.1. Assume Ω is a bounded, connected domain ofR N and let a ∈ L ∞ (Ω). It follows that there exist a nondecreasingsequence (λ n ) n≥1 ⊂ R with λ n → ∞ as n → ∞ and a Hilbert basis(ϕ n ) n≥1 of L 2 (Ω) such that (ϕ n ) n≥1 ⊂ H0 1 (Ω) and− ∆ϕ n + aϕ n = λ n ϕ n , (3.2.1)in H −1 (Ω). Moreover, the following properties hold.(i) ϕ n ∈ L ∞ (Ω) ∩ C(Ω), for every n ≥ 1.(ii) λ 1 = λ 1 (−∆ + a; Ω), where λ 1 (−∆ + a; Ω) is defined by (2.1.5).(iii) λ 1 is a simple eigenvalue and either ϕ 1 > 0 or else ϕ 1 < 0 onΩ.For the proof of Theorem 3.2.1, we will use the following fundamentalproperty.
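Before turning to that property, the statement of Theorem 3.2.1 can be illustrated numerically in one space dimension: for $\Omega = (0,1)$ and a bounded potential $a$, the finite-difference discretization of $-\mathrm{d}^2/\mathrm{d}x^2 + a$ with Dirichlet conditions has an increasing sequence of eigenvalues, a first eigenfunction of constant sign, and a first eigenvalue equal to the minimum of the Rayleigh quotient, as in (2.1.5). The grid size and the potential used below are arbitrary choices made for this sketch.

```python
import numpy as np

# Dirichlet finite differences for -u'' + a(x) u = lambda u on (0, 1);
# the potential a is an arbitrary bounded, sign-changing example.
n = 400
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)
a = 5.0 * np.cos(3 * np.pi * x)

A = (np.diag(2.0 / h**2 + a)
     + np.diag(-np.ones(n - 1) / h**2, 1)
     + np.diag(-np.ones(n - 1) / h**2, -1))

lam, phi = np.linalg.eigh(A)            # eigenvalues in increasing order
print("first eigenvalues:", lam[:5])    # they increase, roughly like (k*pi)^2

phi1 = phi[:, 0] * np.sign(phi[:, 0][np.argmax(np.abs(phi[:, 0]))])
print("phi_1 has constant sign:", bool(np.all(phi1 > 0)))

# lambda_1 coincides with the Rayleigh quotient of phi_1, cf. (2.1.5)
print("lambda_1 =", lam[0], " Rayleigh quotient =", phi1 @ A @ phi1 / (phi1 @ phi1))
```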


3.2. THE SPECTRAL DECOMPOSITION OF THE LAPLACIAN 85Proposition 3.2.2. Suppose Ω is a bounded, connected domainof R N . Let a ∈ L ∞ (Ω) and Λ = λ 1 (−∆ + a) where λ 1 (−∆ + a) isdefined by (2.1.5). It follows that there exists ϕ ∈ H0 1 (Ω) ∩ L ∞ (Ω) ∩C(Ω), ϕ > 0 in Ω, ‖ϕ‖ L 2 = 1, such that− ∆ϕ + aϕ = Λϕ, (3.2.2)in H −1 (Ω). In addition, the following properties hold.(i) ϕ is the unique nonnegative solution of the minimization problemu ∈ S,J(u) = inf J(v), (3.2.3)v∈SwhereJ(v) = 1 ∫{|∇v| 2 + av 2 }, (3.2.4)2 Ωand S = {v ∈ H0 1 (Ω); ‖v‖ L 2 = 1}.(ii) If ψ ∈ H0 1 (Ω) is a solution of the equation (3.2.2), then thereexists a constant c ∈ R such that ψ = cϕ.Proof. We proceed in five steps.Step 1. The minimization problem (3.2.3) has a solution u ∈H0 1 (Ω), u ≥ 0. Indeed, it follows from the techniques of Section 2.2that if J is defined by (3.2.4), then J ∈ C 1 (H0 1 (Ω), R) and J ′ (u) =−∆u + au for all u ∈ H0 1 (Ω). Moreover, if F (u) = ‖u‖ 2 L, then2F ∈ C 1 (H0 1 (Ω), R) and F ′ (u) = 2u for all u ∈ H0 1 (Ω). Next, it followsfrom (2.1.5) thatinf J(v) = Λv∈S 2 . (3.2.5)Consider a minimizing sequence (v n ) n≥0 of (3.2.3). Letting u n =|v n |, it follows that u n ≥ 0 and that (u n ) n≥0 is also a minimizingsequence of (3.2.3). Moreover, since ‖u n ‖ L 2 = 1, we see that (u n ) n≥0is bounded in H0 1 (Ω). Since Ω is bounded, it follows that there existsu ∈ H0 1 (Ω) such that u n → u in L 2 (Ω) and ‖∇u‖ L 2 ≤ lim inf ‖∇u n ‖ L 2as n → ∞ (see Theorem 5.5.5). In particular, u ≥ 0 and u is a solutionof the minimization problem (3.2.3).Step 2. If u ∈ H0 1 (Ω) and ‖u‖ L 2 = 1, then is a solution of theminimization problem (3.2.3) if and only if u is a solution of the equation(3.2.2). Suppose first that u is a solution of the minimizationproblem (3.2.3). It follows from Theorem 2.4.1 that there exists a Lagrangemultiplier λ such that −∆u + au = λu. Taking the H −1 − H01duality product of that equation with u, we see that 2J(u) = λ, so that


86 3. METHODS OF SUPER- AND SUBSOLUTIONSλ = Λ by (3.2.5). Thus u satisfies the equation (3.2.2). Conversely,suppose u is a solution of the equation (3.2.2). Taking the H −1 − H01duality product of the equation with u, we see that 2J(u) = Λ, sothat, by (3.2.5), u is a solution of the minimization problem (3.2.3).Step 3. If u ∈ H0 1 (Ω), u ≥ 0, u ≢ 0, is a solution of theequation (3.2.2), then u ∈ L ∞ (Ω) ∩ C(Ω) and u > 0 in Ω. Theproperty u ∈ L ∞ (Ω) ∩ C(Ω) follows from Corollary 4.4.3. Next, wewrite (3.2.2) in the form −∆u + ‖a‖ L ∞u = (‖a‖ L ∞ − a)u. Since‖a‖ L ∞ − a ≥ 0, we see that (‖a‖ L ∞ − a)u ≥ 0, and it follows from thestrong maximum principle that u > 0 in Ω.Step 4. If u, v ≥ 0 are two solutions of the minimizationproblem (3.2.3), then u = v. Indeed, let w = u − v and assume bycontradiction that w ≢ 0. It follows from Step 2 that w is a solutionof the equation (3.2.2). Thus, again by Step 2, w/‖w‖ L 2 is a solutionof the minimization problem (3.2.3), so that z = |w|/‖w‖ L 2 is also asolution of the minimization problem (3.2.3). By Steps 2 and 3, wesee that z > 0 in Ω, so that w does not vanish in Ω. In particular, wdoes not change sign and we may assume for example that w > 0. Itfollows that 0 ≤ v < u, which is absurd since ‖u‖ L 2 = ‖v‖ L 2.Step 5. Conclusion. Let ϕ = u with u as in Step 1. Itfollows from Steps 2 and 3 that ϕ is a solution of (3.2.2) and thatϕ ∈ H0 1 (Ω) ∩ L ∞ (Ω) ∩ C(Ω), ϕ > 0 in Ω and ‖ϕ‖ L 2 = 1. Next,property (i) follows from Step 4. Finally, suppose ψ ∈ H0 1 (Ω), ψ ≢ 0,is a solution of the equation (3.2.2). Setting z = |ψ|/‖ψ‖ L 2, we deducefrom Step 2 (see the proof of Step 4) that z is a nonnegative solutionof (3.2.3). Thus z = ϕ, so that |ψ| = ‖ψ‖ L 2ϕ. In particular, ψdoes not vanish in Ω, so that ψ has constant sign. We deduce thatψ = ±‖ψ‖ L 2ϕ, which proves property (ii).□Proof of Theorem 3.2.1. Set Λ = λ 1 (−∆+a) where λ 1 (−∆+a) is defined by (2.1.5). Let f ∈ H −1 (Ω), and let u ∈ H 1 0 (Ω) be thesolution of the equation −△u + au + (1 − Λ)u = f in H −1 (Ω). Letus set u = Kf. By Theorem 2.1.4, K is bounded H −1 (Ω) → H 1 0 (Ω),hence L 2 (Ω) → H 1 0 (Ω). Therefore, by Theorem 5.5.4, K is compactL 2 (Ω) → L 2 (Ω). We claim that K is self adjoint. Indeed, let f, g ∈L 2 (Ω) and let u = Kf, v = Kg. We have(u, g) L 2 − (f, v) L 2 = (−△v + av + (1 − Λ)v, u) H −1 ,H 1 0− (−△u + au + (1 − Λ)u, v) H −1 ,H 1 0 = 0,


by (5.1.5). It is clear that $K^{-1}(0) = \{0\}$. Moreover, if $f \in L^2(\Omega)$ and $u = Kf$, then
\[
(Kf, f)_{L^2} = (u, -\Delta u + au + (1-\Lambda)u)_{H^1_0, H^{-1}} = \int_\Omega (|\nabla u|^2 + au^2 + (1-\Lambda)u^2),
\]
so that by (2.1.6),
\[
(Kf, f)_{L^2} \geq \int_\Omega u^2 \geq 0.
\]
Therefore (see Brezis [11], Theorem VI.11), $L^2(\Omega)$ possesses a Hilbert basis $(\varphi_n)_{n \geq 1}$ made of eigenvectors of $K$, and the eigenvalues of $K$ consist of a nonincreasing sequence $(\sigma_n)_{n \geq 1} \subset (0, \infty)$ converging to $0$ as $n \to \infty$. We observe that $\varphi_n = \sigma_n^{-1} K \varphi_n$, so that $\varphi_n \in H^1_0(\Omega)$ and $\varphi_n$ satisfies the equation (3.2.1) with
\[
\lambda_n = \frac{1}{\sigma_n} - 1 + \Lambda. \tag{3.2.6}
\]
This proves the first statement, and we now prove properties (i), (ii) and (iii).

(i) This follows from Corollary 4.4.3.

(ii) By formula (3.2.6), this amounts to showing that $\sigma_1 = 1$. We first observe that if $\varphi$ is as in Proposition 3.2.2, then $K\varphi = \varphi$. Thus $1$ is an eigenvalue of $K$, so that $\sigma_1 \geq 1$. Next, we see that $-\Delta \varphi_1 + a\varphi_1 = (\sigma_1^{-1} - 1 + \Lambda)\varphi_1$. Taking the $H^{-1}-H^1_0$ duality product of the equation with $\varphi_1$, we deduce that
\[
\int_\Omega \{|\nabla \varphi_1|^2 + a\varphi_1^2\} = \sigma_1^{-1} - 1 + \Lambda.
\]
Using (2.1.5), we deduce that $\sigma_1^{-1} - 1 + \Lambda \geq \Lambda$. Thus $\sigma_1 \leq 1$, which proves (ii).

(iii) This follows from Proposition 3.2.2. □

Remark 3.2.3. Connectedness of $\Omega$ is required only for property (iii) of Theorem 3.2.1. Without connectedness, these two properties may not hold, as the following example shows. Let $\Omega = (0, \pi) \cup (\pi, 2\pi)$. Then $\lambda_1 = 1$, and the corresponding eigenspace is two-dimensional. More precisely, it is the space spanned by the two


88 3. METHODS OF SUPER- AND SUBSOLUTIONSfunctions ϕ 1 and ˜ϕ 1 defined byϕ 1 (x) ={sin x if 0 < x < π,0 if π < x < 2π,˜ϕ 1 (x) ={0 if 0 < x < π,− sin x if π < x < 2π.In particular, both ϕ 1 and ˜ϕ 1 vanish on a connected component of Ω.We end this section with a useful characterization of H 1 0 (Ω) interms of the spectral decomposition.Proposition 3.2.4. Assume Ω is a bounded, connected domainof R N , let a ∈ L ∞ (Ω) and let (λ n ) n≥1 ⊂ R and (ϕ n ) n≥1 ⊂ H0 1 (Ω) beas in Theorem 3.2.1. Given any u ∈ L 2 (Ω), let α j = (u, ϕ j ) L 2 for allj ≥ 1 so that u = ∑ ∑α j ϕ j . It follows that u ∈ H0 1 (Ω) if and only ifλj αj 2 < ∞. Moreover, ∑ λ j αj 2 = ‖∇u‖2 L. 2Proof. Since (ϕ j ) j≥1 is a Hilbert basis of L 2 (Ω), we may considerthe isometric isomorphism T : l 2 (N) → L 2 (Ω) defined byif A = (α j ) j≥1 . Let nowequipped with the normT A =∞∑α j ϕ j ,j=1V = {A ∈ l 2 (N); ∑ λ j α 2 j < ∞},( ∑∞ ) 1‖A‖ V = λ j αj2 2,j=1so that V is a Banach (in fact, Hilbert) space. We first claim thatT (V) ⊂ H 1 0 (Ω) and‖A‖ V = ‖∇T A‖ L 2, (3.2.7)for all A ∈ V. Indeed, let A ∈ V and consider the sequence (A n ) n≥0 ⊂V defined by α n,j = α j if j ≤ n and α n,j = 0 if j > n. It follows that


3.3. THE ITERATION METHOD 89A n → A in V (hence in l 2 (N)). Moreover,‖∇T A n ‖ 2 L = (−∆T A n, T A 2 n ) H −1 ,H01( ∑ n n∑ )= λ j α j ϕ j , α j ϕ jj=1j=1j=1( ∑ n n∑ )= λ j α j ϕ j , α j ϕ j=j=1n∑λ j αj 2 = ‖A n ‖ 2 V.j=1H −1 ,H 1 0A similar calculation shows that if m > n ≥ 1, then‖∇(T A m − T A n )‖ 2 L 2 = m ∑j=n+1L 2λ j α 2 j(3.2.8)−→ 0. (3.2.9)n→∞We deduce in particular from (3.2.9) that (T A n ) n≥1 is a Cauchy sequencein H0 1 (Ω). Since T A n → T A in L 2 (Ω) (because A n → Ain l 2 (N)), we deduce that T A ∈ H0 1 (Ω) and T A n → T A in V.Thus (3.2.7) follows by letting n → ∞ in (3.2.8). It now remains<strong>to</strong> show that T V = H0 1 (Ω). Since V is a Banach space and T : V →H0 1 (Ω) is isometric, we see that T V is a closed subspace of H0 1 (Ω).Suppose by contradiction that V ̸= H0 1 (Ω). It follows that there existsw ∈ H0 1 (Ω), w ≠ 0 such that (T A, w) H 10= 0 for all A ∈ V. Fixn ≥ 1, and define A ∈ V by α j = 1 if j = n and α j = 0 if j ≠ n. Itfollows that T A = ϕ n , so that (ϕ n , w) H 10= 0. Since(ϕ n , w) H 10= (∇ϕ n , ∇w) L 2 = (−∆ϕ n , w) H −1 ,H 1 0= λ n (ϕ n , w) H −1 ,H 1 0 = λ n(ϕ n , w) L 2,and n ≥ 1 is arbitrary, we conclude that w = 0, which is a contradiction.□3.3. The iteration methodThroughout this section, we assume that Ω is a bounded domainof R N and we consider the problem{−△u = g(u) in Ω,(3.3.1)u = 0 on ∂Ω,


90 3. METHODS OF SUPER- AND SUBSOLUTIONSwhere g ∈ C 1 (R) is a given nonlinearity. The method we will userelies on the notion of sub- and supersolutions, which are defined asfollows.Definition 3.3.1. A function u ∈ H 1 (Ω) ∩ L ∞ (Ω) is called asupersolution of (3.3.1) if the following properties hold.(i) −△u ≥ g(u) in H −1 (Ω);(ii) u − ∈ H 1 0 (Ω).Similarly, a function u ∈ H 1 (Ω) ∩ L ∞ (Ω) is called a subsolutionof (3.3.1) if the following properties hold.(iii) −△u ≤ g(u) in H −1 (Ω);(iv) u + ∈ H 1 0 (Ω).In particular, a solution u ∈ H 1 0 (Ω) of (3.3.1) is a both a supersolutionand a subsolution.Remark 3.3.2. Here are some comments on Definition 3.3.1.(i) As observed in Section 3.1, the property −△u ≥ g(u) in H −1 (Ω)is equivalent <strong>to</strong> saying that −△u ≥ g(u) in D ′ (Ω), i.e. that(−△u − g(u), ϕ) D ′ ,D ≥ 0 for all ϕ ∈ Cc ∞ (Ω), ϕ ≥ 0.(ii) It follows from (i) above that if u ∈ L ∞ (Ω) ∩ H 2 (Ω) satisfies−△u ≥ g(u) a.e. in Ω, then −△u ≥ g(u) in H −1 (Ω).(iii) Property (ii) of Definition 3.3.1 is a weak way of saying “u ≥0 on ∂Ω”. Indeed, if u ∈ C(Ω), then u ≥ 0 on ∂Ω impliesthat u − ∈ H0 1 (Ω) (see Remark 5.1.10 (ii)). Conversely, if Ω isof class C 1 , if u ∈ C(Ω) and if u − ∈ H0 1 (Ω), then u ≥ 0 on∂Ω (see Remark 5.1.10 (iii)). (A similar observation holds forproperty (iv).)Our main result of this section is the following.Theorem 3.3.3. Assume that Ω is a bounded domain of R N andlet g ∈ C 1 (R). Suppose that there exist a subsolution u and a supersolutionu of (3.3.1). If u ≤ u a.e. in Ω, then the following propertieshold.(i) There exists a solution ũ ∈ H 1 0 (Ω) ∩ L ∞ (Ω) of (3.3.1) which isminimal with respect <strong>to</strong> u, in the sense that if w is any supersolutionof (3.3.1) with w ≥ u, then w ≥ ũ.(ii) Similarly, there exists a solution û ∈ H 1 0 (Ω) ∩ L ∞ (Ω) of (3.3.1)which is maximal with respect <strong>to</strong> u, in the sense that if z is anysubsolution of (3.3.1) with z ≤ u, then z ≤ û.


(iii) In particular, $\underline{u} \leq \tilde u \leq \hat u \leq \overline{u}$. (Note that $\tilde u$ and $\hat u$ may coincide.)

Remark 3.3.4. Here are some comments on Theorem 3.3.3.

(i) The main conclusion of Theorem 3.3.3 is the existence of a solution of (3.3.1). On the other hand, the maximal and minimal solutions can be useful.

(ii) Theorem 3.3.3 is somewhat surprising because no assumption is made on the behavior of $g$. Of course, in practice the behavior of $g$ will be important for the construction of sub- and supersolutions.

(iii) The assumption $\underline{u} \leq \overline{u}$ is absolutely essential in Theorem 3.3.3. This can be seen on a quite elementary example: consider for example the equation
\[
\begin{cases} -u'' = 2 + 9u, \\ u_{|\partial\Omega} = 0, \end{cases} \tag{3.3.2}
\]
in $\Omega = (0, \pi)$. It is clear that $\underline{u}(x) \equiv 0$ is a subsolution. Furthermore, $\overline{u}(x) = -\sin^2(x)$ is a supersolution. Indeed,
\[
-\overline{u}'' - 9\overline{u} = 2 + 5\sin^2(x) \geq 2.
\]
Next, we claim that there is no solution of (3.3.2). Indeed, suppose $u$ satisfies (3.3.2). Multiplying the equation by $\sin(3x)$ and integrating by parts, we obtain
\[
9 \int_0^\pi u(x)\sin(3x) = 2 \int_0^\pi \sin(3x) + 9 \int_0^\pi u(x)\sin(3x),
\]
which is absurd since
\[
\int_0^\pi \sin(3x) = \frac{2}{3} \neq 0.
\]
Thus we have an example where there is a subsolution, there is a supersolution, but there is no solution. Obviously, Theorem 3.3.3 does not apply because $\underline{u} \not\leq \overline{u}$.

Proof of Theorem 3.3.3. We use an iteration technique. Set
\[
M = \max\{\|\underline{u}\|_{L^\infty}, \|\overline{u}\|_{L^\infty}\},
\]
let
\[
\lambda \geq \|g'\|_{L^\infty(-M, M)}, \tag{3.3.3}
\]
and set
\[
g_\lambda(u) = g(u) + \lambda u, \tag{3.3.4}
\]


92 3. METHODS OF SUPER- AND SUBSOLUTIONSfor all u ∈ R. Finally, we set u 0 = u and u 0 = u and we define thesequences (u n ) n≥0 and (u n ) n≥0 by induction as follows.{−△u n+1 + λu n+1 = g λ (u n ),u n+1|∂Ω = 0,{(3.3.5)−△u n+1 + λu n+1 = g λ (u n ),u n+1 |∂Ω = 0,We will show that the sequences are well defined, thatu 0 ≤ u 1 ≤ u 2 ≤ · · · ≤ u 2 ≤ u 1 ≤ u 0 ,and that ũ = lim u n and û = limn→∞ n→∞ un have the desired properties.We proceed in five steps.Step 1. u 0 ≤ u 1 ≤ u 1 ≤ u 0 . Since u 0 , u 0 ∈ L ∞ (Ω), wesee that g λ (u 0 ), g λ (u 0 ) ∈ L ∞ (Ω). Therefore, we may apply Theorem2.1.4, from which it follows that u 1 , u 1 ∈ H0 1 (Ω) are well-defined.(Note that λ ≥ 0 and, since Ω is bounded, λ 1 (−∆) > 0, see Remark2.1.5.) Furthermore,−△u 1 + λu 1 = g λ (u 0 ) ≥ −△u 0 + λu 0 ,since u 0 is a subsolution. It follows from the maximum principle thatu 1 ≥ u 0 . One shows similarly that u 1 ≤ u 0 . Next, observe that g λ isnondecreasing on [−M, M]; and so,−△u 1 + λu 1 = g λ (u 0 ) ≤ g λ (u 0 ) = −△u 1 + λu 1 .Applying again the maximum principle, we deduce that u 1 ≤ u 1 .Hence the result.Step 2. For all n ≥ 1, u n and u n are well-defined, and u n−1 ≤u n ≤ u n ≤ u n−1 . We argue by induction. It follows from Step 1that the property holds for n = 1. Suppose it holds up <strong>to</strong> somen ≥ 1. In particular, u n , u n ∈ L ∞ (Ω), so that u n+1 , u n+1 ∈ H0 1 (Ω)are well-defined by Theorem 2.1.4. Next, since g λ is nondecreasing on[−M, M], it follows that−△u n+1 + λu n+1 = g λ (u n ) ≥ g λ (u n−1 ) = −△u n + λu n ;and so, u n+1 ≥ u n by the maximum principle. One shows as well thatu n+1 ≤ u n . Finally, using again the nondecreaing character of g λ , wesee that−△u n+1 + λu n+1 = g λ (u n ) ≤ g λ (u n ) = −△u n+1 + λu n+1 ;


3.3. THE ITERATION METHOD 93and so, u n+1 ≤ u n+1 by the maximum principle. Thus u n ≤ u n+1 ≤u n+1 ≤ u n , which proves the result.Step 3. If ũ = lim u n and û = lim u n as n → ∞, then ũ, û ∈H0 1 (Ω)∩L ∞ (Ω) are solutions of (3.3.1). Indeed, it follows from Step 2that (u n ) n≥0 is nondecreasing and bounded in L ∞ (Ω). Thus ũ =lim u n as n → ∞ is well-defined as a limit a.e. in Ω. Since g λ iscontinuous, it follows that g λ (u n ) → g λ (ũ) a.e. in Ω. Since (u n ) n≥0is bounded in L ∞ (Ω), (g λ (u n )) n≥0 is also bounded in L ∞ (Ω), and bythe dominated convergence theorem it follows that g(u n ) → g λ (ũ) inL 2 (Ω). Finally, since (−△ + λI) −1 is continuous L 2 (Ω) → H0 1 (Ω) (seeTheorem 2.1.4), we deduce that u n converges in H0 1 (Ω) as n → ∞ <strong>to</strong>the solution v ∈ H0 1 (Ω) of{−△v + λv = g λ (ũ),v |∂Ω = 0.Since u n → ũ a.e., we see that v = ũ and the result follows for ũ. Asimilar argument applies <strong>to</strong> û.Step 4. The solutions ũ and û are independent of λ satisfying(3.3.3). We show the result for ũ, and the same argument applies<strong>to</strong> û. Let λ, λ ′ satisfy (3.3.3) and let ũ = lim n and ũ ′ = limn→∞ n→∞ u′ nbe the corresponding solutions of (3.3.1) constructed as above. Sinceλ and λ ′ play a similar role, we need only show that We first showthat ũ ≤ ũ ′ . Assume for definiteness that λ ≥ λ ′ . We claim thatũ ′ ≥ u n for all n ≥ 0. We argue by induction. This is clear for n = 0.Assuming it holds for some n ≥ 0, we have−△ũ ′ + λũ ′ = g λ (ũ ′ ) ≥ g λ (u n ) = −△u n+1 + λu n+1 .It follows from the maximum principle that ũ ′ ≥ u n+1 , which provesthe claim. Letting n → ∞, we deduce that ũ ≤ ũ ′ .Step 5. Minimality of ũ and maximality of û. We only showthe minimality of ũ, the other argument being similar. Let w be a supersolutionof (3.3.1), w ≥ u 0 . Let ˜M = max{‖u‖ L ∞, ‖u‖ L ∞, ‖w‖ L ∞}and letλ ≥ ‖g ′ ‖ L ∞ (−˜M,˜M) .Let (u n ) n≥0 be the corresponding sequence defined by (3.3.5), so thatũ = limn→∞ u n by Steps 3 and 4. We claim that w ≥ u n for all n ≥ 0. Weargue by induction. This is true by assumption for n = 0. Assuming


94 3. METHODS OF SUPER- AND SUBSOLUTIONSthis holds up <strong>to</strong> some n ≥ 0, we have−△w + λw ≥ g λ (w) ≥ g λ (u n ) = −△u n+1 + λu n+1 .It follows from the maximum principle that w ≥ u n+1 , which provesthe claim. We deduce the result by letting n → ∞. This completesthe proof.□We now give applications of Theorem 3.3.3 in elementary caseswhere sub- and supersolutions are trivially constructed. We will considermore delicate cases in the next section.Corollary 3.3.5. Assume that Ω is a bounded domain of R Nand let g ∈ C 1 (R). If there exist a ≤ 0 ≤ b such that g(a) = g(b) = 0,then there exists a solution u ∈ H 1 0 (Ω) ∩ L ∞ (Ω) of (3.3.1), with a ≤u ≤ b.Proof. u ≡ a is clearly a subsolution and u ≡ b is clearly asupersolution.□Corollary 3.3.6. If Ω is a bounded domain of R N and if g ∈C 1 (R), then the following properties hold.(i) If g(0) < 0 and if there exists a < 0 such that g(a) = 0, then thereexists a solution u ∈ H0 1 (Ω) ∩ L ∞ (Ω) of (3.3.1), with a ≤ u ≤ 0.(ii) If g(0) > 0 and if there exists b > 0 such that g(b) = 0, then thereexists a solution u ∈ H0 1 (Ω) ∩ L ∞ (Ω) of (3.3.1), with 0 ≤ u ≤ b.Proof. Suppose for example g(0) < 0, the other case being similar.Then u ≡ a is clearly a subsolution and u ≡ 0 is clearly asupersolution.□3.4. The equation −△u = λg(u)In this section, we consider a function g ∈ C 1 (R) and we studythe problem{−△u = λg(u) in Ω,(3.4.1)u = 0 on ∂Ω,where λ is a nonnegative parameter. We will apply Theorem 3.3.3 inorder <strong>to</strong> determine the set of λ’s such that (3.4.1) has a solution in anappropriate sense.We observe that we may assume g(0) ≠ 0, since otherwise there isalways the trivial solution u = 0. Next, by possibly changing g <strong>to</strong> −g,


we may assume $g(0) > 0$. Finally, if $g(b) = 0$ for some $b > 0$, then the existence of a solution for every $\lambda > 0$ follows from Corollary 3.3.6 (ii). Therefore, we need only consider the case $g(u) > 0$ for all $u \geq 0$. Our first result is the following.

Theorem 3.4.1. Let $\Omega$ be a bounded, connected, open subset of $\mathbb{R}^N$. Let $g \in C^1(\mathbb{R})$ and assume $g(u) > 0$ for all $u \geq 0$. There exists $0 < \lambda^* \leq \infty$ with the following properties.

(i) For every $\lambda \in [0, \lambda^*)$, there exists a (unique) minimal solution $u_\lambda \geq 0$, $u_\lambda \in H^1_0(\Omega) \cap L^\infty(\Omega)$ of (3.4.1). $u_\lambda$ is minimal in the sense that any supersolution $v \geq 0$ of (3.4.1) satisfies $v \geq u_\lambda$. In addition, $u_\lambda \in C(\Omega)$.

(ii) The map $\lambda \mapsto u_\lambda$ is increasing $[0, \lambda^*) \to H^1_0(\Omega) \cap L^\infty(\Omega)$.

(iii) If $\lambda^* < \infty$ and if $\lambda > \lambda^*$, then there is no solution $u \in H^1_0(\Omega) \cap L^\infty(\Omega)$ of (3.4.1).

Proof. The proof relies on the results of Sections 3.1 and 3.3. We proceed in three steps.

Step 1. If $\lambda > 0$ is sufficiently small, then the equation (3.4.1) has a minimal solution $u_\lambda \geq 0$, $u_\lambda \in H^1_0(\Omega) \cap L^\infty(\Omega)$. We first note that, since $g(0) > 0$, $0$ is a subsolution of (3.4.1) for all $\lambda \geq 0$. Next, let $R > 0$ be sufficiently large so that $\Omega \subset B(0, R)$ and set $w(x) = \cosh R - \cosh x_1$. It follows that $w > 0$ on $\Omega$, $w \in C^\infty(\overline{\Omega}) \subset H^1(\Omega)$ and $-\Delta w = \cosh x_1 \geq 1$. Therefore, if $\lambda > 0$ is sufficiently small so that $1/\lambda \geq \sup\{g(s);\ 0 \leq s \leq \cosh R\}$, then $-\Delta w \geq \lambda g(w)$, so that $w$ is a supersolution of (3.4.1). The result now follows from Theorem 3.3.3.

Step 2. Construction of $\lambda^*$ and proof of (i) and (iii). We set
\[
\Lambda = \{\lambda \geq 0;\ \text{(3.4.1) has a nonnegative solution in } H^1_0(\Omega) \cap L^\infty(\Omega)\}, \tag{3.4.2}
\]
and
\[
\lambda^* = \sup \Lambda. \tag{3.4.3}
\]
It follows from Step 1 that $\Lambda \neq \emptyset$, so that $0 < \lambda^* \leq \infty$. Consider now $0 \leq \lambda < \lambda^*$. It follows from (3.4.3) that there exists $\bar\lambda \geq \lambda$ such that $\bar\lambda \in \Lambda$. Let $u \in H^1_0(\Omega) \cap L^\infty(\Omega)$, $u \geq 0$ be a solution of (3.4.1) with $\bar\lambda$. It follows that $-\Delta u = \bar\lambda g(u) \geq \lambda g(u)$, so that $u$ is a supersolution of (3.4.1). Since, as observed above, $0$ is a subsolution of (3.4.1), it follows from Theorem 3.3.3 that the equation (3.4.1) has a minimal solution $u_\lambda \geq 0$, $u_\lambda \in H^1_0(\Omega) \cap L^\infty(\Omega)$. In addition, since


96 3. METHODS OF SUPER- AND SUBSOLUTIONS−∆u λ = λg(u λ ) ∈ L ∞ (Ω), we see that u λ ∈ C(Ω) by Corollary 4.2.7.This proves part (i), and part (iii) follows from (3.4.2)-(3.4.3).Step 3. Proof of (ii). Let 0 ≤ λ < µ < λ ∗ . It is clear that u µis a supersolution of (3.4.1). Since u µ ≥ 0, we deduce from part (i)that u µ ≥ u λ . On the other hand, since g > 0 and λ > µ, it is clearthat u µ ≢ u λ . Letγ = max{|g ′ (s)|; 0 ≤ s ≤ ‖u µ ‖ L ∞}.Setting w = u µ − u λ , it follows that−∆w + λγw = λγw + µg(u µ ) − λg(u λ )≥ λ[γ(u µ − u λ ) + g(u µ ) − g(u λ )] ≥ 0.By the strong maximum principle (Theorem 3.1.2), we deduce thatw > 0, i.e. u µ > u λ in Ω. This completes the proof.□Remark 3.4.2. Here are some comments on Theorem 3.4.1.(i) The mapping λ ↦→ u λ can have discontinuities, even for a smoothnonlinearity g. On the other hand, if g is convex (or concave) on[0, ∞), then the mapping λ ↦→ u λ is smooth. On these questions,see for example [18].(ii) The conclusion of Theorem 3.4.1 can be strengthened. In particular,for λ = λ ∗ , there always exists a solution of (3.4.1) inan appropriate weak sense. That solution is unique. Moreover,if λ ∗ < ∞ and λ > λ ∗ , then there is no solution of (3.4.1), evenin a weak sense. See in particular [12, 36] .The parameter λ ∗ introduced in Theorem 3.4.1 can be finite orinfinite, depending on the behavior of g(u) as u → ∞, as shows thefollowing result.Proposition 3.4.3. Let Ω be a bounded, connected, open subse<strong>to</strong>f R N . Let g ∈ C 1 (R) and assume g(u) > 0 for all u ≥ 0. With thenotation of Theorem 3.4.1, the following properties hold.(i) If g(u)u(ii) If lim infu→∞−→ 0, thenu→∞ λ∗ = ∞.g(u)u > 0, then λ∗ < ∞.Proof. (i) By Theorem 3.3.3, we need only construct a nonnegative,bounded supersolution of (3.4.1) for every λ > 0. (Recallthat 0 is always a subsolution.) To do this, we consider the function


3.4. THE EQUATION −△u = λg(u) 97w(x) = cosh R − cosh x 1 introduced in the Step 1 of the proof of Theorem3.4.1. Fix λ > 0 and let k > 0 <strong>to</strong> be chosen later. If u = kw,then −∆u = k cosh x 1 ≥ k. Since w ≤ cosh R, we deduce that− ∆u ≥ k 2 + k 2 ≥ kw2 cosh R + k 2 = 12 cosh R u + k 2 . (3.4.4)Since g(u)/u → 0 as u → ∞, we see that for every ε > 0, there existsC ε such that g(u) ≤ εu + C ε for all u ≥ 0. In particular, there existsK such that1λg(u) ≤2 cosh R u + K,for all u ≥ 0. Applying (3.4.4), we deduce that−∆u ≥ λg(u) + k 2 − K.Thus if we choose k ≥ 2K, then u is a supersolution of (3.4.1). Thisproves part (i).(ii) Let ϕ 1 > 0 be the first eigenfunction of −∆ (see Section 3.2).It follows from (3.4.1) that∫∫λ 1 u λ ϕ 1 = λ g(u λ )ϕ 1 , (3.4.5)ΩΩfor all 0 ≤ λ < λ ∗ . By assumption, there exist η, K > 0 such thatg(u) ≥ ηu − K for all u ≥ 0; and so,∫λ u λ ϕ 1 ≤ λ ∫1g(u λ )ϕ 1 + Kλ 1ηη ‖ϕ 1‖ L 1. (3.4.6)ΩOn the other hand, g is clearly bounded from below on [0, ∞), i.e.there exists ε > 0 such that g(u) ≥ ε for all u > 0. It then followsfrom (3.4.5) and (3.4.6) that(λ − λ 1η)ε‖ϕ‖ L 1 ≤(λ − λ 1ηΩ) ∫ g(u λ )ϕ 1 ≤ Kλ 1Ωη ‖ϕ 1‖ L 1,for 0 ≤ λ < λ ∗ , which implies λ ∗ ≤ (K + ε)λ 1 /ηε < ∞.Assuming Ω is sufficiently smooth, we study the “stability” of thesolutions u λ .Proposition 3.4.4. Let Ω be a bounded, connected, open subse<strong>to</strong>f R N . Let g ∈ C 1 (R) and assume g(u) > 0 for all u ≥ 0. If ∂Ωis of class C 2 then, with the notation of Theorem 3.4.1, the followingproperties hold.□


98 3. METHODS OF SUPER- AND SUBSOLUTIONS(i) λ 1 (−∆ − λg ′ (u λ )) ≥ 0 for every 0 ≤ λ < λ ∗ , where λ 1 is definedby (2.1.5).(ii) If g is convex or concave on [0, ∞), then λ 1 (−∆ − λg ′ (u λ )) > 0for every 0 ≤ λ < λ ∗ .Proof. Let λ 1 = λ 1 (−∆ − λg ′ (u λ )) be as defined by (2.1.5) andlet ϕ 1 > 0 be the corresponding first eigenvec<strong>to</strong>r (see Section 3.2). Itfollows that− ∆ϕ 1 = λg ′ (u λ )ϕ 1 + λ 1 ϕ 1 . (3.4.7)We observe that, by Theorem 3.1.4 and Remark 3.1.5 (for the lowerestimate) and Theorem 4.3.1 and Remark 4.3.2 (i) (for the upperestimate), there exist 0 < k < K such thatkd(x, ∂Ω) ≤ ϕ 1 (x) ≤ Kd(x, ∂Ω), (3.4.8)for all x ∈ Ω. As well, it follows that for every 0 ≤ λ < λ ∗ there exist0 < c λ < C λ such thatc λ d(x, ∂Ω) ≤ u λ (x) ≤ C λ d(x, ∂Ω), (3.4.9)for all x ∈ Ω. We now proceed in two steps.Step 1. Proof of (i). Assume by contradiction that λ 1 (−∆ −λg ′ (u λ )) < 0. Let ε > 0 and let w ε = u λ − εϕ 1 . It follows from (3.4.1)and (3.4.7) that− ∆w ε − λg(w ε ) = −λ(g(u λ − εϕ 1 ) − g(u λ )+ εϕ 1 [g ′ (u λ ) + (λ 1 /λ)]). (3.4.10)On the other hand, since g is C 1 and u λ , ϕ 1 ∈ L ∞ (Ω), we see thatg(u λ − εϕ 1 ) − g(u λ ) + εϕ 1 g ′ (u λ ) = o(εϕ 1 ). (3.4.11)Since λ 1 < 0, we deduce from (3.4.10)-(3.4.11) that −∆w ε ≥ λg(w ε )for all sufficiently small ε > 0, which implies that w ε is a supersolutionof (3.4.1). Note that 0 is a subsolution and that by (3.4.8)-(3.4.9) ,0 ≤ w ε < u λ if ε > 0 is sufficiently small. It then follows fromTheorem 3.3.3 that there exists a solution 0 ≤ w ≤ w ε < u λ of (3.4.1),which contradicts the minimality of u λ .Step 2. Proof of (ii). The result is clear if λ = 0, so weconsider λ > 0. Assume by contradiction that λ 1 (−∆ − λg ′ (u λ )) = 0


3.4. THE EQUATION −△u = λg(u) 99and let 0 < µ < λ ∗ <strong>to</strong> be chosen later. It then follows from (3.4.7)and (3.4.1) (for λ and µ) that∫∫∫∫∇ϕ 1 · ∇u λ = λ g ′ (u λ )ϕ 1 u λ , ∇ϕ 1 · ∇u µ = λ g ′ (u λ )ϕ 1 u µ ,ΩΩΩΩand∫ ∫∇ϕ 1 · ∇u λ = λ g(u λ )ϕ 1 ,ΩΩIt follows that∫∫λ g ′ (u λ )ϕ 1 u λ = λΩΩg(u λ )ϕ 1 ,from which we deduce that∫[g(u µ ) − g(u λ ) − (u µ − u λ )g ′ (u λ )]ϕ 1 = λ − µ ∫λΩ∫Ω∫∇ϕ 1 · ∇u µ = µ g(u µ )ϕ 1 .Ω∫∫λ g ′ (u λ )ϕ 1 u µ = µ g(u µ )ϕ 1 ,ΩΩΩg(u µ )ϕ 1 . (3.4.12)Next, we observe that by (3.4.8),∫g(u µ )ϕ 1 > 0. (3.4.13)ΩWe now argue as follows. If g is convex, then the integrand in theleft-hand side of (3.4.12) is nonnegative. We then chose µ > λ sothat by (3.4.13) the right-hand side of (3.4.12) is negative, yielding acontradiction. If g is concave, then the integrand in the left-hand sideof (3.4.12) is nonpositive and we obtain a contradiction by choosingλ < µ.□Remark 3.4.5. We give an explicit characterization of all solutionsof (3.4.1) in a model case. Let Ω = (0, l) and g(u) = e u , i.e.consider the problem− u ′′ = λe u , u(0) = u(l) = 0. (3.4.14)Note first that by any solution of (3.4.14) is positive in Ω. ApplyingTheorem 1.2.3, we see that there exists a solution of (3.4.14) everytime there exists x > 0 such that∫ x0ds√ex − e s = l√ λ/2. (3.4.15)


Let $H(x)$ be the left-hand side of (3.4.15). Setting successively $\sigma = e^s$, $\tau = e^x - \sigma$ and $\theta = e^{-x}\tau$, we find
\[
H(x) = \int_1^{e^x} \frac{d\sigma}{\sigma\sqrt{e^x - \sigma}} = \int_0^{e^x - 1} \frac{d\tau}{(e^x - \tau)\sqrt{\tau}} = \frac{1}{\sqrt{e^x}} \int_0^{1 - e^{-x}} \frac{d\theta}{(1-\theta)\sqrt{\theta}}.
\]
Since
\[
\frac{1}{(1-\theta)\sqrt{\theta}} = \frac{d}{d\theta} \log \frac{1 + \sqrt{\theta}}{1 - \sqrt{\theta}},
\]
we deduce that
\[
H(x) = F(1 - e^{-x}),
\]
where
\[
F(y) = \sqrt{1 - y}\, \log \frac{1 + \sqrt{y}}{1 - \sqrt{y}},
\]
for $y \in [0, 1)$. We note that $F(0) = 0 = \lim_{y \uparrow 1} F(y)$, and that
\[
F'(y) = \frac{1}{2\sqrt{y}\sqrt{1 - y}} \Bigl( 2 - \sqrt{y}\, \log \frac{1 + \sqrt{y}}{1 - \sqrt{y}} \Bigr).
\]
Since the function $y \mapsto \sqrt{y}\, \log \dfrac{1 + \sqrt{y}}{1 - \sqrt{y}}$ is increasing, there exists a unique $y_0 \in (0, 1)$ such that
\[
\sqrt{y_0}\, \log \frac{1 + \sqrt{y_0}}{1 - \sqrt{y_0}} = 2.
\]
Thus $F$ is increasing on $(0, y_0)$ and decreasing on $(y_0, 1)$. If
\[
\lambda^* = \frac{2}{l^2} \Bigl( \max_{y \in (0,1)} F(y) \Bigr)^2,
\]
then it follows that if $0 < \lambda < \lambda^*$, then there exist exactly two solutions of (3.4.14); if $\lambda = \lambda^*$, then there exists exactly one solution of (3.4.14); if $\lambda > \lambda^*$, then there is no solution of (3.4.14).
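The threshold $\lambda^*$ of this model case is easy to evaluate numerically from the formula above. The following sketch locates $y_0$, computes $\lambda^*$ for $l = 1$, and counts the solutions of $H(x) = l\sqrt{\lambda/2}$ for one value of $\lambda$ below and one above the threshold (the grid sizes are arbitrary choices for this example):

```python
import numpy as np
from scipy.optimize import brentq

# F and H as computed above, for the problem -u'' = lam * exp(u) on (0, l)
F = lambda y: np.sqrt(1.0 - y) * np.log((1.0 + np.sqrt(y)) / (1.0 - np.sqrt(y)))
H = lambda x: F(1.0 - np.exp(-x))

# y0 is where sqrt(y) * log((1+sqrt(y))/(1-sqrt(y))) = 2, i.e. where F' vanishes
g = lambda y: np.sqrt(y) * np.log((1.0 + np.sqrt(y)) / (1.0 - np.sqrt(y))) - 2.0
y0 = brentq(g, 0.01, 1.0 - 1e-12)

l = 1.0
lam_star = 2.0 / l**2 * F(y0) ** 2
print("y0 =", y0, " max F =", F(y0), " lambda* =", lam_star)

# count the solutions of H(x) = l * sqrt(lam / 2) below and above lambda*
xs = np.linspace(1e-8, 50.0, 200001)
for lam in (0.5 * lam_star, 1.5 * lam_star):
    sign = np.sign(H(xs) - l * np.sqrt(lam / 2.0))
    print(f"lambda = {lam:.3f}: {np.count_nonzero(np.diff(sign))} solution(s)")
```

For $l = 1$ this gives $\lambda^* \approx 3.51$, and the counts reproduce the two-one-zero solution pattern described above.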


CHAPTER 4Regularity and qualitative propertiesIn this section, we study the regularity and symmetry propertiesof the solutions of nonlinear <strong>elliptic</strong> <strong>equations</strong>. We begin by studyingthe regularity for linear <strong>equations</strong>, then use bootstrap arguments inthe nonlinear case. For the symmetry properties, we use the “movingplanes” technique, based on the maximum principle.4.1. Interior regularity for linear <strong>equations</strong>In this section, we study the interior regularity of the solutions ofthe equation− ∆u + λu = f, (4.1.1)in D ′ (Ω). Our first result concerns the case where the equation holdsin the whole space R N .Proposition 4.1.1. Let m ∈ Z and 1 < p < ∞. Supposeλ > 0 and let v, h ∈ S ′ (R N ) satisfy (4.1.1) in S ′ (R N ). If h ∈W m,p (R N ), then v ∈ W m+2,p (R N ), and there exists a constant Csuch that ‖v‖ W m+2,p ≤ C‖h‖ W m,p.Proof. Taking the Fourier transform of (4.1.1), we obtain (λ +4π 2 |ξ| 2 )̂v = ĥ in S′ (R N ), so that (λ+4π 2 |ξ| 2 ) m+22 ̂v = (λ+4π 2 |ξ| 2 ) m 2 ̂f,and the result follows from Theorem 5.2.3.□We now consider the case of a general domain Ω.Proposition 4.1.2. Let λ ∈ R and let u, f ∈ D ′ (Ω) satisfy theequation (4.1.1) in D ′ (Ω).(i) If f ∈ W m,pn,ploc(Ω) and u ∈ Wloc (Ω) for some m ≥ 0, n ∈ Z and1 < p < ∞, then u ∈ W m+2,ploc(Ω). In addition, for every Ω ′′ ⊂⊂Ω ′ ⊂⊂ Ω, there exists a constant C (depending only on m, Ω ′ andΩ ′′ ) such that ‖u‖ W m+2,p (Ω ′′ ) ≤ C(‖f‖ W m,p (Ω ′ ) + ‖u‖ W n,p (Ω ′ )).101


102 4. REGULARITY AND QUALITATIVE PROPERTIES(ii) If f ∈ C ∞ (Ω) and u ∈ W n,ploc(Ω) for some n ∈ Z and 1 < p < ∞,then u ∈ C ∞ (Ω).Proof. We proceed in two steps.Step 1. Consider ω ′′ ⊂⊂ ω ′ ⊂⊂ Ω, k ∈ Z and 1 < p < ∞. Ifu ∈ W k,p (ω ′ ) and f ∈ W k−1,p (ω ′ ) solve the equation (4.1.1) in D ′ (ω ′ ),then u ∈ W k+1,p (ω ′′ ) and there exists C such that ‖u‖ W k+1,p (ω ′′ ) ≤C(‖f‖ W k−1,p (ω ′ ) +‖u‖ W k,p (ω )). To show this, consider ρ ∈ C ∞ ′ c (R N )such that ρ ≡ 1 on ω ′′ and supp ρ ⊂ ω ′ and define v ∈ D ′ (R N ) byv = ρu, i.e.(v, ϕ) D ′ (R N ),D(R N ) = (u, ρϕ) D ′ (ω ′ ),D(ω ′ ).It is not difficult <strong>to</strong> show that v ∈ W k,p (R N ) and that ‖v‖ W k,p (R N ) ≤C‖u‖ W k,p (ω ′ ). <strong>An</strong> easy calculation shows that v solves the equation− △v + v = T 1 + T 2 + T 3 , (4.1.2)in D ′ (R N ), where the distributions T 1 , T 2 and T 3 are defined by(T 1 , ϕ) D ′ (R N ),D(R N ) = (f + (1 − λ)u, ρϕ) D ′ (ω ′ ),D(ω ′ ),(T 2 , ϕ) D′ (R N ),D(R N ) = −(u, ϕ△ρ) D′ (ω ′ ),D(ω ′ ),(T 3 , ϕ) D′ (R N ),D(R N ) = −2(∇u, ϕ∇ρ) D′ (ω ′ ),D(ω ′ ),for every ϕ ∈ Cc∞ (R N ). It follows easily that T j ∈ W k−1,p (R N ) andthat‖T j ‖ W k−1,p (R N ) ≤ C(‖f‖ W k−1,p (ω ′ ) + ‖u‖ W k,p (ω ′ )),for j = 1, 2, 3. Applying (4.1.2) and Proposition 4.1.1, we deducev ∈ W k+1,p (R N ) and ‖v‖ W k+1,p (R N ) ≤ C(‖f‖ W k−1,p (ω )+‖u‖ ′ W k,p (ω )). ′The result follows, since the restrictions of u and v <strong>to</strong> ω ′′ coincide.Step 2. Conclusion. Without loss of generality, we may assumen = −l ≤ 0. Let Ω ′′ ⊂⊂ Ω ′ ⊂⊂ Ω. Consider now a family(ω j ) 0≤j≤m+l+1 of open subsets of Ω, such thatΩ ′′ = ω m+l+1 ⊂⊂ · · · ⊂⊂ ω 0 ⊂⊂ Ω ′(one constructs easily such a family).u ∈ W −l+1,p (ω 0 ) and thatIt follows from Step 1 that‖u‖ W −l+1,p (ω 0) ≤ C(‖f‖ W −l−1,p (Ω ′ ) + ‖u‖ W −l,p (Ω ′ ))≤ C(‖f‖ W m,p (Ω ′ ) + ‖u‖ W n,p (Ω ′ )).(4.1.3)


4.2. L p REGULARITY FOR LINEAR EQUATIONS 103We deduce from (4.1.3) and Proposition 4.1.1 that u ∈ W −l+2,p (ω 1 )and‖u‖ W −l+2,p (ω 1) ≤ C(‖f‖ W −l,p (ω 0) + ‖u‖ W −l+1,p (ω 0))≤ C(‖f‖ W m,p (Ω ′ ) + ‖u‖ W n,p (Ω ′ )).Iterating the above argument, one shows that u ∈ W m+2,p (ω m+l+1 ) =W m+2,p (Ω ′′ ) and that there exists C such that ‖u‖ W m+2,p (Ω ′′ ) ≤C(‖f‖ W m,p (Ω ′ ) + ‖u‖ W n,p (Ω ′ )). Hence property (i), since Ω ′ and Ω ′′are arbitrary. Property (ii) follows from Property (i) and the fact thatC ∞ (Ω) = ∩ m≥0 W m,ploc(Ω) (see Corollary 5.4.17). □4.2. L p regularity for linear <strong>equations</strong>In this section, we consider an open subset Ω ⊂ R N and we studythe L p regularity for solutions of the linear equation{−△u + λu = f in Ω,(4.2.1)u = 0 in ∂Ω.It follows from Theorem 2.1.4 that if λ > −λ 1 with λ 1 = λ 1 (−∆)defined by (2.1.5), then for every f ∈ H −1 (Ω), the equation (4.2.1)has a unique weak solution u ∈ H 1 0 (Ω). We begin with the followingresult.Theorem 4.2.1. Let λ > 0, let f ∈ H −1 (Ω) and let u ∈ H 1 0 (Ω)be the solution of (4.2.1). If f ∈ L p (Ω) for some p ∈ [1, ∞], thenu ∈ L p (Ω) and λ‖u‖ L p ≤ ‖f‖ L p.Proof. Let ϕ ∈ C 1 (R, R). Assume that ϕ(0) = 0, ϕ ′ ≥ 0 andϕ ′ ∈ L ∞ (R). It follows from Proposition 5.3.1 that ϕ(u) ∈ H0 1 (Ω) andthat ∇ϕ(u) = ϕ ′ (u)∇u a.e. in Ω. Therefore, taking the H −1 − H01duality product of (4.2.1) with ϕ(u), we obtain∫∫∫ϕ ′ (u)|∇u| 2 dx + λ uϕ(u) dx = fϕ(u) dx;Ωand so∫∫λ uϕ(u) dx ≤ fϕ(u) dx.ΩΩAssume that |ϕ(u)| ≤ |u| p−1 . It follows that |ϕ(u)|so∫( ∫λ uϕ(u) dx ≤ ‖f‖ L p uϕ(u) dxΩΩΩΩpp−1) p−1p.≤ uϕ(u); and


104 4. REGULARITY AND QUALITATIVE PROPERTIESWe deduce that( ∫ ) 1pλ uϕ(u) dx ≤ ‖f‖ L p, (4.2.2)Ωand we consider separately two cases.Case 1: p ≤ 2 Given ε > 0, let ϕ(u) = u(ε + u 2 ) p−22 . It followsfrom (4.2.2) that( ∫ )λ u 2 (ε + u 2 ) p−21p2 dx ≤ ‖f‖ L p.ΩLetting ε ↓ 0 and applying Fa<strong>to</strong>u’s Lemma yields the desired result.Case 2: 2 < p ≤ ∞ We use a duality argument. Given h ∈Cc∞ (Ω), let v ∈ H0 1 (Ω) be the solution of (4.2.1) with f replaced byh. We have∫uh = (u, −△v + λv) H 10 ,H = (−△u + λu, v) −1 H −1 ,H01 Ω∫= (f, v) H −1 ,H0 1 = fv.Therefore,∫∣ΩΩuh∣ ≤ ‖f‖ L p‖v‖ L p ′ ≤ 1 λ ‖f‖ L p‖h‖ L , p′by the result of Case 1 (since p ′ < 2). Since h ∈ Cc∞ (Ω) is arbitrary,we deduce that ‖u‖ L p ≤ λ −1 ‖f‖ L p. □For some λ < 0, one can still obtain L ∞ regularity results. Moreprecisely, we have the following.Theorem 4.2.2. Let λ 1 = λ 1 (−∆) be defined by (2.1.5) andlet λ > −λ 1 . Let f ∈ H −1 (Ω) and let u ∈ H0 1 (Ω) be the solutionof (4.2.1). If f ∈ L p (Ω) + L ∞ (Ω) for some p > 1, p > N/2, thenu ∈ L ∞ (Ω). Moreover, given 1 ≤ r < ∞, there exists a constant Cindependent of f such that‖u‖ L ∞ ≤ C(‖f‖ Lp +L ∞ + ‖u‖ L r).In particular, ‖u‖ L ∞ ≤ C(‖f‖ L p +L ∞ + ‖f‖ H −1).Proof. The proof we follow is adapted from Hartman and Stampacchia[25] (see also Brezis and Lions [13]). By homogeneity, we mayassume that ‖u‖ L r + ‖f‖ L p +L ∞ ≤ 1. In particular, f = f 1 + f 2 with‖f 1 ‖ L p ≤ 1 and ‖f 2 ‖ L ∞ ≤ 1. Since −u solves the same equation as


4.2. L p REGULARITY FOR LINEAR EQUATIONS 105u, with f replaced by −f (which satisfies the same assumptions), it issufficient <strong>to</strong> estimate ‖u + ‖ L ∞. Set T = ‖u + ‖ L ∞ ∈ [0, ∞] and assumethat T > 0. For t ∈ (0, T ), set v(t) = (u − t) + . We have v(t) ∈ H0 1 (Ω)by Corollary 5.3.6. Let now α(t) = |{x ∈ Ω, u(x) > t}| for all t > 0.Note that α(t) is always finite. In particular, since v(t) ∈ L 2 (Ω) issupported in {x ∈ Ω, u(x) > t}, we have v(t) ∈ L 1 (Ω). We set∫β(t) = v(t) dx.ΩIntegrating the function 1 {u>s} (x) on (t, ∞)×Ω and applying Fubini’sTheorem, we obtainβ(t) =∫ ∞tα(s) ds,so that β ∈ W 1,1loc (0, ∞) and β ′ (t) = −α(t), (4.2.3)for almost all t > 0. The idea of the proof is <strong>to</strong> obtain a differentialinequality on β(t) which implies that β(t) must vanish for t largeenough. Taking the H −1 − H0 1 duality product of (4.2.1) with v(t),we obtain ∫ ∫∇u · ∇v(t) + λ uv(t) = (f, v(t)) H −1 ,H 1,0ΩΩfor every t > 0. Therefore, by applying formula (5.3.3) and the propertyv(t) ∈ L 1 (Ω), we deduce that∫∫{|∇v(t)| 2 + λ|v(t)| 2 } dx = (f − tλ)v(t) dx.ΩSince λ > −λ 1 , we deduce by applying (2.1.8) that∫∫‖v(t)‖ 2 H ≤ C (f − tλ)v(t) dx ≤ C (|f| + t|λ|)v(t) dx. (4.2.4)1ΩWe observe that∫ ∫|f|v(t) ≤ (|f 1 | + |f 2 |)v(t) ≤ ‖f 1 ‖ L p‖v(t)‖ L p ′ + ‖f 2‖ L ∞‖v(t)‖ L 1ΩΩ≤ ‖v(t)‖ L p ′ + ‖v(t)‖ L 1,and we deduce from (4.2.4) thatΩΩ‖v(t)‖ 2 H ≤ C(1 + t)(‖v(t)‖ 1 L + ‖v(t)‖ p′ L1). (4.2.5)


106 4. REGULARITY AND QUALITATIVE PROPERTIESFix now ρ > 2p ′ such that ρ < 2N/(N − 2) (ρ < ∞ if N = 1). (Notethat this is possible since p > min{1, N/2}.) We have in particularH0 1 (Ω) ↩→ L ρ (Ω). Also, , it follows from Hölder’s inequality that‖v(t)‖ L 1 ≤ α(t) 1− 1 ρ ‖v(t)‖L ρ and ‖v(t)‖ L p ′ ≤ α(t) 1 p ′ − 1 ρ‖v(t)‖ L ρ. Thuswe deduce from (4.2.5) that‖v(t)‖ 2 L ρ ≤ C(1 + t)(α(t) 1 p ′ − 1 ρ+ +α(t) 1− 1 ρ )‖v(t)‖L ρ.Finally, since β(t) = ‖v(t)‖ L 1 ≤ α(t) 1− 1 ρ ‖v(t)‖L ρ, we obtainwhich we can write asβ(t) ≤ C(1 + t)(α(t) 1+ 1 p ′ − 2 ρ+ +α(t) 2− 2 ρ ),β(t) ≤ C(1 + t)F (α(t)),with F (s) = s 1+ 1 p ′ − 2 ρ+ s 2− 2 ρ . It follows that− α(t) + F −1( β(t))≤ 0. (4.2.6)C(1 + t)Setting z(t) =β(t) , we deduce from (4.2.3) and (4.2.6) thatC(1 + t)sz ′ + ψ(z(t))C(1 + t) ≤ 0,with ψ(s) = F −1 (s)+Cs. Integrating the above differential inequalityyields∫ t∫dσz(s)C(1 + σ) ≤ dσψ(σ) ,for all 0 < s < t < T . If T ≤ 1, then by definition ‖u + ‖ L ∞ ≤ 1.Otherwise, we obtain∫ t1z(t)∫dσz(1)C(1 + σ) ≤ dσψ(σ) ,z(t)for all 1 < t < T , which implies in particular that∫ T1∫dσz(1)C(1 + σ) ≤ dσψ(σ) .Note that F (s) ≈ s 1+ 1 p ′ − 2 ρas s ↓ 0 and 1 + 1/p ′ − 2/ρ > 1, so that1/ψ is integrable near zero. Since 1/(1 + σ) is not integrable at ∞,0


4.2. L p REGULARITY FOR LINEAR EQUATIONS 107this implies that T = ‖u + ‖ L ∞ < ∞. Moreover, ‖u + ‖ L ∞ is estimatedin terms of z(1), andz(1) = 1 ∫− 1)C Ω(u + ≤ 1 ∫u ≤ 1 ∫u r ≤ 1 C {u>1} C {u>1} C .This completes the proof.One can improve the L p estimates by using Sobolev’s inequalities.In particular, we have the following result.Theorem 4.2.3. Let λ > 0, f ∈ H −1 (Ω) and let u ∈ H 1 0 (Ω) bethe solution of (4.2.1). If f ∈ L p (Ω) for some p ∈ (1, ∞], then thefollowing properties hold.(i) If p > N/2, then u ∈ L p (Ω)∩L ∞ (Ω), and there exists a constantC independent of f such that ‖u‖ L r ≤ C‖f‖ L p for all r ∈ [p, ∞].(ii) If p = N/2 and N ≥ 3, then u ∈ L r (Ω) for all r ∈ [p, ∞), andthere exist constants C(r) independent of f such that ‖u‖ L r ≤C(r)‖f‖ L p.(iii) If 1 < p < N/2 and N ≥ 3, then u ∈ L p (Ω) ∩ L NpN−2p (Ω), andthere exists a constant C independent of f such that ‖u‖ L r ≤C‖f‖ L p for all r ∈ [p, Np/(N − 2p)].Proof. Property (i) follows from Theorems 4.2.1 and 4.2.2 andHölder’s inequality. It remains <strong>to</strong> establish properties (ii) and (iii).Note that in this case N ≥ 3. By density, it is sufficient <strong>to</strong> establishthese properties for f ∈ Cc∞ (Ω). In this case, we have u ∈ L 1 (Ω) ∩L ∞ (Ω) by Theorem 4.2.1. Consider an odd, increasing function ϕ :R → R such that ϕ ′ is bounded and define∫ x √ψ(x) = ϕ′ (s) ds. (4.2.7)0It follows that ψ is odd, nondecreasing, and that ψ ′ is bounded. ByProposition 5.3.1, ϕ(u) and ψ(u) both belong <strong>to</strong> H 1 0 (Ω), and|∇ψ(u)| 2 = ϕ ′ (u)|∇u| 2 = ∇u · ∇(ϕ(u)), (4.2.8)a.e. Taking the H −1 − H01 duality product of (4.2.1) with ϕ(u), itfollows from (4.2.8) that∫(|∇(ψ(u))| 2 + λuϕ(u)) dx = (f, ϕ(u)) H −1 ,H0 1.Ω□


108 4. REGULARITY AND QUALITATIVE PROPERTIESIn addition, xϕ(x) ≥ 0, and it follows from (4.2.7) and Cauchy-Schwarz inequality that xϕ(x) ≥ |ψ(x)| 2 . Therefore, there exists aconstant C such that‖ψ(u)‖ 2 H 1 ≤ C(f, ϕ(u)) H −1 ,H 1 0 .We deduce that, given any p ∈ [1, ∞],‖ψ(u)‖ 2 H 1 ≤ C‖f‖ L p‖ϕ(u)‖ L p′ .Since H 1 0 (Ω) ↩→ L 2NN−2 (Ω), it follows that‖ψ(u)‖ 2 L 2NN−2≤ C‖f‖ L p‖ϕ(u)‖ L p ′ . (4.2.9)Consider now 1 < q < ∞ such that (q − 1)p ′ ≥ 1. If q ≤ 2, letϕ ε (x) = x(ε + x 2 ) q−2q . If q > 2, take ϕε (x) = x|x| q−2 (1 + εx 2 ) 2−q2 .It follows that |ϕ ε (x)| ≤ C|x| q−1 and that |ϕ ε (x)| → |x| q−1 as ε ↓ 0.One verifies easily that |ψ ε (x)| 2 ≤ C|x| q and that |ψ ε (x)| 2 → (4(q −1)/q 2 )|x| q . Applying (4.2.9), then letting ε ↓ 0 and applying thedominated convergence theorem, it follows that‖u‖ q L NqN−2≤ Cq2q − 1 ‖f‖ L p‖u‖q−1 L (q−1)p′, (4.2.10)for all 1 < q < ∞ such that (q −1)p ′ ≥ 1. We now prove property (ii).Suppose that N ≥ 3 and that p = N/2. Apply (4.2.10) with q > N/2.It follows that‖u‖ q L NqN−2q2≤ Cq − 1 ‖f‖ L N/2‖u‖q−1N(q−1)L N−2. (4.2.11)On the other hand, it follows from Hölder’s inequality and Theorem4.2.1 that‖u‖ q−1N(q−1)L N−2(2q−N)q2q−N+2≤ ‖u‖L NqN−2Substitution in (4.2.11) yields‖u‖LNqN−2‖u‖ N−22q−N+2L N 2≤ C(q)‖f‖ LN2.(2q−N)q2q−N+2≤ ‖u‖L NqN−2‖f‖ N−22q−N+2L N 2Property (ii) follows from the above estimate and Theorem 4.2.1,since q is arbitrary. Finally, we prove property (iii). We set q =(N − 2)p/(N − 2p). It follows in particular that Nq/(N − 2) = (q −1)p ′ = Np/(N − 2p), so that by (4.2.10)‖u‖LNpN−2p≤ C ′ ‖f‖ L p..


Property (iii) follows from the above estimate and Theorem 4.2.1. □

Corollary 4.2.4. Let λ > 0, let f ∈ H^{−1}(Ω) and let u ∈ H^1_0(Ω) be the solution of (4.2.1). If f ∈ L^1(Ω), then the following properties hold.
(i) If N = 1, then u ∈ L^1(Ω) ∩ L^∞(Ω), and there exists a constant C independent of f and r such that ‖u‖_{L^r} ≤ C‖f‖_{L^1} for all r ∈ [1, ∞].
(ii) If N ≥ 2, then u ∈ L^r(Ω) for all r ∈ [1, N/(N − 2)), and there exists a constant C(r) independent of f such that ‖u‖_{L^r} ≤ C(r)‖f‖_{L^1}.

Proof. If N = 1, then u ∈ H^1_0(Ω) ↪ L^∞(Ω). Taking the H^{−1}−H^1_0 duality product of (4.2.1) with u, we deduce easily that there exists µ > 0 such that
µ‖u‖²_{H^1} ≤ (f, u)_{H^{−1}, H^1_0} ≤ ‖f‖_{L^1} ‖u‖_{L^∞} ≤ C‖f‖_{L^1} ‖u‖_{H^1}.
Therefore, µ‖u‖_{H^1} ≤ C‖f‖_{L^1}, and (i) follows. In the case N ≥ 2, we use a duality argument. Let u and f be as in the statement of the corollary. It follows from Theorem 4.2.1 that u ∈ L^1(Ω) and
‖u‖_{L^1} ≤ C‖f‖_{L^1}.   (4.2.12)
Fix q > N/2. Let h ∈ C_c^∞(Ω), and let ϕ ∈ H^1_0(Ω) be the solution of the equation −Δϕ + λϕ = h. It follows from Theorem 4.2.3 that
‖ϕ‖_{L^∞} ≤ C‖h‖_{L^q}.   (4.2.13)
Since
(f, ϕ)_{H^{−1}, H^1_0} = (−Δu + λu, ϕ)_{H^{−1}, H^1_0} = (u, −Δϕ + λϕ)_{H^1_0, H^{−1}} = (u, h)_{H^1_0, H^{−1}},
we deduce from (4.2.12)-(4.2.13) that
| ∫_Ω uh | ≤ ‖f‖_{L^1} ‖ϕ‖_{L^∞} ≤ C‖f‖_{L^1} ‖h‖_{L^q}.
Since h ∈ C_c^∞(Ω) is arbitrary, we obtain ‖u‖_{L^{q′}} ≤ C‖f‖_{L^1}. Since N/2 < q ≤ ∞ is arbitrary, 1 ≤ q′ < N/(N − 2) is arbitrary and the result follows. □

Remark 4.2.5. Note that the estimates of Theorem 4.2.3 and Corollary 4.2.4 are optimal in the following sense.


110 4. REGULARITY AND QUALITATIVE PROPERTIES(i) If N ≥ 2 and f ∈ L N 2 (Ω), then u is not necessarily in L ∞ (Ω).For example, let Ω be the unit ball, and let u(x) = (− log |x|) γwith γ > 0. Then u ∉ L ∞ (Ω). On the other hand, one verifieseasily that if 0 < γ < 1/2 in the case N = 2 and 0 < γ < 1−2/Nin the case N ≥ 3, then u ∈ H0 1 (Ω) and −△u + u ∈ L N 2 (Ω).(ii) If N ≥ 3 and f ∈ L 1 (Ω), then there is no estimate of the form‖u‖ NL N−2≤ C‖f‖ L 1. (Note that since u ∈ H0 1 (Ω), we alwayshave u ∈ L NN−2 (Ω).) One constructs easily a counter example asfollows. Let Ω be the unit ball, and let u = zϕ with ϕ ∈ Cc∞ (Ω),ϕ(0) ≠ 0, and z(x) = |x| 2−N (− log |x|) γ with γ < 0. Then−△u + u ∈ L 1 (Ω) and u ∉ L NN−2 (Ω). By approximating u bysmooth functions, one deduces that there is no estimate of theform ‖u‖ NL N−2≤ C‖f‖ L 1.(iii) If N ≥ 3 and 1 < p < N/2, then by arguing as above oneshows the following properties. There is no estimate of theform ‖u‖ L q ≤ C‖f‖ L 1 for q > Np/(N − 2p). Moreover, iff ∈ L p (Ω), then in general u ∉ L q (Ω) for q > Np/(N − 2p),q > 2N/(N − 2).Remark 4.2.6. Under some smoothness assumptions on Ω, onecan establish higher order L p estimates. However, the proof of theseestimates is considerably more delicate. In particular, one has thefollowing results.(i) If Ω has a bounded boundary of class C 2 (in fact, C 1,1 is enough)and if 1 < p < ∞, then one can show that for every λ > 0 andf ∈ L p (Ω), there exists a unique solution u ∈ W 1,p0 (Ω)∩W 2,p (Ω)of equation (4.2.1), and that‖u‖ W 2,p ≤ C(‖u‖ L p + ‖f‖ L p),for some constant C independent of f (see e.g. Theorem 9.15,p.241 in Gilbarg and Trudinger [23]). One shows as well that forevery f ∈ W −1,p (Ω), there exists a unique solution u ∈ W 1,p0 (Ω)of equation (4.2.1) (see Agmon, Douglis and Nirenberg [3]).(ii) One has partial results in the cases p = 1 and p = ∞. In particular,if Ω is bounded and smooth enough, then for every λ > 0and f ∈ L 1 (Ω), there exists a unique solution u ∈ W 1,10 (Ω),such that △u ∈ L 1 (Ω), of equation (4.2.1) (see Pazy [39], Theorem3.10, p.218). It follows that λ‖u‖ L 1 ≤ ‖f‖ L 1. In general,


u ∉ W^{2,1}(Ω). If Ω is bounded, it follows from Theorems 2.1.4 and 4.2.1 that for every λ > 0 and f ∈ L^∞(Ω), there exists a unique solution u ∈ H^1_0(Ω) ∩ L^∞(Ω), such that Δu ∈ L^∞(Ω), of equation (4.2.1). In general, u ∉ W^{2,∞}(Ω), even if Ω is smooth. On the other hand, it follows from property (i) above that u ∈ W^{1,p}_0(Ω) ∩ W^{2,p}(Ω) for every p < ∞.

Corollary 4.2.7. Let λ > 0, f ∈ H^{−1}(Ω) and let u ∈ H^1_0(Ω) be the solution of (4.2.1). If f ∈ L^p(Ω) for some 1 ≤ p < ∞ with p > N/2, or if f ∈ C_0(Ω), then u ∈ L^∞(Ω) ∩ C(Ω).

Proof. Let (f_n)_{n≥0} ⊂ C_c^∞(Ω) be such that f_n → f in L^p(Ω) as n → ∞ (we let p = ∞ in the case f ∈ C_0(Ω)), and let (u_n)_{n≥0} be the corresponding solutions of (4.2.1). It follows from Proposition 4.1.2 (ii) that u_n ∈ C(Ω). Moreover, u_n → u in L^∞(Ω) by Theorem 4.2.3 (i) (or Corollary 4.2.4 (i) in the case p = 1 = N), and the result follows. □

4.3. C_0 regularity for linear equations

In this section we show that if Ω satisfies certain geometric assumptions, then the solution of (4.2.1) with sufficiently smooth right-hand side is continuous at ∂Ω.

Theorem 4.3.1. If N ≥ 2, suppose that there exists ρ > 0 such that for every x_0 ∈ ∂Ω there exists y(x_0) ∈ R^N with |x_0 − y(x_0)| = ρ and B(y(x_0), ρ) ∩ Ω = ∅. Let λ > 0, f ∈ H^{−1}(Ω) ∩ L^∞(Ω) and let u ∈ H^1_0(Ω) be the solution of (4.2.1). It follows that
|u(x)| ≤ C‖f‖_{L^∞} d(x, ∂Ω),   (4.3.1)
for all x ∈ Ω, where C is independent of f.

Proof. We may assume without loss of generality that |f| ≤ 1, so that |u| ≤ λ^{−1} by Theorem 4.2.1. We may also suppose N ≥ 2, for the case N = 1 is immediate. We construct a local barrier at every point of ∂Ω. Given c > 0, set
w(x) = (1/4)(ρ² − |x|²) + c log(|x|/ρ)   if N = 2,
w(x) = (1/(2N))(ρ² − |x|²) + c(ρ^{2−N} − |x|^{2−N})   if N ≥ 3.   (4.3.2)
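Before continuing with the proof, here is a quick symbolic check (added for illustration; it assumes SymPy is available and is not part of the original argument) that both expressions in (4.3.2) satisfy −Δw = 1 away from the origin, using the radial form Δw = w″(r) + (N − 1)w′(r)/r.

```python
# Symbolic sketch: the barrier w of (4.3.2) satisfies -Delta w = 1 for r > 0.
import sympy as sp

r, rho, c, N = sp.symbols('r rho c N', positive=True)

def radial_laplacian(w):
    # Laplacian of a radial function w(r) in dimension N
    return sp.diff(w, r, 2) + (N - 1) / r * sp.diff(w, r)

# Case N >= 3
w3 = (rho**2 - r**2) / (2 * N) + c * (rho**(2 - N) - r**(2 - N))
print(sp.simplify(radial_laplacian(w3)))              # expected: -1

# Case N = 2
w2 = (rho**2 - r**2) / 4 + c * sp.log(r / rho)
print(sp.simplify(radial_laplacian(w2).subs(N, 2)))   # expected: -1
```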


112 4. REGULARITY AND QUALITATIVE PROPERTIESIt follows that −△w = 1 in R N \ {0}. Furthermore, we see that if cis large enough, then there exist ρ 1 > ρ 0 > ρ such thatw(x) > 0 for ρ < |x| ≤ ρ 1 , w(x) ≥ λ −1 for ρ 0 ≤ |x| ≤ ρ 1 . (4.3.3)Given c as above, we observe that there exists a constant K such thatw(x) ≤ K(|x| − ρ) for ρ ≤ |x| ≤ ρ 1 . (4.3.4)Let now ˜x ∈ Ω such that 2d(˜x, ∂Ω) < ρ 1 − ρ, and let x 0 ∈ ∂Ω be suchthat |˜x − x 0 | ≤ 2d(˜x, ∂Ω). Set ˜Ω = {x ∈ Ω; ρ < |x − y(x 0 )| < ρ 1 } andv(x) = w(x − y(x 0 )) for x ∈ ˜Ω. We note that |˜x − y(x 0 )| > 0 by thegeometric assumption. Moreover,|˜x − y(x 0 )| ≤ |˜x − x 0 | + |x 0 − y(x 0 )| < ρ 1 − ρ + ρ = ρ 1 ,so that ˜x ∈ ˜Ω. Next, it follows from (4.3.3)-(4.3.4) that0 ≤ v(x) ≤ K(|x − y(x 0 )| − ρ) ≤ K(|x − x 0 | + |x 0 − y(x 0 )| − ρ)= K|x − x 0 | ≤ 2Kd(x, ∂Ω),for all x ∈ ˜Ω. On the other hand,−△(u − v) + λ(u − v) = f − (1 + λv) ≤ f − 1 ≤ 0,in ˜Ω. We claim that(u − v) + ∈ H 1 0 (˜Ω). (4.3.5)It then follows from the maximum principle that u(x) ≤ v(x) ≤2Kd(x, ∂Ω) for a.a. x ∈ ˜Ω. Changing u <strong>to</strong> −u, one obtains as wellthat −u ≤ v, so that |u(x)| ≤ 2Kd(x, ∂Ω) for a.a. x ∈ ˜Ω. In particular,|u(˜x)| ≤ 2Kd(x, ∂Ω). Since ˜x is arbitrary, we deduce thatif x ∈ Ω such that 2d(x, ∂Ω) < ρ 1 − ρ, then |u(x)| ≤ 2Kd(x, ∂Ω).For x ∈ Ω such that 2d(x, ∂Ω) ≥ ρ 1 − ρ, we have u(x) ≤ λ −1 ≤2λ −1 (ρ 1 − ρ) −1 d(x, ∂Ω), and the result follows.It now remains <strong>to</strong> establish the claim (4.3.5). Let ϕ ∈ Cc ∞ (R N )satisfy 0 ≤ ϕ ≤ 1, ϕ ≡ 1 on the set {|x − y(x 0 )| ≤ ρ 0 } and ϕ ≡ 0 onthe set {|x−y(x 0 )| ≥ ρ 1 }. We note that by (4.3.3), u ≤ λ −1 ≤ v, thusϕu − v ≤ u − v ≤ 0 on ˜Ω ∩ {|x − y(x 0 )| ≥ ρ 0 }. Therefore, (ϕu − v) + =(u − v) + = 0 on ˜Ω ∩ {|x − y(x 0 )| ≥ ρ 0 }. On ˜Ω ∩ {|x − y(x 0 )| ≤ ρ 0 },ϕu − v = u − v, so that (u − v) + = (ϕu − v) + in ˜Ω. Let now(u n ) n≥0 ⊂ Cc ∞ (Ω) satisfy u n → u in H 1 (Ω) as n → ∞. It follows(see Proposition 5.3.3) that (ϕu n − v) + → (ϕu − v) + = (u − v) + in


H^1(Ω̃). Thus, we need only verify that (ϕu_n − v)⁺ ∈ H^1_0(Ω̃). This follows from Remark 5.1.10 (i), because ϕu_n = 0 and v ≥ 0 on ∂Ω̃. □

Remark 4.3.2. Here are some comments on Theorem 4.3.1.
(i) One verifies easily that if ∂Ω is uniformly of class C², then the geometric assumption of Theorem 4.3.1 is satisfied.
(ii) One can weaken the regularity assumption on Ω and still obtain the continuity of u at ∂Ω. However, one does not obtain in general the estimate (4.3.1).
(iii) Without any regularity assumption, it can happen that u ∉ C_0(Ω) even for some f ∈ C_c^∞(Ω). For example, let Ω = R^N \ {0} with N ≥ 2 and set ϕ(x) = cosh x_1 for x ∈ Ω. It follows that ϕ ∈ C^∞(R^N) and −Δϕ + ϕ = 0. Let now ψ ∈ C_c^∞(R^N) satisfy ψ ≡ 1 for |x| ≤ 1 and ψ ≡ 0 for |x| ≥ 2. Set u = ϕψ, so that u ∈ C_c^∞(R^N). It is not difficult to verify that u ∈ H^1_0(Ω) (see Remark 5.1.10 (iii)). On the other hand, −Δu + u = 0 for |x| ≤ 1 and for |x| ≥ 2. In particular, if we set f = −Δu + u, then f ∈ C_c^∞(Ω). Finally, u ∉ C_0(Ω), since u = 1 on ∂Ω.

Corollary 4.3.3. Suppose Ω satisfies the assumption of Theorem 4.3.1. Let λ > 0, f ∈ H^{−1}(Ω) ∩ L^∞(Ω) and let u ∈ H^1_0(Ω) be the solution of (4.2.1). If f ∈ L^p(Ω) for some 1 ≤ p < ∞ with p > N/2, or if f ∈ C_0(Ω), then u ∈ C_0(Ω).

Proof. By Corollary 4.2.7, u ∈ C(Ω). Continuity at ∂Ω follows from Theorem 4.3.1. □

4.4. Bootstrap methods

In this section, we consider an arbitrary open domain Ω ⊂ R^N and we study the regularity of the solutions of the equation
−Δu = g(u) in Ω,   u = 0 on ∂Ω,
where g is a given nonlinearity. We use the regularity properties of the linear equation (Sections 4.1, 4.2 and 4.3 above) and bootstrap arguments. We prove two kinds of results. We establish sufficient conditions (on g and u) so that u ∈ L^∞(Ω). We also prove interior regularity, assuming u is (locally) bounded and g is sufficiently smooth. Our first result is the following.


Theorem 4.4.1. Let g(x, t) : Ω × R → R be measurable in x ∈ Ω for all t ∈ R and continuous in t ∈ R for a.a. x ∈ Ω. Assume further that there exist p ≥ 1 and a constant C such that
|g(x, t)| ≤ C(|t| + |t|^p),   (4.4.1)
for a.a. x ∈ Ω and all t ∈ R. Let u ∈ H^1_0(Ω) satisfy g(·, u(·)) ∈ H^{−1}(Ω) and assume that
−Δu = g(·, u(·)),   (4.4.2)
in H^{−1}(Ω). If N ≥ 3, suppose that u ∈ L^q(Ω) for some
q ≥ p,   q > N(p − 1)/2.   (4.4.3)
It follows that u ∈ L^∞(Ω) ∩ C(Ω).

Proof. If N = 1, then the result follows from the embedding H^1_0(Ω) ↪ C_0(Ω). If N = 2, then H^1_0(Ω) ↪ L^r(Ω) for all 2 ≤ r < ∞. In particular, it follows from (4.4.1) that g(·, u) ∈ L^N(Ω), and the result follows from Corollary 4.2.7. Therefore, we may now suppose that N ≥ 3. We first proceed to a reduction in order to eliminate the first term in the right-hand side of (4.4.1). Let η ∈ C_c^∞(R) satisfy η(t) = 1 for |t| ≤ 1. Set g_1(x, t) = η(t)(t + g(x, t)) and g_2(x, t) = (1 − η(t))(t + g(x, t)). It follows from (4.4.1) that
|g_1(x, t)| ≤ C min{1, |t|},   (4.4.4)
|g_2(x, t)| ≤ C|t|^p,   (4.4.5)
for a.e. x ∈ Ω and all t ∈ R. It follows from (4.4.4) that g_1(·, u) ∈ L²(Ω) ∩ L^∞(Ω). Therefore, if we denote by u_1 ∈ H^1_0(Ω) the solution of the equation
−Δu_1 + u_1 = g_1(·, u),   (4.4.6)
then it follows from Corollary 4.2.7 that
u_1 ∈ L²(Ω) ∩ L^∞(Ω) ∩ C(Ω).   (4.4.7)
Therefore, we need only show that u_2 = u − u_1 ∈ L^∞(Ω) ∩ C(Ω). We note that by (4.4.2) and (4.4.6), u_2 ∈ H^1_0(Ω) satisfies the equation
−Δu_2 + u_2 = g_2(·, u).   (4.4.8)
We now note that, since u ∈ L²(Ω), we may assume that
q ≥ 2,   (4.4.9)
and we proceed in three steps.
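Before carrying out the three steps, here is a small numerical aside (added for illustration; it is not part of the original argument). The sketch below computes the exponent gain used in Step 2 below, namely the iteration r ↦ θr with θ = N/(Np − 2q) as in (4.4.10), and shows how quickly it pushes the integrability exponent past Np/2, after which Step 1 applies.

```python
# Illustration: the exponent iteration behind the bootstrap of Theorem 4.4.1, Step 2.
# Starting from u in L^q, each pass through the linear L^p-estimates upgrades
# L^r to L^{theta*r}, where theta = N/(Np - 2q) > 1 under hypothesis (4.4.3).
def bootstrap_exponents(N, p, q):
    assert q >= p and q > N * (p - 1) / 2, "hypothesis (4.4.3)"
    theta = N / (N * p - 2 * q) if N * p - 2 * q > 0 else float("inf")
    r, exponents = q, [q]
    while r <= N * p / 2 and theta != float("inf"):
        r *= theta
        exponents.append(r)
    return exponents   # the last exponent exceeds Np/2, so Step 1 finishes the proof

# Example (values chosen for illustration): N = 3, p = 3, q = 4 > N(p-1)/2 = 3.
print(bootstrap_exponents(3, 3, 4))
```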


Step 1. The case q > Np/2. It follows from (4.4.5) that g_2(·, u) ∈ L^{q/p}(Ω). Since q/p > N/2 > 1, we deduce from Corollary 4.2.7 that u_2 ∈ L^∞(Ω) ∩ C(Ω), and the result follows by applying (4.4.7).

Step 2. The case p < q ≤ Np/2. Let
θ = N/(Np − 2q) > 1,   (4.4.10)
by (4.4.3). Suppose u ∈ L^r(Ω) for some q ≤ r < Np/2. It follows from (4.4.5) that g_2(·, u) ∈ L^{r/p}(Ω). Since r/p ≥ q/p > 1 and r/p < N/2, we deduce from Theorem 4.2.3 (iii) that u_2 ∈ L^{θr}(Ω). Moreover, θr ≥ r ≥ 2, so that u_1 ∈ L^{θr}(Ω) by (4.4.7). Therefore, u ∈ L^{θr}(Ω). Let now k be an integer such that θ^k q ≤ Np/2 < θ^{k+1} q. Using successively r = θ^j q with j = 0, · · · , k in the argument just above, we deduce that u ∈ L^{θ^{k+1}q}(Ω). Since θ^{k+1}q > Np/2, the result now follows by Step 1.

Step 3. The case p = q (≤ Np/2). It follows from (4.4.5) that g_2(·, u) ∈ L^1(Ω). By (4.4.8) and Corollary 4.2.4 (ii), we deduce that u_2 ∈ L^r(Ω) for all r ∈ [1, N/(N − 2)). Finally, we note that
N/(N − 2) = Nq/(Np − 2q) = θq > q,
by (4.4.10). In particular, N/(N − 2) > 2, so that u ∈ L^r(Ω) for all r ∈ [2, N/(N − 2)). Thus we are reduced to the case of Step 2. □

If |Ω| < ∞, then one can weaken the assumption (4.4.1), as the following result shows.

Corollary 4.4.2. Assume |Ω| < ∞. Let g(x, t) : Ω × R → R be measurable in x ∈ Ω for all t ∈ R and continuous in t ∈ R for a.a. x ∈ Ω. Assume further that there exist p ≥ 1 and a constant C such that
|g(x, t)| ≤ C(1 + |t|^p),   (4.4.11)
for a.a. x ∈ Ω and all t ∈ R. Let u ∈ H^1_0(Ω) satisfy g(·, u(·)) ∈ H^{−1}(Ω) and (4.4.2) in H^{−1}(Ω). If N ≥ 3, assume further (4.4.3). It follows that u ∈ L^∞(Ω) ∩ C(Ω).

Proof. The proof of Theorem 4.4.1 applies, except for the beginning which requires a minor modification. Instead of (4.4.4), the


116 4. REGULARITY AND QUALITATIVE PROPERTIESfunction g 1 satisfies |g 1 (x, t)| ≤ C; and so, g 1 (·, u) ∈ L ∞ (Ω) ↩→ L 2 (Ω),since |Ω| < ∞. The remaining of the proof is then unchanged. □Corollary 4.4.3. Let g(x, t) : Ω × R → R be measurable inx ∈ Ω for all t ∈ R and continuous in t ∈ R for a.a. x ∈ Ω. Assumefurther (4.4.10) (or (4.4.11) if |Ω| < ∞). Let u ∈ H0 1 (Ω) satisfyg(·, u(·)) ∈ H −1 (Ω) and (4.4.2) in H −1 (Ω). If N ≥ 3, assume furtherthatp < N + 2N − 2 . (4.4.12)It follows that u ∈ L ∞ (Ω) ∩ C(Ω).Proof. If N ≥ 3, then, since u ∈ H0 1 (Ω), we have u ∈ L q (Ω) withq = 2N/(N −2). We deduce from (4.4.12) that q satisfies (4.4.3). Theresult now follows from Theorem 4.4.1 (or Corollary 4.4.2 if |Ω| 0, there existsa constant C(M) such that‖g(u)‖ W m,p ≤ C(M)‖u‖ W m,p, (4.4.13)for all u ∈ W m,p (R N ) ∩ L ∞ (R N ) such that ‖u‖ L ∞ ≤ M.Proof. We fix M > 0, so that we may without loss of generalitymodify g(t) for |t| > M. Considering for example a function η ∈


C_c^∞(R) such that η(t) = 1 for |t| ≤ M, we may replace g(t) by η(t)g(t), so that we may assume
sup_{t∈R} sup_{0≤j≤m} |g^{(j)}(t)| < ∞.   (4.4.14)
Next, we observe that if 0 ≤ |β| ≤ l, then it follows from Gagliardo-Nirenberg's inequality (5.4.2) (in fact, the simpler interpolation inequality (5.4.25) is sufficient) that there exists a constant C such that
|u|_{|β|, lp/|β|} ≤ C |u|_{l,p}^{|β|/l} ‖u‖_{L^∞}^{(l−|β|)/l},   (4.4.15)
for all u ∈ C_c^l(R^N). We now proceed in three steps.

Step 1. L^p estimates. It follows from (4.4.14) that |g(t)| ≤ C|t|, from which we deduce that
‖g(u)‖_{L^p} ≤ C‖u‖_{L^p},   (4.4.16)
for all u ∈ L^p(R^N). Next, since |g(t) − g(s)| ≤ C|t − s| by (4.4.14), we see that
‖g(u) − g(v)‖_{L^p} ≤ C‖u − v‖_{L^p},   (4.4.17)
for all u, v ∈ L^p(R^N).

Step 2. The case u ∈ C_c^∞(R^N). Let u ∈ C_c^∞(R^N) with ‖u‖_{L^∞} ≤ M. Let 1 ≤ l ≤ m and consider a multi-index α with |α| = l. It is not difficult to show that D^α g(u) is a sum of terms of the form
g^{(k)}(u) ∏_{j=1}^{k} D^{β_j} u,   (4.4.18)
where k ∈ {1, . . . , l} and the β_j's are multi-indices such that α = β_1 + · · · + β_k and |β_j| ≥ 1. Let p_j = lp/|β_j|, so that
∑_{j=1}^{k} 1/p_j = 1/p.   (4.4.19)
It follows from (4.4.19), Hölder's inequality and (4.4.15) that
‖ ∏_{j=1}^{k} D^{β_j} u ‖_{L^p} ≤ ∏_{j=1}^{k} ‖D^{β_j} u‖_{L^{p_j}} ≤ C(M)‖u‖_{W^{l,p}}.   (4.4.20)
We deduce from (4.4.16), (4.4.18) and (4.4.20) that (4.4.13) holds for all u ∈ C_c^∞(R^N).
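The structure (4.4.18) can be checked symbolically in a low-order case. The sketch below (added for illustration; it assumes SymPy and is not part of the original proof) verifies that for |α| = 2 one has ∂_x∂_y g(u) = g″(u) ∂_x u ∂_y u + g′(u) ∂_x∂_y u, i.e. a sum of terms g^{(k)}(u) ∏ D^{β_j}u with β_1 + · · · + β_k = α.

```python
# Symbolic sketch of the expansion (4.4.18) for a derivative of order two:
# d_x d_y g(u) = g''(u) d_x u d_y u + g'(u) d_x d_y u.
import sympy as sp

x, y = sp.symbols('x y')
u = sp.Function('u')(x, y)
g = sp.Function('g')

lhs = sp.diff(g(u), x, y)
rhs = (sp.diff(g(u), u, 2) * sp.diff(u, x) * sp.diff(u, y)
       + sp.diff(g(u), u) * sp.diff(u, x, y))
print(sp.simplify(lhs - rhs))   # expected output: 0
```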


Step 3. Conclusion. Let u ∈ W^{m,p}(R^N) with ‖u‖_{L^∞} ≤ M and let (u_n)_{n≥0} ⊂ C_c^∞(R^N) with u_n → u in W^{m,p}(R^N). We note that one can construct the sequence (u_n)_{n≥0} by truncation and regularization, so that we may also assume that ‖u_n‖_{L^∞} ≤ M. Therefore, it follows from Step 2 that
‖g(u_n)‖_{W^{m,p}} ≤ C(M)‖u_n‖_{W^{m,p}}.   (4.4.21)
Since g(u_n) → g(u) in L^p(R^N) by (4.4.17), it follows from (4.4.21) that g(u) ∈ W^{m,p}(R^N) and that g(u_n) ⇀ g(u) in W^{m,p}(R^N). (See Lemma 5.5.3.) Letting n → ∞ in (4.4.21), we obtain (4.4.13). □

Corollary 4.4.7. Let m ≥ 0 and g ∈ C^m(R, R). Let Ω be an open domain of R^N. If u ∈ W^{m,p}_{loc}(Ω) ∩ L^∞_{loc}(Ω) for some 1 ≤ p < ∞, then g(u) ∈ W^{m,p}_{loc}(Ω).

Proof. The result is immediate for m = 0, so we assume m ≥ 1. Let ω ⊂⊂ Ω and let ϕ ∈ C_c^∞(Ω) satisfy ϕ(x) = 1 for all x ∈ ω. Set
v(x) = ϕ(x)u(x) if x ∈ Ω,   v(x) = 0 if x ∈ R^N \ Ω,
so that v ∈ W^{m,p}(R^N) ∩ L^∞(R^N). Letting h(t) = g(t) − g(0), we deduce from Proposition 4.4.6 that h(v) ∈ W^{m,p}(R^N). Since h(v) = h(u) = g(u) − g(0) in ω, we deduce that g(u) ∈ W^{m,p}(ω). Hence the result, since ω ⊂⊂ Ω is arbitrary. □

Proof of Theorem 4.4.5. Fix 1 < p < ∞. Let 0 ≤ j ≤ m and assume that u ∈ W^{j,p}_{loc}(Ω). (This is certainly true for j = 0.) It follows from Corollary 4.4.7 that g(u) ∈ W^{j,p}_{loc}(Ω), so that u ∈ W^{j+2,p}_{loc}(Ω) by Proposition 4.1.2 (i). By induction on j, we deduce that u ∈ W^{m+2,p}_{loc}(Ω) for all 1 < p < ∞. Let now ω ⊂⊂ Ω and consider a function ϕ ∈ C_c^∞(Ω) such that ϕ = 1 on ω. It follows that v = ϕu ∈ W^{m+2,p}_0(Ω) for all 1 < p < ∞. Applying Theorem 5.4.16, we deduce that v ∈ C^{m+1, (p−N)/p}(Ω) for all N < p < ∞. Since v = u in ω and ω ⊂⊂ Ω is arbitrary, the result follows. □

We end this section with two results concerning the case Ω = R^N. One is a higher-order global regularity result, and the other an exponential decay property.
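As a concrete illustration of the decay property established in Proposition 4.4.9 below, consider the one-dimensional model −u″ = u³ − u on R, that is g(t) = t³ − t, with g(0) = 0 and g′(0) = −1; the explicit solution u(x) = √2 sech x decays like e^{−|x|}, in agreement with δ = √(−g′(0)) = 1. The sketch below (added for illustration; the model and the grid are arbitrary choices) checks numerically that e^{ε|x|}(|u| + |u′|) remains bounded for every ε < 1.

```python
# Numerical sketch of the exponential decay in Proposition 4.4.9 for the model
# -u'' = u^3 - u on R (g(t) = t^3 - t, g'(0) = -1, delta = 1).  The explicit
# solution u(x) = sqrt(2) sech(x) satisfies sup e^{eps|x|}(|u| + |u'|) < infinity
# for every eps < 1.
import numpy as np

x = np.linspace(0.0, 30.0, 3001)
u = np.sqrt(2.0) / np.cosh(x)                       # sqrt(2) * sech(x)
du = -np.sqrt(2.0) * np.sinh(x) / np.cosh(x) ** 2   # u'(x)

for eps in (0.5, 0.9, 0.99):
    weighted = np.exp(eps * x) * (np.abs(u) + np.abs(du))
    print(f"eps = {eps:4.2f}: sup of e^(eps|x|)(|u|+|u'|) ~ {weighted.max():.3f}")
# For eps = 1 the weighted quantity tends to the constant 4*sqrt(2); any eps > 1
# would make it blow up, so the decay rate e^{-|x|} is sharp for this example.
```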


4.4. BOOTSTRAP METHODS 119Theorem 4.4.8. Let m ≥ 1 and g ∈ C m (R, R) satisfy g(0) = 0.Let u ∈ L r (R N ) ∩ L ∞ (R N ) for some 1 < r < ∞. If u satisfies−∆u = g(u) in D ′ (R N ), then u ∈ W m+2,p (R N ) for all r ≤ p < ∞.In particular, u ∈ C0 m+1 (R N ) ∩ C m+1,α (R N ) for all 0 < α < 1.Proof. Fix r ≤ p < ∞. Let 0 ≤ j ≤ m and assume thatu ∈ W j,p (R N ). (This is certainly true for j = 0.) It follows fromProposition 4.4.6 (if j ≥ 1; or a direct calculation based on the propertyg ∈ C 1 and g(0) = 0 if j = 0) that g(u) ∈ W j,p (R N ), so thatu ∈ W j+2,p (R N ) by Proposition 4.1.1. By induction on j, we deducethat u ∈ W m+2,p (R N ) for all r ≤ p < ∞. The last property followsfrom Theorem 5.4.16.□Proposition 4.4.9. Let g ∈ C 1 (R, R). Assume g(0) = 0, g ′ (0) 0 such thatsupx∈R N e ε|x| (|u(x)| + |∇u(x)|) < ∞.andProof. We set δ = √ −g ′ (0) > 0 and h(t) = g(t) + δ 2 t, so thath(t)t−→ 0, (4.4.22)t→0− ∆u + δ 2 u = h(u). (4.4.23)Given ε > 0, we set ϕ ε (x) = e δ|x|1+ε|x| . One verifies easily that ϕε ∈W 1,∞ (R N ) and that|∇ϕ ε | ≤ δϕ ε . (4.4.24)Next, we note that u ∈ W 3,q (R N ) for all p ≤ q < ∞ by Theorem 4.4.8.In particular, u and ∇u are globally Lipschitz continuous and |u(x)|+|∇u(x)| → 0 as |x| → ∞. Moreover, the equation (4.4.23) makes sensein L p (R N ). Since ϕ ε u ∈ W 1,1 (R N ) ∩ W 1,∞ (R N ), and in particularϕ ε u ∈ W 1,p′ (R N ), we obtain by multiplying the equation by ϕ ε u andintegrating by parts (see formula (5.1.5))∫ ∫∫ϕ ε |∇u|∫R 2 +δ 2 ϕ ε u 2 = ϕ ε uh(u) − u∇u · ∇ϕ εN R N R N R∫N≤ ϕ ε uh(u) + 1 ∫ϕ ε |∇u|R 2∫R 2 + δ2ϕ ε u 2 ,2N N R N


120 4. REGULARITY AND QUALITATIVE PROPERTIESwhere we used (4.4.24) and Cauchy-Schwarz in the last inequality.Thus we see that∫∫ϕ ε |∇u|∫R 2 + δ 2 ϕ ε u 2 ≤ 2 ϕ ε uh(u). (4.4.25)N R N R NNext, we deduce from (4.4.22) that there exists η > 0 such that4th(t) ≤ δ 2 t 2 for all |t| ≤ η. Also, since u(x) → 0 as |x| → ∞,there exists R > 0 such that |u(x)| ≤ η for |x| ≥ R. Thus we see that2uh(u) ≤ (δ 2 /2)u 2 for |x| ≥ R, so that∫∫∫2 ϕ ε uh(u) ≤ 2 ϕ ε uh(u) + δ2ϕ ε u 2 .R N {|x|R}Applying now (4.4.25), it follows that∫∫ϕ ε |∇u|∫R 2 + δ 2 ϕ ε u 2 ≤ 4N R N ∫≤ 4Finally, letting ε ↓ 0, we deduce that∫R N e δ|x| |∇u| 2 + δ 2 ∫{|x|


4.5. SYMMETRY OF POSITIVE SOLUTIONS 121Theorem 4.5.2. Let Ω be an open, bounded, connected domainof R N . Let g : R → R be locally Lipschitz continuous and let u ∈H 1 0 (Ω) ∩ L ∞ (Ω) satisfy −∆u = g(u) in H −1 (Ω). Suppose further thatΩ is convex in the x 1 direction, (4.5.1)andΩ is symmetric about the hyperplane x 1 = 0. (4.5.2)If u > 0 in Ω, then u is symmetric with respect <strong>to</strong> x 1 and decreasingin x 1 > 0.Remark 4.5.3. Here are some comments on Theorem 4.5.2.(i) The assumption (4.5.1) means that, given any y ∈ R N−1 , theset {x 1 ∈ R; (x 1 , y) ∈ Ω} is convex (i.e. is an interval). Theassumption (4.5.2) means that, given any x 1 ∈ R and y ∈ R N−1 ,if (x 1 , y) ∈ Ω, then (−x 1 , y) ∈ Ω.(ii) Clearly, one can replace the direction x 1 by any arbitrary directionin S N−1 .(iii) If g(0) ≥ 0 and if u ≥ 0 in Ω, then it follows from the strongmaximum principle that if u ≢ 0, then u > 0 in Ω. Thus ifg(0) ≥ 0, the assumptions of Theorem 4.5.2 can be weakened inthe sense that we need only assume u ≢ 0 and u ≥ 0 in Ω.(iv) The conclusion of Theorem 4.5.2 can be false if Ω is not convexin the x 1 direction or if u is not positive in Ω. Here are twosimple one-dimensional examples. Let Ω = (−1, 0) ∪ (0, 1). Wesee that Ω is symmetric about x = 0 but Ω is not convex. Let ube defined by u(x) = sin πx if 0 < x < 1 and u(x) = 2 sin(−πx)if −1 < x < 0. It follows that u > 0 in Ω and that u ∈ H0 1 (Ω)satisfies the equation −∆u = π 2 u. However, u is not even. Letnow Ω = (−1, 1), so that Ω satisfies both (4.5.1) and (4.5.2).and let u(x) = sin πx. It follows that u ∈ H0 1 (Ω) and that−∆u = π 2 u. However, u is neither positive in Ω nor even.(v) We note that no regularity assumption is made on the set Ω.We follow the proof of Berestycki and Nirenberg [9], based onthe “moving planes” technique of Alexandroff. We begin with thefollowing lemma, which gives a lower estimate of λ 1 (−∆; Ω) in termsof |Ω|.Lemma 4.5.4. There exists a constant α(N) > 0 such thatλ 1 (−∆; Ω) ≥ α(N)|Ω| − 2 N , (4.5.3)


122 4. REGULARITY AND QUALITATIVE PROPERTIESfor every open domain Ω ⊂ R N with finite measure, where λ 1 (−∆; Ω)is defined by (2.1.5).Proof. It follows from Poincaré’s inequality (5.4.73) that‖u‖ 2 L 2 ≤ C(N)|Ω| 2 N ‖∇u‖2L 2,which implies (4.5.3) with α(N) = 1/C(N).Lemma 4.5.5. Let Ω ⊂ R N be an open domain and let g : R → Rbe globally Lipschitz continuous. If u, v ∈ L 1 loc(Ω), then there existsa ∈ L ∞ (Ω) such that g(u) − g(v) = a(u − v) a.e. in Ω and ‖a‖ L ∞ ≤ Lwhere L is the Lipschitz constant of g.Proof. Let a be defined a.e. by{ g(u(x))−g(v(x))u(x)−v(x)if u(x) ≠ v(x),a(x) =0 if u(x) = v(x),and, given any ε > 0, leta ε =(u − v)(g(u) − g(v))ε + (u − v) 2 .It is clear that a ε is measurable and that |a ε | ≤ L where L is theLipschitz constant of g. Moreover, a ε → a a.e. as ε ↓ 0, so thata ∈ L ∞ (Ω). Finally, (u − v)a ε (u − v) → g(u) − g(v) a.e. as ε ↓ 0, sothat g(u) − g(v) = a(u − v).□Proof of Theorem 4.5.2. We note that in dimension N = 1,the result follows from Remark 1.2.1. Thus we now assume N ≥ 2.We first observe that, since u ∈ L ∞ (Ω), we may change without lossof generality g(t) for all sufficiently large values of |t|. In particular,we may assume that g is globally Lipschitz continuous and we denoteby L the Lipschitz constant of g. Next, we note that u + g(u) ∈L ∞ (Ω), so that u ∈ C(Ω) by Corollary 4.2.7. We now introduce somenotation. We denote by P the orthogonal projection on R N−1 , i.e. ifx = (x 1 , y) ∈ R N , thenPx = y.LetU = PΩ = {y ∈ R N−1 ; ∃x 1 ∈ R, (x 1 , y) ∈ Ω}.Given y ∈ U, letρ(y) = sup{x 1 ∈ R; (x 1 , y) ∈ Ω}.□


4.5. SYMMETRY OF POSITIVE SOLUTIONS 123It follows from the assumptions (4.5.1)-(4.5.2) thatΩ = ∪y∈U(−ρ(y), ρ(y)) × {y}. (4.5.4)In particular, we see that y ∈ U if and only if (0, y) ∈ Ω, so that U isan open, bounded, connected subset of R N−1 . SetGiven 0 ≤ λ ≤ R, we define the open setwhereΩ λ = {(x 1 , y) ∈ Ω; x 1 > λ} =R = sup{ρ(y); y ∈ U}. (4.5.5)U λ = PΩ λ = {y ∈ U; ρ(y) > λ}.∪ (λ, ρ(y)) × {y}, (4.5.6)y∈UλWe see that Ω λ ≠ ∅ for 0 ≤ λ < R, that Ω λ is decreasing in λ ∈ [0, R],that Ω R = ∅ and that|Ω λ | −→ 0. (4.5.7)λ↑RMoreover, it follows from (4.5.4)-(4.5.6) that, given any (x 1 , y) ∈ R ×R N−1 ,(x 1 , y) ∈ Ω λ ⇒ (2λ − x 1 , y) ∈ Ω.Given 0 ≤ λ < R, we define the function u λ on Ω λ byu λ (x 1 , y) = u(2λ − x 1 , y) − u(x 1 , y), for all (x 1 , y) ∈ Ω λ . (4.5.8)We then see thatu λ ∈ H 1 (Ω λ ) ∩ C(Ω λ ),and that −∆u λ = f λ in H −1 (Ω λ ), where f λ (x 1 , y) = g(u(2λ−x 1 , y)−g(u(x 1 , y)) for all (x 1 , y) ∈ Ω λ . Applying Lemma 4.5.5, we deducethat there exists a function a λ ∈ L ∞ (Ω λ ) such thatand f λ = a λ u λ . Therefore,‖a λ ‖ L ∞ ≤ L, (4.5.9)− ∆u λ = a λ u λ , (4.5.10)in H −1 (Ω λ ). We now proceed in nine steps.Step 1. If ω is an open subset of Ω, then Pω is an open subse<strong>to</strong>f U. Indeed, let y 0 ∈ Pω and fix x 0 1 ∈ R such that (x 0 1, y 0 ) ∈ ω. ωbeing open, there exists ε > 0 such that if |(x 1 − x 0 1, y − y 0 )| < ε then(x 1 , y) ∈ ω. In particular, if |y −y 0 | < ε (where the norm is in R N−1 ),then (x 0 1, y) ∈ ω, so that y ∈ Pω. Thus Pω is an open subset of U.


124 4. REGULARITY AND QUALITATIVE PROPERTIESStep 2.of Ω λ , thenIf 0 < λ < R and if ω ⊂ Ω λ is a connected componentω =∪ (λ, ρ(y)) × {y}, (4.5.11)y∈Owhere O = Pω. Indeed, let (x 0 1, y 0 ) ∈ ω. It follows from (4.5.6) thatλ < x 0 1 < ρ(y 0 ) and that (λ, ρ(y)) × {y 0 } ⊂ Ω λ . Since (λ, ρ(y)) × {y 0 }is a closed, connected subset of Ω λ , ω is a connected component ofΩ λ , and (λ, ρ(y)) × {y 0 } ∩ ω ≠ ∅, we see that (λ, ρ(y)) × {y 0 } ⊂ ω. Inparticular, ω is given by (4.5.11).Step 3. For almost all y ∈ U, u(x 1 , y) → 0 as x 1 ↑ ρ(y).Indeed, let (u n ) n≥0 ⊂ Cc∞ (Ω) satisfy u n → u in H0 1 (Ω). In particular,u n (·, y) − u(·, y) ∈ H 1 (−ρ(y), ρ(y)) for a.a. y ∈ U and∫‖∂ 1 u n − ∂ 1 u‖ 2 L 2 (Ω) = ‖u n (·, y) − u(·, y)‖ 2 H 1 (−ρ(y),ρ(y)) dy −→ 0.n→∞UTherefore, up <strong>to</strong> a subsequence, u n (·, y) → u(·, y) in H 1 (−ρ(y), ρ(y))for a.a. y ∈ U. Since clearly u n (·, y) ∈ H0 1 (−ρ(y), ρ(y)) for all y ∈ U,we deduce that u(·, y) ∈ H0 1 (−ρ(y), ρ(y)) for a.a. y ∈ U, and theresult follows.Step 4. If 0 < λ < R and ω ⊂ Ω λ is a connected component ofΩ λ , then u λ ≢ 0 in ω. Indeed, let O = P(ω) so that O is an opensubset of U by Step 1. It follows from Step 3 that there exists y ∈ Osuch that u(x 1 , y) → 0 as x 1 ↑ ρ(y). We note that −ρ(y) < 2λ−ρ(y) 0and u(2λ − x 1 , y) → u(2λ − ρ(y), y) as x 1 ↑ ρ(y). Thus we see thatu λ (x 1 , y) → u(2λ − ρ(y), y) > 0 as x 1 ↑ ρ(y). Since (x 1 , y) ∈ ω forρ(y) − x 1 sufficiently small by (4.5.11), the result follows.Step 5. If 0 < λ < R, then u − λ ∈ H1 0 (Ω λ ). Indeed, let(u n ) n≥0 ⊂ Cc ∞ (Ω) satisfy u n ≥ 0 and u n → u in H0 1 (Ω) (see the beginningof Section 3.1). It follows easily that (u n ) λ → u λ in H 1 (Ω λ ),so that (u n ) − λ → u− λ in H1 (Ω λ ). It thus suffices <strong>to</strong> show that (u n ) − λ ∈H0 1 (Ω λ ) for all n ≥ 0. Since (u n ) − λ ∈ C(Ω λ), we need only show that(u n ) − λ vanishes on ∂Ω λ (see Remark 5.1.10 (ii)). It is not difficult <strong>to</strong>show that() ()∂Ω λ = ∂Ω ∩ {(x 1 , y) ∈ R N ; x 1 > λ} ∪ ∪ {(λ, y)}y∈U λ (4.5.12)=: A ∪ B.


4.5. SYMMETRY OF POSITIVE SOLUTIONS 125If x ∈ A, then u n (x) = 0 since x ∈ ∂Ω, so that (u n ) λ (x) ≥ 0. Thus(u n ) − λ (x) = 0. If x ∈ B, then (u n) λ (x) = u(x) − u(x) = 0, so that(u n ) − λ (x) = 0. Thus we see that (u n) − λ (x) = 0 for all x ∈ ∂Ω λ, whichproves the desired result.Step 6. If 0 < λ < R and u λ ≥ 0 in Ω λ , then u λ > 0 inΩ λ . Indeed, it follows from (4.5.9)-(4.5.10) that −∆u λ + Lu λ ≥ 0 inH −1 (Ω λ ). Using Steps 4 and 5, we may apply the strong maximumprinciple in every connected component of Ω λ and the result follows.Step 7. There exists 0 < δ < R such that if R − δ < λ < R,then u λ > 0 in Ω λ . Indeed, we observe thatλ 1 (−∆ − a λ ; Ω λ ) ≥ λ 1 (−∆; Ω λ ) − ‖a λ ‖ L ∞ ≥ α(N)|Ω λ | − 2 N − L,by Lemma 4.5.4 and (4.5.9). Applying now (4.5.7), we see that thereexists δ > 0 such that λ 1 (−∆−a λ ) > 0 for R −δ < λ < R. Therefore,we deduce from Step 5 and the maximum principle that u λ ≥ 0 inΩ λ , and the conclusion follows from Step 6.Step 8. u λ > 0 in Ω λ for all 0 < λ < R. To see this, we setµ = inf{0 < σ < R; u λ > 0 in Ω λ for all σ < λ < R},so that 0 ≤ µ < R by Step 7. The conclusion follows if we show thatµ = 0. Suppose by contradiction that µ > 0. Since u ∈ C(Ω), we seeby letting λ ↓ µ that u µ ≥ 0 in Ω µ , and it follows from Step 6 thatu µ > 0 in Ω µ . Let K ⊂ Ω µ be a closed set such thatα(N)|Ω µ \ K| − 2 N ≥ 2L, (4.5.13)where α(N) is given by Lemma 4.5.4. Since u µ ≥ η > 0 on K bycompactness, we see that there exists 0 < δ < µ such thatu ν > 0 on K, (4.5.14)for µ − δ ≤ ν ≤ µ. On the other hand, by choosing δ > 0 possiblysmaller, we deduce from (4.5.13) thatfor µ − δ ≤ ν ≤ µ. In particular,α(N)|Ω ν \ K| − 2 N > L, (4.5.15)λ 1 (−∆ + a ν ; Ω ν \ K) ≥ λ 1 (−∆; Ω ν \ K) − ‖a ν ‖ L ∞≥ α(N)|Ω ν \ K| − 2 N − L > 0,(4.5.16)by Lemma 4.5.4, (4.5.9) and (4.5.15). We claim thatu − ν ∈ H 1 0 (Ω ν \ K). (4.5.17)


126 4. REGULARITY AND QUALITATIVE PROPERTIESTo see this, we observe that by (4.5.14), u − ν vanishes in a neighborhoodof K. Therefore, there exists a function θ ∈ Cc∞ (R N \ K) such thatθu − ν = u − ν . Consider a sequence (ϕ n ) n≥0 ⊂ Cc ∞ (Ω ν ) such that ϕ n →u − ν in H0 1 (Ω ν ). Since θϕ n is supported in a compact subset of Ω ν \ K,we see that θϕ n ∈ H0 1 (Ω ν \K); and since θϕ n → θu − ν = u − ν in H 1 (Ω ν \K), the claim (4.5.17) follows. It now follows from (4.5.16), (4.5.17)and the maximum principle that u ν ≥ 0 in Ω ν \K. Applying (4.5.14),we deduce that u ν ≥ 0 in Ω ν , so that u ν > 0 in Ω ν by Step 6. Thiscontradicts the definition of µ.Step 9. Conclusion. We deduce in particular from Step 8 (byletting λ ↓ 0) that u(x 1 , y) ≥ u(−x 1 , y) for all (x 1 , y) ∈ Ω with x 1 ≥ 0.Changing u(x 1 , y) <strong>to</strong> u(−x 1 , y), we also have the reverse inequality,so that u is symmetric with respect <strong>to</strong> x 1 . Moreover, we deduce fromStep 8 that if (x 1 , y) ∈ Ω with x 1 > 0, then u(2λ − x 1 , y) > u(x 1 , y)for all 0 < λ < x 1 . In particular, if 0 < x ′ 1 < x 1 , then lettingλ = (x 1 + x ′ 1)/2 < x 1 we obtain u(x 1 , y) < u(x ′ 1, y). This proves thatu is decreasing in x 1 > 0.□Remark 4.5.6. Part of the technicalities in the proof of Theorem4.5.2 come from the fact that we do not assume that u ∈ C 0 (Ω).Indeed, if u ∈ C 0 (Ω), then Steps 3, 4, 5 and the end of Step 8 aretrivial. However, since we did not make any smoothness assumptionon Ω, it is not clear how one could deduce the property u ∈ C 0 (Ω)from standard regularity results.
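The one-dimensional situation covered by Theorem 4.5.2 is easy to explore numerically. The following sketch (added for illustration; the model problem −u″ = 1 + u², the grid size and the fixed-point solver are illustrative choices, not taken from the text) computes the positive solution on Ω = (−1, 1), which is convex and symmetric about 0, by finite differences and checks that it is even and decreasing for x > 0, as the theorem predicts.

```python
# Numerical sketch of Theorem 4.5.2 in dimension one: the positive solution of
# -u'' = 1 + u^2 on (-1, 1), u(-1) = u(1) = 0, is even and decreasing on (0, 1).
import numpy as np

n = 401                                    # number of interior grid points
x = np.linspace(-1.0, 1.0, n + 2)[1:-1]    # interior nodes
h = x[1] - x[0]

# matrix of -d^2/dx^2 with homogeneous Dirichlet boundary conditions
A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2

u = np.zeros(n)
for _ in range(200):                       # monotone iteration u_{k+1} = A^{-1}(1 + u_k^2)
    u = np.linalg.solve(A, 1.0 + u**2)

print("positive          :", u.min() > 0)
print("even              :", np.max(np.abs(u - u[::-1])) < 1e-8)
print("decreasing on (0,1):", np.all(np.diff(u[x > 0]) < 0))
```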


CHAPTER 5Appendix: Sobolev spacesThroughout this section, Ω is an open subset of R N . We studythe basic properties of the Sobolev spaces W m,p (Ω) and W m,p0 (Ω), andparticularly the spaces H 1 (Ω) and H0 1 (Ω) (which correspond <strong>to</strong> m = 1and p = 2). For a more detailed study, see for example Adams [1] .5.1. Definitions and basic propertiesWe begin with the definition of “weak” derivatives. Let u ∈C m (Ω), m ≥ 1. If α ∈ N N is a multi-index such that |α| ≤ m, itfollows from Green’s formula that∫∫D α uϕ = (−1) |α| uD α ϕ, (5.1.1)Ωfor all ϕ ∈ Cc m (Ω). We note that both integrals in (5.1.1) make sensesince D α uϕ ∈ C c (Ω) and uD α ϕ ∈ C c (Ω). As a matter of fact, theright-hand side makes sense as soon as u ∈ L 1 loc(Ω) and the lefthandside makes sense as soon as D α u ∈ L 1 loc(Ω). This motivates thefollowing definition.Definition 5.1.1. Let u ∈ L 1 loc (Ω) and let α ∈ NN . We say thatD α u ∈ L 1 loc (Ω) if there exists u α ∈ L 1 loc(Ω) such that∫∫u α ϕ = (−1) |α| uD α ϕ, (5.1.2)Ωfor all ϕ ∈ C c|α| (Ω). Such a function u α is then unique and we setD α u = u α . If u α ∈ L p loc (Ω) (respectively, u ∈ Lp (Ω)) for some 1 ≤p ≤ ∞, we say that D α u ∈ L p loc (Ω) (respectively, Dα u ∈ L p (Ω)).The Sobolev spaces W m,p (Ω) are defined as follows.Definition 5.1.2. Let m ∈ N and p ∈ [1, ∞]. We setW m,p (Ω) = {u ∈ L p (Ω); D α u ∈ L p (Ω) for |α| ≤ m}.127ΩΩ


128 5. APPENDIX: SOBOLEV SPACESFor u ∈ W m,p (Ω), we set‖u‖ W m,p = ∑‖D α u‖ L p,|α|≤mwhich defines a norm on W m,p (Ω). We setH m (Ω) = W m,2 (Ω),and we equip H m (Ω) with the scalar product(u, v) H m = ∑ ∫D α uD α v dx,|α|≤mwhich defines on H m (Ω) the norm( ∑‖u‖ H m =|α|≤mwhich is equivalent <strong>to</strong> the norm ‖ · ‖ W m,2.Ω‖D α u‖ 2 L 2 ) 12,Proposition 5.1.3. W m,p (Ω) is a Banach space and H m (Ω) is aHilbert space. If p < ∞, then W m,p (Ω) is separable, and if 1 < p < ∞,then W m,p (Ω) is reflexive.Proof. Let k = 1 + N + · · · + N m = (N m+1 − 1)/(N − 1) (k =m + 1 if N = 1), and consider the opera<strong>to</strong>r T : W m,p (Ω) → L p (Ω) kdefined byT u = (D α u) |α|≤m .It is clear that T is isometric and that T is injective. Therefore,W m,p (Ω) can be identified with the subspace T (W m,p (Ω)) of L p (Ω) k .We claim that T (W m,p (Ω)) is closed. Indeed, suppose (u n ) n≥0is such that u n → u in L p (Ω) and D α u n → u α in L p (Ω) for 1 ≤|α| ≤ m. Applying (5.1.1) <strong>to</strong> u n and letting n → ∞, we deduce thatD α u ∈ L p (Ω) and that D α u = u α ; and so, u ∈ W m,p (Ω). Therefore,T (W m,p (Ω)) is a Banach space, and so is W m,p (Ω). If p < ∞, thenL p (Ω) k is separable. Thus so is T (W m,p (Ω)), hence W m,p (Ω). Finally,if 1 < p < ∞, then L p (Ω) k is reflexive. Thus so is T (W m,p (Ω)), henceW m,p (Ω).□Remark 5.1.4. Here are some simple consequences of Definition5.1.2.


(i) It follows easily that if u ∈ W^{m,p}(Ω) and if v ∈ C^m(R^N) is such that sup{‖D^α v‖_{L^∞}; |α| ≤ m} < ∞, then uv ∈ W^{m,p}(Ω) and the Leibniz formula holds.
(ii) If |Ω| < ∞, then L^p(Ω) ↪ L^q(Ω) provided p ≥ q. It follows that W^{m,p}(Ω) ↪ W^{m,q}(Ω).

Remark 5.1.5. One can show that if p < ∞, then W^{m,p}(Ω) ∩ C^∞(Ω) is dense in W^{m,p}(Ω) (see Adams [1], Theorem 3.16, p. 52).

We now define the subspaces W^{m,p}_0(Ω) for p < ∞. Formally, W^{m,p}_0(Ω) is the subspace of functions of W^{m,p}(Ω) that vanish, as well as their derivatives up to order m − 1, on ∂Ω.

Definition 5.1.6. Let 1 ≤ p < ∞ and let m ∈ N. We denote by W^{m,p}_0(Ω) the closure of C_c^∞(Ω) in W^{m,p}(Ω), and we set H^m_0(Ω) = W^{m,2}_0(Ω).

Remark 5.1.7. It follows from Proposition 5.1.3 that W^{m,p}_0(Ω) is a separable Banach space and that W^{m,p}_0(Ω) is reflexive if p > 1. In addition, H^m_0(Ω) is a separable Hilbert space.

In general W^{m,p}_0(Ω) ≠ W^{m,p}(Ω); however, both spaces coincide when ∂Ω is "small" (see Adams [1], Sections 3.20–3.33). In particular, we have the following result.

Theorem 5.1.8. If 1 ≤ p < ∞ and m ∈ N, then W^{m,p}_0(R^N) = W^{m,p}(R^N).

The proof of Theorem 5.1.8 makes use of the following lemma.

Lemma 5.1.9. Let ρ ∈ C_c^∞(R^N), ρ ≥ 0, with supp ρ ⊂ {x ∈ R^N; |x| ≤ 1} and ‖ρ‖_{L^1} = 1. For n ∈ N, n ≥ 1, set ρ_n(x) = n^N ρ(nx). ((ρ_n)_{n≥1} is called a smoothing sequence.) Then the following properties hold.
(i) For every u ∈ L^1_{loc}(R^N), ρ_n ⋆ u ∈ C^∞(R^N).
(ii) If u ∈ L^p(R^N) for some p ∈ [1, ∞], then ρ_n ⋆ u ∈ L^p(R^N) and ‖ρ_n ⋆ u‖_{L^p} ≤ ‖u‖_{L^p}. If p < ∞, or if p = ∞ and u ∈ C_{b,u}(R^N), then ρ_n ⋆ u → u in L^p(R^N) as n → ∞.
(iii) If u ∈ W^{m,p}(R^N) for some p ∈ [1, ∞] and m ∈ N, then ρ_n ⋆ u ∈ W^{m,p}(R^N) and D^α(ρ_n ⋆ u) = ρ_n ⋆ D^α u for |α| ≤ m. In particular, if p < ∞, or if p = ∞ and D^α u ∈ C_{b,u}(R^N) for all |α| ≤ m, then ρ_n ⋆ u → u in W^{m,p}(R^N) as n → ∞.
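The smoothing sequence of Lemma 5.1.9 is easy to experiment with numerically. The sketch below (added for illustration, not part of the original text; the choice of bump function, grid and exponent p = 2 are arbitrary) mollifies the hat function u(x) = max(0, 1 − |x|) and shows that ‖ρ_n ⋆ u − u‖_{L^p} decreases as n grows, as asserted in part (ii).

```python
# Numerical sketch of Lemma 5.1.9 (ii): rho_n * u -> u in L^p.
# u is the hat function; rho is a smooth bump supported in [-1, 1], normalized below.
import numpy as np

def rho(x):
    out = np.zeros_like(x)
    inside = np.abs(x) < 1
    out[inside] = np.exp(-1.0 / (1.0 - x[inside] ** 2))
    return out

x = np.linspace(-4.0, 4.0, 8001)
dx = x[1] - x[0]
rho_mass = rho(x).sum() * dx              # numerical ||rho||_{L^1}, used to normalize
u = np.maximum(0.0, 1.0 - np.abs(x))      # hat function

p = 2.0
for n in (2, 8, 32):
    rho_n = n * rho(n * x) / rho_mass                  # rho_n(x) = n rho(nx), unit mass
    conv = np.convolve(u, rho_n, mode="same") * dx     # (rho_n * u)(x) on the grid
    err = (np.sum(np.abs(conv - u) ** p) * dx) ** (1.0 / p)
    print(f"n = {n:3d}:  ||rho_n*u - u||_{{L^{int(p)}}} ~ {err:.5f}")
```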


130 5. APPENDIX: SOBOLEV SPACESProof. (i)Since∫ρ n ⋆ u(x) = ρ n (x − y)u(y) dy,R Nit is clear that ρ n ⋆ u ∈ C(R N ). One deduces easily from the aboveformula that D α (ρ n ⋆ u) = (D α ρ n ) ⋆ u, and the result follows.(ii) The first part of property (ii) follows from Young’s inequality,since∫∫‖ρ n ‖ L 1 = ρ n (x) dx = n∫R N ρ(nx) dx = ρ(y) dy = 1.N R N R NConsider now u ∈ C b,u (R N ) and set u n = ρ n ⋆ u. We haveu n (x) =∫ρ n (y)u(x − y) dy,R N u(x) =∫ρ n (y)u(x) dy;R Nand so,∫u n (x) − u(x) = ρ n (y)(u(x − y) − u(x)) dy.R NTherefore,∫|u n (x)−u(x)| ≤ ρ n (y)|u(x−y)−u(x)| dy ≤ sup |u(x−y)−u(x)|,R N |y|≤1/nsince supp ρ n ⊂ {y; |y| ≤ 1/n}. Since u is uniformly continuous, wehavesupx∈R Nsup |u(x − y) − u(x)| −→ 0;|y|≤1/nn→∞and so, u n → u uniformly. Consider next u ∈ L p (R N ), with p < ∞,and let ε > 0. There exists v ∈ C c (R N ) such that ‖u − v‖ L p ≤ ε/3.Furthermore, it follows from what precedes that for n large enough,we have ‖v −ρ n ⋆v‖ L p ≤ ε/3. (Since ρ n ⋆v → v uniformly and ρ n ⋆v issupported in a fixed compact subset of R N .) Finally, it follows fromthe inequality of (ii) that ‖ρ n ⋆ v − ρ n ⋆ u‖ L p ≤ ‖u − v‖ L p ≤ ε/3.Writingu − ρ n ⋆ u = u − v + v − ρ n ⋆ v + ρ n ⋆ v − ρ n ⋆ u,we deduce that ‖u − ρ n ⋆ u‖ L p ≤ ε. Since ε > 0 is arbitrary, thiscompletes the proof of property (ii).


(iii) For any v : R^N → R, we set ṽ(x) = v(−x). Given u ∈ W^{m,p}(R^N) and ϕ ∈ C_c^m(R^N), we have
∫_{R^N} (ρ_n ⋆ u) D^α ϕ = ∫_{R^N} u (ρ̃_n ⋆ D^α ϕ) = ∫_{R^N} u D^α(ρ̃_n ⋆ ϕ).
By definition of D^α u, we obtain
∫_{R^N} (ρ_n ⋆ u) D^α ϕ = (−1)^{|α|} ∫_{R^N} D^α u (ρ̃_n ⋆ ϕ) = (−1)^{|α|} ∫_{R^N} (ρ_n ⋆ D^α u) ϕ.
This means that D^α(ρ_n ⋆ u) ∈ L^p(R^N) and that D^α(ρ_n ⋆ u) = ρ_n ⋆ D^α u; and so, ρ_n ⋆ u ∈ W^{m,p}(R^N). The convergence property follows from property (ii). □

Proof of Theorem 5.1.8. Let u ∈ W^{m,p}(R^N) and ε > 0. It follows from properties (iii) and (i) of Lemma 5.1.9 that there exists v ∈ W^{m,p}(R^N) ∩ C^∞(R^N) such that ‖u − v‖_{W^{m,p}} ≤ ε/2. Fix now η ∈ C_c^∞(R^N) such that 0 ≤ η ≤ 1, η(x) = 1 for |x| ≤ 1 and η(x) = 0 for |x| ≥ 2. Set η_n(x) = η(x/n) and let v_n = η_n v. It is clear that v_n ∈ C_c^∞(R^N), and we claim that η_n v → v in W^{m,p}(R^N) as n → ∞. Indeed, it follows from the Leibniz formula (see Remark 5.1.4 (i)) that
D^α(η_n v) = ∑_{β+γ=α} D^β η_n D^γ v.
Since ‖D^β η_n‖_{L^∞} ≤ C_β n^{−|β|}, it follows that all the terms with |β| > 0 converge to 0 as n → ∞. The remaining term in the sum is η_n D^α v which, by dominated convergence, converges to D^α v in L^p(R^N). We deduce that D^α(η_n v) → D^α v in L^p(R^N), which proves the claim. Therefore, there exists w ∈ C_c^∞(R^N) such that ‖v − w‖_{W^{m,p}} ≤ ε/2. This implies that ‖u − w‖_{W^{m,p}} ≤ ε, and the result follows. □

Remark 5.1.10. We describe below some useful properties of the Sobolev space W^{m,p}_0(Ω).


132 5. APPENDIX: SOBOLEV SPACESbelow) that u n → u in W 1,p (Ω) as n → ∞. Since supp u n ⊂{x ∈ Ω; |u(x)| ≥ n −1 }, supp u n is a compact subset of Ω, thusu n ∈ W 1,p0 (Ω) by (i) above; and so u ∈ W 1,p0 (Ω). If supp u isunbounded, we approximate u by ξ n u where ξ n ∈ Cc∞ (R N ) issuch that ξ n (x) = 1 for |x| ≤ n.(iii) If u ∈ W 1,p0 (Ω) ∩ C(Ω) and if Ω is of class C 1 , then u |∂Ω = 0 (seeBrezis [11], Théorème IX.17, p. 171). Note that this property isfalse if Ω is not smooth enough. For example, one can show thatif Ω = R N \{0} and N ≥ 2, then H0 1 (Ω) = H 1 (Ω). In particular,if u ∈ Cc∞ (R N ) and u(0) ≠ 0, then u ∈ H0 1 (Ω) but u ≠ 0 on ∂Ω.(iv) Let u ∈ L 1 loc (Ω) and define ũ ∈ L1 loc (RN ) by{u(x) if x ∈ Ω,ũ(x) =0 if x ∉ Ω.If u ∈ W 1,p0 (Ω), then ũ ∈ W 1,p (R N ). This is immediate by thedefinition of W 1,p0 (Ω). More generally, if u ∈ W m,p0 (Ω), thenũ ∈ W m,p (R N ). Conversely, if ũ ∈ W 1,p (R N ) and if Ω is of classC 1 (as in part (iii) above, the smoothness assumption on Ω isessential), then u ∈ W 1,p0 (Ω) (see Brezis [11], Proposition IX.18,p. 171).Proposition 5.1.11. Let 1 ≤ p ≤ ∞ and let u ∈ W 1,p (Ω). Letω ⊂ Ω be a connected, open set. If ∇u = 0 a.e. on ω, then thereexists a constant c such that u = c a.e. on ω.Proof. Let x ∈ ω and let ρ > 0 be such that B(x, ρ) ∈ ω. Weclaim that there exists c such that u = c a.e. on B(x, ρ). The resultfollows by Connectedness. To prove the claim, we argue as follows.Let 0 < ε < ρ and let η ∈ Cc∞ (R N ) satisfy η ≡ 1 on B(x, ρ − ε),supp η ⊂ B(x, ρ), and 0 ≤ η ≤ 1. Setting v = ηu, we deduce thatv ∈ W 1,10 (B(x, ρ)) and that ∇v = 0 a.e. on B(x, ρ − ε). We nowextend v by 0 outside B(x, ρ) and we call v the extension. Let (ρ n ) n≥0be a smoothing sequence and fix n > 1/(ρ − ε). We have w n =ρ n ⋆ v ∈ Cc ∞ (R N ). Furthermore, since ∇w n = ρ n ⋆ ∇v, and sincesupp ρ n ⊂ B(0, 1/n), it follows that ∇w n = 0 on B(x, ρ − ε − 1/n). Inparticular, there exists c n such that w n ≡ c n on B(x, ρ−ε−1/n). Sincew n → v in L 1 (R N ), we deduce in particular that for any µ < ρ − ε,there exists c(µ) such that v ≡ c(µ) on B(x, ρ − ε − µ). Therefore,c(µ) is independent of µ and we have v ≡ c on B(x, ρ − ε). for some


constant c. It follows that c is independent of ε, and the claim follows by letting ε ↓ 0. □

Proposition 5.1.12. Let u ∈ W^{m,∞}(R^N) for some m ≥ 0. If D^α u ∈ C_{b,u}(R^N) for all |α| ≤ m, then u ∈ C^m_{b,u}(R^N). In other words, the distributional derivatives of u are the classical derivatives.

Proof. Let (ρ_n)_{n≥0} be a smoothing sequence and set u_n = ρ_n ⋆ u. It follows from Lemma 5.1.9 that u_n ∈ C^∞(R^N) ∩ W^{m,∞}(R^N). Moreover, it is clear that D^α u_n = ρ_n ⋆ (D^α u) is uniformly continuous on R^N for all |α| ≤ m and n ≥ 0. Thus (u_n)_{n≥0} ⊂ C^m_{b,u}(R^N). Moreover, it follows from Lemma 5.1.9 that D^α u_n → D^α u in L^∞(R^N), i.e. u_n → u in W^{m,∞}(R^N). Since C^m_{b,u}(R^N) is a Banach space, it is a closed subset of W^{m,∞}(R^N), and we deduce that u ∈ C^m_{b,u}(R^N). □

We next introduce the local Sobolev spaces.

Definition 5.1.13. Given m ∈ N and 1 ≤ p ≤ ∞, we set W^{m,p}_{loc}(Ω) = {u ∈ L^1_{loc}(Ω); D^α u ∈ L^p_{loc}(Ω) for all |α| ≤ m}.

Proposition 5.1.14. Let m ∈ N and 1 ≤ p ≤ ∞, and let u ∈ L^1_{loc}(Ω). Then the following properties are equivalent.
(i) u ∈ W^{m,p}_{loc}(Ω).
(ii) u|_ω ∈ W^{m,p}(ω) for all ω ⊂⊂ Ω.
(iii) φu ∈ W^{m,p}_0(Ω) (φu ∈ W^{m,∞}(Ω) if p = ∞) for all φ ∈ C_c^∞(Ω).

Proof. (i)⇒(ii). This is immediate.
(ii)⇒(iii). Suppose u ∈ W^{m,p}_{loc}(Ω) and let φ ∈ C_c^∞(Ω). If ω ⊂⊂ Ω contains supp φ, then φu has compact support in ω. Since u ∈ W^{m,p}(ω), we know (see Remark 5.1.4 (i)) that φu ∈ W^{m,p}(ω). If p < ∞, then φu ∈ W^{m,p}_0(Ω) by Remark 5.1.10 (i).
(iii)⇒(i). Suppose φu ∈ W^{m,p}(Ω) for all φ ∈ C_c^∞(Ω). Given |α| ≤ m, we define u_α ∈ L^p_{loc}(Ω) as follows. Let ω ⊂⊂ Ω and let φ ∈ C_c^∞(Ω) satisfy φ(x) = 1 for all x ∈ ω. We set (u_α)|_ω = D^α(φu)|_ω and we claim that (u_α)|_ω is independent of the choice of φ, so that u_α is well-defined. Indeed, if ψ ∈ C_c^∞(Ω) is such that ψ(x) = 1 on ω, then for all ϕ ∈ C_c^∞(ω),
∫_ω D^α(ψu − φu) ϕ = (−1)^{|α|} ∫_ω (ψ − φ) u D^α ϕ = 0,


134 5. APPENDIX: SOBOLEV SPACESso that D α ψu = D α φu a.e. in ω. It remains <strong>to</strong> show that u α = D α u.Indeed, let |α| ≤ m and ϕ ∈ C c |α| (Ω). Let φ ∈ Cc ∞ (Ω) satisfy φ(x) = 1on supp ϕ. We have∫ ∫∫(−1) |α| u α ϕ = φuD α ϕ = uD α ϕ,ΩΩΩand the result follows.□We now introduce the Sobolev spaces of negative index.Definition 5.1.15. Given m ∈ N and 1 ≤ p < ∞, we defineW −m,p′ (Ω) = (W m,p0 (Ω)) ⋆ . For p = 2, we set H −m (Ω) =W −m,2 (Ω) = (H0 m (Ω)) ⋆ .Remark 5.1.16. Here are some comments on Definition 5.1.15.(i) It follows from Remark 5.1.7 that W −m,p′ (Ω) is a Banach space.If p > 1, then W −m,p′ (Ω) is reflexive and separable. H −m (Ω) isa separable Hilbert space.(ii) It follows from the dense embedding Cc∞ (Ω) ↩→ W m,p0 (Ω) thatW −m,p′ (Ω) is a space of distributions on Ω. In particular, we seethat (u, ϕ) W −m,p ′ ,W m,p = (u, ϕ) D′ ,D for every u ∈ W −m,p′ (Ω)0and ϕ ∈ Cc∞ (Ω). Like any distribution, an element of W −m,p′ (Ω)can be localized. Indeed, if u ∈ D ′ (Ω) and Ω ′ is an open subse<strong>to</strong>f Ω, then one defines u |Ω ′ as follows. Given any ϕ ∈ Cc∞ (Ω ′ ), let˜ϕ ∈ Cc ∞ (Ω) be equal <strong>to</strong> ϕ on Ω ′ and <strong>to</strong> 0 on Ω\Ω ′ . Then Ψ(ϕ) =(u, ˜ϕ) D ′ (Ω),D(Ω) defines a distribution Ψ ∈ D ′ (Ω ′ ), and one setsu |Ω ′ = Ψ. Note that this is consistent with the usual restrictionof functions. Since ‖ ˜ϕ‖ Wm,p0 (Ω ′ ) ≤ ‖ϕ‖ Wm,p0 (Ω), we see that ifu ∈ W −m,p′ (Ω), then u |Ω ′ ∈ W −m,p′ (Ω ′ ) and ‖u |Ω ′‖ W −m,p ′ (Ω ′ ) ≤‖u‖ W −m,p ′ (Ω) .Definition 5.1.17. Given m ∈ N and 1 ≤ p < ∞, we defineW −m,p′loc(Ω) = {u ∈ D ′ (Ω); u |ω ∈ W −m,p′ (ω) for all ω ⊂⊂ Ω}. (SeeRemark 5.1.16 (ii) for the definition of u |ω .) For p = 2, we set−m,2(Ω) = W (Ω).H −mloclocProposition 5.1.18. If 1 < p < ∞, then W 1,p0 (Ω) ↩→ L p (Ω) ↩→W −1,p (Ω), with dense embeddings, where the embedding e : L p (Ω) →


5.1. DEFINITIONS AND BASIC PROPERTIES 135W −1,p (Ω) is defined by∫eu(ϕ) =for all ϕ ∈ W 1,p′0 (Ω)(Ω) and all u ∈ L p (Ω).Ωu(x)ϕ(x) dx, (5.1.3)Proposition 5.1.18 is an immediate application of the followinguseful abstract result.Proposition 5.1.19. If X and Y are two Banach spaces suchthat X ↩→ Y with dense embedding, then, the following propertieshold.(i) Y ⋆ ↩→ X ⋆ , where the embedding e is defined by (ef, x) X⋆ ,X =(f, x) Y ⋆ ,Y , for all x ∈ X and f ∈ Y ⋆ .(ii) If X is reflexive, then the embedding Y ⋆ ↩→ X ⋆ is dense.(iii) If the embedding X ↩→ Y is compact and X is separable, then theembedding Y ⋆ ↩→ X ⋆ is compact. More precisely, if (y n) ′ n≥0 ⊂Y ⋆ and ‖y n‖ ′ Y ⋆ ≤ M, then there exist a subsequence (n k ) k≥0and y ′ ∈ Y ⋆ with ‖y ′ ‖ Y ⋆ ≤ M such that y n ′ k→ y ′ in X ⋆ ask → ∞.Proof. (i) Consider y ′ ∈ Y ⋆ and x ∈ X ↩→ Y . Let ey ′ (x) =(y ′ , x) Y ⋆ ,Y . Since|ey ′ (x)| ≤ ‖y ′ ‖ Y ⋆‖x‖ Y ≤ C‖y ′ ‖ Y ⋆‖x‖ X ,we see that e ∈ L(Y ⋆ , X ⋆ ). Suppose that ey ′ = ez ′ , for some y ′ , z ′ ∈Y ⋆ . It follows that (y ′ − z ′ , x) Y ⋆ ,Y = 0, for every x ∈ X. By density,we deduce that (y ′ − z ′ , y) Y ⋆ ,Y = 0, for every y ∈ Y ; and so y ′ = z ′ .Thus e is injective and (i) follows.(ii) Assume <strong>to</strong> the contrary that Y ⋆ ≠ X ⋆ . Then there existsx 0 ∈ X ⋆⋆ = X such that (y ′ , x 0 ) X⋆ ,X = 0, for every y ′ ∈ Y ⋆ (seee.g. Brezis [11], Corollary I.8). Let E = Rx 0 ⊂ Y , and let f ∈ E ⋆be defined by f(λx 0 ) = λ, for λ ∈ R. We have ‖f‖ E ⋆ = 1, and bythe Hahn-Banach theorem (see e.g. Brezis [11], Corollary I.2) thereexists y ′ ∈ Y ⋆ such that ‖y ′ ‖ Y ⋆ = 1 and (y ′ , x 0 ) Y ⋆ ,Y = 1, which is acontradiction, since (y ′ , x 0 ) Y ⋆ ,Y = (y ′ , x 0 ) X ⋆ ,X = 0.(iii) Let B X ⋆ (respectively, B X , B Y ⋆, B Y ) be the unit ball ofX ⋆ (respectively, X, Y ⋆ , Y ). Consider a sequence (y n) ′ n≥0 ⊂ B Y ⋆.Since Y ⋆ is the dual of a separable Banach space, it follows (see e.g.Brezis [11], Corollary III.26) that there exist a subsequence, which we


136 5. APPENDIX: SOBOLEV SPACESstill denote by (y n) ′ n≥0 , and an element y ′ ∈ B Y ⋆ such that y n ′ → y ′ inY ⋆ weak ⋆ . We show that ‖y n ′ − y ′ ‖ X ⋆ → 0, which proves the desiredresult. We note that‖y ′ n − y ′ ‖ X ⋆= supx∈B X|(y ′ n − y ′ , x) X ⋆ ,X|= supx∈B X|(y ′ n − y ′ , x) Y ⋆ ,Y |,(5.1.4)by (i). Let ε > 0. Since B X is a relatively compact subset of Y , wesee that there exists a (finite) sequence (x j ) 1≤j≤l ⊂ B X such that forevery x ∈ B X , there exists 1 ≤ j ≤ l such that ‖x − x j ‖ Y ≤ ε. Givenx ∈ B X and 1 ≤ j ≤ l as above, we deduce that|(y ′ n − y ′ , x) Y ⋆ ,Y | ≤ |(y ′ n − y ′ , x − x j ) Y ⋆ ,Y | + |(y ′ n − y ′ , x j ) Y ⋆ ,Y |≤ ε‖y ′ n − y ′ ‖ Y ⋆ + |(y ′ n − y ′ , x j ) Y ⋆ ,Y |≤ 2ε + |(y ′ n − y ′ , x j ) Y ⋆ ,Y |.Aplying now (5.1.4), we deduce that‖y ′ n − y ′ ‖ X ⋆≤ 2ε + sup |(y n ′ − y ′ , x j ) Y ⋆ ,Y |.1≤j≤lSince y ′ n → y ′ in Y ⋆ weak ⋆ , we conclude thatlim sup ‖y n ′ − y ′ ‖ X ⋆ ≤ 2ε.n→∞Since ε > 0 is arbitrary, the result follows.Remark 5.1.20. Proposition 5.1.18 calls for the following comments.(i) Note that any Hilbert space can be identified, via the Rieszrepresentation theorem, with its dual. By defining the embeddinge : L 2 (Ω) → H −1 (Ω) by (5.1.3), we implicitely identifiedL 2 (Ω) with its dual. If we identify H0 1 (Ω) with its dual, sothat H −1 (Ω) = H0 1 (Ω), then Proposition 5.1.18 becomes absurd.This means that we cannot, at the same time, identifyL 2 (Ω) with its dual and H0 1 (Ω) with its dual, and use the canonicalembedding H0 1 (Ω) ↩→ L 2 (Ω).(ii) Note that the density of the embedding W 1,p0 (Ω) ↩→ L p (Ω) canbe viewed by a constructive argument (truncation and regularization).As well, any element ϕ ∈ W −1,p0 (Ω) with compactsupport (in the sense that there exists a compact set K of Ω□


5.2. SOBOLEV SPACES AND FOURIER TRANSFORM 137such that (ϕ, u) W −1,p ,W 1,p′0= 0 for all u ∈ W 1,p′0 (Ω) supportedin Ω \ K) can be approximated by convolution by elements ofC ∞ c (Ω). However, it is not clear how <strong>to</strong> approximate explicitlyan element ϕ ∈ W −1,p0 (Ω) by elements of W −1,p0 (Ω) withcompact support.Proposition 5.1.21. If 1 < p < ∞ and −△ is defined by∫(−△u, ϕ) W −1,p= ∇u · ∇ϕ, (5.1.5),W 1,p′0for all ϕ ∈ W 1,p′0 (Ω), then −△ ∈ L(W 1,p (Ω), W −1,p (Ω)).Proof. We note that∫∣ ∇u · ∇ϕ∣ ≤ ‖∇u‖ L p‖∇ϕ‖ L p ′ΩΩ≤ ‖u‖ W 1,p‖ϕ‖ ,W 1,p′0for all u ∈ W 1,p0 (Ω), ϕ ∈ W 1,p′0 (Ω). It follows that (5.1.5) definesan element of W −1,p (Ω) (note also that this definition is consistentwith the classical definition) and that ‖ − △u‖ W −1,p ≤ ‖u‖ W 1,p, i.e.−△ ∈ L(W 1,p (Ω), W −1,p (Ω)).□Corollary 5.1.22. LetJ(u) = 1 2for u ∈ H 1 0 (Ω). Then J ∈ C 1 (H 1 0 (Ω), R) andfor all u ∈ H 1 0 (Ω).Proof. We have∫Ω|∇u| 2 , (5.1.6)J ′ (u) = −△u, (5.1.7)J(u + v) − J(u) − (−△u, v) H −1 ,H 1 0 = 1 2from which the result follows.∫Ω|∇v| 2 ,5.2. Sobolev spaces and Fourier transformWhen Ω = R N , one can characterize the space W m,p (R N ) interms of the Fourier transform. For that purpose, it is convenient inthis section <strong>to</strong> consider the Sobolev spaces of complex-valued functions.The case p = 2 is especially simple, by using Plancherel’sformula. We begin with the following lemma.□


138 5. APPENDIX: SOBOLEV SPACESLemma 5.2.1. Let u ∈ L 2 (R N ) and α a multi-index. Then D α u ∈L 2 (R N ) if and only if | · | |α| û ∈ L 2 (R N ). Moreover, F(D α u)(ξ) =(2πi) |α| ξ α û(ξ), where ξ α = ξ α11 · · · ξα Nn . In particular ‖D α u‖ L 2 =(2π) |α| ‖ | · | |α| û‖ L 2.Proof. Suppose D α u ∈ L 2 (R N ), which means that∫∫Re uD α ϕ = (−1) |α| Re D α uϕ, (5.2.1)R N R Nfor all ϕ ∈ Cc∞ (R N ). By density, (5.2.1) holds for all ϕ ∈ S(R N ). ByPlancherel’s formula, (5.2.1) is equivalent <strong>to</strong>∫∫Re ûF(D α ϕ) = (−1) |α| Re F(D α u) ̂ϕ.R N R NSince F(D α ϕ) = (2πi) |α| ξ α ̂ϕ, we deduce that∫∫(−2πi) |α| Re ξ α û ̂ϕ = (−1) |α| Re F(D α u) ̂ϕ,R N R Nfor all ϕ ∈ S(R N ), which means that F(D α u) = (2πi) |α| ξ α û. Conversely,suppose | · | |α| û ∈ L 2 (R N ) and let u α ∈ L 2 (R N ) be defined byû α = (2πi) |α| ξ α û. Given ϕ ∈ S(R N ),∫∫∫Re u α ϕ = Re û α ̂ϕ = (−1) |α| Re û(2πi) |α| ξ α ̂ϕR N R NR∫N ∫= (−1) |α| Re ûF(D α ϕ) = (−1) |α| Re uD α ϕ,R N R Nso that D α u = u α . This completes the proof.□Proposition 5.2.2. Given any m ∈ Z,H m (R N ) = {u ∈ S ′ (R N ); (1 + | · | 2 ) m 2 û ∈ L 2 (R N )},and ‖u‖ H m ≈ ‖(1 + | · | 2 ) m 2 û‖ L 2.Proof. If m ≥ 0, then the result easily follows from Lemma 5.2.1.Next, if m ≥ 0, then it is clear that the dual of the space {u ∈S ′ (R N ); (1 + | · | 2 ) m 2 û ∈ L 2 (R N )} with the norm ‖(1 + | · | 2 ) m 2 û‖ L 2is the space {u ∈ S ′ (R N ); (1 + | · | 2 ) − m 2 û ∈ L 2 (R N )} with the norm‖(1 + | · | 2 ) − m 2 û‖ L 2. The result in the case m ≤ 0 follows. □Proposition 5.2.2 can be extended <strong>to</strong> W m,p (R N ) with p ≠ 2. Moreprecisely, we have the following result.


5.2. SOBOLEV SPACES AND FOURIER TRANSFORM 139Theorem 5.2.3. Given any m ∈ Z, a, b > 0 and 1 < p < ∞,W m,p (R N ) = {u ∈ S ′ (R N ); F −1 [(a + b| · | 2 ) m 2 û] ∈ L p (R N )},and ‖u‖ W m,p ≈ ‖F −1 [(a + b| · | 2 ) m 2 û]‖ L p.The proof of Theorem 5.2.3 is, as opposed <strong>to</strong> the proof of Proposition5.2.2, fairly delicate. It is based on a Fourier multiplier theorem,which is a deep result in Fourier analysis. A typical such theoremthat can be used is the following. (See Bergh and Löfström [10],Theorem 6.1.6, p. 135.)Theorem 5.2.4. Let ρ ∈ L ∞ (R N ) and let l > N/2 be an integer.Suppose ρ ∈ W l,∞loc(RN \ {0}) andsup|α|≤less sup |ξ| |α| |∂ α ρ(ξ)| < ∞.ξ≠0It follows that for every 1 < p < ∞, there exists a constant C p suchthat‖F −1 (ρ̂v)‖ L p ≤ C p ‖v‖ L p, (5.2.2)for all v ∈ S(R N ).Proof. We refer the reader <strong>to</strong> Bergh and Löfström [10] for theproof. Note that an essential ingredient in the proof is the Marcinkiewiczinterpolation theorem. In fact, only a simplified form of thistheorem is needed, namely the form stated in Stein [41], §4.2, Theorem5, p. 21. A very simple proof of this (simplified version of the)Marcinkiewicz interpolation theorem is given in Stein [41], pp. 21–22. □Proof of Theorem 5.2.3. Without loss of generality, we mayassume a = b = 1. We only prove the result for m ≥ 0, the casem < 0 following easily by duality (see the proof of Proposition 5.2.2).The case m = 0 being trivial, we assume m ≥ 1. We set V = {u ∈S ′ (R N ); F −1 [(1 + | · | 2 ) m 2 û] ∈ L p (R N )} 1 and ‖u‖ V = ‖F −1 [(1 + | ·| 2 ) m 2 û]‖ L p for all u ∈ V . It is not difficult <strong>to</strong> show that (V, ‖ · ‖ V ) isa Banach space. We now proceed in three steps.Step 1. S(R N ) is dense in V . Let u ∈ V and set w = F −1 [(1+| · | 2 ) m 2 û] ∈ L p (R N ). S(R N ) being dense in L p (R N ), there exists1 Note that (1 + | · | 2 ) m 2 is a C ∞ function with polynomial growth, so that(1 + | · | 2 ) m 2 û is a well-defined element of S ′ (R N ) for all u ∈ S ′ (R N ).


140 5. APPENDIX: SOBOLEV SPACES(w n ) n≥0 ⊂ S(R N ) such that w n → w in L p (R N ). Setting u n =F −1 [(1 + | · | 2 ) − m 2 ŵ n ] ∈ S(R N ), this means that u n → u in V .Step 2. V ↩→ W m,p (R N ). By Step 1, it suffices <strong>to</strong> showthat ‖u‖ W m,p ≤ C‖u‖ V for all u ∈ S(R N ). Let α be a multi-indexwith |α| ≤ m and let ρ(ξ) = ξ α (1 + |ξ| 2 ) − m 2 . It easily follows thatρ satisfies the assumptions of Theorem 5.2.4. Applying (5.2.2) withv = F −1 [(1 + | · | 2 ) m 2 û], we deduce that ‖F −1 (ξ α û)‖ L p ≤ C‖u‖ V .Since F −1 (ξ α û) = (2πi) −|α| D α u, we deduce that ‖D α u‖ L p ≤ C‖u‖ V .The result follows, since α with |α| ≤ m is arbitrary.Step 3. W m,p (R N ) ↩→ V . By density of S(R N ) in W m,p (R N ),it suffices <strong>to</strong> show that ‖u‖ V ≤ C‖u‖ W m,p for all u ∈ S(R N ). Fix afunction θ ∈ C ∞ (R), θ ≥ 0, such that θ(t) = 0 for |t| ≤ 1 and θ(t) = 1for |t| ≥ 2. Setρ(ξ) = (1 + |ξ| 2 ) m 2(1 +N∑θ(ξ j )|ξ j | m) −1.It is not difficult <strong>to</strong> show that ρ satisfies the assumptions of Theorem5.2.4. Applying (5.2.2) with v = F −1 [(1+ ∑ Nj=1 θ(ξ j)|ξ j | m )û], wededuce thatj=1j=1∥‖u‖ V ≤ C∥F −1[( N∑1 + θ(ξ j )|ξ j | m) ]∥ ∥∥Lûp(≤ C ‖u‖ L p +N∑‖F −1 (θ(ξ j )|ξ j | m û)‖ L pj=1).(5.2.3)Next, we observe that ρ j (ξ) = θ(ξ j )|ξ j | m ξ −mj satisfies the assumptionsof Theorem 5.2.4. Applying (5.2.2) with ρ = ρ j and v = u,successively for j = 1, · · · , N, we deduce from (5.2.3) that(‖u‖ V ≤ C ‖u‖ L p +N∑)‖F −1 (ξj m û)‖ L pj=1∑= C(‖u‖ N )L p + (2π) −m ‖∂j m u‖ L p ≤ C‖u‖ W m,p,j=1which completes the proof.□
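For p = 2, the norm equivalence of Proposition 5.2.2 (and hence the case p = 2 of Theorem 5.2.3) can be observed on an explicit example. With the Fourier convention of this text, F(u′)(ξ) = 2πiξ û(ξ) and Plancherel's formula holds with constant one, so that ‖u‖²_{L²} + ‖u′‖²_{L²} = ∫ (1 + 4π²ξ²) |û(ξ)|² dξ. The sketch below (added for illustration) checks this numerically for the Gaussian u(x) = e^{−πx²}, whose Fourier transform is û(ξ) = e^{−πξ²}.

```python
# Numerical sketch of the Fourier description of the H^1 norm (m = 1, p = 2):
# ||u||_{L^2}^2 + ||u'||_{L^2}^2 = int (1 + 4 pi^2 xi^2) |u^(xi)|^2 dxi,
# with the convention Fu(xi) = int e^{-2 pi i x xi} u(x) dx, for u(x) = exp(-pi x^2).
import numpy as np

x = np.linspace(-10.0, 10.0, 20001)
dx = x[1] - x[0]

u = np.exp(-np.pi * x**2)
du = -2.0 * np.pi * x * np.exp(-np.pi * x**2)
lhs = np.sum(u**2 + du**2) * dx                         # ||u||_{L^2}^2 + ||u'||_{L^2}^2

xi = x                                                  # reuse the grid for the xi variable
uhat = np.exp(-np.pi * xi**2)                           # Fourier transform of the Gaussian
rhs = np.sum((1.0 + 4.0 * np.pi**2 * xi**2) * uhat**2) * dx

print(lhs, rhs)   # the two values agree up to quadrature error
```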


5.3. The chain rule and applications

We now study the chain rule, and we begin with a simple result.

Proposition 5.3.1. Let F ∈ C¹(R, R) satisfy F(0) = 0 and ‖F′‖_{L^∞} = L < ∞, and consider 1 ≤ p ≤ ∞. If u ∈ W^{1,p}(Ω), then F(u) ∈ W^{1,p}(Ω) and

    ∇F(u) = F′(u)∇u,    (5.3.1)

a.e. in Ω. Moreover, if p < ∞, then the mapping u ↦ F(u) is continuous W^{1,p}(Ω) → W^{1,p}(Ω). Furthermore, if p < ∞ and u ∈ W^{1,p}_0(Ω), then F(u) ∈ W^{1,p}_0(Ω).

Proof. We proceed in four steps.

Step 1. The case u ∈ C_c^1(Ω). It is immediate that F(u) ∈ C_c^1(Ω) and that (5.3.1) holds.

Step 2. The case u ∈ W^{1,p}_0(Ω). Suppose p < ∞, let u ∈ W^{1,p}_0(Ω) and let (u_n)_{n≥0} ⊂ C_c^∞(Ω) satisfy u_n → u in W^{1,p}_0(Ω) as n → ∞. By possibly extracting a subsequence, we may assume that

    |u_n| + |∇u_n| ≤ f ∈ L^p(Ω),

and that u_n → u, ∇u_n → ∇u, a.e. in Ω. It follows from Step 1 that F(u_n) ∈ C_c^1(Ω) ⊂ W^{1,p}_0(Ω) and that ∇F(u_n) = F′(u_n)∇u_n. In particular,

    |∇F(u_n)| ≤ L|∇u_n| ≤ Lf.

Since F′(u_n)∇u_n → F′(u)∇u a.e., we obtain ∇F(u_n) → F′(u)∇u in L^p(Ω). Moreover, since |F(u_n) − F(u)| ≤ L|u_n − u|, we have F(u_n) → F(u) in L^p(Ω). This implies that F(u_n) → F(u) in W^{1,p}_0(Ω) and that (5.3.1) holds.

Step 3. The case u ∈ W^{1,p}(Ω). We have F(u) ∈ L^p(Ω). Furthermore, given ϕ ∈ C_c^1(Ω), let ξ ∈ C_c^1(Ω) satisfy ξ = 1 on supp ϕ. By Remark 5.1.4 (i) and Remark 5.1.10 (i), we have ξu ∈ W^{1,q}_0(Ω) for all 1 ≤ q < ∞ such that q ≤ p. It follows from Step 2 that

    ∫_Ω F(u)∇ϕ = ∫_Ω F(ξu)∇ϕ = − ∫_Ω ϕ F′(ξu)∇(ξu) = − ∫_Ω ϕ F′(u)∇u.

Since clearly F′(u)∇u ∈ L^p(Ω), we deduce that F(u) ∈ W^{1,p}(Ω) and that (5.3.1) holds.


142 5. APPENDIX: SOBOLEV SPACESStep 4. Continuity. Suppose p < ∞ and u n → u in W 1,p (Ω).We show that F (u n ) → F (u) in W 1,p (Ω) by contradiction. Thus weassume that ‖F (u n ) − F (u)‖ W 1,p ≥ ε > 0. We have F (u n ) → F (u)in L p (Ω). By possibly extracting a subsequence, we may assume thatu n → u and ∇u n → ∇u a.e. It follows by dominated convergence thatF ′ (u n )∇u n → F ′ (u)∇u in L p (Ω). Thus F (u n ) → F (u) in W 1,p (Ω),which is absurd.□Remark 5.3.2. One can prove the following stronger result. IfF : R → R is (globally) Lipschitz continuous and if F (0) = 0, then forevery u ∈ W 1,p (Ω), we have F (u) ∈ W 1,p (Ω). Moreover, ∇F (u) =F ′ (u)∇u a.e. This formula makes sense, since F ′ exists a.e. and∇u = 0 a.e. on the set {x ∈ Ω; u(x) ∈ A} where A ⊂ R is anyset of measure 0. Furthermore, the mapping u → F (u) is continuousW 1,p (Ω) → W 1,p (Ω) if p < ∞. Finally, if p < ∞ and u ∈ W 1,p0 (Ω),then F (u) ∈ W 1,p0 (Ω). The proof is rather delicate and makes use inparticular of Lebesgue’s points theory. (See Marcus and Mizel [35]).We will establish below a particular case of that result.Proposition 5.3.3. Set u + = max{u, 0} for all u ∈ R and let1 ≤ p ≤ ∞. If u ∈ W 1,p (Ω), then u + ∈ W 1,p (Ω). Moreover,{∇u + ∇u if u > 0,=(5.3.2)0 if u ≤ 0,a.e. If p < ∞, then the mapping u ↦→ u + is continuous W 1,p (Ω) →W 1,p (Ω). Furthermore, if u ∈ W 1,p0 (Ω), then u + ∈ W 1,p0 (Ω)Proof. We proceed in four steps.Step 1. If p < ∞ and u ∈ W 1,p0 (Ω), then u + ∈ W 1,p0 (Ω)and (5.3.2) holds. Given ε > 0, let{√ε2 + uϕ ε (u) =2 − ε if u ≥ 0,0 if u ≤ 0.It follows from Proposition 5.3.1 that ϕ ε (u) ∈ W 1,p0 (Ω) and that∇ϕ ε (u) = ϕ ′ ε(u)∇u a.e. We deduce easily that ϕ ε (u) → u + andthat ∇ϕ ε (u) converges <strong>to</strong> the right-hand side of (5.3.2) in L p (Ω) asε ↓ 0. Thus u + ∈ W 1,p0 (Ω) and (5.3.2) holds.


5.3. THE CHAIN RULE AND APPLICATIONS 143Step 2. If u ∈ W 1,p (Ω), then u + ∈ W 1,p (Ω) and (5.3.2) holds.Using Step 1, this is proved by the argument in Step 3 of the proof ofProposition 5.3.1.Step 3. If a ∈ R and u ∈ W 1,p (Ω), then ∇u = 0 a.e. on theset {x ∈ Ω; u(x) = a}. Consider a function η ∈ Cc ∞ (R) such thatη(x) = 1 for |x| ≤ 1, η(x) = 0 for |x| ≥ 2 and 0 ≤ η ≤ 1. For n ∈ N,n ≥ 1, setandg n (x) = η(n(x − a)),h n (x) =∫ x0g n (s) ds.It follows from Proposition 5.3.1 that h n (u) ∈ W 1,p (Ω) and that∇h n (u) = g n (u)∇u a.e. Therefore,∫∫− h n (u)∇ϕ = g n (u)ϕ∇u,Rfor all ϕ ∈ Cc 1 (Ω). Since |h n | ≤ n −1 ‖η‖ L 1, the left-hand side of theabove inequality tends <strong>to</strong> 0 as n → ∞. Therefore,∫g n (u)ϕ∇u −→ 0.n→∞RNote that g n (u) → 1 {x∈Ω; u(x)=a} . Since 0 ≤ g n ≤ 1, we deduce that∫1 {x∈Ω; u(x)=a} ϕ∇u = 0,Rfor all ϕ ∈ Cc 1 (Ω); and so, 1 {x∈Ω; u(x)=a} ∇u = 0 a.e. The resultfollows.Step 4. Continuity. Suppose p < ∞ and let u n → u inW 1,p (Ω) as n → ∞. We have |u + − u + n | ≤ |u − u n |, so that u + n → u +in L p (Ω). Therefore, we need only show that for any subsequence,which we still denote by (u n ) n≥0 , there exists a subsequence (u nk ) k≥0such that ∇u + n k→ ∇u + in L p (Ω) as k → ∞. We may extract a subsequence(u nk ) k≥0 such that u nk → u and ∇u nk → ∇u a.e., and suchthat|u nk | + |∇u nk | ≤ f ∈ L p (Ω).R


Set

    A_0 = {x ∈ Ω; u(x) = 0},
    A^+ = {x ∈ Ω; u(x) > 0},  A_k^+ = {x ∈ Ω; u_{n_k}(x) > 0},
    A^- = {x ∈ Ω; u(x) < 0},  A_k^- = {x ∈ Ω; u_{n_k}(x) < 0}.

For a.a. x ∈ A^+, we have x ∈ A_k^+ for k large, thus ∇u^+_{n_k}(x) = ∇u_{n_k}(x) → ∇u(x) = ∇u^+(x). For a.a. x ∈ A^-, we have x ∈ A_k^- for k large, hence ∇u^+_{n_k}(x) = 0 = ∇u^+(x). For x ∈ A_0, we have u(x) = 0, so that by Step 3, ∇u(x) = 0 a.e. Since ∇u_{n_k} → ∇u = 0 a.e. on A_0, we deduce in particular that |∇u^+_{n_k}| ≤ |∇u_{n_k}| → 0 a.e. in A_0. Thus ∇u^+_{n_k} → 0 = ∇u^+ a.e. on A_0. It follows that

    ∇u^+_{n_k} →  ∇u  if u > 0,
                  0   if u ≤ 0,

a.e., and the result follows by dominated convergence. This completes the proof. □

Remark 5.3.4. Let u^- = max{0, −u}. Since u^- = (−u)^+, we may draw similar conclusions for u^-. In particular, if u ∈ W^{1,p}(Ω), then u^- ∈ W^{1,p}(Ω). Moreover,

    ∇u^- =  −∇u  if u < 0,
            0    if u ≥ 0,

a.e. If p < ∞, then the mapping u ↦ u^- is continuous W^{1,p}(Ω) → W^{1,p}(Ω). Furthermore, if u ∈ W^{1,p}_0(Ω), then u^- ∈ W^{1,p}_0(Ω). Since |u| = u^+ + u^-, we deduce the following properties. If u ∈ W^{1,p}(Ω), then |u| ∈ W^{1,p}(Ω). Moreover,

    ∇|u| =  ∇u   if u > 0,
            −∇u  if u < 0,
            0    if u = 0,

a.e. Note in particular that |∇|u|| = |∇u| a.e. If p < ∞, then the mapping u ↦ |u| is continuous W^{1,p}(Ω) → W^{1,p}(Ω). Furthermore, if u ∈ W^{1,p}_0(Ω), then |u| ∈ W^{1,p}_0(Ω).
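The formulas of Proposition 5.3.3 and Remark 5.3.4 are easy to observe numerically. The sketch below is only an illustration under ad hoc choices (u(x) = sin x, the grid, and np.gradient as a stand-in for the weak derivative): away from the zero set of u, the difference quotient of u⁺ matches 1_{{u>0}}∇u, and |∇|u|| matches |∇u|.

```python
import numpy as np

# Finite-difference illustration of (5.3.2) and Remark 5.3.4 for u(x) = sin(x):
#   (u^+)' = u' where u > 0,   (u^+)' = 0 where u < 0,   | (|u|)' | = |u'|.
x = np.linspace(0.0, 4 * np.pi, 20001)
u = np.sin(x)

du = np.gradient(u, x)                        # finite-difference derivative of u
du_plus = np.gradient(np.maximum(u, 0.0), x)  # finite-difference derivative of u^+
formula = np.where(u > 0.0, du, 0.0)          # right-hand side of (5.3.2)

away = np.abs(u) > 1e-2                       # stay away from {u = 0}, where the
                                              # difference quotient smears the kink
print(np.max(np.abs(du_plus - formula)[away]))                                # ~ 0
print(np.max(np.abs(np.abs(np.gradient(np.abs(u), x)) - np.abs(du))[away]))   # ~ 0
```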


Corollary 5.3.5. Let 1 ≤ p < ∞, let u ∈ W^{1,p}(Ω) and v ∈ W^{1,p}_0(Ω). If |u| ≤ |v| a.e., then u ∈ W^{1,p}_0(Ω).

Proof. It follows from Remark 5.3.4 that |v| ∈ W^{1,p}_0(Ω). Let (w_n)_{n≥0} ⊂ C_c^∞(Ω) satisfy w_n → |v| in W^{1,p}(Ω) as n → ∞. It follows that w_n − u^+ → |v| − u^+ in W^{1,p}(Ω), so that (w_n − u^+)^+ → (|v| − u^+)^+ in W^{1,p}(Ω) by Proposition 5.3.3. Since (w_n − u^+)^+ ≤ w_n^+, we see that (w_n − u^+)^+ has compact support; and so (w_n − u^+)^+ ∈ W^{1,p}_0(Ω). We deduce that (|v| − u^+)^+ ∈ W^{1,p}_0(Ω). Since (|v| − u^+)^+ = |v| − u^+, we see that u^+ ∈ W^{1,p}_0(Ω). One shows as well that u^- ∈ W^{1,p}_0(Ω), and the result follows. □

Corollary 5.3.6. Let 1 ≤ p ≤ ∞ and let M ≥ 0. If u ∈ W^{1,p}(Ω), then (u − M)^+ ∈ W^{1,p}(Ω) and

    ∇(u − M)^+ =  ∇u  if u(x) > M,
                  0   if u(x) ≤ M,    (5.3.3)

a.e. in Ω. If p < ∞, then the mapping u ↦ (u − M)^+ is continuous W^{1,p}(Ω) → W^{1,p}(Ω). Moreover, if u ∈ W^{1,p}_0(Ω), then (u − M)^+ ∈ W^{1,p}_0(Ω).

Proof. The last property is a consequence of Corollary 5.3.5, since (u − M)^+ ≤ u^+ ∈ W^{1,p}_0(Ω). Next, observe that if Ω is bounded, then the conclusions are a consequence of Proposition 5.3.3, because u − M ∈ W^{1,p}(Ω) whenever u ∈ W^{1,p}(Ω). In particular, we see that for an arbitrary Ω, if u ∈ W^{1,p}(Ω), then (u − M)^+ ∈ W^{1,p}_{loc}(Ω) and (5.3.3) holds. In particular, |∇(u − M)^+| ≤ |∇u| ∈ L^p(Ω). Since (u − M)^+ ≤ u^+ ∈ L^p(Ω), we see that (u − M)^+ ∈ W^{1,p}(Ω) and that (5.3.3) holds.

It now remains to show the continuity of the mapping u ↦ (u − M)^+ when p < ∞. By the above observation, we may assume that Ω is unbounded. Given R > 0, let Ω_R = {x ∈ Ω; |x| < R} and U_R = Ω \ Ω_R. We argue by contradiction, and we consider a sequence (u_n)_{n≥0} ⊂ W^{1,p}(Ω) and u ∈ W^{1,p}(Ω) such that u_n → u in W^{1,p}(Ω) as n → ∞ and ‖(u_n − M)^+ − (u − M)^+‖_{W^{1,p}} ≥ ε > 0. Note that

    |(u_n − M)^+ − (u − M)^+| ≤ |u_n − u| → 0

in L^p(Ω), so that we may assume ‖∇(u_n − M)^+ − ∇(u − M)^+‖_{L^p} ≥ ε > 0. By possibly extracting a subsequence, we may also assume


146 5. APPENDIX: SOBOLEV SPACESthat there exists f ∈ L p (Ω) such that |∇u n | + |∇u| ≤ f a.e. Inparticular, it follows from (5.3.3) that |∇(u n − M) + − ∇(u − M) + | ≤|∇u n | + |∇u| ≤ f a.e. Therefore, by dominated convergence, we maychoose R large enough so that‖∇(u n − M) + − ∇(u − M) + ‖ L p (U R ) ≤ ε 4 .Finally, since Ω R is bounded, it follows that ‖∇(u n − M) + − ∇(u −M) + ‖ L p (Ω R ) → 0 as n → ∞. Therefore, for n large enough,‖∇(u n − M) + − ∇(u − M) + ‖ L p (Ω R ) ≤ ε 4 .We deduce that ‖∇(u n −M) + −∇(u−M) + ‖ Lp (Ω) ≤ ε/2, which yieldsa contradiction. This completes the proof.□Corollary 5.3.7. Let 1 ≤ p ≤ ∞, (u n ) n≥0 ⊂ W 1,p (Ω) andu ∈ W 1,p (Ω). If u n → u in W 1,p (Ω) as n → ∞, then there exist asubsequence (u nk ) k≥0 and v ∈ W 1,p (Ω) such that |u nk | ≤ v a.e. in Ωfor all k ≥ 0. If, in addition, p < ∞ and (u n ) n≥0 ⊂ W 1,p0 (Ω), thenone can choose v ∈ W 1,p0 (Ω).Proof. Let the subsequence (u nk ) k≥0 satisfy ‖u nk − u‖ W 1,p ≤2 −k−1 , so that ‖u nk+1 −u nk ‖ W 1,p ≤ 2 −k . It follows from Remark 5.3.4that |u nk+1 − u nk | ∈ W 1,p (Ω) and that ‖ |u nk+1 − u nk | ‖ W 1,p ≤ 2 −k .Thus, the seriesv = |u n0 | + ∑ j≥0|u nj+1 − u nj |,is normally convergent in W 1,p (Ω). Sinceu nk+1 = u n0 +k∑(u nj+1 − u nj ),j=0we see that |u nk+1 | ≤ v. The result follows, using again Remark 5.3.4in the case p < ∞ and (u n ) n≥0 ⊂ W 1,p0 (Ω). □Corollary 5.3.8. Let 1 ≤ p < ∞, 0 ≤ A, B ≤ ∞ and setE = {u ∈ W 1,p0 (Ω); −A ≤ u ≤ B a.e.},F = {u ∈ Cc∞ (Ω); −A ≤ u ≤ B}.


5.4. SOBOLEV’S INEQUALITIES 147It follows that E = F , where the closure is in W 1,p0 (Ω). In particular,{u ∈ W 1,p0 (Ω); u ≥ 0 a.e.} is the closure in W 1,p0 (Ω) of{u ∈ Cc ∞ (Ω); u ≥ 0}.Proof. We have F ⊂ E. Since E is clearly closed in W 1,p0 (Ω),we deduce that F ⊂ E. We now show the converse inclusion. Letu ∈ E and let (u n ) n≥0 ⊂ Cc ∞ (Ω) be such that u n → u in W 1,p0 (Ω).Setv n = max{−A, min{u n , B}} = u n + (u n + A) − − (u n − B) + .It follows from Corollary 5.3.6 that v n ∈ W 1,p0 (Ω) and thatv n −→n→∞u + (u + A) − − (u − B) + = u,in W 1,p0 (Ω). Thus if (v n ) n≥0 ⊂ F , then the conclusion follows. Sinceclearly v n ∈ C c (Ω), we need only show the following property: ifw ∈ E ∩ C c (Ω), then w ∈ F . To see this, let (ρ n ) n≥0 be a smoothingsequence and set ˜w n = ρ n ⋆ ˜w, where ˜w is the extension of w by 0outside Ω. Since w has compact support in Ω, we see that if n issufficiently large, then ˜w n also has compact support in Ω. Moreover,˜w n ∈ Cc ∞ (Ω), so that if w n = ( ˜w n ) |Ω , then w n ∈ Cc ∞ (Ω). In addition,˜w n → ˜w in W 1,p (R N ), so that w n → w in W 1,p0 (Ω). It remains <strong>to</strong>show that −A ≤ ˜w n ≤ B, which is immediate since −A ≤ ˜w ≤ B.This completes the proof.□5.4. Sobolev’s inequalitiesIn this section, we establish some Sobolev-type inequalities andembeddings. It is convenient <strong>to</strong> make the following definition.Definition 5.4.1. Given an integer m ≥ 0, 1 ≤ p ≤ ∞ and Ωand open subset of R N , we set|u| m,p,Ω = ∑‖D α u‖ L p (Ω).|α|=mWhen there is no risk of confusion, we seti.e. we omit the dependence on Ω.|u| m,p = |u| m,p,Ω ,We begin with inequalities for smooth functions on R N .following result is the main inequality of this section.The


Theorem 5.4.2 (Gagliardo-Nirenberg's inequality). Consider 1 ≤ p, q, r ≤ ∞ and let j, m be two integers, 0 ≤ j < m. If

    1/p − j/N = a (1/r − m/N) + (1 − a)/q,    (5.4.1)

for some a ∈ [j/m, 1] (a < 1 if r = N/(m − j) > 1), then there exists a constant C = C(N, m, j, a, q, r) such that

    |u|_{j,p} ≤ C |u|_{m,r}^a ‖u‖_{L^q}^{1−a},    (5.4.2)

for all u ∈ C_c^m(R^N).

The proof of Theorem 5.4.2 uses various important inequalities. The fundamental ingredients are Sobolev's inequality (Theorem 5.4.5), Morrey's inequality (Theorem 5.4.8), and an inequality for intermediate derivatives (Theorem 5.4.10). We begin with the following first-order Sobolev inequality.

Theorem 5.4.3. Let N ≥ 1. For every u ∈ C_c^1(R^N), we have

    ‖u‖_{L^{N/(N−1)}} ≤ (1/2) Π_{j=1}^N ‖∂u/∂x_j‖_{L^1}^{1/N}.    (5.4.3)

In particular,

    ‖u‖_{L^{N/(N−1)}} ≤ (1/(2N)) |u|_{1,1},    (5.4.4)

for all u ∈ C_c^1(R^N).

Proof. We proceed in three steps.

Step 1. The case N = 1. Given x ∈ R, we have

    u(x) = ∫_{−∞}^x u′(s) ds;

and so,

    |u(x)| ≤ ∫_{−∞}^x |u′(s)| ds.

As well,

    |u(x)| ≤ ∫_x^{+∞} |u′(s)| ds,

so that by summing up the two above inequalities,

    |u(x)| ≤ (1/2) ∫_{−∞}^{+∞} |u′(s)| ds,


5.4. SOBOLEV’S INEQUALITIES 149which yields (5.4.3) (and (5.4.4)) in the case N = 1.Step 2. Proof of (5.4.3). We assume N ≥ 2. For any 1 ≤ j ≤N, it follows from Step 1 that|u(x)| ≤ 1 ∫|∂ j u(x 1 , . . . , x j−1 , s, x j+1 , . . . , x N )| ds;2and so,|u(x)| N ≤ 2 −NRN ∏j=1∫R|∂ j u(x 1 , . . . , x j−1 , s, x j+1 , . . . , x N )| ds.Taking the (N − 1) th root and integrating on R N , we obtain∫N|u(x)| N−1 dx ≤R N ∫2 − NN−1N ∏R j=1( ∫N R) 1N−1|∂ j u(x 1 , . . . , x j−1 , s, x j+1 , . . . , x N )| ds .We observe that the right-hand side is the product of N functions, eachof which depends only on N −1 of the variables x 1 , . . . , x N (with a permutation).Therefore, integrating in each of the variables x 1 , . . . , x N ,we may apply Hölder’s inequality∫Ra 1N−11 · · · a 1∏N−1N−1N−1 ≤l=1( ∫ Ra l) 1N−1.For example, if we first integrate in x 1 , we obtain∫Rdx 1∫×N ∏j=1( ∫ RN∏) 1N−1|∂ j u(x 1 , . . . , x j−1 , s, x j+1 , . . . , x N )| ds=R j=2( ∫ R≤( ∫ ) 1N−1|∂ 1 u(s, x 2 , . . . , x N )| dsR) 1N−1|∂ j u(x 1 , . . . , x j−1 , s, x j+1 , . . . , x N )| ds( ∫ ) 1N−1|∂ 1 u(s, x 2 , . . . , x N )| dsR


150 5. APPENDIX: SOBOLEV SPACES×N∏j=2( ∫ R 2 |∂ j u(x 1 , . . . , x j−1 , s, x j+1 , . . . , x N )| dsdx 1) 1N−1.Integrating successively in each of the variables x 1 , . . . , x N , we obtainfinally the estimate (5.4.3).Step 3. Proof of (5.4.4). We claim that if (a j ) 1≤j≤N ∈ R Nwith a j ≥ 0, then( N ∏j=1a j) 1N≤ 1 NN∑a j . (5.4.5)The estimate (5.4.4) is a consequence of (5.4.3) and (5.4.5).claim (5.4.5) follows if show thatmax|x| 2 =1j=1j=1TheN∏x 2 j = N −N . (5.4.6)To prove (5.4.6), we observe that if the maximum is achieved at x,then there exists a Lagrange multiplier λ ∈ R such that F ′ (x) = λx,where F (x) = x 2 1 . . . x 2 N . This implies that∑2x i x 2 j = λx i ,j≠ifor all 1 ≤ i ≤ N. Since none of the x i vanishes (for the maximum isclearly positive), this implies that x 2 1 = · · · = x 2 N , from which (5.4.6)follows.□Corollary 5.4.4. Let 1 ≤ r ≤ N (r < N if N ≥ 2). If r ∗ > ris defined by1r ∗ = 1 r − 1 N ,then‖u‖ L r ∗ ≤ c N,r |u| 1,r , (5.4.7)for every u ∈ C 1 c (R N ), with c N,r = (N − 1)r/2N(N − r). (We usethe convention that (N − 1)/(N − 1) = 1 if N = 1.)Proof. The case N = 1 follows from Theorem 5.4.3, so we assumeN ≥ 2. Lett = N − 1N r∗ =(N − 1)rN − r .


5.4. SOBOLEV’S INEQUALITIES 151Since r ≥ 1, we have t ≥ 1. We observe thatNtN − 1 = (t − 1)r′ = r ∗ ,and we apply (5.4.4) with u replaced by |u| t−1 u, and we obtain‖u‖ t L r∗ ≤ (2N)−1 | |u| t−1 u| 1,1 . (5.4.8)It follows from (5.3.1) that ∂ j (|u| t−1 u) = t|u| t−1 ∂ j u for all 1 ≤ j ≤ N.Therefore, by Hölder’s inequality,‖∂ j (|u| t−1 u)‖ L 1≤ t‖u‖ t−1L (t−1)r′ ‖∂ ju‖ L r = t‖u‖ t−1L r∗ ‖∂ ju‖ L r.Thus | |u| t−1 u| 1,1 ≤ t‖u‖ t−1L r∗ |u| 1,r, and we deduce from (5.4.8) thatand (5.4.7) follows.‖u‖ t L r∗≤ (2N)−1 t‖u‖ t−1L r∗ |u| 1,r, (5.4.9)The following Sobolev’s inequality is now a consequence of Corollary5.4.4.Theorem 5.4.5 (Sobolev’s inequality). Let m ≤ N be an integer,let 1 ≤ r ≤ N/m (r < N/m if N ≥ 2), and let r ∗ > r be defined byIfthen1r ∗ = 1 r − m N .[(N − 1)r] mc N,m,r =(2N) ∏m (N − lr) , (5.4.10)1≤l≤m‖u‖ L r ∗ ≤ c N,m,r |u| m,r , (5.4.11)for all u ∈ C m c (R N ). (We use the convention that (N −1)/(N −1) = 1if N = 1.)Proof. We argue by induction on m. By Corollary 5.4.4, (5.4.11)holds for m = 1. Suppose it holds up <strong>to</strong> some m ≥ 1. We supposethat m + 1 < N and we show (5.4.11) at the level m + 1. Let 1 ≤ r


152 5. APPENDIX: SOBOLEV SPACESDefine p by1p = 1 r ∗ + 1 N = 1 r − m N , (5.4.12)so that r < p < r ∗ . It follows from Corollary 5.4.4 and the firstidentity in (5.4.12) that‖u‖ L r ∗ ≤ c N,p |u| 1,p .Next, it follows from the second identity in (5.4.12) and (5.4.11) applied<strong>to</strong> ∂ j u that‖∂ j u‖ L p ≤ c N,m,r |∂ j u| m,r ,for all 1 ≤ j ≤ N. We deduce that|u| 1,p ≤ c N,m,r |u| m+1,r ,and (5.4.11) at the level m + 1 follows with c N,m+1,r = c N,p c N,m,r ,i.e. (5.4.10).□Remark 5.4.6. Note that when N ≥ 2, the inequality ‖u‖ L ∞ ≤C|u| 1,N does not hold, for any constant C. Indeed, given 0 < θ < 1 −1/N, let f ∈ C ∞ (0, ∞) satisfy f(r) = | log r| θ for r ≤ 1/2 and f(r) = 0for r ≥ 1. Let (f n ) n≥1 ⊂ C ∞ ([0, ∞)) be such that f n (r) = f(r) forr ≥ 1/n and 0 ≤ f n (r) ≤ f(r) and |f n(r)| ′ ≤ |f ′ (r)| for all r > 0.Setting u n (x) = f n (|x|), one verifies easily that ‖u n ‖ L ∞ → ∞ andlim sup ‖∇u n ‖ L N < ∞ as n → ∞. More generally, a similar examplewith 0 < θ < 1 − m/N shows that the inequality ‖u‖ L ∞ ≤ C|u| m,N/mdoes not hold, for any constant C if 1 ≤ m < N.The following result, in the same spirit as Theorem 5.4.3 (caseN = 1) shows that the inequality ‖u‖ L ∞ ≤ C|u| N,1 holds in anydimension.Theorem 5.4.7. Given any N ≥ 1,for all u ∈ C N c (R N ).‖u‖ L ∞ ≤ 2 −N |u| N,1 , (5.4.13)Proof. Let y ∈ R N Integrating ∂ 1 · · · ∂ N u in x 1 on (−∞, y 1 )yields∂ 2 · · · ∂ N u(y 1 , x 2 , . . . , x N ) =∫ y1−∞∂ 1 · · · ∂ N u dx 1 .


Integrating successively in the variables x_2, ..., x_N, we obtain

    u(y) = ∫_{−∞}^{y_1} ⋯ ∫_{−∞}^{y_N} ∂_1 ⋯ ∂_N u  dx_1 ⋯ dx_N.

Therefore,

    |u(y)| ≤ ∫_{−∞}^{y_1} ∫_{−∞}^{y_2} ⋯ ∫_{−∞}^{y_N} |∂_1 ⋯ ∂_N u| dx_1 ⋯ dx_N.    (5.4.14)

We observe that instead of integrating in x_1 on (−∞, y_1), we might have integrated on (y_1, ∞), thus obtaining

    |u(y)| ≤ ∫_{y_1}^{∞} ∫_{−∞}^{y_2} ⋯ ∫_{−∞}^{y_N} |∂_1 ⋯ ∂_N u| dx_1 ⋯ dx_N.    (5.4.15)

Summing up (5.4.14) and (5.4.15), we obtain

    |u(y)| ≤ (1/2) ∫_{−∞}^{∞} ∫_{−∞}^{y_2} ⋯ ∫_{−∞}^{y_N} |∂_1 ⋯ ∂_N u| dx_1 ⋯ dx_N.

Repeating this argument for each of the variables, we deduce that

    |u(y)| ≤ 2^{−N} ∫_{R^N} |∂_1 ⋯ ∂_N u| ≤ 2^{−N} |u|_{N,1},

and the result follows since y is arbitrary. □

In the case of an exponent r > N, we have the following result.

Theorem 5.4.8 (Morrey's inequality). If r > N ≥ 1, then there exists a constant c(N) such that

    |u(x) − u(y)| ≤ c(N) (r/(r − N)) |x − y|^{1−N/r} |u|_{1,r},    (5.4.16)

for all u ∈ C_c^1(R^N). Moreover, if 1 ≤ q ≤ ∞ and a ∈ [0, 1) is defined by

    0 = a (1/r − 1/N) + (1 − a)/q,

then

    ‖u‖_{L^∞} ≤ c(N) (r/(r − N)) |u|_{1,r}^a ‖u‖_{L^q}^{1−a},    (5.4.17)

for all u ∈ C_c^1(R^N).


154 5. APPENDIX: SOBOLEV SPACESProof. In the following calculations, we denote by c(N) variousconstants that may change from line <strong>to</strong> line but depend only on N.Let z ∈ R N and ρ > 0, and set B = B(z, ρ). Consider x ∈ B, andassume for simplicity x = 0. We haveu(y) − u(0) =∫ 10∫d1dt u(ty) dt = y · ∇u(ty) dt,for all y ∈ B. Integrating on B and dividing by |B|, we obtain∫1u(y) dy − u(0) = 1 ∫ 1 ∫y · ∇u(ty) dy dt.|B||B|BSince∫∣ y · ∇u(ty) dy∣ ≤B0( ∫ ) 1 (∫ )|y| r′ rdy′ 1|∇u(ty)| r rdyBB0B( ∫ )= (N + r ′ ) − 1 r ′ γ 1 r ′N ρ1+ N r ′ t − N 1r |∇u(y)| r rdytB≤ (N + r ′ ) − 1 r ′ γ 1 r ′N ρ1+ N r ′ t − N r ‖∇u‖L r,where γ N is the measure of the unit sphere, we deduce∣ 1 ∫u(y) dy − u(0) ∣ ≤Nr|B|N − r (N + r′ ) − 1 r ′ γ − 1 rN ρ1− N r ‖∇u‖L r.BSince (N + r ′ ) − 1 r ′ γ − 1 rNis bounded uniformly in r > 1, it follows thatif B = B(z, ρ) and x ∈ B, then∣ 1 ∫ru(y) dy − u(x) ∣ ≤ c(N)|B|r − N ρ1− N r |u|1,r . (5.4.18)BLet now x 1 , x 2 ∈ R N , x 1 ≠ x 2 and let z = (x 1 + x 2 )/2 and ρ =|x 1 − x 2 |. Applying (5.4.18) successively with x = x 1 and x = x 2 andmaking the sum, we obtainr|u(x 1 ) − u(x 2 )| ≤ c(N)r − N |x 1 − x 2 | 1− N r ‖∇u‖L r,which proves (5.4.16).Consider now 1 ≤ q ≤ ∞. We have∫∣ u(y) dy∣ ≤ |B| 1 q ′ ‖u‖ L q;B


5.4. SOBOLEV’S INEQUALITIES 155and so,1∫∣|B|Bu(y) dy∣ ≤ |B| − 1 q ‖u‖L q≤ c(N)ρ − N q ‖u‖L q.= N 1 −q γ 1 qNNρ− q ‖u‖L qTherefore, we deduce from (5.4.18) that|u(x)| ≤ c(N)ρ − N rq ‖u‖L q + c(N)r − N ρ1− N r ‖∇u‖L r.We now choose ρ = ‖u‖ α L q‖∇u‖−α Lr , with 1 = α(1 − N/r + N/q), andwe obtainr|u(x)| ≤ c(N)r − N ‖∇u‖a L r‖u‖1−a L . qSince x ∈ R N is arbitrary, this proves (5.4.17).□For the proof of Theorem 5.4.2, we will use the following (firs<strong>to</strong>rder)Gagliardo-Nirenberg’s inequality, which is a consequence ofSobolev and Morrey’s inequalities.Theorem 5.4.9. Let 1 ≤ p, q, r ≤ ∞ and assume1( 1p = a r − 1 )+ 1 − a , (5.4.19)N qfor some a ∈ [0, 1] (a < 1 if r = N ≥ 2). It follows that there exists aconstant C = C(N, p, q, r, a) such thatfor every u ∈ C 1 c (R N ).‖u‖ L p≤ C|u| a 1,r‖u‖ 1−aL q , (5.4.20)Proof. We consider separately several cases.The case r > N. Note that in this case, p ≥ q, so that byHölder’s inequality,‖u‖ L pp−q qp≤ ‖u‖L ‖u‖ p∞ L q.Estimating ‖u‖ L ∞ by (5.4.17), we deduce (5.4.20).The case r < N (thus N ≥ 2). Let r ∗ = Nr/(N − r). Itfollows from Hölder’s inequality that‖u‖ L p≤ ‖u‖ a Lr∗ ‖u‖1−aL , qwith a given by (5.4.19). (5.4.20) follows, estimating ‖u‖ L r ∗ by (5.4.7).


156 5. APPENDIX: SOBOLEV SPACESThe case r = N. Suppose first N = 1. Then by Hölder’sinequality,‖u‖ L p ≤ ‖u‖ a L ∞‖u‖1−a L , qand the result follows from (5.4.4). In the case N ≥ 2 (thus a < 1) wecannot use the same argument since ‖u‖ L ∞ is not estimated in termsof ‖∇u‖ L N . Instead, we apply (5.4.4) with u replaced by |u| t−1 u forsome t ≥ 1. As in the proof of (5.4.9), we obtain‖u‖ t L tNN−1≤ (2N) −1 t‖u‖ t−1(t−1)NL N−1|u| 1,r . (5.4.21)Suppose first that p ≥ q + N/(N − 1), and let t ≥ 1 be defined bytNN − 1 = p.It follows that (t − 1)N/(N − 1) ≥ q. By Hölder’s inequality,with‖u‖ (t−1)NL N−1N − 1(t − 1)N≤ ‖u‖ α L p‖u‖1−α L q , (5.4.22)=α(N − 1)tNIt follows from (5.4.21)-(5.4.22) thatand so,‖u‖ t L p≤ (2N)−1 t‖u‖ (t−1)αL p‖u‖ L p ≤ (t/2N)Since one verifies easily that1t − (t − 1)α = p − q = a,p+ 1 − α .q(t−1)(1−α)1t−(t−1)α ‖u‖t−(t−1)α‖u‖(t−1)(1−α)L |u| 1,r;qt−(t−1)α1,r .L q |u| 1(t − 1)(1 − α)t − (t − 1)α = q p = 1 − a,this yields (5.4.20), since t/2N ≤ p. For p < q + N/(N − 1), we applyHölder’s inequality‖u‖ L p3(p−q) 3q−p2p2p≤ ‖u‖ ‖u‖L . qL 3q(Note that 3q ≥ q + 2 ≥ p.) We estimate ‖u‖ L 3q by applying (5.4.20)with p = 3q, and the result follows.□We now study interpolation inequalities for intermediate derivatives.


5.4. SOBOLEV’S INEQUALITIES 157Theorem 5.4.10. Given an integer m ≥ 1, there exists a constantC m with the following property. If 0 ≤ j ≤ m and if 1 ≤ p, q, r ≤ ∞satisfymp = j r + m − 1 , (5.4.23)qthen for every i ∈ {1, . . . , N},‖∂ j i u‖ L p ≤ C m‖u‖ m−jmL q ‖∂m i u‖ j mL r, (5.4.24)for all u ∈ Cc m (R N ). Moreover,for all u ∈ C m c (R N ).|u| j,p ≤ C m ‖u‖ m−jmL |u| j q m m,r , (5.4.25)The proof of Theorem 5.4.10 is based on the following lemma.Lemma 5.4.11. If 1 ≤ p, q, r ≤ ∞ satisfy2p = 1 q + 1 r , (5.4.26)then‖u ′ ‖ L p ≤ 8‖u‖ 1 2L q‖u ′′ ‖ 1 2L r, (5.4.27)for all u ∈ C 2 c (R).Proof. We first observe that we need only prove (5.4.27) forr > 1 and p < ∞, since the general case can then be obtained byletting p ↑ ∞ or r ↓ 1. Thus we now assume p < ∞ and r > 1. Let0 ≤ γ ≤ 2 be defined byγ = 1 + 1 p − 1 r . (5.4.28)so that by (5.4.26),− γ = −1 − 1 q + 1 p . (5.4.29)We observe that p ≤ 2r by (5.4.26), so thatγ ≥ 1/2. (5.4.30)We now fix u ∈ Cc 2 (R) and, given any interval I ⊂ R, we setf(I) = |I| −γp ‖u‖ p L q (I) , (5.4.31)g(I) = |I| γp ‖u ′′ ‖ p L r (I) . (5.4.32)We now proceed in five steps.


158 5. APPENDIX: SOBOLEV SPACESStep 1.The estimate‖v ′ ‖ Lp (0,1) ≤ 4‖v‖ Lq (0,1) + 2‖v ′′ ‖ Lr (0,1), (5.4.33)holds for all v ∈ C 2 ([0, 1]). Let ξ(x) = 1 − 2x 2 for 0 ≤ x ≤ 1/2,ξ(x) = 2(1 − x) 2 for 1/2 ≤ x ≤ 1. It follows that ξ ∈ C 1 ([0, 1]) ∩C 2 ([0, 1] \ {1/2}) and 0 ≤ ξ ≤ 1, ξ(0) = 1, ξ ′ (0) = ξ(1) = ξ ′ (1) = 0.Moreover, ξ ′′ (x) = −4 for 0 ≤ x ≤ 1/2, ξ ′′ (x) = 4 for 1/2 ≤ x ≤ 1.<strong>An</strong> integration by parts yields∫ 1∫ 1/2 ∫ 1ξv ′′ = −v ′ (0) − 4 v + 4 v;001/2and so,Given 0 ≤ x ≤ 1, we deduce that|v ′ (x)| ≤ |v ′ (0)| +|v ′ (0)| ≤ 4‖v‖ L1 (0,1) + ‖v ′′ ‖ L1 (0,1).∫ xx ∈ [0, 1] being arbitrary, we conclude that0|v ′′ | ≤ 4‖v‖ L1 (0,1) + 2‖v ′′ ‖ L1 (0,1).‖v ′ ‖ L ∞ (0,1) ≤ 4‖v‖ L 1 (0,1) + 2‖v ′′ ‖ L 1 (0,1).The estimate (5.4.33) follows by Hölder’s inequality.Step 2. The estimate‖u ′ ‖ Lp (a,b) ≤ 4(b − a) −γ ‖u‖ Lq (a,b) + 2(b − a) γ ‖u ′′ ‖ Lr (a,b), (5.4.34)holds for all −∞ < a < b < ∞. Set v(x) = u(a + (b − a)x), so thatv ∈ C 2 ([0, 1]). The estimate (5.4.34) follows by applying (5.4.33) <strong>to</strong> vthen using (5.4.28)-(5.4.29).Step 3. If f and g are defined by (5.4.31)-(5.4.32), then theestimate ∫|u ′ | p ≤ 2 2p−1 (2 p f(I) + g(I)), (5.4.35)Iholds for all finite interval I ⊂ R. This follows from (5.4.34) and theelementary inequality (x + y) p ≤ 2 p−1 (x p + y p ).Step 4. Given any δ > 0, there exist a positive integer l anddisjoints intervals I 1 , . . . , I l such that ∪ 1≤j≤l I j ⊃ supp u and withthe following properties.l ≤ 1 + |supp u|/δ, (5.4.36)


5.4. SOBOLEV’S INEQUALITIES 159{either |I j | = δ and f(I j ) ≤ g(I j )or else |I j | > δ and f(I j ) = g(I j ),(5.4.37)for all 1 ≤ j ≤ l. Indeed, set x 0 = inf supp u and let I = (x 0 , x 0 +δ).If f(I) ≤ g(I), we let I 1 = I. If f(I) > g(I), we observe that thefunctions ϕ(t) = f(x 0 , x 0 +δ+t), ψ(t) = g(x 0 , x 0 +δ+t) satisfy ϕ(0) >ψ(0) and ϕ(t) → 0, ψ(t) → ∞ as t → ∞ (we use (5.4.30)). Thus thereexists t > 0 such that ϕ(t) = ψ(t) and we let I 1 = (x 0 , x 0 + δ + t).We then see that I 1 satisfies (5.4.37). If supp u ⊄ I 1 , we can repeatthis construction. Since supp u is compact and |I j | ≥ δ, we obtain ina finite number of steps, say l, a collection of disjoint open intervalsI j that all satisfy (5.4.37) and such that∪ I j ⊂ supp u ⊂ ∪ I j,1≤j≤l−1 1≤j≤lwhich clearly imply (5.4.36).Step 5. Conclusion. Fix δ > 0. It follows from Step 4and (5.4.35) that∫l∑|u ′ | p ≤ 2 2p−1 [2 p f(I j ) + g(I j )], (5.4.38)We letso that by (5.4.37)Rj=1A 1 = {j ∈ {1, · · · , l}; |I j | = δ},A 2 = {j ∈ {1, · · · , l}; |I j | > δ},{1, · · · , l} = A 1 ∪ A 2 . (5.4.39)If j ∈ A 1 , then f(I j ) ≤ g(I j ) by (5.4.37), so that2 p f(I j ) + g(I j ) ≤ (2 p + 1)g(I j ) ≤ (2 p + 1)|I j | γp ‖u ′′ ‖ p L r (I)≤ (2 p + 1)δ γp ‖u ′′ ‖ p L r (R) , (5.4.40)where we applied (5.4.32). We deduce from (5.4.40) and (5.4.36) that∑j∈A 1[2 p f(I j )+g(I j )] ≤ (2 p +1)(1+|supp u|/δ)δ γp ‖u ′′ ‖ p L r (R) . (5.4.41)If j ∈ A 2 , then f(I j ) = g(I j ) by (5.4.37). Sincef(I j )g(I j ) = ‖u‖ p L q (I j) ‖u′′ ‖ p L r (I j) ,


160 5. APPENDIX: SOBOLEV SPACESwe see thatf(I j ) = g(I j ) = ‖u‖ p 2L q (I j) ‖u′′ ‖ p 2L r (I j) , (5.4.42)for all j ∈ A 2 . It follows from (5.4.42) that∑j∈A 2[2 p f(I j ) + g(I j )] ≤ (2 p + 1) ∑j∈A 2‖u‖ p 2Lq (I j) ‖u′′ ‖ p 2Lr (I j) . (5.4.43)Using (5.4.26) and applying Hölder’s inequality for the sum in theright-hand side of (5.4.43), we deduce that∑( ∑ ) p ( ∑ ) p[2 p f(I j )+g(I j )] ≤ (2 p +1) ‖u‖ q 2qL q (I j)‖u ′′ ‖ r 2rL r (I j) ,j∈A 2 j∈A 2 j∈A 2which implies∑j∈A 2[2 p f(I j ) + g(I j )] ≤ (2 p + 1)‖u‖ p 2Lq (R) ‖u′′ ‖ p 2Lr (R) . (5.4.44)We now deduce from (5.4.38), (5.4.39), (5.4.41) and (5.4.44) that∫|u ′ | p ≤ 2 2p−1 (2 p + 1)×R[‖u‖ p 2Lq (R) ‖u′′ ‖ p 2Lr (R) + (1 + |supp u|/δ)δγp ‖u ′′ ‖ p L r (R)]. (5.4.45)Note that by (5.4.28)γp = 1 + p − p r > 1,since r > 1. Letting δ ↓ 0 in (5.4.45) we obtain∫|u ′ | p ≤ 2 2p−1 (2 p + 1)‖u‖ p 2L q (R) ‖u′′ ‖ p 2L r (R) .RSince 2 p + 1 ≤ 2 p+1 , we see that 2 2p−1 (2 p + 1) ≤ 2 3p and the estimate(5.4.27) follows by taking the p th root of the above inequality.□Remark 5.4.12. The proof of Lemma 5.4.11 is fairly technical.Note, however, that some special cases of the inequality (5.4.27) canbe established very easily. For example, if p = q = r, then settingf = −u ′′ + u, we see that u = (1/2)e −|·| ⋆ f, so that u ′ = φ ⋆ f, withφ(x) = (x/2|x|)e −|x| . By Young’s inequality, ‖u ′ ‖ L p ≤ ‖φ‖ L 1‖f‖ L p =


5.4. SOBOLEV’S INEQUALITIES 161‖f‖ L p. Since ‖f‖ L p ≤ ‖u ′′ ‖ L p + ‖u‖ L p, (5.4.27) follows. <strong>An</strong>other easycase is p = 2 (so that r = q ′ ). Indeed, u ′2 = (uu ′ ) ′ − uu ′′ , so that∫ ∫u ′2 = − uu ′′ ≤ ‖u ′′ ‖ L r‖u‖ L q,by Hölder’s inequality, which shows (5.4.27). Note that in both thesesimple cases, one obtains (5.4.27) with the (better) constant 1.Proof of Theorem 5.4.10. The cases j = 0 and j = m beingtrivial, we assume 1 ≤ j ≤ m − 1 and we proceed in four steps.Step 1. If 1 ≤ p, q, r ≤ ∞ satisfy (5.4.26) and i ∈ {1, . . . , N},then‖∂ i u‖ L p (R N ) ≤ 8‖u‖ 1 2L q (R N ) ‖∂2 i u‖ 1 2L r (R N ) , (5.4.46)for all u ∈ C 2 c (R N ). Indeed, assume first p < ∞ and let x =(x 1 , . . . , x N ) ∈ R N . We apply (5.4.27) <strong>to</strong> the functionv(t) = u(x 1 , . . . , x i−1 , t, x i+1 , . . . , x N ),and we deduce that∫|v ′ (t)| p ≤ 8 p(∫ |v(t)| q) p (∫2q|v ′′ (t)| r) p2r.RRRIntegrating on R N−1 in the variables (x 1 , . . . , x i−1 , x i+1 , . . . , x N ) andapplying Hölder’s inequality <strong>to</strong> the right-hand side (note that 2q/p +2r/p = 1), we deduce (5.4.46). The case p = ∞ follows by lettingp ↑ ∞ in (5.4.46).Step 2. If m ≥ 2 and if 1 ≤ p, q, r ≤ ∞ satisfymp = 1 r + m − 1 , (5.4.47)qand i ∈ {1, . . . , N}, then‖∂ i u‖ L p≤ 8 2m−3 ‖u‖ m−1mL q ‖∂m i u‖ 1 mL r, (5.4.48)for all u ∈ C m c (R N ). We argue by induction on m. By Step 1,(5.4.48) holds for m = 2. Suppose it holds up <strong>to</strong> some m ≥ 2. Assumeand let t be defined bym + 1p= 1 r + m q , (5.4.49)mt = 1 r + m − 1 . (5.4.50)p


162 5. APPENDIX: SOBOLEV SPACESIn particular, min{p, r} ≤ t ≤ max{p, r}, so that 1 ≤ t ≤ ∞. Applying(5.4.48) <strong>to</strong> ∂ i u, we obtain‖∂ 2 i u‖ L t ≤ 8 2m−3 ‖∂ m+1iu‖ 1 mL r‖∂ i u‖ m−1mL . (5.4.51)pNow, we observe that by (5.4.49) and (5.4.50), 2/p = 1/q + 1/t, so itfollows from (5.4.46) that‖∂ i u‖ L p ≤ 8‖∂ 2 i u‖ 1 2L t‖u‖ 1 2L q. (5.4.52)(5.4.51) and (5.4.52) now yield (5.4.48) at the level m + 1.Step 3. Proof of (5.4.24). We argue by induction on m ≥ 2.For m = 2, the result follows from Step 1. Suppose now that up <strong>to</strong>some m ≥ 2, (5.4.24) holds for all 1 ≤ j ≤ m − 1. Assume 1 ≤ j ≤ m,and let t be defined bym + 1pmp = j − 1rWe first note that by (5.4.54) and (5.4.53),m + 1 − jt= j r + m + 1 − j , (5.4.53)q+ m + 1 − j . (5.4.54)t= m m + 1− j − 1m + 1 p r= m m + 1 − j+ m + 1 − jm + 1 q (m + 1)r ≥ 0,so that 0 ≤ t ≤ ∞. Also, by the above identity, and since q, r ≥ 1,m + 1 − jt= m m + 1 − jm + 1 qm(m + 1 − j)≤m + 1+ m + 1 − j(m + 1)r+ m + 1 − jm + 1= m + 1 − j,so that t ≥ 1. Applying (5.4.24) (with j replaced by j − 1) <strong>to</strong> ∂ i u, weobtain‖∂ j i u‖ L p ≤ C m‖∂ m+1iu‖ j−1mL r‖∂ iu‖ m−j+1mL. (5.4.55)tNow, we observe that by (5.4.53) and (5.4.54), (m+1)/t = 1/r +m/q,so it follows from (5.4.48) (applied with m replaced by m + 1) that‖∂ i u‖ L t ≤ 8 2m−1 ‖∂ m+1i u‖ 1m+1L r‖u‖ mm+1L q . (5.4.56)(5.4.55) and (5.4.56) now yield (5.4.24) at the level m + 1.


5.4. SOBOLEV’S INEQUALITIES 163Step 4. Proof of (5.4.25). We note that by (5.4.48),|u| 1,p ≤ 8 2m−3 ‖u‖ m−1mL |u| 1 q m m,r ,whenever (5.4.47) holds. The proof is now parallel <strong>to</strong> the proof of theestimate (5.4.24) in Step 3 above.□Proof of Theorem 5.4.2. We consider several cases, and weproceed in three steps.Step 1. The case (m − j)r < N. Let t be defined by1t = 1 r − m − jN , (5.4.57)so that r < t < ∞. It follows from Sobolev’s inequality (5.4.11)applied <strong>to</strong> j th derivatives of u thatNext, let s be defined by|u| j,t ≤ C|u| m,r . (5.4.58)ms = j r + m − 1 , (5.4.59)qso that min{q, r} ≤ s ≤ max{q, r}. It follows from the interpolationinequality (5.4.25) that|u| j,s ≤ C‖u‖ m−jmL |u| j q m m,r . (5.4.60)It follows from (5.4.1), (5.4.57) and (5.4.59) thatwith1p = θ t + 1 − θ ,sθ = ma − jm − j .Since j/m ≤ a ≤ 1, we see that 0 ≤ θ ≤ 1, and we deduce fromHölder’s inequality that|u| j,p ≤ |u| θ j,t|u| 1−θj,s≤ C|u| θ m,r(‖u‖ m−jmL |u| j q m m,r ) 1−θ ,where we used (5.4.58) and (5.4.60). The estimate (5.4.2) follows.Step 2. The case (m − j)r ≥ N and a = 1. Note that if a = 1,then by (5.4.1),1p = 1 r − m − jN ≤ 0.


164 5. APPENDIX: SOBOLEV SPACESThe only possibility is (m − j)r = N and p = ∞. This is allowed onlyif r = 1, and the result is then a consequence of Theorem 5.4.7.Step 3. The case (m − j)r ≥ N and a < 1. Let t be definedbymt = j r + m − 1 , (5.4.61)qso that min{q, r} ≤ t ≤ max{q, r}. It follows from the interpolationinequality (5.4.25) thatNext, let s be defined by|u| j,t ≤ C‖u‖ m−jmL |u| j q m m,r . (5.4.62)m − js= 1 r + m − j − 1 , (5.4.63)pso that min{p, r} ≤ s ≤ max{p, r}. It follows from the interpolationinequality (5.4.25) applied <strong>to</strong> j th order derivatives of u that|u| j+1,s ≤ C|u|Next, let α ∈ [0, 1) be defined byα =1 m−j−1m−j m−jm,r |u|j,p . (5.4.64)(m − j)(a − j/m)1 − a + (m − j)(a − j/m) , (5.4.65)so that by (5.4.61), (5.4.63) and (5.4.1)1( 1p = α s − 1 )+ 1 − α .N tIt follows from Theorem 5.4.9 applied <strong>to</strong> j th order derivatives of uthat|u| j,p ≤ C|u| α j+1,s|u| 1−αj,t . (5.4.66)We deduce from (5.4.66), (5.4.64) and (5.4.62) thatSince by (5.4.65),this yields (5.4.2).α+(j/m)(1−α)(m−j) (1−j/m)(1−α)(m−j)α+(1−α)(m−j)α+(1−α)(m−j)|u| j,p ≤ C|u| m,r‖u‖L .qa =α + (j/m)(1 − α)(m − j),α + (1 − α)(m − j)□
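As a concrete instance of (5.4.1)–(5.4.2), take j = 1, m = 2 and p = q = r = 2; then (5.4.1) holds with a = 1/2, independently of N, and by Remark 5.4.12 the constant can even be taken equal to 1 in dimension one: ‖u′‖²_{L²} ≤ ‖u‖_{L²}‖u″‖_{L²}. The quadrature sketch below checks this particular inequality for a Gaussian (the test function and grid are illustrative choices, and finite differences stand in for the derivatives).

```python
import numpy as np

# Check of the special case of (5.4.2) with j = 1, m = 2, p = q = r = 2, a = 1/2:
#   ||u'||_{L^2}^2 <= ||u||_{L^2} ||u''||_{L^2}   (cf. Remark 5.4.12, constant 1).
x = np.linspace(-12.0, 12.0, 200001)
dx = x[1] - x[0]
u = np.exp(-x ** 2 / 2)              # Gaussian; decays fast enough for the quadrature

du = np.gradient(u, x)
d2u = np.gradient(du, x)

l2 = lambda f: np.sqrt(np.sum(f ** 2) * dx)
print(l2(du) ** 2, "<=", l2(u) * l2(d2u))
# exact values: ||u'||^2 = sqrt(pi)/2 ~ 0.886,  ||u|| ||u''|| = sqrt(3 pi)/2 ~ 1.535
```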


5.4. SOBOLEV’S INEQUALITIES 165Corollary 5.4.13 (Gagliardo-Nirenberg’s inequality). Let Ω ⊂R N be an open subset. Let 1 ≤ p, q, r ≤ ∞ and let j, m be two integers,0 ≤ j < m. Assume that (5.4.1) holds for some a ∈ [j/m, 1] (a < 1 ifr = N/(m − j) > 1), and suppose further that r < ∞. It follows thatD α u ∈ L p (Ω) for all u ∈ W m,r0 (Ω) ∩ L q (Ω) if |α| = j. Moreover, theinequality (5.4.2) holds for all u ∈ W m,r0 (Ω) ∩ L q (Ω).Proof. We first consider the case Ω = R N . Let u ∈ W m,r (R N )∩L q (R N ) and let (u n ) n≥0 ⊂ Cc∞ (R N ) be the sequence constructed byregularization and truncation in the proof of Theorem 5.1.8, so thatu n −→n→∞u in W m,r (R N ) and ‖u n ‖ L q ≤ ‖u‖ L q. (5.4.67)Applying (5.4.2) <strong>to</strong> u n − u l , we obtain|u n − u l | j,p ≤ C|u n − u l | a m,r‖u n − u l ‖ 1−aL q . (5.4.68)Let α be a multi-index with |α| = j. It follows from (5.4.67)-(5.4.68)that D α u n is a Cauchy sequence in L p (R N ). Thus D α u n has a limitv in L p (R N ). In particular,∫∫D α u n ϕ −→ vϕ,R N n→∞R Nfor all ϕ ∈ C j c (R N ). Since∫R N D α u n ϕ = (−1) j ∫∫u n D α ϕ −→ (−1) j uD α ϕ,R N n→∞R Nby (5.4.67), we see that D α u = v ∈ L p (R N ) and|u n − u| j,p −→n→∞0, (5.4.69)which proves the first part of the result. Finally, we apply the inequality(5.4.2) <strong>to</strong> u n . Letting n → ∞ and using (5.4.67) and (5.4.69), wededuce that (5.4.2) holds for u.□We are now in a position <strong>to</strong> state and prove the Sobolev embeddingtheorems. We restrict ourselves <strong>to</strong> functions of W m,p0 (Ω). Similarstatements hold for functions of W m,p (Ω), but they are obtained byusing extension opera<strong>to</strong>rs, so they require a certain amount of regularityof the domain. For functions of W m,p0 (Ω), instead, no regularityassumption on Ω is necessary. Furthermore, these results are sufficientfor our purpose. Our first result in this direction is the following.


Theorem 5.4.14. Let Ω ⊂ R^N be an open subset, let 1 ≤ r < ∞ and let m ∈ N, m ≥ 1.
(i) If mr < N, then W^{m,r}_0(Ω) ↪ L^p(Ω) for all p such that r ≤ p ≤ Nr/(N − mr).
(ii) If m = N and r = 1, then W^{m,r}_0(Ω) ↪ L^p(Ω) for all p such that r ≤ p ≤ ∞. Moreover, W^{m,r}_0(Ω) ↪ C_0(Ω).
(iii) If mr = N and r > 1, then W^{m,r}_0(Ω) ↪ L^p(Ω) for all p such that r ≤ p < ∞.
(iv) If mr > N, then W^{m,r}_0(Ω) ↪ L^p(Ω) for all p such that r ≤ p ≤ ∞. Moreover, W^{m,r}_0(Ω) ↪ C_0(Ω).

Proof. The first embeddings of Properties (i)–(iv) follow from Corollary 5.4.13 by taking j = 0, q = r and a = N(p − r)/(mpr). The embeddings W^{m,r}_0(Ω) ↪ C_0(Ω) in (ii) and (iv) follow from the density of C_c^∞(Ω) in W^{m,r}_0(Ω) and the embedding W^{m,r}_0(Ω) ↪ L^∞(Ω). □

The next result is the general case of Sobolev's embedding for functions of W^{m,p}_0(Ω).

Theorem 5.4.15. Let Ω ⊂ R^N be an open subset, let 1 ≤ r < ∞ and let m, j ∈ N, m ≥ 1.
(i) If mr < N, then W^{m+j,r}_0(Ω) ↪ W^{j,p}_0(Ω) for all p such that r ≤ p ≤ Nr/(N − mr).
(ii) If m = N and r = 1, then W^{m+j,r}_0(Ω) ↪ W^{j,p}_0(Ω) ∩ W^{j,∞}(Ω) for all p such that r ≤ p < ∞. Moreover, W^{m+j,r}_0(Ω) ↪ C^j_0(Ω).
(iii) If mr = N and r > 1, then W^{m+j,r}_0(Ω) ↪ W^{j,p}_0(Ω) for all p such that r ≤ p < ∞.
(iv) If mr > N, then W^{m+j,r}_0(Ω) ↪ W^{j,p}_0(Ω) ∩ W^{j,∞}(Ω) for all p such that r ≤ p < ∞. Moreover, W^{m+j,r}_0(Ω) ↪ C^j_0(Ω).

Proof. We first prove (iv). Applying Theorem 5.4.14 (iv) to D^α u with |α| ≤ j, we deduce that W^{m+j,r}_0(Ω) ↪ W^{j,p}(Ω) for all r ≤ p ≤ ∞. The embedding W^{m+j,r}_0(Ω) ↪ W^{j,p}_0(Ω) if r ≤ p < ∞ follows from the density of C_c^∞(Ω) in W^{m+j,r}_0(Ω) and the embedding W^{m+j,r}_0(Ω) ↪ W^{j,p}(Ω). Next, the embedding W^{m+j,r}_0(Ω) ↪ C^j_0(Ω) follows from the density of C_c^∞(Ω) in W^{m+j,r}_0(Ω) and the embedding W^{m+j,r}_0(Ω) ↪ W^{j,∞}(Ω). The proofs of (i), (ii) and (iii) are similar, using properties (i), (ii) and (iii) of Theorem 5.4.14, respectively. □
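The exponent ranges in Theorem 5.4.14 are mechanical to evaluate. The helper below is a hypothetical convenience function (not part of the text; its name and interface are my own) returning, for given N, m, r, the supremum of the admissible exponents p in W^{m,r}_0(Ω) ↪ L^p(Ω), and whether that supremum is itself allowed, following cases (i)–(iv).

```python
import math

def sobolev_sup_p(N: int, m: int, r: float):
    """Supremum of exponents p with W^{m,r}_0 embedded in L^p, per Theorem 5.4.14.

    Returns (sup_p, attained), where `attained` says whether p = sup_p is allowed.
    """
    if m * r < N:                      # case (i): p up to Nr/(N - mr), endpoint included
        return N * r / (N - m * r), True
    if m * r == N:
        if r == 1:                     # case (ii): every p up to infinity, plus C_0
            return math.inf, True
        return math.inf, False         # case (iii): every finite p, but not p = infinity
    return math.inf, True              # case (iv): mr > N, up to L^infinity and C_0

# examples: H^1 in R^3 -> L^6; H^1 in R^2 -> every finite L^p; W^{1,4} in R^3 -> L^infinity
print(sobolev_sup_p(3, 1, 2))   # (6.0, True)
print(sobolev_sup_p(2, 1, 2))   # (inf, False)
print(sobolev_sup_p(3, 1, 4))   # (inf, True)
```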


We now apply Morrey's inequality to obtain embeddings in spaces of the type C^{j,α}(Ω).

Theorem 5.4.16. Let Ω ⊂ R^N be an open subset, let 1 ≤ r < ∞. Let m ≥ 1 be the smallest integer such that mr > N. It follows that for all integers j ≥ 0, W^{m+j,r}_0(Ω) ↪ C^j_0(Ω) ∩ C^{j,α}(Ω), with α = m − (N/r) if (m − 1)r < N, and α any number in (0, 1) if (m − 1)r = N.

Proof. Let u ∈ C_c^∞(Ω). It follows from Theorem 5.4.15 (iv) that

    ‖u‖_{W^{j,∞}} ≤ C‖u‖_{W^{m+j,r}}.    (5.4.70)

Let α be a multi-index with |α| = j. Setting v = D^α u, we see that

    ‖v‖_{W^{m,r}} ≤ ‖u‖_{W^{m+j,r}}.    (5.4.71)

Let

    p = Nr/(N − (m − 1)r)    if (m − 1)r < N,
    max{r, N} < p < ∞        if (m − 1)r = N,

so that p > N. It follows from Theorem 5.4.15 (i) and (5.4.71) that

    ‖v‖_{W^{1,p}} ≤ C‖v‖_{W^{m,r}} ≤ C‖u‖_{W^{m+j,r}}.    (5.4.72)

Finally, we deduce from (5.4.72) and Morrey's inequality (5.4.16) that

    |v(x) − v(y)| ≤ C|x − y|^{1−N/p} ‖u‖_{W^{m+j,r}},

for all x, y ∈ R^N. Applying (5.4.70), we conclude that ‖u‖_{C^{j,α}} ≤ C‖u‖_{W^{m+j,r}}, with α = 1 − (N/p), which is the desired estimate. □

The following two results are applications of Sobolev's embedding theorems.

Corollary 5.4.17. Given any 1 ≤ r ≤ ∞, ⋂_{m≥0} W^{m,r}_{loc}(Ω) = C^∞(Ω).

Proof. It is clear that C^∞(Ω) ⊂ W^{m,r}_{loc}(Ω) for all m ≥ 0. Conversely, suppose u ∈ W^{m,r}_{loc}(Ω) for all m ≥ 0. Let ω ⊂⊂ Ω and let ϕ ∈ C_c^∞(Ω) satisfy ϕ(x) = 1 for x ∈ ω. It follows from Proposition 5.1.14 that v = ϕu ∈ W^{m,1}_0(Ω) for all m ≥ 0, so that v ∈ C^∞(Ω) by Theorem 5.4.15. Thus u ∈ C^∞(ω) and the result follows, since ω is arbitrary. □
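Similarly, the Hölder exponent produced by Theorem 5.4.16 depends only on N and r. The sketch below is a hypothetical helper (again, names and interface are my own) that computes the smallest m with mr > N and the corresponding α; in the borderline case (m − 1)r = N it reports that every α ∈ (0, 1) is admissible.

```python
import math

def holder_exponent(N: int, r: float):
    """Smallest m with m*r > N and the Holder exponent alpha of Theorem 5.4.16.

    Returns (m, alpha); alpha is None in the borderline case (m - 1)*r == N,
    where every exponent in (0, 1) is admissible.
    """
    m = math.floor(N / r) + 1          # smallest integer with m*r > N
    if (m - 1) * r < N:
        return m, m - N / r
    return m, None                      # (m - 1)*r == N: any alpha in (0, 1)

print(holder_exponent(1, 2))   # (1, 0.5):  H^1 on the line embeds in C^{0,1/2}
print(holder_exponent(3, 4))   # (1, 0.25): W^{1,4} in dimension 3 embeds in C^{0,1/4}
print(holder_exponent(2, 2))   # (2, None): borderline case, any alpha < 1
```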


168 5. APPENDIX: SOBOLEV SPACESProposition 5.4.18. Let 1 ≤ p ≤ ∞, m ∈ N, m ≥ 1 and u ∈W m,ploc(Ω). If Dα u ∈ C(Ω) for all multi-index α with |α| = m, thenu ∈ C m (Ω).Proof. We proceed in three steps.Step 1. u ∈ C(Ω). Suppose u ∈ L q0loc (Ω) for some q 0 ≤ N andlet q 0 ≤ q 1 < ∞ satisfy1≥ 1 − 1 q 1 q 0 N .Let ϕ ∈ Cc ∞ (Ω) and set v = ϕu, so that v ∈ L q0 (Ω). Since ∇v =ϕ∇u + u∇ϕ , we deduce that ∇v ∈ L q0 (Ω). v being compactly supportedin Ω, it follows that v ∈ W 1,q00 (Ω) (see Remark 5.1.10 (i)).Applying Theorem 5.4.14, we see that v ∈ L q1 (Ω) and, since ϕ is arbitrary,we deduce that u ∈ L q1loc(Ω). We now iterate the above argumentand, starting from q 0 = 1, we construct q 0 < · · · < q k−1 ≤ N < q ksuch that u ∈ L qjloc (Ω) for 0 ≤ j ≤ k. Finally, let ϕ ∈ C∞ c (Ω) andset v = ϕu. We see as above that v ∈ W 1,q k(Ω), and it follows fromTheorem 5.4.14 that v ∈ C(Ω). Since ϕ is arbitrary, we conclude thatu ∈ C(Ω).Step 2. The case m = 1. It follows from Step 1 that u ∈ C(Ω).Since ∇u ∈ C(Ω) by assumption, we have in particular u ∈ W 1,∞loc(Ω).Let ω ⊂⊂ Ω and let ϕ ∈ Cc∞ (Ω) satisfy ϕ(x) = 1 for x ∈ ω. Setv = ϕu, so that v ∈ W 1,∞ (Ω) by Proposition 5.1.14. Since v issupported in a compact subset of Ω, it follows that if{v(x) if x ∈ Ω,w(x) =0 if x ∈ R N \ Ω,then w ∈ W 1,∞ (R N ) ∩ C(R N ). Moreover, one verifies easily that{u∇ϕ + ϕ∇u in Ω,∇w =0 in R N \ Ω,so that ∇w ∈ C(R N ). Since w and ∇w have compact support, wesee that w, ∇w ∈ C b,u (R N ). Applying Proposition 5.1.12, we deducethat w ∈ C 1 (R N ), and since w = u in ω, it follows that u ∈ C 1 (ω).Hence the result, since ω is arbitrary.Step 3. The case m ≥ 2. We proceed by induction on m.By Step 2, the result holds for m = 1. Suppose it holds up <strong>to</strong> somem ≥ 1. Let u ∈ W m+1,ploc(Ω) satisfy D α u ∈ C(Ω) for all multi-index


5.4. SOBOLEV’S INEQUALITIES 169α with |α| = m + 1. Consider 1 ≤ j ≤ N and set v = ∂ j u. Itfollows that v ∈ W m,ploc(Ω) and Dα u ∈ C(Ω) for all multi-index αwith |α| = m. Applying the result at the level m, we deduce thatv ∈ C m (Ω). Thus ∇u ∈ C m (Ω). In particular, ∇u ∈ C(Ω) and wededuce from Step 2 that u ∈ C 1 (Ω). Since ∇u ∈ C m (Ω), we concludethat u ∈ C m+1 (Ω).□If |Ω| < ∞, then ‖u‖ L r is dominated only in terms of ‖∇u‖ L r forfunctions of W 1,r0 (Ω). This is the object of the following result.Theorem 5.4.19 (Poincaré’s inequality). If |Ω| < ∞ and 1 ≤ r


170 5. APPENDIX: SOBOLEV SPACESnot the dual of L ∞ (Ω). In this case, we argue directly as follows. Itfollows from Theorem 5.4.14 that W m,r0 (Ω) ↩→ L ∞ (Ω). Define∫eu(ϕ) = uϕ,for all u ∈ L 1 (Ω) and ϕ ∈ W m,r0 (Ω). We have|eu(ϕ)| ≤ ‖u‖ L 1‖ϕ‖ L ∞Ω≤ ‖u‖ L 1‖ϕ‖ Wm,r,0so that e defines a mapping L 1 (Ω) → W −m,r′ (Ω). This mapping isinjective, because if (eu, ϕ) W −m,r ′ ,W m,r = 0 for all ϕ ∈ W m,r00 (Ω), thenin particular ∫ Ω uϕ = 0 for all ϕ ∈ C∞ c (Ω), which implies u = 0. Itremains <strong>to</strong> show that the embedding e : L 1 (Ω) → W −m,r′ (Ω) is dense.To prove this, we observe that by the density of Cc∞ (Ω) in L r′ (Ω)and of L r′ (Ω) in W −m,r′ (Ω) (see just above), it follows that Cc∞ (Ω) isdense in in W −m,r′ (Ω). The result follows, since Cc ∞ (Ω) ⊂ L 1 (Ω). □5.5. Compactness propertiesWe now study the compact embeddings of W 1,r0 (Ω). We beginwith a local compactness result in R N .Proposition 5.5.1. Let 1 ≤ r < ∞ and let K be a bounded subse<strong>to</strong>f W 1,r (R N ). For every R < ∞, K R := {u |BR ; u ∈ K} is relativelycompact in L r (B R ), where B R = B(0, R).Proof. We proceed in three steps.Step 1. If (ρ n ) n≥1 is a smoothing sequence, then‖u − ρ n ⋆ u‖ L r ≤ C n ‖∇u‖ Lr, (5.5.1)for all u ∈ W 1,r (R N ), where C = (∫ |y| r ρ(y) dy ) 1 r. By density, weR Nneed only show (5.5.1) for u ∈ Cc∞ (R N ). We claim that∫|u(x − y) − u(x)| r dx ≤ |y| r ‖∇u‖ r Lr. (5.5.2)R NIndeed,u(x − y) − u(x) =∫ 10∫d1dt u(x − ty) dt = y · ∇u(x − ty) dt;0


5.5. COMPACTNESS PROPERTIES 171and so,|u(x−y)−u(x)| ≤ |y|∫ 10( ∫ 1|∇u(x−ty)| dt ≤ |y| |∇u(x−ty)| r dt(5.5.2) follows after integration in x. Next, since ‖ρ n ‖ L 1 = 1,∫ρ n ⋆ u(x) − u(x) = ρ n (y)(u(x − y) − u(x)) dyR∫N= ρ n (y) r−1r [ρn (y) 1 r (u(x − y) − u(x))] dy.R NBy Hölder’s inequality, we deduce∫|ρ n ⋆ u(x) − u(x)| r ≤ ρ n (y)|u(x − y) − u(x)| r dy.R NIntegrating the above inequality on R N and applying (5.5.2), we find‖ρ n ⋆ u − u‖ r L r ≤ ‖∇u‖r L r ∫R N |y| r ρ n (y) dy.Hence (5.5.1).Step 2. If (ρ n ) n≥1 is as in Step 1, then0) 1r‖ρ n ⋆ u‖ W 1,∞ ≤ n N r ‖ρ‖L r ′ ‖u‖ W 1,r, (5.5.3)for all u ∈ W 1,r (R N ). Since ∇(ρ n ⋆ u) = ρ n ⋆ ∇u by Lemma 5.1.9, itfollows from Young’s inequality that‖ρ n ⋆ u‖ W 1,∞ ≤ ‖ρ n ‖ L r ′ ‖u‖ W 1,r,and the result follows.Step 3. Conclusion. Let R > 0, let K R be as in the statemen<strong>to</strong>f the proposition, and let ε > 0. Given n ≥ 1, set K n = {ρ n ⋆ u; u ∈K} and K n R = {u |B R; u ∈ K n }. Fix n large enough so thatsup ‖u − ρ n ⋆ u‖ L r ≤ εu∈K2 . (5.5.4)Such a n exists by (5.5.1). It follows from (5.5.3) that K n is a set ofuniformly Lipschitz continuous functions on R N . By Ascoli’s theorem,KR n is relatively compact in L∞ (B R ), thus in L r (B R ). Therefore, KRncan be covered by a finite number of balls of radius ε/2 in L r (B R ).By (5.5.4), we see that K R can be covered by a finite number of ballsof radius ε. Since ε > 0 is arbitrary, this shows compactness. □.
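Estimate (5.5.1) can also be observed numerically: mollifying at scale ε costs at most (∫|y|² ρ_ε(y) dy)^{1/2} ‖∇u‖_{L²} in L². The sketch below is a periodic, discrete stand-in (circular convolution replaces ρ_n ⋆ u, and the kernel, box size and test function are illustrative assumptions); it prints both sides of the inequality for a few scales.

```python
import numpy as np

# Periodic check of (5.5.1) for r = 2:
#   || u - rho_eps * u ||_{L^2} <= ( int |y|^2 rho_eps(y) dy )^{1/2} ||u'||_{L^2}.
n, L = 4096, 20.0
x = np.linspace(0.0, L, n, endpoint=False)
dx = L / n
u = np.sin(2 * np.pi * x / L) + 0.3 * np.cos(6 * np.pi * x / L)
du = np.fft.ifft(2j * np.pi * np.fft.fftfreq(n, d=dx) * np.fft.fft(u)).real

y = np.where(x <= L / 2, x, x - L)                # signed distance on the torus
for eps in (1.0, 0.5, 0.25):
    rho = np.maximum(0.0, 1.0 - (y / eps) ** 2)   # bump supported in |y| <= eps
    rho /= np.sum(rho) * dx                       # normalize: int rho = 1
    smooth = np.fft.ifft(np.fft.fft(u) * np.fft.fft(rho)).real * dx
    lhs = np.sqrt(np.sum((u - smooth) ** 2) * dx)
    rhs = np.sqrt(np.sum(y ** 2 * rho) * dx) * np.sqrt(np.sum(du ** 2) * dx)
    print(eps, lhs, "<=", rhs)   # lhs stays below rhs at every scale
```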


172 5. APPENDIX: SOBOLEV SPACESCorollary 5.5.2. Let Ω ⊂ R N be an open subset, let 1 ≤ r < ∞and let (u n ) n≥0 be a bounded sequence of W 1,r0 (Ω). There exist u ∈L r (Ω) and a subsequence (u nk ) k≥0 such that u nk → u a.e. in Ω andin L r (Ω ∩ B R ), for any R < ∞, as k → ∞.Proof. We first consider the case Ω = R N . It follows fromProposition 5.5.1 applied with R = 1 that there exist n(1, k) → ∞ ask → ∞ and w 1 ∈ L r (B 1 ) such that u n(1,k) → w 1 in L r (B 1 ) and a.e.in B 1 . We now apply Proposition 5.5.1 with R = 2 <strong>to</strong> the sequence(u n(1,k) ) k≥0 . It follows that there exist a subsequence n(2, k) → ∞ ask → ∞ and w 2 ∈ L r (B 2 ) such that u n(2,k) → w 2 in L r (B 2 ) and a.e. inB 2 . By recurrence, we construct n(l, k) → ∞ as k → ∞ and (w l ) l≥1with w l ∈ L r (B l ) such that u n(l,k) → w l in L r (B l ) and a.e. in B l .Moreover, (n(l, k)) k≥0 is a subsequence of (n(m, k)) k≥0 for l > m,i.e. for every k ≥ 0, there exists k ′ ≥ k such that n(l, k) = n(m, k ′ ).We set n k = n(k, k). Since n(k, k) is a subsequence of n(l, k) for anyl ≥ 1, we see that u nk → w l in L r (B l ) and a.e. in B l . In particular,w l ≡ w m on B m if l ≥ m. We now set u ≡ w l on B m , for l ≥ m.We have u nk → u in L r (B R ) and a.e. in B R , for any R < ∞. Inparticular,‖u‖ Lr (B R ) = lim ‖u n k‖ Lr (B R ) ≤ lim sup ‖u n ‖ Lr (Rk→∞ N ).n→∞We deduce that u ∈ L r (R N ). Since u nk → u a.e. in B R for anyR < ∞, we conclude that u nk → u a.e. in R N .We now consider the case of an arbitrary domain Ω ⊂ R N . Let(u n ) n≥0 be as above and set{u n (x) if x ∈ Ω,ũ n (x) =0 if x ∈ R N \ Ω,so that the sequence (ũ n ) n≥0 is bounded in W 1,r (R N ). (See Remark5.1.10 (iv).) It follows from what precedes that there existũ ∈ L r (R N ), supported in Ω, and a subsequence (ũ nk ) k≥0 such thatũ nk → ũ as k → ∞ in L r (B R ) for any R < ∞ and a.e. in R N . Theresult now follows by setting u = ũ |Ω . This completes the proof. □Lemma 5.5.3. Let Ω be an open subset of R N . Let 1 ≤ r ≤ ∞, let(u n ) n≥0 be a bounded sequence of L r (Ω) and let u ∈ L 1 loc(Ω). Suppose


5.5. COMPACTNESS PROPERTIES 173that∫u n ϕ −→ uϕ, (5.5.5)Ω n→∞∫Ωfor all ϕ ∈ Cc∞ (Ω) (which is satisfied in particular if u n → u in L 1 (ω)for every ω ⊂⊂ Ω). Then u ∈ L r (Ω) and‖u‖ L r≤ lim infn→∞ ‖u n‖ L r. (5.5.6)Moreover, if r > 1 then (5.5.5) holds for all ϕ ∈ L r′ (Ω). In addition,if 1 < r < ∞ and if ‖u n ‖ L r → ‖u‖ L r as n → ∞, then u n → u inL r (Ω).Suppose further that 1 < r < ∞ and that (u n ) n≥0 is a boundedsequence of W 1,r0 (Ω). It follows that u ∈ W 1,r0 (Ω),∫∇u n ϕ −→ ∇uϕ, (5.5.7)Ω n→∞∫Ωfor all ϕ ∈ L r′ (Ω) and‖∇u‖ L r≤ lim infn→∞ ‖∇u n‖ L r. (5.5.8)If in addition ‖∇u n ‖ L r → ‖∇u‖ L r as n → ∞, then ∇u n → ∇u inL r (Ω).Proof. We claim that for all u ∈ L 1 loc (Ω),{∣∫‖u‖ L r∣∣ ∣= sup uϕ∣; ϕ ∈ Cc ∞ (Ω), ‖ϕ‖ L r}. ′ = 1 (5.5.9)ΩIf r > 1, this is immediate because L r′ (Ω) ⋆ = L r (Ω) and Cc∞ (Ω) isdense in L r′ (Ω). Suppose now r = 1 and suppose u ≠ 0 (the caseu = 0 is immediate). Fix 0 < M < ‖u‖ L 1 ≤ ∞. There exists acompact set K ⊂ Ω such that ∫|u| > M.KLeth(x) ={ u(x)|u(x)|if u(x) ≠ 0 and x ∈ K,0 if u(x) = 0 or x ∉ K.We have h ∈ L ∞ (Ω), ‖h‖ L ∞ = 1. Moreover, h has compact supportin Ω and∫ ∫uh = |u| > M.ΩK


174 5. APPENDIX: SOBOLEV SPACESLet (ρ n ) n≥0 be a smoothing sequence and set h n = (ρ n ⋆ h) |Ω . For nlarge enough, we have h n ∈ Cc∞ (Ω). Moreover, up <strong>to</strong> a subsequence,h n → h a.e. In addition, ‖h n ‖ L ∞ ≤ ‖h‖ L ∞ = 1. By dominatedconvergence, we deduce∫uh n −→ uh > M;Ω n→∞∫Ωand so,{∣∫} ∣∣ ∣sup uϕ∣; ϕ ∈ Cc ∞ (Ω), ‖ϕ‖ L p ′ = 1 ≥ M.ΩSince M < ‖u‖ L 1 is arbitrary, we deduce{∣∫} ∣∣ ∣sup uϕ∣; ϕ ∈ Cc ∞ (Ω), ‖ϕ‖ L p ′ = 1 ≥ ‖u‖ L 1.ΩThe converse inequality being immediate, (5.5.9) follows. Now, since∫∣ u n ϕ∣ ≤ ‖u n ‖ L r‖ϕ‖ L r ′ ,Ω(5.5.6) follows from (5.5.5) and (5.5.9). The fact that if r > 1,then (5.5.5) holds for all ϕ ∈ L r′ (Ω) follows by density of Cc∞ (Ω)in L r′ (Ω).Suppose now that 1 < r < ∞, that ‖u n ‖ L r → ‖u‖ L r as n → ∞and let us show that u n → u in L r (Ω). If ‖u‖ L r = 0, then the resultis immediate. Therefore, we may assume that ‖u‖ L r ≠ 0, so that also‖u n ‖ L r ≠ 0 for n large. Let then ũ = ‖u‖ −1L u and ũ r n = ‖u n ‖ −1L u n. Itrfollows that‖ũ n ‖ L r = ‖ũ‖ L r = 1.Furthermore, (5.5.5) is satisfied with u and u n replaced by ũ and ũ n .Setting w = 2ũ and w n = ũ + ũ n , we deduce that (5.5.5) is satisfiedwith u and u n replaced by w and w n . In particular, it follows fromwhat precedes that ‖w‖ L r ≤ lim inf ‖w n ‖ L r. Since ‖w‖ L r = 2 and‖w n ‖ L r ≤ ‖ũ‖ L r + ‖ũ n ‖ L r = 2, it follows that‖w n ‖ L r−→ 2.n→∞If r ≥ 2, we have Clarkson’s inequality (see e.g. Hewitt and Stromberg[26])‖ũ n − ũ‖ r L r ≤ 2r−1 (‖ũ‖ r L r + ‖ũ n‖ r L r) − ‖ũ + ũ n‖ r L r.


5.5. COMPACTNESS PROPERTIES 175Therefore, ‖ũ n − ũ‖ L r → 0, from which it follows that u n → u inL r (Ω). In the case r ≤ 2, the conclusion is the same by using Clarkson’sinequality (see e.g. Hewitt and Stromberg [26])‖ũ n − ũ‖ rr−1L r ≤ 2(‖ũ‖r L r + ‖ũ n‖ r L r) 1r−1− ‖ũ + ũn ‖ rr−1L r .Suppose finally that 1 < r < ∞ and that (u n ) n≥0 is a boundedsequence of W 1,r0 (Ω). If ϕ ∈ Cc ∞ (Ω), then for all j ∈ {1, . . . , N}∫ ∫∂u n ∂ϕ− ϕ = u n −→ uΩ ∂x j Ω ∂x j n→∞∫Ω∂ϕ , (5.5.10)∂x jby (5.5.5). Set now∫f j (ϕ) = − u ∂ϕ ,Ω ∂x jfor ϕ ∈ Cc∞ (Ω). It follows from (5.5.10) and the boundedness of thesequence (u n ) n≥0 in W 1,r (Ω) that|f j (ϕ)| ≤ C‖ϕ‖ L r ′ .Therefore, f can be extended by continuity and density <strong>to</strong> a linear,continuous functional on L r′ (Ω). Since L r′ (Ω) ⋆ = L r (Ω), there existsg j ∈ L r (Ω) such that∫f j (ϕ) = g j ϕ,for all ϕ ∈ Cc∞ (Ω) and by density, for all ϕ ∈ L r′ (Ω). This impliesthat∫u ∂ϕ ∫= − g j ϕ,∂x jΩfor all ϕ ∈ Cc ∞ (Ω). Thus u ∈ W 1,r (Ω). (5.5.7) follows from (5.5.10)and the above identity. The last properties follow by using (5.5.7) andapplying the first part of the result <strong>to</strong> ∇u n instead of u n . □We can now establish the compact sobolev embeddings.Theorem 5.5.4 (Rellich-Kondrachov). Let Ω ⊂ R N be a bounded,open set, and let 1 ≤ r < ∞. It follows that the embedding W 1,r0 (Ω) ↩→L r (Ω) is compact.Proof. Let (u n ) n≥0 be a bounded sequence of W 1,r0 (Ω). It followsfrom Corollary 5.5.2 that there exist u ∈ L r (Ω) and a subsequence(u nk ) k≥0 such that u nk → u in L r (Ω) as k → ∞. This completesthe proof.□ΩΩ


176 5. APPENDIX: SOBOLEV SPACESIn fact, we have the following stronger result.Theorem 5.5.5. Let Ω ⊂ R N be a bounded, open set, let 1 ≤ r 1, then u ∈ W 1,r0 (Ω) and∫∇u nk ϕ −→ ∇uϕ,Ω k→∞∫Ωfor all ϕ ∈ L r′ (Ω). In particular,‖∇u‖ L r≤ lim infk→∞ ‖∇u n k‖ L r.If, in addition, ‖∇u‖ L r = lim ‖∇u nk ‖ L r as k → ∞, then u nk →u in W 1,r0 (Ω).Proof. The first part of the result follows from Theorem 5.5.4.Next, except in the case r = N ≥ 2, it follows from Theorem 5.4.14that (u n ) n≥0 is bounded in L r (Ω), from which we deduce u ∈ L r (Ω).(See Lemma 5.5.3.) In the case r = N ≥ 2, it follows from Theorem5.4.14 that (u n ) n≥0 is bounded in L p (Ω) for any p < ∞, fromwhich we deduce u ∈ L p (Ω) for all p < ∞. Property (i) now followsfrom the L r bound (or L p bound for all p < ∞ if r = N ≥ 2) and theL r convergence by applying Hölder’s inequality <strong>to</strong> u nk − u. Finally,property (ii) follows from Lemma 5.5.3.□Remark 5.5.6. If Ω is not bounded, we still have a local compactnessresult. Given R > 0, set Ω R = {x ∈ Ω; |x < R|}. Givenany bounded sequence (u n ) n≥0 of W 1,r0 (Ω), there exist a subsequence(u nk ) k≥0 and u ∈ L r (Ω) such that u nk → u as k → ∞, a.e. in Ω andin L r (Ω R ) for every R < ∞. Moreover, the following properties hold.(i) If r = N = 1, then u ∈ L ∞ (Ω) and u nk → u in L p (Ω R ) for everyp < ∞ and every R < ∞. In addition, ‖u‖ L p ≤ lim inf ‖u n ‖ L pas n → ∞ for every 1 ≤ p ≤ ∞.


5.5. COMPACTNESS PROPERTIES 177(ii) If N ≥ 2 and 1 ≤ r < N, then u ∈ L NrN−r (Ω) and unk → uin L p (Ω R ) for every p < Nr/(N − r) and every R < ∞. Inaddition, ‖u‖ L p ≤ lim inf ‖u n ‖ L p as n → ∞.(iii) If N ≥ 2 and r = N, then u ∈ L p (Ω) and u nk → u in L p (Ω R )for every p < ∞ and every R < ∞. In addition, ‖u‖ L p ≤lim inf ‖u n ‖ L p as n → ∞.(iv) If r > N, then u ∈ L ∞ (Ω) and u nk → u in L ∞ (Ω R ) for everyR < ∞. In addition, ‖u‖ L p ≤ lim inf ‖u n ‖ L p as n → ∞ for everyr ≤ p ≤ ∞.(v) If r > 1, then u ∈ W 1,r0 Ω) and∫∇u nk ϕ −→ ∇uϕ,Ω k→∞∫Ωfor all ϕ ∈ L r′ (Ω). In particular,‖∇u‖ L r≤ lim infk→∞ ‖∇u n k‖ L r.If moreover ‖∇u‖ L r = lim ‖∇u nk ‖ L r and ‖u‖ L r = lim ‖u nk ‖ L ras k → ∞, then u nk → u in W 1,r0 (Ω).Those properties are proved like Theorem 5.5.5, except for thelocal convergence in (i)–(iv). This follows by applying Theorem 5.5.5<strong>to</strong> the sequence (ξu n ) n≥0 , where ξ ∈ C ∞ c (R N ) is such that ξ(x) = 1for |x| ≤ R.Corollary 5.5.7. Suppose Ω ⊂ R N is a bounded open set. Let1 < r < ∞ and let m ≥ 1 be an integer. If{∞ if mr ≥ N,p =NrN−mrif mr < N,then the embeddings W m,r0 (Ω) ↩→ L p (Ω) and L p′ (Ω) ↩→ W −m,r′ arecompact for all 1 ≤ p < p.Proof. We first observe that by Theorem 5.5.4, the embeddingW 1,r0 (Ω) ↩→ L r (Ω) is compact, hence the embedding W m,r0 (Ω) ↩→L 1 (Ω) is also compact. Applying Theorem 5.4.14 and Hölder’s inequality,we deduce that if 1 ≤ p < p, then the embedding W m,r0 (Ω) ↩→L p (Ω) is compact. This proves the first part of the result, and thesecond part follows from the abstract duality property of Proposition5.1.19 (iii).□


5.6. Compactness properties in R^N

In the case of unbounded domains, compactness may fail for various reasons. Consider for example the case Ω = R^N. Let ϕ ∈ C_c^∞(R^N), ϕ ≢ 0, and let y ∈ R^N, y ≠ 0. Setting u_n(x) = ϕ(x − ny), it is clear that (u_n)_{n≥0} is bounded in W^{1,r}(R^N) for any 1 ≤ r < ∞. On the other hand, one sees easily that given any R > 0, u_n ≡ 0 on B_R for n large enough, so that the limit u given by Corollary 5.5.2 is 0. On the other hand, u_n does not converge to u in L^r(R^N), since ‖u_n‖_{L^r} = ‖ϕ‖_{L^r} ≠ 0. However, one sees that the sequence is relatively compact in L^r(R^N) up to a translation (since this is how the sequence was constructed).

One can also consider z ∈ R^N, z ≠ 0, z ≠ y, and set u_n(x) = ϕ(x − ny) + ϕ(x − nz). In this case, u_n also converges locally to 0, and for n large enough ‖u_n‖_{L^r} = 2‖ϕ‖_{L^r}. However, in this case, one verifies easily that the sequence is not relatively compact up to a translation. Indeed the sequence splits in two parts, each of which is relatively compact up to translations, but with different translations.

Another case of noncompactness is the following. For n ≥ 1, set u_n(x) = n^{−N/r} ϕ(n^{−1}x). Then ‖u_n‖_{L^r} = ‖ϕ‖_{L^r}. On the other hand, ‖∇u_n‖_{L^r} = n^{−1}‖∇ϕ‖_{L^r}, so that u_n is bounded in W^{1,r}(R^N). However, u_n converges locally to 0. In this case, the sequence is not relatively compact up to translations.

As a matter of fact, the three cases considered above describe the general situation, as follows from the following result. For simplicity, we consider the case r = 2, but a similar result holds in general. This result is based on the concentration-compactness techniques introduced by P.-L. Lions [32, 33].

Theorem 5.6.1. Let (u_n)_{n≥0} be a bounded sequence of H¹(R^N). Suppose there exists a > 0 such that ‖u_n‖²_{L²} → a as n → ∞. It follows that there exists a subsequence (u_{n_k})_{k≥0} which satisfies one of the following properties.
(i) (Compactness up to a translation.) There exist u ∈ H¹(R^N) and a sequence (y_k)_{k≥0} ⊂ R^N such that u_{n_k}(· − y_k) → u in L^p(R^N) as k → ∞, for 2 ≤ p < 2N/(N − 2) (2 ≤ p ≤ ∞ if N = 1).
(ii) (Vanishing.) ‖u_{n_k}‖_{L^p} → 0 as k → ∞ for 2 < p < 2N/(N − 2) (2 < p ≤ ∞ if N = 1).


5.6. COMPACTNESS PROPERTIES IN R N 179(iii) (Dicho<strong>to</strong>my.) There exist 0 < µ < a and two sequences (v k ) k≥0and (w k ) k≥0 of H 1 (R N ) with compact support, such that‖v k ‖ H 1 + ‖w k ‖ H 1 ≤ C sup{‖u n ‖ H 1; n ≥ 0},and‖v k ‖ 2 L → µ, ‖w k‖ 2 2 L → a − µ,2dist (supp v k , supp w k ) → ∞,‖u nk − v k − w k ‖ L p → 0 for 2 ≤ p < 2N/(N − 2),lim sup ‖∇v k ‖ 2 L 2 + ‖∇w k‖ 2 L 2 ≤ lim inf ‖∇u n k‖ 2 L 2.as k → ∞For the proof of Theorem 5.6.1, we will use the following estimate.Lemma 5.6.2. If 2 < p < 2N/(N − 2) (2 < p ≤ ∞ if N = 1),then there exists a contant C such that(1−θ,‖u‖ L p (R N ) ≤ C‖u‖ θ H 1 (R N )|u(x)| dx) 2supy∈R N ∫|x−y|≤1for every u ∈ H 1 (R N ). Here, θ = max{2/p, N(p − 2)/2p}.Proof. Let (Q j ) j≥1 be a sequence of disjoint unit cubes of R Nsuch that ∪ j≥0 Q j = R N . Let x j be the center of the cube Q j andassume for example that x 0 = 0. Let ρ ∈ Cc∞ (R N ) satisfy ρ ≡ 1 onQ 0 and 0 ≤ ρ ≤ 1. Let k be the number of cubes that intersect thesupport of ρ. Finally, set ρ j (x) = ρ(x − x j ) and p 0 = 2 + 4/N. Itfollows from Gagliardo-Nirenberg’s inequality that‖u‖ p0L p 0 (Q j) ≤ ‖ρ ju‖ p0L p 0 (R N ) ≤ C 1‖∇(ρ j u)‖ 2 L 2 (R N ) ‖ρ ju‖ 4 NL 2 (R N ) .Summing in j, we obtain(∑)‖u‖ L p 0 (R N ) ≤ C 1 ‖∇(ρ j u)‖ 2 L 2 (R N )j≥0supj≥0‖ρ j u‖ 4 NL 2 (R N ) .Since ‖∇(ρ j u)‖ 2 L 2 (R N ) ≤ C 2‖u‖ 2 H 1 (supp (ρ j)), we deduce∑‖∇(ρ j u)‖ 2 L 2 (R N ) ≤ kC ∑2 ‖u‖ 2 H 1 (Q ≤ kC j) 2‖u‖ 2 H 1 (R N ) ;j≥0and so,j≥0‖u‖ p0L p 0 (R N ) ≤ kC 1C 2 ‖u‖ 2 H 1 (R N ) sup ‖ρ j u‖ 4 NL 2 (R N ) .j≥0


If $R$ is large enough so that $\mathrm{supp}\,\rho \subset B_R$, we deduce that
\[
\|u\|_{L^{p_0}(\mathbb{R}^N)}^{p_0} \le k C_1 C_2\, \|u\|_{H^1(\mathbb{R}^N)}^2 \Big( \sup_{y\in\mathbb{R}^N} \int_{\{|x-y|\le R\}} |u(x)|^2\,dx \Big)^{\frac{2}{N}}.
\]
Changing $u(x)$ to $u(R^{-1}x)$, we deduce
\[
\|u\|_{L^{p_0}(\mathbb{R}^N)}^{p_0} \le C\, \|u\|_{H^1(\mathbb{R}^N)}^2 \Big( \sup_{y\in\mathbb{R}^N} \int_{\{|x-y|\le 1\}} |u(x)|^2\,dx \Big)^{\frac{2}{N}}.
\]
The result now follows from Hölder's inequality between $\|u\|_{L^2}$ and $\|u\|_{L^{p_0}}$ if $2 < p \le p_0$, and from Gagliardo-Nirenberg's inequality between $\|u\|_{L^{p_0}}$ and $\|\nabla u\|_{L^2}$ if $p > p_0$. $\square$

Proof of Theorem 5.6.1. Given $u \in H^1(\mathbb{R}^N)$, consider the distribution function of $u$,
\[
\rho(t) = \sup_{y\in\mathbb{R}^N} \int_{\{|x-y|<t\}} |u(x)|^2\,dx.
\]
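We record for convenience some elementary consequences of this definition, which are used below. The function $\rho$ is nondecreasing, $\rho(t) \le \|u\|_{L^2}^2$ for all $t > 0$, and
\[
\rho(t) \ge \int_{\{|x|<t\}} |u(x)|^2\,dx \underset{t\to\infty}{\longrightarrow} \|u\|_{L^2}^2,
\]
so that $\rho(t) \to \|u\|_{L^2}^2$ as $t \to \infty$. Moreover, for every $t > 0$ the map $y \mapsto \int_{\{|x-y|<t\}} |u|^2 = (|u|^2 \star 1_{B_t})(y)$ is continuous and tends to $0$ as $|y| \to \infty$; hence (when $u \not\equiv 0$) the supremum defining $\rho(t)$ is attained at some point $y(t) \in \mathbb{R}^N$, which is the meaning of the points $y(t)$, $y(s)$ appearing below.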


for all $s, t \ge 0$. Indeed, if $t > s$, then
\[
\rho(t) - \rho(s) = \int_{\{|x-y(t)|<t\}} |u(x)|^2\,dx - \int_{\{|x-y(s)|<s\}} |u(x)|^2\,dx
\le \int_{\{|x-y(t)|<t\}} |u(x)|^2\,dx - \int_{\{|x-y(t)|<s\}} |u(x)|^2\,dx
= \int_{\{s\le |x-y(t)|<t\}} |u(x)|^2\,dx.
\]


we denote again by $(\tilde u_n)_{n\ge 0}$, and $u \in H^1(\mathbb{R}^N)$ such that $\tilde u_n \to u$ in $L^2(B_R)$ for every $R > 0$. Moreover, $\|u\|_{L^2}^2 \le \liminf_{n\to\infty} \|\tilde u_n\|_{L^2}^2 = a$.

We now consider separately three cases.

Case 1: $\mu = a$. We prove that in this case, (i) occurs. We first observe that
\[
\|u\|_{L^2}^2 \le a = \mu.
\]
On the other hand, let $\mu/2 < \lambda < \mu$ and let $R$ be large enough so that $\rho(R) > \lambda$. It follows that for $n$ large enough, $\rho_n(R) \ge \lambda$. We claim that $|y_n(r) - y_n(R)| \le R + r$. Indeed, otherwise the sets $\{|x - y_n(r)| < r\}$ and $\{|x - y_n(R)| < R\}$ are disjoint, so that
\[
\int_{\mathbb{R}^N} |u_n(x)|^2\,dx \ge \int_{\{|x-y_n(r)|<r\}} |u_n|^2 + \int_{\{|x-y_n(R)|<R\}} |u_n|^2 > \mu,
\]


Indeed, we have $\rho_n(t_n/2) \le \rho_n(t_n)$, so that $\limsup \rho_n(t_n/2) \le \mu$ by (5.6.4). On the other hand, let $t > 0$ and let $n$ be large enough so that $t_n/2 \ge t$. It follows that
\[
\rho_n(t_n/2) \ge \rho_n(t) \underset{n\to\infty}{\longrightarrow} \rho(t);
\]
and so, $\liminf \rho_n(t_n/2) \ge \rho(t)$. Letting $t \to \infty$, we deduce that $\liminf \rho_n(t_n/2) \ge \mu$, which proves (5.6.5). Next, we choose $\tau_n > 0$ such that
\[
\int_{\{|x-y_n(t_n/2)|<\tau_n\}} |u_n|^2 \ge \|u_n\|_{L^2}^2 - \frac{1}{n}. \tag{5.6.6}
\]
Note that, since $\|u_n\|_{L^2}^2 \to a$ while, by (5.6.4), $\limsup \rho_n(t_n) \le \mu < a$, necessarily $\tau_n > t_n$ for $n$ large. Finally, let $\theta \in C^\infty([0,\infty))$ satisfy $\theta(t) \equiv 1$ for $0 \le t \le 1/2$, $\theta(t) \equiv 0$ for $t \ge 5/8$ and $0 \le \theta \le 1$. Let $\varphi \in C^\infty([0,\infty))$ be such that $\varphi(t) \equiv 0$ for $0 \le t \le 7/8$, $\varphi(t) \equiv 1$ for $t \ge 1$ and $0 \le \varphi \le 1$. For $n \ge 0$, let
\[
\theta_n(x) = \theta\Big( \frac{|x-y_n(t_n/2)|}{t_n} \Big), \qquad
\varphi_n(x) = \varphi\Big( \frac{|x-y_n(t_n/2)|}{t_n} \Big)\, \theta\Big( \frac{|x-y_n(t_n/2)|}{2\tau_n} \Big),
\]
for $x \in \mathbb{R}^N$. Note that $\theta_n$ vanishes for $|x-y_n(t_n/2)| \ge 5t_n/8$ and that $\varphi_n$ vanishes for $|x-y_n(t_n/2)| \le 7t_n/8$ and for $|x-y_n(t_n/2)| \ge 5\tau_n/4$. Moreover, $\theta_n = 1$ for $|x-y_n(t_n/2)| \le t_n/2$ and $\varphi_n = 1$ for $t_n \le |x-y_n(t_n/2)| \le \tau_n$ if $n$ is large enough so that $\tau_n > t_n$. In addition,
\[
|\nabla\theta_n| + |\nabla\varphi_n| \le C\Big( \frac{1}{t_n} + \frac{1}{\tau_n} \Big) \le \frac{C}{t_n}. \tag{5.6.7}
\]
We now define the sequences $(v_n)_{n\ge 0}$ and $(w_n)_{n\ge 0}$ by
\[
v_n(x) = \theta_n(x)\, u_n(x), \qquad w_n(x) = \varphi_n(x)\, u_n(x).
\]
In particular, $(v_n)_{n\ge 0} \subset H^1(\mathbb{R}^N)$, $(w_n)_{n\ge 0} \subset H^1(\mathbb{R}^N)$, and both $v_n$ and $w_n$ have compact support. Moreover, one sees easily that
\[
\mathrm{dist}\,(\mathrm{supp}\, v_n, \mathrm{supp}\, w_n) \ge t_n/8 \underset{n\to\infty}{\longrightarrow} +\infty, \tag{5.6.8}
\]
and
\[
\|v_n\|_{H^1} + \|w_n\|_{H^1} \le C\|u_n\|_{H^1}. \tag{5.6.9}
\]
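For instance, the support and gradient bounds above give (5.6.8) and (5.6.9) directly; we sketch the routine computation, using that $t_n \to \infty$. Since $\mathrm{supp}\, v_n \subset \{|x-y_n(t_n/2)| \le 5t_n/8\}$ and $\mathrm{supp}\, w_n \subset \{7t_n/8 \le |x-y_n(t_n/2)| \le 5\tau_n/4\}$,
\[
\mathrm{dist}\,(\mathrm{supp}\, v_n, \mathrm{supp}\, w_n) \ge \frac{7t_n}{8} - \frac{5t_n}{8} = \frac{t_n}{4} \ge \frac{t_n}{8},
\]
which is (5.6.8). Moreover, $0 \le \theta_n, \varphi_n \le 1$ gives $\|v_n\|_{L^2} + \|w_n\|_{L^2} \le 2\|u_n\|_{L^2}$, while (5.6.7) and the product rule give, for $n$ large enough so that $t_n \ge 1$,
\[
\|\nabla v_n\|_{L^2} \le \|\theta_n \nabla u_n\|_{L^2} + \|u_n \nabla\theta_n\|_{L^2} \le \|\nabla u_n\|_{L^2} + \frac{C}{t_n}\|u_n\|_{L^2} \le C' \|u_n\|_{H^1},
\]
and similarly for $w_n$, which yields (5.6.9).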


Furthermore,
\[
\rho_n(t_n/2) = \int_{\{|x-y_n(t_n/2)|<t_n/2\}} |u_n|^2\,dx \le \cdots
\]
and
\[
\cdots \le \rho_n(t_n) - \rho_n(t_n/2) + \frac{1}{n}. \tag{5.6.11}
\]


\[
\cdots \ge -\frac{C}{t_n^2}\|u_n\|_{L^2}^2 - \frac{C}{t_n}\|u_n\|_{L^2}\|\nabla u_n\|_{L^2} \underset{n\to\infty}{\longrightarrow} 0.
\]
Therefore, (iii) is satisfied. $\square$

For spherically symmetric functions in dimension $N \ge 2$, the situation is simpler, and we have the following compactness result of Strauss [42].

Theorem 5.6.3. Suppose $N \ge 2$. If $(u_n)_{n\ge 0} \subset H^1(\mathbb{R}^N)$ is a bounded sequence of spherically symmetric functions, then there exist a subsequence $(u_{n_k})_{k\ge 0}$ and $u \in H^1(\mathbb{R}^N)$ such that $u_{n_k} \to u$ as $k \to \infty$ in $L^p(\mathbb{R}^N)$ for every $2 < p < 2N/(N-2)$ ($2 < p < \infty$ if $N = 2$).

Proof. We proceed in three steps.

Step 1. Let $(u_n)_{n\ge 0}$ be a bounded sequence of $H^1(\mathbb{R}^N)$. Assume that $u_n(x) \to 0$ as $|x| \to \infty$, uniformly in $n \ge 0$, i.e. for every $\varepsilon > 0$ there exists $R < \infty$ such that $|u_n(x)| \le \varepsilon$ for almost all $|x| \ge R$ and all $n \ge 0$. It follows that there exist a subsequence $(u_{n_k})_{k\ge 0}$ and $u \in H^1(\mathbb{R}^N)$ such that $u_{n_k} \to u$ in $L^p(\mathbb{R}^N)$ as $k \to \infty$, for all $2 < p < 2N/(N-2)$.

Indeed, by Remark 5.5.6, there exist a subsequence $(u_{n_k})_{k\ge 0}$ and $u \in H^1(\mathbb{R}^N)$ such that $u_{n_k} \to u$ in $L^p(B_R)$ for any $R < \infty$ and a.e. in $\mathbb{R}^N$. In particular, $u(x)$ converges to $0$ as $|x| \to \infty$. We have
\[
\|u_{n_k} - u\|_{L^p(\mathbb{R}^N)} \le \|u_{n_k} - u\|_{L^p(B_R)} + \|u_{n_k} - u\|_{L^p(\{|x|>R\})}
\le \|u_{n_k} - u\|_{L^p(B_R)} + \|u_{n_k} - u\|_{L^\infty(\{|x|>R\})}^{\frac{p-2}{p}}\, \|u_{n_k} - u\|_{L^2(\mathbb{R}^N)}^{\frac{2}{p}}.
\]
Let $\delta > 0$. By uniform convergence, there exists $R < \infty$ such that
\[
\|u_{n_k} - u\|_{L^\infty(\{|x|>R\})}^{\frac{p-2}{p}}\, \|u_{n_k} - u\|_{L^2(\mathbb{R}^N)}^{\frac{2}{p}} \le \frac{\delta}{2}.
\]
Next, $R$ being chosen, for $k$ large enough we have
\[
\|u_{n_k} - u\|_{L^p(B_R)} \le \frac{\delta}{2};
\]
and so, $\|u_{n_k} - u\|_{L^p(\mathbb{R}^N)} \le \delta$. Hence the result.

Step 2. If $u \in H^1(\mathbb{R}^N)$ is spherically symmetric, then
\[
|x|^{\frac{N-1}{2}} |u(x)| \le \big( 2\|u\|_{L^2}\|\nabla u\|_{L^2} \big)^{\frac{1}{2}} \quad \text{for a.a. } x \in \mathbb{R}^N. \tag{5.6.13}
\]
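In particular, with the convention $\|u\|_{H^1}^2 = \|u\|_{L^2}^2 + \|\nabla u\|_{L^2}^2$ (any equivalent norm only changes the constant), the estimate (5.6.13) combined with the inequality $2ab \le a^2 + b^2$ gives the uniform decay bound
\[
|u(x)| \le |x|^{-\frac{N-1}{2}} \|u\|_{H^1} \quad \text{for a.a. } x \in \mathbb{R}^N,
\]
which is the form in which (5.6.13) is used in Step 3 below.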


By truncation and regularisation, there exists a sequence of spherically symmetric functions $(u_n)_{n\ge 0} \subset C^\infty_c(\mathbb{R}^N)$ such that $u_n \to u$ as $n \to \infty$, in $H^1(\mathbb{R}^N)$ and a.e. Therefore, we need only establish the estimate for spherically symmetric, smooth functions. In this case,
\[
r^{N-1} u(r)^2 = -\int_r^\infty \frac{d}{ds}\big( s^{N-1} u(s)^2 \big)\,ds \le 2 \int_r^\infty s^{N-1} |u(s)|\,|u'(s)|\,ds.
\]
The result now follows from the Cauchy-Schwarz inequality.

Step 3. Conclusion. We deduce from the estimate (5.6.13) that $u_n(x) \to 0$ as $|x| \to \infty$, uniformly in $n \ge 0$, and the conclusion follows from Step 1. $\square$

Remark 5.6.4. The conclusion of Theorem 5.6.3 could also be obtained by using Theorem 5.6.1 and the estimate (5.6.13).

Remark 5.6.5. Suppose $N \ge 2$. Let $R > 0$ and $\Omega = \{x \in \mathbb{R}^N;\ |x| > R\}$. Let $W$ be the subspace of $H^1_0(\Omega)$ of radially symmetric functions. Given $u \in W$, we may extend $u$ by $0$ outside $\Omega$ in order to obtain a radially symmetric function of $H^1(\mathbb{R}^N)$. By applying (5.6.13), we deduce that $W \hookrightarrow L^\infty(\Omega)$. Arguing as in Theorem 5.6.3, one shows that if $(u_n)_{n\ge 0} \subset W$ is a bounded sequence, then there exist a subsequence $(u_{n_k})_{k\ge 0}$ and $u \in W$ such that $u_{n_k} \to u$ in $L^p(\mathbb{R}^N)$ as $k \to \infty$, for every $2 < p < \infty$.
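To illustrate these results, consider again (with $r = 2$) the dilated sequence from the beginning of this section, now with $\varphi \in C^\infty_c(\mathbb{R}^N)$ spherically symmetric and $\varphi \not\equiv 0$, i.e. $u_n(x) = n^{-N/2}\varphi(n^{-1}x)$. A change of variables gives
\[
\|u_n\|_{L^2} = \|\varphi\|_{L^2}, \qquad \|\nabla u_n\|_{L^2} = n^{-1}\|\nabla\varphi\|_{L^2}, \qquad \|u_n\|_{L^p} = n^{\frac{N}{p} - \frac{N}{2}}\|\varphi\|_{L^p} \underset{n\to\infty}{\longrightarrow} 0 \quad \text{for } p > 2,
\]
so the sequence is bounded in $H^1(\mathbb{R}^N)$, vanishes in the sense of property (ii) of Theorem 5.6.1, and converges to $0$ in $L^p(\mathbb{R}^N)$ for every $2 < p < 2N/(N-2)$, in agreement with Theorem 5.6.3. On the other hand, no subsequence can converge in $L^2(\mathbb{R}^N)$, since the $L^2$ norm is constant while the local limit is $0$; this shows that the restriction $p > 2$ in Theorem 5.6.3 cannot be removed.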


Bibliography

[1] Adams R.A., Sobolev spaces, Academic Press, New York, 1975.
[2] Agmon S., Lower bounds of solutions of Schrödinger equations, J. Anal. Math. 23 (1970), 1–25.
[3] Agmon S., Douglis A. and Nirenberg L., Estimates near the boundary for solutions of elliptic partial differential equations satisfying general boundary conditions, Comm. Pure Appl. Math. 17 (1964), 35–92.
[4] Ambrosetti A. and Rabinowitz P.H., Dual variational methods in critical point theory and applications, J. Funct. Anal. 14 (1973), 349–381.
[5] Bahri A., Topological results on a certain class of functionals and applications, J. Funct. Anal. 41 (1981), 397–427.
[6] Bahri A. and Berestycki H., A perturbation method in critical point theory and applications, Trans. Amer. Math. Soc. 267 (1981), 1–32.
[7] Bahri A. and Lions P.-L., Morse index of some min-max critical points I. Application to multiplicity results, Comm. Pure Appl. Math. 41 (1988), 1027–1037.
[8] Berestycki H. and Lions P.-L., Nonlinear scalar field equations, Arch. Ration. Mech. Anal. 82 (1983), 313–375.
[9] Berestycki H. and Nirenberg L., On the method of moving planes and the sliding method, Bol. Soc. Bras. Mat. 22 (1991), 1–37.
[10] Bergh J. and Löfström J., Interpolation spaces, Springer, New York, 1976.
[11] Brezis H., Analyse fonctionnelle, théorie et applications, Masson, Paris, 1983.
[12] Brezis H., Cazenave T., Martel Y. and Ramiandrisoa A., Blow up for u_t − △u = g(u) revisited, Adv. Differential Equations 1 (1996), 73–90.
[13] Brezis H. and Lions P.-L., An estimate related to the strong maximum principle, Boll. U.M.I. 17A (1980), 503–508.
[14] Brezis H. and Nirenberg L., Nonlinear Functional Analysis and Applications, Part 1, to appear.
[15] Brezis H. and Nirenberg L., Positive solutions of nonlinear elliptic equations involving critical Sobolev exponents, Comm. Pure Appl. Math. 36 (1983), 434–477.
[16] Brock F. and Solynin A., An approach to symmetrization via polarization, Trans. Amer. Math. Soc. 352 (2000), 1759–1796.
[17] Budd C. and Norbury J., Semilinear elliptic equations and supercritical growth, J. Differential Equations 68 (1987), 169–197.


[18] Cazenave T., Escobedo M. and Pozio M.A., Some stability properties for minimal solutions of −∆u = λg(u), Portugal. Math. 59 (2002), 373–391.
[19] Cerami G., Solimini S. and Struwe M., Some existence results for superlinear elliptic boundary value problems involving critical exponents, J. Funct. Anal. 69 (1986), 289–306.
[20] Comte M., Solutions of elliptic equations with critical Sobolev exponent in dimension three, Nonlinear Anal. 17 (1991), 445–455.
[21] Ekeland I., On the variational principle, J. Math. Anal. Appl. 47 (1974), 324–353.
[22] Gidas B., Ni W.M. and Nirenberg L., Symmetry of positive solutions of nonlinear elliptic equations in R^n, Advances in Math. Suppl. Studies 7A (1981), 369–402.
[23] Gilbarg D. and Trudinger N.S., Elliptic partial differential equations of the second order, Springer-Verlag, Berlin, 1983.
[24] Hardy G.H., Littlewood J.E. and Pólya G., Inequalities, Cambridge University Press, London, 1952.
[25] Hartman P. and Stampacchia G., On some nonlinear elliptic partial differential equations, Acta Math. 115 (1966), 271–310.
[26] Hewitt E. and Stromberg K., Real and abstract analysis, Springer, New York, 1965.
[27] Kato T., Growth properties of solutions of the reduced wave equation with a variable coefficient, Comm. Pure Appl. Math. 12 (1959), 403–425.
[28] Kavian O., Introduction à la théorie des points critiques et applications aux problèmes elliptiques, Mathématiques & Applications 13, Springer-Verlag, Paris, 1993.
[29] Kwong M.K., Uniqueness of positive solutions of △u − u + u^p = 0 in R^n, Arch. Ration. Mech. Anal. 105 (1989), 243–266.
[30] Lieb E.H., Existence and uniqueness of the minimizing solution of Choquard's nonlinear equation, Studies in Applied Math. 57 (1977), 93–105.
[31] Lieb E.H. and Loss M., Analysis, Graduate Studies in Mathematics 14, American Mathematical Society, Providence, RI, 2001.
[32] Lions P.-L., The concentration-compactness principle in the calculus of variations. The locally compact case. Part 1, Ann. Inst. H. Poincaré Anal. Non Linéaire 1 (1984), 109–145.
[33] Lions P.-L., The concentration-compactness principle in the calculus of variations. The locally compact case. Part 2, Ann. Inst. H. Poincaré Anal. Non Linéaire 1 (1984), 223–283.
[34] Lopes O., personal communication.
[35] Marcus M. and Mizel V.J., Complete characterization of functions which act, via superposition, on Sobolev spaces, Trans. Amer. Math. Soc. 251 (1979), 187–218.
[36] Martel Y., Uniqueness of weak extremal solutions of nonlinear elliptic problems, Houston J. Math. 23 (1997), 161–168.
[37] McLeod K., Uniqueness of positive radial solutions of △u + f(u) = 0 in R^N, Trans. Amer. Math. Soc. 339 (1993), 495–505.


[38] McLeod K., Troy W.C. and Weissler F.B., Radial solutions of △u + f(u) = 0 with prescribed number of zeroes, J. Differential Equations 83 (1990), 368–378.
[39] Pazy A., Semi-groups of linear operators and applications to partial differential equations, Applied Math. Sciences 44, Springer, New York, 1983.
[40] Pohožaev S.I., Eigenfunctions of the equation △u + λf(u) = 0, Sov. Math. Dokl. 6 (1965), 1408–1411.
[41] Stein E.M., Singular integrals and differentiability properties of functions, Princeton University Press, Princeton, 1970.
[42] Strauss W.A., Existence of solitary waves in higher dimensions, Comm. Math. Phys. 55 (1977), 149–162.
[43] Struwe M., Infinitely many critical points for functionals which are not even and applications to superlinear boundary value problems, Manuscripta Math. 32 (1980), 335–364.
[44] Véron L., Weak and strong singularities of nonlinear elliptic equations, in Nonlinear Functional Analysis and its Applications, Part II (F.E. Browder, Editor), Proc. Symp. Pure Math. 45-2, Amer. Math. Soc., Providence, 1986.


Index of subjects

Duality argument, 104, 109
Eigenfunction, eigenvalue, 84
Ekeland's principle, 56
Exponential decay, 21, 119
Fourier multiplier, 139
Fréchet derivative, 35
Gagliardo-Nirenberg's inequality, 148, 155, 165
Ground state, 51, 65, 67
Iteration method, 91
Lagrange multipliers, 48
Lax-Milgram's lemma, 32
Maximum principle (strong), 82, 83
Maximum principle (weak), 81
Morrey's inequality, 153
Mountain pass theorem, 56
Nodal properties of solutions, 16
Pohožaev's identity, 75, 76
Poincaré's inequality, 169
Rellich-Kondrachov's theorem, 175
Shooting method, 2
Smoothing sequence, 129
Sobolev's embedding theorem, 166, 167
Sobolev's inequality, 148, 150–152
Spectral decomposition, 84
Subsolution, 90
Supersolution, 90
Symmetric-decreasing rearrangement, 73
Symmetry of solutions, 120


Index of authors

Adams, 127, 129
Agmon, 26, 77, 110
Ambrosetti, 56
Bahri, 50
Berestycki, 50, 66, 75, 121
Bergh, 38, 139
Brezis, vi, 31, 56, 79, 87, 96, 104, 132, 135
Brock, 73, 74
Budd, 79
Cazenave, 96
Cerami, 79
Comte, 79
Douglis, 110
Ekeland, 56
Escobedo, 96
Gidas, 28, 120
Gilbarg, 110
Hardy, 73
Hartman, 104
Hewitt, 174, 175
Kato, 26, 77
Kavian, 31, 50, 64, 67, 76, 80
Kwong, 28
Löfström, 38, 139
Lieb, 73, 74
Lions, 50, 66, 75, 104, 178
Littlewood, 73
Lopes, 26
Loss, 73
Marcus, 142
Martel, 96
McLeod, 14, 16, 28
Mizel, 142
Ni, 28, 120
Nirenberg, vi, 28, 31, 56, 79, 110, 120, 121
Norbury, 79
Pólya, 73
Pazy, 110
Pohožaev, 79
Pozio, 96
Rabinowitz, 56
Ramiandrisoa, 96
Solimini, 79
Solynin, 73, 74
Stampacchia, 104
Stein, 139
Strauss, 185
Stromberg, 174, 175
Struwe, 50, 79
Troy, 14, 16
Trudinger, 110
Weissler, 14, 16
