Karlstad University
Division for Engineering Science, Physics and Mathematics

Yury V. Shestopalov and Yury G. Smirnov

Integral Equations: A compendium

Karlstad 2002


Contents

1 Preface
2 Notion and examples of integral equations (IEs). Fredholm IEs of the first and second kind
  2.1 Primitive function
3 Examples of solution to integral equations and ordinary differential equations
4 Reduction of ODEs to the Volterra IE and proof of the unique solvability using the contraction mapping
  4.1 Reduction of ODEs to the Volterra IE
  4.2 Principle of contraction mappings
5 Unique solvability of the Fredholm IE of the 2nd kind using the contraction mapping. Neumann series
  5.1 Linear Fredholm IE of the 2nd kind
  5.2 Nonlinear Fredholm IE of the 2nd kind
  5.3 Linear Volterra IE of the 2nd kind
  5.4 Neumann series
  5.5 Resolvent
6 IEs as linear operator equations. Fundamental properties of completely continuous operators
  6.1 Banach spaces
  6.2 IEs and completely continuous linear operators
  6.3 Properties of completely continuous operators
7 Elements of the spectral theory of linear operators. Spectrum of a completely continuous operator
8 Linear operator equations: the Fredholm theory
  8.1 The Fredholm theory in Banach space
  8.2 Fredholm theorems for linear integral operators and the Fredholm resolvent
9 IEs with degenerate and separable kernels
  9.1 Solution of IEs with degenerate and separable kernels
  9.2 IEs with degenerate kernels and approximate solution of IEs
  9.3 Fredholm's resolvent
10 Hilbert spaces. Self-adjoint operators. Linear operator equations with completely continuous operators in Hilbert spaces
  10.1 Hilbert space. Selfadjoint operators
  10.2 Completely continuous integral operators in Hilbert space
  10.3 Selfadjoint operators in Hilbert space
  10.4 The Fredholm theory in Hilbert space


  10.5 The Hilbert–Schmidt theorem
11 IEs with symmetric kernels. Hilbert–Schmidt theory
  11.1 Selfadjoint integral operators
  11.2 Hilbert–Schmidt theorem for integral operators
12 Harmonic functions and Green's formulas
  12.1 Green's formulas
  12.2 Properties of harmonic functions
13 Boundary value problems
14 Potentials with logarithmic kernels
  14.1 Properties of potentials
  14.2 Generalized potentials
15 Reduction of boundary value problems to integral equations
16 Functional spaces and Chebyshev polynomials
  16.1 Chebyshev polynomials of the first and second kind
  16.2 Fourier–Chebyshev series
17 Solution to integral equations with a logarithmic singularity of the kernel
  17.1 Integral equations with a logarithmic singularity of the kernel
  17.2 Solution via Fourier–Chebyshev series
18 Solution to singular integral equations
  18.1 Singular integral equations
  18.2 Solution via Fourier–Chebyshev series
19 Matrix representation
  19.1 Matrix representation of an operator in the Hilbert space
  19.2 Summation operators in the spaces of sequences
20 Matrix representation of logarithmic integral operators
  20.1 Solution to integral equations with a logarithmic singularity of the kernel
21 Galerkin methods and basis of Chebyshev polynomials
  21.1 Numerical solution of logarithmic integral equation by the Galerkin method


1 Preface

This compendium focuses on fundamental notions and statements of the theory of linear operators and integral operators. The Fredholm theory is set forth both in the Banach and Hilbert spaces. The elements of the theory of harmonic functions are presented, including the reduction of boundary value problems to integral equations and potential theory, as well as some methods of solution to integral equations. Some methods of numerical solution are also considered.

The compendium contains many examples that illustrate the most important notions and the solution techniques.

The material is largely based on the books Elements of the Theory of Functions and Functional Analysis by A. Kolmogorov and S. Fomin (Dover, New York, 1999), Linear Integral Equations by R. Kress (2nd edition, Springer, Berlin, 1999), and Logarithmic Integral Equations in Electromagnetics by Y. Shestopalov, Y. Smirnov, and E. Chernokozhin (VSP, Utrecht, 2000).


2 Notion and examples of integral equations (IEs). Fredholm IEs of the first and second kind

Consider an integral equation (IE)
\[
f(x) = \varphi(x) + \lambda \int_a^b K(x,y)\varphi(y)\,dy. \qquad (1)
\]
This is an IE of the 2nd kind. Here $K(x,y)$ is a given function of two variables (the kernel) defined in the square
\[
\Pi = \{(x,y):\ a \le x \le b,\ a \le y \le b\},
\]
$f(x)$ is a given function, $\varphi(x)$ is a sought-for (unknown) function, and $\lambda$ is a parameter. We will often assume that $f(x)$ is a continuous function.

The IE
\[
f(x) = \int_a^b K(x,y)\varphi(y)\,dy \qquad (2)
\]
constitutes an example of an IE of the 1st kind.

We will say that equation (1) (resp. (2)) is the Fredholm IE of the 2nd kind (resp. 1st kind) if the kernel $K(x,y)$ satisfies one of the following conditions:

(i) $K(x,y)$ is continuous as a function of the variables $(x,y)$ in the square $\Pi$;

(ii) $K(x,y)$ may be discontinuous for some $(x,y)$, but the double integral
\[
\int_a^b \int_a^b |K(x,y)|^2\,dx\,dy < \infty, \qquad (3)
\]
i.e., takes a finite value (converges in the square $\Pi$ in the sense of the definition of convergent Riemann integrals).

An IE
\[
f(x) = \varphi(x) + \lambda \int_0^x K(x,y)\varphi(y)\,dy \qquad (4)
\]
is called the Volterra IE of the 2nd kind. Here the kernel $K(x,y)$ may be continuous in the square $\Pi = \{(x,y):\ a \le x \le b,\ a \le y \le b\}$ for a certain $b > a$, or discontinuous and satisfying condition (3); $f(x)$ is a given function; and $\lambda$ is a parameter.

Example 1 Consider a Fredholm IE of the 2nd kind
\[
x^2 = \varphi(x) - \int_0^1 (x^2 + y^2)\varphi(y)\,dy, \qquad (5)
\]
where the kernel $K(x,y) = x^2 + y^2$ is defined and continuous in the square $\Pi_1 = \{(x,y):\ 0 \le x \le 1,\ 0 \le y \le 1\}$, $f(x) = x^2$ is a given function, and the parameter $\lambda = -1$.

Example 2 Consider an IE of the 2nd kind
\[
f(x) = \varphi(x) - \int_0^1 \ln|x-y|\,\varphi(y)\,dy, \qquad (6)
\]


where the kernel $K(x,y) = \ln|x-y|$ is discontinuous (unbounded) in the square $\Pi_1$ along the line $x = y$, $f(x)$ is a given function, and the parameter $\lambda = -1$. Equation (6) is a Fredholm IE of the 2nd kind because the double integral
\[
\int_0^1 \int_0^1 \ln^2|x-y|\,dx\,dy < \infty, \qquad (7)
\]
i.e., takes a finite value (converges in the square $\Pi_1$).

2.1 Primitive function

Below we will use some properties of the primitive function (or (general) antiderivative) $F(x)$ defined, for a function $f(x)$ given on an interval $I$, by
\[
F'(x) = f(x). \qquad (8)
\]
The primitive function $F(x)$ for a function $f(x)$ (its antiderivative) is denoted using the integral sign to give the notion of the indefinite integral of $f(x)$ on $I$,
\[
F(x) = \int f(x)\,dx. \qquad (9)
\]

Theorem 1 If $f(x)$ is continuous on $[a,b]$, then the function
\[
F(x) = \int_a^x f(t)\,dt \qquad (10)
\]
(the antiderivative) is differentiable on $[a,b]$ and $F'(x) = f(x)$ for each $x \in [a,b]$.

Example 3 $F_0(x) = \arctan\sqrt{x}$ is a primitive function for the function
\[
f(x) = \frac{1}{2\sqrt{x}\,(1+x)}
\]
because
\[
F_0'(x) = \frac{dF_0}{dx} = \frac{1}{2\sqrt{x}\,(1+x)} = f(x). \qquad (11)
\]

3 Examples of solution to integral equations and ordinary differential equations

Example 4 Consider a Volterra IE of the 2nd kind
\[
f(x) = \varphi(x) - \lambda \int_0^x e^{x-y}\varphi(y)\,dy, \qquad (12)
\]
where the kernel $K(x,y) = e^{x-y}$, $f(x)$ is a given continuously differentiable function, and $\lambda$ is a parameter. Differentiating (12) we obtain
\[
f(x) = \varphi(x) - \lambda \int_0^x e^{x-y}\varphi(y)\,dy,
\]
\[
f'(x) = \varphi'(x) - \lambda\left[\varphi(x) + \int_0^x e^{x-y}\varphi(y)\,dy\right].
\]


Subtracting termwise we obtain an ordinary differential equation (ODE) of the first order,
\[
\varphi'(x) - (\lambda+1)\varphi(x) = f'(x) - f(x), \qquad (13)
\]
with respect to the (same) unknown function $\varphi(x)$. Setting $x = 0$ in (12) we obtain the initial condition for the unknown function $\varphi(x)$,
\[
\varphi(0) = f(0). \qquad (14)
\]
Thus we have obtained the initial value problem for $\varphi(x)$:
\[
\varphi'(x) - (\lambda+1)\varphi(x) = F(x), \quad F(x) = f'(x) - f(x), \qquad (15)
\]
\[
\varphi(0) = f(0). \qquad (16)
\]
Integrating (15) subject to the initial condition (16) we obtain the Volterra IE (12), which is therefore equivalent to the initial value problem (15) and (16).

Solve (12) by the method of successive approximations. Set
\[
K(x,y) = e^{x-y}, \quad 0 \le y \le x \le 1; \qquad K(x,y) = 0, \quad 0 \le x \le y \le 1.
\]
Write the first two successive approximations:
\[
\varphi_0(x) = f(x), \qquad \varphi_1(x) = f(x) + \lambda \int_0^1 K(x,y)\varphi_0(y)\,dy, \qquad (17)
\]
\[
\varphi_2(x) = f(x) + \lambda \int_0^1 K(x,y)\varphi_1(y)\,dy = \qquad (18)
\]
\[
= f(x) + \lambda \int_0^1 K(x,y)f(y)\,dy + \lambda^2 \int_0^1\!\!\int_0^1 K(x,t)K(t,y)f(y)\,dt\,dy \qquad (19)
\]
(by changing the order of integration). Set
\[
K_2(x,y) = \int_0^1 K(x,t)K(t,y)\,dt = \int_y^x e^{x-t}e^{t-y}\,dt = (x-y)e^{x-y} \qquad (20)
\]
to obtain
\[
\varphi_2(x) = f(x) + \lambda \int_0^1 K(x,y)f(y)\,dy + \lambda^2 \int_0^1 K_2(x,y)f(y)\,dy. \qquad (21)
\]
The third approximation is
\[
\varphi_3(x) = f(x) + \lambda \int_0^1 K(x,y)f(y)\,dy + \lambda^2 \int_0^1 K_2(x,y)f(y)\,dy + \lambda^3 \int_0^1 K_3(x,y)f(y)\,dy, \qquad (22)
\]
where
\[
K_3(x,y) = \int_0^1 K(x,t)K_2(t,y)\,dt = \frac{(x-y)^2}{2!}\,e^{x-y}. \qquad (23)
\]


The general relationship has the form
\[
\varphi_n(x) = f(x) + \sum_{m=1}^n \lambda^m \int_0^1 K_m(x,y)f(y)\,dy, \qquad (24)
\]
where
\[
K_1(x,y) = K(x,y), \qquad K_m(x,y) = \int_0^1 K(x,t)K_{m-1}(t,y)\,dt = \frac{(x-y)^{m-1}}{(m-1)!}\,e^{x-y}. \qquad (25)
\]
Assuming that the successive approximations (24) converge, we obtain the (unique) solution to IE (12) by passing to the limit in (24):
\[
\varphi(x) = f(x) + \sum_{m=1}^\infty \lambda^m \int_0^1 K_m(x,y)f(y)\,dy. \qquad (26)
\]
Introducing the notation for the resolvent,
\[
\Gamma(x,y,\lambda) = \sum_{m=1}^\infty \lambda^{m-1} K_m(x,y), \qquad (27)
\]
and changing the order of summation and integration in (26), we obtain the solution in the compact form
\[
\varphi(x) = f(x) + \lambda \int_0^1 \Gamma(x,y,\lambda)f(y)\,dy. \qquad (28)
\]
Calculate the resolvent explicitly using (25):
\[
\Gamma(x,y,\lambda) = e^{x-y} \sum_{m=1}^\infty \lambda^{m-1}\frac{(x-y)^{m-1}}{(m-1)!} = e^{x-y}e^{\lambda(x-y)} = e^{(\lambda+1)(x-y)}, \qquad (29)
\]
so that (28) gives the solution
\[
\varphi(x) = f(x) + \lambda \int_0^x e^{(\lambda+1)(x-y)}f(y)\,dy. \qquad (30)
\]
Note that, in line with the extension of the kernel adopted above,
\[
\Gamma(x,y,\lambda) = e^{(\lambda+1)(x-y)}, \quad 0 \le y \le x \le 1; \qquad \Gamma(x,y,\lambda) = 0, \quad 0 \le x \le y \le 1,
\]
and the series (27) converges for every $\lambda$.

4 Reduction of ODEs to the Volterra IE and proof of the unique solvability using the contraction mapping

4.1 Reduction of ODEs to the Volterra IE

Consider an ordinary differential equation (ODE) of the first order
\[
\frac{dy}{dx} = f(x,y) \qquad (31)
\]


with the initial condition
\[
y(x_0) = y_0, \qquad (32)
\]
where $f(x,y)$ is defined and continuous in a two-dimensional domain $G$ which contains the point $(x_0, y_0)$.

Integrating (31) subject to (32) we obtain
\[
\varphi(x) = y_0 + \int_{x_0}^x f(t, \varphi(t))\,dt, \qquad (33)
\]
which is called the Volterra integral equation of the second kind with respect to the unknown function $\varphi(t)$. This equation is equivalent to the initial value problem (31) and (32). Note that this is generally a nonlinear integral equation with respect to $\varphi(t)$.

Consider now a linear ODE of the second order with variable coefficients,
\[
y'' + A(x)y' + B(x)y = g(x), \qquad (34)
\]
with the initial conditions
\[
y(x_0) = y_0, \quad y'(x_0) = y_1, \qquad (35)
\]
where $A(x)$ and $B(x)$ are given functions continuous in an interval $G$ which contains the point $x_0$. Integrating $y''$ in (34) we obtain
\[
y'(x) = -\int_{x_0}^x A(t)y'(t)\,dt - \int_{x_0}^x B(t)y(t)\,dt + \int_{x_0}^x g(t)\,dt + y_1. \qquad (36)
\]
Integrating the first integral on the right-hand side in (36) by parts yields
\[
y'(x) = -A(x)y(x) - \int_{x_0}^x \bigl(B(t) - A'(t)\bigr)y(t)\,dt + \int_{x_0}^x g(t)\,dt + A(x_0)y_0 + y_1. \qquad (37)
\]
Integrating a second time we obtain
\[
y(x) = -\int_{x_0}^x A(t)y(t)\,dt - \int_{x_0}^x\!\!\int_{x_0}^s \bigl(B(t) - A'(t)\bigr)y(t)\,dt\,ds + \int_{x_0}^x\!\!\int_{x_0}^s g(t)\,dt\,ds + [A(x_0)y_0 + y_1](x - x_0) + y_0. \qquad (38)
\]
Using the relationship
\[
\int_{x_0}^x\!\!\int_{x_0}^s f(t)\,dt\,ds = \int_{x_0}^x (x - t)f(t)\,dt, \qquad (39)
\]
we transform (38) to obtain
\[
y(x) = -\int_{x_0}^x \bigl\{A(t) + (x - t)\bigl[B(t) - A'(t)\bigr]\bigr\}y(t)\,dt + \int_{x_0}^x (x - t)g(t)\,dt + [A(x_0)y_0 + y_1](x - x_0) + y_0. \qquad (40)
\]
Separate the known functions in (40) and introduce the notation for the kernel function,
\[
K(x,t) = -A(t) + (t - x)\bigl[B(t) - A'(t)\bigr], \qquad (41)
\]
and
\[
f(x) = \int_{x_0}^x (x - t)g(t)\,dt + [A(x_0)y_0 + y_1](x - x_0) + y_0. \qquad (42)
\]
Then (40) becomes
\[
y(x) = f(x) + \int_{x_0}^x K(x,t)y(t)\,dt, \qquad (43)
\]
which is the Volterra IE of the second kind with respect to the unknown function $y(x)$. This equation is equivalent to the initial value problem (34) and (35). Note that here we obtain a linear integral equation with respect to $y(x)$.
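Both reductions lead to Volterra IEs that can be attacked by successive approximations. For the first-order case (33), for instance, one can start from $\varphi_0(x) \equiv y_0$ and iterate $\varphi_{k+1}(x) = y_0 + \int_{x_0}^x f(t,\varphi_k(t))\,dt$, evaluating the integral by quadrature. The following Python sketch is not part of the compendium; it is a minimal illustration under assumed choices (the test problem $y' = y$, $y(0) = 1$ with known solution $e^x$, a uniform grid, and the trapezoidal rule), not a definitive implementation.

```python
import numpy as np

def picard_volterra(f, x0, y0, x_max, n_nodes=201, n_iter=30):
    """Successive approximations phi_{k+1}(x) = y0 + int_{x0}^{x} f(t, phi_k(t)) dt
    for the Volterra IE (33), with the integral evaluated by the trapezoidal rule."""
    x = np.linspace(x0, x_max, n_nodes)
    phi = np.full_like(x, y0, dtype=float)              # phi_0(x) = y0
    for _ in range(n_iter):
        integrand = f(x, phi)
        # cumulative trapezoidal rule: int_{x0}^{x_i} f(t, phi(t)) dt
        cumint = np.concatenate(([0.0],
            np.cumsum((integrand[1:] + integrand[:-1]) * np.diff(x) / 2.0)))
        phi = y0 + cumint
    return x, phi

if __name__ == "__main__":
    # assumed test problem: y' = y, y(0) = 1, exact solution exp(x)
    x, phi = picard_volterra(lambda t, y: y, 0.0, 1.0, 1.0)
    print("max error:", np.abs(phi - np.exp(x)).max())
```

This is exactly the iteration used in the proof of Picard's theorem in Section 4.2 below; the contraction property guarantees its convergence on a sufficiently small interval.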


Example 5 Consider a homogeneous linear ODE of the second order with constant coefficients,
\[
y'' + \omega^2 y = 0, \qquad (44)
\]
and the initial conditions (at $x_0 = 0$)
\[
y(0) = 0, \quad y'(0) = 1. \qquad (45)
\]
We see that here
\[
A(x) \equiv 0, \quad B(x) \equiv \omega^2, \quad g(x) \equiv 0, \quad y_0 = 0, \quad y_1 = 1,
\]
if we use the same notation as in (34) and (35). Substituting into (40) and calculating (41) and (42),
\[
K(x,t) = \omega^2 (t - x), \qquad f(x) = x,
\]
we find that the IE (43), equivalent to the initial value problem (44) and (45), takes the form
\[
y(x) = x + \omega^2 \int_0^x (t - x)y(t)\,dt. \qquad (46)
\]

4.2 Principle of contraction mappings

A metric space is a pair of a set $X$ consisting of elements (points) and a distance, a nonnegative function $\rho(x,y)$ satisfying
\[
\rho(x,y) = 0 \text{ if and only if } x = y,
\]
\[
\rho(x,y) = \rho(y,x) \text{ (axiom of symmetry)}, \qquad (47)
\]
\[
\rho(x,y) + \rho(y,z) \ge \rho(x,z) \text{ (triangle axiom)}.
\]
A metric space will be denoted by $R = (X, \rho)$.

A sequence of points $x_n$ in a metric space $R$ is called a fundamental sequence if it satisfies the Cauchy criterion: for arbitrary $\epsilon > 0$ there exists a number $N$ such that
\[
\rho(x_n, x_m) < \epsilon \text{ for all } n, m \ge N. \qquad (48)
\]

Definition 1 If every fundamental sequence in the metric space $R$ converges to an element in $R$, this metric space is said to be complete.

Example 6 The (Euclidian) $n$-dimensional space $\mathbf{R}^n$ of vectors $\bar a = (a_1, \ldots, a_n)$ with the Euclidian metric
\[
\rho(\bar a, \bar b) = \|\bar a - \bar b\|_2 = \sqrt{\sum_{i=1}^n (a_i - b_i)^2}
\]
is complete.


Example 7 The space $C[a,b]$ of continuous functions defined in a closed interval $a \le x \le b$ with the metric
\[
\rho(\varphi_1, \varphi_2) = \max_{a \le x \le b} |\varphi_1(x) - \varphi_2(x)|
\]
is complete. Indeed, every fundamental (convergent) functional sequence $\{\varphi_n(x)\}$ of continuous functions converges to a continuous function in the $C[a,b]$-metric.

Example 8 The space $C^{(n)}[a,b]$ of componentwise continuous $n$-dimensional vector-functions $\bar f = (f_1, \ldots, f_n)$ defined in a closed interval $a \le x \le b$ with the metric
\[
\rho(\bar f^1, \bar f^2) = \Bigl\| \max_{a \le x \le b} |f_i^1(x) - f_i^2(x)| \Bigr\|_2
\]
is complete. Indeed, every componentwise-fundamental (componentwise-convergent) functional sequence $\{\bar\varphi_n(x)\}$ of continuous vector-functions converges to a continuous vector-function in the above-defined metric.

Definition 2 Let $R$ be an arbitrary metric space. A mapping $A$ of the space $R$ into itself is said to be a contraction if there exists a number $\alpha < 1$ such that
\[
\rho(Ax, Ay) \le \alpha\,\rho(x,y) \text{ for any two points } x, y \in R. \qquad (49)
\]
Every contraction mapping is continuous:
\[
x_n \to x \text{ yields } Ax_n \to Ax. \qquad (50)
\]

Theorem 2 Every contraction mapping defined in a complete metric space $R$ has one and only one fixed point; that is, the equation $Ax = x$ has only one solution.

We will use the principle of contraction mappings to prove Picard's theorem.

Theorem 3 Assume that a function $f(x,y)$ satisfies the Lipschitz condition
\[
|f(x, y_1) - f(x, y_2)| \le M|y_1 - y_2| \text{ in } G. \qquad (51)
\]
Then on some interval $x_0 - d < x < x_0 + d$ there exists a unique solution $y = \varphi(x)$ to the initial value problem
\[
\frac{dy}{dx} = f(x,y), \quad y(x_0) = y_0. \qquad (52)
\]

Proof. Since the function $f(x,y)$ is continuous, we have $|f(x,y)| \le k$ in a region $G' \subset G$ which contains $(x_0, y_0)$. Now choose a number $d$ such that
\[
(x,y) \in G' \text{ if } |x_0 - x| \le d, \ |y_0 - y| \le kd; \qquad Md < 1.
\]
Consider the mapping $\psi = A\varphi$ defined by
\[
\psi(x) = y_0 + \int_{x_0}^x f(t, \varphi(t))\,dt, \quad |x_0 - x| \le d. \qquad (53)
\]


Let us prove that this is a contraction mapping of the complete set $C^* = C[x_0 - d, x_0 + d]$ of continuous functions defined in the closed interval $x_0 - d \le x \le x_0 + d$ (see Example 7). We have
\[
|\psi(x) - y_0| = \Bigl| \int_{x_0}^x f(t, \varphi(t))\,dt \Bigr| \le \int_{x_0}^x |f(t, \varphi(t))|\,dt \le k \int_{x_0}^x dt = k(x - x_0) \le kd, \qquad (54)
\]
\[
|\psi_1(x) - \psi_2(x)| \le \int_{x_0}^x |f(t, \varphi_1(t)) - f(t, \varphi_2(t))|\,dt \le Md \max_{x_0 - d \le x \le x_0 + d} |\varphi_1(x) - \varphi_2(x)|. \qquad (55)
\]
Since $Md < 1$, the mapping $\psi = A\varphi$ is a contraction.

From this it follows that the operator equation $\varphi = A\varphi$, and consequently the integral equation (33), has one and only one solution.

This result can be generalized to systems of ODEs of the first order. To this end, denote by $\bar f = (f_1, \ldots, f_n)$ an $n$-dimensional vector-function and write the initial value problem for a system of ODEs of the first order using this vector notation:
\[
\frac{d\bar y}{dx} = \bar f(x, \bar y), \quad \bar y(x_0) = \bar y_0, \qquad (56)
\]
where the vector-function $\bar f$ is defined and continuous in a region $G$ of the $(n+1)$-dimensional space $\mathbf{R}^{n+1}$ such that $G$ contains the '$(n+1)$-dimensional' point $(x_0, \bar y_0)$. We assume that $\bar f$ satisfies the Lipschitz condition (51) termwise in the variables $\bar y$. The initial value problem (56) is equivalent to a system of IEs
\[
\bar\varphi(x) = \bar y_0 + \int_{x_0}^x \bar f(t, \bar\varphi(t))\,dt. \qquad (57)
\]
Since the vector-function $\bar f(t, \bar y)$ is continuous, we have $\|\bar f(x, \bar y)\| \le K$ in a region $G' \subset G$ which contains $(x_0, \bar y_0)$. Now choose a number $d$ such that
\[
(x, \bar y) \in G' \text{ if } |x_0 - x| \le d, \ \|\bar y_0 - \bar y\| \le Kd; \qquad Md < 1.
\]
Then we consider the mapping $\bar\psi = A\bar\varphi$ defined by
\[
\bar\psi(x) = \bar y_0 + \int_{x_0}^x \bar f(t, \bar\varphi(t))\,dt, \quad |x_0 - x| \le d, \qquad (58)
\]
and prove that this is a contraction mapping of the complete space $\bar C^* = \bar C[x_0 - d, x_0 + d]$ of continuous (componentwise) vector-functions into itself. The proof is a termwise repetition of the reasoning in the scalar case above.

5 Unique solvability of the Fredholm IE of the 2nd kind using the contraction mapping. Neumann series

5.1 Linear Fredholm IE of the 2nd kind

Consider a Fredholm IE of the 2nd kind,
\[
f(x) = \varphi(x) + \lambda \int_a^b K(x,y)\varphi(y)\,dy, \qquad (59)
\]


where the kernel $K(x,y)$ is a continuous function in the square
\[
\Pi = \{(x,y):\ a \le x \le b,\ a \le y \le b\},
\]
so that $|K(x,y)| \le M$ for $(x,y) \in \Pi$. Consider the mapping $g = A\varphi$ defined by
\[
g(x) = f(x) - \lambda \int_a^b K(x,y)\varphi(y)\,dy \qquad (60)
\]
of the complete space $C[a,b]$ into itself; its fixed points are exactly the solutions of (59). For two functions $\varphi_1, \varphi_2 \in C[a,b]$ we have
\[
g_1(x) = f(x) - \lambda \int_a^b K(x,y)\varphi_1(y)\,dy, \qquad g_2(x) = f(x) - \lambda \int_a^b K(x,y)\varphi_2(y)\,dy,
\]
\[
\rho(g_1, g_2) = \max_{a \le x \le b} |g_1(x) - g_2(x)| \le |\lambda| \max_{a \le x \le b}\int_a^b |K(x,y)|\,|\varphi_1(y) - \varphi_2(y)|\,dy \le |\lambda| M (b-a) \max_{a \le y \le b} |\varphi_1(y) - \varphi_2(y)| \qquad (61)
\]
\[
< \rho(\varphi_1, \varphi_2) \quad \text{if} \quad |\lambda| < \frac{1}{M(b-a)}.
\]
For such $\lambda$ the mapping $A$ is therefore a contraction and, by Theorem 2, the IE (59) has one and only one solution in $C[a,b]$.
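The same contraction argument yields a practical solution method: for $|\lambda| < 1/(M(b-a))$ the iterates $\varphi_{k+1}(x) = f(x) - \lambda\int_a^b K(x,y)\varphi_k(y)\,dy$ converge to the solution of (59). The Python sketch below is not part of the compendium; it discretizes the integral with the rectangle rule and performs this iteration, and the kernel $K(x,y) = x + y$ on $[0,1]$ (so $M = 2$), the right-hand side $f(x) = x^2$, and the value $\lambda = 0.2$ are assumed test data chosen only so that the contraction condition $|\lambda| < 1/(M(b-a)) = 1/2$ holds.

```python
import numpy as np

def solve_fredholm_2nd_kind(kernel, f, a, b, lam, n=400, n_iter=200):
    """Fixed-point iteration phi_{k+1}(x) = f(x) - lam * int_a^b K(x, y) phi_k(y) dy
    for the Fredholm IE of the 2nd kind (59), rectangle rule on a uniform grid."""
    h = (b - a) / n
    y = a + h * np.arange(1, n + 1)        # quadrature nodes y_k = a + k*h
    K = kernel(y[:, None], y[None, :])     # matrix K(x_i, y_k), collocated at the nodes
    phi = f(y)                             # phi_0 = f
    for _ in range(n_iter):
        phi = f(y) - lam * h * K @ phi
    return y, phi

if __name__ == "__main__":
    # assumed test data: K(x, y) = x + y on [0, 1] (M = 2), f(x) = x**2, lam = 0.2
    x, phi = solve_fredholm_2nd_kind(lambda x, y: x + y, lambda x: x**2, 0.0, 1.0, 0.2)
    # residual of the discretized equation f = phi + lam * int K phi
    h = x[1] - x[0]
    res = phi + 0.2 * h * (x[:, None] + x[None, :]) @ phi - x**2
    print("max residual:", np.abs(res).max())
```

The iteration is the same construction as the Neumann series of Section 5.4; for larger $|\lambda|$ it may diverge, and the direct methods of Section 9 should be used instead.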


5.3 Linear Volterra IE of the 2nd kind

Consider a Volterra IE of the 2nd kind,
\[
f(x) = \varphi(x) + \lambda \int_a^x K(x,y)\varphi(y)\,dy, \qquad (68)
\]
where the kernel $K(x,y)$ is a continuous function in the square $\Pi$ for some $b > a$, so that $|K(x,y)| \le M$ for $(x,y) \in \Pi$.

Formulate a generalization of the principle of contraction mappings.

Theorem 4 If $A$ is a continuous mapping of a complete metric space $R$ into itself such that the mapping $A^n$ is a contraction for some $n$, then the equation $Ax = x$ has one and only one solution.

In fact, if we take an arbitrary point $x \in R$ and consider the sequence $A^{kn}x$, $k = 0, 1, 2, \ldots$, the repetition of the proof of the classical principle of contraction mappings yields the convergence of this sequence. Let $x_0 = \lim_{k\to\infty} A^{kn}x$; then $Ax_0 = x_0$. In fact, $Ax_0 = \lim_{k\to\infty} A^{kn}Ax$. Since the mapping $A^n$ is a contraction, we have
\[
\rho(A^{kn}Ax, A^{kn}x) \le \alpha\,\rho(A^{(k-1)n}Ax, A^{(k-1)n}x) \le \ldots \le \alpha^k \rho(Ax, x).
\]
Consequently,
\[
\lim_{k\to\infty} \rho(A^{kn}Ax, A^{kn}x) = 0,
\]
i.e., $Ax_0 = x_0$.

For the Volterra IE (68), consider the mapping $g = A\varphi$ defined by
\[
g(x) = f(x) - \lambda \int_a^x K(x,y)\varphi(y)\,dy \qquad (69)
\]
of the complete space $C[a,b]$ into itself. One can show that a suitable power $A^n$ of this mapping is a contraction, so that Theorem 4 applies.

5.4 Neumann series

Consider an IE
\[
f(x) = \varphi(x) - \lambda \int_a^b K(x,y)\varphi(y)\,dy. \qquad (70)
\]
In order to determine the solution by the method of successive approximations and obtain the Neumann series, rewrite (70) as
\[
\varphi(x) = f(x) + \lambda \int_a^b K(x,y)\varphi(y)\,dy \qquad (71)
\]
and take the right-hand side $f(x)$ as the zero approximation, setting
\[
\varphi_0(x) = f(x). \qquad (72)
\]
Substitute the zero approximation into the right-hand side of (71) to obtain the first approximation
\[
\varphi_1(x) = f(x) + \lambda \int_a^b K(x,y)\varphi_0(y)\,dy, \qquad (73)
\]


and so on, obtaining for the $(n+1)$st approximation
\[
\varphi_{n+1}(x) = f(x) + \lambda \int_a^b K(x,y)\varphi_n(y)\,dy. \qquad (74)
\]
Assume that the kernel $K(x,y)$ is a bounded function in the square $\Pi = \{(x,y):\ a \le x \le b,\ a \le y \le b\}$, so that
\[
|K(x,y)| \le M, \quad (x,y) \in \Pi,
\]
or even that there exists a constant $C_1$ such that
\[
\int_a^b |K(x,y)|^2\,dy \le C_1 \quad \forall x \in [a,b]. \qquad (75)
\]
Then the following statement is valid.

Theorem 5 Assume that condition (75) holds. Then the sequence $\varphi_n$ of successive approximations (74) converges uniformly for all $\lambda$ satisfying
\[
|\lambda| < \frac{1}{B}, \qquad B = \sqrt{\int_a^b\!\!\int_a^b |K(x,y)|^2\,dx\,dy}. \qquad (76)
\]
The limit of the sequence $\varphi_n$ is the unique solution to IE (70).

Proof. Write the first two successive approximations (74):
\[
\varphi_1(x) = f(x) + \lambda \int_a^b K(x,y)f(y)\,dy, \qquad (77)
\]
\[
\varphi_2(x) = f(x) + \lambda \int_a^b K(x,y)\varphi_1(y)\,dy = \qquad (78)
\]
\[
= f(x) + \lambda \int_a^b K(x,y)f(y)\,dy + \lambda^2 \int_a^b\!\!\int_a^b K(x,t)K(t,y)f(y)\,dt\,dy. \qquad (79)
\]
Set
\[
K_2(x,y) = \int_a^b K(x,t)K(t,y)\,dt \qquad (80)
\]
to obtain (by changing the order of integration)
\[
\varphi_2(x) = f(x) + \lambda \int_a^b K(x,y)f(y)\,dy + \lambda^2 \int_a^b K_2(x,y)f(y)\,dy. \qquad (81)
\]
In the same manner, we obtain the third approximation,
\[
\varphi_3(x) = f(x) + \lambda \int_a^b K(x,y)f(y)\,dy + \lambda^2 \int_a^b K_2(x,y)f(y)\,dy + \lambda^3 \int_a^b K_3(x,y)f(y)\,dy, \qquad (82)
\]
where
\[
K_3(x,y) = \int_a^b K(x,t)K_2(t,y)\,dt. \qquad (83)
\]


The general relationship has the form
\[
\varphi_n(x) = f(x) + \sum_{m=1}^n \lambda^m \int_a^b K_m(x,y)f(y)\,dy, \qquad (84)
\]
where
\[
K_1(x,y) = K(x,y), \qquad K_m(x,y) = \int_a^b K(x,t)K_{m-1}(t,y)\,dt, \qquad (85)
\]
and $K_m(x,y)$ is called the $m$th iterated kernel. One can easily prove that the iterated kernels satisfy a more general relationship:
\[
K_m(x,y) = \int_a^b K_r(x,t)K_{m-r}(t,y)\,dt, \quad r = 1, 2, \ldots, m-1 \ (m = 2, 3, \ldots). \qquad (86)
\]
Assuming that the successive approximations (84) converge, we obtain the (unique) solution to IE (70) by passing to the limit in (84):
\[
\varphi(x) = f(x) + \sum_{m=1}^\infty \lambda^m \int_a^b K_m(x,y)f(y)\,dy. \qquad (87)
\]
In order to prove the convergence of this series, write
\[
\int_a^b |K_m(x,y)|^2\,dy \le C_m \quad \forall x \in [a,b] \qquad (88)
\]
and estimate $C_m$. Setting $r = m-1$ in (86) we have
\[
K_m(x,y) = \int_a^b K_{m-1}(x,t)K(t,y)\,dt \quad (m = 2, 3, \ldots). \qquad (89)
\]
Applying to (89) the Schwarz inequality we obtain
\[
|K_m(x,y)|^2 \le \int_a^b |K_{m-1}(x,t)|^2\,dt \int_a^b |K(t,y)|^2\,dt. \qquad (90)
\]
Integrating (90) with respect to $y$ yields
\[
\int_a^b |K_m(x,y)|^2\,dy \le B^2 \int_a^b |K_{m-1}(x,t)|^2\,dt \le B^2 C_{m-1} \qquad \Bigl(B^2 = \int_a^b\!\!\int_a^b |K(x,y)|^2\,dx\,dy\Bigr), \qquad (91)
\]
which gives
\[
C_m \le B^2 C_{m-1} \qquad (92)
\]
and finally the required estimate
\[
C_m \le B^{2m-2} C_1. \qquad (93)
\]
Denoting
\[
D = \sqrt{\int_a^b |f(y)|^2\,dy} \qquad (94)
\]
and applying the Schwarz inequality we obtain
\[
\Bigl| \int_a^b K_m(x,y)f(y)\,dy \Bigr|^2 \le \int_a^b |K_m(x,y)|^2\,dy \int_a^b |f(y)|^2\,dy \le D^2 C_1 B^{2m-2}. \qquad (95)
\]


Thus the common term of the series (87) is estimated by the quantity
\[
D\sqrt{C_1}\,|\lambda|^m B^{m-1}, \qquad (96)
\]
so that the series converges faster than the geometric progression with ratio $|\lambda| B$, which proves the theorem.

If we take the first $n$ terms in the series (87), then the resulting error will be not greater than
\[
|\lambda|^{n+1}\, D\sqrt{C_1}\, B^n\, \frac{1}{1 - |\lambda| B}. \qquad (97)
\]

5.5 Resolvent

Introduce the notation for the resolvent,
\[
\Gamma(x,y,\lambda) = \sum_{m=1}^\infty \lambda^{m-1} K_m(x,y). \qquad (98)
\]
Changing the order of summation and integration in (87), we obtain the solution in the compact form
\[
\varphi(x) = f(x) + \lambda \int_a^b \Gamma(x,y,\lambda)f(y)\,dy. \qquad (99)
\]
One can show that the resolvent satisfies the IE
\[
\Gamma(x,y,\lambda) = K(x,y) + \lambda \int_a^b K(x,t)\Gamma(t,y,\lambda)\,dt. \qquad (100)
\]

6 IEs as linear operator equations. Fundamental properties of completely continuous operators

6.1 Banach spaces

Definition 3 A complete normed space is said to be a Banach space, $B$.

Example 9 The space $C[a,b]$ of continuous functions defined in a closed interval $a \le x \le b$ with the (uniform) norm $\|f\|_C = \max_{a \le x \le b}|f(x)|$ [and the metric $\rho(\varphi_1,\varphi_2) = \|\varphi_1 - \varphi_2\|_C = \max_{a \le x \le b}|\varphi_1(x) - \varphi_2(x)|$] is a normed and complete space. Indeed, every fundamental (convergent) functional sequence $\{\varphi_n(x)\}$ of continuous functions converges to a continuous function in the $C[a,b]$-metric.

Definition 4 A set $M$ in the metric space $R$ is said to be compact if every sequence of elements in $M$ contains a subsequence that converges to some $x \in R$.


Definition 5 A family of functions {φ} defined on the closed interval [a, b] is said to be uniformlybounded if there exists a number M > 0 such thatfor all x and for all φ in the family.|φ(x)| < MDefinition 6 A family of functions {φ} defined on the closed interval [a, b] is said to be equicontinuousif for every ɛ > 0 there is a δ > 0 such that|φ(x 1 ) − φ(x 2 )| < ɛfor all x 1 , x 2 such that |x 1 ) − x 2 | < δ and for all φ in the given family.Formulate the most common criterion of compactness for families of functions.Theorem 6 (Arzela’s theorem). A necessary and sufficient condition that a family of continuousfunctions defined on the closed interval [a, b] be compact is that this family be uniformlybounded and equicontinuous.6.2 IEs and completely continuous linear operatorsDefinition 7 An operator A which maps a Banach space E into itself is said to be completelycontinuous if it maps and arbitrary bounded set into a compact set.Consider an IE of the 2nd kindf(x) = φ(x) + λwhich can be written in the operator form asHere the linear (integral) operator A defined byf(x) =∫ baK(x, y)φ(y)dy, (101)f = φ + λAφ. (102)∫ baK(x, y)φ(y)dy (103)and considered in the (Banach) space C[a, b] of continuous functions in a closed interval a ≤x ≤ b represents an extensive class of completely continuous linear operators.Theorem 7 Formula (103) defines a completely continuous linear operator in the (Banach)space C[a, b] if the function K(x, y) is to be bounded in the squareΠ = {(x, y) : a ≤ x ≤ b, a ≤ y ≤ b},and all points of discontinuity of the function K(x, y) lie on the finite number of curveswhere ψ k (x) are continuous functions.y = ψ k (x), k = 1, 2, . . . n, (104)18


Proof. To prove the theorem, it is sufficient to show (according to Arzela’s theorem) that thefunctions (103) defined on the closed interval [a, b] (i) are continuous and the family of thesefunctions is (ii) uniformly bounded and (iii) equicontinuous. Below we will prove statements(i), (ii), and (iii).Under the conditions of the theorem the integral in (103) exists for arbitrary x ∈ [a, b]. SetM = sup |K(x, y)|,Πand denote by G the set of points (x, y) for which the condition|y − ψ k (x)| < ɛ , k = 1, 2, . . . n, (105)12Mnholds. Denote by F the complement of G with respect to Π. Since F is a closed set and K(x, y)is continuous on F , there exists a δ > 0 such that for points (x ′ , y) and (x ′′ , y) in F for which|x ′ − x ′′ | < δ the inequality|K(x ′ , y) − K(x ′′ , y)| < (b − a) 1 3 ɛ (106)holds. Now let x ′ and x ′′ be such that |x ′ − x ′′ | < δ holds. Then|f(x ′ ) − f(x ′′ )| ≤can be evaluated by integrating over the sum A of the intervals∫ b|y − ψ k (x ′ )|


6.3 Properties of completely continuous operatorsTheorem 8 If {A n } is a sequence of completely continuous operators on a Banach space Ewhich converges in (operator) norm to an operator A then the operator A is completely continuous.Proof. Consider an arbitrary bounded sequence {x n } of elements in a Banach space E suchthat ||x n || < c. It is necessary to prove that sequence {Ax n } contains a convergent subsequence.The operator A 1 is completely continuous and therefore we can select a convergent subsequencefrom the elements {A 1 x n }. Letx (1)1 , x (1)2 , . . . , x (1)n , . . . , (112)be the inverse images of the members of this convergent subsequence. We apply operator A 2 toeach members of subsequence (112)). The operator A 2 is completely continuous; therefore wecan again select a convergent subsequence from the elements {A 2 x (1)n }, denoting byx (2)1 , x (2)2 , . . . , x (2)n , . . . , (113)the inverse images of the members of this convergent subsequence. The operator A 3 is also completelycontinuous; therefore we can again select a convergent subsequence from the elements{A 3 x (2)n }, denoting byx (3)1 , x (3)2 , . . . , x (3)n , . . . , (114)the inverse images of the members of this convergent subsequence; and so on. Now we can forma diagonal subsequencex (1)1 , x (2)2 , . . . , x (n)n , . . . . (115)Each of the operators A n transforms this diagonal subsequence into a convergent sequence.If we show that operator A also transforms (115)) into a convergent sequence, then this willestablish its complete continuity. To this end, apply the Cauchy criterion (48) and evaluate thenormWe have the chain of inequalities||Ax (n)nChoose k so that ||A − A k || < ɛ2c||Ax (n)n− Ax (m)m || ≤ ||Ax n(n) − A k x (n)+ |A k x (n)n≤− Ax (m)m |. (116)n | +− A k x (m)m||A − A k |(||x (m)mand N such that||A k x (n)n| + ||A k x (m)m|| + ||x (m)m− A k x (m)m || < ɛ 2− Ax (m)m || ≤|) + ||A k x (n)n− A k x (m)m ||.holds for arbitrary n > N and m > N (which is possible because the sequence ||A k x (n)n ||converges). As a result,||Ax (n)ni.e., the sequence ||Ax (n)n || is fundamental.− Ax (m)m || < ɛ, (117)20
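A concrete way to see Theorem 8 at work for integral operators is to approximate a kernel by a truncated separable (finite-rank) expansion and watch the error decrease. The Python sketch below is not from the compendium: the kernel $K(x,y) = e^{xy}$ on $[0,1]^2$ and its truncation $K_N(x,y) = \sum_{n<N}(xy)^n/n!$ are assumed purely for illustration, and the double integral is estimated by a midpoint rule. The square root of the double integral of $|K - K_N|^2$ bounds the $L^2$ operator norm of the difference (cf. estimate (183) in Section 10.2), so its decay indicates $\|A - A_N\| \to 0$, with each $A_N$ of finite rank and hence completely continuous; Theorem 8 then transfers complete continuity to $A$.

```python
import numpy as np
from math import factorial

def hs_norm_of_remainder(n_terms, m=300):
    """Rectangle-rule estimate of the Hilbert-Schmidt norm of K - K_N for
    K(x, y) = exp(x*y) and its truncated separable expansion K_N."""
    t = (np.arange(m) + 0.5) / m                     # midpoints of [0, 1]
    X, Y = np.meshgrid(t, t, indexing="ij")
    K = np.exp(X * Y)
    KN = sum((X * Y) ** n / factorial(n) for n in range(n_terms))
    return np.sqrt(np.sum((K - KN) ** 2) / m**2)     # sqrt of the double integral

if __name__ == "__main__":
    for N in range(1, 7):
        print(N, hs_norm_of_remainder(N))            # tends to 0 as N grows
```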


Theorem 9 (Superposition principle). If A is a completely continuous operator and B is abounded operator then the operators AB and BA are completely continuous.Proof. If M is a bounded set in a Banach space E, then the set BM is also bounded. Consequently,the set ABM is compact and this means that the operator AB is completely continuous.Further, if M is a bounded set in E, then AM is compact. Then, because B is a continuousoperator, the set BAM is also compact, and this completes the proof.Corollary A completely continuous operator A cannot have a bounded inverse in a finitedimensionalspace E.Proof. In the contrary case, the identity operator I = AA −1 would be completely continuousin E which is impossible.Theorem 10 The adjoint of a completely operator is completely continuous.7 Elements of the spectral theory of linear operators.Spectrum of a completely continuous operatorIn this section we will consider bounded linear operators which map a Banach space E intoitself.Definition 8 The number λ is said to be a characteristic value of the operator A if there existsa nonzero vector x (characteristic vector) such that Ax = λx.In an n-dimensional space, this definition is equivalent to the following:the number λ is said to be a characteristic value of the operator A if it is a root of thecharacteristic equationDet |A − λI| = 0.The totality of those values of λ for which the inverse of the operator A − λI does not existis called the spectrum of the operator A.The values of λ for which the operator A − λI has the inverse are called regular values ofthe operator A.Thus the spectrum of the operator A consists of all nonregular points.The operatorR λ = (A − λI) −1is called the resolvent of the operator A.In an infinite-dimensional space, the spectrum of the operator A necessarily contains all thecharacteristic values and maybe some other numbers.In this connection, the set of characteristic values is called the point spectrum of the operatorA. The remaining part of the spectrum is called the continuous spectrum of the operator A. Thelatter also consists, in its turn, of two parts: (1) those λ for which A − λI has an unboundedinverse with domain dense in E (and this part is called the continuous spectrum); and (2) the21


emainder of the spectrum: those λ for which A − λI has a (bounded) inverse whose domain isnot dense in E, this part is called the residual spectrum.If the point λ is regular, i.e., if the inverse of the operator A − λI exists, then for sufficientlysmall δ the operator A − (λ + δ)I also has the inverse, i.e., the point δ + λ is regular also. Thusthe regular points form an open set. Consequently, the spectrum, which is a complement, is aclosed set.Theorem 11 The inverse of the operator A − λI exists for arbitrary λ for which |λ| > ||A||.Proof. SinceA − λI = −λ(I − 1 )λ A ,the resolventR λ = (A − λI) −1 = − 1 λ(I − A ) −1= − 1 λ λThis operator series converges for ||A/λ|| < 1, i.e., the operator A − λI has the inverse.Corollary. The spectrum of the operator A is contained in a circle of radius ||A| with centerat zero.Theorem 12 Every completely continuous operator A in the Banach space E has for arbitraryρ > 0 only a finite number of linearly independent characteristic vectors which correspond tothe characteristic values whose absolute values are greater than ρ.∞∑k=0A kλ k .Proof. Assume, ad abs, that there exist an infinite number of linearly independent characteristicvectors x n , n = 1, 2, . . ., satisfying the conditionsAx n = λ n x n , |λ n | > ρ > 0 (n = 1, 2, . . .). (118)Consider in E the subspace E 1 generated by the vectors x n (n = 1, 2, . . .). In this subspacethe operator A has a bounded inverse. In fact, according to the definition, E 1 is the totality ofelements in E which are of the form vectorsand these vectors are everywhere dense in E 1 . For these vectorsx = α 1 x 1 + . . . + α k x k , (119)Ax = α 1 λ 1 x 1 + . . . + α k λ k x k , (120)and, consequently,andA −1 x = (α 1 /λ 1 )x 1 + . . . + (α k /λ k )x k (121)||A −1 x|| < 1 ρ ||x||.Therefore, the operator A −1 defined for vectors of the form (119) by means of (120) can be extendedby continuity (and, consequently, with preservation of norm) to all of E 1 . The operator22


A, being completely continuous in the Banach space E, is completely continuous in E 1 . Accordingto the corollary to Theorem (9), a completely continuous operator cannot have a boundedinverse in infinite-dimensional space. The contradiction thus obtained proves the theorem.Corollary. Every nonzero characteristic value of a completely continuous operator A in theBanach space E has only finite multiplicity and these characteristic values form a bounded setwhich cannot have a single limit point distinct from the origin of coordinates.We have thus obtained a characterization of the point spectrum of a completely continuousoperator A in the Banach space E.One can also show that a completely continuous operator A in the Banach space E cannothave a continuous spectrum.8 Linear operator equations: the Fredholm theory8.1 The Fredholm theory in Banach spaceTheorem 13 Let A be a completely continuous operator which maps a Banach space E intoitself. If the equation y = x − Ax is solvable for arbitrary y, then the equation x − Ax = 0 hasno solutions distinct from zero solution.Proof. Assume the contrary: there exists an element x 1 ≠ 0 such that x 1 − Ax 1 = T x 1 = 0Those elements x for which T x = 0 form a linear subspace E 1 of the banach space E. Denoteby E n the subspace of E consisting of elements x for which the powers T n x = 0.It is clear that subspaces E n form a nondecreasing subsequenceE 1 ⊆ E 2 ⊆ . . . E n ⊆ . . . .We shall show that the equality sign cannot hold at any point of this chain of inclusions. Infact, since x 1 ≠ 0 and equation y = T x is solvable for arbitrary y, we can find a sequence ofelements x 1 , x 2 , . . . , x n , . . . distinct from zero such thatT x 2 = x 1 ,T x 3 = x 2 ,. . . ,T x n = x n−1 ,. . . .The element x n belongs to subspace E n for each n but does not belong to subspace E n−1 . Infact,butT n x n = T n−1 x n−1 = . . . = T x 1 = 0,T n−1 x n = T n−2 x n−1 = . . . = T x 2 = x 1 ≠ 0.23


All the subspaces E n are linear and closed; therefore for arbitrary n there exists an elementy n+1 ∈ E n+1 such that||y n+1 || = 1 and ρ(y n+1 , E n ) ≥ 1 2 ,where ρ(y n+1 , E n ) denotes the distance from y n+1 to the space E n :ρ(y n+1 , E n ) = inf {||y n+1 − x||, x ∈ E n }.Consider the sequence {Ay k }. We have (assuming p > q)||Ay p − Ay q || = ||y p − (y q + T y p − T y q )|| ≥ 1 2 ,since y q + T y p − T y q ∈ E p−1 . It is clear from this that the sequence {Ay k } cannot containany convergent subsequence which contradicts the complete continuity of the operator A. Thiscontradiction proves the theoremCorollary 1. If the equation y = x − Ax is solvable for arbitrary y, then this equation hasunique solution for each y, i.e., the operator I − A has an inverse in this case.We will consider, together with the equation y = x − Ax the equation h = f − A ∗ f whichis adjoint to it, where A ∗ is the adjoint to A and h, f are elements of the Banach space Ē, theconjugate of E.Corollary 2. If the equation h = f − A ∗ f is solvable for all h, then the equation f − A ∗ f = 0has only the zero solution.It is easy to verify this statement by recalling that the operator adjoint to a completelycontinuous operator is also completely continuous, and the space Ē conjugate to a Banachspace is itself a Banach space.Theorem 14 A necessary and sufficient condition that the equation y = x − Ax be solvable isthat f(y) = 0 for all f for which f − A ∗ f = 0.Proof. 1. If we assume thast the equation y = x − Ax is solvable, thenf(y) = f(x) − f(Ax) = f(y) = f(x) − A ∗ f(x)i.e., f(y) = 0 for all f for which f − A ∗ f = 0 .2. Now assume that f(y) = 0 for all f for which f − A ∗ f = 0. For each of these functionalswe consider the set L f of elements for which f takes on the value zero. Then our assertion isequivalent to the fact that the set ∩L f consists only of the elements of the form x−Ax. Thus, itis necessary to prove that an element y 1 which cannot be represented in the form x−Ax cannotbe contained in ∩L f . To do this we shall show that for such an element y 1 we can construct afunctional f 1 satisfyingf 1 (y 1 ) ≠ 0, f 1 − A ∗ f 1 = 0.24


These conditions are equivalent to the following:In fact,f 1 (y 1 ) ≠ 0, f 1 (x − Ax) = 0 ∀x.(f 1 − A ∗ f 1 )(x) = f 1 (x) − A ∗ f 1 (x) = f 1 (x) − f 1 (Ax) = f 1 (x − Ax).Let G 0 be a subspace consisting of all elements of the form x − Ax. Consider the subspace{G 0 , y 1 } of the elements of the form z + αy 1 , z ∈ G 0 and define the linear functional by settingf 1 (z + αy 1 ) = α.Extending the linear functional to the whole space E (which is possible by virtue of the Hahn–Banach theorem) we do obtain a linear functional satisfying the required conditions. Thiscompletes the proof of the theorem.Corollary If the equation f − A ∗ f = 0 does not have nonzero solution, then the equationx − Ax = y is solvable for all y.Theorem 15 A necessary and sufficient condition that the equation f − A ∗ f = h be solvableis that h(x) = 0 for all x for which x − Ax = 0.Proof. 1. If we assume that the equation f − A ∗ f = h is solvable, thenh(x) = f(x) − A ∗ f(x) = f(x − Ax)i.e., h(x) = 0 if x − Ax = 0.2. Now assume that h(x) = 0 for all x such that x−Ax = 0. We shall show that the equationf − A ∗ f = h is solvable. Consider the set F of all elements of the form y = x − Ax. Constructa functional f on F by settingf(T x) = h(x).This equation defines in fact a linear functional. Its value is defined uniquely for each y becauseif T x 1 = T x 2 then h(x 1 ) = h(x 2 ). It is easy to verify the linearity of the functional and extendit to the whole space E. We obtainf(T x) = T ∗ f(x) = h(x).i.e., the functional is a solution of equation f − A ∗ f = h. This completes the proof of thetheorem.Corollary If the equation x − Ax = 0 does not have nonzero solution, then the equationf − A ∗ f = h is solvable for all h.The following theorem is the converse of Thereom 13.Theorem 16 If the equation x − Ax = 0 has x = 0 for its only solution, then the equationx − Ax = y has a solution for all y.25


Proof. If the equation x − Ax = 0 has only one solution, then by virtue of the corollary tothe preceding theorem, the equation f − A ∗ f = h is solvable for all h. Then, by Corollary 2to Thereom 13, the equation f − A ∗ f = 0 has f = 0 for its only solution. Hence Corollary toThereom 14 implies that the equation x − Ax = y has a solution for all y.Thereoms 13 and 16 show that for the equationx − Ax = y (122)only the following two cases are possible:(i) Equation (122) has a unique solution for each y, i.e., the operator I − A has an inverse.(ii) The corresponding homogeneous equation x − Ax = 0 has a nonzero solution, i.e., thenumber λ = 1 is a characteristic value for the operator A.All the results obtained above for the equation x − Ax = y remain valid for the equationx − λAx = y (the adjoint equation is f − ¯λA ∗ f = h). It follows that either the operator I − λAhas an inverse or the number 1/λ is a characteristic value for the operator A. In other words,in the case of a completely continuous operator, an arbitrary number is either a regular pointor a characteristic value. Thus a completely continuous operator has only a point spectrum.Theorem 17 The dimension n of the space N whose elements are the solutions to the equationx − Ax = 0 is equal to the dimension of the subspace N ∗ whose elements are the solutions tothe equation f − A ∗ f = 0.This theorem is proved below for operators in the Hilbert space.8.2 Fredholm theorems for linear integral operators and the FredholmresolventOne can formulate the Fredholm theorems for linear integral operatorsAφ(x) =∫ baK(x, y)φ(y)dy (123)in terms of the Fredholm resolvent.The numbers λ for which there exists the Fredholm resolvent of a Fredholm IEφ(x) − λ∫ baK(x, y)φ(y)dy = f(x) (124)will be called regular values. The numbers λ for which the Fredholm resolvent Γ(x, y, λ) is notdefined (they belong to the point spectrum) will be characteristic values of the IE (124), andthey coincide with the poles of the resolvent. The inverse of characteristic values are eigenvaluesof the IE.Formulate, for example, the first Fredholm theorem in terms of regular values of the resolvent.If λ is a regular value, then the Fredholm IE (124) has one and only one solution for arbitraryf(x). This solution is given by formula (99),∫ bφ(x) = f(x) + λ Γ(x, y, λ)f(y)dy. (125)a26


In particular, if $\lambda$ is a regular value, then the homogeneous Fredholm IE (124),
\[
\varphi(x) - \lambda \int_a^b K(x,y)\varphi(y)\,dy = 0, \qquad (126)
\]
has only the zero solution $\varphi(x) = 0$.

9 IEs with degenerate and separable kernels

9.1 Solution of IEs with degenerate and separable kernels

An IE
\[
\varphi(x) - \lambda \int_a^b K(x,y)\varphi(y)\,dy = f(x) \qquad (127)
\]
with a degenerate kernel
\[
K(x,y) = \sum_{i=1}^n a_i(x)b_i(y) \qquad (128)
\]
can be represented, by changing the order of summation and integration, in the form
\[
\varphi(x) - \lambda \sum_{i=1}^n a_i(x) \int_a^b b_i(y)\varphi(y)\,dy = f(x). \qquad (129)
\]
Here one may assume that the functions $a_i(x)$ (and $b_i(y)$) are linearly independent (otherwise, the number of terms in (128) can be reduced).

It is easy to solve such IEs with degenerate and separable kernels. Denote
\[
c_i = \int_a^b b_i(y)\varphi(y)\,dy, \qquad (130)
\]
which are unknown constants. Equation (129) becomes
\[
\varphi(x) = f(x) + \lambda \sum_{i=1}^n c_i a_i(x), \qquad (131)
\]
and the problem reduces to the determination of the unknowns $c_i$. To this end, substitute (131) into equation (129) to obtain, after simple algebra,
\[
\sum_{i=1}^n a_i(x) \left\{ c_i - \int_a^b b_i(y)\left[ f(y) + \lambda \sum_{k=1}^n c_k a_k(y) \right] dy \right\} = 0. \qquad (132)
\]
Since the functions $a_i(x)$ are linearly independent, equality (132) yields
\[
c_i - \int_a^b b_i(y)\left[ f(y) + \lambda \sum_{k=1}^n c_k a_k(y) \right] dy = 0, \quad i = 1, 2, \ldots, n. \qquad (133)
\]


Denoting
\[
f_i = \int_a^b b_i(y)f(y)\,dy, \qquad a_{ik} = \int_a^b b_i(y)a_k(y)\,dy, \qquad (134)
\]
and changing the order of summation and integration, we rewrite (133) as
\[
c_i - \lambda \sum_{k=1}^n a_{ik} c_k = f_i, \quad i = 1, 2, \ldots, n, \qquad (135)
\]
which is a linear equation system of order $n$ with respect to the unknowns $c_i$. Note that the same system can be obtained by multiplying (131) by $b_i(x)$ $(i = 1, 2, \ldots, n)$ and integrating from $a$ to $b$.

System (135) is equivalent to IE (129) (and (127), (128)) in the following sense: if (135) is uniquely solvable, then IE (129) has one and only one solution (and vice versa); and if (135) has no solution, then IE (129) is not solvable (and vice versa).

The determinant of system (135) is
\[
D(\lambda) = \begin{vmatrix}
1 - \lambda a_{11} & -\lambda a_{12} & \ldots & -\lambda a_{1n} \\
-\lambda a_{21} & 1 - \lambda a_{22} & \ldots & -\lambda a_{2n} \\
\ldots & \ldots & \ldots & \ldots \\
-\lambda a_{n1} & -\lambda a_{n2} & \ldots & 1 - \lambda a_{nn}
\end{vmatrix}. \qquad (136)
\]
$D(\lambda)$ is a polynomial in powers of $\lambda$ of an order not higher than $n$. Note that $D(\lambda)$ is not identically zero because $D(0) = 1$. Thus there exist not more than $n$ different numbers $\lambda = \lambda^*_k$ such that $D(\lambda^*_k) = 0$. When $\lambda = \lambda^*_k$, then system (135), together with IE (129), is either not solvable or has infinitely many solutions. If $\lambda \ne \lambda^*_k$, then system (135) and IE (129) are uniquely solvable.

Example 10 Consider a Fredholm IE of the 2nd kind,
\[
\varphi(x) - \lambda \int_0^1 (x+y)\varphi(y)\,dy = f(x). \qquad (137)
\]
Here
\[
K(x,y) = x + y = \sum_{i=1}^2 a_i(x)b_i(y) = a_1(x)b_1(y) + a_2(x)b_2(y),
\]
where
\[
a_1(x) = x, \quad b_1(y) = 1, \quad a_2(x) = 1, \quad b_2(y) = y,
\]
is a degenerate kernel, $f(x)$ is a given function, and $\lambda$ is a parameter.

Look for the solution to (137) in the form (131) (note that here $n = 2$):
\[
\varphi(x) = f(x) + \lambda \sum_{i=1}^2 c_i a_i(x) = f(x) + \lambda(c_1 a_1(x) + c_2 a_2(x)) = f(x) + \lambda(c_1 x + c_2). \qquad (138)
\]
Denoting
\[
f_i = \int_0^1 b_i(y)f(y)\,dy, \qquad a_{ik} = \int_0^1 b_i(y)a_k(y)\,dy, \qquad (139)
\]


we have
\[
a_{11} = \int_0^1 b_1(y)a_1(y)\,dy = \int_0^1 y\,dy = \frac{1}{2}, \qquad
a_{12} = \int_0^1 b_1(y)a_2(y)\,dy = \int_0^1 dy = 1, \qquad (140)
\]
\[
a_{21} = \int_0^1 b_2(y)a_1(y)\,dy = \int_0^1 y^2\,dy = \frac{1}{3}, \qquad
a_{22} = \int_0^1 b_2(y)a_2(y)\,dy = \int_0^1 y\,dy = \frac{1}{2},
\]
and
\[
f_1 = \int_0^1 b_1(y)f(y)\,dy = \int_0^1 f(y)\,dy, \qquad
f_2 = \int_0^1 b_2(y)f(y)\,dy = \int_0^1 yf(y)\,dy. \qquad (141)
\]
Changing the order of summation and integration, we rewrite the corresponding equation (133) as
\[
c_i - \lambda \sum_{k=1}^2 a_{ik} c_k = f_i, \quad i = 1, 2, \qquad (142)
\]
or, explicitly,
\[
(1 - \lambda/2)c_1 - \lambda c_2 = f_1, \qquad
-(\lambda/3)c_1 + (1 - \lambda/2)c_2 = f_2, \qquad (143)
\]
which is a linear equation system of order 2 with respect to the unknowns $c_1$, $c_2$.

The determinant of system (143),
\[
D(\lambda) = \begin{vmatrix} 1 - \lambda/2 & -\lambda \\ -\lambda/3 & 1 - \lambda/2 \end{vmatrix}
= 1 + \lambda^2/4 - \lambda - \lambda^2/3 = 1 - \lambda - \lambda^2/12 = -\frac{1}{12}(\lambda^2 + 12\lambda - 12),
\]
is a polynomial of order 2 ($D(\lambda)$ is not identically zero, and $D(0) = 1$). There exist not more than 2 different numbers $\lambda = \lambda^*_k$ such that $D(\lambda^*_k) = 0$. Here there are exactly two such numbers; they are
\[
\lambda^*_1 = -6 + 4\sqrt{3}, \qquad \lambda^*_2 = -6 - 4\sqrt{3}. \qquad (144)
\]
If $\lambda \ne \lambda^*_k$, then system (143) and IE (137) are uniquely solvable.

To find the solution of system (143), apply Cramer's rule and calculate the determinants of system (143):
\[
D_1 = \begin{vmatrix} f_1 & -\lambda \\ f_2 & 1 - \lambda/2 \end{vmatrix}
= f_1(1 - \lambda/2) + f_2\lambda
= (1 - \lambda/2)\int_0^1 f(y)\,dy + \lambda\int_0^1 yf(y)\,dy
= \int_0^1 [(1 - \lambda/2) + y\lambda]f(y)\,dy, \qquad (145)
\]


\[
D_2 = \begin{vmatrix} 1 - \lambda/2 & f_1 \\ -\lambda/3 & f_2 \end{vmatrix}
= f_2(1 - \lambda/2) + f_1\lambda/3
= (1 - \lambda/2)\int_0^1 yf(y)\,dy + (\lambda/3)\int_0^1 f(y)\,dy
= \int_0^1 [y(1 - \lambda/2) + \lambda/3]f(y)\,dy. \qquad (147)
\]
Then the solution of system (143) is
\[
c_1 = \frac{D_1}{D} = -\frac{12}{\lambda^2 + 12\lambda - 12}\int_0^1 [(1 - \lambda/2) + y\lambda]f(y)\,dy
= \frac{1}{\lambda^2 + 12\lambda - 12}\int_0^1 [(6\lambda - 12) - 12y\lambda]f(y)\,dy, \qquad (148)
\]
\[
c_2 = \frac{D_2}{D} = -\frac{12}{\lambda^2 + 12\lambda - 12}\int_0^1 [y(1 - \lambda/2) + \lambda/3]f(y)\,dy
= \frac{1}{\lambda^2 + 12\lambda - 12}\int_0^1 [y(6\lambda - 12) - 4\lambda]f(y)\,dy. \qquad (149)
\]
According to (138), the solution of the IE (137) is
\[
\varphi(x) = f(x) + \lambda(c_1 x + c_2)
= f(x) + \frac{\lambda}{\lambda^2 + 12\lambda - 12}\int_0^1 [6(\lambda - 2)(x + y) - 12\lambda xy - 4\lambda]f(y)\,dy. \qquad (150)
\]
When $\lambda = \lambda^*_k$ $(k = 1, 2)$, then system (143), together with IE (137), is not solvable.
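The computation in Example 10 follows a pattern that is easy to automate for any degenerate kernel: evaluate the moments $a_{ik}$ and $f_i$ of (134) by quadrature, solve the linear system (135), and assemble the solution (131). The Python sketch below is not part of the compendium; the right-hand side $f(x) = x^2$ and the value $\lambda = 2$ are assumed test data (with $\lambda \ne \lambda^*_k$), and the trapezoidal rule and grid size are arbitrary choices.

```python
import numpy as np

def solve_degenerate_kernel_ie(a_funcs, b_funcs, f, lam, a=0.0, b=1.0, m=2000):
    """Solve phi(x) - lam * int_a^b sum_i a_i(x) b_i(y) phi(y) dy = f(x)
    by reducing it to the linear system (135) for the constants c_i."""
    y = np.linspace(a, b, m)
    w = np.full(m, (b - a) / (m - 1)); w[0] *= 0.5; w[-1] *= 0.5   # trapezoidal weights
    n = len(a_funcs)
    A = np.array([[np.sum(w * b_funcs[i](y) * a_funcs[k](y)) for k in range(n)]
                  for i in range(n)])                               # moments a_{ik}
    fvec = np.array([np.sum(w * b_funcs[i](y) * f(y)) for i in range(n)])  # moments f_i
    c = np.linalg.solve(np.eye(n) - lam * A, fvec)                  # system (135)
    return lambda x: f(x) + lam * sum(c[i] * a_funcs[i](x) for i in range(n))

if __name__ == "__main__":
    # Example 10: K(x, y) = x + y, with the assumed data f(x) = x**2, lam = 2
    phi = solve_degenerate_kernel_ie([lambda x: x, lambda x: np.ones_like(x)],
                                     [lambda y: np.ones_like(y), lambda y: y],
                                     lambda x: x**2, lam=2.0)
    x = np.linspace(0.0, 1.0, 5)
    exact = x**2 - 0.75 * x - 1.0 / 3.0      # closed form (150) evaluated at lam = 2
    print(np.max(np.abs(phi(x) - exact)))    # small quadrature error only
```

The same routine applies verbatim to Examples 11 and 12 below, whose kernels are also degenerate; only the lists of functions $a_i$, $b_i$ and the interval change.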


Example 11 Consider a Fredholm IE of the 2nd kind,
\[
\varphi(x) - \lambda \int_0^{2\pi} \sin x \cos y\,\varphi(y)\,dy = f(x). \qquad (151)
\]
Here
\[
K(x,y) = \sin x \cos y = a(x)b(y)
\]
is a degenerate kernel, $f(x)$ is a given function, and $\lambda$ is a parameter.

Rewrite IE (151) in the form (131) (here $n = 1$):
\[
\varphi(x) = f(x) + \lambda c \sin x, \qquad c = \int_0^{2\pi} \cos y\,\varphi(y)\,dy. \qquad (152)
\]
Multiplying the first equality in (152) by $b(x) = \cos x$ and integrating from $0$ to $2\pi$ we obtain
\[
c = \int_0^{2\pi} f(x)\cos x\,dx. \qquad (153)
\]
Therefore the solution is
\[
\varphi(x) = f(x) + \lambda c \sin x = f(x) + \lambda \int_0^{2\pi} \sin x \cos y\,f(y)\,dy. \qquad (154)
\]

Example 12 Consider a Fredholm IE of the 2nd kind,
\[
x^2 = \varphi(x) - \int_0^1 (x^2 + y^2)\varphi(y)\,dy, \qquad (155)
\]
where $K(x,y) = x^2 + y^2$ is a degenerate kernel, $f(x) = x^2$ is a given function, and the parameter $\lambda = 1$.

9.2 IEs with degenerate kernels and approximate solution of IEs

Assume that in an IE
\[
\varphi(x) - \lambda \int_a^b K(x,y)\varphi(y)\,dy = f(x) \qquad (156)
\]
the kernel $K(x,y)$ and $f(x)$ are continuous functions in the square
\[
\Pi = \{(x,y):\ a \le x \le b,\ a \le y \le b\}.
\]
Then $\varphi(x)$ will also be continuous. Approximate IE (156) with an IE having a degenerate kernel. To this end, replace the integral in (156) by a finite sum using, e.g., the rectangle rule
\[
\int_a^b K(x,y)\varphi(y)\,dy \approx h \sum_{k=1}^n K(x, y_k)\varphi(y_k), \qquad (157)
\]
where
\[
h = \frac{b - a}{n}, \qquad y_k = a + kh. \qquad (158)
\]
The resulting approximation takes the form
\[
\varphi(x) - \lambda h \sum_{k=1}^n K(x, y_k)\varphi(y_k) = f(x) \qquad (159)
\]
of an IE having a degenerate kernel. Now replace the variable $x$ in (159) by a finite number of values $x = x_i$, $i = 1, 2, \ldots, n$. We obtain a linear equation system with unknowns $\varphi_k = \varphi(x_k)$ and right-hand side containing the known numbers $f_i = f(x_i)$:
\[
\varphi_i - \lambda h \sum_{k=1}^n K(x_i, y_k)\varphi_k = f_i, \quad i = 1, 2, \ldots, n. \qquad (160)
\]
We can find the solution $\{\varphi_i\}$ of system (160) by Cramer's rule,
\[
\varphi_i = \frac{D^{(n)}_i(\lambda)}{D^{(n)}(\lambda)}, \quad i = 1, 2, \ldots, n, \qquad (161)
\]


calculating the determinants
\[
D^{(n)}(\lambda) = \begin{vmatrix}
1 - h\lambda K(x_1, y_1) & -h\lambda K(x_1, y_2) & \ldots & -h\lambda K(x_1, y_n) \\
-h\lambda K(x_2, y_1) & 1 - h\lambda K(x_2, y_2) & \ldots & -h\lambda K(x_2, y_n) \\
\ldots & \ldots & \ldots & \ldots \\
-h\lambda K(x_n, y_1) & -h\lambda K(x_n, y_2) & \ldots & 1 - h\lambda K(x_n, y_n)
\end{vmatrix} \qquad (162)
\]
and so on. To reconstruct the approximate solution $\tilde\varphi^{(n)}$ using the determined $\{\varphi_i\}$, one can use the IE (159) itself:
\[
\varphi(x) \approx \tilde\varphi^{(n)}(x) = \lambda h \sum_{k=1}^n K(x, y_k)\varphi_k + f(x). \qquad (163)
\]
One can prove that $\tilde\varphi^{(n)} \to \varphi(x)$ as $n \to \infty$, where $\varphi(x)$ is the exact solution to IE (156).

9.3 Fredholm's resolvent

As we have already mentioned, the resolvent $\Gamma(x,y,\lambda)$ of IE (156) satisfies the IE
\[
\Gamma(x,y,\lambda) = K(x,y) + \lambda \int_a^b K(x,t)\Gamma(t,y,\lambda)\,dt. \qquad (164)
\]
Solving this IE (164) by the approximate method described above and passing to the limit $n \to \infty$, we obtain the resolvent as a ratio of two power series,
\[
\Gamma(x,y,\lambda) = \frac{D(x,y,\lambda)}{D(\lambda)}, \qquad (165)
\]
where
\[
D(x,y,\lambda) = \sum_{n=0}^\infty \frac{(-1)^n}{n!}\, B_n(x,y)\lambda^n, \qquad (166)
\]
\[
D(\lambda) = \sum_{n=0}^\infty \frac{(-1)^n}{n!}\, c_n\lambda^n, \qquad (167)
\]
with
\[
B_0(x,y) = K(x,y),
\]
\[
B_n(x,y) = \int_a^b \!\ldots\! \int_a^b
\begin{vmatrix}
K(x,y) & K(x,y_1) & \ldots & K(x,y_n) \\
K(y_1,y) & K(y_1,y_1) & \ldots & K(y_1,y_n) \\
\ldots & \ldots & \ldots & \ldots \\
K(y_n,y) & K(y_n,y_1) & \ldots & K(y_n,y_n)
\end{vmatrix}
\,dy_1 \ldots dy_n, \qquad (168)
\]
\[
c_0 = 1,
\]


\[
c_n = \int_a^b \!\ldots\! \int_a^b
\begin{vmatrix}
K(y_1,y_1) & K(y_1,y_2) & \ldots & K(y_1,y_n) \\
K(y_2,y_1) & K(y_2,y_2) & \ldots & K(y_2,y_n) \\
\ldots & \ldots & \ldots & \ldots \\
K(y_n,y_1) & K(y_n,y_2) & \ldots & K(y_n,y_n)
\end{vmatrix}
\,dy_1 \ldots dy_n. \qquad (169)
\]
$D(\lambda)$ is called the Fredholm determinant, and $D(x,y,\lambda)$ the first Fredholm minor.

If the integral
\[
\int_a^b \int_a^b |K(x,y)|^2\,dx\,dy < \infty, \qquad (170)
\]
then the series (167) for the Fredholm determinant converges for all complex $\lambda$, so that $D(\lambda)$ is an entire function of $\lambda$. Thus (164)–(167) define the resolvent on the whole complex plane $\lambda$, with the exception of the zeros of $D(\lambda)$, which are poles of the resolvent.

We may conclude that for all complex $\lambda$ that do not coincide with the poles of the resolvent, the IE
\[
\varphi(x) - \lambda \int_a^b K(x,y)\varphi(y)\,dy = f(x) \qquad (171)
\]
is uniquely solvable, and its solution is given by formula (99),
\[
\varphi(x) = f(x) + \lambda \int_a^b \Gamma(x,y,\lambda)f(y)\,dy. \qquad (172)
\]
For practical computation of the resolvent, one can use the following recurrent relationships:
\[
B_0(x,y) = K(x,y), \quad c_0 = 1, \quad c_n = \int_a^b B_{n-1}(y,y)\,dy, \quad n = 1, 2, \ldots, \qquad (173)
\]
\[
B_n(x,y) = c_n K(x,y) - n\int_a^b K(x,t)B_{n-1}(t,y)\,dt, \quad n = 1, 2, \ldots. \qquad (174)
\]

Example 13 Find the resolvent of the Fredholm IE (137) of the 2nd kind,
\[
\varphi(x) - \lambda \int_0^1 (x+y)\varphi(y)\,dy = f(x). \qquad (175)
\]
We will use the recurrent relationships (173) and (174). Here $K(x,y) = x + y$ is a degenerate kernel. We have $c_0 = 1$, $B_i = B_i(x,y)$ $(i = 0, 1, 2)$, and
\[
B_0 = K(x,y) = x + y,
\]
\[
c_1 = \int_0^1 B_0(y,y)\,dy = \int_0^1 2y\,dy = 1,
\]
\[
B_1 = c_1 K(x,y) - \int_0^1 K(x,t)B_0(t,y)\,dt = x + y - \int_0^1 (x+t)(t+y)\,dt = \frac{1}{2}(x+y) - xy - \frac{1}{3},
\]
\[
c_2 = \int_0^1 B_1(y,y)\,dy = \int_0^1 \left( \frac{1}{2}(y+y) - y^2 - \frac{1}{3} \right) dy = -\frac{1}{6},
\]
\[
B_2 = c_2 K(x,y) - 2\int_0^1 K(x,t)B_1(t,y)\,dt = -\frac{1}{6}(x+y) - 2\int_0^1 (x+t)\left( \frac{1}{2}(t+y) - ty - \frac{1}{3} \right) dt = 0.
\]


Since $B_2(x,y) \equiv 0$, all subsequent $c_3, c_4, \ldots$ and $B_3, B_4, \ldots$ vanish (see (173) and (174)), and we obtain
\[
D(x,y,\lambda) = \sum_{n=0}^1 \frac{(-1)^n}{n!}\, B_n(x,y)\lambda^n = B_0(x,y) - \lambda B_1(x,y)
= x + y - \lambda\left( \frac{1}{2}(x+y) - xy - \frac{1}{3} \right) \qquad (176)
\]
and
\[
D(\lambda) = \sum_{n=0}^2 \frac{(-1)^n}{n!}\, c_n\lambda^n = c_0 - c_1\lambda + \frac{1}{2}c_2\lambda^2
= 1 - \lambda - \frac{1}{12}\lambda^2 = -\frac{1}{12}(\lambda^2 + 12\lambda - 12),
\]
so that
\[
\Gamma(x,y,\lambda) = \frac{D(x,y,\lambda)}{D(\lambda)}
= -12\,\frac{x + y - \lambda\left( \frac{1}{2}(x+y) - xy - \frac{1}{3} \right)}{\lambda^2 + 12\lambda - 12}. \qquad (177)
\]
The solution to (175) is given by formula (99),
\[
\varphi(x) = f(x) + \lambda \int_0^1 \Gamma(x,y,\lambda)f(y)\,dy,
\]
so that we obtain, using (176) and (177),
\[
\varphi(x) = f(x) + \frac{\lambda}{\lambda^2 + 12\lambda - 12}\int_0^1 [6(\lambda - 2)(x + y) - 12\lambda xy - 4\lambda]f(y)\,dy, \qquad (178)
\]
which coincides with formula (150).

The numbers $\lambda = \lambda^*_k$ $(k = 1, 2)$ determined in (144), $\lambda^*_1 = -6 + 4\sqrt{3}$ and $\lambda^*_2 = -6 - 4\sqrt{3}$, are (simple, i.e., of multiplicity one) poles of the resolvent because they are (simple) zeros of the Fredholm determinant $D(\lambda)$. When $\lambda = \lambda^*_k$ $(k = 1, 2)$, then IE (175) is not solvable.

10 Hilbert spaces. Self-adjoint operators. Linear operator equations with completely continuous operators in Hilbert spaces

10.1 Hilbert space. Selfadjoint operators

A real linear space $R$ with the inner product satisfying
\[
(x,y) = (y,x) \quad \forall x, y \in R,
\]
\[
(x_1 + x_2, y) = (x_1, y) + (x_2, y) \quad \forall x_1, x_2, y \in R,
\]
\[
(\lambda x, y) = \lambda(x,y) \quad \forall x, y \in R,
\]
\[
(x,x) \ge 0 \quad \forall x \in R, \qquad (x,x) = 0 \text{ if and only if } x = 0,
\]


is called the Euclidian space.In a complex Euclidian space the inner product satisfies(x, y) = (y, x),(x 1 + x 2 , y) = (x 1 , y) + (x 2 , y),(λx, y) = λ(x, y),Note that in a complex linear space(x, x) ≥ 0, (x, x) = 0 if and only if x = 0.(x, λy) = ¯λ(x, y).The norm in the Euclidian space is introduced by||x| = √ x, x.The Hilbert space H is a complete (in the metric ρ(x, y) = ||x − y||) infinite-dimensionalEuclidian space.In the Hilbert space H, the operator A ∗ adjoint to an operator A is defined by(Ax, y) = (x, A ∗ y), ∀x, y ∈ H.The selfadjoint operator A = A ∗ is defined from(Ax, y) = (x, Ay), ∀x, y ∈ H.10.2 Completely continuous integral operators in Hilbert spaceConsider an IE of the second kindφ(x) = f(x) +∫ baK(x, y)φ(y)dy. (179)Assume that K(x, y) is a Hilbert–Schmidt kernel, i.e., a square-integrable function in the squareΠ = {(x, y) : a ≤ x ≤ b, a ≤ y ≤ b}, so thatand f(x) ∈ L 2 [a, b], i.e.,∫ b ∫ baa∫ ba|K(x, y)| 2 dxdy ≤ ∞, (180)|f(x)| 2 dx ≤ ∞.Define a linear Fredholm (integral) operator corresponding to IE (179)Aφ(x) =∫ baK(x, y)φ(y)dy. (181)If K(x, y) is a Hilbert–Schmidt kernel, then operator (181) will be called a Hilbert–Schmidtoperator.Rewrite IE (179) as a linear operator equationφ = Aφ(x) + f, f, φ ∈ L 2 [a, b]. (182)35


Theorem 18 Equality (182) and condition (180) define a completely continuous linear operatorin the space L 2 [a, b]. The norm of this operator is estimated as√ ∫ b ∫ b||A|| ≤ |K(x, y)| 2 dxdy. (183)a aProof. Note first of all that we have already mentioned (see (75)) that if condition (180) holds,then there exists a constant C 1 such that∫ ba|K(x, y)| 2 dy ≤ C 1 almost everywhere in [a, b], (184)i.e., K(x, y) ∈ L 2 [a, b] as a function of y almost for all x ∈ [a, b]. Therefore the functionψ(x) =∫ baK(x, y)φ(y)dy is defined almost everywhere in [a, b] (185)We will now show that ψ(x) ∈ L 2 [a, b]. Applying the Schwartz inequality we obtain|ψ(x)| 2 =∫ b2 ∫ bK(x, y)φ(y)dy≤∣ a∣ a∫ b= ||φ|| 2 |K(x, y)| 2 dy.a∫ b|K(x, y)| 2 dy |φ(y)| 2 dy = (186)aIntegrating with respect to x and replacing an iterated integral of |K(x, y)| 2 by a double integral,we obtain∫ b∫ b ∫ b||Aφ|| 2 = |ψ(y)| 2 dy ≤ ||φ|| 2 |K(x, y)| 2 dxdy, (187)which proves (1) that ψ(x) is a square-integrable function, i.e.,a∫ baaa|ψ(x)| 2 dx ≤ ∞,and estimates the norm of the operator A : L 2 [a, b] → L 2 [a, b].Let us show that the operator A : L 2 [a, b] → L 2 [a, b] is completely continuous. To this end,we will use Theorem 8 and construct a sequence {A n } of completely continuous operators on thespace L 2 (Π) which converges in (operator) norm to the operator A. Let {ψ n } be an orthonormalsystem in L 2 [a, b]. Then {ψ m (x)ψ n (y)} form an orthonormal system in L 2 (Π). Consequently,By settingK(x, y) =K N (x, y) =∞∑m,n=1N∑m,n=1a mn ψ m (x)ψ n (y). (188)a mn ψ m (x)ψ n (y). (189)define an operator A N which is finite-dimensional (because it has a degenerate kernel and mapstherefore L 2 [a, b] into a finite-dimensional space generated by a finite system {ψ m } N m=1) andtherefore completely continuous.36


Now note that K N (x, y) in (189) is a partial sum of the Fourier series for K(x, y), so that∫ b ∫ baa|K(x, y) − K N (x, y)| 2 dxdy → 0, N → ∞. (190)Applying estimate (183) to the operator A − A N and using (190) we obtainThis completes the proof of the theorem.10.3 Selfadjoint operators in Hilbert space||A − A N || → 0, N → ∞. (191)Consider first some properties of a selfadjoint operator A (A = A ∗ ) acting in the Hilbert spaceH.Theorem 19 All eigenvalues of a selfadjoint operator A in H are real.Proof. Indeed, let λ be an eigenvalue of a selfadjoint operator A, i.e., Ax = λx, x ∈ H, x ≠ 0.Then we haveλ(x, x) = (λx, x) = (Ax, x) = (x, Ax) = (x, λx) = ¯λ(x, x), (192)which yields λ = ¯λ, i.e., λ is a real number.Theorem 20 Eigenvectors of a selfadjoint operator A corresponding to different eigenvaluesare orthogonal.Proof. Indeed, let λ and µ be two different eigenvalues of a selfadjoint operator A, i.e.,Ax = λx, x ∈ H, x ≠ 0; Ay = µy, y ∈ H, y ≠ 0.Then we havewhich givesi.e., (x, y) = 0.λ(x, y) = (λx, y) = (Ax, y) = (x, Ay) = (x, µy) = µ(x, y), (193)(λ − µ)(x, y) = 0, (194)Below we will prove all the Fredholm theorems for integral operators in the Hilbert space.10.4 The Fredholm theory in Hilbert spaceWe take the integral operator (181), assume that the kernel satisfies the Hilbert–Schmidt condition∫ b ∫ ba aand write IE (179)φ(x) =|K(x, y)| 2 dxdy ≤ ∞, (195)∫ baK(x, y)φ(y)dy + f(x) (196)37


in the operator form (182),φ = Aφ + f, (197)orThe corresponding homogeneous IEand the adjoint equationsandT φ = f, T = I − A. (198)T φ 0 = 0, (199)T ∗ ψ = g, T ∗ = I − A ∗ , (200)T ∗ ψ 0 = 0. (201)We will consider equations (198)–(201) in the Hilbert space H = L 2 [a, b]. Note that (198)–(201) may be considered as operator equations in H with an arbitrary completely continuousoperator A.The four Fredholm theorems that are formulated below establish relations between thesolutions to these four equations.Theorem 21 The inhomogeneous equation T φ = f is solvable if and only if f is orthogonal toevery solution of the adjoint homogeneous equation T ∗ ψ 0 = 0 .Theorem 22 (the Fredholm alternative). One of the following situations is possible:(i) the inhomogeneous equation T φ = f has a unique solution for every f ∈ H(ii) the homogeneous equation T φ 0 = 0 has a nonzero solution .Theorem 23 The homogeneous equations T φ 0 = 0 and T ∗ ψ 0 = 0 have the same (finite)number of linearly independent solutions.To prove the Fredholm theorems, we will use the definiton of a subspace in the Hilbert spaceH:M is a linear subspace of H if M closed and f, g ∈ M yields αf + βg ∈ M where α and β arearbitrary numbers,and the following two properties of the Hilbert space H:(i) if h is an arbitrary element in H, then the set h orth of all elements f ∈ H orthogonal to h,h orth = {f : f ∈ H, (f, h) = 0} form a linear subspace of H which is called the orthogonalcomplement,and(ii) if M is a (closed) linear subspace of H, then arbitrary element f ∈ H is uniquely representedas f = h + h ′ , where h ∈ M, and h ′ ∈ M orth and M orth is the orthogonal complement to M.Lemma 1 The manifold Im T = {y ∈ H : y = T x, x ∈ H} is closed.38


Proof. Assume that y n ∈ Im T and y n → y. To prove the lemma, one must show that y ∈ Im T ,i.e., y = T x. We will now prove the latter statement.According to the definition of Im T , there exist vectors x n ∈ H such thaty n = T x n = x n − Ax n . (202)We will consider the manifold Ker T = {y ∈ H : T y = 0} which is a closed linear subspaceof H. One may assume that vectors x n are orthogonal to Ker T , i.e., x n ∈ Ker T orth ; otherwise,one can writex n = h n + h ′ n, h n ∈ Ker T,and replace x n by h n = x n − h ′ n with h ′ n ∈ Ker T orth . We may assume also that the set ofnorms ||x n || is bounded. Indeed, if we assume the contrary, then there will be an unboundedsubsequence {x nk } with ||x nk || → ∞. Dividing by ||x nk || we obtain from (202)x nk||x nk || − A x nk||x nk || → 0.Note that A is a completely continuous operator; therefore we may assume that{A x }nk||x nk ||is a convergent subsequence, which converges, e.g., to a vector z ∈ H. It is clear that ||z|| = 1and T z = 0, i.e., z ∈ Ker T . We assume however that vectors x n are orthogonal to Ker T ;therefore vector z must be also orthogonal to Ker T . This contradiction means that sequence||x n || is bounded. Therefore, sequence {Ax n } contains a convergent subsequence; consequently,sequence {x n } will also contain a convergent subsequence due to (202). Denoting by x the limitof {x n }, we see that, again y = T x due to (202). The lemma is proved.Lemma 2 The space H is the direct orthogonal sum of closed subspaces Ker T ∗ and Im T ∗ ,and alsoKer T + Im T ∗ = H, (203)Ker T ∗ + Im T = H. (204)Proof. The closed subspaces Ker T and Im T ∗ are orthogonal because if h ∈ Ker T then (h, T ∗ x) =(T h, x) = (0, x) = 0 for all x ∈ H. It is left to prove there is no nonzero vector orthogonal toKer T and Im T ∗ . Indeed, if a vector z is orthogonal to Im T ∗ , then (T z, x) = (z, T ∗ x) = 0 forall x ∈ H, i.e., z ∈ Ker T . The lemma is proved.Lemma 2 yields the first Fredholm theorem 21. Indeed, f is orthogonal to Ker T ∗ if andonly if f ∈ Im T ∗ , i.e., there exists a vector φ such that T φ = f.Next, set H k = Im T k for every k, so that, in particular, H 1 = Im T . It is clear that. . . ⊂ H k ⊂ . . . ⊂ H 2 ⊂ H 1 ⊂ H, (205)all the subsets in the chain are closed, and T (H k ) = H k+1 .39


Lemma 3 There exists a number j such that H_{k+1} = H_k for all k ≥ j.

Proof. If such a j does not exist, then all H_k are different, and one can construct an orthonormal sequence {x_k} such that x_k ∈ H_k and x_k is orthogonal to H_{k+1}. Let l > k. Then

Ax_l − Ax_k = −x_k + (x_l + T x_k − T x_l).

Consequently, ||Ax_l − Ax_k|| ≥ 1 because x_l + T x_k − T x_l ∈ H_{k+1}. Therefore, the sequence {Ax_k} does not contain a convergent subsequence, which contradicts the fact that A is a completely continuous operator. The lemma is proved.

Lemma 4 If Ker T = {0} then Im T = H.

Proof. If Ker T = {0} then T is a one-to-one operator. If in this case Im T ≠ H, then the chain (205) consists of different subspaces, which contradicts Lemma 3. Therefore Im T = H.

In the same manner, we prove that Im T* = H if Ker T* = {0}.

Lemma 5 If Im T = H then Ker T = {0}.

Proof. Since Im T = H, we have Ker T* = {0} according to Lemma 2. Then Im T* = H according to Lemma 4 and Ker T = {0} by Lemma 2.

The totality of statements of Lemmas 4 and 5 constitutes the proof of the Fredholm alternative (Theorem 22).

Now let us prove the third Fredholm theorem (Theorem 23).

To this end, assume first that Ker T is an infinite-dimensional space: dim Ker T = ∞. Then this space contains an infinite orthonormal sequence {x_n} such that Ax_n = x_n and ||Ax_l − Ax_k|| = √2 if k ≠ l. Therefore, the sequence {Ax_n} does not contain a convergent subsequence, which contradicts the fact that A is a completely continuous operator.

Now set µ = dim Ker T and ν = dim Ker T* and assume that µ < ν. Let φ_1, ..., φ_µ be an orthonormal basis in Ker T and ψ_1, ..., ψ_ν be an orthonormal basis in Ker T*. Set

Sx = T x + Σ_{j=1}^{µ} (x, φ_j)ψ_j.

Operator S is a sum of T and a finite-dimensional operator; therefore all the results proved above for T remain valid for S.

Let us show that the homogeneous equation Sx = 0 has only a trivial solution. Indeed, assume that

T x + Σ_{j=1}^{µ} (x, φ_j)ψ_j = 0. (206)

According to Lemma 2, the vectors ψ_j are orthogonal to all vectors of the form T x; therefore (206) yields

T x = 0

and

(x, φ_j) = 0, 1 ≤ j ≤ µ.

We see that, on the one hand, vector x must be a linear combination of the vectors φ_j (since T x = 0 means x ∈ Ker T), and on the other hand, vector x must be orthogonal to these vectors. Therefore, x = 0 and the equation Sx = 0 has only a trivial solution. Thus, according to the second Fredholm theorem (Theorem 22), there exists a vector y such that

T y + Σ_{j=1}^{µ} (y, φ_j)ψ_j = ψ_{µ+1}.

Calculating the inner product of both sides of this equality with ψ_{µ+1}, we obtain

(T y, ψ_{µ+1}) + Σ_{j=1}^{µ} (y, φ_j)(ψ_j, ψ_{µ+1}) = (ψ_{µ+1}, ψ_{µ+1}).

Here (T y, ψ_{µ+1}) = 0 because T y ∈ Im T and Im T is orthogonal to Ker T*. Thus we have

0 + Σ_{j=1}^{µ} (y, φ_j) · 0 = 1.

This contradiction is obtained because we assumed µ < ν. Therefore µ ≥ ν. Replacing now T by T*, we obtain in the same manner that µ ≤ ν. Therefore µ = ν. The third Fredholm theorem is proved.

To sum up, the Fredholm theorems show that the number λ = 1 is either a regular point or an eigenvalue of finite multiplicity of the operator A in (197).

All the results obtained above for the equation (197) φ = Aφ + f (i.e., for the operator A − I) remain valid for the equation λφ = Aφ + f (for the operator A − λI with λ ≠ 0). It follows that in the case of a completely continuous operator, an arbitrary nonzero number is either a regular point or an eigenvalue of finite multiplicity. Thus a completely continuous operator has only a point spectrum. The point 0 is an exception and always belongs to the spectrum of a completely continuous operator in an infinite-dimensional (Hilbert) space, but it may not be an eigenvalue.

10.5 The Hilbert–Schmidt theorem

Let us formulate the important Hilbert–Schmidt theorem.

Theorem 24 A completely continuous selfadjoint operator A in the Hilbert space H has an orthonormal system of eigenvectors {ψ_k} corresponding to eigenvalues λ_k such that every element ξ ∈ H is represented as

ξ = Σ_k c_k ψ_k + ξ′, (207)

where the vector ξ′ ∈ Ker A, i.e., Aξ′ = 0. The representation (207) is unique, and

Aξ = Σ_k λ_k c_k ψ_k, (208)

and

λ_k → 0, k → ∞, (209)

if {ψ_k} is an infinite system.
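As a numerical illustration of Theorem 24 (a sketch added here, not a substitute for the proof), one can discretize a symmetric Hilbert–Schmidt kernel by the midpoint rule and inspect the spectrum of the resulting symmetric matrix; the kernel K(x, y) = min(x, y) on [0, 1] is chosen only because its eigenvalues are known in closed form:

import numpy as np

# Midpoint-rule discretization of (A phi)(x) = int_0^1 K(x, y) phi(y) dy
# with the symmetric Hilbert-Schmidt kernel K(x, y) = min(x, y).
n = 400
h = 1.0 / n
x = (np.arange(n) + 0.5) * h
A = np.minimum.outer(x, x) * h          # equal weights h keep the matrix symmetric

lam = np.linalg.eigvalsh(A)[::-1]       # real eigenvalues, sorted in decreasing order

# For this kernel the exact eigenvalues are 1 / ((k - 1/2) pi)^2, k = 1, 2, ...,
# so they are real and accumulate only at 0, as the theorem predicts.
k = np.arange(1, 6)
print(lam[:5])
print(1.0 / ((k - 0.5) * np.pi) ** 2)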


11 IEs with symmetric kernels. Hilbert–Schmidt theoryConsider first a general a Hilbert–Schmidt operator (181).Theorem 25 Let A be a Hilbert–Schmidt operator with a (Hilbert–Schmidt) kernel K(x, y).Then the operator A ∗ adjoint to A is an operator with the adjoint kernel K ∗ (x, y) = K(y, x).Proof. Using the Fubini theorem and changing the order of integration, we obtain∫ { b ∫ }b(Af, g) = K(x, y)f(y)dy g(x)dx ====which proves the theorem.Consider a symmetric IEa a∫ b ∫ ba a∫ { b ∫ ba a∫ { b ∫ baf(y)K(x, y)f(y)g(x)dydx =}K(x, y)g(x)dxaK(x, y)g(x)dxf(y)dy =}dy,with a symmetric kernel∫ bφ(x) − λ K(x, y)φ(y)dy = f(x) (210)aK(x, y) = K(y, x) (211)for complex-valued orK(x, y) = K(y, x) (212)for real-valued K(x, y).According to Theorem 25, an integral operator (181) is selfadjoint in the space L 2 [a, b], i.e.,A = A ∗ , if and only if it has a symmetric kernel.11.1 Selfadjoint integral operatorsLet us formulate the properties (19) and (20) of eigenvalues and eigenfunctions (eigenvectors)for a selfadjoint integral operator acting in the Hilbert space H.Theorem 26 All eigenvalues of a integral operator with a symmetric kernel are real.Proof. Let λ 0 and φ 0 be an eigenvalue and eigenfunction of a integral operator with a symmetrickernel, i.e.,∫ bφ 0 (x) − λ 0 K(x, y)φ 0 (y)dy = 0. (213)a42


Multiply equality (213) by ¯φ 0 and integrate with respect to x between a and b to obtainwhich yieldsRewrite this equality as||φ 0 || 2 − λ 0 (Kφ 0 , φ 0 ) = 0, (214)λ 0 = ||φ 0 | 2(Kφ 0 , φ 0 ) . (215)λ 0 = ||φ 0|| 2µ , µ = (Kφ 0, φ 0 ), (216)and prove that µ is a real number. For a symmetric kernel, we haveAccording to the definition of the inner productThusµ = (Kφ 0 , φ 0 ) = (φ 0 , Kφ 0 ). (217)(φ 0 , Kφ 0 ) = (Kφ 0 , φ 0 ).µ = (Kφ 0 , φ 0 ) = µ = (Kφ 0 , φ 0 ),so that µ = µ, i.e., µ is a real number and, consequently, λ 0 in (215) is a real number.Theorem 27 Eigenfunctions of an integral operator with a symmetric kernel corresponding todifferent eigenvalues are orthogonal.Proof. Let λ and µ and φ λ and φ µ be two different eigenvalues and the corresponding eigenfunctionsof an integral operator with a symmetric kernel, i.e.,Then we have∫ bφ λ (x) = λφ µ (x) = λa∫ baK(x, y)φ λ (y)dy = 0, (218)K(x, y)φ µ (y)dy = 0. (219)λ(φ λ , φ µ ) = (λφ λ , φ µ ) = (Aφ λ , φ µ ) = (φ λ , Aφ µ ) = (φ λ , µφ µ ) = µ(φ λ , φ µ ), (220)which yieldsi.e., (φ λ , φ µ ) = 0.(λ − µ)(φ λ , φ µ ) = 0, (221)One can also reformulate the Hilbert–Schmidt theorem 24 for integral operators.43


Theorem 28 Let λ 1 , λ 2 , . . . , and φ 1 , φ 2 , . . . , be eigenvalues and eigenfunctions (eigenvectors)of an integral operator with a symmetric kernel and let h(x) ∈ L 2 [a, b], i.e.,∫ ba|h(x)| 2 dx ≤ ∞.If K(x, y) is a Hilbert–Schmidt kernel, i.e., a square-integrable function in the square Π ={(x, y) : a ≤ x ≤ b, a ≤ y ≤ b}, so thatthen the function∫ b ∫ baaf(x) = Ah =|K(x, y)| 2 dxdy < ∞, (222)∫ baK(x, y)h(y)dy (223)is decomposed into an absolutely and uniformly convergent Fourier series in the orthonornalsystem φ 1 , φ 2 , . . . ,∞∑f(x) = f n φ n (x), f n = (f, φ n ).n=1The Fourier coefficients f n of the function f(x) are coupled with the Fourier coefficients h n ofthe function h(x) by the relationshipsso thatf(x) =∞∑n=1Setting h(x) = K(x, y) in (224), we obtainK 2 (x, y) =f n = h nλ n, h n = (h, φ n ),h nλ nφ n (x) (f n = (f, φ n ), h n = (h, φ n )). (224)∞∑n=1ω n (y)λ nφ n (x), ω n (y) = (K(x, y), φ n (x)), (225)and ω n (y) are the Fourier coefficients of the kernel K(x, y). Let us calculate ω n (y). We have,using the formula for the Fourier coefficients,ω n (y) =or, because the kernel is symmetric,ω n (y) =Note that φ n (y) satisfies the equation∫ ba∫ baK(x, y)φ n (x)dx, (226)K(x, y)φ n (x)dx, (227)∫ bφ n (x) = λ n K(x, y)φ n (y)dy. (228)a44


Replacing x by y and vice versa, we obtain

φ_n(y) = (1/λ_n) ∫_a^b K(y, x)φ_n(x)dx, (229)

so that

ω_n(y) = (1/λ_n) φ_n(y).

Now we have

K_2(x, y) = Σ_{n=1}^{∞} φ_n(x)φ_n(y)/λ_n^2. (230)

In the same manner, we obtain

K_3(x, y) = Σ_{n=1}^{∞} φ_n(x)φ_n(y)/λ_n^3,

and, generally,

K_m(x, y) = Σ_{n=1}^{∞} φ_n(x)φ_n(y)/λ_n^m, (231)

which is the bilinear series for the kernel K_m(x, y).

For the kernel K(x, y), the bilinear series is

K(x, y) = Σ_{n=1}^{∞} φ_n(x)φ_n(y)/λ_n. (232)

This series may diverge in the sense of the C-norm (i.e., uniformly); however, it always converges in the L_2-norm.

11.2 Hilbert–Schmidt theorem for integral operators

Consider a Hilbert–Schmidt integral operator A with a symmetric kernel K(x, y). In this case, the following conditions are assumed to be satisfied:

(i) Aφ(x) = ∫_a^b K(x, y)φ(y)dy,

(ii) ∫_a^b ∫_a^b |K(x, y)|^2 dxdy < ∞, (233)

(iii) K(x, y) = K(y, x).

According to Theorem 18, A is a completely continuous selfadjoint operator in the space L_2[a, b] and we can apply the Hilbert–Schmidt theorem 24 to prove the following statement.


Theorem 29 If 1 is not an eigenvalue of the operator A then the IE (210), written in theoperator form asφ(x) = Aφ + f(x), (234)has one and only one solution for every f. If 1 is an eigenvalue of the operator A, then IE(210) (or (234)) is solvable if and only if f is orthogonal to all eigenfunctions of the operator Acorresponding to the eigenvalue λ = 1, and in this case IE (210) has infinitely many solutions.Proof. According to the Hilbert–Schmidt theorem, a completely continuous selfadjoint operatorA in the Hilbert space H = L 2 [a, b] has an orthonormal system of eigenvectors {ψ k }corresponding to eigenvalues λ k such that every element ξ ∈ H is represented asξ = ∑ kc k ψ k + ξ ′ , (235)where ξ ′ ∈ Ker A, i.e., Aξ ′ = 0. Setf = ∑ kb k ψ k + f ′ , Af ′ = 0, (236)and look for a solution to (234) in the formφ = ∑ kx k ψ k + φ ′ , Aφ ′ = 0. (237)Substituting (236) and (237) into (234) we obtain∑x k ψ k + φ ′ = ∑ x k λ k ψ k + ∑kkkb k ψ k + f ′ , (238)which holds if and only iff ′ = φ ′ ,x k (1 − λ k ) = b k (k = 1, 2, . . .)i.e., whenf ′ = φ ′ ,x k =b k,1 − λ kλ k ≠ 1b k = 0, λ k = 1. (239)The latter gives the necessary and sufficient condition for the solvability of equation (234). Notethat the coefficients x n that correspond to those n for which λ n = 1 are arbitrary numbers.The theorem is proved.46
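The solution formula x_k = b_k/(1 − λ_k) from the proof above can be tried out numerically on a discretized symmetric kernel. The following sketch (with the arbitrarily chosen kernel e^{−|x−y|} on [0, 1], for which 1 is not an eigenvalue) expands f in the eigenvectors of the matrix and compares the result with a direct solve:

import numpy as np

# Solve phi = A phi + f by the eigenvector expansion of Theorem 29.
n = 300
h = 1.0 / n
x = (np.arange(n) + 0.5) * h
A = np.exp(-np.abs(np.subtract.outer(x, x))) * h   # symmetric; all eigenvalues < 1
f = np.sin(np.pi * x)

lam, V = np.linalg.eigh(A)
b = V.T @ f                                        # coefficients b_k of f in the eigenbasis
phi = V @ (b / (1.0 - lam))                        # x_k = b_k / (1 - lambda_k)

# cross-check against a direct solve of (I - A) phi = f
print(np.max(np.abs(phi - np.linalg.solve(np.eye(n) - A, f))))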


12 Harmonic functions and Green’s formulasDenote by D ∈ R 2 a two-dimensional domain bounded by the closed smooth contour Γ.A twice continuously differentiable real-valued function u defined on a domain D is calledharmonic if it satisfies Laplace’s equationwhere∆ u = 0 in D, (240)∆u = ∂2 u∂x 2 + ∂2 u∂y 2 (241)is called Laplace’s operator (Laplacian), the function u = u(x), and x = (x, y) ∈ R 2 . We willalso use the notation y = (x 0 , y 0 ).The functionΦ(x, y) = Φ(x − y) = 12π ln 1(242)|x − y|is called the fundamental solution of Laplace’s equation. For a fixed y ∈ R 2 , y ≠ x, the functionΦ(x, y) is harmonic, i.e., satisfies Laplace’s equationThe proof follows by straightforward differentiation.12.1 Green’s formulas∂ 2 Φ∂x + ∂2 Φ= 0 in D. (243)2 ∂y2 Let us formulate Green’s theorems which leads to the second and third Green’s formulas.Let D ∈ R 2 be a (two-dimensional) domain bounded by the closed smooth contour Γand let n y denote the unit normal vector to the boundary Γ directed into the exterior of Γand corresponding to a point y ∈ Γ. Then for every function u which is once continuouslydifferentiable in the closed domain ¯D = D + Γ, u ∈ C 1 ( ¯D), and every function v which is twicecontinuously differentiable in ¯D, v ∈ C 2 ( ¯D), Green’s first theorem (Green’s first formula) isvalid∫ ∫∫(u∆v + grad u · grad v)dx = u ∂v dl y , (244)∂n yDwhere · denotes the inner product of two vector-functions. For u ∈ C 2 ( ¯D) and v ∈ C 2 ( ¯D),Green’s second theorem (Green’s second formula) is validΓ∫ ∫∫ ((u∆v − v∆u)dx = u ∂v − v ∂u )dl y , (245)∂n y ∂n yDΓProof. Apply Gauss theorem∫ ∫∫div Adx =n y · Adl y (246)DΓ47


to the vector function (vector field) A = (A 1 .A 2 ) ∈ C 1 ( ¯D) (with the components A 1 . A 2 beingonce continuously differentiable in the closed domain ¯D = D + Γ, A i ∈ C 1 ( ¯D), i = 1, 2) definedby[A = u · grad v = u ∂v ]∂x , u∂v , (247)∂ythe components areWe haveA 1 = u ∂v∂x ,A 2 = u ∂v∂y . (248)div A = div (u grad v) = ∂ (u ∂v )+ ∂ (u ∂v )= (249)∂x ∂x ∂y ∂y∂u ∂v∂x ∂x + vu∂2 ∂x + ∂u ∂v2 ∂y ∂y + vu∂2 = grad u · grad v + u∆v. (250)∂y2 On the other hand, according to the definition of the normal derivative,so that, finally,∫ ∫Dn y · A = n y · (ugrad v) = u(n y · grad v) = u ∂v∂n y, (251)∫ ∫div Adx = (u∆v + grad u · grad v)dx,D∫Γ∫n y · Ad l y =Γu ∂v∂n ydl y , (252)which proves Green’s first formula. By interchanging u and v and subtracting we obtain thedesired Green’s second formula (245).Let a twice continuously differentiable function u ∈ C 2 ( ¯D) be harmonic in the domain D.Then Green’s third theorem (Green’s third formula) is valid∫ (u(x) =ΓΦ(x, y) ∂u∂n y− u(y))∂Φ(x, y)dl y , x ∈ D. (253)∂n yProof. For x ∈ D we choose a circleΩ(x, r) = {y : |x − y| =√(x − x 0 ) 2 + (y − y 0 ) 2 = r}of radius r (the boundary of a vicinity of a point x ∈ D) such that Ω(x, r) ∈ D and direct theunit normal n to Ω(x, r) into the exterior of Ω(x, r). Then we apply Green’s second formula(245) to the harmonic function u and Φ(x, y) (which is also a harmonic function) in the domainy ∈ D : |x − y| > r to obtain∫ ∫0 = (u∆Φ(x, y) − Φ(x, y)∆u)dx = (254)∫Γ∪Ω(x,r)D\Ω(x,r)(u(y)∂Φ(x, y)∂n y48− Φ(x, y) ∂u(y) )dl y . (255)∂n y


Since on Ω(x, r) we havegrad y Φ(x, y) = n y2πr , (256)a straightforward calculation using the mean-value theorem and the fact thatfor a harmonic in D function v shows that∫ (∂Φ(x, y)lim u(y)r→0∂n ywhich yields (253).Ω(x,r)12.2 Properties of harmonic functions∫Γ∂v∂n ydl y = 0 (257)− Φ(x, y) ∂u(y) )dl y = u(x), (258)∂n yTheorem 30 Let a twice continuously differentiable function v be harmonic in a domain Dbounded by the closed smooth contour Γ. Then∫Γ∂v(y)∂n ydl y = 0. (259)Proof follows from the first Green’s formula applied to two harmonic functions v and u = 1.Theorem 31 (Mean Value Formula) Let a twice continuously differentiable function u beharmonic in a ball√B(x, r) = {y : |x − y| = (x − x 0 ) 2 + (y − y 0 ) 2 ≤ r}of radius r (a vicinity of a point x) with the boundary Ω(x, r) = {y : |x − y| = r} (a circle)and continuous in the closure ¯B(x, r). Thenu(x) = 1πr 2∫B(x,r)u(y)dy = 12πr∫Ω(x,r)u(y)dl y , (260)i.e., the value of u in the center of the ball is equal to the integral mean values over both theball and its boundary.Proof. For each 0 < ρ < r we apply (259) and Green’s third formula to obtainu(x) = 12πρ∫|x−y|=ρu(y)dl y , (261)whence the second mean value formula in (260) follows by passing to the limit ρ → r. Multiplying(261) by ρ and integrating with respect to ρ from 0 to r we obtain the first mean valueformula in (260).49
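The mean value formula (260) is easy to check numerically; in the following sketch the harmonic function u(x, y) = x^2 − y^2 + 3x, the point (0.3, −0.2) and the radius r = 0.5 are arbitrary choices:

import numpy as np

# Numerical check of the mean value formula (260).
def u(x, y):
    return x**2 - y**2 + 3.0 * x

x0, y0, r = 0.3, -0.2, 0.5
t = np.linspace(0.0, 2.0 * np.pi, 4000, endpoint=False)

# mean over the circle |x - y| = r
circle_mean = np.mean(u(x0 + r * np.cos(t), y0 + r * np.sin(t)))

# mean over the disc |x - y| <= r: (2 / r^2) * int_0^r (angular mean) * rho d rho
m = 800
rho = (np.arange(m) + 0.5) * r / m
ang_mean = np.array([np.mean(u(x0 + s * np.cos(t), y0 + s * np.sin(t))) for s in rho])
disc_mean = 2.0 / r**2 * np.sum(ang_mean * rho) * (r / m)

print(u(x0, y0), circle_mean, disc_mean)   # all three values coincide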


Theorem 32 (Maximum–Minimum Principle) A harmonic function on a domain cannotattain its maximum or its minimum unless it is constant.Corollary. Let D ∈ R 2 be a two-dimensional domain bounded by the closed smooth contour Γand let u be harmonic in D and continuous in ¯D. Then u attains both its maximum and itsminimum on the boundary.13 Boundary value problemsFormulate the interior Dirichlet problem: find a function u that is harmonic in a domain Dbounded by the closed smooth contour Γ, continuous in ¯D = D ∪ Γ and satisfies the Dirichletboundary condition:∆ u = 0 in D, (262)u| Γ= −f, (263)where f is a given continuous function.Formulate the interior Neumann problem: find a function u that is harmonic in a domain Dbounded by the closed smooth contour Γ, continuous in ¯D = D ∪ Γ and satisfies the Neumannboundary condition∂u= −g, (264)∂n∣ Γwhere g is a given continuous function.Theorem 33 The interior Dirichlet problem has at most one solution.Proof. Assume that the interior Dirichlet problem has two solutions, u 1 and u 2 . Their differenceu = u 1 − u 2 is a harmonic function that is continuous up to the boundary and satisifes thehomogeneous boundary condition u = 0 on the boundary Γ of D. Then from the maximum–minimum principle or its corollary we obtain u = 0 in D, which proves the theorem.Theorem 34 Two solutions of the interior Neumann problem can differ only by a constant.The exterior Neumann problem has at most one solution.Proof. We will prove the first statement of the theorem. Assume that the interior Neumannproblem has two solutions, u 1 and u 2 . Their difference u = u 1 − u 2 is a harmonic function thatis continuous up to the boundary and satisifes the homogeneous boundary condition ∂u∂n y= 0on the boundary Γ of D. Suppose that u is not constant in D. Then there exists a closed ballB contained in D such that ∫ ∫|grad u| 2 dx > 0.B50


From Green’s first formula applied to a pair of (harmonic) functions u, v = u and the interiorD h of a surface Γ h = {x − hn x : x ∈ Γ} parallel to the Ds boundary Γ with sufficiently smallh > 0,∫ ∫∫(u∆u + |grad u| 2 )dx = u ∂u dl y , (265)∂n yD h Γwe have first∫ ∫(because B ∈ D h ); next we obtainB∫ ∫D h∫ ∫|grad u| 2 dx ≤∫|grad u| 2 dx ≤Passing to the limit h → 0 we obtain a contradiction∫ ∫|grad u| 2 dx ≤ 0.Hence, the difference u = u 1 − u 2 must be constant in D.14 Potentials with logarithmic kernelsIn the theory of boundary value problems, the integrals∫∫u(x) = E(x, y)ξ(y)dl y , v(x) =CBΓD h|grad u| 2 dx, (266)u ∂u∂n ydl y = 0. (267)C∂∂n yE(x, y)η(y)dl y (268)are called the potentials. Here, x = (x, y), y = (x 0 , y 0 ) ∈ R 2 ; E(x, y) is the fundamental solutionof a second-order elliptic differential operator;∂∂n y=is the normal derivative at the point y of the closed piecewise smooth boundary C of a domainin R 2 ; and ξ(y) and η(y) are sufficiently smooth functions defined on C. In the case of Laplace’soperator ∆u,g(x, y) = g(x − y) = 12π ln 1|x − y| . (269)In the case of the Helmholtz operator H(k) = ∆ + k 2 , one can take E(x, y) in the form∂∂n yE(x, y) = E(x − y) = i 4 H(1) 0 (k|x − y|),where H (1)0 (z) = −2ig(z) + h(z) is the Hankel function of the first kind and zero order (oneof the so-called cylindrical functions) and 1 1g(x − y) =2 2π ln 1is the kernel of the twodimensionalsingle layer potential; the second derivative of h(z) has a logarithmic|x − y|singularity.51


14.1 Properties of potentialsStatement 1. Let D ∈ R 2 be a domain bounded by the closed smooth contour Γ. Then thekernel of the double-layer potentialV (x, y) =∂Φ(x, y)∂n y, Φ(x, y) = 12π ln 1|x − y| , (270)is a continuous function on Γ for x, y ∈ Γ.Proof. Performing differentiation we obtainV (x, y) = 1 cos θ x,y2π |x − y| , (271)where θ x,y is the angle between the normal vector n y at the integration point y = (x 0 , y 0 ) andvector x − y. Choose the origin O = (0, 0) of (Cartesian) coordinates at the point y on curveΓ so that the Ox axis goes along the tangent and the Oy axis, along the normal to Γ at thispoint. Then one can write the equation of curve Γ in a sufficiently small vicinity of the point yin the formy = y(x).The asumption concerning the smoothness of Γ means that y 0 (x 0 ) is a differentiable functionin a vicinity of the origin O = (0, 0), and one can write a segment of the Taylor seriesy = y(0) + xy ′ (0) + x22 y′′ (ηx) = 1 2 x2 y ′′ (ηx) (0 ≤ η < 1)because y(0) = y ′ (0) = 0. Denoting r = |x − y| and taking into account that x = (x, y) andthe origin of coordinates is placed at the point y = (0, 0), we obtainandr =√√√(x − 0) 2 + (y − 0) 2 = x 2 + y 2 = x 2 + 1 √4 x4 (y ′′ (ηx)) 2 = x 1 + 1 4 x2 (y ′′ (ηx)) 2 ;cos θ x,y = y r = 1 xy ′′ (ηx)2√1 + x 2 (1/4)(y ′′ (ηx)) ,2cos θ x,yrThe curvature K of a plane curve is given bywhich yields y ′′ (0) = K(y), and, finally,= y r = 1 y ′′ (ηx)2 2 1 + x 2 (1/4)(y ′′ (ηx)) . 2K =cos θ x,ylimr→0 ry ′′(1 + (y ′ ) 2 ) 3/2 ,= 1 2 K(y)52


which proves the continuity of the kernel (270).Statement 2 (Gauss formula) Let D ∈ R 2 be a domain bounded by the closed smoothcontour Γ. For the double-layer potential with a constant density∫v 0 (x) =Γ∂Φ(x, y)∂n ydl y , Φ(x, y) = 12π ln 1|x − y| , (272)where the (exterior) unit normal vector n of Γ is directed into the exterior domain R 2 \ ¯D, wehavev 0 (x) = −1, x ∈ D,Proof follows for x ∈ R 2 \ ¯D from the equalityv 0 (x) = − 1 , x ∈ Γ, (273)2v 0 (x) = 0, x ∈ R 2 \ ¯D.∫Γ∂v(y)∂n ydl y = 0 (274)applied to v(y) = Φ(x, y). For x ∈ D it follows from Green’s third formula∫ (u(x) =ΓΦ(x, y) ∂u∂n y− u(y))∂Φ(x, y)dl y , x ∈ D, (275)∂n yapplied to u(y) = 1 in D.Note also that if we setv 0 (x ′ ) = − 1 2 , x′ ∈ Γ,we can also write (273) asv 0 ±(x ′ ) = limh→±0 v(x + hn x ′) = v0 (x ′ ) ± 1 2 , x′ ∈ Γ. (276)Corollary. Let D ∈ R 2 be a domain bounded by the closed smooth contour Γ. Introducethe single-layer potential with a constant density∫u 0 (x) =For the normal derivative of this single-layer potentialΓΦ(x, y)dl y , Φ(x, y) = 12π ln 1|x − y| . (277)∂u 0 ∫(x)=∂n xΓ∂Φ(x, y)∂n xdl y , (278)53


where the (exterior) unit normal vector n x of Γ is directed into the exterior domain R 2 \ ¯D, wehave∂u 0 (x)∂n x= 1, x ∈ D,∂u 0 (x)∂n x= 1 ,2x ∈ Γ, (279)∂u 0 (x)∂n x= 0, x ∈ R 2 \ ¯D.Theorem 35 Let D ∈ R 2 be a domain bounded by the closed smooth contour Γ. The doublelayerpotential∫∂Φ(x, y)v(x) =ϕ(y)dl y , Φ(x, y) = 1∂n y 2π ln 1|x − y| , (280)Γwith a continuous density ϕ can be continuously extended from D to ¯D and from R 2 \ ¯D toR 2 \ D with the limiting values on Γor∫v ± (x ′ ) =Γ∂Φ(x ′ , y)∂n yϕ(y)dl y ± 1 2 ϕ(x′ ), x ′ ∈ Γ, (281)v ± (x ′ ) = v(x ′ ) ± 1 2 ϕ(x′ ), x ′ ∈ Γ, (282)wherev ± (x ′ ) = lim v(x + hn x ′). (283)h→±0Proof.1. Introduce the function∫I(x) = v(x) − v 0 (x) =Γ∂Φ(x, y)∂n y(ϕ(y) − ϕ 0 )dl y (284)where ϕ 0 = ϕ(x ′ ) and prove its continuity at the point x ′ ∈ Γ. For every ɛ > 0 and every η > 0there exists a vicinity C 1 ⊂ Γ of the point x ′ ∈ Γ such that|ϕ(x) − ϕ(x ′ )| < η, x ′ ∈ C 1 . (285)SetThen∫I = I 1 + I 2 = . . . +C 1|I 1 | ≤ ηB 1 ,∫Γ\C 1. . . .54


where B 1 is a constant taken according to∫Γ∣ ∂Φ(x, y) ∣∣∣∣ dl∣y ≤ B 1 ∀x ∈ D, (286)∂n yand condition (286) holds because kernel (270) is continuous on Γ.Taking η = ɛ/B 1 , we see that for every ɛ > 0 there exists a vicinity C 1 ⊂ Γ of the pointx ′ ∈ Γ (i.e., x ′ ∈ C 1 ) such that|I 1 (x)| < ɛ ∀x ∈ D. (287)Inequality (287) means that I(x) is continuous at the point x ′ ∈ Γ.2. Now, if we take v ± (x ′ ) for x ′ ∈ Γ and consider the limits v ± (x ′ ) = lim h→±0 v(x + hn y ) frominside and outside contour Γ using (276), we obtainwhich prove the theorem.v − (x ′ ) = I(x ′ ) + limh→−0 v0 (x ′ − hn x ′) = v(x ′ ) − 1 2 ϕ(x′ ), (288)v + (x ′ ) = I(x ′ ) + limh→+0 v0 (x ′ + hn x ′) = v(x ′ ) + 1 2 ϕ(x′ ), (289)Corollary. Let D ∈ R 2 be a domain bounded by the closed smooth contour Γ. Introducethe single-layer potential∫u(x) =ΓΦ(x, y)ϕ(y)dl y , Φ(x, y) = 12π ln 1|x − y| . (290)with a continuous density ϕ. The normal derivative of this single-layer potential∫∂u(x)=∂n xΓ∂Φ(x, y)∂n xϕ(y)dl y (291)can be continuously extended from D to ¯D and from R 2 \ ¯D to R 2 \ D with the limiting valueson Γ∂u(x ′ ∫) ∂Φ(x ′ , y)=ϕ(y)dl y ∓ 1 ∂n x ± ∂n x ′2 ϕ(x′ ), x ′ ∈ Γ, (292)orwhereΓ∂u(x ′ )∂n x ±= ∂u(x′ )∂n x∓ 1 2 ϕ(x′ ), x ′ ∈ Γ, (293)∂u(x ′ )∂n x ′= limh→±0 n x ′ · grad v(x′ + hn x ′). (294)55
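The constant-density values (273) can also be verified numerically; the sketch below (added for illustration) uses the unit circle, where the exterior normal and the arc-length element are explicit, and evaluates v^0 at one interior and one exterior point:

import numpy as np

# Gauss formula (273): the double-layer potential with density 1 over the unit
# circle equals -1 at interior points and 0 at exterior points (exterior normal).
n = 2000
t = (np.arange(n) + 0.5) * 2.0 * np.pi / n
y = np.stack([np.cos(t), np.sin(t)], axis=1)      # points on Gamma
ny = y.copy()                                     # exterior unit normal of the unit circle
dl = 2.0 * np.pi / n                              # arc-length element

def v0(x):
    d = x - y                                     # x - y for all quadrature points
    r2 = np.sum(d * d, axis=1)
    kernel = np.sum(d * ny, axis=1) / r2 / (2.0 * np.pi)   # d Phi(x, y) / d n_y
    return np.sum(kernel) * dl

print(v0(np.array([0.3, -0.4])))   # approximately -1 (interior point)
print(v0(np.array([1.7,  0.2])))   # approximately  0 (exterior point)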


14.2 Generalized potentialsLet S Π (Γ) ∈ R 2 be a domain bounded by the closed piecewise smooth contour Γ. We assumethat a rectilinear interval Γ 0 is a subset of Γ, so that Γ 0 = {x : y = 0, x ∈ [a, b]}.Let us say that functions U l (x) are the generalized single layer (SLP) (l = 1) or double layer(DLP) (l = 2) potentials ifwhere∫U l (x) = K l (x, t)ϕ l (t)dt,Γx = (x, y) ∈ S Π (Γ),K l (x, t) = g l (x, t) + F l (x, t) (l = 1, 2),g 1 (x, t) = g(x, y 0 ) = 1 π ln 1|x − y 0 | , g 2(x, t) = ∂∂y 0g(x, y 0 ) [y 0 = (t, 0)],F 1,2 are smooth functions, and we shall assume that for every closed domain S 0Π (Γ) ⊂ S Π (Γ),the following conditions holdi) F 1 (x, t) is once continuously differentiable with respect to thevariables of x and continuous in t;ii) F 2 (x, t) andare continuous.F 1 2 (x, t) = ∂ ∂y∫tqF 2 (x, s)ds, q ∈ R 1 ,We shall also assume that the densities of the generalized potentials ϕ 1 ∈ L (1)2 (Γ) andϕ 2 ∈ L (2)2 (Γ), where functional spaces L (1)2 and L (2)2 are defined in Section 3.15 Reduction of boundary value problems to integralequationsGreen’s formulas show that each harmonic function can be represented as a combination ofsingle- and double-layer potentials. For boundary value problems we will find a solution in theform of one of these potentials.Introduce integral operators K 0 and K 1 acting in the space C(Γ) of continuous functionsdefined on contour Γand∫K 0 (x) = 2Γ∫K 1 (x) = 2Γ∂Φ(x, y)∂n yϕ(y)dl y , x ∈ Γ (295)∂Φ(x, y)∂n xψ(y)dl y , x ∈ Γ. (296)The kernels of operators K 0 and K 1 are continuous on Γ As seen by interchanging the orderof integration, K 0 and K 1 are adjoint as integral operators in the space C(Γ) of continuousfunctions defined on curve Γ.56


Theorem 36 The operators I − K 0 and I − K 1 have trivial nullspacesN(I − K 0 ) = {0}, N(I − K 1 ) = {0}, (297)The nullspaces of operators I + K 0 and I + K 1 have dimension one andwithN(I + K 0 ) = span {1}, N(I + K 1 ) = span {ψ 0 } (298)∫Γψ 0 dl y ≠ 0. (299)Proof. Let φ be a solution to the homogeneous integral equation φ − K 0 φ = 0 and define adouble-layer potential∫∂Φ(x, y)v(x) =ϕ(y)dl y , x ∈ D (300)∂n yΓwith a continuous density φ. Then we have, for the limiting values on Γ,∫∂Φ(x, y)v ± (x) =ϕ(y)dl y ± 1 ϕ(x) (301)∂n y 2which yieldsΓ2v − (x) = K 0 ϕ(x) − ϕ = 0. (302)From the uniqueness of the interior Dirichlet problem (Theorem 33) it follows that v = 0 in thewhole domain D. Now we will apply the contunuity of the normal derivative of the double-layerpotential (300) over a smooth contour Γ,∂v(y)∂n y∣ ∣∣∣∣+− ∂v(y)∂n y∣ ∣∣∣∣−= 0, y ∈ Γ, (303)where the internal and extrenal limiting values with respect to Γ are defined as followsand∣∂v(y) ∣∣∣∣− ∂v(y)= lim∂n y y→Γ, y∈D ∂n y∣∂v(y) ∣∣∣∣+ ∂v(y)= lim∂n y y→Γ, y∈R 2 \ ¯D ∂n yNote that (303) can be written asFrom (304)–(306) it follows that= limh→0n y · grad v(y − hn y ) (304)= limh→0n y · grad v(y + hn y ). (305)lim n y · [grad v(y + hn y ) − grad v(y − hn y )] = 0. (306)h→0∂v(y)∂n y∣ ∣∣∣∣+− ∂v(y)∂n y∣ ∣∣∣∣−= 0, y ∈ Γ. (307)Using the uniqueness for the exterior Neumann problem and the fact that v(y) → 0, |y| → ∞,we find that v(y) = 0, y ∈ R 2 \ ¯D. Hence from (301) we deduce ϕ = v + − v − = 0 on Γ. ThusN(I − K 0 ) = {0} and, by the Fredholm alternative, N(I − K 1 ) = {0}.57


Theorem 37 Let D ∈ R^2 be a domain bounded by the closed smooth contour Γ. The double-layer potential

v(x) = ∫_Γ (∂Φ(x, y)/∂n_y) ϕ(y)dl_y, Φ(x, y) = (1/2π) ln (1/|x − y|), x ∈ D, (308)

with a continuous density ϕ is a solution of the interior Dirichlet problem provided that ϕ is a solution of the integral equation

ϕ(x) − 2 ∫_Γ (∂Φ(x, y)/∂n_y) ϕ(y)dl_y = −2f(x), x ∈ Γ. (309)

Proof follows from the theorem of Section 14.1 (Theorem 35).

Theorem 38 The interior Dirichlet problem has a unique solution.

Proof. The integral equation ϕ − K_0 ϕ = −2f of the interior Dirichlet problem has a unique solution by Theorem 36 because N(I − K_0) = {0}.

Theorem 39 Let D ∈ R^2 be a domain bounded by the closed smooth contour Γ. The double-layer potential

u(x) = ∫_Γ (∂Φ(x, y)/∂n_y) ϕ(y)dl_y, x ∈ R^2 \ D̄, (310)

with a continuous density ϕ is a solution of the exterior Dirichlet problem provided that ϕ is a solution of the integral equation

ϕ(x) + 2 ∫_Γ (∂Φ(x, y)/∂n_y) ϕ(y)dl_y = 2f(x), x ∈ Γ. (311)

Here we assume that the origin is contained in D.

Proof follows from the theorem of Section 14.1 (Theorem 35).

Theorem 40 The exterior Dirichlet problem has a unique solution.

Proof. The integral operator K̃ : C(Γ) → C(Γ) defined by the right-hand side of (310) is compact in the space C(Γ) of functions continuous on Γ because its kernel is a continuous function on Γ. Let ϕ be a solution to the homogeneous integral equation ϕ + K̃ϕ = 0 on Γ and define u by (310). Then 2u = ϕ + K̃ϕ = 0 on Γ, and by the uniqueness for the exterior Dirichlet problem it follows that u = 0 in R^2 \ D̄, and ∫_Γ ϕ dl_y = 0. Therefore ϕ + K_0 ϕ = 0, which means, according to Theorem 36 (because N(I + K_0) = span {1}), that ϕ = const on Γ. Now ∫_Γ ϕ dl_y = 0 implies that ϕ = 0 on Γ, and the existence of a unique solution to the integral equation (311) follows from the Fredholm property of its operator.
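A Nyström (trapezoidal) discretization gives a simple numerical illustration of Theorems 37 and 38; the sketch below is not taken from the text. The boundary is an ellipse, the data come from the harmonic function u = x^2 − y^2, and the system solved is ϕ − 2Wϕ = −2g with g the prescribed boundary values, writing W for the double-layer boundary operator (one half of K_0 in (295)); this sign convention follows the standard interior jump relation and should be compared with the conventions in (263) and (309). The diagonal entry of the continuous kernel is its curvature limit from Section 14.1, which with the exterior normal used here carries a minus sign, −κ/(4π).

import numpy as np

# Nystrom discretization of the interior Dirichlet integral equation
# phi - 2 W phi = -2 g on an ellipse; the double-layer kernel is
# (1/(2 pi)) (x - y) . n_y / |x - y|^2, which is continuous on Gamma x Gamma.
a, b, m = 1.3, 0.8, 256
t = 2.0 * np.pi * np.arange(m) / m
gam = np.stack([a * np.cos(t), b * np.sin(t)], axis=1)        # boundary points
speed = np.hypot(a * np.sin(t), b * np.cos(t))                # |gamma'(t)|
normal = np.stack([b * np.cos(t), a * np.sin(t)], axis=1) / speed[:, None]
curv = a * b / speed**3                                       # curvature of the ellipse
h = 2.0 * np.pi / m

def kernel(x, j):
    d = x - gam[j]
    return (d @ normal[j]) / (d @ d) / (2.0 * np.pi)

V = np.empty((m, m))
for i in range(m):
    for j in range(m):
        V[i, j] = -curv[i] / (4.0 * np.pi) if i == j else kernel(gam[i], j)

g = gam[:, 0]**2 - gam[:, 1]**2                               # boundary values of x^2 - y^2
phi = np.linalg.solve(np.eye(m) - 2.0 * V * (speed * h), -2.0 * g)

# evaluate the double-layer potential at an interior point; it reproduces u there
x = np.array([0.4, -0.2])
u = sum(kernel(x, j) * phi[j] * speed[j] * h for j in range(m))
print(u, x[0]**2 - x[1]**2)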


Theorem 41 Let D ∈ R 2 be a domain bounded by the closed smooth contour Γ. The singlelayerpotential∫u(x) = Φ(x, y)ψ(y)dl y , x ∈ D, (312)Γwith a continuous density ψ is a solution of the interior Neumann problem provided that ψ isa solution of the integral equation∫∂Φ(x, y)ψ(x) + 2ψ(y)dl y = 2g(x), x ∈ Γ. (313)∂n xΓTheorem 42 The interior Neumann problem is solvable if and only if∫ψdl y = 0 (314)is satisfied.16 Functional spaces and Chebyshev polynomials16.1 Chebyshev polynomials of the first and second kindΓThe Chebyshev polynomials of the first and second kind, T n and U n , are defined by the followingrecurrent relationships:T 0 (x) = 1, T 1 (x) = x; (315)T n (x) = 2xT n−1 (x) − T n−2 (x), n = 2, 3, ...; (316)U 0 (x) = 1, U 1 (x) = 2x; (317)U n (x) = 2xU n−1 (x) − U n−2 (x), n = 2, 3, ... . (318)We see that formulas (316) and (318) are the same and the polynomials are different becausethey are determined using different initial conditions (315) and (317).Write down few first Chebyshev polynomials of the first and second kind; we determinethem directly from formulas (315)–(318):T 0 (x) = 1,T 1 (x) = x,T 2 (x) = 2x 2 − 1,T 3 (x) = 4x 3 − 3x,T 4 (x) = 8x 4 − 8x 2 + 1,T 5 (x) = 16x 5 − 20x 3 + 5x,T 6 (x) = 32x 6 − 48x 4 + 18x 2 − 1;U 0 (x) = 1,U 1 (x) = 2x,U 2 (x) = 4x 2 − 1,U 3 (x) = 8x 3 − 4x,U 4 (x) = 16x 4 − 12x 2 + 1,U 5 (x) = 32x 5 − 32x 3 + 6x,U 6 (x) = 64x 6 − 80x 4 + 24x 2 − 1.59(319)(320)
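The recurrences (315)–(318) are easy to run on a computer. The following sketch (an illustration only) builds the monomial coefficient arrays of T_n and U_n with numpy's polynomial helpers and reproduces the last rows of the tables above:

import numpy as np
from numpy.polynomial import polynomial as P

# Generate T_n and U_n from the recurrences (315)-(318); index k of each
# coefficient array holds the coefficient of x^k.
def cheb_recurrence(p0, p1, nmax):
    polys = [np.array(p0, float), np.array(p1, float)]
    for _ in range(2, nmax + 1):
        polys.append(P.polysub(P.polymulx(2.0 * polys[-1]), polys[-2]))
    return polys

T = cheb_recurrence([1.0], [0.0, 1.0], 6)        # T_0 = 1, T_1 = x
U = cheb_recurrence([1.0], [0.0, 2.0], 6)        # U_0 = 1, U_1 = 2x

print(T[6])   # -> 32x^6 - 48x^4 + 18x^2 - 1, as in the table
print(U[6])   # -> 64x^6 - 80x^4 + 24x^2 - 1, as in the table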


We see that T n and U n are polynomials of degree n.Let us breifly summarize some important properties of the Chebyshev polynomials.If |x| ≤ 1, the following convenient representations are validU n (x) =T n (x) = cos(n arccos x), n = 0, 1, ...; (321)1√ sin ((n + 1) arccos x), n = 0, 1, ... . (322)1 − x2At x = ±1 the right-hand side of formula (322) should be replaced by the limit.Polynomials of the first and second kind are coupled by the relationshipsU n (x) = 1n + 1 T ′ n+1(x), n = 0, 1, ... . (323)Polynomials T n and U n are, respectively, even functions for even n and odd functions for oddn:T n (−x) = (−1) n T n (x),U n (−x) = (−1) n (324)U n (x).One of the most important properties is the orthogonality of the Chebyshev polynomialsin the segment [−1, 1] with a certain weight function (a weight). This property is expressed asfollows:⎧∫1⎪⎨ 0, n ≠ m1T n (x)T m (x) √ dx = π/2, n = m ≠ 0 ; (325)−11 − x2 ⎪ ⎩π, n = m = 0∫ 1−1U n (x)U m (x) √ 1 − x 2 dx ={0, n ≠ mπ/2, n = m . (326)Thus in the segment [−1, 1], the Chebyshev polynomials of the first kind are orthogonalwith the weight 1/ √ 1 − x 2 and the Chebyshev polynomials of the second kind are orthogonalwith the weight √ 1 − x 2 .16.2 Fourier–Chebyshev seriesFormulas (325) and (326) enable one to decompose functions in the Fourier series in Chebyshevpolynomials (the Fourier–Chebyshev series),orf(x) = a 02 T 0(x) +Coefficients a n and b n are determined as followsa n = 2 π∞∑a n T n (x), x ∈ [−1, 1], (327)n=1∞∑f(x) = b n U n (x), x ∈ [−1, 1]. (328)n=0∫1−1dxf(x)T n (x) √ , n = 0, 1, ...; (329)1 − x260


n = 2 π∫1−1f(x)U n (x) √ 1 − x 2 dx, n = 0, 1, ... . (330)Note that f(x) may be a complex-valued function; then the Fourier coefficients a n and b nare complex numbers.Consider some examples of decomposing functions in the Fourier–Chebyshev series√1 − x2 = 2 π − 4 π∞∑n=114n 2 − 1 T 2n(x), −1 ≤ x ≤ 1;arcsin x = 4 ∞∑ 1πn=0(2n + 1) T 2n+1(x), −1 ≤ x ≤ 1;2∞∑ (−1) n−1ln (1 + x) = − ln 2 + 2T n (x), −1 < x ≤ 1.n=1nSimilar decompositions can be obtained using the Chebyshev polynomials U n of the secondkind. In the first example above, an even function f(x) = √ 1 − x 2 is decomposed; therefore itsexpansion contains only coefficients multiplying T 2n (x) with even indices, while the terms withodd indices vanish.Since |T n (x)| ≤ 1 according to (321), the first and second series converge for all x ∈ [−1, 1]uniformly in this segment. The third series represents a different type of convergence: it convergesfor all x ∈ (−1, 1] (−1 < x ≤ 1) and diverges at the point x = −1; indeed, thedecomposed function f(x) = ln(x + 1) is not defined at x = −1.In order to describe the convergence of the Fourier–Chebyshev series introduce the functionalspaces of functions square-integrable in the segment (−1, 1) with the weights 1/ √ 1 − x 2 and√1 − x2 . We shall write f ∈ L (1)2 if the integral∫1−1|f(x)| 2 dx√1 − x2(331)converges, and f ∈ L (2)2 if the integral∫1−1|f(x)| 2√ 1 − x 2 dx. (332)converges. The numbers specified by (convergent) integrals (331) and (332) are the squarednorms of the function f(x) in the spaces L (1)2 and L (2)2 ; we will denote them by ‖f‖ 2 1 and ‖f‖ 2 2,respectively. Write the symbolic definitions of these spaces:∫1L (1)2 := {f(x) : ‖f‖ 2 1 :=−1∫1L (2)2 := {f(x) : ‖f‖ 2 2 :=−161|f(x)| 2 dx√ < ∞},1 − x2|f(x)| 2√ 1 − x 2 dx < ∞}.


Spaces L (1)2 and L (2)2 are (infinite-dimensional) Hilbert spaces with the inner products(f, g) 1 =(f, g) 2 =∫ 1−1∫1−1dxf(x)g(x) √ , 1 − x2f(x)g(x) √ 1 − x 2 dx.Let us define two more functional spaces of differentiable functions defined in the segment(−1, 1) whose first derivatives are square-integrable in (−1, 1) with the weights 1/ √ √ 1 − x 2 and1 − x2 :˜W 2 1 := {f(x) : f ∈ L (1)2 , f ′ ∈ L (2)2 },andŴ 1 2 := {f(x) : f ∈ L (1)2 , f ′ (x) √ 1 − x 2 ∈ L (2)2 };they are also Hilbert spaces with the inner productsand norms(f, g) ˜W 12=(f, g)Ŵ 12=∫1−1∫1−1dxf(x) √1 − x2dxf(x) √1 − x2‖f‖ 2˜W 1 =2 ∣‖f‖ 2 Ŵ 1 2=∣∫ 1−1∫1−1∫1−1∫1−1∫1dxg(x) √ + f ′ (x)g ′ (x) √ 1 − x 2 dx,1 − x2−1∫1dxg(x) √ + f ′ (x)g ′ (x) (1 − x 2 ) 3/2 dx1 − x2dxf(x) √ 1 − x2∣dxf(x) √ 1 − x2∣22++∫ 1−1∫1−1−1|f ′ (x)| 2√ 1 − x 2 dx,|f ′ (x)| 2 (1 − x 2 ) 3/2 dx.If a function f(x) is decomposed in the Fourier–Chebyshev series (327) or (328) and isan elemet of the space L (1)2 or L (2)2 (that is, integrals (331) and (332) converge), then thecorresponding series always converge in the root-mean-square sense, i.e.,‖f − S N ‖ 1 → 0 and ‖f − S N ‖ 2 → 0 as N → ∞,where S N are partial sums of the Fourier–Chebyshev series. The ’smoother’ function f(x) inthe segment [−1, 1] the ’faster’ the convergence of its Fourier–Chebyshev series. Let us explainthis statement: if f(x) has p continuous derivatives in the segment [−1, 1], then∣ f(x) − a 0N∑2 T 0(x) − a n T n (x)∣ ≤ C ln N , x ∈ [−1, 1], (333)n=1N pwhere C is a constant that does not depend on N.62
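The "smoother f, faster convergence" statement (333) can be observed numerically. In the sketch below the coefficients a_n of the entire function f(x) = e^x are obtained by applying Gauss–Chebyshev quadrature to (329), and the uniform error of the partial sums is printed for a few values of N; the function and the quadrature size are arbitrary choices:

import numpy as np

# a_n = (2/pi) int f(x) T_n(x) dx / sqrt(1 - x^2) ~ (2/m) sum_j f(cos th_j) cos(n th_j)
m = 200
th = (np.arange(m) + 0.5) * np.pi / m
fj = np.exp(np.cos(th))
a = np.array([2.0 / m * np.sum(fj * np.cos(n * th)) for n in range(40)])

xx = np.linspace(-1.0, 1.0, 1001)
def partial_sum(N):
    s = a[0] / 2.0 * np.ones_like(xx)
    for n in range(1, N + 1):
        s += a[n] * np.cos(n * np.arccos(xx))
    return s

for N in (2, 4, 8, 16):
    print(N, np.max(np.abs(np.exp(xx) - partial_sum(N))))   # error drops very fast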


A similar statement is valid for the Fourier–Chebyshev series in the polynomials U_n: if f(x) has p continuous derivatives in the segment [−1, 1] and p ≥ 2, then

| f(x) − Σ_{n=0}^{N} b_n U_n(x) | ≤ C / N^{p−3/2}, x ∈ [−1, 1]. (334)

We will often use the following three relationships involving the Chebyshev polynomials:

∫_{−1}^{1} T_n(x)/(x − y) · dx/√(1 − x^2) = π U_{n−1}(y), −1 < y < 1, n ≥ 1; (335)

∫_{−1}^{1} U_{n−1}(x)/(x − y) · √(1 − x^2) dx = −π T_n(y), −1 < y < 1, n ≥ 1; (336)

∫_{−1}^{1} ln (1/|x − y|) T_n(x) dx/√(1 − x^2) = { π ln 2, n = 0; π T_n(y)/n, n ≥ 1. } (337)

The integrals in (335) and (336) are improper integrals in the sense of Cauchy (a detailed analysis of such integrals is performed in [8, 9]). The integral in (337) is a standard convergent improper integral.

17 Solution to integral equations with a logarithmic singularity of the kernel

17.1 Integral equations with a logarithmic singularity of the kernel

Many boundary value problems of mathematical physics are reduced to integral equations with a logarithmic singularity of the kernel,

Lϕ := ∫_{−1}^{1} ln (1/|x − y|) ϕ(x) dx/√(1 − x^2) = f(y), −1 < y < 1, (338)

and

(L + K)ϕ := ∫_{−1}^{1} ( ln (1/|x − y|) + K(x, y) ) ϕ(x) dx/√(1 − x^2) = f(y), −1 < y < 1. (339)

In equations (338) and (339), ϕ(x) is an unknown function and the right-hand side f(y) is a given (continuous) function. The functions ln (1/|x − y|) in (338) and ln (1/|x − y|) + K(x, y) in (339) are the kernels of the integral equations; K(x, y) is a given function which is assumed to be continuous (i.e., to have no singularity) at x = y. The kernels go to infinity at x = y and are therefore referred to as functions with a (logarithmic) singularity, and equations (338) and (339) are called integral equations with a logarithmic singularity of the kernel. In equations (338) and (339) the weight function 1/√(1 − x^2) is separated explicitly. This function describes the behaviour (singularity) of the solution in the vicinity of the endpoints −1 and +1. The linear integral operators L and L + K are defined by the left-hand sides of (338) and (339).

Equation (338) is a particular case of (339) for K(x, y) ≡ 0 and can be solved explicitly. Assume that the right-hand side f(y) ∈ W̃_2^1 (for simplicity, one may assume that f is a continuously differentiable function in the segment [−1, 1]) and look for the solution ϕ(x) in the space L_2^{(1)}. This implies that the integral operators L and L + K are considered as bounded linear operators

L : L_2^{(1)} → W̃_2^1 and L + K : L_2^{(1)} → W̃_2^1.

Note that if K(x, y) is a sufficiently smooth function, then K is a compact (completely continuous) operator in these spaces.

17.2 Solution via Fourier–Chebyshev series

Let us describe the method of solution of integral equation (338). Look for an approximation ϕ_N(x) to the solution ϕ(x) of (338) in the form of a partial sum of the Fourier–Chebyshev series

ϕ_N(x) = a_0/2 T_0(x) + Σ_{n=1}^{N} a_n T_n(x), (340)

where a_n are unknown coefficients. Expand the given function f(y) in the Fourier–Chebyshev series

f_N(y) = f_0/2 T_0(y) + Σ_{n=1}^{N} f_n T_n(y), (341)

where f_n are known coefficients determined according to (329). Substituting (340) and (341) into equation (338) and using formula (337), we obtain

(a_0/2) ln 2 · T_0(y) + Σ_{n=1}^{N} (1/n) a_n T_n(y) = (f_0/2) T_0(y) + Σ_{n=1}^{N} f_n T_n(y). (342)

Now equate the coefficients multiplying T_n(y) with the same indices n:

(a_0/2) ln 2 = f_0/2, (1/n) a_n = f_n,

which gives the unknown coefficients a_n,

a_0 = f_0 / ln 2, a_n = n f_n,

and the approximation

ϕ_N(x) = f_0/(2 ln 2) T_0(x) + Σ_{n=1}^{N} n f_n T_n(x). (343)

The exact solution to equation (338) is obtained by a transition to the limit N → ∞ in (343):

ϕ(x) = f_0/(2 ln 2) T_0(x) + Σ_{n=1}^{∞} n f_n T_n(x). (344)
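The recipe (343) is straightforward to implement: compute the coefficients f_n by the quadrature version of (329) and rescale them. The sketch below does this for the sample right-hand side f(y) = e^y (any sufficiently smooth f would do); the normalization follows (343) exactly as printed above:

import numpy as np

# Coefficient recipe (343) for equation (338): a_0 = f_0 / ln 2, a_n = n f_n,
# with f_n computed by Gauss-Chebyshev quadrature of (329).
m, N = 400, 30
th = (np.arange(m) + 0.5) * np.pi / m
y = np.cos(th)                                   # Gauss-Chebyshev nodes
f = np.exp(y)                                    # sample right-hand side

fn = np.array([2.0 / m * np.sum(f * np.cos(n * th)) for n in range(N + 1)])
an = np.empty(N + 1)
an[0] = fn[0] / np.log(2.0)
an[1:] = np.arange(1, N + 1) * fn[1:]

# evaluate phi_N(x) = a_0/2 T_0(x) + sum_{n>=1} a_n T_n(x) at a few points
x = np.linspace(-0.9, 0.9, 5)
phi = an[0] / 2.0 + sum(an[n] * np.cos(n * np.arccos(x)) for n in range(1, N + 1))
print(phi)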


In some cases one can determine the sum of series (344) explicitly and obtain the solutionin a closed form. One can show that the approximation ϕ N (x) of the solution to equation (338)defined by (340) converges to the exact solution ϕ(x) as N → ∞ in the norm of space L (1)2 .Consider integral equation (339). Expand the kernel K(x, y) in the double Fourier–Chebyshevseries, take its partial sumN∑ N∑K(x, y) ≈ K N (x, y) = k ij T i (x)T j (y), (345)i=0 j=0and repeat the above procedure. A more detailed description is given below.Example 14 Consider the integral equation∫1−1(ln1|x − y| + xy) ϕ(x) dx√√1 = y + − y 2 , −1 < y < 1.1 − x2Use the method of solution described above and setto obtainy +ϕ N (x) = a 02 T 0(x) +N∑a n T n (x)n=1√1 − y 2 = 2 π T 0(y) + T 1 (y) − 4 πK(x, y) = T 1 (x)T 1 (y).∞∑k=1T 2k (y)4k 2 − 1 ,Substituting these expressions into the initial integral equation, we obtaina 02 ln 2T 0(y) +N∑n=1a nn T n(y) + π 2 a 1T 1 (y) = 2 π T 0(y) + T 1 (y) − 4 [N/2]∑ T 2k (y)πk=14k 2 − 1 ,here [ N 2 ] denotes the integer part of a number. Equating the coefficients that multiply T n(y) withthe same indices n we determine the unknownsa 0 = 4π ln 2 , a 1 = 2 π , a 2n+1 = 0, a 2n = − 8 nπ 4n 2 − 1 , n ≥ 1.The approximate solution has the formϕ N (x) = 4π ln 2 + 2 π x − 8 [N/2]∑ nπn=14n 2 − 1 T 2n(x).The exact solution is represented as the Fourier–Chebyshev seriesϕ(x) = 4π ln 2 + 2 π x − 8 π∞∑n=1n4n 2 − 1 T 2n(x).The latter series converges to the exact solution ϕ(x) as N → ∞ in the norm of space L (1)and its sum ϕ ∈ L (1)2 .652 ,
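The expansion of √(1 − y^2) used in Example 14 (and listed in Section 16.2) can be spot-checked numerically; its coefficients behave like 1/k^2, so the truncation error of the partial sum decays like 1/N:

import numpy as np

# Spot-check of sqrt(1 - y^2) = 2/pi - (4/pi) sum_{k>=1} T_2k(y) / (4k^2 - 1),
# truncated after N = 200 terms.
N = 200
y = np.linspace(-0.99, 0.99, 9)
s = 2.0 / np.pi * np.ones_like(y)
for k in range(1, N + 1):
    s -= 4.0 / np.pi * np.cos(2 * k * np.arccos(y)) / (4 * k**2 - 1)
print(np.max(np.abs(s - np.sqrt(1.0 - y**2))))   # truncation error of order 1/N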


18 Solution to singular integral equations18.1 Singular integral equationsConsider two types of the so-called singular integral equations(S 1 + K 1 )ϕ =(S 2 + K 2 )ϕ :=S 1 ϕ :=∫ 1−1S 2 ϕ :=∫1−1∫ 1−11x − y ϕ(x) dx√ = f(y), −1 < y < 1; (346)1 − x2( )1x − y + K dx1(x, y) ϕ(x) √1 − x2∫1−1= f(y), −1 < y < 1; (347)1x − y ϕ(x) √ 1 − x 2 dx = f(y), −1 < y < 1; (348)( )1x − y + K 2(x, y) ϕ(x) √ 1 − x 2 dx = f(y), −1 < y < 1. (349)<strong>Equations</strong> (346) and (348) constitute particular cases of a general integral equation (347)and can be solved explicitly. Singular integral operators S 1 and S 1 + K 1 are considered asbounded linear operators in the weighted functional spaces introduced above, S 1 : L (1)2 → L (2)2 ,S 1 + K 1 : L (1)2 → L (2)2 . Singular integral operators S 2 S 2 + K 2 are considered as bounded linearoperators in the weighted functional spaces S 2 : L (2)2 → L (1)2 , S 2 + K 2 : L (2)2 → L (1)2 . If the kernelfunctions K 1 (x, y) and K 2 (x, y) are sufficiently smooth, K 1 and K 2 will be compact operatorsin these spaces.18.2 Solution via Fourier–Chebyshev seriesLook for an approximation ϕ N (x) to the solution ϕ(x) of the singular integral equations in theform of partial sums of the Fourier–Chebyshev series. From formulas (335) and (336) it followsthat one should solve (346) and (347) by expanding functions ϕ in the Chebyshev polynomialsof the first kind and f in the Chebyshev polynomials of the second kind. In equations (348)and (349), ϕ should be decomposed in the Chebyshev polynomials of the second kind and f inthe Chebyshev polynomials of the first kind.Solve equation (346). Assume that f ∈ L (2)2 and look for a solution ϕ ∈ L (1)2 :ϕ N (x) = a 02 T 0(x) +f N (y) =N−1 ∑k=0N∑a n T n (x); (350)n=1f k U k (y), (351)where coefficients f k are calculated by formulas (330). Substituting expressions (350) and (351)into equation (346) and using (335), we obtainN∑N−1 ∑π a n U n−1 (y) =n=1k=066f k U k (y). (352)


Coefficient a 0 vanishes because (see [4])∫1−11 dx√x − y 1 − x2= 0, −1 < y < 1. (353)Equating, in (352), the coefficients multiplying U n (y) with equal indices, we obtain a n =f n−1 /π, n ≥ 1. The coefficient a 0 remains undetermined. Thus the solution to equation (346)depends on an arbitrary constant C = a 0 /2. Write the corresponding formulas for the approximatesolution,ϕ N (x) = C + 1 N∑f n−1 T n (x); (354)πand the exact solution,n=1ϕ(x) = C + 1 ∞∑f n−1 T n (x). (355)πn=1The latter series converges in the L (1)2 -norm.In order to obtain a unique solution to equation (346) (to determine the constant C),one should specify an additional condition for function ϕ(x), for example, in the form of anorthogonality condition∫1−1dxϕ(x) √ = 0. (356)1 − x2(orthogonality of the solution to a constant in the space L (1)2 ). Then using the orthogonality(325) of the Cebyshev polynomial T 0 (x) ≡ 1 to all T n (x), n ≥ 1, and calculating the innerproduct in (355) (multiplying both sides by 1/ √ 1 − x 2 and integrating from -1 to 1) we obtainC = 1 π∫1−1ϕ(x)dx√1 − x2 = 0.Equation (347) can be solved in a similar manner; however first it is convenient to decomposethe kernel function K 1 (x, y) in the Fourier–Chebyshev series and take the partial sumK 1 (x, y) ≈ K (1)N (x, y) =N∑N−1 ∑i=0 j=0k (1)ij T i (x)U j (y), (357)and then proceed with the solution according to the above.If K 1 (x, y) is a polynomial in powers of x and y, representation (357) will be exact and canbe obtained by rearranging explicit formulas (315)–(320).Example 15 Solve a singular integral equation∫ 1−1( )1x − y + x2 y 3 dxϕ(x) √1 − x2= y, −1 < y < 167


subject to condition (356). Look for an approximate solution in the formϕ N (x) = a 02 T 0(x) +N∑a n T n (x).n−1The right-hand sidey = 1 2 U 1(y).Decompose K 1 (x, y) in the Fourier–Chebyshev series using (320),( 1K 1 (x, y) =2 T 0(x) + 1 ( 12(x))2 T 4 U 1(y) + 1 )8 U 3(y) .Substituting the latter into equation (335) and using the orthogonality conditions (325), weobtainN∑( 1π a n U n−1 (y) +4 U 1(y) + 1 )8 U π(a0 + a 2 )3(y)= 1 4 2 U 1(y),n=1Equating the coefficients multiplying U n (y) with equal indices, we obtainπa 1 = 0, πa 2 + π(a 0 + a 2 )16= 1 2 , πa 3 = 0,πa 4 + π(a 0 + a 2 )= 0; πa n = 0, n ≥ 5.32Finally, after some algebra, we determine the unknown coefficientsa 0 = C, a 2 = 1 17 ( 8 π − C), a 4 = 168 (2C − 1 π ), a n = 0, n ≠ 0, 2, 4,where C is an arbitrary constant. In order to determine the unique solution that satisfies (356),calculateC = 0, a 0 = 0, a 2 = 817π , a 4 = − 168π .The final form of the solution isϕ(x) = 817π T 2(x) − 168π T 4(x),orϕ(x) = − 217π x4 + 1817π x2 − 3368π .Note that the approximate solution ϕ N (x) = ϕ(x) for N ≥ 4. We see that here, one can obtainthe solution explicitly as a polynomial.Consider equations (348) and (349). Begin with (348) and look for its solution in the formϕ N (x) =N−1 ∑n=0b n U n (x). (358)68


Expand the right-hand side in the Chebyshev polynomials of the first kindN−1−πn=0f N (y) = f 02 T 0(y) +N∑f k T k (y), (359)k=1where coefficients f k are calculated by (329). Substitute (358) and (359) into (348) and use(336) to obtain∑b n T n+1 (y) = f 0N∑2 T 0(y) + f k T k (y), (360)where unknown coefficients b n are determined form (360),k=1b n = − 1 π f n+1, n ≥ 0.Equality (360) holds only if f 0 = 0. Consequently, we cannot state that the solution to (348)exists for arbitrary function f(y). The (unique) solution exists only if f(y) satisfies the orthogonalitycondition∫1−1dyf(y) √ = 0, (361)1 − y2which is equivalent to the requirement f 0 = 0 (according to (329) taken at n = 0). Note that,when solving equation (348), we impose, unlike equation (356), the condition (361) for equation(346), on the known function f(y). As a result, we obtaina unique approximate solution to (348) for every N ≥ 2anda unique exact solution to (348)ϕ N (x) = − 1 πN−1 ∑n=0f n+1 U n (x), (362)ϕ(x) = − 1 ∞∑f n+1 U n (x). (363)πn=0We will solve equations (348) and (349) under the assumption that the right-hand sidef ∈ L (1)2 and ϕ ∈ L (2)2 . Series (363) converges in the norm of the space L (2)2 .The solution of equation (349) is similar to that of (348). The difference is that K 2 (x, y) isapproximately replaced by a partial sum of the Fourier–Chebyshev seriesK 2 (x, y) ≈ K (2)N−1 ∑N (x, y) =N∑i=0 j=0k (2)ij U i (x)T j (y). (364)Substituting (358), (359), and (364) into equation (349) we obtain a linear equation systemwith respect to unknowns b n . As well as in the case of equation (348), the solution to (349)exists not for all right-hand sides f(y).The method of decomposing in Fourier–Chebyshev series allows one not only to solve equations(348) and (349) but also verify the existence of solution. Consider an example of anequation of the form (349) that has no solution (not solvable).69


Example 16 Solve the singular integral equation

∫_{−1}^{1} ( 1/(x − y) + √(1 − y^2) ) ϕ(x) √(1 − x^2) dx = 1, −1 < y < 1; ϕ ∈ L_2^{(2)}.

Look for an approximate solution, as in the case of (348), in the form of a series

ϕ_N(x) = Σ_{n=0}^{N−1} b_n U_n(x).

Replace K_2(x, y) = √(1 − y^2) by the approximate decomposition

K_N^{(2)}(x, y) = (2/π) T_0(y) − (4/π) Σ_{k=1}^{[N/2]} T_{2k}(y)/(4k^2 − 1).

The right-hand side of the equation is 1 = T_0(y). Substituting these expressions into the equation, we obtain

−π Σ_{n=0}^{N−1} b_n T_{n+1}(y) + ( T_0(y) − 2 Σ_{k=1}^{[N/2]} T_{2k}(y)/(4k^2 − 1) ) b_0 = T_0(y).

Equating the coefficients that multiply T_0(y) on both sides of the equation, we obtain b_0 = 1; equating the coefficients that multiply T_1(y), we have −π b_0 = 0, which yields b_0 = 0. This contradiction proves that the considered equation has no solution. Note that this circumstance is not connected with the fact that we determine an approximate solution ϕ_N(x) rather than the exact solution ϕ(x): by considering infinite series (setting N = ∞) in place of finite sums we obtain the same result.

Let us now derive the conditions on the function f(y) that provide the solvability of the equation

∫_{−1}^{1} ( 1/(x − y) + √(1 − y^2) ) ϕ(x) √(1 − x^2) dx = f(y), −1 < y < 1,

with an arbitrary right-hand side f(y). To this end, replace the decomposition of the right-hand side, which above was equal to one, by the expansion

f_N(y) = f_0/2 T_0(y) + Σ_{n=1}^{N} f_n T_n(y).

Equating the coefficients that multiply equal-order polynomials, we obtain

b_0 = f_0/2, b_0 = −(1/π) f_1,

b_{2k−1} = −(1/π) ( f_{2k} + 2 b_0/(4k^2 − 1) ), b_{2k} = −(1/π) f_{2k+1}; k ≥ 1.


We see that the equation has the unique solution if and only ifRewrite this condition using formula (329)12∫1−1f 02 = − 1 π f 1.dyf(y) √ = − 1 1 − y2 π∫1−1y f(y)dy√ 1 − y2 .Here, one can uniquely determine coefficient b n , which provides the unique solvability of theequation.19 Matrix representation of an operator in the Hilbertspace and summation operators19.1 Matrix representation of an operator in the Hilbert spaceLet {ϕ n } be an orthonormal basis in the Hilbert space H; then each element f ∈ H can∞∑be represented in the form of the generalized Fourier series f = a n ϕ n , where the sequencen=1∞∑of complex numbers {a n } is such that the series |a n | 2 < ∞ converges in H. There exists an=1(linear) one-to-one correspondence between such sequences {a n } and the elements of the Hilbertspace.We will use this isomorphism to construct the matrix representation of a completely continuousoperator A and a completely continuous operator-valued function A(γ) holomorphic withrespect to complex variable γ acting in H.Assume that g = Af, f ∈ H, andg k = g k (γ) = (A(γ)f, ϕ k ),i.e., an infinite sequence (vector) of the Fourier coefficients {g n } = (g 1 , g 2 , ...) corresponds to∞∑an element g ∈ H and we can write g = (g 1 , g 2 , ...). Let f = f k ϕ k ; then f = (f 1 , f 2 , ...). Setk=1N∑∞∑f N = P f = f k ϕ k and Aϕ n = a kn ϕ k , n = 1, 2, . . .. Then,k=1k=1N∑lim Af N ∑ ∞ ∞∑ ( ∑ ∞ )= lim f n a kn ϕ k = Af = a kn f n ϕk .N→∞ N→∞n=1 k=1k=1 n=1∞∑∞∑On the other side, Af = g k ϕ k , hence, g k = a kn (Af, ϕ k ) = (g, ϕ k ). Now we set Aϕ k = g kk=1n=1and(g k , ϕ j ) = (Aϕ k , ϕ j ) = a jk ,71


where a jk are the Fourier coefficients in the expansion of element g k over the basis {ϕ j }.Consequently,∞∑Aϕ k = g k = a jk ϕ j ,and the seriesj=1∞∑∞∑|a jk | 2 = |(g k , ϕ j )| 2 < ∞; k = 1, 2, . . . .j=1j=1Infinite matrix {a jk } ∞ j,k=1 defines operator A; that is, one can reconstruct A by this matrix andan orthonormal basis {ϕ k } of the Hilbert space H: find the unique element Af for each f ∈ H.∑In order to prove the latter assertion, we will assume that f = ∞ ξ k ϕ k and f N = P f. Then,k=1∞∑N∑Af N = ηk N ϕ k , where ηk N = a kj ξ j . Applying the continuity of operator A, we havek=1j=1Hence, for each f ∈ H,C k = (Af, ϕ k ) = limN→∞ (Af N , ϕ k ) = limN→∞ ηN k =∞∑Af = C k ϕ k ,∞∑C k = a kj ξ j ,k=1j=1∞∑a kj ξ j .j=1and the latter equalities define the matrix representation of the operator A in the basis {ϕ k }of the Hilbert space H.19.2 Summation operators in the spaces of sequencesIn this section, we will employ the approach based on the use of Chebyshev polynomials toconstruct families of summation operators associated with logarithmic integral equations.Let us introduce the linear space h p formed by the sequences of complex numbers {a n } suchthat∞∑|a n | 2 n p < ∞, p ≥ 0,n=1with the inner product and the norm∞∑((a, b) = a n ¯bn n p ∑ ∞ ) 1, ‖a‖ p = |a n | 2 n p 2.n=1n=1One can easily verify the validity of the following statement.Theorem 43 h p , p ≥ 0 is the Hilbert space.Let us introduce the summation operators⎛⎞1 0 . . . 0 . . .0 2 −1 . . . 0 . . .L =. . . . . . . . . . . . . . .⎜⎝ 0 . . . n −1 ⎟0 . . . ⎠. . . . . . . . . . . . . . .72


or, in other notation,andorL = {l nj } ∞ n,j=1, l nj = 0, n ≠ j, l nn = 1 , n = 1, 2, . . . ,n⎛⎞1 0 . . . 0 . . .0 2 . . . 0 . . .L −1 =. . . . . . . . . . . . . . .⎜⎟⎝ 0 . . . n 0 . . . ⎠. . . . . . . . . . . . . . .L −1 = {l nj } ∞ n,j=1, l nj = 0, n ≠ j, l nn = n, n = 1, . . . .that play an important role in the theory of integral operators and operator-valued functionswith a logarithmic singularity of the kernel.Theorem 44 L is a linear bounded invertible operator acting from the space h p to the space h p+2 :L : h p → h p+2 (Im L = h p+2 , Ker L = {∅}). The inverse operator L −1 : h p+2 → h p , is bounded,p ≥ 0 .✷ Let us show that L acts from h p to h p+2 and L −1 acts from h p+2 to h p . Assume that a ∈ h p .Then, for La we obtain∞∑a n∞∑∣∣ ∣2n p+2 = |a n | 2 n p < ∞.nn=1n=1If b ∈ h p+2 , we obtain similar relationships for L −1 b:∞∑∞∑|b n n| 2 n p = |b n | 2 n p+2 < ∞.n=1n=1The norm ‖La‖ p+2 = ‖a‖ p . The element a = L −1 b, a ∈ h p , is the unique solution of the equationLa = b, and ker L = {∅}. According to the Banach theorem, L −1 is a bounded operator. ✷Note that ‖L‖ = ‖L −1 ‖ = 1 andL −1 L = LL −1 = E,where E is the infinite unit matrix.Assume that the summation operator A acting in h p is given by its matrix {a nj } ∞ n,j=1 andthe following condition holds:∞∑n=1 j=1∞∑ n p+2Then, we can prove the following statement.j p |a nj| 2 ≤ C 2 < ∞, C = const, C > 0.Theorem 45 The operator A : h p → h p+2 is completely continuous73


✷ Let ξ ∈ h p and Aξ = η. Then,∣ ∣∞∑∞∑ ∣∣∣∣∣ ∞ ∣∣∣∣∣n p+2 |η n | 2 = n p+2 ∑a nj ξ jn=1n=1 j=1≤ ‖ξ‖ 2 p∞∑∞∑n=1 j=1n p+2 |a nj| 2j p ≤ C 2 ‖ξ‖ 2 p,hence, η ∈ h p+2 . Now we will prove that A is completely continuous. In fact, operator A canbe represented as a limit with respect to the operator norm of a sequence of finite-dimensionaloperators A N , which are produced by the cut matrix A and defined by the matrices A N ={a nj } N n,j=1:∣ ∣N−1‖(A − A N )ξ‖ 2 ∑ ∣∣∣∣∣ ∞p+2 = n p+2 ∑ ∣∣∣∣∣ 2 ∣ ∣∞∑ ∣∣∣∣∣ ∞a nj ξ j + n p+2 ∑ ∣∣∣∣∣ 2a nj ξ j≤ ‖ξ‖ 2 p≤ ‖ξ‖ 2 p∞∑∞∑n=N j=1[ ∑ ∞ ( ∑ ∞n=Nj=1n=1n p+2 |a nj| 2j pj=Nn p+2 |a nj| 2j p+ ‖ξ‖ 2 pN−1 ∑n=N∞∑n=1 j=N) ∑ ∞ ( ∑ ∞+j=Nn=1|a nj | 2j=1j p n p+2n p+2 |a nj| 2j p )],and, consequently, the following estimates are valid with respect to the operator norm:‖A − A N ‖ ≤∞∑ ( ∑ Nn=Nj=1n p+2 |a nj| 2j p) ∑ ∞ ( ∑ ∞+j=Nn=1n p+2 |a nj| 2j p )→ 0, N → ∞. ✷Corollary 1 The sequence of operators A N strongly converges to operator A: ‖A − A N ‖ → 0,N → ∞, and ‖A‖ < C, C = const. In order to provide the complete continuity of A, it issufficient to require the fulfilment of the estimate∞∑ ∞∑n p+2 |a nj | 2 < C, C = const. (365)n=1 j=1Definition 9 Assume that condition (365) holds for the matrix elements of summation operatorA. Then, the summation operator F = L + A : h p → h p+2 is called the L-operator.Theorem 46 L-operator F = L + A : h p → h p+2 is a Fredholm operator; ind (L + A) = 0.20 Matrix Representation of Logarithmic <strong>Integral</strong> OperatorsConsider a class Φ of functions ϕ such that they admit in the interval (−1, 1) the representationin the form of the Fourier–Chebyshev seriesϕ(x) = 1 ∞∑2 T 0(x)ξ 0 + ξ n T n (x); −1 ≤ x ≤ 1, (366)n=174


20 Matrix representation of logarithmic integral operators

Consider the class $\Phi$ of functions $\varphi$ that admit in the interval $(-1, 1)$ a representation in the form of the Fourier–Chebyshev series
\[
\varphi(x) = \frac12 T_0(x)\xi_0 + \sum_{n=1}^{\infty}\xi_n T_n(x), \qquad -1 \le x \le 1, \qquad (366)
\]
where $T_n(x) = \cos(n\arccos x)$ are the Chebyshev polynomials and $\xi = (\xi_0, \xi_1, \dots, \xi_n, \dots) \in h_2$; that is,
\[
\|\xi\|_2^2 = \frac12|\xi_0|^2 + \sum_{n=1}^{\infty}|\xi_n|^2 n^2 < \infty.
\]
In this section we add the coordinate $\xi_0$ to the vector $\xi$ and modify the corresponding formulas accordingly; it is easy to see that all results of the previous sections remain valid in this case. Recall the definitions of the weighted Hilbert spaces associated with logarithmic integral operators:
\[
L_2^{(1)} = \Bigl\{f(x) : \|f\|_1^2 = \int_{-1}^{1}\frac{|f(x)|^2\,dx}{\sqrt{1-x^2}} < \infty\Bigr\}, \qquad
L_2^{(2)} = \Bigl\{f(x) : \|f\|_2^2 = \int_{-1}^{1}|f(x)|^2\sqrt{1-x^2}\,dx < \infty\Bigr\},
\]
\[
\widetilde W_2^1 = \{f(x) : f \in L_2^{(1)},\ f' \in L_2^{(2)}\},
\qquad
\widehat W_2^1 = \{f(x) : f \in L_2^{(2)},\ f^*(x) = f(x)\sqrt{1-x^2},\ (f^*)' \in L_2^{(2)},\ f^*(-1) = f^*(1) = 0\}.
\]
Below, we denote the weights by $p_1(x) = \sqrt{1-x^2}$ and $p_1^{-1}(x)$.

Note that the Chebyshev polynomials form orthogonal bases in the spaces $L_2^{(1)}$ and $L_2^{(2)}$. In order to obtain the matrix representation of integral operators with a logarithmic singularity of the kernel, we prove a similar statement for the weighted space $\widetilde W_2^1$.

Lemma 6 $\widetilde W_2^1$ and $\Phi$ are isomorphic.

✷ Assume that $\varphi \in \Phi$. We show that $\varphi \in \widetilde W_2^1$. The function $\varphi(x)$ is continuous, and (366) is its Fourier–Chebyshev series with the coefficients
\[
\xi_n = \frac{2}{\pi}\int_{-1}^{1} T_n(x)\varphi(x)p_1^{-1}(x)\,dx.
\]
The series $\sum_{n=1}^{\infty}|\xi_n|^2 n^2$ converges and, due to the Riesz–Fischer theorem, there exists an element $f \in L_2^{(1)}$ whose Fourier coefficients with respect to the system $U_n(x) = \sin(n\arccos x)$ are equal to $n\xi_n$. Since $U_n'(x) = -nT_n(x)p_1^{-1}(x)$, one obtains
\[
n\xi_n = \frac{2}{\pi}\int_{-1}^{1} nT_n(x)\varphi(x)p_1^{-1}(x)\,dx
= -\frac{2}{\pi}\int_{-1}^{1} U_n'(x)\varphi(x)\,dx
= \frac{2}{\pi}\int_{-1}^{1} U_n(x)f(x)\frac{dx}{p_1(x)}, \qquad n \ge 1.
\]


Since $f(x)$ is an integrable function, the integral $\int_{-1}^{1}|f(x)|p_1^{-1}(x)\,dx$ converges, the function
\[
F(x) = \int_{-1}^{x}\frac{f(t)\,dt}{p_1(t)}
\]
is absolutely continuous (as an integral depending on its upper limit), $F'(x) = f(x)p_1^{-1}(x)$, and
\[
\frac{2}{\pi}\int_{-1}^{1} U_n(x)f(x)\frac{dx}{p_1(x)}
= \frac{2}{\pi}\int_{-1}^{1} U_n(x)F'(x)\,dx
= -\frac{2}{\pi}\int_{-1}^{1} U_n'(x)F(x)\,dx.
\]
The latter equalities yield
\[
\int_{-1}^{1} U_n'(x)\bigl[\varphi(x) - F(x)\bigr]\,dx
= -n\int_{-1}^{1} T_n(x)\bigl[\varphi(x) - F(x)\bigr]\frac{dx}{p_1(x)} = 0, \qquad n \ge 1.
\]
Hence, $\varphi(x) - F(x) = C = \mathrm{const}$. The explicit form of $F'(x)$ yields $\varphi'(x) = f(x)p_1^{-1}(x)$; therefore, $\varphi'(x)p_1(x) \in L_2^{(1)}$ and $\varphi'(x) \in L_2^{(2)}$.

Now let us prove the converse. Let $\varphi \in \widetilde W_2^1$ be represented by the Fourier–Chebyshev series (366). Form the Fourier series
\[
p_1(x)\varphi'(x) = \sum_{n=1}^{\infty}\eta_n U_n(x), \qquad |x| \le 1,
\]
where
\[
\eta_n = \frac{2}{\pi}\int_{-1}^{1} U_n(x)\bigl(p_1(x)\varphi'(x)\bigr)\frac{dx}{p_1(x)}
= -\frac{2}{\pi}\int_{-1}^{1} U_n'(x)\varphi(x)\,dx
= \frac{2n}{\pi}\int_{-1}^{1} T_n(x)\varphi(x)\frac{dx}{p_1(x)}.
\]
Now, in order to prove that $\xi \in h_2$, it is sufficient to apply the Bessel inequality to $\eta_n = n\xi_n$, where
\[
\xi_n = \frac{2}{\pi}\int_{-1}^{1}\varphi(x)T_n(x)\frac{dx}{p_1(x)}.\ ✷
\]
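Lemma 6 can also be observed numerically: for a smooth function the Fourier–Chebyshev coefficients decay fast enough for $\sum n^2|\xi_n|^2$ to remain bounded. A short sketch (Python with NumPy; the test function, the number of coefficients, and the quadrature size are illustrative assumptions) computes $\xi_n = \frac{2}{\pi}\int_{-1}^{1}T_n(x)\varphi(x)p_1^{-1}(x)\,dx$ by Gauss–Chebyshev quadrature:

\begin{verbatim}
import numpy as np

def chebyshev_coefficients(phi, n_max, M=4000):
    """xi_n = (2/pi) * int_{-1}^{1} T_n(x) phi(x) dx / sqrt(1 - x**2),
    approximated by M-point Gauss-Chebyshev quadrature with nodes x_k = cos(theta_k)."""
    theta = (2 * np.arange(1, M + 1) - 1) * np.pi / (2 * M)
    n = np.arange(n_max + 1)
    # each node carries the weight pi/M, and T_n(cos(theta)) = cos(n*theta)
    return (2.0 / M) * np.cos(np.outer(n, theta)) @ phi(np.cos(theta))

phi = lambda x: np.exp(x) * np.cos(2 * x)     # a smooth function, hence an element of Phi
xi = chebyshev_coefficients(phi, n_max=40)
n = np.arange(1, xi.size, dtype=float)
print(np.sum(n ** 2 * xi[1:] ** 2))           # partial sums of sum n**2 |xi_n|**2 stay bounded
print(xi[:5])                                 # the coefficients decay rapidly
\end{verbatim}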


20.1 Solution to integral equations with a logarithmic singularity of the kernel

Consider the integral operator with a logarithmic singularity of the kernel
\[
L\varphi = -\frac{1}{\pi}\int_{-1}^{1}\ln|x - s|\,\varphi(s)\frac{ds}{p_1(s)}, \qquad \varphi \in \widetilde W_2^1,
\]
and the integral equation
\[
L\varphi + N\varphi = f, \qquad \varphi \in \widetilde W_2^1, \qquad (367)
\]
where
\[
\Phi_N(x) = N\varphi = \int_{-1}^{1} N(q)\varphi(s)\frac{ds}{\pi p_1(s)}, \qquad q = (x, s),
\]
and $N(q)$ is an arbitrary complex-valued function satisfying the continuity conditions
\[
N(q) \in C^1(\Pi_1), \qquad \frac{\partial^2 N}{\partial x^2}(q) \in L_2\bigl[\Pi_1,\ p_1^{-1}(x)p_1^{-1}(s)\bigr], \qquad \Pi_1 = [-1, 1]\times[-1, 1].
\]
Write the Fourier–Chebyshev expansions
\[
\varphi(s) = \frac{1}{\sqrt2}\,\xi_0 T_0 + \sum_{n=1}^{\infty}\xi_n T_n(s), \qquad \xi = (\xi_0, \xi_1, \dots, \xi_n, \dots) \in h_2, \qquad (368)
\]
\[
f(x) = \frac{f_0}{2}T_0(x) + \sum_{n=1}^{\infty} f_n T_n(x),
\]
where the coefficients
\[
f_n = \frac{2}{\pi}\int_{-1}^{1} f(x)T_n(x)\frac{dx}{\sqrt{1-x^2}}, \qquad n = 0, 1, \dots.
\]
The function $\Phi_N(x)$ is continuous and may therefore also be expanded in the Fourier–Chebyshev series
\[
\Phi_N(x) = \frac12 b_0 T_0(x) + \sum_{n=1}^{\infty} b_n T_n(x), \qquad (369)
\]
where
\[
b_n = 2\int\!\!\!\int_{\Pi_1} N(q)T_n(x)\varphi(s)\frac{dx\,ds}{\pi^2 p_1(x)p_1(s)}. \qquad (370)
\]
According to Lemma 6 and formulas (368) and (369), equation (367) is equivalent in $\widetilde W_2^1$ to the equation
\[
\frac{\ln 2\,\xi_0}{\sqrt2}T_0(x) + \sum_{n=1}^{\infty} n^{-1}\xi_n T_n(x) + \frac{b_0}{2}T_0(x) + \sum_{n=1}^{\infty} b_n T_n(x)
= \frac{f_0}{2}T_0(x) + \sum_{n=1}^{\infty} f_n T_n(x), \qquad |x| \le 1. \qquad (371)
\]
Note that, since $\Phi_N(x)$ is a differentiable function of $x$ on $[-1, 1]$, the series in (371) converge uniformly and the equality in (371) is an identity on $(-1, 1)$.

Now we substitute series (368) for $\varphi(s)$ into (370). Taking into account that $\{T_n(x)\}$ is a basis and that one may interchange integration and summation and pass to double integrals (these statements are verified directly), we obtain an infinite system of linear algebraic equations
\[
\begin{cases}
\sqrt2\,\ln 2\,\xi_0 + b_0 = f_0,\\[2pt]
n^{-1}\xi_n + b_n = f_n, \qquad n \ge 1.
\end{cases} \qquad (372)
\]
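The diagonal terms $\ln 2$ and $n^{-1}$ in (371)–(372) come from the action of the logarithmic operator on the Chebyshev polynomials, $LT_0 = \ln 2$ and $LT_n = T_n/n$ for $n \ge 1$. A quick numerical check (Python with NumPy; the evaluation point and the quadrature size are illustrative assumptions, and plain Gauss–Chebyshev quadrature is used since the logarithmic singularity is integrable):

\begin{verbatim}
import numpy as np

def L_on_Tn(n, x, M=200000):
    """Approximate (L T_n)(x) = -(1/pi) int_{-1}^{1} ln|x - s| T_n(s) ds / sqrt(1 - s**2)
    by M-point Gauss-Chebyshev quadrature (weight pi/M per node)."""
    theta = (2 * np.arange(1, M + 1) - 1) * np.pi / (2 * M)
    s = np.cos(theta)
    return -np.sum(np.log(np.abs(x - s)) * np.cos(n * theta)) / M

x = 0.3
print(L_on_Tn(0, x), np.log(2.0))                         # L T_0 = ln 2, independent of x
for n in (1, 2, 5):
    print(L_on_Tn(n, x), np.cos(n * np.arccos(x)) / n)    # L T_n = T_n(x)/n for n >= 1
\end{verbatim}

Up to the quadrature error, the two printed columns agree, which is exactly the diagonal part of system (372).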


In terms of the summation operators, this system can be represented as
\[
L\xi + A\xi = f, \qquad \xi = (\xi_0, \xi_1, \dots, \xi_n, \dots) \in h_2, \qquad f = (f_0, f_1, \dots, f_n, \dots) \in h_4.
\]
Here, the operators $L$ and $A$ are defined by
\[
L = \{l_{nj}\}_{n,j=0}^{\infty}, \qquad l_{nj} = 0,\ n \ne j; \qquad l_{00} = \ln 2; \qquad l_{nn} = \frac1n,\ n = 1, 2, \dots;
\]
\[
A = \{a_{nj}\}_{n,j=0}^{\infty}, \qquad a_{nj} = \varepsilon_{nj}\int\!\!\!\int_{\Pi_1} N(q)T_n(x)T_j(s)\frac{dx\,ds}{\pi^2 p_1(x)p_1(s)}, \qquad (373)
\]
where
\[
\varepsilon_{nj} =
\begin{cases}
1, & n = j = 0,\\
2, & n \ge 1,\ j \ge 1,\\
\sqrt2 & \text{for other } n, j.
\end{cases}
\]

Lemma 7 Let $N(q)$ satisfy the continuity conditions formulated for equation (367). Then
\[
\sum_{n=1}^{\infty}\sum_{j=0}^{\infty}|a_{nj}|^2 n^4 + \frac{1}{\ln^2 2}\sum_{j=0}^{\infty}|a_{0j}|^2 \le C_0, \qquad (374)
\]
where
\[
C_0 = \frac{1}{\ln^2 2}\int_{-1}^{1}\biggl|\int_{-1}^{1}\frac{N(q)\,dx}{\pi p_1(x)}\biggr|^2\frac{ds}{\pi p_1(s)}
+ \int\!\!\!\int_{\Pi_1} p_1^2(x)\bigl|\bigl(p_1(x)N_x'\bigr)_x'\bigr|^2\frac{dx\,ds}{\pi^2 p_1(x)p_1(s)}. \qquad (375)
\]

✷ Expand the function of two variables $-p_1(x)\bigl(p_1(x)N_x'(q)\bigr)_x'$, which belongs to the weighted space $L_2[\Pi_1, p_1^{-1}(x)p_1^{-1}(s)]$, in the double Fourier–Chebyshev series over the system $\{T_n(x)T_m(s)\}$:
\[
-p_1(x)\bigl(p_1(x)N_x'(q)\bigr)_x' = \sum_{n=0}^{\infty}\sum_{m=0}^{\infty} C_{nm}T_n(x)T_m(s),
\]
where
\[
C_{nm} = -\frac{\varepsilon_{nm}^2}{\pi^2}\int_{-1}^{1}\frac{T_m(s)\,ds}{p_1(s)}\int_{-1}^{1} T_n(x)\bigl(p_1(x)N_x'(q)\bigr)_x'\,dx
= \frac{\varepsilon_{nm}^2}{\pi^2}\int_{-1}^{1}\frac{T_m(s)\,ds}{p_1(s)}\int_{-1}^{1} T_n'(x)p_1(x)N_x'(q)\,dx
\]
\[
= \frac{n\,\varepsilon_{nm}^2}{\pi^2}\int_{-1}^{1}\frac{T_m(s)\,ds}{p_1(s)}\int_{-1}^{1} U_n(x)N_x'(q)\,dx
= \frac{n^2\varepsilon_{nm}^2}{\pi^2}\int\!\!\!\int_{\Pi_1}\frac{N(q)T_n(x)T_m(s)}{p_1(x)p_1(s)}\,dx\,ds
= n^2\varepsilon_{nm}a_{nm}.
\]
The Bessel inequality yields
\[
\sum_{n=1}^{\infty}\sum_{m=0}^{\infty} n^4\pi^2|a_{nm}|^2
\le \int\!\!\!\int_{\Pi_1} p_1^2(x)\bigl|\bigl(p_1(x)N_x'(q)\bigr)_x'\bigr|^2\frac{dx\,ds}{p_1(x)p_1(s)}.
\]


Let us expand the function
\[
g(s) = \frac{1}{\ln 2}\int_{-1}^{1}\frac{N(q)\,dx}{\pi p_1(x)}
\]
in the Fourier–Chebyshev series. Again, due to the Bessel inequality, we obtain
\[
\frac{1}{\ln^2 2}\sum_{m=0}^{\infty}|a_{0m}|^2
\le \frac{1}{\ln^2 2}\int_{-1}^{1}\biggl|\int_{-1}^{1}\frac{N(q)\,dx}{\pi p_1(x)}\biggr|^2\frac{ds}{\pi p_1(s)}.\ ✷
\]

21 Galerkin methods and basis of Chebyshev polynomials

We describe the approximate solution of linear operator equations by considering their projections onto finite-dimensional subspaces, which is convenient for practical calculations. Below, it is assumed that all operators are linear and bounded. The general results concerning the substantiation and convergence of the Galerkin methods will then be applied to the solution of logarithmic integral equations.

Definition 10 Let $X$ and $Y$ be Banach spaces and let $A : X \to Y$ be an injective operator. Let $X_n \subset X$ and $Y_n \subset Y$ be two sequences of subspaces with $\dim X_n = \dim Y_n = n$, and let $P_n : Y \to Y_n$ be projection operators. The projection method generated by $X_n$ and $P_n$ approximates the equation
\[
A\varphi = f
\]
by its projection
\[
P_nA\varphi_n = P_nf.
\]
This projection method is called convergent for the operator $A$ if there exists an index $N$ such that for each $f \in \mathrm{Im}\,A$ the approximating equation $P_nA\varphi_n = P_nf$ has a unique solution $\varphi_n \in X_n$ for all $n \ge N$ and these solutions converge, $\varphi_n \to \varphi$ as $n \to \infty$, to the unique solution $\varphi$ of $A\varphi = f$.

In terms of operators, the convergence of the projection method means that for all $n \ge N$ the finite-dimensional operators $A_n := P_nA : X_n \to Y_n$ are invertible and the pointwise convergence $A_n^{-1}P_nA\varphi \to \varphi$, $n \to \infty$, holds for all $\varphi \in X$. In the general case, convergence can be expected only if the subspaces $X_n$ form a dense set,
\[
\inf_{\psi \in X_n}\|\psi - \varphi\| \to 0, \qquad n \to \infty, \qquad (376)
\]
for all $\varphi \in X$. This property is called the approximating property (any element $\varphi \in X$ can be approximated by elements $\psi \in X_n$ with arbitrary accuracy in the norm of $X$); in the subsequent analysis we always assume that it is fulfilled. Since $A_n = P_nA$ is a linear operator acting between finite-dimensional spaces, the projection method reduces to solving a finite-dimensional linear system. Below, the Galerkin method is considered as the projection method in which the $P_n$ are orthogonal projections.

Consider first the general convergence and error analysis.


Lemma 8 The projection method converges if and only if there exist an index $N$ and a positive constant $M$ such that for all $n \ge N$ the finite-dimensional operators $A_n = P_nA : X_n \to Y_n$ are invertible and the operators $A_n^{-1}P_nA : X \to X$ are uniformly bounded,
\[
\|A_n^{-1}P_nA\| \le M. \qquad (377)
\]
In this case the error estimate
\[
\|\varphi_n - \varphi\| \le (1 + M)\inf_{\psi \in X_n}\|\psi - \varphi\| \qquad (378)
\]
is valid.

Relationship (378) is usually called the quasioptimal estimate and corresponds to the error estimate of a projection method based on the approximation of elements of $X$ by elements of the subspaces $X_n$. It shows that the error of the projection method is determined by the quality of approximation of the exact solution by elements of the subspace $X_n$.

Consider the equation
\[
S\varphi + K\varphi = f \qquad (379)
\]
and the projection methods
\[
P_nS\varphi_n = P_nf \qquad (380)
\]
and
\[
P_n(S + K)\varphi_n = P_nf \qquad (381)
\]
with subspaces $X_n$ and projection operators $P_n : Y \to Y_n$.

Lemma 9 Assume that $S : X \to Y$ is a bounded operator with a bounded inverse $S^{-1} : Y \to X$ and that the projection method (380) converges for $S$. Let $K : X \to Y$ be a compact operator and let $S + K$ be injective. Then the projection method (381) converges for $S + K$.

Lemma 10 Assume that $Y_n = S(X_n)$ and $\|P_nK - K\| \to 0$, $n \to \infty$. Then, for sufficiently large $n$, the approximate equation (381) is uniquely solvable, and the error estimate
\[
\|\varphi_n - \varphi\| \le M\|P_nS\varphi - S\varphi\|
\]
is valid, where $M$ is a positive constant depending on $S$ and $K$.

For operator equations in Hilbert spaces, the projection method employing orthogonal projection onto finite-dimensional subspaces is called the Galerkin method.

Consider a pair of Hilbert spaces $X$ and $Y$ and assume that $A : X \to Y$ is an injective operator. Let $X_n \subset X$ and $Y_n \subset Y$ be subspaces with $\dim X_n = \dim Y_n = n$. Then $\varphi_n \in X_n$ is a solution of the equation $A\varphi = f$ obtained by the projection method generated by $X_n$ and the orthogonal projection operators $P_n : Y \to Y_n$ if and only if
\[
(A\varphi_n, g) = (f, g) \qquad (382)
\]
for all $g \in Y_n$. Indeed, equations (382) are equivalent to $P_n(A\varphi_n - f) = 0$.

Equation (382) is called the Galerkin equation.


Assume that $X_n$ and $Y_n$ are spanned by the basis and probe functions, $X_n = \mathrm{span}\{u_1, \dots, u_n\}$ and $Y_n = \mathrm{span}\{v_1, \dots, v_n\}$. Then we can express $\varphi_n$ as a linear combination
\[
\varphi_n = \sum_{k=1}^{n}\gamma_k u_k
\]
and show that equation (382) is equivalent to the linear system of order $n$
\[
\sum_{k=1}^{n}\gamma_k(Au_k, v_j) = (f, v_j), \qquad j = 1, \dots, n, \qquad (383)
\]
with respect to the coefficients $\gamma_1, \dots, \gamma_n$.

Thus, the Galerkin method may be specified by choosing the subspaces $X_n$ and the projection operators $P_n$ (or, equivalently, the basis and probe functions). We note that the convergence of the Galerkin method does not depend on the particular choice of the basis functions in the subspaces $X_n$ and of the probe functions in the subspaces $Y_n$. For a theoretical analysis of Galerkin methods it therefore suffices to work with the subspaces $X_n$ and the projection operators $P_n$. In practice, however, it is more convenient to specify the basis and probe functions first, since otherwise the matrix coefficients in (383) cannot be calculated.

21.1 Numerical solution of the logarithmic integral equation by the Galerkin method

Now let us apply the projection scheme of the Galerkin method to the logarithmic integral equation
\[
(L + K)\varphi = \int_{-1}^{1}\Bigl(\ln\frac{1}{|x - y|} + K(x, y)\Bigr)\varphi(x)\frac{dx}{\sqrt{1 - x^2}} = f(y), \qquad -1 < y < 1. \qquad (384)
\]
Here $K(x, y)$ is a smooth function; precise conditions on the smoothness of $K(x, y)$ that guarantee the compactness of the integral operator $K$ are imposed in Section 2.

One can determine an approximate solution of equation (384) by the Galerkin method, taking the Chebyshev polynomials of the first kind as the basis and probe functions. According to the scheme of the Galerkin method, we set
\[
\varphi_N(x) = \frac{a_0}{2}T_0(x) + \sum_{n=1}^{N-1} a_n T_n(x), \qquad
X_N = \mathrm{span}\{T_0, \dots, T_{N-1}\}, \quad Y_N = \mathrm{span}\{T_0, \dots, T_{N-1}\}, \qquad (385)
\]
and find the unknown coefficients from the finite linear system (383). Since $L : X_N \to Y_N$ is a bijective operator, the convergence of the Galerkin method follows from Lemma 10 applied to the operator $L$. If the operator $K$ is compact and $L + K$ is injective, then the convergence of the Galerkin method for the operator $L + K$ follows from Lemma 9. Note also that the quasioptimal estimate (378) in the $L_2^{(1)}$ norm is valid for $\varphi_N(x) \to \varphi(x)$, $N \to \infty$.

Theorem 47 If the operator $L + K : L_2^{(1)} \to \widetilde W_2^1$ is injective and the operator $K : L_2^{(1)} \to \widetilde W_2^1$ is compact, then the Galerkin method (385) converges for the operator $L + K$.
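The following sketch (Python with NumPy) assembles and solves the Galerkin system for an instance of (384). The kernel $K$, the right-hand side $f$, the truncation order $N$, and the quadrature size $M$ are illustrative assumptions; the $T_0$ coefficient is taken without the $1/2$ normalization, and the testing is done in the weighted $L_2^{(1)}$ inner product for simplicity (the theory above projects in $\widetilde W_2^1$). The logarithmic block is diagonal by the same identities that produce (371)–(372).

\begin{verbatim}
import numpy as np

# Galerkin scheme (385) for equation (384) with T_0, ..., T_{N-1} as basis and
# probe functions; the entries of system (383) are formed in the weighted inner
# product (g, T_j) = int g(y) T_j(y) dy / sqrt(1 - y**2).
N, M = 16, 400
theta = (2 * np.arange(1, M + 1) - 1) * np.pi / (2 * M)
x = np.cos(theta)                                   # Gauss-Chebyshev nodes, weight pi/M each
T = np.cos(np.outer(np.arange(N), theta))           # T[n, k] = T_n(x_k)

K = lambda xx, yy: np.exp(xx * yy) / 4.0            # illustrative smooth kernel
f = lambda yy: yy ** 3                              # illustrative right-hand side

# Logarithmic block: int ln(1/|x-y|) T_n(x) dx/sqrt(1-x**2) = pi*ln 2 (n = 0)
# and (pi/n)*T_n(y) (n >= 1), so its Galerkin matrix is diagonal.
n_idx = np.arange(N)
diagL = np.where(n_idx == 0, np.pi ** 2 * np.log(2.0),
                 np.pi ** 2 / (2.0 * np.maximum(n_idx, 1)))

Kmat = K(x[:, None], x[None, :])                    # K(x_a, y_b) on the tensor grid
G = np.diag(diagL) + (np.pi / M) ** 2 * (T @ Kmat @ T.T).T   # G[j, n] = ((L+K) T_n, T_j)
b = (np.pi / M) * T @ f(x)                          # b[j] = (f, T_j)
gamma = np.linalg.solve(G, b)                       # phi_N = sum_n gamma_n T_n

# Residual check: apply L + K to phi_N at a test point and compare with f.
y0 = 0.37
L_phi = gamma[0] * np.pi * np.log(2.0) + sum(
    gamma[n] * (np.pi / n) * np.cos(n * np.arccos(y0)) for n in range(1, N))
K_phi = (np.pi / M) * np.sum(K(x, y0) * (gamma @ T))
print(L_phi + K_phi, f(y0))                         # agree up to discretization error
\end{verbatim}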


The same technique can be applied to the construction of an approximate solution to the hypersingular equation
\[
(H + K)\varphi = \frac{d}{dy}\int_{-1}^{1}\Bigl(\frac{1}{x - y} + K(x, y)\Bigr)\varphi(x)\sqrt{1 - x^2}\,dx = f(y), \qquad -1 < y < 1. \qquad (386)
\]
The conditions on the smoothness of $K(x, y)$ that guarantee the compactness of the integral operator $K$ are imposed in Section 2.

Choose the Chebyshev polynomials of the second kind as the basis and probe functions, i.e., set
\[
X_N = \mathrm{span}\{U_0, \dots, U_{N-1}\}, \qquad Y_N = \mathrm{span}\{U_0, \dots, U_{N-1}\}. \qquad (387)
\]
The unknown approximate solution $\varphi_N(x)$ is sought in the form
\[
\varphi_N(x) = \sum_{n=0}^{N-1} b_n U_n(x). \qquad (388)
\]
Note that the hypersingular operator $H$ maps $X_N$ onto $Y_N$ and is bijective.

From Lemma 10 it follows that the Galerkin method for the operator $H$ converges. By virtue of Lemma 9, this result remains valid for the operator $H + K$ under the assumption that $K$ is compact and $H + K$ is injective. The quasioptimal estimate (378) gives the rate of convergence of the approximate solution $\varphi_N(x)$ to the exact one $\varphi(x)$, $N \to \infty$, in the norm of the space $\widehat W_2^1$. The unknown coefficients $b_n$ are found from the finite linear system (383).

Theorem 48 If the operator $H + K : \widehat W_2^1 \to L_2^{(2)}$ is injective and $K : \widehat W_2^1 \to L_2^{(2)}$ is compact, then the Galerkin method (387) converges for the operator $H + K$.
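An analogous sketch (Python with NumPy) for the hypersingular equation (386) with the basis (387)–(388). Again $K$, $f$, $N$, and $M$ are illustrative assumptions; since $d/dy$ in (386) also acts on the smooth part of the kernel, its derivative is supplied analytically, and the testing is done in the $L_2^{(2)}$ inner product. The hypersingular block becomes diagonal by the classical finite-Hilbert-transform identity for Chebyshev polynomials of the second kind.

\begin{verbatim}
import numpy as np

# Galerkin scheme (387)-(388) for equation (386) with U_0, ..., U_{N-1} as basis
# and probe functions; the entries of system (383) are formed in the inner product
# (g, U_j) = int g(y) U_j(y) sqrt(1 - y**2) dy.
N, M = 16, 400
theta = np.arange(1, M + 1) * np.pi / (M + 1)
y = np.cos(theta)                                    # Gauss-Chebyshev nodes (second kind)
w = np.pi / (M + 1) * np.sin(theta) ** 2             # weights for the weight sqrt(1 - y**2)
U = np.sin(np.outer(np.arange(1, N + 1), theta)) / np.sin(theta)   # U[n, k] = U_n(y_k)

dKdy = lambda xx, yy: np.cos(xx + yy) / 4.0          # d/dy of the smooth kernel K(x,y) = sin(x+y)/4
f = lambda yy: yy ** 2                               # illustrative right-hand side

# Identity: int U_n(x) sqrt(1-x**2)/(x - y) dx = -pi*T_{n+1}(y), hence
# H U_n = -pi*(n+1)*U_n, and the H-block of the Galerkin matrix is diagonal.
n_idx = np.arange(N)
diagH = -np.pi * (n_idx + 1) * np.pi / 2.0

W = U * w                                            # W[n, a] = U_n(x_a) * w_a
G = np.diag(diagH) + (W @ dKdy(y[:, None], y[None, :]) @ W.T).T   # G[j, n] = ((H+K) U_n, U_j)
b = W @ f(y)                                         # b[j] = (f, U_j)
coef = np.linalg.solve(G, b)                         # phi_N = sum_n coef_n U_n

# Residual check at a test point.
y0 = 0.2
t0 = np.arccos(y0)
H_phi = np.sum(coef * (-np.pi) * (n_idx + 1) * np.sin((n_idx + 1) * t0) / np.sin(t0))
K_phi = np.sum(w * dKdy(y, y0) * (coef @ U))
print(H_phi + K_phi, f(y0))                          # agree up to discretization error
\end{verbatim}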
