
Solving the normal equations as a matrix equation has certain drawbacks:

• The integrals $\int_a^b x^{i+j}\,dx = (b^{i+j+1} - a^{i+j+1})/(i+j+1)$ in the coefficient matrix give rise to a matrix equation that is prone to roundoff error (see the numerical sketch after this list).

• There is no easy way to go from $P_n(x)$ to $P_{n+1}(x)$ (which we might want to do if we were not satisfied with the approximation provided by $P_n$).
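To see the first drawback concretely, here is a minimal Julia sketch (the helper name `gram` and the interval $[0,1]$ are illustrative choices, not from the text). On $[0,1]$ the coefficient matrix has entries $1/(i+j+1)$, which is exactly the Hilbert matrix, and its condition number grows rapidly with $n$:

```julia
using LinearAlgebra

# Coefficient matrix of the normal equations for the monomial basis on [a,b]:
# entry (k,j) is ∫_a^b x^{(j-1)+(k-1)} dx = (b^{j+k-1} - a^{j+k-1}) / (j+k-1).
gram(n, a, b) = [(b^(j + k - 1) - a^(j + k - 1)) / (j + k - 1) for k in 1:n+1, j in 1:n+1]

# On [0,1] this is the (n+1)×(n+1) Hilbert matrix; its condition number
# blows up quickly, so solving the normal equations is prone to roundoff error.
for n in (2, 5, 10)
    println("n = $n: cond ≈ ", cond(gram(n, 0.0, 1.0)))
end
```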

There is a better approach to solving the discrete and continuous least squares problems, using the orthogonal polynomials we encountered in Gaussian quadrature. Both the discrete and continuous least squares problems try to find a polynomial $P_n(x) = \sum_{j=0}^{n} a_j x^j$ that satisfies some properties. Notice how the polynomial is written in terms of the monomial basis functions $x^j$, and recall how these basis functions caused numerical difficulties in interpolation. That was the reason we discussed different basis functions, like Lagrange and Newton, for the interpolation problem. So the idea is to write $P_n(x)$ in terms of some other basis functions $\phi_j$:

$$P_n(x) = \sum_{j=0}^{n} a_j \phi_j(x),$$

which would then update the normal equations for continuous least squares (5.11) as

$$\sum_{j=0}^{n} a_j \int_a^b \phi_j(x)\phi_k(x)\,dx = \int_a^b f(x)\phi_k(x)\,dx$$

for $k = 0, 1, \ldots, n$.
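As a sketch of how these equations might be set up and solved numerically (assuming the QuadGK package for the integrals; the function name `cls_coeffs` is hypothetical, not from the text):

```julia
using QuadGK, LinearAlgebra

# Solve the continuous least squares normal equations for a given basis ϕ_0,...,ϕ_n:
#   sum_j a_j ∫_a^b ϕ_j ϕ_k dx = ∫_a^b f ϕ_k dx,   k = 0,...,n.
function cls_coeffs(f, basis, a, b)
    n = length(basis)
    G = [quadgk(x -> basis[j](x) * basis[k](x), a, b)[1] for k in 1:n, j in 1:n]
    r = [quadgk(x -> f(x) * basis[k](x), a, b)[1] for k in 1:n]
    return G \ r   # coefficients a_0,...,a_n
end

# Example: least squares approximation of e^x on [0,1] by a quadratic
basis = [x -> 1.0, x -> x, x -> x^2]
coeffs = cls_coeffs(exp, basis, 0.0, 1.0)
```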

The normal equations for the discrete least squares (5.1) get a similar update:

$$\sum_{j=0}^{n} a_j \left( \sum_{i=1}^{m} \phi_j(x_i)\phi_k(x_i) \right) = \sum_{i=1}^{m} y_i \phi_k(x_i).$$
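A corresponding sketch for the discrete case (again with illustrative names): writing $\Phi_{ij} = \phi_j(x_i)$, the left-hand side above is $(\Phi^T\Phi)\,a$ and the right-hand side is $\Phi^T y$, so the normal equations can be solved directly in matrix form:

```julia
# Discrete least squares with a general basis: Φ[i, j] = ϕ_j(x_i).
# The normal equations Σ_j a_j (Σ_i ϕ_j(x_i)ϕ_k(x_i)) = Σ_i y_i ϕ_k(x_i)
# are exactly Φ'Φ a = Φ'y in matrix form.
function dls_coeffs(xs, ys, basis)
    Φ = [ϕ(x) for x in xs, ϕ in basis]   # m × (n+1) design matrix
    return (Φ' * Φ) \ (Φ' * ys)
end

xs = range(0, 1, length = 20)
ys = exp.(xs)
dls_coeffs(xs, ys, [x -> 1.0, x -> x, x -> x^2])
```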

Going forward, the crucial observation is that the integral of the product of two functions, $\int \phi_j(x)\phi_k(x)\,dx$, or the summation of the product of two functions evaluated at some discrete points, $\sum \phi_j(x_i)\phi_k(x_i)$, can be viewed as an inner product $\langle \phi_j, \phi_k \rangle$ of two vectors in a suitably defined vector space. And when the functions (vectors) $\phi_j$ are orthogonal, the inner product $\langle \phi_j, \phi_k \rangle$ is 0 if $j \neq k$, which makes the normal equations trivial to solve. We will discuss the details in the next section.
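To preview why orthogonality helps: if $\langle \phi_j, \phi_k \rangle = 0$ for $j \neq k$, the coefficient matrix is diagonal and each coefficient decouples as $a_k = \langle f, \phi_k \rangle / \langle \phi_k, \phi_k \rangle$. A small check with the first few Legendre polynomials on $[-1,1]$ (a sketch, again assuming QuadGK):

```julia
using QuadGK

# First few Legendre polynomials, which are orthogonal on [-1,1]
P = [x -> 1.0, x -> x, x -> (3x^2 - 1) / 2]

inner(f, g) = quadgk(x -> f(x) * g(x), -1, 1)[1]

inner(P[2], P[3])                          # ≈ 0: off-diagonal entry vanishes
a2 = inner(exp, P[3]) / inner(P[3], P[3])  # decoupled coefficient for ϕ_2
```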


5.3 Orthogonal polynomials and least squares

Our discussion in this section will mostly center around the continuous least squares problem; however, the discrete problem can be approached similarly. Consider the sets $C^0[a,b]$, the
