Numerical Methods Course Notes Version 0.1 (UCSD Math 174, Fall ...)


3.3.3 Some Theory

We aren't doing much proving here. The following theorem has an ugly proof in Cheney & Kincaid [7].

Theorem 3.3. If A is an n × n matrix, and naïve Gaussian Elimination does not encounter a zero pivot, then the algorithm generates an LU factorization of A, where L is the lower triangular part of the output matrix, and U is the upper triangular part.

This theorem relies on us using the fancy version of G.E., which saves the multipliers in the spots where there would otherwise be zeros. If correctly implemented, then, L is the lower triangular part of the output, but with ones put on the diagonal.

This theorem is proved in Cheney & Kincaid [7]. This appears to me to be a case of something which can be better illustrated with an example or two and some informal investigation. The proof is an unilluminating index chase; read it at your own risk.
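In that spirit of informal investigation, here is a sketch (in Python with NumPy, not from the notes) of naïve Gaussian Elimination that saves each multiplier in the spot it zeroes out, then extracts L (with ones placed on the diagonal) and U from the single output matrix and checks that they reproduce A:

```python
import numpy as np

def naive_ge_lu(A):
    """Naive Gaussian Elimination, storing each multiplier in the
    entry it eliminates so that L and U share one output matrix."""
    M = A.astype(float).copy()
    n = M.shape[0]
    for k in range(n - 1):
        if M[k, k] == 0:
            raise ZeroDivisionError("zero pivot encountered")
        for i in range(k + 1, n):
            m = M[i, k] / M[k, k]          # the multiplier
            M[i, k] = m                    # saved where a zero would go
            M[i, k+1:] -= m * M[k, k+1:]   # row operation on the rest of row i
    L = np.tril(M, -1) + np.eye(n)         # strictly lower part, ones on diagonal
    U = np.triu(M)                         # upper triangular part
    return L, U

A = np.array([[2.0, 1.0, 1.0],
              [4.0, 3.0, 3.0],
              [8.0, 7.0, 9.0]])
L, U = naive_ge_lu(A)
print(np.allclose(L @ U, A))   # True
```

Running it on a few random matrices (avoiding zero pivots) is a quicker route to believing the theorem than the index chase.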

3.3.4 Computing Inverses

We consider finding the inverse of A. Since

\[ A A^{-1} = I, \]

the $j$th column of the inverse $A^{-1}$ solves the equation

\[ A x = e_j, \]

where $e_j$ is the column matrix of all zeros, but with a one in the $j$th position.

Thus we can find the inverse of A by running n linear solves. Obviously we are only going to run G.E. once, to put the matrix in LU form, then run n solves using forward and backward substitution.
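A sketch of this recipe, using SciPy's `lu_factor`/`lu_solve` for the single factorization and the n triangular solves (SciPy adds partial pivoting, which naïve G.E. as described above does not have):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def inverse_via_lu(A):
    """Invert A with one LU factorization followed by n solves:
    column j of the inverse solves A x = e_j."""
    n = A.shape[0]
    lu, piv = lu_factor(A)       # Gaussian Elimination, run once
    inv = np.empty((n, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = 1.0               # e_j: all zeros, a one in position j
        # forward and backward substitution against the stored factors
        inv[:, j] = lu_solve((lu, piv), e)
    return inv

A = np.array([[4.0, 3.0],
              [6.0, 3.0]])
print(np.allclose(inverse_via_lu(A) @ A, np.eye(2)))   # True
```

The point of the ordering is cost: the factorization is done once, and each of the n extra right-hand sides only pays for the two cheap substitution sweeps.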

3.4 Iterative Solutions

Recall we are trying to solve

\[ A x = b. \]

We examine the computational cost of Gaussian Elimination to motivate the search for an alternative algorithm.

3.4.1 An Operation Count for Gaussian Elimination

We consider the number of floating point operations ("flops") required to solve the system $Ax = b$. Gaussian Elimination first uses row operations to transform the problem into an equivalent problem of the form $Ux = b'$, where $U$ is upper triangular. Then back substitution is used to solve for $x$.

First we look at how many floating point operations are required to reduce the augmented matrix

\[
\begin{pmatrix}
a_{11} & a_{12} & a_{13} & \cdots & a_{1n} & b_1 \\
a_{21} & a_{22} & a_{23} & \cdots & a_{2n} & b_2 \\
a_{31} & a_{32} & a_{33} & \cdots & a_{3n} & b_3 \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
a_{n1} & a_{n2} & a_{n3} & \cdots & a_{nn} & b_n
\end{pmatrix}
\]

to upper triangular form.
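One informal way to get at this count is to simulate the elimination loop structure and tally the operations directly. The following Python sketch (not from the notes; it assumes the standard loop structure and counts each multiply, divide, add, or subtract as one flop, including work on the b column):

```python
def ge_flop_count(n):
    """Tally the flops in the forward-elimination phase of Gaussian
    Elimination on an n x (n+1) augmented matrix, by walking the loop
    structure without doing any actual arithmetic."""
    flops = 0
    for k in range(n - 1):              # pivot rows
        for i in range(k + 1, n):       # rows below the pivot
            flops += 1                  # one divide to form the multiplier
            # row i is updated in columns k+1..n-1 of A plus the b entry:
            # n - k entries, each costing one multiply and one subtract
            flops += 2 * (n - k)
    return flops

print(ge_flop_count(2))   # 5
print(ge_flop_count(3))   # 19
```

Tabulating `ge_flop_count(n)` for growing n makes the cubic growth of the elimination phase easy to spot before doing the sum by hand.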
