Numerical Methods Course Notes Version 0.1 (UCSD Math 174, Fall ...


3.4.2 Dividing by Multiplying

We saw that Gaussian Elimination requires around $\frac{2}{3}n^3$ operations just to find the LU factorization, then about $n^2$ operations to solve the system, when $A$ is $n \times n$. When $n$ is large, this may take too long to be practical. Additionally, if $A$ is sparse (has few nonzero elements per row), we would like the complexity of our computations to scale with the sparsity of $A$. Thus we look for an alternative algorithm.

First we consider the simplest case, $n = 1$. Suppose we are to solve the equation
$$Ax = b$$
for scalars $A, b$. We solve this by
$$x = \frac{1}{A}\,b = \frac{1}{\omega A}\,\omega b = \frac{1}{1 - (1 - \omega A)}\,\omega b = \frac{1}{1 - r}\,\omega b,$$
where $\omega \neq 0$ is some real number chosen to “weight” the problem appropriately, and $r = 1 - \omega A$.
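As a quick sanity check, with illustrative numbers not taken from the notes: take $A = 4$, $b = 8$, and choose $\omega = \frac{1}{5}$. Then
$$r = 1 - \omega A = 1 - \tfrac{4}{5} = \tfrac{1}{5}, \qquad x = \frac{1}{1 - r}\,\omega b = \frac{5}{4}\cdot\frac{8}{5} = 2 = \frac{b}{A},$$
so the weighted form recovers the exact solution, and $|r| < 1$ as required.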

Now suppose that $\omega$ is chosen such that $|r| < 1$. This can be done so long as $A \neq 0$, which would have been a problem anyway. Now use the geometric expansion:

$$\frac{1}{1 - r} = 1 + r + r^2 + r^3 + \cdots$$

Because of the assumption $|r| < 1$, the terms $r^n$ converge to zero as $n \to \infty$. This gives the approximate solution to our one-dimensional problem as

\begin{align*}
x &\approx \left[1 + r + r^2 + r^3 + \cdots + r^k\right]\omega b \\
&= \omega b + \left[r + r^2 + r^3 + \cdots + r^k\right]\omega b \\
&= \omega b + r\left[1 + r + r^2 + \cdots + r^{k-1}\right]\omega b.
\end{align*}

This suggests an iterative approach to solving $Ax = b$. First let $x^{(0)} = \omega b$, then let
$$x^{(k)} = \omega b + r\,x^{(k-1)}.$$
The iterates $x^{(k)}$ will converge to the solution of $Ax = b$ if $|r| < 1$.

You should now convince yourself that because $r^n \to 0$, the choice of the initial iterate $x^{(0)}$ is immaterial, i.e., convergence is guaranteed under any choice of initial iterate.
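As a concrete illustration, here is a minimal sketch of the scalar iteration in Python; the name `scalar_iterate` and the stopping rule (a fixed number of steps) are my own choices, not from the notes:

```python
def scalar_iterate(A, b, omega, k_max=100):
    """Approximate x = b/A via the update x^(k) = omega*b + r*x^(k-1)."""
    r = 1.0 - omega * A           # convergence requires |r| < 1
    x = omega * b                 # initial iterate x^(0) = omega*b
    for _ in range(k_max):
        x = omega * b + r * x     # x^(k) = omega*b + r*x^(k-1)
    return x

# With A = 4, b = 8, omega = 1/5 we get r = 1/5, and the iterates
# approach x = 2 = b/A.
print(scalar_iterate(4.0, 8.0, 0.2))
```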

We now translate this scalar result into the vector case. The algorithm proceeds as follows: first fix some initial estimate of the solution, $x^{(0)}$. A good choice might be $\omega b$, but this is not necessary. Then calculate successive approximations to the actual solution by updates of the form
$$x^{(k)} = \omega b + (I - \omega A)\,x^{(k-1)}.$$
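In Python with NumPy, this vector update might be sketched as follows; the function name `richardson` (this weighted scheme is commonly known as Richardson iteration) and the small test matrix are my own illustrative choices:

```python
import numpy as np

def richardson(A, b, omega, k_max=200):
    """Iterate x^(k) = omega*b + (I - omega*A) x^(k-1).

    Converges when every eigenvalue of I - omega*A has modulus < 1."""
    x = omega * b                             # initial iterate x^(0)
    for _ in range(k_max):
        x = omega * b + x - omega * (A @ x)   # never forms I - omega*A
    return x

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = richardson(A, b, omega=0.2)
print(x, A @ x - b)   # residual should be near zero
```

Note that each step costs one matrix-vector product plus vector additions, which is cheap when $A$ is sparse.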

It turns out that we can consider a slightly more general form of the algorithm, one in which successive iterates are defined implicitly. That is, we consider iterates of the form
$$Qx^{(k+1)} = (Q - \omega A)\,x^{(k)} + \omega b, \tag{3.3}$$
for some matrix $Q$ and some scaling factor $\omega$. Note that this update relies on vector additions and premultiplications of a vector by $A$ or $Q$. In the case where these two matrices are sparse, such an update can be relatively cheap.
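As a sketch of update (3.3), one natural choice is a diagonal $Q$, so the implicit solve against $Q$ is just elementwise division; with $Q = \mathrm{diag}(A)$ and $\omega = 1$ this is the classical Jacobi iteration. The function below and its test matrix are illustrative assumptions, not from the notes:

```python
import numpy as np

def splitting_iterate(A, b, Q_diag, omega=1.0, k_max=200):
    """Update (3.3) with diagonal Q:
    solve Q x^(k+1) = (Q - omega*A) x^(k) + omega*b.

    With Q diagonal, the 'solve' is elementwise division, so each step
    costs one multiplication by A plus vector operations."""
    x = omega * b / Q_diag
    for _ in range(k_max):
        rhs = Q_diag * x - omega * (A @ x) + omega * b   # (Q - omega*A) x + omega*b
        x = rhs / Q_diag                                 # apply Q^{-1}
    return x

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = splitting_iterate(A, b, Q_diag=np.diag(A))   # Q = diag(A), omega = 1: Jacobi
print(x, A @ x - b)
```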
