An Introduction to the Conjugate Gradient Method Without the ...
Jonathan Richard Shewchuk

Figure 19: Solid lines represent the starting points that give the worst convergence for Steepest Descent. Dashed lines represent steps toward convergence. If the first iteration starts from a worst-case point, so do all succeeding iterations. Each step taken intersects the paraboloid axes (gray arrows) at precisely a 45° angle. Here, $\kappa = 3.5$.

An upper bound for $\omega$ (corresponding to the worst-case starting points) is found by setting $\mu^2 = \kappa^2$:

$$
\omega^2 \le 1 - \frac{4\kappa^4}{\kappa^5 + 2\kappa^4 + \kappa^3}
         = \frac{\kappa^5 - 2\kappa^4 + \kappa^3}{\kappa^5 + 2\kappa^4 + \kappa^3}
         = \frac{(\kappa - 1)^2}{(\kappa + 1)^2},
$$

$$
\omega \le \frac{\kappa - 1}{\kappa + 1}. \tag{27}
$$

Inequality 27 is plotted in Figure 20. The more ill-conditioned the matrix (that is, the larger its condition number $\kappa$), the slower the convergence of Steepest Descent. In Section 9.2, it is proven that Equation 27 is also valid for $n > 2$, if the condition number of a symmetric, positive-definite matrix is defined to be

$$
\kappa = \lambda_{\max} / \lambda_{\min},
$$

the ratio of the largest to smallest eigenvalue. The convergence results for Steepest Descent are

$$
\|e_{(i)}\|_A \le \left( \frac{\kappa - 1}{\kappa + 1} \right)^i \|e_{(0)}\|_A, \qquad \text{and} \tag{28}
$$

$$
f(x_{(i)}) - f(x) = \frac{1}{2} e_{(i)}^T A e_{(i)}
\le \left( \frac{\kappa - 1}{\kappa + 1} \right)^{2i} \left( \frac{1}{2} e_{(0)}^T A e_{(0)} \right)
= \left( \frac{\kappa - 1}{\kappa + 1} \right)^{2i} \left( f(x_{(0)}) - f(x) \right).
$$
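The contraction bound can be checked numerically. The sketch below (not from the paper; the matrix, starting point, and iteration count are illustrative choices) runs Steepest Descent on a small symmetric positive-definite system and verifies that the energy-norm error shrinks by no more than $(\kappa - 1)/(\kappa + 1)$ at each step. The chosen matrix has eigenvalues 7 and 2, so $\kappa = 3.5$, matching the worst-case example of Figure 19:

```python
import numpy as np

# Steepest Descent on f(x) = (1/2) x^T A x - b^T x for a symmetric
# positive-definite A. Illustrative sketch: per Inequality 27, the
# energy-norm error ||e_i||_A contracts by at most (kappa-1)/(kappa+1)
# on every iteration, whatever the starting point.

A = np.array([[3.0, 2.0],
              [2.0, 6.0]])           # example SPD matrix; eigenvalues 7 and 2
b = np.array([2.0, -8.0])
x_exact = np.linalg.solve(A, b)      # exact solution, for measuring the error

eigvals = np.linalg.eigvalsh(A)
kappa = eigvals[-1] / eigvals[0]     # condition number lambda_max / lambda_min
bound = (kappa - 1.0) / (kappa + 1.0)

def energy_norm(e):
    """||e||_A = sqrt(e^T A e)."""
    return np.sqrt(e @ A @ e)

x = np.array([-2.0, -2.0])           # arbitrary starting point
for i in range(10):
    r = b - A @ x                    # residual = negative gradient
    alpha = (r @ r) / (r @ A @ r)    # exact line search step length
    x_new = x + alpha * r
    ratio = energy_norm(x_new - x_exact) / energy_norm(x - x_exact)
    # per-step contraction never exceeds the worst-case bound
    assert ratio <= bound + 1e-12
    x = x_new

print(f"kappa = {kappa:.3f}, bound (kappa-1)/(kappa+1) = {bound:.3f}")
```

Because the starting error here has components along both eigenvectors, the iterates zigzag indefinitely rather than converging in one step, and the observed per-step ratio settles near (but never above) the worst-case value $5/9 \approx 0.556$.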
