Linear Algebra and Applications: Numerical Linear Algebra
IMA Summer Program, 2008

My Pledge to You

I promise not to cover as much material as I previously claimed I would.
Resources (a biased list)

David S. Watkins, Fundamentals of Matrix Computations, Second Edition, John Wiley and Sons, 2002. (FMC)
David S. Watkins, The QR algorithm revisited, SIAM Review, 50 (2008), pp. 133–145.
David S. Watkins, The Matrix Eigenvalue Problem: GR and Krylov Subspace Methods, SIAM, 2007.
Leslie Hogben, Handbook of Linear Algebra, Chapman and Hall/CRC, 2007.
Lloyd N. Trefethen and David Bau, III, Numerical Linear Algebra, SIAM, 1997.
James W. Demmel, Applied Numerical Linear Algebra, SIAM, 1997.
G. H. Golub and C. F. Van Loan, Matrix Computations, Third Edition, Johns Hopkins University Press, 1996.
Common Linear Algebra Computations

linear system Ax = b
overdetermined linear system Ax = b
eigenvalue problem Av = λv
various generalized eigenvalue problems, e.g. Av = λBv
Linear Systems

Ax = b, n × n, nonsingular, real or complex
Examples: FMC §1.2, §7.1; any linear algebra text
Major tools:
Gaussian elimination (LU decomposition)
various iterative methods
Overdetermined Linear Systems

Ax = b, A is n × m, n > m (often n ≫ m)
Example: fitting data by a straight line
minimize ‖b − Ax‖₂ (least squares)
Major tools:
QR decomposition
singular value decomposition
Eigenvalue Problems

standard: Av = λv, n × n, real or complex (Examples: FMC §5.1)
generalized: Av = λBv (Examples: FMC §6.7)
product: A₁A₂ (Examples: generalized (AB⁻¹), SVD (A∗A))
quadratic: (λ²K + λG + M)v = 0
Sizes of Linear Algebra Problems

small
medium
large
Solving Linear Systems: small problems

Ax = b, n × n, n "small"
store A conventionally
solve using Gaussian elimination
A = LU
PA = LU (partial pivoting)
forward and back substitution
Questions: cost? accuracy? (FMC Ch. 2)
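The steps above can be sketched in NumPy. The names `lu_pp`, `forward_sub`, and `back_sub` are illustrative, not from the source; in practice one would call a LAPACK-backed routine such as `scipy.linalg.lu_factor` instead.

```python
import numpy as np

def lu_pp(A):
    """PA = LU via Gaussian elimination with partial pivoting.
    Returns the row permutation p and the triangular factors L, U."""
    A = A.astype(float).copy()
    n = A.shape[0]
    p = np.arange(n)
    for k in range(n - 1):
        # pivot: bring the largest-magnitude entry in column k to the diagonal
        m = k + np.argmax(np.abs(A[k:, k]))
        A[[k, m]] = A[[m, k]]
        p[[k, m]] = p[[m, k]]
        # eliminate below the pivot; multipliers overwrite the zeroed entries
        A[k+1:, k] /= A[k, k]
        A[k+1:, k+1:] -= np.outer(A[k+1:, k], A[k, k+1:])
    L = np.tril(A, -1) + np.eye(n)
    U = np.triu(A)
    return p, L, U

def forward_sub(L, b):
    """Solve Ly = b for unit lower triangular L."""
    y = np.zeros_like(b, dtype=float)
    for i in range(len(b)):
        y[i] = (b[i] - L[i, :i] @ y[:i]) / L[i, i]
    return y

def back_sub(U, y):
    """Solve Ux = y for upper triangular U."""
    x = np.zeros_like(y, dtype=float)
    for i in reversed(range(len(y))):
        x[i] = (y[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
    return x
```

With the factorization in hand, solving Ax = b costs only one forward and one back substitution per right-hand side: x = back_sub(U, forward_sub(L, b[p])).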
Positive Definite Case

A = A∗, x∗Ax > 0 for all x ≠ 0
A = R∗R (Cholesky decomposition)
symmetric variant of Gaussian elimination
flop count is halved
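A minimal sketch of the Cholesky factorization in the real case (A = RᵀR with R upper triangular); `cholesky_upper` is an illustrative name, and library code (`numpy.linalg.cholesky`, LAPACK's `dpotrf`) would be used in practice.

```python
import numpy as np

def cholesky_upper(A):
    """Upper-triangular R with A = R^T R for real symmetric positive
    definite A -- the symmetric variant of Gaussian elimination."""
    n = A.shape[0]
    R = np.zeros_like(A, dtype=float)
    for i in range(n):
        # diagonal entry: what remains of a_ii after earlier rows are removed
        d = A[i, i] - R[:i, i] @ R[:i, i]
        if d <= 0:
            raise ValueError("matrix is not positive definite")
        R[i, i] = np.sqrt(d)
        # rest of row i
        R[i, i+1:] = (A[i, i+1:] - R[:i, i] @ R[:i, i+1:]) / R[i, i]
    return R
```

Only the upper triangle of A is referenced, which is where the halved flop count (and halved storage, if exploited) comes from.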
Solving Linear Systems: medium problems

Larger problems are usually sparser.
Use a sparse data structure.
sparse Gaussian elimination: A = LU
factors "usually" less sparse than A, but still sparse
Crucial question: can the factors fit in main memory?
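As one concrete illustration (not from the source), SciPy's SuperLU wrapper performs sparse Gaussian elimination on a matrix stored in compressed-column format; comparing nonzero counts shows the fill-in in the factors.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# A sparse tridiagonal system (1-D Laplacian), stored in CSC format
# so that only nonzero entries occupy memory.
n = 100
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

lu = spla.splu(A)      # sparse LU factorization (SuperLU)
x = lu.solve(b)

# fill-in: the factors typically have more nonzeros than A, but stay sparse
print("nnz(A) =", A.nnz, "  nnz(L) + nnz(U) =", lu.L.nnz + lu.U.nnz)
```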
Solving Linear Systems: large problems

L and U factors may be too large to store . . .
Use an iterative method.
direct vs. iterative methods
Some buzzwords: descent methods, conjugate gradients (CG), GMRES, . . .
preconditioners, and on and on
FMC Chapter 7
Richard Barrett et al., Templates for the Solution of Linear Systems, . . . , SIAM, 1994. (FREE!!!)
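A bare-bones conjugate gradient sketch (no preconditioner) makes the key point concrete: only matrix–vector products with A are needed, never a factorization. The name `cg` is illustrative; production work would use, e.g., `scipy.sparse.linalg.cg`.

```python
import numpy as np

def cg(A, b, tol=1e-10, maxiter=500):
    """Conjugate gradients for symmetric positive definite A."""
    x = np.zeros_like(b, dtype=float)
    r = b - A @ x          # residual
    p = r.copy()           # search direction
    rs = r @ r
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rs / (p @ Ap)      # step length minimizing along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p  # new A-conjugate search direction
        rs = rs_new
    return x
```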
Moving On: Orthogonal Transformations

generally useful computing tools
sticking to the real case for simplicity
standard inner product: ⟨x, y⟩ = x₁y₁ + ⋯ + xₙyₙ
Euclidean norm: ‖x‖₂ = (x₁² + ⋯ + xₙ²)^(1/2), so ‖x‖₂ = √⟨x, x⟩
definition of orthogonal: Qᵀ = Q⁻¹
properties of orthogonal matrices
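A quick numerical check of the key properties: an orthogonal Q satisfies QᵀQ = I and preserves inner products and Euclidean norms. Here Q is obtained from NumPy's QR factorization of a random matrix, which is one standard way to produce a test orthogonal matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((5, 5)))  # Q is orthogonal

x = rng.standard_normal(5)
y = rng.standard_normal(5)

print(np.allclose(Q.T @ Q, np.eye(5)))                       # Q^T Q = I
print(np.isclose((Q @ x) @ (Q @ y), x @ y))                  # inner products preserved
print(np.isclose(np.linalg.norm(Q @ x), np.linalg.norm(x)))  # norms preserved
```

Norm preservation is what makes orthogonal transformations so attractive numerically: they do not amplify rounding errors.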
Elementary Reflectors

= Householder transformations
one of two major classes of computationally useful orthogonal transformations
Q = I − 2uuᵀ, ‖u‖₂ = 1
geometric action
Qx = y
creating zeros
details: FMC Chapter 3
QR decomposition
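One way to build the reflector that "creates zeros", sketched under the conventions above (the sign choice below avoids cancellation; `householder` is an illustrative name — see FMC Chapter 3 for the careful version).

```python
import numpy as np

def householder(x):
    """Reflector Q = I - 2 u u^T with ||u||_2 = 1 that maps x to a
    multiple of e_1, i.e. zeros out all entries of x below the first."""
    v = x.astype(float).copy()
    # reflect x onto -sign(x_0) ||x||_2 e_1 so that v[0] is computed
    # by addition of like-signed terms (no cancellation)
    v[0] += np.sign(v[0] if v[0] != 0 else 1.0) * np.linalg.norm(x)
    u = v / np.linalg.norm(v)
    return np.eye(len(x)) - 2.0 * np.outer(u, u)
```

Applying such reflectors column by column to annihilate subdiagonal entries is exactly how the Householder QR decomposition is computed.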
Uses of the QR Decomposition

Ax = b, n × n
overdetermined systems
orthonormalizing vectors
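The overdetermined use case can be illustrated with the earlier line-fitting example: with the reduced QR decomposition A = QR, the least-squares solution of min ‖b − Ax‖₂ comes from the triangular system Rx = Qᵀb. The data below are made up for illustration.

```python
import numpy as np

# Fit a straight line y ~ c0 + c1*t: an overdetermined 4 x 2 system.
t = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 3.0, 5.0, 7.0])          # lies exactly on y = 1 + 2t
A = np.column_stack([np.ones_like(t), t])

# Reduced QR: Q has orthonormal columns, R is m x m upper triangular.
Q, R = np.linalg.qr(A)

# Least-squares solution: solve R x = Q^T y (back substitution in practice).
coeffs = np.linalg.solve(R, Q.T @ y)
```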
The Gram-Schmidt Process

orthonormalization of vectors
relationship to the QR decomposition
reorthogonalization
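A sketch of the modified Gram-Schmidt variant (numerically preferable to the classical one), which makes the relationship to QR explicit: orthonormalizing the columns of A produces Q, and the recorded coefficients form the upper-triangular R with A = QR. The name `mgs` is illustrative.

```python
import numpy as np

def mgs(A):
    """Modified Gram-Schmidt: orthonormalize the columns of A,
    returning Q (orthonormal columns) and upper-triangular R, A = QR."""
    A = A.astype(float).copy()
    n, m = A.shape
    Q = np.zeros((n, m))
    R = np.zeros((m, m))
    for k in range(m):
        R[k, k] = np.linalg.norm(A[:, k])
        Q[:, k] = A[:, k] / R[k, k]
        # immediately subtract the new direction from all remaining columns
        for j in range(k + 1, m):
            R[k, j] = Q[:, k] @ A[:, j]
            A[:, j] -= R[k, j] * Q[:, k]
    return Q, R
```

When the columns are nearly dependent, even MGS can lose orthogonality, which is where reorthogonalization (a second Gram-Schmidt pass) comes in.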
The SVD

singular value decomposition: A = UΣVᵀ
product eigenvalue problem
FMC Chapter 4
numerical rank determination
solution of least-squares problems
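A small illustration (not from the source) of numerical rank determination: the rank is the number of singular values above a threshold, since tiny singular values are indistinguishable from rounding noise. The matrix and the simple threshold below are made up for the example.

```python
import numpy as np

# A 4 x 3 matrix whose third column is the sum of the first two: rank 2.
rng = np.random.default_rng(1)
B = rng.standard_normal((4, 2))
A = np.column_stack([B, B[:, 0] + B[:, 1]])

U, s, Vt = np.linalg.svd(A)   # A = U Sigma V^T, s sorted descending

# Simple relative threshold; standard practice scales machine epsilon
# by the matrix dimensions instead.
tol = 1e-10 * s[0]
numerical_rank = int(np.sum(s > tol))
```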
End of Part I