C.2: Eigenvectors and eigenvalues

Symmetric matrices

If A is a real symmetric N × N matrix then

1. all the eigenvalues and eigenvectors of A are real;

2. every left-eigenvector of A is also a right-eigenvector of A with the same eigenvalue, and vice versa;

3. a set of N eigenvectors and eigenvalues \{\mathbf{e}^{(a)}, \lambda_a\}_{a=1}^{N} can be found that are orthonormal, that is,

       \mathbf{e}^{(a)} \cdot \mathbf{e}^{(b)} = \delta_{ab};    (C.3)

   the matrix can be expressed as a weighted sum of outer products of the eigenvectors:

       A = \sum_{a=1}^{N} \lambda_a [\mathbf{e}^{(a)}][\mathbf{e}^{(a)}]^{\mathsf{T}}.    (C.4)

(Whereas I often use i and n as indices for sets of size I and N, I will use the indices a and b to run over eigenvectors, even if there are N of them. This is to avoid confusion with the components of the eigenvectors, which are indexed by n, e.g. e_n^{(a)}.)

General square matrices

An N × N matrix can have up to N distinct eigenvalues. Generically, there are N eigenvalues, all distinct, and each has one left-eigenvector and one right-eigenvector. In cases where two or more eigenvalues coincide, for each distinct eigenvalue that is non-zero there is at least one left-eigenvector and one right-eigenvector.

Left- and right-eigenvectors that have different eigenvalues are orthogonal, that is,

    if \lambda_a \neq \lambda_b then \mathbf{e}_{\mathrm{L}}^{(a)} \cdot \mathbf{e}_{\mathrm{R}}^{(b)} = 0.    (C.5)

Non-negative matrices

Definition. If all the elements of a non-zero matrix C satisfy C_{mn} \geq 0 then C is a non-negative matrix. Similarly, if all the elements of a non-zero vector c satisfy c_n \geq 0 then c is a non-negative vector.

Properties. A non-negative matrix has a principal eigenvector that is non-negative. It may also have other eigenvectors with the same eigenvalue that are not non-negative. But if the principal eigenvalue of a non-negative matrix is not degenerate, then the matrix has only one principal eigenvector \mathbf{e}^{(1)}, and it is non-negative. Generically, all the other eigenvalues are smaller in absolute magnitude. [There can be several eigenvalues of identical magnitude in special cases.]

Transition probability matrices

An important example of a non-negative matrix is a transition probability matrix Q.

Definition. A transition probability matrix Q has columns that are probability vectors; that is, it satisfies Q \geq 0 and

    \sum_i Q_{ij} = 1 for all j.    (C.6)
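To make the symmetric-matrix properties concrete, here is a minimal numerical sketch of (C.3) and (C.4) in NumPy (an illustration, not from the book; the random matrix and the seed are arbitrary choices):

    import numpy as np

    # Build an arbitrary real symmetric N x N matrix (illustrative choice).
    N = 4
    rng = np.random.default_rng(0)
    B = rng.standard_normal((N, N))
    A = (B + B.T) / 2

    # eigh is NumPy's eigensolver for symmetric matrices; it returns real
    # eigenvalues and an orthonormal set of eigenvectors, as guaranteed above.
    lam, E = np.linalg.eigh(A)          # column E[:, a] is the eigenvector e^(a)

    # (C.3): orthonormality, e^(a) . e^(b) = delta_ab
    assert np.allclose(E.T @ E, np.eye(N))

    # (C.4): A equals the weighted sum of outer products of its eigenvectors
    A_rebuilt = sum(lam[a] * np.outer(E[:, a], E[:, a]) for a in range(N))
    assert np.allclose(A, A_rebuilt)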
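The orthogonality relation (C.5) can be checked the same way. This sketch uses an arbitrary non-symmetric example matrix with distinct eigenvalues, and obtains the left-eigenvectors of M as the right-eigenvectors of its transpose:

    import numpy as np

    M = np.array([[2.0, 1.0],
                  [0.0, 3.0]])          # arbitrary non-symmetric example
    lam_R, R = np.linalg.eig(M)         # columns of R: right-eigenvectors e_R
    lam_L, L = np.linalg.eig(M.T)       # left-eigenvectors of M = right-eigenvectors of M^T

    # Sort both sets by eigenvalue so index a refers to the same lambda_a.
    R = R[:, np.argsort(lam_R)]
    L = L[:, np.argsort(lam_L)]

    # (C.5): the off-diagonal entries of G, G[a, b] = e_L^(a) . e_R^(b),
    # vanish wherever lambda_a != lambda_b.
    G = L.T @ R
    assert np.allclose(G - np.diag(np.diag(G)), 0.0)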
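Finally, a small sketch of definition (C.6): the 2 × 2 matrix Q below is an arbitrary example whose columns are probability vectors; its principal eigenvector, normalized to sum to one, is non-negative, as the Properties above require.

    import numpy as np

    Q = np.array([[0.9, 0.2],
                  [0.1, 0.8]])          # columns are probability vectors

    # (C.6): Q >= 0 and each column sums to 1
    assert np.all(Q >= 0) and np.allclose(Q.sum(axis=0), 1.0)

    lam, E = np.linalg.eig(Q)
    a = int(np.argmax(np.abs(lam)))     # index of the principal eigenvalue
    e1 = np.real(E[:, a])
    e1 = e1 / e1.sum()                  # normalize to a probability vector
    print(lam[a], e1)                   # here: eigenvalue 1.0, e1 = [2/3, 1/3]

For a transition probability matrix the principal eigenvalue is always 1: the all-ones vector is a left-eigenvector with eigenvalue 1 by (C.6), and no eigenvalue can exceed the maximum column sum in magnitude.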
