Copyright Cambridge University Press 2003. On-screen viewing permitted. Printing not permitted. http://www.cambridge.org/0521642981 You can buy this book for 30 pounds or $50. See http://www.inference.phy.cam.ac.uk/mackay/itila/ for links.

C.3 Perturbation theory

Table C.6. Illustrative transition probability matrices and their eigenvectors, showing the two ways of being non-ergodic. (a) More than one principal eigenvector with eigenvalue 1, because the state space falls into two unconnected pieces. (a$'$) A small perturbation breaks the degeneracy of the principal eigenvectors. (b) Under this chain, the density may oscillate between two parts of the state space. In addition to the invariant distribution, there is another right-eigenvector with eigenvalue $-1$. In general such circulating densities correspond to complex eigenvalues with magnitude 1.

Matrix (a):
\[
\begin{pmatrix} .90 & .20 & 0 & 0 \\ .10 & .80 & 0 & 0 \\ 0 & 0 & .90 & .20 \\ 0 & 0 & .10 & .80 \end{pmatrix}
\]
Eigenvalues $\lambda$ and eigenvector pairs $(\mathbf{e}_L, \mathbf{e}_R)$:
\begin{align*}
\lambda = 1:&\quad \mathbf{e}_L = (0, 0, .71, .71), & \mathbf{e}_R &= (0, 0, .89, .45) \\
\lambda = 1:&\quad \mathbf{e}_L = (.71, .71, 0, 0), & \mathbf{e}_R &= (.89, .45, 0, 0) \\
\lambda = 0.70:&\quad \mathbf{e}_L = (.45, -.89, 0, 0), & \mathbf{e}_R &= (.71, -.71, 0, 0) \\
\lambda = 0.70:&\quad \mathbf{e}_L = (0, 0, -.45, .89), & \mathbf{e}_R &= (0, 0, -.71, .71)
\end{align*}

Matrix (a$'$):
\[
\begin{pmatrix} .90 & .20 & 0 & 0 \\ .10 & .79 & .02 & 0 \\ 0 & .01 & .88 & .20 \\ 0 & 0 & .10 & .80 \end{pmatrix}
\]
\begin{align*}
\lambda = 1:&\quad \mathbf{e}_L = (.50, .50, .50, .50), & \mathbf{e}_R &= (.87, .43, .22, .11) \\
\lambda = 0.98:&\quad \mathbf{e}_L = (-.18, -.15, .66, .72), & \mathbf{e}_R &= (-.66, -.28, .61, .33) \\
\lambda = 0.70:&\quad \mathbf{e}_L = (.20, -.40, -.40, .80), & \mathbf{e}_R &= (.63, -.63, -.32, .32) \\
\lambda = 0.69:&\quad \mathbf{e}_L = (-.19, .41, -.44, .77), & \mathbf{e}_R &= (-.61, .65, -.35, .30)
\end{align*}

Matrix (b):
\[
\begin{pmatrix} 0 & 0 & .90 & .20 \\ 0 & 0 & .10 & .80 \\ .90 & .20 & 0 & 0 \\ .10 & .80 & 0 & 0 \end{pmatrix}
\]
\begin{align*}
\lambda = 1:&\quad \mathbf{e}_L = (.50, .50, .50, .50), & \mathbf{e}_R &= (.63, .32, .63, .32) \\
\lambda = 0.70:&\quad \mathbf{e}_L = (-.32, .63, -.32, .63), & \mathbf{e}_R &= (.50, -.50, .50, -.50) \\
\lambda = -0.70:&\quad \mathbf{e}_L = (.32, -.63, -.32, .63), & \mathbf{e}_R &= (-.50, .50, .50, -.50) \\
\lambda = -1:&\quad \mathbf{e}_L = (.50, .50, -.50, -.50), & \mathbf{e}_R &= (.63, .32, -.63, -.32)
\end{align*}

We assume that we have an $N \times N$ matrix $H$ that is a function $H(\epsilon)$ of a real parameter $\epsilon$, with $\epsilon = 0$ being our starting point.
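The two kinds of non-ergodicity in Table C.6 can be confirmed numerically. This is a quick NumPy sketch (not part of the original text): matrix (a) has the eigenvalue 1 twice, because its state space splits into two unconnected pieces, while matrix (b) has an eigenvalue of $-1$ corresponding to the circulating density.

```python
import numpy as np

# Matrix (a) from Table C.6: two unconnected pieces of state space,
# so the principal eigenvalue 1 is doubly degenerate.
A = np.array([[.90, .20, 0,   0  ],
              [.10, .80, 0,   0  ],
              [0,   0,   .90, .20],
              [0,   0,   .10, .80]])

# Matrix (b): the density oscillates between the two halves of the
# state space, giving a right-eigenvector with eigenvalue -1.
B = np.array([[0,   0,   .90, .20],
              [0,   0,   .10, .80],
              [.90, .20, 0,   0  ],
              [.10, .80, 0,   0  ]])

eig_a = np.sort(np.linalg.eigvals(A).real)
eig_b = np.sort(np.linalg.eigvals(B).real)
print(eig_a)  # [0.7 0.7 1.  1. ]  -- eigenvalue 1 appears twice
print(eig_b)  # [-1.  -0.7  0.7  1. ]  -- includes eigenvalue -1
```

Each 2×2 block $\begin{pmatrix} .90 & .20 \\ .10 & .80 \end{pmatrix}$ has eigenvalues 1 and 0.70, which is why those values appear in pairs for matrix (a) and as $\pm 1, \pm 0.70$ for matrix (b).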
We assume that a Taylor expansion of $H(\epsilon)$ is appropriate:
\[
H(\epsilon) = H(0) + \epsilon V + \cdots \tag{C.9}
\]
where
\[
V \equiv \frac{\partial H}{\partial \epsilon}. \tag{C.10}
\]
We assume that for all $\epsilon$ of interest, $H(\epsilon)$ has a complete set of $N$ right-eigenvectors and left-eigenvectors, and that these eigenvectors and their eigenvalues are continuous functions of $\epsilon$. This last assumption is not necessarily a good one: if $H(0)$ has degenerate eigenvalues then it is possible for the eigenvectors to be discontinuous in $\epsilon$; in such cases, degenerate perturbation theory is needed. That's a fun topic, but let's stick with the non-degenerate case here.

We write the eigenvectors and eigenvalues as follows:
\[
H(\epsilon)\, \mathbf{e}_R^{(a)}(\epsilon) = \lambda^{(a)}(\epsilon)\, \mathbf{e}_R^{(a)}(\epsilon), \tag{C.11}
\]
and we Taylor-expand
\[
\lambda^{(a)}(\epsilon) = \lambda^{(a)}(0) + \epsilon \mu^{(a)} + \cdots \tag{C.12}
\]
with
\[
\mu^{(a)} \equiv \frac{\partial \lambda^{(a)}(\epsilon)}{\partial \epsilon} \tag{C.13}
\]
and
\[
\mathbf{e}_R^{(a)}(\epsilon) = \mathbf{e}_R^{(a)}(0) + \epsilon \mathbf{f}_R^{(a)} + \cdots. \tag{C.14}
\]
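The expansion (C.12) can be checked numerically. The sketch below (NumPy; $H(0)$ and $V$ are illustrative symmetric matrices of my own choosing, not from the text) compares a finite-difference estimate of $\mu = \partial\lambda/\partial\epsilon$ at $\epsilon = 0$ with the standard first-order result $\mu = \mathbf{e}_L^\top V\, \mathbf{e}_R$ (where $\mathbf{e}_L = \mathbf{e}_R$ for a symmetric matrix with unit-normalized eigenvectors):

```python
import numpy as np

# Illustrative choices (not from the text): a symmetric unperturbed
# matrix H0 and a symmetric perturbation direction V.
H0 = np.array([[2.0, 0.0],
               [0.0, 1.0]])
V = np.array([[0.3, 0.1],
              [0.1, -0.2]])

def largest_eigenvalue(H):
    """Largest eigenvalue of a symmetric matrix."""
    return np.linalg.eigvalsh(H).max()

# Finite-difference estimate of mu = d(lambda)/d(epsilon) at epsilon = 0.
eps = 1e-4
lam0 = largest_eigenvalue(H0)
mu_fd = (largest_eigenvalue(H0 + eps * V) - lam0) / eps

# Standard first-order perturbation result mu = e_L^T V e_R; for a
# symmetric H0 the left and right eigenvectors coincide.
e = np.linalg.eigh(H0)[1][:, -1]  # unit eigenvector of the largest eigenvalue
mu_exact = e @ V @ e

print(mu_fd, mu_exact)  # both close to 0.3
```

Here the largest eigenvalue of $H(0)$ is 2 with eigenvector $(1,0)$, so $\mathbf{e}^\top V \mathbf{e} = V_{11} = 0.3$, and the finite-difference slope agrees to within $O(\epsilon)$.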
