
The only nonzero matrix elements of the first two terms are between states |µ〉 and |ν〉 such that σ_µ(k) = σ_ν(k) for all k ≠ i, j, while for k = i, j:

    〈α(i)| ⊗ 〈β(j)| S^+(i) S^-(j) |β(i)〉 ⊗ |α(j)〉                (11.5)

    〈β(i)| ⊗ 〈α(j)| S^-(i) S^+(j) |α(i)〉 ⊗ |β(j)〉                (11.6)

where α(i), β(i) denote the i-th spin up and down, respectively. The term S_z(i)S_z(j) is diagonal, i.e. nonzero only for µ = ν.
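
To make the selection rule concrete, here is a minimal sketch (illustrative, not from the text) of the action of S^+(i)S^-(j) on a basis state encoded as a bit string, with bit k = 1 for spin up (α) and 0 for spin down (β), in units where ħ = 1:

```python
def apply_splus_sminus(state, i, j):
    """Return (new_state, amplitude) for S^+(i) S^-(j) acting on |state>,
    or None when the matrix element vanishes (spin 1/2, hbar = 1)."""
    up_i = (state >> i) & 1
    up_j = (state >> j) & 1
    if up_i == 0 and up_j == 1:                  # need spin down at i, spin up at j
        return state ^ (1 << i) ^ (1 << j), 1.0  # flip the two spins
    return None
```

The function touches only bits i and j: the spin-flip terms connect |µ〉 to the single state |ν〉 obtained by exchanging the opposite spins at sites i and j, leaving all k ≠ i, j unchanged, exactly as stated above.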

Sparseness, in conjunction with symmetry, can be used to reduce the Hamiltonian matrix into blocks of much smaller dimensions that can be diagonalized with a much reduced computational effort.
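
One example of such a symmetry for the terms above is the conservation of the total S_z: the spin-flip terms exchange one up spin with one down spin and the S_z(i)S_z(j) term is diagonal, so the number of up spins never changes. A minimal sketch (illustrative, not the text's code) of the resulting block structure, grouping bit-string basis states by their number of up spins:

```python
from collections import defaultdict

def sz_blocks(nspin):
    """Group the 2**nspin basis states by their number of up spins.
    States in different groups are never connected by H, so each group
    labels an independent block of the Hamiltonian matrix."""
    blocks = defaultdict(list)
    for state in range(1 << nspin):
        blocks[bin(state).count("1")].append(state)
    return blocks

# For 4 spins the 16 x 16 matrix splits into blocks of sizes 1, 4, 6, 4, 1.
```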

11.3 Iterative diagonalization

In addition to sparseness, there is another aspect that can be exploited to make the calculation more tractable. Typically one is interested in the ground state and in a few low-lying excited states, not in the entire spectrum. Calculating just a few eigenstates, however, is only marginally cheaper than calculating all of them with conventional (LAPACK) diagonalization algorithms: an expensive tridiagonalization (or equivalent) step, costing O(N_h^3) floating-point operations, has to be performed anyway. It is possible to take advantage of the small number of desired eigenvalues by using iterative diagonalization algorithms. Unlike conventional algorithms, these are based on successive refinement steps of a trial solution, until the required accuracy is reached. If an approximate solution is known, convergence may be very quick. Iterative diagonalization algorithms typically use as their basic ingredient the product Hψ, where ψ is the trial solution. Such operations, in practice matrix-vector products, require O(N_h^2) floating-point operations. Sparseness can however be exploited to speed up the calculation of Hψ products. In some cases the special structure of the matrix can also be exploited (this is the case for one-electron Hamiltonians in a plane-wave basis set). It is not just a problem of speed but also of storage: even if we manage to store vectors of length N_h in memory, storing an N_h × N_h matrix is impossible.
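
As an illustration (not the text's code), a sketch using SciPy: a sparse matrix stores only its nonzero elements, so both the Hψ product and the memory footprint scale with the number of nonzeros rather than with N_h^2. For N_h ≈ 10^6, a dense double-precision matrix would require about 8 TB, while a single vector of length N_h occupies only 8 MB. The matrix below is a random symmetric placeholder standing in for the Hamiltonian; scipy.sparse.linalg.eigsh (which wraps ARPACK, a Lanczos-type iterative method) then yields a few of the lowest eigenvalues from Hψ products alone.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

Nh = 1 << 12                       # illustrative basis dimension
# Random symmetric sparse matrix as a stand-in for H; a real Hamiltonian
# would be filled from the nonzero matrix elements discussed above.
H = sp.random(Nh, Nh, density=1e-3, format="csr", random_state=0)
H = (H + H.T) * 0.5

psi = np.random.default_rng(0).standard_normal(Nh)   # trial vector
hpsi = H @ psi                     # H|psi>: cost ~ number of nonzeros, not Nh^2

# A few of the lowest eigenvalues, computed iteratively:
lowest = eigsh(H, k=4, which="SA", return_eigenvectors=False)
```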

Among the many algorithms and variants, described in many thick books, a time-honored one that stands out for its simplicity is the Lanczos algorithm. Starting from |v_0〉 = 0 and from some initial guess |v_1〉, we generate the following chain of vectors:

    |w_{j+1}〉 = H|v_j〉 − α_j|v_j〉 − β_j|v_{j−1}〉,     |v_{j+1}〉 = (1/β_{j+1}) |w_{j+1}〉,     (11.7)

where

    α_j = 〈v_j|H|v_j〉,     β_{j+1} = 〈w_{j+1}|w_{j+1}〉^{1/2}.     (11.8)

It can be shown that all vectors |v_j〉 are orthogonal, 〈v_i|v_j〉 = 0 ∀ i ≠ j, and that in the basis of the |v_j〉 vectors the Hamiltonian has a tridiagonal form, with the α_j on the diagonal and the β_j on the sub- and super-diagonals.
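
As a concrete illustration, here is a minimal NumPy sketch of the recursion in Eqs. (11.7)-(11.8) (not the book's code, and assuming a real symmetric H); it returns the coefficients α_j and β_j that define the small tridiagonal matrix to be diagonalized. In finite precision the |v_j〉 gradually lose orthogonality, so production implementations add reorthogonalization or restarts.

```python
import numpy as np

def lanczos(H, v1, nsteps):
    """Lanczos chain of Eqs. (11.7)-(11.8) for real symmetric H: returns
    (alphas, betas), the diagonal and off-diagonal of the tridiagonal matrix
    (no reorthogonalization)."""
    v_prev = np.zeros_like(v1)              # |v_0> = 0
    v = v1 / np.linalg.norm(v1)             # normalized initial guess |v_1>
    alphas, betas = [], [0.0]               # beta_1 multiplies |v_0> = 0
    for _ in range(nsteps):
        hv = H @ v
        a = np.dot(v, hv)                   # alpha_j = <v_j|H|v_j>
        w = hv - a * v - betas[-1] * v_prev # Eq. (11.7)
        b = np.linalg.norm(w)               # beta_{j+1} = <w_{j+1}|w_{j+1}>^(1/2)
        alphas.append(a)
        if b < 1e-12:                       # invariant subspace found, stop early
            break
        betas.append(b)
        v_prev, v = v, w / b                # |v_{j+1}> = |w_{j+1}> / beta_{j+1}
    return np.array(alphas), np.array(betas[1:len(alphas)])

# The lowest eigenvalues then follow from the small tridiagonal matrix, e.g.:
# a, b = lanczos(H, psi, 100)
# np.linalg.eigvalsh(np.diag(a) + np.diag(b, 1) + np.diag(b, -1))
```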
