
Part 1

Improving Self-consistent Field Convergence

with a renormalization at the end. This gives an eigenvalue problem to be solved instead of the set of linear equations in normal DIIS, and thus singularities are more easily handled. However, one of the examples (Pd$_2$ in the Hyla-Kripsin basis set$^{35}$) given in ref. 34, where DIIS supposedly diverges, converges for our plain DIIS implementation to $10^{-7}$ in the energy in 14 iterations.
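To make the "linear equations in normal DIIS" concrete, the following is a minimal sketch of the standard augmented-matrix solve for the DIIS mixing coefficients. The function name and the assumption that the error vectors are flattened gradient-type matrices (e.g. FDS − SDF) from previous iterations are illustrative choices, not the implementation referred to above.

```python
import numpy as np

def diis_coefficients(error_vectors):
    """Solve the DIIS linear equations for the mixing coefficients c,
    with a Lagrange multiplier enforcing sum(c) = 1 (illustrative sketch)."""
    n = len(error_vectors)
    B = np.zeros((n + 1, n + 1))
    for i, e_i in enumerate(error_vectors):
        for j, e_j in enumerate(error_vectors):
            B[i, j] = np.vdot(e_i, e_j)   # overlaps of the error vectors
    B[:n, n] = -1.0                       # constraint column
    B[n, :n] = -1.0                       # constraint row
    rhs = np.zeros(n + 1)
    rhs[n] = -1.0
    # Near-linear dependence among the error vectors makes B singular;
    # this is the situation the eigenvalue formulation handles more gracefully.
    solution = np.linalg.solve(B, rhs)
    return solution[:n]                   # last entry is the Lagrange multiplier
```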

Even though DIIS is successful, examples of divergence with no relation to numerical instabilities have been encountered over the years. In the year 2000 Cancès and Le Bris presented a damping algorithm named the Optimal Damping Algorithm$^{36}$ (ODA) that ensures a decrease in energy at each iteration and converges toward a solution to the HF equations. In ODA the damping factor λ is found based on the minimum of the Hartree-Fock energy for the damped density in Eq. (1.9)

$E_{\mathrm{HF}}^{\mathrm{damp}}(D_{n+1}, \lambda) = E_{\mathrm{HF}}(D_n) + 2\lambda \,\mathrm{Tr}\, F(D_n)\,(D_{n+1} - D_n) + \lambda^2 \,\mathrm{Tr}\,(D_{n+1} - D_n)\, G(D_{n+1} - D_n) + h_{\mathrm{nuc}}$,   (1.17)

much as Karlström did in 1979. The damping factor is thus optimized in each iteration, hence the name of the algorithm.
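Because the model in Eq. (1.17) is quadratic in λ, the optimal damping factor has a closed-form minimizer on the interval [0, 1]. The sketch below only illustrates this step; the function name and arguments are hypothetical and it is not the published ODA code.

```python
import numpy as np

def optimal_damping(D_n, D_next, F_n, G_delta):
    """Minimise the quadratic model of Eq. (1.17) over lambda in [0, 1]
    and return lambda together with the damped density (illustrative sketch).

    D_n, D_next : current and newly obtained density matrices
    F_n         : Fock matrix built from D_n
    G_delta     : two-electron part G(D_next - D_n)
    """
    delta = D_next - D_n
    slope = np.trace(F_n @ delta)          # coefficient of 2*lambda in Eq. (1.17)
    curvature = np.trace(delta @ G_delta)  # coefficient of lambda**2 in Eq. (1.17)
    if curvature > 0.0:
        lam = min(1.0, max(0.0, -slope / curvature))  # stationary point, clamped to [0, 1]
    else:
        lam = 1.0                          # concave model: take the full step
    D_damped = D_n + lam * delta           # damped density for the next Fock build
    return lam, D_damped
```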

Recently Kudin, Scuseria, and Cancès proposed a method in which the gradient-norm minimization in DIIS is replaced by a minimization of an approximation to the true energy function; they named it the energy DIIS (EDIIS) method$^{37}$. Where ODA uses the energy expression of Eq. (1.17) to find the optimal λ, EDIIS uses an approximation of the Hartree-Fock energy for the averaged density

$D^{\mathrm{EDIIS}} = \sum_{i=1}^{n} c_i D_i$,   (1.18)

$E(\mathbf{D}, \mathbf{c}) = \sum_{i=1}^{n} c_i E_{\mathrm{SCF}}(D_i) - \frac{1}{2} \sum_{i,j=1}^{n} c_i c_j \,\mathrm{Tr}\big( (F_i - F_j)(D_i - D_j) \big)$,   (1.19)

where the sum of the coefficients $c_i$ is still restricted to 1. They combine the scheme with DIIS, such that the EDIIS-optimized coefficients are used to construct the averaged Fock matrix if all coefficients fall between 0 and 1. If not, the coefficients from the DIIS scheme are used instead. The EDIIS scheme introduces some Hessian information not found in DIIS and thus improves convergence in cases where the starting guess has a Hessian structure far from the optimized one. For non-problematic cases and near the optimized state EDIIS has a slower convergence rate than DIIS, but it has been demonstrated that EDIIS can converge cases where DIIS diverges.
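To make the coefficient optimization behind Eqs. (1.18) and (1.19) concrete, the sketch below minimizes the EDIIS energy model with a general-purpose constrained optimizer under the constraint that the coefficients sum to 1. The function name and the use of SciPy are illustrative assumptions; this is not necessarily how ref. 37 solves the constrained problem.

```python
import numpy as np
from scipy.optimize import minimize

def ediis_coefficients(energies, focks, densities):
    """Minimise the EDIIS energy model of Eq. (1.19) under sum(c) = 1
    (illustrative sketch).

    energies[i]  : E_SCF(D_i) from iteration i
    focks[i]     : Fock matrix F_i
    densities[i] : density matrix D_i
    """
    n = len(energies)
    # Pairwise traces Tr[(F_i - F_j)(D_i - D_j)] entering Eq. (1.19)
    M = np.array([[np.trace((focks[i] - focks[j]) @ (densities[i] - densities[j]))
                   for j in range(n)] for i in range(n)])

    def model(c):
        return float(np.dot(c, energies) - 0.5 * c @ M @ c)

    constraint = {'type': 'eq', 'fun': lambda c: np.sum(c) - 1.0}
    c0 = np.full(n, 1.0 / n)               # start from the uniform average
    result = minimize(model, c0, method='SLSQP', constraints=[constraint])
    return result.x
```

In the combined scheme described above, these coefficients would only be used to build the averaged Fock matrix when they all lie between 0 and 1; otherwise the coefficients from the ordinary DIIS equations take over.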

Recently, we suggested another subspace minimization algorithm along the same lines as EDIIS, but with a smaller idempotency error in the energy model and the same orbital rotation gradient in the subspace as the SCF energy (the EDIIS energy model actually has a different gradient). We named it TRDSM$^{38}$ for trust region density subspace minimization since a trust region optimization is

