Essentials of Computational Chemistry


6.3 KEY TECHNICAL AND PRACTICAL POINTS OF HARTREE–FOCK THEORY

some point the linear scaling algorithm will become more efficient given a system of large enough size. In any case, because there are several features present in electronic structure programs that allow some control over the efficiency of the calculation, we discuss here the most common ones, recapitulating a few that have already been mentioned above.

First, there is the issue of how to go about computing the four-index integrals, which are responsible for the formal N⁴ scaling. One might imagine that the most straightforward approach is to compute every single one and, as it is computed, write it to storage – then, as the Fock matrix is assembled element by element, call back the computed values whenever they are required (most of the integrals are required several times). In practice, however, this approach is only useful when the time required to write to and read from storage is very, very fast. Otherwise, modern processors can actually recompute the integral from scratch faster than modern hardware can recover the previously computed value from, say, disk storage. The process of computing each integral as it is needed rather than trying to store them all is called ‘direct SCF’. Only when the storage of all of the integrals can be accomplished in memory itself (i.e., not on an external storage device) is the access time sufficiently fast that the ‘traditional’ method is to be preferred over direct SCF.
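The structure of a direct SCF Fock build can be sketched as follows. The quadruple loop recomputes each two-electron integral on the fly rather than reading it back from storage. This is only a toy illustration: `eri` is a stand-in stub, not a real Gaussian-integral routine, and the eightfold permutational symmetry that real codes exploit to cut the work is ignored here.

```python
import numpy as np

def eri(mu, nu, lam, sig):
    # Toy stand-in for the four-index two-electron integral (mu nu | lam sig);
    # a production code would evaluate Gaussian-basis integrals here.
    return 1.0 / (1.0 + abs(mu - nu) + abs(lam - sig))

def fock_build_direct(h_core, P, n):
    """Direct SCF Fock build: each integral is recomputed as needed,
    so nothing with N^4 storage cost is ever written to disk."""
    F = h_core.copy()
    for mu in range(n):
        for nu in range(n):
            for lam in range(n):
                for sig in range(n):
                    # Coulomb minus one-half exchange (restricted closed-shell form)
                    F[mu, nu] += P[lam, sig] * (eri(mu, nu, lam, sig)
                                                - 0.5 * eri(mu, lam, nu, sig))
    return F
```

In a ‘traditional’ SCF, the same loop would instead index into a precomputed integral store; the direct variant trades arithmetic for the elimination of that storage and its access latency.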

As the size of the system increases, it becomes possible to take advantage of other features of the electronic structure that further improve the efficiency of direct SCF. For instance, it is possible to estimate upper bounds for four-index integrals reasonably efficiently, and if the upper bound is so small that the integral can make no significant contribution, there is no point in evaluating it more accurately than assigning it to be zero. Such small integrals are legion in large systems, since if each of the four basis functions is distantly separated from all of the others, simple overlap arguments make it clear that the integral cannot be very large.
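A standard way to obtain such an upper bound is the Cauchy–Schwarz inequality, |(μν|λσ)| ≤ [(μν|μν)(λσ|λσ)]^(1/2), which requires only the N² diagonal integrals. A minimal sketch of the resulting screening loop, using a made-up overlap-like decay model for the precomputed bounds:

```python
import math

def schwarz_screen(Q, thresh=1e-10):
    """Count four-index integrals that survive Cauchy-Schwarz screening.

    Q[mu][nu] holds sqrt((mu nu|mu nu)); the bound
    |(mu nu|lam sig)| <= Q[mu][nu] * Q[lam][sig]
    lets us skip any quartet whose bound falls below `thresh`.
    """
    n = len(Q)
    kept = 0
    for mu in range(n):
        for nu in range(n):
            for lam in range(n):
                for sig in range(n):
                    if Q[mu][nu] * Q[lam][sig] >= thresh:
                        kept += 1
    return kept

# Hypothetical bounds that decay with basis-function separation,
# mimicking the overlap argument in the text.
Q = [[math.exp(-(i - j) ** 2) for j in range(8)] for i in range(8)]
kept = schwarz_screen(Q)
```

Because the bound factorizes into pair quantities, the screening itself costs far less than the integrals it avoids, and for spatially extended systems the surviving count grows much more slowly than N⁴.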

With very, very large systems, fast-multipole methods analogous to those described in Section 2.4.2 can be used to reduce the scaling of Coulomb integral evaluation to linear (see, for instance, Strain, Scuseria, and Frisch 1996; Challacombe and Schwegler 1997), and linear-scaling methods to evaluate the exchange integrals have also been promulgated (Ochsenfeld, White, and Head-Gordon 1998). At this point, the bottleneck in HF calculations becomes diagonalization of the Fock matrix (a step having formal N³ scaling), and early efforts to reduce the scaling of this step have also appeared (Millam and Scuseria 1997).

As already described above, efficiency in converging the SCF for systems with large basis sets can be enhanced by using as an initial guess the converged wave function from a different calculation, one using either a smaller basis set or a less negative charge. This same philosophy can be applied to geometry optimization, which can be quite time-consuming for very large calculations. It is often very helpful to optimize the geometry first at a more efficient level of theory. This is true not just because the geometry optimized with the lower level is probably a good place to start for the higher level, but also because typically one can compute the force constants at the lower level and use them as an initial guess for the higher-level Hessian matrix that will be much better than the typical guess generated by the optimizing algorithm. As described in Section 2.4.1, the availability of a good Hessian matrix can make an enormous amount of difference in how quickly a geometry optimization can be induced to converge.
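The payoff of a good initial Hessian can be illustrated with a generic quasi-Newton (BFGS) minimizer on a toy quadratic ‘surface’; this is a sketch, not any particular program's optimizer. Seeding the inverse Hessian with the exact curvature (playing the role of force constants carried over from a cheaper level of theory) converges in a single step, while a default identity guess takes many more.

```python
import numpy as np

def bfgs(f, grad, x0, H0, tol=1e-8, max_iter=500):
    """BFGS minimization whose inverse-Hessian approximation starts from H0."""
    x, H, g = x0.astype(float), H0.copy(), grad(x0)
    for k in range(max_iter):
        if np.linalg.norm(g) < tol:
            return x, k                      # converged after k steps
        p = -H @ g                           # quasi-Newton search direction
        alpha = 1.0                          # backtracking (Armijo) line search
        while f(x + alpha * p) > f(x) + 1e-4 * alpha * (g @ p):
            alpha *= 0.5
        x_new = x + alpha * p
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        if s @ y > 1e-12:                    # curvature condition: update H
            rho = 1.0 / (s @ y)
            I = np.eye(len(x))
            H = (I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s)) \
                + rho * np.outer(s, s)
        x, g = x_new, g_new
    return x, max_iter

# Toy "potential energy surface": a stiff quadratic 0.5 x^T A x.
A = np.diag([1.0, 100.0])
f = lambda x: 0.5 * x @ A @ x
grad = lambda x: A @ x
x0 = np.array([1.0, 1.0])

_, steps_poor = bfgs(f, grad, x0, np.eye(2))          # generic identity guess
_, steps_good = bfgs(f, grad, x0, np.linalg.inv(A))   # "lower-level" Hessian
```

The same effect drives the two-stage strategy in the text: curvature information from the cheap calculation spares the expensive level the many steps otherwise spent learning the shape of the surface.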
