Energy Minimization Methods

Newton-Raphson Method

The Newton-Raphson method is the most computationally expensive per step of all the methods used to perform energy minimization. It is based on a Taylor series expansion of the potential energy surface at the current geometry. The equation for updating the geometry is

x_new = x_old − E′(x_old) / E″(x_old) .    (2)

Notice that the correction term depends both on the first derivative (also called the slope or gradient) of the potential energy surface at the current geometry and on the second derivative (otherwise known as the curvature). It is the necessity of calculating these derivatives at each step that makes the method very expensive per step, especially for a multidimensional potential energy surface, where there are many directions in which to calculate the gradients and curvatures. However, the Newton-Raphson method usually requires the fewest steps to reach the minimum.
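As an illustration only (this is not code from the handout), a minimal one-dimensional Newton-Raphson sketch might look like the following. The potential E(x), the starting geometry, the step limit, and the finite-difference routines d1 and d2 are all assumptions made for the example.

```python
# Minimal 1-D Newton-Raphson sketch; E(x) is a made-up potential and the
# derivatives are approximated by central finite differences.
def E(x):
    return (x - 1.0) ** 4 + 0.5 * (x - 1.0) ** 2      # hypothetical potential

def d1(f, x, h=1e-5):
    return (f(x + h) - f(x - h)) / (2.0 * h)          # first derivative E'(x)

def d2(f, x, h=1e-4):
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / h**2  # second derivative E''(x)

x = 3.0                                               # starting geometry
for step in range(50):
    correction = d1(E, x) / d2(E, x)                  # equation (2)
    x -= correction
    if abs(correction) < 1e-8:                        # converged
        break
print(f"minimum near x = {x:.6f} after {step + 1} steps")
```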

Steepest Descent Method

Rather than requiring the calculation of numerous second derivatives, the steepest descent method relies on an approximation. In this method, the second derivative is assumed to be a constant. Therefore, the equation to update the geometry becomes

x_new = x_old − γ E′(x_old) ,    (3)

where γ is a constant. The gradients at each point must still be calculated, but because no second derivatives are required, the method is much faster per step than the Newton-Raphson method. However, because of the approximation, it is not as efficient, and so more steps are generally required to find the minimum.
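For concreteness, here is a minimal sketch of the fixed-γ update of equation (3) on a made-up two-coordinate surface. The function E, the numerical gradient routine, the value of γ, and the convergence tolerance are all assumptions for the example, not values from the handout.

```python
import numpy as np

def E(x):
    # Hypothetical two-coordinate potential energy surface
    return (x[0] - 1.0) ** 2 + 10.0 * (x[1] + 0.5) ** 2

def grad(x, h=1e-6):
    # Numerical gradient E'(x) by central differences
    g = np.zeros_like(x)
    for i in range(len(x)):
        xp, xm = x.copy(), x.copy()
        xp[i] += h
        xm[i] -= h
        g[i] = (E(xp) - E(xm)) / (2.0 * h)
    return g

x = np.array([4.0, 2.0])            # starting geometry
gamma = 0.04                        # assumed constant step
for step in range(10000):
    g = grad(x)
    if np.linalg.norm(g) < 1e-6:    # gradient small enough: minimum found
        break
    x = x - gamma * g               # equation (3)
print(step, x)
```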

The method is named Steepest Descent because the direction in which the geometry is first minimized is opposite to the direction in which the gradient is largest (i.e., steepest) at the initial point. Once a minimum in the first direction is reached, a second minimization is carried out starting from that point and moving in the steepest remaining direction. This process continues until a minimum has been reached in all directions to within a sufficient tolerance. Such a process is illustrated for a system with two geometrical coordinates in Figure 2.

Figure 2. Illustration of the steepest descent method for a system with two geometrical coordinates.
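The directional search just described can also be sketched in code. The snippet below is again only an illustration: it minimizes along the current steepest direction with a one-dimensional line search before choosing the next direction. The surface E, its analytic gradient, and the starting point are assumptions, and SciPy's minimize_scalar is used for the one-dimensional minimization.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def E(x):
    # Same kind of hypothetical two-coordinate surface as above
    return (x[0] - 1.0) ** 2 + 10.0 * (x[1] + 0.5) ** 2

def grad(x):
    # Analytic gradient of the hypothetical surface
    return np.array([2.0 * (x[0] - 1.0), 20.0 * (x[1] + 0.5)])

x = np.array([4.0, 2.0])                 # starting geometry
for step in range(100):
    g = grad(x)
    if np.linalg.norm(g) < 1e-8:         # minimum reached in all directions
        break
    d = -g / np.linalg.norm(g)           # steepest direction at this point
    alpha = minimize_scalar(lambda a: E(x + a * d)).x   # 1-D minimization along d
    x = x + alpha * d
print(step, x)
```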

Conjugate Gradient Method

In the Conjugate Gradient method, the first portion of the search takes place opposite the direction of the largest gradient, just as in the Steepest Descent method. However, to avoid some of the back-and-forth oscillation that often plagues the steepest descent method as it moves toward the minimum, the conjugate gradient method mixes a little of the previous direction into the next search. This allows the method to move rapidly to the minimum. The equations for the conjugate gradient method are more complex than those of the other two methods, so they will not be given here.
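As a usage example only: in practice one would typically call an existing implementation rather than code the conjugate gradient equations by hand. The sketch below uses SciPy's minimize with method='CG' on the same kind of made-up two-coordinate surface; the function, gradient, and starting geometry are again assumptions for illustration.

```python
import numpy as np
from scipy.optimize import minimize

def E(x):
    # Hypothetical two-coordinate potential energy surface
    return (x[0] - 1.0) ** 2 + 10.0 * (x[1] + 0.5) ** 2

def grad(x):
    # Analytic gradient of the hypothetical surface
    return np.array([2.0 * (x[0] - 1.0), 20.0 * (x[1] + 0.5)])

result = minimize(E, x0=np.array([4.0, 2.0]), jac=grad, method='CG')
print(result.x, result.nit)   # minimized geometry and number of iterations
```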
