However, estimation of the stepsize according to equation 4.36 is out of the question, since the second-derivative matrix is never calculated in GOSIA and, moreover, the assumption of local quadraticity is in general not valid. Instead, an iterative procedure is used to find a minimum along the direction defined by the gradient, based on the well-known Newton-Raphson algorithm for finding the zeros of arbitrary functions. A search for a minimum of a function is equivalent to finding a zero of its first derivative with respect to the stepsize h, according to the second-order iterative scheme:

\[
h \rightarrow h - \frac{\partial f/\partial h}{\partial^2 f/\partial h^2} \qquad (4.37)
\]

which can be repeated until the requested convergence is achieved, unless the second derivative of f with respect to h is negative, which implies that the quadratic approximation cannot be applied even locally. In such a case, the minimized function is sampled stepwise until the Newton-Raphson method becomes applicable, i.e. until the search is close enough to the minimum along the direction of the gradient. One-dimensional minimization is stopped when the absolute value of the difference between two subsequent vectors of parameters is less than the user-specified convergence criterion.

The gradients in GOSIA are evaluated numerically, using the forward-difference formula or, optionally, the forward-backward approximation. While the forward-difference formula

\[
\frac{\partial f}{\partial x_i} = \frac{f(x_1, x_2, \ldots, x_i + h, \ldots) - f(x_1, x_2, \ldots, x_i, \ldots)}{h} \qquad (4.38)
\]

requires only one calculation of the minimized function per parameter, in addition to the evaluation of the central value f(x_1, x_2, \ldots, x_n), the forward-backward formula

\[
\frac{\partial f}{\partial x_i} = \frac{f(x_1, x_2, \ldots, x_i + h, \ldots) - f(x_1, x_2, \ldots, x_i - h, \ldots)}{2h} \qquad (4.39)
\]

requires two calculations of the minimized function per parameter. The forward-backward formula should therefore be requested only in the vicinity of the minimum, where the accuracy of the numerical calculations starts to play an important role.

4.5.5 Gradient + Derivative Minimization

The steepest descent minimization is efficient if the minimized function is smooth in the space of parameters, but it exhibits considerable drawbacks when dealing with functions having sharp "valleys" superimposed on smooth surfaces. Such valleys are created by strong correlations of two or more parameters. In the case of Coulomb excitation analysis, the valleys are introduced mainly by including accurate spectroscopic data, especially the branching ratios, which fix the ratio of two transitional matrix elements. Note that even if the branching ratio is not introduced as an additional data point, the valley will still be present in the yield component of the least-squares statistic S if both transitions depopulating a given state are observed. To demonstrate this deficiency of the simple steepest descent method, let us consider a model situation in which a two-parameter function f(x, y) = x^2 + (x - y)^2 is minimized, starting from a point x = y. The term (x - y)^2 creates a diagonal valley leading to the minimum point (0, 0). Using the analytic gradient and the stepsize given by (4.35), it is seen that the minimization using the steepest descent method will follow the path shown in the figure instead of following the diagonal.
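A minimal Python sketch of this behaviour (GOSIA itself is a Fortran code; the function names, the starting point (4, 4), and the tolerances below are illustrative choices, not GOSIA parameters) combines the pieces described above: the gradient is taken with the forward-difference formula (4.38) and the stepsize along the negative gradient is found with the Newton-Raphson scheme (4.37). Applied to the model function, the iterates alternate between horizontal and vertical moves, halving f at each step instead of running down the diagonal valley.

import numpy as np

def f(p):
    # Model function from the text: f(x, y) = x**2 + (x - y)**2,
    # with a diagonal valley leading to the minimum at (0, 0).
    x, y = p
    return x**2 + (x - y)**2

def forward_gradient(func, p, h=1.0e-6):
    # Forward-difference gradient (4.38): one extra function call per parameter.
    f0 = func(p)
    grad = np.zeros_like(p)
    for i in range(len(p)):
        q = p.copy()
        q[i] += h
        grad[i] = (func(q) - f0) / h
    return grad

def newton_raphson_step(func, p, direction, h0=1.0, dh=1.0e-5, eps=1.0e-9, max_iter=50):
    # Find the stepsize h minimizing g(h) = func(p + h*direction) by iterating
    # h -> h - g'(h)/g''(h)  (4.37); both derivatives are taken numerically.
    def g(h):
        return func(p + h * direction)
    h = h0
    for _ in range(max_iter):
        g1 = (g(h + dh) - g(h - dh)) / (2.0 * dh)            # g'(h)
        g2 = (g(h + dh) - 2.0 * g(h) + g(h - dh)) / dh**2    # g''(h)
        if g2 <= 0.0:
            h += h0   # quadratic approximation not applicable yet: sample stepwise
            continue
        step = g1 / g2
        h -= step
        if abs(step) < eps:   # simplified stand-in for the user-specified criterion
            break
    return h

# Steepest descent from a point on the diagonal x = y.
p = np.array([4.0, 4.0])
for it in range(8):
    d = -forward_gradient(f, p)
    h = newton_raphson_step(f, p, d)
    p = p + h * d
    print(f"iteration {it + 1}: x = {p[0]:8.4f}  y = {p[1]:8.4f}  f = {f(p):.6f}")

Running this prints the staircase (4, 4) -> (2, 4) -> (2, 2) -> (1, 2) -> ..., with f dropping only from 16 to about 0.06 after eight iterations, which illustrates the slow progress along the valley.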
To facilitate the handling of the two-dimensional valleys introduced by the spectroscopic data, GOSIA offers a gradient+derivative method, designed to force the minimization procedure to follow the two-dimensional valleys while at the same time introducing second-order information without calculating the second-order matrix (4.36), thus speeding up the minimization even if the minimized function has a smooth surface. Generally, to minimize a locally parabolic function:

\[
f(\bar{x}) = f(\bar{x}_0) + \bar{\nabla}_0 \Delta\bar{x} + \tfrac{1}{2}\, \Delta\bar{x}\, J\, \Delta\bar{x} \qquad (4.40)
\]

one can look for the best direction for a search expressed as a linear combination of an arbitrary number of vectors \bar{P}_i, not necessarily orthogonal, but only linearly independent. This is equivalent to requesting that:
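Independently of the exact condition imposed on the expansion coefficients, the practical content of searching within the span of a few vectors \bar{P}_i can be illustrated with a short Python sketch (again not GOSIA's Fortran implementation; the choice of the two directions, the current negative gradient and the previous step, and the finite-difference step delta are assumptions made only for this example). It minimizes the quadratic model (4.40) restricted to the subspace spanned by the \bar{P}_i, estimating the required first- and second-order terms along those directions from function values alone, so the full second-derivative matrix is never formed.

import numpy as np

def f(p):
    # Same model function: f(x, y) = x**2 + (x - y)**2, minimum at (0, 0).
    x, y = p
    return x**2 + (x - y)**2

def subspace_step(func, x0, directions, delta=1.0e-3):
    # Minimize the quadratic model (4.40) restricted to x0 + sum_i alpha_i * P_i.
    # Only function values are used: b[i] approximates P_i . grad f and
    # A[i, j] approximates P_i . J . P_j, so the full second-derivative matrix
    # of the parameter space is never constructed.
    m = len(directions)
    f0 = func(x0)
    f_plus = [func(x0 + delta * P) for P in directions]
    f_minus = [func(x0 - delta * P) for P in directions]
    b = np.zeros(m)
    A = np.zeros((m, m))
    for i in range(m):
        b[i] = (f_plus[i] - f_minus[i]) / (2.0 * delta)
        A[i, i] = (f_plus[i] - 2.0 * f0 + f_minus[i]) / delta**2
        for j in range(i + 1, m):
            f_ij = func(x0 + delta * directions[i] + delta * directions[j])
            A[i, j] = A[j, i] = (f_ij - f_plus[i] - f_plus[j] + f0) / delta**2
    alpha = np.linalg.solve(A, -b)   # minimum of the model within the subspace
    return x0 + sum(a * P for a, P in zip(alpha, directions))

# After one steepest-descent step from (4, 4) the iterate sits at (2, 4).
# The previous step and the new negative gradient are linearly independent
# and together span the plane of the valley.
x0 = np.array([2.0, 4.0])
previous_step = np.array([-2.0, 0.0])   # the move (4, 4) -> (2, 4)
neg_gradient = np.array([0.0, -4.0])    # -grad f at (2, 4)
x1 = subspace_step(f, x0, [neg_gradient, previous_step])
print("combined step lands at", np.round(x1, 6), "with f =", f(x1))

For the model function the two directions span the whole plane, so the combined step lands, up to finite-difference noise, at the minimum (0, 0) that the pure steepest descent of the previous example approached only asymptotically.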
