transitions that are significantly too strong due to the deficiency in the approximation used. It is then recommended that such transitions be included as experimentally observed data, with large errors ascribed, to force the code to include them in the correction factors table.

4.5.4 Steepest Descent Minimization

The choice of minimization strategy depends on the specific characteristics of the function to be minimized. While it is generally possible to tailor the strategy for the case where the function to be minimized can be expressed analytically, the multidimensional search for a minimum of a function that can only be evaluated numerically - which is the case for multiple Coulomb excitation analysis - cannot be fully algorithmized to provide a universally optimal strategy. Thus the minimization procedure should leave much room for user intervention, based on both intuition and understanding of the processes being analyzed. The most commonly used minimization strategies - simplex, variable metric and gradient algorithms - perform better or worse depending on the case. In our case, the simplex-type methods are not usable, because the exact calculation is replaced by the fast approximation. The correction factors introduced are valid only locally; thus the construction of a simplex involving points far from the matrix element set used for evaluating the correction factors is not reliable. In turn, the variable metric method, based on an exact solution of the second-order approximation to the S function, is efficient only if the second-order approximation is justified within a wide range of the parameters, which is usually not true for Coulomb excitation analysis.
In addition, the variable metric method requires that a second derivative matrix be calculated and stored, thus extending both the computing time and the central memory required to perform a single step of minimization, without much improvement compared to the steepest descent method, discussed below, if the function is far from quadratic. Considering the above, the gradient methods are the only approach suitable for fitting large sets of matrix elements to the Coulomb excitation data. GOSIA offers two gradient-type methods which can be chosen by the user depending on the case being analyzed - a simple steepest descent minimization, outlined below, and a gradient+derivative method, described in Section 4.5.5. A version that uses the annealing technique has been developed for special applications [IBB95].

The steepest descent method is one of the most commonly used minimization methods based on the local behaviour of the function to be minimized. Assuming local validity of a first-order Taylor expansion around the central set of arguments, \bar{x}_0, any function can be approximated as:

f(\bar{x}) = f(\bar{x}_0) + \bar{\nabla}_0 \Delta\bar{x} + ...   (4.32)

with \bar{\nabla}_0 being the gradient, i.e. a vector of derivatives calculated at the point \bar{x}_0, explicitly defined as:

\bar{\nabla}_{0,i} = \frac{\partial f}{\partial x_i}   (4.33)

The steepest descent method is based on the simple observation that the local decrease of the function to be minimized, f, is maximized if the change of the vector of parameters, \Delta\bar{x}, is antiparallel to the gradient. As long as the minimized function is not multivalued and does not have saddle points, the simple iteration scheme:

\bar{x} \rightarrow \bar{x} - h\bar{\nabla}   (4.34)

provides a safe and efficient way to minimize a function using the gradient evaluated at each successive point \bar{x}. The stepsize, h, must be found by performing a one-dimensional minimization along the direction antiparallel to the gradient.
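The iteration scheme (4.34) can be sketched as follows. This is an illustrative Python sketch only, not GOSIA's Fortran implementation; the one-dimensional stepsize search is replaced here by a simple backtracking rule, and the test function is an arbitrary assumption:

```python
import numpy as np

def steepest_descent(f, grad, x0, h0=1.0, tol=1e-8, max_iter=500):
    """Repeat x -> x - h*grad(x), shrinking h until the step lowers f;
    stop when two subsequent parameter vectors agree to within tol."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        h = h0
        # crude stand-in for the one-dimensional minimization along -g
        while f(x - h * g) >= f(x) and h > 1e-16:
            h *= 0.5
        x_new = x - h * g
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# assumed quadratic test function with its minimum at (1, 2)
f = lambda x: (x[0] - 1.0)**2 + 3.0 * (x[1] - 2.0)**2
grad = lambda x: np.array([2.0 * (x[0] - 1.0), 6.0 * (x[1] - 2.0)])
x_min = steepest_descent(f, grad, [5.0, -3.0])
```

The stopping rule mirrors the one described above: the iteration ends when the absolute difference between two subsequent parameter vectors falls below the convergence criterion.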
Assuming locally quadratic behaviour of the function f, the value of h is expressed by:

h = \frac{\bar{\nabla}^2}{\bar{\nabla} J \bar{\nabla}}   (4.35)

where J is the matrix of second derivatives of f with respect to \bar{x}, i.e.:

J_{ik} = \frac{\partial^2 f}{\partial x_i \partial x_k}   (4.36)
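For a purely quadratic function, the stepsize (4.35) locates the line minimum along the negative gradient exactly, which can be checked numerically. This is an illustrative sketch; the matrix J below is an assumed example, not a GOSIA quantity:

```python
import numpy as np

# assumed symmetric positive-definite second-derivative matrix J
J = np.array([[4.0, 1.0],
              [1.0, 3.0]])
f = lambda x: 0.5 * x @ J @ x        # quadratic with gradient J x
grad = lambda x: J @ x

x = np.array([2.0, -1.0])
g = grad(x)
h = (g @ g) / (g @ J @ g)            # stepsize of Eq. 4.35

# f(x - t*g), the profile along the negative gradient, is minimal at t = h
phi = lambda t: f(x - t * g)
```

Evaluating phi slightly on either side of t = h confirms that h is the exact line minimum for this quadratic.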
However, estimation of the stepsize according to equation 4.35 is out of the question, since the second derivative matrix is never calculated in GOSIA and, moreover, the assumption of local quadraticity is in general not valid. Instead, an iterative procedure is used to find a minimum along the direction defined by the gradient, based on the well-known Newton-Raphson algorithm for finding the zeros of arbitrary functions. A search for a minimum of a function is equivalent to finding a zero of its first derivative with respect to the stepsize h, according to the second-order iterative scheme:

h \rightarrow h - \frac{\partial f / \partial h}{\partial^2 f / \partial h^2}   (4.37)

which can be repeated until the requested convergence is achieved, unless the second derivative of f with respect to h is negative, which implies that the quadratic approximation cannot be applied even locally. In such a case, the minimized function is sampled stepwise until, close enough to the minimum along the direction of the gradient, the Newton-Raphson method becomes applicable. The one-dimensional minimization is stopped when the absolute value of the difference between two subsequent vectors of parameters is less than the user-specified convergence criterion.

The gradients in GOSIA are evaluated numerically, using the forward-difference formula or, optionally, the forward-backward approximation. While the forward-difference formula

\frac{\partial f}{\partial x_i} = \frac{f(x_1, x_2, ..., x_i + h, ...) - f(x_1, x_2, ..., x_i, ...)}{h}   (4.38)

requires only one calculation of the minimized function per parameter, in addition to the evaluation of the central value f(x_1, x_2, ..., x_n), the forward-backward formula

\frac{\partial f}{\partial x_i} = \frac{f(x_1, x_2, ..., x_i + h, ...) - f(x_1, x_2, ..., x_i - h, ...)}{2h}   (4.39)

requires two calculations of the minimized function per parameter.
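The trade-off between the two difference formulas (4.38) and (4.39), one versus two extra function evaluations per parameter against first- versus second-order truncation error, can be illustrated as follows. This is a Python sketch on an arbitrary assumed test function, not GOSIA code:

```python
import numpy as np

def grad_forward(f, x, h=1e-6):
    """Forward-difference gradient, Eq. 4.38: one extra f-call per parameter."""
    f0 = f(x)                          # central value, computed once
    g = np.zeros_like(x)
    for i in range(len(x)):
        xp = x.copy(); xp[i] += h
        g[i] = (f(xp) - f0) / h
    return g

def grad_central(f, x, h=1e-6):
    """Forward-backward difference, Eq. 4.39: two extra f-calls per parameter,
    but the leading truncation error is O(h^2) rather than O(h)."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        xp = x.copy(); xp[i] += h
        xm = x.copy(); xm[i] -= h
        g[i] = (f(xp) - f(xm)) / (2.0 * h)
    return g

# assumed test function with a known analytic gradient
f = lambda x: np.sin(x[0]) + x[1]**3
x = np.array([0.7, 1.2])
exact = np.array([np.cos(0.7), 3.0 * 1.2**2])
```

Comparing both approximations against the analytic gradient shows the forward-backward formula to be markedly more accurate, which is why, as noted above, it is worth its extra cost near the minimum.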
The forward-backward formula should therefore be requested only in the vicinity of the minimum, where the accuracy of the numerical calculations starts to play an important role.

4.5.5 Gradient + Derivative Minimization

The steepest descent minimization is efficient if the minimized function is smooth in the space of parameters, but it exhibits considerable drawbacks when dealing with functions having sharp "valleys" superimposed on smooth surfaces. Such valleys are created by strong correlations of two or more parameters. In the case of Coulomb excitation analysis, the valleys are introduced mainly by including accurate spectroscopic data, especially the branching ratios, which fix the ratio of two transitional matrix elements. Note that even if the branching ratio is not introduced as an additional data point, the valley will still be present in the yield component of the least-squares statistic S if both transitions depopulating a given state are observed. To demonstrate this deficiency of the simple steepest descent method, let us consider a model situation in which a two-parameter function f(x, y) = x^2 + (x - y)^2 is minimized, starting from a point x = y. The term (x - y)^2 creates a diagonal valley leading to the minimum at the point (0, 0).
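This model valley can be explored numerically. In the illustrative Python sketch below, each steepest descent step uses the analytic gradient of f(x, y) = x^2 + (x - y)^2 and the exact quadratic stepsize of Eq. 4.35, since for this model function J is known exactly:

```python
import numpy as np

# Hessian of f(x, y) = x^2 + (x - y)^2
J = np.array([[ 4.0, -2.0],
              [-2.0,  2.0]])
f = lambda p: p[0]**2 + (p[0] - p[1])**2
grad = lambda p: np.array([2.0 * p[0] + 2.0 * (p[0] - p[1]),
                           -2.0 * (p[0] - p[1])])

p = np.array([3.0, 3.0])             # start on the diagonal x = y
path = [p.copy()]
for _ in range(50):
    g = grad(p)
    h = (g @ g) / (g @ J @ g)        # exact stepsize for a quadratic, Eq. 4.35
    p = p - h * g
    path.append(p.copy())
```

Printing the first few points of `path` shows the iterates alternating between moves parallel to the x axis and moves parallel to the y axis, stepping down the sides of the valley instead of sliding along the diagonal toward (0, 0).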
Using the analytic gradient and the stepsize given by 4.35, it is seen that the minimization using the steepest descent method will follow the path shown in the figure instead of following the diagonal.

To facilitate handling of the two-dimensional valleys introduced by the spectroscopic data, GOSIA offers a gradient+derivative method, designed to force the minimization procedure to follow the two-dimensional valleys, at the same time introducing the second-order information without calculating the second-order matrix (4.36), thus speeding up the minimization even if the minimized function has a smooth surface. Generally, to minimize a locally parabolic function:

f(\bar{x}) = f(\bar{x}_0) + \bar{\nabla}_0 \Delta\bar{x} + \frac{1}{2} \Delta\bar{x} J \Delta\bar{x}   (4.40)

one can look for the best direction for a search expressed as a linear combination of an arbitrary number of vectors, \bar{P}_i, not necessarily orthogonal, but only linearly independent. This is equivalent to requesting that: