
the current temperature; otherwise it is rejected, i.e., $q$ remains unchanged. Equivalently, the new point $q_{new}$ is accepted if it satisfies $J(q_{new}) \leq J(q) - T \log(r)$.

4. Repeat steps 2 and 3 until the sequence of accepted points has reached a state of equilibrium.

5. The temperature $T$ is lowered to a new temperature $T_{new}$ in accordance with the annealing schedule; set $T = T_{new}$ and return to step 2. This process is continued until some stopping rule is satisfied. A minimal sketch of the complete loop is given below.
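The following Python sketch assembles steps 2 through 5, using the log-form acceptance test above with $r$ drawn uniformly from $(0,1)$. The geometric cooling schedule, the fixed number of trials per temperature (standing in for the equilibrium test), and the temperature-floor stopping rule are assumptions made for illustration, not prescriptions from the text.

```python
import math
import random

def simulated_annealing(J, q0, perturb, T_init,
                        cooling_rate=0.95, n_trials=100, T_min=1e-6):
    """Generic SA loop. J: cost function, q0: initial point,
    perturb: user-supplied rule generating a candidate from the current point."""
    q, Jq = q0, J(q0)
    q_best, J_best = q, Jq
    T = T_init
    while T > T_min:                        # stopping rule (assumed: temperature floor)
        for _ in range(n_trials):           # crude stand-in for "reaching equilibrium"
            q_new = perturb(q)              # step 2: random perturbation of q
            J_new = J(q_new)
            r = random.random() + 1e-300    # r ~ U(0,1); tiny offset guards against log(0)
            # step 3: accept if J(q_new) <= J(q) - T*log(r)
            if J_new <= Jq - T * math.log(r):
                q, Jq = q_new, J_new
                if Jq < J_best:
                    q_best, J_best = q, Jq
        T *= cooling_rate                   # step 5: lower T per the annealing schedule
    return q_best, J_best
```

For instance, minimizing a simple quadratic: `simulated_annealing(lambda q: q**2, 5.0, lambda q: q + random.gauss(0, 0.5), T_init=10.0)`.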

There are many ways in which this algorithm can be implemented. In what follows, we give some practical rules widely used for an efficient implementation of the simulated annealing algorithm.

Choice of the initial temperature. The initial temperature must be chosen sufficiently large so that any point of the search domain $D$ has a reasonable chance of being visited. However, if $T_{init}$ is too large, too much time is spent in a state of "high energy" (i.e., high values of the cost function). Many methods have been proposed in the literature to determine the initial temperature (see for instance Ben-Ameur, 2004). A well-accepted approach consists in computing an initial temperature such that the acceptance ratio is approximately equal to a given value $\tau_0$. This can be done as follows. Generate at random $\eta$ samples uniformly distributed in $D$, $q_i \in D$, $i = 1, \ldots, \eta$, choose a rate of acceptance $\tau_0$, and then evaluate the initial temperature using $T_{init} = -(\Delta J)_{max} / \log(\tau_0)$, where $(\Delta J)_{max}$ is defined as $(\Delta J)_{max} = \max_{1 \leq i \leq \eta} J(q_i) - \min_{1 \leq i \leq \eta} J(q_i)$. Incidentally, the $\eta$ samples can also be used to select an initial decision vector as follows: $q = \arg\min_{1 \leq i \leq \eta} J(q_i)$.
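A minimal sketch of this initialization, assuming a user-supplied helper `sample_domain` that draws one point uniformly from $D$ (`eta` and `tau0` play the roles of $\eta$ and $\tau_0$):

```python
import math

def initial_temperature(J, sample_domain, eta=100, tau0=0.8):
    """Estimate T_init so that the initial acceptance ratio is roughly tau0."""
    samples = [sample_domain() for _ in range(eta)]  # eta uniform samples q_i in D
    costs = [J(q) for q in samples]
    dJ_max = max(costs) - min(costs)                 # (Delta J)_max over the samples
    T_init = -dJ_max / math.log(tau0)                # log(tau0) < 0, so T_init >= 0
    q0 = samples[costs.index(min(costs))]            # q = argmin_i J(q_i), a free by-product
    return T_init, q0
```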

Generation of a new candidate solution (step 2 of the SA algorithm). Generally, a new candidate solution $q_{new}$ is generated by adding a random perturbation to the current solution $q$. There are many ways to do this; a common rule for continuous optimization problems is to add an $n_q$-dimensional Gaussian random variable to the current value $q$ (see Spall): $q_{new} = q + g(\Sigma)$, where $g$ is a zero-mean Gaussian random vector with covariance matrix $\Sigma$, which must be fixed by the user. Another approach consists in changing only one component of $q$ at a time (Brooks & Morgan, 1995). This is done by first selecting one of the components of $q$ at random, and then randomly selecting a new value for that variable within its bounds. In Bohachevsky et al. (1986), a spherical uniform perturbation is adopted. More precisely, the new candidate point $q_{new}$ is obtained by first generating a random direction vector $\theta$, with $\|\theta\|_2 = 1$, then multiplying it by a fixed step size $\beta$, and finally adding the resulting vector to $q$, i.e. $q_{new} = q + \beta\theta$. The value of the step size $\beta$ must be set by the user. In a similar way one can also adopt the following rule: $q_{new} = q + CU$, where $U$ is a vector of uniform random numbers in the range $(-1, 1)$ and $C$ is a constant diagonal matrix whose elements define the maximum change allowed in each component of $q$. The matrix $C$ is also user defined. These rules are sketched in code below. The methods presented above are not exhaustive, and other approaches have been proposed in the literature; see for instance Vanderbilt & Louie (1984) or Parks (1990), to cite only a few.
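The sketch below illustrates the four perturbation rules just described, using NumPy. The function names are ours, and $\Sigma$, the component bounds, $\beta$, and the diagonal of $C$ all remain user-set parameters, as the text requires; any of these functions can serve as the `perturb` argument of the SA loop sketched earlier.

```python
import random
import numpy as np

def perturb_gaussian(q, Sigma):
    """q_new = q + g, with g a zero-mean Gaussian vector of covariance Sigma."""
    return q + np.random.multivariate_normal(np.zeros(len(q)), Sigma)

def perturb_one_component(q, lower, upper):
    """Brooks & Morgan (1995): redraw one randomly chosen component within its bounds."""
    q_new = q.copy()
    i = random.randrange(len(q))
    q_new[i] = random.uniform(lower[i], upper[i])
    return q_new

def perturb_spherical(q, beta):
    """Bohachevsky et al. (1986): q_new = q + beta*theta, with ||theta||_2 = 1."""
    theta = np.random.standard_normal(len(q))
    theta /= np.linalg.norm(theta)          # random unit direction
    return q + beta * theta

def perturb_box(q, c):
    """q_new = q + C U, U ~ U(-1,1)^n; c holds the diagonal of C (max step per component)."""
    U = np.random.uniform(-1.0, 1.0, size=len(q))
    return q + c * U
```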
