4.1 Heuristic Algorithms

As the name suggests, SA simulates the process of annealing (tempering) of a metal (e.g. steel) or glass, and treats each point of the search space as the state of a physical system. The objective function is interpreted as the internal energy at each state, and the algorithm tries to carry the system from an arbitrary initial state to the state with the minimum possible energy.

In order to escape from local minima, the algorithm permits so-called "uphill moves", in which the resulting solution has a worse objective value. As will be shown in the following, the probability of accepting uphill moves generally decreases during the search and is controlled by two factors: the difference between the objective function values and the value of a global, time-varying parameter called "temperature".

The main steps followed by the Simulated Annealing algorithm are:

1. Generation of the initial solution
   The initial solution can be produced either randomly or heuristically. During this phase the temperature parameter is initialized.

2. Fundamental iteration
   At each step the SA algorithm compares the current solution x with a solution x′ randomly sampled from its neighbourhood. The new point x′ is accepted as the new current solution with a probability that generally follows the Boltzmann distribution exp(−(f(x′) − f(x))/T), where f(x) and f(x′) are the values of the objective function at states x and x′ respectively, and T is the temperature parameter.
   Analogously to the physical annealing process, the temperature decreases as the simulation proceeds; at the beginning of the search the probability of accepting uphill moves is therefore high and the algorithm explores the search space (random walk). In a second phase, when the probability of uphill moves decreases (it is inversely proportional to T), the method becomes an iterative improvement algorithm converging to a global (or local) minimum. This step is repeated until the termination condition is satisfied.

3. Convergence to optimum
   As a theoretical result, it can be shown that for any given finite problem, the probability that the algorithm terminates at a global optimal solution approaches one as the annealing schedule is extended.
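To make the procedure above concrete, the following is a minimal Python sketch of the generic SA loop described in steps 1-3. The objective function, the neighbourhood move, and the geometric cooling schedule (factor alpha) are illustrative assumptions chosen for this example, not elements taken from the thesis.

import math
import random

def simulated_annealing(f, x0, neighbour, t0=1.0, t_min=1e-4, alpha=0.95, iters_per_temp=100):
    """Generic simulated annealing sketch.

    f              : objective function to minimise
    x0             : initial solution (step 1)
    neighbour      : function returning a random neighbour of the current solution
    t0, t_min      : initial and final temperature
    alpha          : geometric cooling factor (illustrative assumption)
    iters_per_temp : moves attempted at each temperature level
    """
    x, fx = x0, f(x0)
    best, fbest = x, fx
    t = t0
    while t > t_min:                      # termination condition (end of step 2)
        for _ in range(iters_per_temp):
            x_new = neighbour(x)          # sample from the neighbourhood of x
            fx_new = f(x_new)
            delta = fx_new - fx
            # Always accept improving moves; accept "uphill" moves with
            # probability exp(-delta / T), i.e. the Boltzmann criterion.
            if delta <= 0 or random.random() < math.exp(-delta / t):
                x, fx = x_new, fx_new
                if fx < fbest:
                    best, fbest = x, fx
        t *= alpha                        # cooling: T decreases as the search proceeds
    return best, fbest

# Usage example: minimise a simple one-dimensional function.
if __name__ == "__main__":
    f = lambda x: (x - 3.0) ** 2 + math.sin(5 * x)
    neighbour = lambda x: x + random.uniform(-0.5, 0.5)
    x_best, f_best = simulated_annealing(f, x0=random.uniform(-10, 10), neighbour=neighbour)
    print(f"best x = {x_best:.4f}, f(x) = {f_best:.4f}")

With a high initial temperature the acceptance probability is close to one even for large worsening moves, which reproduces the random-walk behaviour of the first phase; as T shrinks, only small (or no) deteriorations are accepted and the loop behaves like iterative improvement.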
