If the starting point is chosen in a neighbourhood of a minimum point where f is convex, then convergence is guaranteed and the algorithm is effective in the sense of reaching a minimum point. Let us consider what happens for the two test cases. When choosing the starting point x_0 in the middle of the interval [3, 7], the algorithm converges to the closest minimum point for function h and to a maximum point for function g (Table 2).

Table 2. Newton's method for functions h and g, ε = 0.001

                function h                                 function g
  k    x_k      h'(x_k)    h''(x_k)   h(x_k)      x_k      g'(x_k)    g''(x_k)   g(x_k)
  0    5.000    -1.795     43.066     1.301       5.000    -1.795     -4.934     1.301
  1    5.042     0.018     43.953     1.264       4.636     0.820     -7.815     1.511
  2    5.041     0.000     43.944     1.264       4.741    -0.018     -8.012     1.553
  3    5.041     0.000     43.944     1.264       4.739     0.000     -8.017     1.553

This gives rise to the concept of a region of attraction of a minimum point x*: the region of starting points x_0 from which the local search procedure converges to x*. Experimenting further, one can observe that when x_0 is close to a minimum point of g, the algorithm converges to one of the minimum points. Moreover, one should remark that the algorithm requires a safeguard to keep the iterates in the interval [3, 7]; if, for instance, x_{k+1} < 3, it should be forced back to the value 3. In that case the left endpoint x = 3 is also an attraction point of the algorithm. Function h is piecewise convex, so the algorithm always converges to the closest minimum point.
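To make the safeguarded iteration concrete, the following is a minimal Python sketch of Newton's method for one-dimensional minimisation with the interval safeguard described above. The functions h and g are defined earlier in the paper and are not reproduced here; the test function used in the demonstration is a hypothetical multimodal stand-in, so the printed values will not match Table 2, and the helper name newton_min is an invention of this sketch.

import math

def newton_min(df, d2f, x0, lo=3.0, hi=7.0, eps=1e-3, max_iter=50):
    # Newton iteration x_{k+1} = x_k - f'(x_k)/f''(x_k) on [lo, hi].
    # It only seeks a zero of f', so it may converge to a maximum point
    # when f'' < 0 there, exactly the behaviour observed for g in Table 2.
    x = x0
    for _ in range(max_iter):
        fpp = d2f(x)
        if fpp == 0.0:
            return x  # Newton step undefined; stop at the current iterate
        x_next = x - df(x) / fpp
        # Safeguard: force an iterate that leaves [lo, hi] back to the
        # nearest endpoint, which thereby becomes an attraction point itself.
        x_next = min(max(x_next, lo), hi)
        if abs(df(x_next)) < eps:  # same stopping tolerance as in Table 2
            return x_next
        x = x_next
    return x

# Hypothetical stand-in test function (an assumption, not the paper's g):
# f(x) = sin(x) + sin(10x/3), a standard multimodal test function.
df  = lambda x: math.cos(x) + (10.0 / 3.0) * math.cos(10.0 * x / 3.0)
d2f = lambda x: -math.sin(x) - (100.0 / 9.0) * math.sin(10.0 * x / 3.0)

# Map regions of attraction: record where the iteration ends up from
# each starting point on a grid over [3, 7].
for x0 in [3.0 + 0.5 * i for i in range(9)]:
    print(f"x0 = {x0:.1f}  ->  x* = {newton_min(df, d2f, x0):.3f}")

Grouping the starting points by the point they converge to traces out the regions of attraction discussed above.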

4. Summary

One of the targets of education on optimisation algorithms is to teach students to think and analyse critically, and in particular to see through the evolutionary, neural, self-learning and replicating humbug. It is therefore good to start by analysing simple algorithms. This paper aims to serve as an introductory text for that purpose, presenting simple test cases that differ in structure and analysing simple algorithms on them.