
During the next experiment, a value of NT = 3 was assumed. In the second experiment, the DE-ANNT+ algorithm was executed tenfold, and the average values of the results obtained for ϕ = 0.90 and ϕ = 0.99 are presented in Table II (ϕ = 0.90) and in Table III (ϕ = 0.99). The comparative results in both tables are taken from [15] (the structures of the trained ANNs are presented in Fig. 3). The ϕ values were chosen experimentally according to the author's previous experience.

From Tables II and III, it can be seen that, for the thresholds of training correctness ϕ = 0.90 and ϕ = 0.99, the application of the proposed DE-ANNT+ method increased the percentage of correctly classified data in comparison with the DE-ANNT method. In six out of the eight possible cases, better results were obtained using the proposed method than using the DE-ANNT method. The results obtained using the DE-ANNT+ algorithm are also better than the results obtained using the EBP and EA algorithms. Compared with the LM algorithm, the results obtained by the DE-ANNT+ algorithm are better (having a higher percentage of correctly classified data) or comparable in four out of the eight possible cases. It can also be seen from Tables II and III that the number of iterations (NI) of EBP increases as the maximal time increases, but the NI of LM does not. This is caused by the fact that, for the EBP method, the ANN training error did not fall below ε = 0.0001 within the maximal time; therefore, in all cases, the EBP method was stopped only when the maximal time elapsed (i.e., after the same number of iterations for a given time limit), whereas with the LM algorithm, the computations were often stopped before the maximal time was reached.

In the LM algorithm, the memory consumption can be estimated as N², where N is the number of weights in the ANN [29]. In the DE+ algorithm, the memory consumption can be estimated as PopSize · N. If we assume that a floating-point number is represented by 4 bytes, then the estimated memory-consumption values must be multiplied by 4. In Fig. 4, the memory consumption as a function of the parity-p problem (for p ∈ [3; 12]) is presented for the LM and DE+ algorithms.

Fig. 4. Memory consumption in bytes for the DE+ and LM algorithms.
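As an illustration of these estimates, the following Python sketch (not taken from the paper's implementation) tabulates the two memory figures behind Fig. 4. The mapping from p to the number of weights N is an assumption made here for concreteness: a fully connected p-p-1 network with biases, which gives N = (p + 1)² and reproduces the nine weights of the parity-2 example in the Appendix. The population size used for DE+ is likewise only an illustrative value.

```python
# Hedged sketch: estimated memory consumption (in bytes) of the LM and DE+ algorithms
# for parity-p problems, using the estimates N^2 (LM) and PopSize * N (DE+),
# with 4 bytes per floating-point number.

BYTES_PER_FLOAT = 4


def num_weights(p: int) -> int:
    """Assumed network size: fully connected p-p-1 ANN with biases, i.e., N = (p + 1)^2."""
    return (p + 1) ** 2


def lm_memory_bytes(p: int) -> int:
    """LM memory estimate: N^2 floating-point numbers [29]."""
    n = num_weights(p)
    return n * n * BYTES_PER_FLOAT


def de_plus_memory_bytes(p: int, pop_size: int = 30) -> int:
    """DE+ memory estimate: PopSize * N floating-point numbers (PopSize = 30 is assumed)."""
    return pop_size * num_weights(p) * BYTES_PER_FLOAT


if __name__ == "__main__":
    for p in range(3, 13):  # p in [3; 12], the range shown in Fig. 4
        print(f"parity-{p:2d}: LM ~ {lm_memory_bytes(p):8d} B, DE+ ~ {de_plus_memory_bytes(p):6d} B")
```

Because the LM estimate grows quadratically in N while the DE+ estimate grows only linearly, the gap between the two curves widens quickly as p increases, which is the behavior summarized in the conclusion below.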
VII. CONCLUSION

Based on the results shown in Tables II and III, it can be seen that training ANNs using the DE-ANNT+ algorithm increases the efficiency of data classification within the same time period when compared with the EA, EBP, or DE-ANNT algorithms. In comparison with the LM algorithm, the results obtained when using the DE-ANNT+ algorithm are comparable. However, in the case of the DE+ algorithm, memory consumption grows more slowly than in the LM algorithm as the value of p in the parity-p problem increases. It is necessary to point out that the presented algorithm can also easily be used to train multioutput ANNs, ANNs with nonstandard architectures (for example, the tower architecture [16]), and networks with a nondifferentiable neuron activation function, for which applications of the EBP or LM algorithms are not possible. Additionally, the introduction of adaptive changes of the F and CR parameter values, together with the introduction of multiple trial vectors in the presented DE-ANNT+ algorithm, increases the effectiveness of the proposed algorithm in relation to the previously elaborated DE-ANNT algorithm [15]. Also, it is worth noting that the proposed DE-ANNT+ algorithm can be used in many industrial electronics applications in which an ANN is needed. As examples, based on the literature review, the DE-ANNT+ algorithm can be used for the following: state-of-charge estimation in battery string systems [32], shape recognition systems [33], estimation of nonlinear load harmonic currents [34], sensorless control of single-switch-based switched reluctance motor drives [35], and modeling of embedded fuel-cell power generators [36].

APPENDIX
EXAMPLE OF THE DE-ANNT+ METHOD IN OPERATION

It is assumed that the ANN from Fig. 2 is trained for the classification of a parity-2 problem. Therefore, the ANN has nine weight values. Additionally, it is assumed that PopSize = 5, ε = 0.0001, and NT = 2.

In the first step, a population of PopSize = 5 individuals x_i (weight vectors) is randomly created:

x_1 = {0.8; 0.7; 0.6; −0.3; 0.4; −0.5; 0.1; 0.2; −0.3}
x_2 = {−0.4; 0.3; 0.8; −0.9; −0.2; −0.1; 0.1; 0.2; 0.4}
x_3 = {−0.2; 0.2; 0.6; 0.7; 0.9; −0.3; 0.4; 0.5; 0.2}
x_4 = {0.7; −0.8; −0.9; 0.4; 0.5; −0.6; 0.2; −0.1; 0.1}
x_5 = {0.5; −0.5; −0.2; 0.4; −0.7; 0.8; −0.1; 0.3; 0.6}.
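This first step can be sketched in a few lines of Python. The sketch below is an illustration rather than the paper's code; in particular, the initialization interval [−1, 1] is an assumption made only because every weight of the example individuals above lies within it.

```python
import random

POP_SIZE = 5     # PopSize from the example
NUM_WEIGHTS = 9  # number of weights of the parity-2 ANN from Fig. 2


def init_population(pop_size: int = POP_SIZE,
                    num_weights: int = NUM_WEIGHTS,
                    low: float = -1.0,
                    high: float = 1.0) -> list[list[float]]:
    """Randomly create the initial population x_1, ..., x_PopSize.

    The range [low, high] is an assumed initialization interval; the paper does not
    restate it in the Appendix example.
    """
    return [[random.uniform(low, high) for _ in range(num_weights)]
            for _ in range(pop_size)]


# Example: a population with the same size and shape as in the Appendix.
population = init_population()
assert len(population) == POP_SIZE and all(len(x) == NUM_WEIGHTS for x in population)
```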

In the second step, NT = 2 mutated vectors V_i,m are created for each individual x_i, and then a single vector v_i (the one with the lowest value of the objective function ERR(·)) is determined from the trial vectors V_i,m for each individual x_i.

At the beginning, the best individual in the population is determined. Therefore, the value of the objective function ERR(·) is computed for each individual x_i according to (3):

ERR(x_1) = 2.013
ERR(x_2) = 2.017
ERR(x_3) = 2.105
ERR(x_4) = 2.039
ERR(x_5) = 2.087.

It can be seen that the best individual is x_1; therefore, r1 = 1. The two (NT = 2) trial vectors V_i,1 and V_i,2 for the particular individuals x_i obtained using (1) are as follows (in parentheses, the value of the coefficient r1 is given, together with the randomly chosen values of the coefficients r2, r3, and F for the particular individuals x_i):

Individual x_1:
(r1 = 1; r2 = 4; r3 = 5; F = 1.5)
V_1,1 = {1.1; 0.25; −0.45; −0.3; 2.2; −2.6; 0.55; −0.4; −1.05}
(r1 = 1; r2 = 2; r3 = 3; F = 0.8)
V_1,2 = {0.64; 0.78; 0.76; −1.58; −0.48; −0.34; −0.14; −0.04; −0.14}
ERR(V_1,1) = 2.552; ERR(V_1,2) = 1.999

Individual x_2:
(r1 = 1; r2 = 3; r3 = 5; F = 0.7)
V_2,1 = {0.31; 1.19; 1.16; −0.09; 1.52; −1.27; 0.45; 0.34; −0.58}
(r1 = 1; r2 = 4; r3 = 5; F = 0.2)
V_2,2 = {0.84; 0.64; 0.46; −0.3; 0.64; −0.78; 0.16; 0.12; −0.4}
ERR(V_2,1) = 2.186; ERR(V_2,2) = 2.059

Individual x_3:
(r1 = 1; r2 = 2; r3 = 4; F = 0.6)
V_3,1 = {0.14; 1.36; 1.62; −1.08; −0.02; −0.2; 0.04; 0.38; −0.12}
(r1 = 1; r2 = 5; r3 = 4; F = 1.2)
V_3,2 = {0.56; 1.06; 1.44; −0.3; −1.04; 1.18; −0.26; 0.68; 0.3}
ERR(V_3,1) = 2.013; ERR(V_3,2) = 1.999

Individual x_4:
(r1 = 1; r2 = 2; r3 = 3; F = 0.3)
V_4,1 = {0.74; 0.73; 0.66; −0.78; 0.07; −0.44; −0.01; 0.11; −0.24}
(r1 = 1; r2 = 3; r3 = 5; F = 1.6)
V_4,2 = {−0.32; 1.82; 1.88; 0.18; 2.96; −2.26; 0.9; 0.52; −0.94}
ERR(V_4,1) = 2.000; ERR(V_4,2) = 2.533

Individual x_5:
(r1 = 1; r2 = 4; r3 = 2; F = 0.9)
V_5,1 = {1.79; −0.29; −0.93; 0.87; 1.03; −0.95; 0.19; −0.07; −0.57}
(r1 = 1; r2 = 3; r3 = 4; F = 1.7)
V_5,2 = {−0.73; 2.4; 3.15; 0.21; 1.08; 0.01; 0.44; 1.22; −0.13}
ERR(V_5,1) = 1.927; ERR(V_5,2) = 2.583.

Next, from each group of two (NT = 2) vectors V_i,1 and V_i,2 connected to a particular individual x_i, we choose the better vector, i.e., the one having the lower value of the objective function ERR(·). After this selection, the following vectors v_i are assigned to the corresponding vectors x_i:

Individual x_1: v_1 = V_1,2 = {0.64; 0.78; 0.76; −1.58; −0.48; −0.34; −0.14; −0.04; −0.14}
Individual x_2: v_2 = V_2,2 = {0.84; 0.64; 0.46; −0.3; 0.64; −0.78; 0.16; 0.12; −0.4}
Individual x_3: v_3 = V_3,2 = {0.56; 1.06; 1.44; −0.3; −1.04; 1.18; −0.26; 0.68; 0.3}
Individual x_4: v_4 = V_4,1 = {0.74; 0.73; 0.66; −0.78; 0.07; −0.44; −0.01; 0.11; −0.24}
Individual x_5: v_5 = V_5,1 = {1.79; −0.29; −0.93; 0.87; 1.03; −0.95; 0.19; −0.07; −0.57}.
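A compact Python sketch of this mutation step is given below. It is an illustration rather than the paper's implementation: it assumes that (1) has the DE/best/1 form V_i,m = x_r1 + F · (x_r2 − x_r3), with x_r1 the best individual of the current population, which reproduces the trial vectors listed above, and it draws F from an assumed interval instead of using the paper's adaptive F scheme. The objective function err (equation (3), not restated here) is passed in as a callable.

```python
import random


def make_trial_vector(population, best_idx, target_idx, f):
    """One mutated vector by the assumed DE/best/1 rule: V = x_r1 + F * (x_r2 - x_r3)."""
    # r2 and r3 are distinct random indices, different from the best and target individuals.
    candidates = [i for i in range(len(population)) if i not in (best_idx, target_idx)]
    r2, r3 = random.sample(candidates, 2)
    x_r1, x_r2, x_r3 = population[best_idx], population[r2], population[r3]
    return [a + f * (b - c) for a, b, c in zip(x_r1, x_r2, x_r3)]


def best_trial_vectors(population, err, nt=2, f_range=(0.2, 1.7)):
    """For each individual x_i, create NT trial vectors and keep the one with the lowest ERR(.)."""
    best_idx = min(range(len(population)), key=lambda i: err(population[i]))  # r1
    chosen = []
    for i in range(len(population)):
        trials = [make_trial_vector(population, best_idx, i, random.uniform(*f_range))
                  for _ in range(nt)]
        chosen.append(min(trials, key=err))  # v_i = trial vector with the lowest ERR(.)
    return chosen
```

In the worked example, this selection keeps V_1,2, V_2,2, V_3,2, V_4,1, and V_5,1, exactly as listed above.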
In the third step, the individuals x_i are crossed over with their corresponding vectors v_i (see Section IV, step three). As a result of the crossover operation, individuals u_i are created. If we assume that the randomly chosen value of CR is equal to 0.4 and that the set rand of randomly chosen numbers is {0.5; 0.6; 0.9; 0.1; 0.3; 0.3; 0.7; 0.8; 0.6}, then the values of the vectors u_i are as follows (in practice, the value of the CR coefficient and the values in the set rand are chosen randomly for each individual separately; in this example, the same values are used for all individuals to simplify the presentation):

u_1 = {0.8; 0.7; 0.6; −1.58; −0.48; −0.34; 0.1; 0.2; −0.3}
ERR(u_1) = 2.027
u_2 = {−0.4; 0.3; 0.8; −0.3; 0.64; −0.78; 0.1; 0.2; 0.4}
ERR(u_2) = 1.997
u_3 = {−0.2; 0.2; 0.6; −0.3; −1.04; 1.18; 0.4; 0.5; 0.2}
ERR(u_3) = 2.067
u_4 = {0.7; −0.8; −0.9; −0.78; 0.07; −0.44; 0.2; −0.1; 0.1}
ERR(u_4) = 2.024
u_5 = {0.5; −0.5; −0.2; 0.87; 1.03; −0.95; −0.1; 0.3; 0.6}
ERR(u_5) = 2.174.

In the fourth step, a selection of individuals for the new generation is performed (see Section IV, step four). The following individuals {x_1; u_2; u_3; u_4; x_5} are selected for the new generation; for each pair, the vector with the lower value of ERR(·) survives.
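The crossover and selection steps admit an equally short sketch. The Python fragment below is an illustration rather than the paper's code: it assumes the usual binomial DE crossover (the j-th gene is taken from v_i when rand_j ≤ CR, otherwise from x_i), which reproduces the u_i vectors above for CR = 0.4 and the given rand set, together with the greedy selection of step four; the forced crossover position used in some DE variants is omitted because the worked example does not show one.

```python
import random


def crossover(x, v, cr, rand=None):
    """Binomial crossover of step three: take gene j from v when rand_j <= CR, else from x."""
    if rand is None:
        rand = [random.random() for _ in x]  # fresh random numbers if none are supplied
    return [vj if rj <= cr else xj for xj, vj, rj in zip(x, v, rand)]


def select(population, trials, err):
    """Greedy selection of step four: u_i enters the new generation only if it is not worse than x_i."""
    return [u if err(u) <= err(x) else x for x, u in zip(population, trials)]


# Reproducing u_1 from the Appendix example (CR = 0.4, shared rand set):
x1 = [0.8, 0.7, 0.6, -0.3, 0.4, -0.5, 0.1, 0.2, -0.3]
v1 = [0.64, 0.78, 0.76, -1.58, -0.48, -0.34, -0.14, -0.04, -0.14]
rand = [0.5, 0.6, 0.9, 0.1, 0.3, 0.3, 0.7, 0.8, 0.6]
u1 = crossover(x1, v1, cr=0.4, rand=rand)
assert u1 == [0.8, 0.7, 0.6, -1.58, -0.48, -0.34, 0.1, 0.2, -0.3]
```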
