
Application of an Adaptive Differential Evolution Algorithm with Multiple Trial Vectors

The arrangement of this paper is as follows: in Section II, gradient training methods are presented; in Section III, the DE algorithm and its application to ANN training are shown; in Section IV, the properties of the proposed DE-ANNT+ method are described; in Section V, the structure of the assumed ANN and neuron model is presented; in Section VI, the experiments are described; in Section VII, some conclusions are presented; and in the Appendix, an example of the DE-ANNT+ method in operation is described in detail.

II. GRADIENT TRAINING METHODS

To use an ANN for any problem solving, it is first necessary to train the network. Training consists of adapting the free network parameters, that is, properly choosing the neural weight values [21], [22]. Specialized gradient learning algorithms are used to adapt these weight values; among them, the most popular are the error back-propagation (EBP) method [22] and the Levenberg-Marquardt (LM) algorithm [20].

The EBP algorithm is based on the gradient method and permits efficient neural network training for solving difficult problems, which often involve nonseparable data [16]. It is a fundamental supervised training algorithm for multilayer feedforward neural networks.

Unfortunately, the EBP algorithm has several disadvantages. The ones mentioned most often are that a huge number of iterations is required to obtain satisfactory results and that the error function is sensitive to local minima. Its operation also depends on the value of the learning coefficient: when the chosen learning coefficient is too small, the algorithm takes a long time to converge, but when it is too high, the algorithm can oscillate [27] (a schematic update rule illustrating this trade-off is sketched at the end of this section).

Another neural network training algorithm is the LM algorithm [20]. This algorithm modifies the weight values in a grouped (batch) manner, after all the training vectors have been applied. It is one of the most effective training algorithms for feedforward neural networks. However, it also has some disadvantages. The main shortcomings are closely linked to the computation of the error function and the inversion of the Jacobian, which produces a matrix whose dimensions are equal to the total number of weights in the neural network; the memory requirement is therefore very high [23], [24]. The algorithm is also local, and there is no guarantee of finding the global minimum of the objective function. When the algorithm converges to a local minimum, there is no way to escape, and the solution obtained is not optimal [28].

Because of the disadvantages of both described methods, research on different optimization techniques dedicated to ANN training is still required. An application of an adaptive DE algorithm with multiple trial vectors to the training of an ANN, as described in this paper, is therefore well founded. The proposed DE-ANNT+ algorithm requires fewer iterations than the EBP algorithm, it does not oscillate, it requires less memory than the LM algorithm, and neurons with nondifferentiable activation functions can be used in the ANN (whereas the EBP and LM algorithms require neurons with differentiable activation functions).
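To make the role of the learning coefficient concrete, the schematic sketch below shows the plain gradient-descent weight update on which EBP is based; it is an illustration only, and the gradient routine grad_err is a hypothetical placeholder, not the back-propagation procedure of [22].

    # Schematic gradient-descent update of a weight vector. `grad_err` is
    # a hypothetical helper returning the gradient of the training error
    # with respect to each weight. A too-small learning coefficient `lr`
    # makes training very slow; a too-large one can overshoot the minimum,
    # so the error oscillates instead of decreasing [27].
    def gradient_step(weights, grad_err, lr):
        gradient = grad_err(weights)
        return [w - lr * g for w, g in zip(weights, gradient)]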
III. BACKGROUND

A. DE Algorithm

The DE algorithm was proposed by Price and Storn [1]. The DE algorithm has the following advantages over the traditional genetic algorithm: it is easy to use, and it has efficient memory utilization, lower computational complexity (it scales better when handling large problems), and lower computational effort (faster convergence) [26]. DE is quite effective in nonlinear constraint optimization and is also useful for optimizing multimodal problems [25]. Its pseudocode form is as follows:

a) Create an initial population consisting of PopSize individuals
b) While (termination criterion is not satisfied) Do Begin
c)   For each ith individual in the population Begin
d)     Randomly generate three integer numbers r1, r2, r3 ∈ [1; PopSize], where r1 ≠ r2 ≠ r3 ≠ i
e)     For each jth gene in the ith individual (j ∈ [1; n]) Begin
         v_{i,j} = x_{r1,j} + F · (x_{r2,j} − x_{r3,j})    (1)
f)       Randomly generate one real number rand_j ∈ [0; 1)
g)       If rand_j < CR then u_{i,j} = v_{i,j} Else u_{i,j} = x_{i,j}
       End
h)     If the child u_i is not worse than the parent x_i (it has a lower or equal objective function value), then replace x_i with u_i in the population
     End
   End

where F ∈ [0; 2) and CR ∈ [0; 1) are the control parameters of the algorithm, and r1, r2, r3, i ∈ [1; PopSize] fulfill the constraint

r1 ≠ r2 ≠ r3 ≠ i.    (2)
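For illustration, here is a minimal Python sketch of one generation of the textbook DE loop above; it assumes a generic objective function err to be minimized and fixed control parameters F and CR (the adaptive selection used in DE-ANNT+ is introduced in Section IV).

    import random

    # One DE generation following the pseudocode: mutation (1), binomial
    # crossover with rate CR, and selection limited to parent and child.
    def de_generation(pop, err, F=0.5, CR=0.9):
        pop_size, n = len(pop), len(pop[0])
        new_pop = []
        for i, x in enumerate(pop):
            # d) three distinct random indices r1, r2, r3, all different from i
            r1, r2, r3 = random.sample([r for r in range(pop_size) if r != i], 3)
            u = []
            for j in range(n):
                v_j = pop[r1][j] + F * (pop[r2][j] - pop[r3][j])  # e) mutation (1)
                # f)-g) take the mutated gene with probability CR
                u.append(v_j if random.random() < CR else x[j])
            # h) local selection between parent x_i and child u_i
            new_pop.append(u if err(u) <= err(x) else x)
        return new_pop

    # Example: a few generations on the sphere function.
    # pop = [[random.uniform(-1, 1) for _ in range(5)] for _ in range(20)]
    # for _ in range(100):
    #     pop = de_generation(pop, lambda v: sum(c * c for c in v))

In the accelerated variant discussed below, r1 would instead be fixed to the index of the best individual in the population.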

An important property of this algorithm is that the range of mutation adapts automatically to the landscape of the optimized function [1], [10]. Another important property is the local limitation of the selection operator to only two individuals (the parent x_i and the child u_i); owing to this property, the selection operator is more effective and faster [10]. Also, to accelerate the convergence of the algorithm, it is assumed that the index r1 (occurring in the algorithm pseudocode) points to the best individual in the population.

B. DE Algorithm in ANN Training

In the literature, we can find several applications of the DE algorithm to ANN training, for example, [30] and [31]. In these papers, the DE algorithm was used in ANN training without adaptive selection of the control parameters; therefore, the main problem was the tuning of the algorithm parameters. This problem was overcome in [15], in which the adaptive DE algorithm [11] was used in ANN training (the DE-ANNT algorithm). Due to the use of DE-ANNT, tuning of the primary DE parameters, such as F and CR, is not needed.

IV. DE-ANNT+ METHOD

The proposed DE-ANNT+ method is based on the previously elaborated DE-ANNT method [15] and operates according to the following steps.

In the first step, a population of individuals is randomly created. The number of individuals in the population is stored in the parameter PopSize. Each individual x_i consists of k genes, where k is the number of weights in the trained ANN. Fig. 1(a) shows a part of an ANN with neurons numbered from n to m, and Fig. 1(b) shows the scheme for coding the weights connected to these neurons into an individual x_i.

[Fig. 1. Part of (a) an ANN, corresponding to (b) its chromosome containing the weight values; the weights w_{i,0} represent bias weights [15].]

Each jth gene (j ∈ [1; k]) of an individual x_i can take values from a determined (closed on both sides) range of variability from min_j to max_j. In the proposed method, the values min_j = −1 and max_j = 1 are assumed.

In the second step, NT (number of trial vectors) mutated individuals (trial vectors) V_{i,m} (m ∈ [1; NT]) are created for each individual x_i in the population, according to formula (1) applied at the level of whole vectors:

V_{i,m} = x_{r1} + F · (x_{r2} − x_{r3}).

The indexes r2 and r3 point to individuals randomly chosen from the population. The index r1 points to the best individual in the population, that is, the one with the lowest value of the training error function ERR(.). This function is defined as

ERR = (1/2) · Σ_{i=1}^{T} (Correct_i − Answer_i)^2    (3)

where i is the index of the current training vector, T is the number of all training vectors, Correct_i is the required correct answer for the ith training vector, and Answer_i is the answer generated by the neural network when the ith training vector is applied to its input. The DE-ANNT+ method minimizes the value of the objective function ERR(.).

From the created set of mutated vectors V_{i,m}, only the one vector (individual) with the lowest value of the objective function ERR(.) is chosen for each individual x_i, and it is assigned as the vector v_i.
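To make the second step concrete, here is a minimal Python sketch under stated assumptions: err_weights implements the error function (3), with the hypothetical helper net_answer standing in for the response of the ANN encoded by a weight vector (the actual network model is given in Section V), and err below is assumed to be a one-argument closure over the training data.

    import random

    # Training error (3): ERR = 1/2 * sum_{i=1..T} (Correct_i - Answer_i)^2.
    # `net_answer(weights, x)` is a hypothetical stand-in for the answer of
    # the ANN whose weights are encoded in `weights`.
    def err_weights(weights, training_set, net_answer):
        return 0.5 * sum((correct - net_answer(weights, x)) ** 2
                         for x, correct in training_set)

    # Second step of DE-ANNT+ as described above: r1 is fixed to the best
    # individual, NT trial vectors are built per parent x_i, and the trial
    # with the lowest ERR is kept as v_i.
    def best_trial_vectors(pop, err, best, F, NT):
        pop_size, k = len(pop), len(pop[0])
        v = []
        for i in range(pop_size):
            # r2, r3 are random and, per constraint (2), differ from i and r1
            candidates = [r for r in range(pop_size) if r not in (i, best)]
            trials = []
            for _ in range(NT):
                r2, r3 = random.sample(candidates, 2)
                trials.append([pop[best][j] + F * (pop[r2][j] - pop[r3][j])
                               for j in range(k)])
            v.append(min(trials, key=err))
        return v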
In the third step, all individuals x_i are crossed over with their mutated individuals v_i; as a result of this crossover operation, an individual u_i is created. The crossover operates as follows: for a chosen individual x_i = (x_{i,1}, x_{i,2}, x_{i,3}, ..., x_{i,k}) and individual v_i = (v_{i,1}, v_{i,2}, v_{i,3}, ..., v_{i,k}), for each gene j ∈ [1; k] of individual x_i, randomly generate a number rand_j from the range [0; 1) and apply the following rule:

If rand_j < CR then u_{i,j} = v_{i,j}
Else u_{i,j} = x_{i,j}

where CR ∈ [0; 1).

In this paper, an adaptive selection of the control parameter values F and CR is introduced (similarly as in [11]), according to the formulas

A = TheBest_i / TheBest_{i−1}    (4)
F = 2 · A · random    (5)
CR = A · random    (6)

where random is a random number with a uniform distribution in the range [0; 1), TheBest_i is the value of the objective function for the best solution in the ith generation, and TheBest_{i−1} is the value of the objective function for the best solution in the (i − 1)th generation.

From (5) and (6), we can see that, in the case of stagnation (no change in the best solution, so A = 1), the F parameter takes random values from the range [0; 2), and the CR parameter takes random values from the range [0; 1). In such a case, the search of the solution space has a more global character, and the DE algorithm can more easily escape from the local extremum that is causing the stagnation. However, when the results obtained by the DE algorithm improve in successive generations, the F parameter takes random values from the range [0; 2 · A), and the CR parameter takes random values from the range [0; A). Obviously, the value of A is then lower than 1, so both ranges narrow and the search of the solution space takes on a more local character.
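As a compact illustration of this adaptive rule, the following sketch implements (4)-(6); it assumes the two TheBest values are positive, which holds for the error function (3) whenever the training error is nonzero.

    import random

    # Adaptive selection of the DE control parameters following (4)-(6).
    # Under stagnation, TheBest_i == TheBest_{i-1}, so A = 1 and F, CR are
    # drawn from their widest ranges [0; 2) and [0; 1); on improvement,
    # A < 1 and both ranges shrink, making the search more local.
    def adapt_control_parameters(the_best_i, the_best_prev):
        A = the_best_i / the_best_prev   # (4)
        F = 2.0 * A * random.random()    # (5): F in [0; 2 * A)
        CR = A * random.random()         # (6): CR in [0; A)
        return F, CR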
