

Figure 4: Multi-layer feed-forward artificial neural network

2. Artificial Neural Network Modeling

The multi-layer feed-forward ANN of Fig. 4 consists of three parts: the input layer, the hidden layer, and the output layer. The neurons between the layers are connected by links having synaptic weights. The error back-propagation training algorithm (EBPTA) is based on weight updates so as to minimize the sum of squared error over the $n$ output neurons, given for the $p$th pattern as

$$E_p = \frac{1}{2}\sum_{k=1}^{n}\left(d_{pk} - y_{pk}\right)^2 \qquad (2)$$

where $d_{pk}$ is the desired output and $y_{pk}$ the actual output of the $k$th output neuron for the $p$th pattern. The weights of the links are updated as

$$\Delta w_{ji}(t) = \eta\,\delta_j\,y_i + \alpha\,\Delta w_{ji}(t-1) \qquad (3)$$

where $t$ is the learning step, $\eta$ is the learning rate and $\alpha$ is the momentum constant. In Eq. (3), $\delta_j$ is the error term, which is given as follows:

$$\delta_j =
\begin{cases}
(d_j - y_j)\,y_j\,(1 - y_j), & j \text{ an output neuron},\\[4pt]
y_j\,(1 - y_j)\displaystyle\sum_{k=1}^{n}\delta_k\,w_{kj}, & j = 1,\dots,m \text{ a hidden neuron},
\end{cases} \qquad (4)$$

where $m$ is the number of neurons in the hidden layer.
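As a quick numerical illustration of Eq. (3) (with hypothetical values, not data from the present study): taking $\eta = 0.05$ and $\alpha = 0.85$ (the learning factors used later for training), an error term $\delta_j = 0.1$, a sending-neuron output $y_i = 0.6$, and a previous update $\Delta w_{ji}(t-1) = 0.02$ gives

$$\Delta w_{ji}(t) = 0.05 \times 0.1 \times 0.6 + 0.85 \times 0.02 = 0.003 + 0.017 = 0.02,$$

so with a large momentum constant the previous step contributes most of the update when successive steps point in the same direction.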

The training process is initialized by assigning small random weight values to all the links. The input-output patterns are presented one by one, and the weights are updated each time. The mean square error (MSE) at the end of each epoch, due to all patterns, is computed as

$$\mathrm{MSE} = \frac{1}{P}\sum_{p=1}^{P} E_p \qquad (5)$$

where $P$ is the number of training patterns. The training process is terminated when the specified MSE goal or the maximum number of epochs is reached. Before training and validation, all input and output data were normalized to increase the accuracy and the speed of the network.
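A minimal sketch of such a normalization step, assuming simple min-max scaling to [0, 1] (the paper does not state the exact mapping used; the function names and sample values below are illustrative only):

```python
import numpy as np

def minmax_normalize(A):
    """Scale each column of A to [0, 1]; also return the (min, max)
    pair needed to map predictions back to engineering units."""
    lo, hi = A.min(axis=0), A.max(axis=0)
    return (A - lo) / (hi - lo), (lo, hi)

def minmax_restore(An, lo, hi):
    """Undo minmax_normalize on network outputs."""
    return An * (hi - lo) + lo

# Hypothetical patterns: columns stand for current, wheel speed,
# pulse on-time and duty factor (values invented for illustration).
X = np.array([[ 6.0, 40.0, 20.0, 0.50],
              [ 9.0, 60.0, 30.0, 0.66],
              [12.0, 80.0, 40.0, 0.75]])
Xn, (lo, hi) = minmax_normalize(X)   # every column now lies in [0, 1]
```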

3. Artificial Neural Network Training

The training of the ANN for the 9 input-output patterns has been carried out using the neural network (NN) toolbox available in the MATLAB software. First, the neural network architecture has to be decided. The general network is taken as 4–n–2, which implies four neurons in the input layer, n neurons in the hidden layer and two neurons in the output layer. In the present study, 4 neurons in the input layer (corresponding to the 4 process inputs: current, wheel speed, pulse on-time, and duty factor), 2 neurons in the output layer (corresponding to the 2 outputs, MRR and $R_a$), and one hidden layer of 23 neurons were employed. The following learning factors are used to successfully train the network (a code sketch using these settings follows the list):

• Learning rate (η) = 0.05; momentum factor (α) = 0.85;

• Maximum number of epochs = 5000; tolerance for MSE = 0.00001.
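Below is a self-contained NumPy sketch of this setup; a rough illustration, not the authors' MATLAB code. The sigmoid activations, the random placeholder data standing in for the nine experimental patterns, and all names are assumptions; only the 4–23–2 architecture and the learning factors listed above come from the text.

```python
import numpy as np

rng = np.random.default_rng(42)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Architecture: 4 inputs, 23 hidden neurons, 2 outputs (MRR, Ra).
n_in, n_hid, n_out = 4, 23, 2
eta, alpha = 0.05, 0.85              # learning rate, momentum constant
max_epochs, mse_goal = 5000, 1e-5

# Small random initial weights (bias folded in as an extra column).
W1 = rng.uniform(-0.5, 0.5, (n_hid, n_in + 1))
W2 = rng.uniform(-0.5, 0.5, (n_out, n_hid + 1))
dW1_prev = np.zeros_like(W1)
dW2_prev = np.zeros_like(W2)

# Placeholder for the 9 normalized input-output training patterns.
X = rng.random((9, n_in))
D = rng.random((9, n_out))

for epoch in range(max_epochs):
    sse = 0.0
    for x, d in zip(X, D):               # patterns presented one by one
        xb = np.append(x, 1.0)           # input plus bias term
        y_hid = sigmoid(W1 @ xb)         # hidden-layer outputs
        yb = np.append(y_hid, 1.0)
        y_out = sigmoid(W2 @ yb)         # output-layer outputs

        # Error terms, Eq. (4): output layer, then hidden layer.
        delta_out = (d - y_out) * y_out * (1.0 - y_out)
        delta_hid = y_hid * (1.0 - y_hid) * (W2[:, :-1].T @ delta_out)

        # Weight updates with momentum, Eq. (3).
        dW2 = eta * np.outer(delta_out, yb) + alpha * dW2_prev
        dW1 = eta * np.outer(delta_hid, xb) + alpha * dW1_prev
        W2 += dW2
        W1 += dW1
        dW2_prev, dW1_prev = dW2, dW1

        sse += 0.5 * np.sum((d - y_out) ** 2)   # Eq. (2), this pattern

    mse = sse / len(X)                   # Eq. (5)
    if mse <= mse_goal:                  # stop at the MSE goal
        print(f"goal reached after {epoch + 1} epochs, MSE = {mse:.2e}")
        break
```

With the real normalized patterns substituted for X and D, the loop terminates either at the MSE goal or after 5000 epochs, mirroring the stopping criteria of Section 2.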

The ANN training simulation was carried out using the variable learning rate training procedure "traingdx" of the MATLAB NN toolbox. This procedure improves the performance of the EBPTA by allowing the learning rate to change based on the complexity of the local error surface. Fig. 5 shows the variation of the MSE during training. In the present study, the desired MSE was achieved after 1466 epochs.
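The adaptive rule behind such variable learning-rate procedures can be sketched as follows; a simplified illustration in Python, not traingdx's actual implementation, and the factor values are typical defaults assumed here:

```python
def adapt_learning_rate(lr, mse_new, mse_old,
                        lr_inc=1.05, lr_dec=0.7, max_perf_inc=1.04):
    """Variable learning-rate rule in the spirit of traingdx.

    If the epoch's MSE grew by more than max_perf_inc, the step was
    too aggressive: signal the caller to discard it and shrink lr.
    If the MSE fell, grow lr to speed travel across flat regions.
    Returns (new_lr, accept_step).
    """
    if mse_new > mse_old * max_perf_inc:
        return lr * lr_dec, False        # reject the step, slow down
    if mse_new < mse_old:
        return lr * lr_inc, True         # accept the step, speed up
    return lr, True                      # accept the step, keep lr

# Example: an epoch that raised the MSE from 0.010 to 0.012.
lr, accept = adapt_learning_rate(0.05, 0.012, 0.010)
# -> lr shrinks to 0.035 and the weight update is rolled back.
```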

