Chapter 5
Deep Learning and Neural Networks
Backpropagation
Backpropagation, used in conjunction with an optimization method such as
gradient descent, is a common method of training artificial neural
networks. The method computes the error at the output layer, propagates
it backward toward the input layer, and then updates the weights as
a function of that error, the inputs, and the learning rate. The goal is to
minimize the error as far as possible.
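To make the procedure concrete, here is a minimal sketch of backpropagation for a single-hidden-layer network written with NumPy. The layer sizes, learning rate, random seed, and the XOR training data are illustrative assumptions, not values taken from the book's examples.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy data: learn XOR (4 samples, 2 features, 1 output).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialized weights and biases for the
# input->hidden and hidden->output layers.
W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)
lr = 0.5  # learning rate

for epoch in range(10000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)      # hidden activations
    out = sigmoid(h @ W2 + b2)    # network output

    # Backward pass: error term at the output layer ...
    delta_out = (out - y) * out * (1 - out)
    # ... propagated back through the weights to the hidden layer.
    delta_h = (delta_out @ W2.T) * h * (1 - h)

    # Gradient-descent updates: a function of error, input, and learning rate.
    W2 -= lr * h.T @ delta_out
    b2 -= lr * delta_out.sum(axis=0)
    W1 -= lr * X.T @ delta_h
    b1 -= lr * delta_h.sum(axis=0)

print(np.round(out, 2))  # should approach [[0], [1], [1], [0]]

Running the script shows the squared error shrinking as the weight updates accumulate, which is exactly the minimization the text describes.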
Backpropagation Approach
Problems like the noisy image-to-ASCII example are challenging to solve
with a computer because of a basic incompatibility between the
machine and the problem. Modern computer systems are built
to perform mathematical and logical operations at speeds far beyond
the capability of humans. Even a relatively unsophisticated desktop
computer can perform a massive number of numeric comparisons or
combinations every second.
The problem lies in the inherently sequential nature of the computer.
The “fetch-execute” cycle of the von Neumann architecture allows the
machine to perform only one operation at a time. Ordinarily, the time
required by the computer to perform each instruction is so short that the
total time required for even a large program is negligible to users.
What such problems demand instead is a processing system that can evaluate all
the pixels in the image in parallel; one such system is the backpropagation
network (BPN).
Generalized Delta Rule
I will now introduce the backpropagation learning procedure for
learning internal representations. A neural network is termed a
mapping network if it can compute some functional
relationship between its input and its output.
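For reference, here is the standard textbook statement of the generalized delta rule; this is a common formulation supplied as an assumption, since the text does not spell it out. For a weight w_{ji} from unit i to unit j, the update is

\Delta w_{ji} = \eta \, \delta_j \, o_i

where \eta is the learning rate and o_i is the output of unit i. For an output unit, \delta_j = (t_j - o_j)\, f'(\mathrm{net}_j), where t_j is the target value and \mathrm{net}_j is the weighted input to unit j; for a hidden unit, \delta_j = f'(\mathrm{net}_j) \sum_k \delta_k w_{kj}, so the error terms \delta_k of the layer above are propagated backward through the weights. This is the rule the NumPy sketch above implements for the sigmoid activation, where f'(\mathrm{net}_j) = o_j (1 - o_j).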