Fault Detection and Diagnostics for Rooftop Air Conditioners

2.2.3 Back-propagation neural network

Neural networks are composed of simple elements operating in parallel. These elements are inspired by biological nervous systems. As in nature, the network function is determined largely by the connections between elements. A neural network can be trained to perform a particular function by adjusting the values of the connections between elements. Commonly, neural networks are adjusted, or trained, so that a particular input leads to a specific target output.

Back-propagation (BP) neural networks are used most often, and were created by generalizing the Widrow-Hoff learning rule to multiple-layer networks and nonlinear differentiable transfer functions. Input vectors and the corresponding output vectors are used to train a network until it can approximate a function or associate input vectors with specific output vectors. Networks with biases, a sigmoid layer, and a linear output layer are capable of approximating any function with a finite number of discontinuities. Standard backpropagation is a gradient descent algorithm, as is the Widrow-Hoff learning rule; the term backpropagation refers to the manner in which the gradient is computed for nonlinear multilayer networks. Properly trained backpropagation networks tend to give reasonable answers when presented with inputs they have never seen: a new input typically leads to an output similar to the correct output for the training inputs it most resembles. This generalization property makes it possible to train a network on only a representative set of input/output pairs.
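As a concrete illustration of the description above, the following sketch trains such a network, one sigmoid hidden layer with biases and a linear output layer, by standard gradient-descent backpropagation on input/output pairs. The layer sizes, toy data, and learning rate are illustrative choices, not values from this report.

```python
# Minimal backpropagation sketch: sigmoid hidden layer, linear output layer,
# plain gradient descent on mean squared error. Illustrative values only.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy input/output pairs: approximate y = sin(x) on [-pi, pi].
X = np.linspace(-np.pi, np.pi, 64).reshape(-1, 1)
Y = np.sin(X)

n_in, n_hidden, n_out = 1, 8, 1
W1 = rng.normal(scale=0.5, size=(n_in, n_hidden));  b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.5, size=(n_hidden, n_out)); b2 = np.zeros(n_out)

lr = 0.5
for _ in range(5000):
    # Forward pass: sigmoid hidden layer, then linear output layer.
    H = sigmoid(X @ W1 + b1)
    Y_hat = H @ W2 + b2

    # Backward pass: propagate the error gradient layer by layer (chain rule).
    dY = (Y_hat - Y) / len(X)            # gradient of the mean squared error
    dW2 = H.T @ dY;  db2 = dY.sum(axis=0)
    dH = (dY @ W2.T) * H * (1.0 - H)     # chain rule through the sigmoid
    dW1 = X.T @ dH;  db1 = dH.sum(axis=0)

    # Move weights and biases along the negative gradient.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

H = sigmoid(X @ W1 + b1)
print("final MSE:", float(np.mean((H @ W2 + b2 - Y) ** 2)))
```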

The oldest training algorithm is plain gradient descent, in which the weights and biases are moved in the direction of the negative gradient of the performance function. However, this algorithm is often too slow for practical problems, so many improved algorithms, such as variable learning rate, resilient backpropagation, conjugate gradient, and reduced-memory Levenberg-Marquardt, have been proposed to increase training speed and reduce memory requirements. Another problem that occurs during neural network training is overfitting: the error on the training set is driven to a very small value, but when new data is presented to the network the error is large. The network has memorized the training examples but has not learned to generalize to new situations.
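One common guard against overfitting is early stopping against a held-out validation set. The excerpt does not prescribe a specific remedy, so the following sketch is an assumption for illustration: training halts once validation error stops improving, and the best weights seen are restored.

```python
# Early-stopping sketch (illustrative, not the report's method): stop training
# when error on held-out validation data stalls, before the network memorizes
# the training set.
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Noisy toy data, split into training and validation sets.
X = rng.uniform(-np.pi, np.pi, size=(80, 1))
Y = np.sin(X) + rng.normal(scale=0.1, size=X.shape)
Xt, Yt, Xv, Yv = X[:60], Y[:60], X[60:], Y[60:]

W1 = rng.normal(scale=0.5, size=(1, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.5, size=(16, 1)); b2 = np.zeros(1)

def val_mse():
    return float(np.mean((sigmoid(Xv @ W1 + b1) @ W2 + b2 - Yv) ** 2))

best_val, best, stall, patience = np.inf, None, 0, 200
for epoch in range(20000):
    # One gradient-descent step on the training set (as in the previous sketch).
    H = sigmoid(Xt @ W1 + b1)
    dY = (H @ W2 + b2 - Yt) / len(Xt)
    dH = (dY @ W2.T) * H * (1.0 - H)
    W2 -= 0.5 * (H.T @ dY); b2 -= 0.5 * dY.sum(axis=0)
    W1 -= 0.5 * (Xt.T @ dH); b1 -= 0.5 * dH.sum(axis=0)

    val = val_mse()
    if val < best_val:                 # validation error still improving
        best_val, stall = val, 0
        best = (W1.copy(), b1.copy(), W2.copy(), b2.copy())
    else:
        stall += 1
        if stall >= patience:          # stalled: stop before overfitting sets in
            break

W1, b1, W2, b2 = best                  # restore the best weights seen
print(f"stopped at epoch {epoch}, validation MSE {best_val:.4f}")
```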

Since BP artificial neural networks can be used to build classifiers that directly classify input vectors, they are often used to build both the model and the classifier as a whole for FDD use, as sketched below.
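The following sketch shows the shape of such a direct classifier; the feature names and fault classes are invented for illustration and are not taken from this report.

```python
# Hypothetical FDD classifier sketch: a trained BP network maps a measured
# symptom vector directly to a fault class, so model and classifier are one
# network. Fault classes and features below are invented for illustration.
import numpy as np

FAULTS = ["normal", "refrigerant leak", "condenser fouling", "evaporator fouling"]

def diagnose(x, W1, b1, W2, b2):
    """x: symptom vector, e.g. deviations of suction temperature, discharge
    pressure, superheat, and subcooling from their expected values."""
    h = 1.0 / (1.0 + np.exp(-(x @ W1 + b1)))  # sigmoid hidden layer
    scores = h @ W2 + b2                       # one linear output per fault class
    return FAULTS[int(np.argmax(scores))]      # most strongly indicated fault
```

The weights would be trained as in the earlier backpropagation sketch, with one-hot target vectors marking the correct fault class for each training example.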

In a paper by Li and Hvaezi-Nejad (1996), an artificial neural network (ANN) prototype for fault detection and diagnosis (FDD) in complex heating systems was presented. The

