Fault Detection and Diagnostics for Rooftop Air Conditioners

Table 2.1 Best model orders for 3D polynomial fits for experimental data

Variable   Best Model to Use             RMS Error (F)   Maximum Error (F)
T_evap     1st order                     0.49            0.99
T_sh       3rd order with cross terms    1.39            3.03
T_hg       3rd order with cross terms    1.00            3.24
T_cond     1st order                     0.31            0.61
T_sc       2nd order with cross terms    0.46            1.39
ΔT_ca      1st order                     0.18            0.48
ΔT_ea      2nd order with cross terms    0.23            0.56
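A model-order comparison of the kind summarized in Table 2.1 can be sketched as follows. This is an illustrative reconstruction, not the study's code: the synthetic data, temperature ranges, and the helper `fit_poly2d` are assumptions, and only the general procedure (least-squares fits of increasing order, compared by RMS and maximum error) follows the table.

```python
import numpy as np

def fit_poly2d(x1, x2, y, order, cross_terms=False):
    """Least-squares polynomial fit y = p(x1, x2) of a given order.

    Builds a design matrix from pure powers of each input and, optionally,
    cross terms x1**i * x2**j with total degree <= order, then solves
    the normal problem with numpy's lstsq.
    """
    cols = [np.ones_like(x1)]
    for k in range(1, order + 1):
        cols.append(x1 ** k)
        cols.append(x2 ** k)
    if cross_terms:
        for i in range(1, order + 1):
            for j in range(1, order + 1 - i):
                cols.append((x1 ** i) * (x2 ** j))
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    rms = np.sqrt(np.mean(resid ** 2))      # RMS error, same units as y
    max_err = np.max(np.abs(resid))         # maximum absolute error
    return coef, rms, max_err

# Synthetic example: a surface that is nearly linear in both inputs,
# so a 1st-order model should already fit well (cf. T_evap, T_cond).
rng = np.random.default_rng(0)
x1 = rng.uniform(60.0, 100.0, 200)   # hypothetical ambient temperature, F
x2 = rng.uniform(60.0, 80.0, 200)    # hypothetical return-air temperature, F
y = 0.5 * x1 + 0.2 * x2 + rng.normal(0.0, 0.3, 200)

for order in (1, 2, 3):
    _, rms, max_err = fit_poly2d(x1, x2, y, order, cross_terms=(order > 1))
    print(f"order {order}: RMS = {rms:.2f} F, max = {max_err:.2f} F")
```

For each output variable, the lowest-order model whose errors are acceptable would be selected, which is how entries such as "1st order" versus "3rd order with cross terms" arise.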

2.2.2 General Regression Neural Network

Specht [1991] described a memory-based network, the general regression neural network (GRNN), which is a one-pass learning algorithm with a highly parallel structure. This approach eliminates the need to assume a specific functional form for the model. Rather, it allows the appropriate form to be expressed as a probability density function (pdf) that is determined empirically from the observed data using nonparametric estimators. Thus, the approach is not limited to any particular form and requires no prior knowledge of the appropriate form. In addition, the resulting regression equation can be implemented in a parallel, neural-network-like structure. Since the parameters of the structure are determined directly from examples rather than iteratively, the structure "learns" and can begin to generalize immediately. Because the GRNN idea and algorithm are adopted in our FDD modeling, the derivation of GRNN is repeated here, and some intermediate steps omitted from the original derivation are added to make it easier to follow.
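The memory-based, one-pass character described above can be sketched in code. With Gaussian Parzen kernels, the GRNN estimate reduces to a kernel-weighted average of the stored training targets, so "training" is a single pass that simply memorizes the samples. This is a minimal illustration under those assumptions, not the thesis's implementation; the bandwidth `sigma` and the sample data are chosen arbitrarily.

```python
import numpy as np

class GRNN:
    """Minimal sketch of a general regression neural network (Specht-style).

    With Gaussian Parzen kernels, the estimate of E[y | X] becomes a
    kernel-weighted average of the stored training targets.
    """

    def __init__(self, sigma=0.5):
        self.sigma = sigma  # smoothing (bandwidth) parameter

    def fit(self, X, y):
        # One-pass learning: store the examples directly.
        self.X = np.asarray(X, dtype=float)
        self.y = np.asarray(y, dtype=float)
        return self

    def predict(self, Xq):
        Xq = np.atleast_2d(np.asarray(Xq, dtype=float))
        out = np.empty(len(Xq))
        for i, xq in enumerate(Xq):
            d2 = np.sum((self.X - xq) ** 2, axis=1)    # squared distances to stored points
            w = np.exp(-d2 / (2.0 * self.sigma ** 2))  # Gaussian kernel weights
            out[i] = np.dot(w, self.y) / np.sum(w)     # weighted average of stored targets
        return out

# Usage: recover a smooth 1-D relationship from stored samples.
X = np.linspace(0.0, 3.0, 50).reshape(-1, 1)
y = np.sin(X).ravel()
model = GRNN(sigma=0.2).fit(X, y)
print(model.predict([[1.5]]))  # close to sin(1.5)
```

Note that no iterative weight optimization occurs anywhere: the network can generalize immediately after `fit`, which is the property emphasized above.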

Assume that f(X, y) represents the known joint continuous probability density function of a vector random variable, X, and a scalar random variable, y. The conditional mean of y given X (also called the regression of y on X) is given by

E[y \mid X] = \frac{\int_{-\infty}^{\infty} y\, f(X, y)\, dy}{\int_{-\infty}^{\infty} f(X, y)\, dy}    (1)

When the density f(X, y) is not known, it must usually be estimated from a sample of observations of X and y. Here the consistent estimators proposed by Parzen (1962) are adopted. These estimators are a good choice for estimating the probability density
