

$$Y = \begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{bmatrix} \tag{13}$$

$$X = \begin{bmatrix} 1 & x_{11} & x_{12} & \cdots & x_{1k} \\ 1 & x_{21} & x_{22} & \cdots & x_{2k} \\ 1 & x_{31} & x_{32} & \cdots & x_{3k} \\ \vdots & \vdots & \vdots & & \vdots \\ 1 & x_{n1} & x_{n2} & \cdots & x_{nk} \end{bmatrix} \tag{14}$$

$$\beta = \begin{bmatrix} \beta_0 \\ \beta_1 \\ \vdots \\ \beta_k \end{bmatrix} \tag{15}$$

and

$$\epsilon = \begin{bmatrix} \epsilon_1 \\ \epsilon_2 \\ \vdots \\ \epsilon_n \end{bmatrix} \tag{16}$$

In general, $y$ is an $(n \times 1)$ vector of the observations, $X$ is an $(n \times p)$ matrix of the levels of the independent variables, $\beta$ is a $(p \times 1)$ vector of the regression coefficients, and $\epsilon$ is an $(n \times 1)$ vector of random errors. We wish to find the vector of least squares estimators, $\hat{\beta}$, that minimizes

$$L = \sum_{i=1}^{n} \epsilon_i^2 = (y - X\beta)'(y - X\beta) \tag{17}$$

Note that $L$ may be expressed as

$$L = Y'Y - \beta'X'Y - Y'X\beta + \beta'X'X\beta = Y'Y - 2\beta'X'Y + \beta'X'X\beta \tag{18}$$

because $\beta'X'Y$ is a $(1 \times 1)$ matrix and its transpose $(\beta'X'Y)' = Y'X\beta$ is the same matrix. The least squares estimators must satisfy

$$\frac{\partial L}{\partial \hat{\beta}} = -2X'Y + 2X'X\hat{\beta} = 0 \tag{19}$$

This simplifies to

$$X'X\hat{\beta} = X'Y$$

Equation (19) is the matrix form of the least squares normal equations. To solve the normal equations, multiply both sides by the inverse of $X'X$. Thus, the least squares estimator of $\beta$ is

$$\hat{\beta} = (X'X)^{-1}X'Y \tag{20}$$
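As a numerical illustration of equation (20), the sketch below (the variable names and toy data are ours, not from the paper) builds the design matrix of equation (14) and solves the normal equations with NumPy. Solving $X'X\hat{\beta} = X'Y$ directly is numerically preferable to forming the explicit inverse; for ill-conditioned design matrices, `numpy.linalg.lstsq` is more robust still, since it avoids forming $X'X$ at all.

```python
import numpy as np

# Toy data (illustrative only): n = 5 observations of k = 2 independent
# variables, e.g. temperature and humidity in a load-forecasting setting.
regressors = np.array([[28.0, 60.0],
                       [30.0, 55.0],
                       [25.0, 70.0],
                       [33.0, 50.0],
                       [27.0, 65.0]])
y = np.array([310.0, 335.0, 280.0, 360.0, 300.0])  # observed loads

# Design matrix X of equation (14): a leading column of ones for the
# intercept followed by the k regressor columns, so p = k + 1.
X = np.column_stack([np.ones(len(y)), regressors])

# Least squares estimator of equation (20). Solving the normal equations
# X'X beta_hat = X'Y avoids computing the explicit inverse of X'X.
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
print(beta_hat)  # [beta_0_hat, beta_1_hat, beta_2_hat]
```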

It is easy to see that the matrix form of the normal equations is identical to the scalar form: $X'X$ is a $(p \times p)$ symmetric matrix and $X'Y$ is a $(p \times 1)$ column vector, and if the indicated matrix multiplication is performed, the scalar form of the normal equations (i.e., equations (3.4)) will result. Note the special structure of the $X'X$ matrix: its diagonal elements are the sums of squares of the elements in the columns of $X$, and its off-diagonal elements are the sums of cross-products of the elements in the columns of $X$. Furthermore, the elements of $X'Y$ are the sums of cross-products of the columns of $X$ and the observations $\{Y_i\}$. Writing out the normal equations in detail, we get

$$\begin{bmatrix}
n & \sum_{i=1}^{n} X_{i1} & \sum_{i=1}^{n} X_{i2} & \cdots & \sum_{i=1}^{n} X_{ik} \\
\sum_{i=1}^{n} X_{i1} & \sum_{i=1}^{n} X_{i1}^2 & \sum_{i=1}^{n} X_{i1}X_{i2} & \cdots & \sum_{i=1}^{n} X_{i1}X_{ik} \\
\vdots & \vdots & \vdots & & \vdots \\
\sum_{i=1}^{n} X_{ik} & \sum_{i=1}^{n} X_{ik}X_{i1} & \sum_{i=1}^{n} X_{ik}X_{i2} & \cdots & \sum_{i=1}^{n} X_{ik}^2
\end{bmatrix}
\begin{bmatrix} \hat{\beta}_0 \\ \hat{\beta}_1 \\ \vdots \\ \hat{\beta}_k \end{bmatrix}
=
\begin{bmatrix} \sum_{i=1}^{n} Y_i \\ \sum_{i=1}^{n} X_{i1}Y_i \\ \vdots \\ \sum_{i=1}^{n} X_{ik}Y_i \end{bmatrix} \tag{21}$$
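To make the structure of equation (21) concrete, a small check (reusing the same toy data as the earlier sketch; again our own illustration, not the paper's) confirms that the diagonal of $X'X$ holds the column sums of squares, the off-diagonal entries hold the column cross-products, and $X'Y$ holds the cross-products of the columns with the observations:

```python
import numpy as np

# Same illustrative design matrix and observations as in the sketch above.
X = np.column_stack([np.ones(5),
                     [28.0, 30.0, 25.0, 33.0, 27.0],
                     [60.0, 55.0, 70.0, 50.0, 65.0]])
y = np.array([310.0, 335.0, 280.0, 360.0, 300.0])

XtX = X.T @ X  # the (p x p) symmetric matrix on the left of equation (21)
XtY = X.T @ y  # the (p x 1) column vector on the right

# Diagonal elements: sums of squares of the columns of X.
# The (0, 0) entry equals n because the first column is all ones.
assert np.allclose(np.diag(XtX), (X ** 2).sum(axis=0))

# Off-diagonal elements: sums of cross-products of pairs of columns,
# e.g. entry (1, 2) is the sum over i of X_i1 * X_i2.
assert np.isclose(XtX[1, 2], (X[:, 1] * X[:, 2]).sum())

# Elements of X'Y: cross-products of the columns of X with {Y_i},
# e.g. the second entry is the sum over i of X_i1 * Y_i.
assert np.isclose(XtY[1], (X[:, 1] * y).sum())
```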

The fitted regression model is

$$\hat{Y} = X\hat{\beta} \tag{22}$$
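A brief usage note, again on our own toy data: once $\hat{\beta}$ is in hand, equation (22) gives the fitted loads, and the residuals $y - \hat{Y}$ serve as estimates of the random errors $\epsilon$.

```python
import numpy as np

# Same illustrative data as in the earlier sketches.
X = np.column_stack([np.ones(5),
                     [28.0, 30.0, 25.0, 33.0, 27.0],
                     [60.0, 55.0, 70.0, 50.0, 65.0]])
y = np.array([310.0, 335.0, 280.0, 360.0, 300.0])

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
y_hat = X @ beta_hat   # fitted regression model, equation (22)
residuals = y - y_hat  # estimates of the random errors epsilon
print(y_hat, residuals)
```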
