3.1 Best Predictor

The best predictor, for any type of model, requires knowledge of the distribution of the random variables as well as the moments of that distribution. The best predictor is then the conditional mean of the quantity being predicted, given the data vector, i.e.

E(K'b + M'u | y),

which is unbiased and has the smallest mean squared error of all predictors (Cochran 1951). The computational form of the predictor depends on the distribution of y and could be linear or nonlinear. The word best means that the predictor has the smallest mean squared error of all predictors of K'b + M'u.
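As a rough check of this claim, here is a minimal Python sketch for a scalar jointly normal pair (u, y); all numbers are made-up illustrative values, not taken from these notes. Under normality the conditional mean is E(u|y) = Cov(u,y)/Var(y) · (y − E(y)), and it shows a smaller empirical mean squared error than a competing linear predictor.

```python
import numpy as np

# Illustrative simulation (all values are made up, not from the notes):
# for jointly normal (u, y), E(u|y) = Cov(u,y)/Var(y) * (y - E(y)),
# and it should have smaller empirical MSE than a predictor with another slope.

rng = np.random.default_rng(1)
n = 100_000
var_u, var_e = 2.0, 3.0                              # Var(u) and residual variance
u = rng.normal(0.0, np.sqrt(var_u), n)
y = 10.0 + u + rng.normal(0.0, np.sqrt(var_e), n)    # E(y) = 10, Var(y) = 5

best = (var_u / (var_u + var_e)) * (y - 10.0)        # E(u | y) under normality
other = 1.0 * (y - 10.0)                             # a competing linear predictor

print(np.mean((u - best) ** 2))    # approx. 1.2
print(np.mean((u - other) ** 2))   # approx. 3.0
```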

3.2 Best Linear Predictor

The best predictor may be linear or nonlinear. Nonlinear predictors are often difficult to manipulate or to derive in a computationally feasible form, so the predictor can be restricted to the class of linear functions of y. Then the distributional form of y does not need to be known; only the first moment (mean) and second moment (variance) of y must be known. If the first moment is Xb and the second moment is Var(y) = V, then the best linear predictor of K'b + M'u is

K'b + C'V^{-1}(y - Xb),

where

C' = Cov(K'b + M'u, y).

When y has a multivariate normal distribution, the best linear predictor (BLP) is the same as the best predictor (BP). The BLP has the smallest mean squared error of all linear predictors of K'b + M'u.
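Once b, V, and C are known, the BLP is a direct matrix computation. The NumPy sketch below uses small made-up values for X, b, V, C', K'b, and y, chosen only to illustrate the arithmetic.

```python
import numpy as np

# Minimal sketch of the BLP computation K'b + C'V^{-1}(y - Xb).
# All matrices and vectors are illustrative values; section 3.2 assumes
# both b and V are known.

X = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])            # E(y) = Xb
b = np.array([10.0, 2.0])             # known fixed effects
V = np.array([[4.0, 1.0, 0.5],
              [1.0, 4.0, 1.0],
              [0.5, 1.0, 4.0]])       # known Var(y)
C_t = np.array([[1.0, 0.8, 0.3]])     # C' = Cov(K'b + M'u, y), here 1 x 3
Kb = np.array([12.0])                 # K'b for the quantity being predicted
y = np.array([13.0, 15.5, 15.0])      # observed data

# BLP = K'b + C' V^{-1} (y - Xb); solve(V, r) computes V^{-1} r without
# forming the inverse explicitly.
blp = Kb + C_t @ np.linalg.solve(V, y - X @ b)
print(blp)
```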

3.3 Best Linear Unbiased Predictor

In general, the first moment of y, namely Xb, is not known, but V, the second moment, is commonly assumed to be known. Then predictors can be restricted further to those that are linear and also unbiased. The best linear unbiased predictor is

K'b̂ + C'V^{-1}(y - Xb̂),

where

b̂ = (X'V^{-1}X)^{-} X'V^{-1} y,

and C and V are as before. This predictor is the same as the BLP except that b̂ has replaced b in the formula. Note that b̂ is the GLS estimate of b. Of all linear, unbiased predictors, BLUP has the smallest mean squared error. However, if y is not normally distributed, then nonlinear predictors of K'b + M'u could potentially exist that have smaller mean squared error than BLUP.
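The two-step computation (GLS estimate of b, then the predictor) can be sketched as follows; X, V, C', K, and y are again made-up illustrative values, and a pseudo-inverse stands in for the generalized inverse (X'V^{-1}X)^{-}.

```python
import numpy as np

# Minimal sketch of BLUP: GLS estimate of b, then the predictor.
# X, V, C', K, and y are illustrative values, not from the notes.

X = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
V = np.array([[4.0, 1.0, 0.5],
              [1.0, 4.0, 1.0],
              [0.5, 1.0, 4.0]])
C_t = np.array([[1.0, 0.8, 0.3]])     # C' = Cov(K'b + M'u, y)
K = np.array([[1.0], [2.0]])          # K' picks out a fixed-effect combination
y = np.array([13.0, 15.5, 15.0])

Vinv = np.linalg.inv(V)

# GLS estimate: b_hat = (X'V^{-1}X)^- X'V^{-1} y; pinv plays the role of
# the generalized inverse (X'V^{-1}X)^-.
b_hat = np.linalg.pinv(X.T @ Vinv @ X) @ X.T @ Vinv @ y

# BLUP = K'b_hat + C'V^{-1}(y - X b_hat)
blup = K.T @ b_hat + C_t @ Vinv @ (y - X @ b_hat)
print(b_hat)
print(blup)
```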

