Prediction Theory
and
\[
\begin{pmatrix}
X'R^{-1}X & X'R^{-1}Z \\
Z'R^{-1}X & Z'R^{-1}Z + G^{-1}
\end{pmatrix}
\begin{pmatrix} -\theta \\ S \end{pmatrix}
=
\begin{pmatrix} K \\ M \end{pmatrix}.
\]
Let a solution to these equations be obtained by computing a generalized inverse of
\[
\begin{pmatrix}
X'R^{-1}X & X'R^{-1}Z \\
Z'R^{-1}X & Z'R^{-1}Z + G^{-1}
\end{pmatrix},
\]
denoted as
\[
\begin{pmatrix} C_{xx} & C_{xz} \\ C_{zx} & C_{zz} \end{pmatrix},
\]
then the solutions are
\[
\begin{pmatrix} -\theta \\ S \end{pmatrix}
=
\begin{pmatrix} C_{xx} & C_{xz} \\ C_{zx} & C_{zz} \end{pmatrix}
\begin{pmatrix} K \\ M \end{pmatrix}.
\]
Therefore, the predictor is
\[
L'y
= \begin{pmatrix} K' & M' \end{pmatrix}
\begin{pmatrix} C_{xx} & C_{xz} \\ C_{zx} & C_{zz} \end{pmatrix}
\begin{pmatrix} X'R^{-1}y \\ Z'R^{-1}y \end{pmatrix}
= \begin{pmatrix} K' & M' \end{pmatrix}
\begin{pmatrix} \hat{b} \\ \hat{u} \end{pmatrix},
\]
where \(\hat{b}\) and \(\hat{u}\) are solutions to
\[
\begin{pmatrix}
X'R^{-1}X & X'R^{-1}Z \\
Z'R^{-1}X & Z'R^{-1}Z + G^{-1}
\end{pmatrix}
\begin{pmatrix} \hat{b} \\ \hat{u} \end{pmatrix}
=
\begin{pmatrix}
X'R^{-1}y \\ Z'R^{-1}y
\end{pmatrix}.
\]
The equations are known as Henderson's Mixed Model Equations, or MME. The equations are of order equal to the number of elements in b and u, which is usually much smaller than the number of elements in y, and they are therefore more practical to solve. These equations also require the inverse of R rather than of V; both are of the same order, but R is usually diagonal or has a simpler structure than V. The inverse of G is also needed, which is of order equal to the number of elements in u. The ability to compute the inverse of G depends on the model and on the definition of u.
The MME are a useful computing algorithm for obtaining BLUP of K'b + M'u. Keep in mind that BLUP is a statistical procedure with the property that, if the conditions for BLUP are met, the predictor has the smallest mean squared error among all linear, unbiased predictors. The conditions are that the model is the true model and that the variance-covariance matrices of the random variables are known without error.
In the strictest sense, all models approximate an unknown true model, and the variance-covariance parameters are usually guessed, so there is never a truly BLUP analysis of data, except possibly in simulation studies.