

effects correct the mean structure to get the patient-specific profile. Nevertheless, a further improvement of the mean structure may lead to a further improvement of the discriminant procedure. Further, we have also fitted models with K = 3 mixture components and compared the resulting discriminant rule to those described above. However, the solution with K = 3 performed worse than the procedures based on K = 1 or K = 2, implying that the structure of the models with K = 3 is already overparametrized. Note that with K = 3, the dimension of the parameter space increases by 1 + 6 + 21 = 28 in each prognostic group, as the following sketch spells out.
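A minimal sketch of where 1 + 6 + 21 comes from, assuming a six-dimensional random-effects vector (the only dimension consistent with this breakdown): each extra mixture component adds one mixing weight, one mean vector, and one symmetric covariance matrix.

```python
# Hedged sketch: free parameters added by one extra mixture component,
# assuming q = 6 random effects (consistent with 1 + 6 + 21 = 28).
q = 6
weight = 1                       # one additional mixing weight
mean = q                         # one additional q-dimensional mean vector
covariance = q * (q + 1) // 2    # one additional symmetric q x q covariance: 21
print(weight + mean + covariance)  # 28
```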

Discussion

In this paper, we have generalized the discriminant analysis of multivariate longitudinal profiles by assuming a normal mixture for the random effects distribution in the mixed model. The application of our approach to the PBC Dutch Study data showed some improvements compared to the methodology based on mixed models with normal random effects. Because the normal mixture serves as a semi-parametric model for the unknown random effects distribution, the first obvious question is how to choose K, the number of mixture components. In general, models with different numbers of components can be fitted and then compared by means of a suitable measure of model complexity and fit, such as the deviance information criterion (DIC, Spiegelhalter et al. 25) or the penalized expected deviance (PED, Plummer 27). Alternatively, posterior distributions of deviances under different models can be compared (Aitkin, Liu and Chadwick 28). Nevertheless, when discrimination is of primary interest and a training data set is available, it is preferable to choose the optimal model by evaluating the resulting discrimination rule by means of, e.g., cross-validation, as was done in Section 'Application to PBC data'; a schematic version of this selection strategy is sketched below.
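A minimal Python sketch of this strategy, under explicit assumptions: `fit_mlmm` and `classify` are hypothetical placeholders for the model fit and the discriminant rule (neither is an API from the paper), and the models are scored by fold-wise misclassification.

```python
import numpy as np

def cv_error(profiles, labels, K, fit_mlmm, classify, n_folds=10, seed=0):
    """Cross-validated misclassification rate of a K-component model.

    fit_mlmm and classify are user-supplied placeholders standing in for
    the mixture-of-mixed-models fit and the resulting discriminant rule.
    """
    n = len(profiles)
    folds = np.array_split(np.random.default_rng(seed).permutation(n), n_folds)
    errors = 0
    for test_idx in folds:
        train_idx = np.setdiff1d(np.arange(n), test_idx)
        model = fit_mlmm([profiles[i] for i in train_idx],
                         [labels[i] for i in train_idx], K=K)
        errors += sum(classify(model, profiles[i]) != labels[i]
                      for i in test_idx)
    return errors / n

# Choose K by the smallest cross-validated error, e.g.:
# best_K = min((1, 2, 3), key=lambda K: cv_error(X, y, K, fit_mlmm, classify))
```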

In our specification of the MLMM, we assumed that the errors ε_{i,r,j} (i = 1, ..., N; r = 1, ..., R; j = 1, ..., n_{i,r}) are independent, and hence that the markers Y_{i,r,j} are conditionally independent given the random effects. One can therefore further generalize the proposed model by using a more general covariance structure for the vectors of errors ε_{i,r} (i = 1, ..., N; r = 1, ..., R), as was done, e.g., by Shah, Laird and Schoenfeld 29 or Morrell et al. 7. With such a generalization, the results of Section 'Application to PBC data' can even improve; the sketch below contrasts the two error structures.
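A small illustration, not from the paper: under the independence assumption the within-marker error covariance is diagonal, while a common generalization such as AR(1) serial correlation fills in the off-diagonal terms (ρ is an assumed illustration parameter).

```python
import numpy as np

def error_covariance(n_obs, sigma2, rho=0.0):
    """Diagonal error covariance for rho = 0; AR(1) serial correlation otherwise."""
    lags = np.abs(np.subtract.outer(np.arange(n_obs), np.arange(n_obs)))
    return sigma2 * rho ** lags  # rho**0 = 1 keeps sigma2 on the diagonal

print(error_covariance(3, 1.0))        # independent errors: sigma2 * identity
print(error_covariance(3, 1.0, 0.5))   # AR(1) with rho = 0.5
```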

Further, it is certainly possible to relax the normality assumption on the random effects in several other directions than was done in this paper. For example, a multivariate t-distribution (see, e.g., Pinheiro, Liu and Wu 30) or a mixture of multivariate t-distributions (Lin, Lee and Ni 31) for the random effects could be considered; the sketch below shows the scale-mixture representation behind such t-distributed random effects.
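As a hedged illustration of that alternative (not the paper's implementation), a multivariate t draw can be generated via its standard scale-mixture-of-normals representation; the dimension q = 6 and ν = 5 degrees of freedom are assumed purely for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
q, nu = 6, 5                      # assumed dimension and degrees of freedom
mu, Sigma = np.zeros(q), np.eye(q)

z = rng.multivariate_normal(np.zeros(q), Sigma)  # normal draw
w = rng.chisquare(nu) / nu                       # mixing scale
b = mu + z / np.sqrt(w)           # multivariate t draw: heavier tails than z
```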
