

informative prior distribution, we refer to Komárek [14], where in his notation the terms $y_{i,j}$ are replaced by reasonable initial values of the random effects. These can be, for example, their empirical Bayes estimates from separate (for $r = 1, \dots, R$) maximum-likelihood fits of models derived from (1). A weakly informative prior for the fixed effects $\alpha$ is obtained by setting $\xi_{\alpha_r,l}$ to zero and $c^2_{\alpha_r,l}$ ($r = 1, \dots, R$, $l = 1, \dots, p_r$) to a large positive number, e.g., 10 000 (it is then necessary to check that the variance of the posterior distribution is considerably smaller). Finally, adapting the recommendations of Richardson and Green [22], the following values of the fixed hyperparameters $\zeta_{\varepsilon,r}$, $g_{\varepsilon,r}$, $h_{\varepsilon,r}$ ($r = 1, \dots, R$) related to the prior distribution of the error terms lead to a weakly informative prior: a small positive number for $\zeta_{\varepsilon,r}$ and $g_{\varepsilon,r}$, and $h_{\varepsilon,r} = 10/R^2_{\varepsilon,r}$, where $R_{\varepsilon,r}$ is the range of the residuals from separate (for $r = 1, \dots, R$) initial maximum-likelihood fits of models derived from (1).
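To illustrate how these initial values and hyperparameters could be obtained in practice, here is a minimal R sketch based on separate lme4 fits; it is not the authors' code. The data frame `dat`, the marker columns `y1` and `y2`, the covariate `time`, and the grouping factor `id` are hypothetical placeholders, and the concrete value 0.2 for the "small positive" hyperparameters is an assumption.

```r
## Separate (per-marker) maximum-likelihood fits of models derived from (1).
library(lme4)

markers <- c("y1", "y2")                       # R = 2 markers (assumed)
fits <- lapply(markers, function(m)
  lmer(reformulate(c("time", "(1 + time | id)"), response = m),
       data = dat, REML = FALSE))

## Empirical Bayes estimates of the random effects: reasonable initial
## values playing the role of y_{i,j} in Komarek's notation.
b.init <- lapply(fits, function(f) ranef(f)$id)

## Weakly informative prior for the fixed effects alpha:
## xi_{alpha_r,l} = 0 and c^2_{alpha_r,l} = 10 000.
xi.alpha <- lapply(fits, function(f) rep(0,   length(fixef(f))))
c2.alpha <- lapply(fits, function(f) rep(1e4, length(fixef(f))))

## Error-term hyperparameters: small positive zeta_{eps,r} and g_{eps,r}
## (0.2 is an assumed choice), and h_{eps,r} = 10 / R_{eps,r}^2, where
## R_{eps,r} is the range of the residuals of fit r.
zeta.eps <- rep(0.2, length(fits))
g.eps    <- rep(0.2, length(fits))
h.eps    <- sapply(fits, function(f) 10 / diff(range(residuals(f)))^2)
```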

Posterior distribution and Markov chain Monte Carlo

Given the parameters $\psi$, i.e., the parameters for which the prior distribution was specified in (7), the likelihood corresponding to model (1) takes a relatively simple form:

$$
L(\psi) \;=\; \prod_{i=1}^{N} p(y_i \mid \psi)
\;=\; \prod_{i=1}^{N} \prod_{r=1}^{R} p(y_{i,r} \mid \alpha_r, b_{i,r}, \sigma_r^2)
\;=\; \prod_{i=1}^{N} \prod_{r=1}^{R} \varphi\bigl(y_{i,r};\, X_{i,r}\alpha_r + Z_{i,r}b_{i,r},\, \sigma_r^2 I_{n_{i,r}}\bigr),
\qquad (16)
$$

where $\varphi(\cdot;\, \mu, \Sigma)$ denotes the density of $\mathcal{N}(\mu, \Sigma)$.
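To make (16) concrete, the following small R sketch evaluates the log-likelihood contribution of a single subject; all object names are hypothetical placeholders and the function is only a direct transcription of the formula.

```r
## Log-likelihood of subject i under (16): independently over markers r,
## y_{i,r} is normal with mean X_{i,r} %*% alpha_r + Z_{i,r} %*% b_{i,r}
## and covariance sigma_r^2 * I, so the log-density is a sum of dnorm terms.
loglik_i <- function(y, X, Z, alpha, b, sigma2) {
  sum(vapply(seq_along(y), function(r) {
    mu <- drop(X[[r]] %*% alpha[[r]] + Z[[r]] %*% b[[r]])
    sum(dnorm(y[[r]], mean = mu, sd = sqrt(sigma2[r]), log = TRUE))
  }, numeric(1)))
}
```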

Let $y$ denote the observed values of all longitudinal markers in the whole dataset. Bayesian inference is based on a sample from the posterior distribution $p(\psi \mid y) \propto L(\psi)\, p(\psi)$, obtained using the Markov chain Monte Carlo (MCMC) method with a block Gibbs sampler.

To improve the numerical properties of the MCMC algorithm, it is useful to choose the shift vector $s$ and the scale matrix $S$ (see expression (3)) such that the shifted and scaled random effects $b^*_i$ have approximately zero means and unit variances. For this reason we recommend setting $s$ to the estimated means, and the diagonal elements of $S$ to the estimated standard deviations, of the random effects from separate (for $r = 1, \dots, R$) maximum-likelihood fits of models derived from (1). Further, note that a sample from the posterior distribution $p(\theta \mid y)$ is directly available: one simply ignores the $\gamma_b$, $\gamma_\varepsilon$, $b$, $u$ parts of the sampled values of the vector $\psi$.
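Continuing the lme4-based sketch from above (same hypothetical objects), the shift and the diagonal scale could be extracted from the separate fits as follows:

```r
## Shift s: estimated means of the random effects (here the fixed-effect
## estimates of the randomly varying intercept and slope). Diagonal of S:
## their estimated standard deviations from the same separate ML fits.
b.shift <- unlist(lapply(fits, function(f) fixef(f)[c("(Intercept)", "time")]))
b.scale <- unlist(lapply(fits, function(f) attr(VarCorr(f)$id, "stddev")))
```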

The R package mixAK (Komárek [14]) has been extended to handle the MLMM (1).

Whenever possible, the R implementation exploits the block-diagonal structure of
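For orientation, a hypothetical call to mixAK is sketched below. The function GLMM_MCMC does exist in the package, but the argument names and values shown here are recalled from its documentation, not taken from this paper, and should be checked against ?GLMM_MCMC before use.

```r
## Hedged sketch of fitting the MLMM (1) with mixAK; all values assumed.
library(mixAK)

fit <- GLMM_MCMC(y = dat[, markers],
                 dist = rep("gaussian", length(markers)),
                 id = dat$id,
                 x = list(dat$time, dat$time),   # fixed-effect covariates
                 z = list(dat$time, dat$time),   # random-slope covariates
                 random.intercept = rep(TRUE, length(markers)),
                 scale.b = list(shift = b.shift, scale = b.scale),
                 prior.b = list(Kmax = 2),       # at most 2 mixture components
                 nMCMC = c(burn = 1000, keep = 10000, thin = 10, info = 1000))
```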
