
mixed — Multilevel mixed-effects linear regression

matlog, during optimization, parameterizes the variance components by using the matrix logarithms of the variance–covariance matrices formed by these components at each model level.

The matsqrt parameterization ensures that variance–covariance matrices are positive semidefinite, while matlog ensures that they are positive definite. For most problems, the matrix square root is more stable near the boundary of the parameter space. However, if convergence is problematic, one option may be to try the alternate matlog parameterization. When convergence is not an issue, both parameterizations yield equivalent results.
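For example, a minimal sketch of switching parameterizations when a fit struggles to converge; the variables y, x, and id here are hypothetical, and the random slope with an unstructured covariance is included only so that the variance–covariance matrix being parameterized is nontrivial:

    . mixed y x || id: x, covariance(unstructured)           // default matsqrt parameterization
    . mixed y x || id: x, covariance(unstructured) matlog    // refit with the matlog parameterization

If both runs converge, the reported variance components should agree up to numerical precision.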

The following option is available with mixed but is not shown in the dialog box:

coeflegend; see [R] estimation options.
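As a quick illustration (again with hypothetical y, x, and id), coeflegend replaces the usual coefficient table with a legend of coefficient names, which can then be used in expressions for postestimation commands such as test or lincom:

    . mixed y x || id:, coeflegend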

Remarks and examples


Remarks are presented under the following headings:

    Introduction
    Two-level models
    Covariance structures
    Likelihood versus restricted likelihood
    Three-level models
    Blocked-diagonal covariance structures
    Heteroskedastic random effects
    Heteroskedastic residual errors
    Other residual-error structures
    Crossed-effects models
    Diagnosing convergence problems
    Survey data

Introduction

Linear mixed models are models containing both fixed effects and random effects. They are a generalization of linear regression allowing for the inclusion of random deviations (effects) other than those associated with the overall error term. In matrix notation,

    y = Xβ + Zu + ɛ    (1)

where y is the n × 1 vector of responses, X is an n × p design/covariate matrix for the fixed effects β, and Z is an n × q design/covariate matrix for the random effects u. The n × 1 vector of errors ɛ is assumed to be multivariate normal with mean 0 and variance matrix σ²ɛR.

The fixed portion of (1), Xβ, is analogous to the linear predictor from a standard OLS regression model with β being the regression coefficients to be estimated. For the random portion of (1), Zu + ɛ, we assume that u has variance–covariance matrix G and that u is orthogonal to ɛ so that

\[
\operatorname{Var}\begin{bmatrix} u \\ \epsilon \end{bmatrix}
= \begin{bmatrix} G & 0 \\ 0 & \sigma_\epsilon^2 R \end{bmatrix}
\]

The random effects u are not directly estimated (although they may be predicted), but instead are characterized by the elements of G, known as variance components, that are estimated along with the overall residual variance σ²ɛ and the residual-variance parameters that are contained within R.
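As a concrete sketch of (1), consider the simplest two-level specification, a random-intercept model: Z is then a matrix of indicator variables for the clusters, G reduces to a single random-intercept variance times an identity matrix, and, under the default residual structure, R is an identity matrix. With hypothetical variables y and x and a hypothetical cluster identifier school, such models could be fit as

    . mixed y x || school:                                // random intercepts only
    . mixed y x || school: x, covariance(unstructured)    // random intercepts and slopes on x

In the first command, mixed estimates the fixed coefficients β along with two variance components: the random-intercept variance (the single element of G) and the residual variance σ²ɛ. The second allows an unstructured 2 × 2 covariance between the random intercept and the random slope.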
