GfKl 2008

On the power of corrected score functions to adjust for measurement error

Thomas Augustin and Matthias Wallner
Department of Statistics, University of Munich (LMU)
augustin@stat.uni-muenchen.de

Abstract. Measurement error modeling, also called errors-in-variables modeling, is a generic term for all situations where additional uncertainty in the variables has to be taken into account in order to avoid severe bias in the statistical analysis. The problem is omnipresent in technical statistics, where data from imperfect measurement instruments are analyzed, as well as in biometrics, econometrics and the social sciences, where operationalizations (surrogates) are used instead of complex theoretical constructs.

After a brief introduction to the area of measurement error modeling, the talk discusses the power and some limitations of Nakamura's general principle of corrected score functions, mainly in the context of failure time data. Starting with classical covariate measurement error in Cox's PH model, it is shown how the Breslow likelihood can be corrected, while, according to results by Stefanski and by Nakamura himself, no corrected score function for the partial likelihood can exist. We then turn to parametric failure time models and extend consideration to additionally error-prone lifetimes. Finally, some ideas for handling Berkson-type errors (as occurring, e.g., in radon studies) and rounding errors are sketched.
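To make the corrected-score idea concrete, the following sketch (not from the abstract; all variable names and the simulation setup are illustrative assumptions) shows Nakamura's principle in its simplest well-known instance: linear regression with classical additive measurement error W = X + U, where the error variance is assumed known. The naive least-squares slope computed on the surrogate W is attenuated towards zero, while the corrected score yields an estimating equation whose solution removes that bias.

```python
import numpy as np

# Hypothetical illustration of a corrected score function: in the linear
# model y = beta * x + eps, with only the error-prone surrogate
# w = x + u observed (u ~ N(0, sigma_u^2), sigma_u known), the naive
# normal equation  sum w_i (y_i - beta w_i) = 0  is biased, because
# E[w^2] = E[x^2] + sigma_u^2.  Subtracting the known excess term gives
# the corrected score  sum [w_i y_i - beta (w_i^2 - sigma_u^2)] = 0,
# whose solution is beta_hat = W'y / (W'W - n * sigma_u^2).

rng = np.random.default_rng(0)
n, beta_true, sigma_u = 5000, 2.0, 0.8

x = rng.normal(0.0, 1.0, n)           # true covariate (unobserved)
w = x + rng.normal(0.0, sigma_u, n)   # observed error-prone surrogate
y = beta_true * x + rng.normal(0.0, 0.5, n)

naive = (w @ y) / (w @ w)                        # attenuated towards 0
corrected = (w @ y) / (w @ w - n * sigma_u**2)   # corrected score estimate

print(f"naive: {naive:.3f}, corrected: {corrected:.3f}")
```

With var(x) = 1 and sigma_u^2 = 0.64, the naive slope is pulled towards roughly beta/(1 + 0.64) of its true value, while the corrected estimator is consistent. The failure-time settings discussed in the talk apply the same principle to likelihood-based score functions, where a closed-form correction may or may not exist.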

Key words: measurement error, errors-in-variables, survival analysis, Cox model, rounding

