Evaluating non-randomised intervention studies

Case-mix adjustment methods

In the absence of information on factors influencing allocation, the traditional solution to removing selection bias in non-randomised studies has been to attempt to control for known prognostic factors, by design and/or by analysis. Alternative statistical techniques for removing bias in non-randomised studies are briefly described by D'Agostino and Kwan¹ and include matching, stratification, covariance adjustment and propensity score analysis (Box 2). However, all statistical techniques make technical assumptions (for example, regression models typically assume that the relationship between the prognostic variable and the outcome is linear) and the degree to which they can adequately adjust for differences between groups is unclear.

BOX 2  Commonly used methods of statistical adjustment

Standardisation
Participants are analysed in groups (strata) which have similar characteristics, the overall effect being estimated by averaging the effects seen in each of the groups.

Regression
Relationships between prognostic factors and outcome are estimated from the data in hand, and adjustments are calculated for the difference in average values of the prognostic factor between the two groups. Linear regression (or covariance analysis) is used for continuous outcomes, logistic regression for binary outcomes.

Propensity scores
Propensity probabilities are calculated for each participant from the data set, estimating their chance of receiving treatment according to their characteristics. Treatment effects are estimated either by comparing groups that have similar propensity scores (using matching or stratification methods), or by calculating a regression adjustment based on the difference in average propensity score between the groups.

Moses¹⁸ lists three factors necessary for successful adjustment: (1) knowledge of which variables must be taken into account; (2) measuring those variables in each participant; and (3) using those measurements appropriately to adjust the treatment comparison. He further states that we are likely to fail on all three counts:¹⁸

"We often don't understand the factors that cause people's disease to progress or not; even if we knew those factors, we might find they had not been measured. And if they were measured, the correct way to use them in adjustment calls for a theoretical understanding we seldom have."

Use of adjustment therefore assumes that researchers know which are the most important prognostic factors and that these have been appropriately measured. Further, it cannot address the problem of unknown or unmeasurable prognostic factors, which may play a particular role in confounding by indication. Full adjustment for confounding by indication would require information on prognostic factors influencing both disease progression and treatment choice.¹⁴ Where there is an association between prognosis and treatment choice, it would seem that correlates of prognosis can only warn of a problem, not control for it. Furthermore, if the degree to which the markers are correlated with prognosis differs between diseased and non-diseased persons, then the magnitude (and direction) of the resulting bias will be unpredictable in an overall analysis.
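To make the propensity score approach outlined in Box 2 concrete, the following is a minimal sketch in Python, assuming pandas, NumPy and scikit-learn are available; the variable names and the simulated data are hypothetical illustrations and are not drawn from the report. It simulates a single measured prognostic factor that influences both treatment choice and outcome, estimates each participant's propensity score by logistic regression, and compares the crude treatment effect with an estimate averaged over propensity score strata.

```python
# Hypothetical illustration of propensity-score stratification (Box 2);
# simulated data, not an analysis from the report.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# One measured prognostic factor ("severity") influencing both treatment
# choice and outcome: the ingredients of confounding by indication.
severity = rng.normal(size=n)
p_treat = 1 / (1 + np.exp(-0.8 * severity))        # sicker patients treated more often
treated = rng.binomial(1, p_treat)
outcome = 1.0 * treated - 1.5 * severity + rng.normal(size=n)   # true effect = 1.0

df = pd.DataFrame({"severity": severity, "treated": treated, "outcome": outcome})

# Crude comparison of treated and untreated groups (biased by selection).
crude = (df.loc[df.treated == 1, "outcome"].mean()
         - df.loc[df.treated == 0, "outcome"].mean())

# Propensity score: each participant's estimated probability of receiving
# treatment given their characteristics.
ps_model = LogisticRegression().fit(df[["severity"]], df["treated"])
df["ps"] = ps_model.predict_proba(df[["severity"]])[:, 1]

# Stratify on the propensity score (quintiles) and average the
# within-stratum treatment effects.
df["stratum"] = pd.qcut(df["ps"], 5, labels=False)
stratum_effect = df.groupby("stratum").apply(
    lambda g: (g.loc[g.treated == 1, "outcome"].mean()
               - g.loc[g.treated == 0, "outcome"].mean())
)
adjusted = stratum_effect.mean()

print(f"crude estimate:    {crude:.2f}")    # biased downwards by selection
print(f"adjusted estimate: {adjusted:.2f}   (true effect = 1.00)")
```

Because the only confounder in this sketch is measured and correctly modelled, the stratified estimate recovers the true effect almost exactly; as the discussion above makes clear, adjustment of this kind offers no such protection against unknown or unmeasured prognostic factors.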
Scope of this project

This report contains the results of three systematic reviews and two empirical studies, all relating to the internal validity of non-randomised studies. The findings of these five studies will be of importance to researchers undertaking new studies of healthcare evaluations and systematic reviews, and also to healthcare professionals, consumers and policy makers looking to use the results of randomised and non-randomised studies to inform their decision-making.

The first systematic review looks at existing evidence of bias in non-randomised studies, critically evaluating previous methodological studies that have attempted to estimate and characterise differences in results between RCTs and non-randomised studies.

Two further systematic reviews focus on the issue of quality assessment of non-randomised studies. The first identifies and evaluates tools that can be used to assess the quality of non-randomised studies. The second looks at ways that study quality has been assessed and addressed in systematic reviews of healthcare interventions that have included non-randomised studies.

The two empirical investigations focus on the issue of selection bias in non-randomised studies. The first investigates the size and behaviour of selection bias in evaluations of two specific clinical interventions, and the second assesses the degree to which case-mix adjustment corrects for selection bias.
