
Evaluating non-randomised intervention studies - NIHR Health ...



Health Technology Assessment 2003; Vol. 7: No. 27

Chapter 3

Review of empirical comparisons of the results of randomised and non-randomised studies

Introduction

Evidence about the importance of design features of RCTs has accumulated rapidly during recent years.19–21 This evidence has mainly been obtained by a method of investigation that has been termed meta-epidemiology, a powerful but simple technique of investigating variations in the results of RCTs of the same intervention according to features of their study design.22 The process involves first identifying substantial numbers of systematic reviews, each containing RCTs both with and without the design feature of interest. Within each review, results are compared between the trials meeting and not meeting each design criterion. These comparisons are then aggregated across the reviews in a grand overall meta-analysis to obtain an estimate of the systematic bias removed by the design feature. For RCTs, the relative importance of proper randomisation, concealment of allocation and blinding have all been estimated using this technique.20,21 The results have been shown to be consistent across clinical fields,23 providing some evidence that meta-epidemiology may be a reliable investigative technique. The method has also been applied to investigate sources of bias in studies of diagnostic accuracy, where participant selection, independent testing and use of consistent reference standards have been identified as being the most important design features.24

The use of meta-epidemiology has also been extended from the comparison of design features within a particular study design to comparisons between study designs.
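The aggregation step described above can be illustrated with a small numerical sketch. The function and the review-level data below are invented for illustration only (they do not come from the report): within each systematic review, an effect ratio comparing trials with and without the design feature of interest (a ratio of odds ratios, ROR) is assumed to have been estimated on the log scale, and the reviews are then pooled by inverse-variance weighting, as in a fixed-effect meta-analysis.

```python
import math

def pooled_log_ror(reviews):
    """Pool within-review log ratios of odds ratios across reviews.

    reviews: list of (log_ror, se) pairs, one per systematic review.
    Returns the inverse-variance-weighted pooled log ROR and its SE.
    """
    weights = [1.0 / se ** 2 for _, se in reviews]
    pooled = sum(w * lr for (lr, _), w in zip(reviews, weights)) / sum(weights)
    se_pooled = math.sqrt(1.0 / sum(weights))
    return pooled, se_pooled

# Hypothetical data: (log ROR, standard error) for three reviews.
# A pooled ROR below 1 would suggest that trials lacking the design
# feature tend to overestimate treatment effects.
reviews = [(-0.22, 0.10), (-0.15, 0.08), (-0.30, 0.12)]
log_ror, se = pooled_log_ror(reviews)
print(f"pooled ROR = {math.exp(log_ror):.2f} (SE of log ROR = {se:.3f})")
```

The pooled ROR is the "estimate of the systematic bias removed by the design feature" referred to in the text; a fixed-effect pooling is used here purely for simplicity.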
In two separate HTA reports, the results of RCTs have been compared with those from non-randomised evaluations across multiple interventions to estimate the bias removed by randomisation.25,26 However, the meta-epidemiology method may be inappropriate for between-design comparisons due to:

● meta-confounding: the existence of other differences between randomised and non-randomised studies which could impact on their findings, and
● unpredictability in the direction of effect: there possibly being no overall systematic bias but biases acting unpredictably in different directions with varying magnitudes.

An alternative methodology for empirically investigating differences between study designs is introduced in Chapter 5. First, we review the evidence provided by meta-epidemiological comparisons of randomised and non-randomised studies of the importance of randomisation per se, and discuss weaknesses in the use of meta-epidemiology to make such a comparison.
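The concern about unpredictability in the direction of effect can be probed statistically: if biases act in varying directions across reviews, the within-review ratios will be heterogeneous rather than clustered around one systematic value. A minimal sketch, using Cochran's Q statistic and the I² measure on invented review-level data (not taken from the report):

```python
import math

def heterogeneity(estimates):
    """Cochran's Q and I-squared for a set of review-level estimates.

    estimates: list of (log_ror, se) pairs, one per review.
    Q tests whether estimates share a common value; I-squared is the
    proportion of total variation attributable to between-review
    differences rather than chance (Higgins & Thompson).
    """
    weights = [1.0 / se ** 2 for _, se in estimates]
    pooled = sum(w * e for (e, _), w in zip(estimates, weights)) / sum(weights)
    q = sum(w * (e - pooled) ** 2 for (e, _), w in zip(estimates, weights))
    df = len(estimates) - 1
    i2 = max(0.0, (q - df) / q) if q > 0 else 0.0
    return q, df, i2

# Hypothetical data: (log ROR, standard error) for three reviews.
q, df, i2 = heterogeneity([(-0.22, 0.10), (-0.15, 0.08), (-0.30, 0.12)])
print(f"Q = {q:.2f} on {df} df, I^2 = {i2:.0%}")
```

A large Q relative to its degrees of freedom (high I²) would be consistent with biases of varying direction and magnitude, undermining the interpretation of a single pooled "bias removed by randomisation".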
The comparisons are considered in regard to four particular issues, namely whether there is empirical evidence of:

● inconsistencies in findings between RCTs and non-randomised studies
● systematic differences in average estimates of treatment effects between RCTs and non-randomised studies
● differences in the variability of results between RCTs and non-randomised studies (between-study heterogeneity), and
● whether case-mix adjustment in non-randomised studies reduces systematic bias and/or between-study heterogeneity.

Methods

Reviews were eligible for inclusion if:

● they compared quantitative results between RCTs of an intervention and non-randomised studies of the same intervention
● they had accumulated, through some systematic search, results from several of these comparisons across healthcare interventions.

Reviews were identified from a search of electronic databases including MEDLINE, EMBASE and PsycLit, from the earliest possible date up to December 1999; from handsearches of Statistics in Medicine, Statistical Methods in Medical Research, Psychological Bulletin and Controlled Clinical Trials

© Queen's Printer and Controller of HMSO 2003. All rights reserved.
