Evaluating non-randomised intervention studies - NIHR Health ...


Health Technology Assessment 2003; Vol. 7: No. 27

results of both observational and randomised studies. It seems highly likely that these decisions may relate to the similarity of the results of studies of different designs.

Did the RCTs and non-randomised studies recruit similar participants, use similar interventions and measure similar outcomes?

Discrepancies between the results of observational and randomised studies may be confounded by differences in the selection and evaluation of patient groups, in the formulation and delivery of treatments, in the use of placebos and other comparative treatments, and in the methods used to maintain follow-up and record outcomes. For many interventions there may also be temporal confounding of study types, non-randomised studies typically being performed prior to the RCTs. Such meta-confounding will make it difficult to attribute a systematic difference directly to the use or non-use of a random allocation mechanism.

Six of the eight reviews noted this problem and incorporated features in their evaluations to reduce the potential for meta-confounding.25–28,32,35 Two reviews made more stringent efforts to assess comparability than the others.25,26 Britton and colleagues restricted the selection of studies to be similar in terms of intervention, setting, control therapy and outcome measure. MacLehose and colleagues assessed each comparison for the possibility of meta-confounding and found that the most susceptible (i.e. those with differences in eligibility criteria and time periods and no adjustment for severity of disease, comorbidity and other prognostic factors) had, on average, the largest discrepancies.

Were the RCTs and non-randomised studies shown to use similar study methodology in all respects other than the allocation mechanism?

Meta-confounding could also occur through differences in other aspects of study design, beyond the use of randomisation. For example, the results of RCTs are known to vary according to the quality of randomisation, especially the concealment of allocation at recruitment.23 However, none of the reviews restricted the inclusion of RCTs to those with adequate concealment, or on any other methodological basis. Only one review26 assessed the comparability of randomised and non-randomised studies on any aspect of study quality (blinding). Discrepancies and similarities between study designs could therefore be partly explained by differences in other, unevaluated aspects of the methodological quality of the RCTs.

Similarly, there will be differences in the methodological rigour of the non-randomised studies. Importantly, the possible biases of non-randomised studies vary with study design. Only one review27 restricted non-randomised studies to be of a single design (historically controlled studies). In all the others, RCTs were compared with non-randomised studies of a mixture of designs.

Were sensible, objective criteria used to determine differences or equivalence of study findings?

The manner in which results were judged to be ‘equivalent’ or ‘discrepant’ varied widely between the reviews and influenced the conclusions that were drawn. For example, Concato and colleagues33 deemed that the randomised and non-randomised studies of mammographic screening (comparison 34 in Table 3) had remarkably similar results, whereas Ioannidis and colleagues34 classified them as discrepant. Only in two reviews was the judgement aggregated across the comparisons.27,35 In all the other reviews, each individual topic was classified as either equivalent or discrepant. Many of the comparisons made at the level of a clinical topic were based on very few data; for example, in the Ioannidis review34 on average five RCTs were compared with four non-randomised studies for each intervention. Hence the absence of a statistically significant difference cannot be interpreted as evidence of ‘equivalence’, and clinically significant differences in treatment effects cannot be excluded. Conversely, the presence of a statistically significant difference does not indicate that a clinically important difference exists. Four reviews26,28,34,35 more usefully concentrated on describing the magnitude of the differences, all four noting substantial differences in some, but not all, comparisons. The Concato review33 subjectively classified all comparisons as being ‘remarkably similar’.

The comparisons made in two reviews33,34 of the relative variability of randomised and non-randomised results can be considered flawed owing to the criteria used to compare variation. Concato and colleagues considered the range of the point estimates from observational studies and RCTs.33 This comparison was confounded by the different sample sizes used in observational and randomised studies, and by the number of studies considered. On average, the RCTs in the Concato review were 25% smaller than the observational studies, hence greater variability in their results is

© Queen’s Printer and Controller of HMSO 2003. All rights reserved.
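Two of the statistical points above can be illustrated with a quick simulation sketch. All numbers below (effect sizes, study counts, sample sizes) are invented for illustration and are not taken from the reviews discussed: first, with roughly five RCTs compared against four non-randomised studies per topic, a test of the difference between the two designs has little power, so a non-significant difference is weak evidence of equivalence; second, the range of point estimates widens as individual studies get smaller (and as more studies are compared), which is what confounds range-based comparisons of variability between designs.

```python
# Illustrative simulation only; no figures here come from the reviews.
import math
import random

random.seed(1)

def power_to_detect_gap(n_rct=5, n_obs=4, gap=math.log(1.5),
                        study_sd=0.4, n_sims=2000):
    """Share of simulated topics in which a Welch-style t-test on
    study-level log odds ratios detects a true 1.5-fold gap
    between the randomised and non-randomised designs."""
    hits = 0
    for _ in range(n_sims):
        rct = [random.gauss(0.0, study_sd) for _ in range(n_rct)]
        obs = [random.gauss(gap, study_sd) for _ in range(n_obs)]
        m1, m2 = sum(rct) / n_rct, sum(obs) / n_obs
        v1 = sum((x - m1) ** 2 for x in rct) / (n_rct - 1)
        v2 = sum((x - m2) ** 2 for x in obs) / (n_obs - 1)
        t = (m2 - m1) / math.sqrt(v1 / n_rct + v2 / n_obs)
        if abs(t) > 2.36:  # approximate 5% two-sided cut-off, ~7 df
            hits += 1
    return hits / n_sims

def mean_range(n_studies, n_per_study, n_sims=2000):
    """Average range of study-level estimates of a common effect;
    each study's standard error shrinks as 1/sqrt(sample size)."""
    se = 1.0 / math.sqrt(n_per_study)
    total = 0.0
    for _ in range(n_sims):
        ests = [random.gauss(0.0, se) for _ in range(n_studies)]
        total += max(ests) - min(ests)
    return total / n_sims

power = power_to_detect_gap()
# Five trials that are 25% smaller vs four larger studies, same true effect.
small_range = mean_range(n_studies=5, n_per_study=150)
large_range = mean_range(n_studies=4, n_per_study=200)
print(f"power to detect a 1.5-fold gap: {power:.2f}")
print(f"mean estimate range, smaller studies: {small_range:.3f}")
print(f"mean estimate range, larger studies:  {large_range:.3f}")
```

Under these assumed parameters the power is well below conventional levels, and the smaller studies produce a wider range of point estimates even though every simulated study shares the same underlying effect — the range reflects sample size and study count, not design.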
