Review of empirical comparisons of the results of randomised and non-randomised studies

Lipsey and Wilson 35 and Wilson and Lipsey 36

Lipsey and Wilson searched for all meta-analyses of psychological interventions, broadly defined as treatments whose intention was to induce psychological change (whether emotional, attitudinal, cognitive or behavioural). Evaluations of individual components of interventions and broad interventional policies or organisational arrangements were excluded. Searches of psychology and sociology databases, supported by manual searches, identified a total of 302 meta-analyses, 76 of which contained both randomised and non-randomised comparative studies. Results were analysed in two ways. First, the average effect sizes of randomised and non-randomised studies were computed across the 74 reviews, and average effects were noted to be very slightly smaller for non-randomised than randomised studies. Second (and more usefully), the difference in effect sizes between randomised and non-randomised studies within each of the reviews was computed and plotted. This revealed both large over- and underestimates with non-randomised studies, with differences in effect sizes ranging from –0.60 to +0.77 standard deviations.
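To illustrate the second analysis, the following Python sketch (not part of the original report; the review names and effect sizes are hypothetical) computes the within-review difference in standardised effect sizes, defined here as the pooled effect of a review's non-randomised studies minus the pooled effect of its randomised studies, and summarises the spread of those differences across reviews.

# Illustrative sketch only: hypothetical per-review pooled effect sizes
# (standardised mean differences) for randomised and non-randomised studies.
reviews = [
    {"name": "review_A", "d_randomised": 0.40, "d_non_randomised": 0.25},
    {"name": "review_B", "d_randomised": 0.10, "d_non_randomised": 0.55},
    {"name": "review_C", "d_randomised": 0.30, "d_non_randomised": 0.28},
]

# Within-review difference: positive values mean the non-randomised studies
# gave the larger effect estimate, negative values the smaller.
differences = [r["d_non_randomised"] - r["d_randomised"] for r in reviews]

print("Per-review differences:", differences)
print("Range of differences: %.2f to %.2f" % (min(differences), max(differences)))
print("Mean difference across reviews: %.2f" % (sum(differences) / len(differences)))

Plotting these within-review differences, as Lipsey and Wilson did, shows whether non-randomised studies over- or underestimate effects relative to RCTs in each individual review, rather than only on average across all reviews.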
Studies excluded from the review

Three commonly cited studies were excluded from our review. 37–39 Although these studies made comparisons between the results of randomised and non-randomised studies across many interventions, they did not match RCTs and non-randomised studies according to the intervention. Although they provide some information about the average findings of selected randomised and non-randomised studies, they did not consider whether there are differences in the results of RCTs and non-randomised studies of the same intervention.

Findings of the eight reviews

The eight reviews have drawn conflicting conclusions. Five of the eight reviews concluded that there are differences between the results of randomised and non-randomised studies in many but not all clinical areas, but without there being a consistent pattern indicating systematic bias. 25,26,28,34,35 One of the eight reviews found an overestimation of effects in all areas studied. 27 The final two concluded that the results of randomised and non-randomised studies were 'remarkably similar'. 32,33

Of the two reviews that considered the relative variability of randomised and non-randomised results, one concluded that RCTs were more consistent 34 and the other that they were less consistent. 33

The two studies that investigated the impact of case-mix adjustment were in agreement, both noting that adjustment did not necessarily reduce discordance between randomised and non-randomised findings. 25,27

Critical evaluation of reviews

The discrepancies in the conclusions of the eight reviews may in part be explained by variations in their methods and rigour, so that they had varying susceptibility to bias. We consider the weaknesses in these reviews under four headings.

Was the identification of included studies unlikely to be biased?

The studies used in all the reviews represent only a very small portion of all randomised and observational research. From Table 3, it is clear that the seven reviews in medical areas were each based on only a subset of known comparisons of randomised and non-randomised evidence. Even the largest review 34 did not include all comparisons identified in previous reviews. Correspondence has also cited several other examples of treatment comparisons where there are disagreements between observational and randomised studies. 40,41

More important, could the comparisons selected for these meta-epidemiological reviews be a potentially biased sample? There are two levels at which publication bias can act in these evaluations: (a) selective publication of primary studies, which will affect all reviews, and (b) selective publication of meta-analyses of these studies, which will affect reviews restricted to secondary publications. Evaluations of publication bias have noted differences in the frequency of primary publication of randomised and observational studies, although the direction and magnitude of the differences vary between evaluations and the relationship to statistical significance is not known. 42 Similarly, the decisions made concerning the publication of meta-analyses that include non-randomised studies are likely to be influenced by the results of existing randomised controlled trials. The Concato review 33 may be the most susceptible to publication bias, as it restricted study selection to meta-analyses combining randomised or observational results published in five general medical journals: Annals of Internal Medicine, British Medical Journal (BMJ), Journal of the American Medical Association (JAMA), New England Journal of Medicine (NEJM) and The Lancet. Therefore, the only studies eligible for inclusion were those where both authors and top journal editors had already decided that it was sensible to synthesise the
