Evaluating non-randomised intervention studies - NIHR Health ...

Use of quality assessment in systematic reviews of non-randomised studies

The 168 reviews that claimed to have undertaken quality assessment did not comprehensively assess the internal validity of the included studies. Only 34 reviews (20%) included at least one quality item in five out of six internal validity domains, and only 62 (37%) assessed at least one of the four key areas that distinguish non-randomised from randomised studies. Only three reviews (2%) used quality assessment tools that we judged to pay sufficient attention to key issues of selection bias for non-randomised studies.

Most of the reviews that assessed study quality reported the results in some form, although in fewer than one-third of cases were the results reported per study. This appears to be partly related to the number of studies per review: reviews with fewer studies were more able to present detailed results. A minority of reviews that assessed quality (12%) did not go on to consider the quality assessment results in the study syntheses; however, most provided a narrative discussion of study quality and its implications.

Those reviews that attempted to incorporate study quality into a quantitative synthesis did so in a variety of ways. Most included a wide variety of study designs, but the numbers of primary studies included were not large enough to allow the degree of bias introduced by variations in quality to be clearly identified; the impact of quality was confounded by differences in study design.

Our review has revealed that the conduct of systematic reviews that include non-randomised studies is, with respect to quality assessment, as poor as, if not worse than, that found by Moher and colleagues in 1999 for meta-analyses of RCTs.131 Moher and colleagues also found that trial quality was not assessed in most meta-analyses (48%, compared with 67% of reviews in our study) and that, when it was assessed, the assessments were obtained with non-validated tools.

The infrequency of adequate quality assessment may occur in part owing to the lack of empirical evidence and the controversy concerning the biases thought to act in non-randomised studies, and to confusion both about which quality items to assess and which quality assessment tool to use. This is perhaps supported by our finding that reviews that included both RCTs and non-randomised studies were much more likely to have conducted quality assessment than those that did not include RCTs or that also included uncontrolled studies.

However, the absence of agreed criteria for assessing quality has not stopped reviews of non-randomised studies of healthcare interventions being carried out and their results being used to inform treatment and policy decisions.
Given the clear evidence that inadequate randomisation procedures severely bias the results of RCTs, it seems reasonable to predict that non-random methods of allocation are equally, if not more, open to selection bias than concealed allocation. Where randomised evidence is unavailable, the potential for bias and the resulting uncertainty inherent in estimates based on non-randomised evidence should be strongly emphasised and evaluated through quality assessment.

A particular strength of our review was the availability of a large number of systematic reviews for assessment via the DARE database. This database is fed by monthly, and in some cases weekly, extensive literature searches of a wide range of databases [such as Current Contents, MEDLINE and the Cumulative Index of Nursing and Allied Health Literature (CINAHL)] published since 1994. Systematic reviews have to meet a certain standard of methodological quality before being included on the database. This in turn means that the reviews in our sample are of higher quality than many that are published, so the situation in practice may be even worse than we have demonstrated here. In the past, the process of assessing reviews for inclusion in DARE and the subsequent writing of structured abstracts for each review also meant that there was some time lag before reviews were loaded on to the database.
The majority of reviews in our sample were published prior to 1999, and it is possible that reviewers are now more likely to conduct and report quality assessment of non-randomised studies than they have been in the past.

Nevertheless, reviewers who include non-randomised studies in their systematic reviews should be aware of the fundamental need to address the potential for selection bias in these studies (and also all of the other quality issues that affect all study designs), and should consider the impact of these biases in the synthesis of studies and, in turn, in their conclusions. Users of systematic reviews should likewise be careful not to over-interpret the results of reviews that included non-randomised studies. If biases have not been assessed, then the conclusions may be invalid or unjustified.
