Evaluating non-randomised intervention studies - NIHR Health ...

Health Technology Assessment 2003; Vol. 7: No. 27

Chapter 1

Introduction

The ultimate goal of the evaluation of healthcare interventions is to produce a valid estimate of effectiveness, in terms of both internal and external validity. Internal validity concerns the extent to which the results of a study can be reliably attributed to the intervention under evaluation, whereas external validity concerns the extent to which a study's results can be generalised beyond the given study context.

The randomised controlled trial (RCT) is widely regarded as the design of choice for the assessment of the effectiveness of healthcare interventions. The main benefit of the RCT is the use of a randomisation procedure that, when properly implemented, ensures that the allocation of any participant to one treatment or another cannot be predicted. The randomisation process makes the comparison groups equal with respect to both known and unknown prognostic factors at baseline, apart from chance bias.1 RCTs also tend to benefit from so-called 'inherited properties', which generally mark them out as higher quality studies. These properties include the fact that they are prospective studies, with written protocols specifying, and thus standardising, important aspects of patient enrolment, treatment, observation and analysis.2 RCTs are also more likely to employ specific measures to reduce or remove bias, such as blinded outcome assessment.

There are instances where non-randomised studies have either been sufficient to demonstrate effectiveness or have appeared to arrive at results similar to those of RCTs. However, where randomisation is possible, most agree that the RCT should be the preferred method of evaluating effectiveness.2–4 The risks of relying solely on non-randomised evidence include failing to convince some people of the validity of the result, or successfully convincing others of an incorrect result.3

Nevertheless, several scenarios remain under which an RCT may be unnecessary, inappropriate, impossible or inadequate.5 Examples include the assessment of rare side-effects of treatments, some preventive interventions and policy changes. Furthermore, there must be hundreds of examples of interventions for which RCTs would be possible but have not yet been carried out, leaving the medical and policy community to rely on non-randomised evidence. It is therefore essential to have an understanding of the biases that may influence non-randomised studies.

Types of non-randomised studies

A taxonomy of study designs that may be used to assess the effectiveness of an intervention is provided in Box 1. However, there is inconsistent use of nomenclature when describing non-randomised studies, and other taxonomies may apply different definitions to the same study designs. To avoid the problems of inconsistent terminology, six features can be identified that differentiate between these studies. First, some studies make comparisons between groups, whilst others simply describe outcomes in a single group (e.g. case series). Second, the comparative designs differ in the way that participants are allocated to groups, varying from the use of randomisation (RCTs), quasi-randomisation, geographical or temporal factors (cohort studies) and the decisions of healthcare professionals (clinical database cohorts), to the identification of groups with specific outcomes (case–control studies).
Third, studies differ in the degree to which they are prospective (and therefore planned) or retrospective, for matters such as the recruitment of participants, collection of baseline data, collection of outcome data and generation of hypotheses. Fourth, the method used to investigate comparability of the groups varies: in RCTs no investigation is necessary (although it is often carried out), in controlled before-and-after designs baseline outcome measurements are used, and in cohort and case–control studies investigation of confounders is required. Fifth, studies differ in the level at which the intervention is applied: sometimes it is allocated to individuals, other times to groups or clusters.

Finally, some studies are classified as experimental whereas others are observational. In experimental studies the study investigator has some degree of control over the allocation of interventions. Most importantly, he/she has control over the allocation

© Queen's Printer and Controller of HMSO 2003. All rights reserved.
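The key property of randomisation described above, that comparison groups end up balanced on known and unknown prognostic factors apart from chance, can be illustrated with a small simulation. This is an illustrative sketch, not part of the report: the prognostic factor (age), its distribution, the sample size and the 1:1 allocation ratio are all invented for the example.

```python
import random
import statistics

def randomise(n_participants: int, seed: int = 0) -> tuple[list[float], list[float]]:
    """Simple (unrestricted) 1:1 randomisation of participants to two arms.

    Each participant carries one known prognostic factor (here, age);
    unknown prognostic factors behave identically under randomisation,
    which is the point of the method.
    """
    rng = random.Random(seed)
    treatment, control = [], []
    for _ in range(n_participants):
        age = rng.gauss(60, 10)      # hypothetical prognostic factor
        if rng.random() < 0.5:       # allocation cannot be predicted in advance
            treatment.append(age)
        else:
            control.append(age)
    return treatment, control

treatment, control = randomise(10_000)
# With a reasonably large sample, the arms differ on the prognostic
# factor only by chance variation:
diff = statistics.mean(treatment) - statistics.mean(control)
print(f"difference in mean age between arms: {diff:.2f}")
```

With small samples the same code shows much larger chance imbalances, which is why baseline comparability is often still reported in RCTs even though no systematic difference is expected.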
