Health Technology Assessment 2003; Vol. 7: No. 27

…covariate that would be needed to nullify an observed effect.145,174 However, none of these methods can correct for bias.

Investigators undertaking systematic reviews of effectiveness should include the results of non-randomised studies with discretion. If results from good-quality RCTs are available, there seems to be no justification for additionally considering non-randomised studies. If non-randomised studies are to be included in a review, their quality needs to be carefully assessed to evaluate the likelihood of bias. Quality assessment should pay particular attention to the description of the allocation mechanisms and the demonstration of comparability in all important prognostic factors at baseline. Although we have not identified a quality assessment tool that met all our requirements, the six best tools could all be adapted for use in a systematic review. The conclusions of a review should take into account the extra uncertainty associated with the results of non-randomised studies.

The results of non-randomised studies should be treated with a healthy degree of scepticism. Healthcare decision-makers should be cautious not to over-interpret results from non-randomised studies. Importantly, checking that treated and control groups appear comparable does not guarantee freedom from bias, and it should never be assumed that case-mix adjustment methods can fully correct for observed differences between groups. The uncertainty in the result of a non-randomised study is not properly summarised in the confidence interval for the overall effect: our analyses have shown that the true uncertainty in the results of non-randomised studies may be 10 times greater.
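To make the scale of this concrete, here is a minimal sketch (in Python, with illustrative numbers that are not taken from the report) of how a 95% confidence interval for an odds ratio behaves if the nominal standard error understates the true uncertainty by a factor of 10:

```python
import math

# Illustrative numbers only (not from the report): an apparent odds ratio
# of 0.80 with a conventional standard error of 0.05 on the log scale.
log_or = math.log(0.80)
se_nominal = 0.05
z = 1.96  # 95% confidence level

def ci_95(log_effect, se):
    """95% confidence interval for an odds ratio, given a log-scale effect and SE."""
    return math.exp(log_effect - z * se), math.exp(log_effect + z * se)

lo, hi = ci_95(log_or, se_nominal)
print(f"nominal 95% CI:  {lo:.2f} to {hi:.2f}")   # about 0.73 to 0.88

# If the true uncertainty is 10 times the nominal standard error,
# the interval no longer excludes harm.
lo, hi = ci_95(log_or, 10 * se_nominal)
print(f"inflated 95% CI: {lo:.2f} to {hi:.2f}")   # about 0.30 to 2.13
```

A result that looks like a precisely estimated benefit becomes compatible with anything from a substantial benefit to a doubling of the odds of harm, which is the sense in which the confidence interval of a non-randomised study can badly understate its real uncertainty.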
Conclusions

Non-randomised studies are sometimes but not always biased:

● The results of non-randomised studies can differ from the results of RCTs of the same intervention.
● All other issues remaining equal, lack of randomisation introduces bias into the assessment of treatment effects. The bias may have two components. It may be systematic and appear on average to act in a particular direction if the non-random allocation mechanism leads to a consistent difference in case-mix. However, if the allocation mechanism can lead to haphazard differences in case-mix, the bias can act in either direction, increasing uncertainty in outcome in ways that cannot be predicted. The extent of systematic bias and increased uncertainty varies according to the type of non-randomised comparison and clinical context.
● Meta-epidemiological techniques tend not to provide useful information on the sources of and degrees of bias in non-randomised studies, owing to the existence of meta-confounding and the lack of systematic or predictable bias in the results of non-randomised studies.

Statistical methods of analysis cannot properly correct for inadequacies of study design:

● Case-mix comparability and standard methods of case-mix adjustment do not guarantee the removal of bias. Residual confounding may be high even when good prognostic data are available, and in some situations adjusted results may appear more biased than unadjusted results (a simulation sketch following these conclusions illustrates the residual confounding).

Systematic reviews of effectiveness often do not adequately assess the quality of non-randomised studies:

● Quality assessment has not routinely been undertaken in systematic reviews of effectiveness that include non-randomised studies. When study quality has been investigated, there is variability in the tools and quality criteria that have been used, and there has been no consistent pattern between quality and review findings. Not all reviews that have assessed quality have considered the findings when synthesising study results and drawing conclusions.
● Although many quality assessment tools exist and have been used for appraising non-randomised studies, few are suitable for this task as they omit key domains of quality assessment (assessment of allocation mechanisms, attempts to achieve case-mix comparability by design, identification of confounding factors, use of case-mix adjustment methods). Few were developed using appropriate methodological procedures. Fourteen tools were identified which appear to have reasonable coverage of core domains of internal validity, six of which were considered potentially suitable for use in systematic reviews of non-randomised studies. All six would require modification to cover adequately the key issues in non-randomised studies (identifying prognostic factors and accounting for them in the design and analysis).
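The residual-confounding point above can be illustrated with a small simulation (a sketch under assumed parameters; the variable names and effect sizes are invented for illustration, not taken from the report). A prognostic factor drives both treatment allocation and outcome, the true treatment effect is zero, and the analyst can adjust only for a noisy measurement of that factor:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# True prognostic factor (case-mix). Allocation is non-random: prognosis
# influences who gets treated. The true treatment effect is exactly zero.
u = rng.normal(size=n)
treated = (u + rng.normal(size=n) > 0).astype(float)
outcome = 2.0 * u + rng.normal(size=n)

# The analyst observes only a noisy measurement of the prognostic factor.
x_observed = u + rng.normal(size=n)

def treatment_coef(covariates):
    """Least-squares coefficient on treatment, adjusting for the given covariates."""
    design = np.column_stack([np.ones(n), treated] + covariates)
    coefs, *_ = np.linalg.lstsq(design, outcome, rcond=None)
    return coefs[1]

print(f"unadjusted estimate:        {treatment_coef([]):+.2f}")
print(f"adjusted for noisy measure: {treatment_coef([x_observed]):+.2f}")
print(f"adjusted for true factor:   {treatment_coef([u]):+.2f}")  # unobservable in practice
```

In this setup the unadjusted estimate comes out around +2.3 and adjusting for the measured covariate only reduces it to roughly +1.3, against a true effect of zero; only adjustment for the true, unobservable prognostic factor recovers the null. Good prognostic data narrow but do not close the gap, which is why case-mix adjustment should never be assumed to have fully corrected for baseline differences.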
