Evaluating non-randomised intervention studies - NIHR Health Technology Assessment
Discussion and conclusions

Non-randomised studies provide a poor basis for treatment or health policy decisions:

● The inability of case-mix adjustment methods to compensate for selection bias, and our inability to identify non-randomised studies which are free of selection bias, indicate that non-randomised studies should only be used in situations where RCTs cannot be undertaken.

● Healthcare policies based upon non-randomised studies or systematic reviews of non-randomised studies may need re-evaluation if the uncertainty in the true evidence base was not fully appreciated when the decisions were made.

Recommendations for further research

1. The resampling methodology that we have employed in this project should be applied in other clinical areas where suitable RCTs exist. These evaluations should consider (a) the distribution of biases associated with non-randomised allocation, (b) whether non-randomised studies with similar baseline characteristics are less biased and (c) the performance of case-mix adjustment methods. It would be valuable to study different contexts to evaluate the degree to which bias is related to the amount of prognostic information known at allocation.

2. Efforts should be focused on the development of a new quality assessment tool for non-randomised studies, or the refinement and development of existing tools. Appropriate methodological procedures of tool development should be employed and key indicators of internal validity covered. These indicators include both those for which empirical evidence is available from work on RCTs and those supported by our empirical investigations. The latter should include the method used to allocate participants to groups; specification of the factors that influenced these allocations; the way in which these factors are thought to relate to outcome; and appropriate adjustment in the analysis. In the meantime, systematic reviewers should be strongly encouraged to use and adapt those tools that do cover key quality issues.

3. Research should be undertaken to develop methods of measuring and characterising reasons for treatment choices in patient preference and allocation-by-indication studies, and evaluations undertaken to assess whether recording such information allows effective adjustment for selection bias.

4. Empirical work is needed to investigate how quality assessments of non-randomised studies should be incorporated in the synthesis of studies in a systematic review, and to study the implications of individual quality features for the interpretation of review results.

5. Reasons for the failure of case-mix adjustment methods should be further investigated, including assessment of the generalisability of our results to risk assessments and epidemiological studies where they are frequently utilised. The impact of differences between unconditional and conditional estimates of treatment effects should be assessed (a worked illustration appears at the end of this section).

6. Guidelines should be produced to advise investigators on the best ways of undertaking prospective non-randomised studies to minimise bias.

7. The role of propensity scoring in adjusting for selection bias should be further evaluated, and computer macros made available for its application (a minimal sketch follows this list).
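To make the propensity-score approach of recommendation 7 concrete, the sketch below is purely illustrative and is not the software the report envisages: it simulates allocation by indication with hypothetical covariates (severity, age), fits a standard logistic-regression propensity model, and shows that stratifying on propensity-score quintiles recovers a null treatment effect that the crude comparison misstates.

```python
# Minimal sketch of propensity-score adjustment by stratification.
# All data are simulated and all variable names are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Simulated non-randomised study: sicker patients (higher severity)
# are more likely to receive treatment (allocation by indication).
n = 5000
severity = rng.normal(size=n)
age = rng.normal(size=n)
p_treat = 1 / (1 + np.exp(-(0.8 * severity + 0.3 * age)))
treated = rng.binomial(1, p_treat)
# Outcome depends on severity only: the true treatment effect is null.
p_outcome = 1 / (1 + np.exp(-(-1.0 + 1.2 * severity + 0.0 * treated)))
outcome = rng.binomial(1, p_outcome)

# Step 1: model the probability of treatment given observed covariates.
X = np.column_stack([severity, age])
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# Step 2: stratify on propensity-score quintiles and average the
# within-stratum risk differences (equal weights, for simplicity).
strata = np.digitize(ps, np.quantile(ps, [0.2, 0.4, 0.6, 0.8]))
effects = []
for s in range(5):
    m = strata == s
    if treated[m].sum() > 0 and (1 - treated[m]).sum() > 0:
        effects.append(outcome[m][treated[m] == 1].mean()
                       - outcome[m][treated[m] == 0].mean())

crude = outcome[treated == 1].mean() - outcome[treated == 0].mean()
print(f"Crude risk difference:    {crude:+.3f}")      # biased away from zero
print(f"Adjusted risk difference: {np.mean(effects):+.3f}")  # close to zero
```

Note that such adjustment can only balance covariates that are actually recorded; consistent with the report's findings, it cannot be assumed to remove bias arising from unmeasured prognostic factors.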
Recommendations for those producing and using health technology assessments

1. Systematic reviewers and those conducting health technology assessments should be strongly encouraged to base any estimates of effectiveness or cost-effectiveness on evidence from RCTs. Where such evidence is unavailable, the uncertainty inherent in estimates based on non-randomised evidence should be strongly emphasised.

2. Decision-makers should review healthcare policies based on the results of non-randomised studies to assess whether the inherent uncertainty in the evidence base was properly appreciated when the policy decisions were made.

3. Agencies funding primary research should fund non-randomised studies only when they are convinced that a randomised study is not feasible.
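Finally, the worked illustration promised under research recommendation 5: conditional and unconditional (marginal) estimates can differ even in the absence of confounding, because the odds ratio is non-collapsible. In a hypothetical example with invented numbers, suppose a population splits evenly into a low-risk and a high-risk stratum, treatment is allocated at random, and the odds ratio within each stratum is 2. With control risks of 0.2 and 0.6 (odds 0.25 and 1.5), doubling the odds gives treated risks of 0.5/1.5 = 0.33 and 3/4 = 0.75. Marginally, the control risk is 0.40 and the treated risk is about 0.54, so the unconditional odds ratio is (0.54/0.46)/(0.40/0.60) = 1.77, not 2. An adjusted (conditional) estimate and an unadjusted (marginal) estimate therefore target different quantities, and part of an apparent adjustment effect may reflect non-collapsibility rather than removal of selection bias.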
