Evaluating non-randomised intervention studies - NIHR Health ...

Health Technology Assessment 2003; Vol. 7: No. 27

Appendix 4

Detailed description of 'unselected' top 14 quality assessment tools

Bracken, 1989 (104)

The types of questions which led to the selection of this tool according to our pre-selected core criteria were:

5.3 How allocation occurred
If historical rather than concurrent controls have been used, has it been explained why this does not introduce bias into the study?

5.4 Any attempt to balance groups by design
If study groups were 'matched', have the rationale and detailed criteria for matching, and success (i.e. number of cases not matched) been provided?

9.2 Identification of prognostic factors
Has the measurement of important confounding or effect-modifying variables been described so the reader can judge how they have been controlled?

9.3 Case-mix adjustment
Has an analysis of potential confounding or effect modification of the principal relations been presented?

This tool provides a list of 36 questions to guide the reporting of observational studies. The tool is split into four sections: introduction to the report (4 items); description of materials and methods (17 items); presentation of results (7 items); and study conclusions (8 items). The tool took 20-30 minutes to complete. It was clearly not designed for use in a systematic review and to our knowledge has not been used for that purpose. The majority of the questions could be answered yes/no, making it possible to compare responses across studies. However, the questions are not phrased in such a way that any real judgements of the methodological quality of a study could be made, as the questions listed above demonstrate. Furthermore, the study designs that the tool could cover were limited to case-control and cohort designs.

The Bracken tool was not judged to be suitable for use in a systematic review.

Critical Appraisal Skills Programme, 1999 (64)

The types of questions which led to the selection of this tool according to our pre-selected core criteria were:

5.3 How allocation occurred
Item not covered

5.4 Any attempt to balance groups by design
Have the authors taken account of the confounding factors in the design and/or analysis?

9.2 Identification of prognostic factors
What factors could potentially be related to both the exposure and the outcome of interest (confounding)?

9.3 Case-mix adjustment
Same item as 5.4 above

This tool provides a list of 12 questions to aid the critical appraisal of cohort studies. The tool is split into three sections: 'are the results of the study valid?' (7 items); 'what are the results?' (2 items); and 'will the results help locally?' (2 items). The tool took 15-20 minutes to complete. It was not used in our sample of systematic reviews, and in its current format would be difficult to apply in such a context. The questions prompt thinking about quality but were felt to require subjective answers that are unlikely to be consistent within or between reviewers. The questions are followed by 'hints' to help the reader answer the overall questions, but in some cases it is difficult to work out how the responses to the hints translate into yes/no/can't tell. For example, if a reviewer answered 'yes' to two hints and 'no' to two hints for a single question, it is not clear what the overall answer to that question should be.

The CASP tool was not judged to be suitable for use in a systematic review in its current format.

© Queen's Printer and Controller of HMSO 2003. All rights reserved.