Evaluating non-randomised intervention studies - NIHR Health ...
Appendix 4

DuRant, 1994 99

The types of questions which led to the selection of this tool according to our pre-selected core criteria were:

5.3 How allocation occurred
How were subjects assigned to experimental groups?

5.4 Any attempt to balance groups by design
Item not covered

9.2 Identification of prognostic factors
As for 9.3 below

9.3 Case-mix adjustment
Were appropriate variables or factors controlled for or blocked during the analysis? Were other potentially confounding variables handled appropriately?

This tool provides a list of 103 questions to aid the evaluation of research articles. The tool covers several study designs: experimental or quasi-experimental designs, survey designs and cross-sectional studies, retrospective chart reviews, retrospective studies and case–control studies. Some of the sections are general across all designs: 'introduction' (8 items); 'methods and procedures' (3 items); 'results' (12 items); and 'discussion' (9 items). Additional sections relevant to non-randomised intervention studies are: 'experimental or quasi-experimental designs' (26 items) and 'statistical analysis for experimental designs' (5 items). The tool took 15–25 minutes to complete. Again, it was not designed for use in a systematic review and, although it covers the majority of issues relating to internal validity, the tool does not force the reader to answer the questions in a systematic manner. It is also very long and includes several irrelevant items.
This tool is really a critical appraisal tool designed to prompt thinking about quality and does not provide a means of comparing the quality of studies included in a review.

The DuRant tool was not judged to be suitable for use in a systematic review.

Fowkes and Fulton, 1991 107

The types of questions which led to the selection of this tool according to our pre-selected core criteria were:

5.3 How allocation occurred
Item not covered

5.4 Any attempt to balance groups by design
Control group acceptable? Matching or randomisation

9.2 Identification of prognostic factors
Distorting influences? Confounding factors

9.3 Case-mix adjustment
Distorting influences? Distortion reduced by analysis

This tool provides a list of six questions as a checklist for appraising a medical article. The tool aims to cover several study designs, including cross-sectional, cohort, controlled trial and case–control studies. Each of the six questions lists items to be scored 'major problem', 'minor problem' or 'no problem'. The tool took around 10 minutes to complete.
This is another example of a checklist that was designed to prompt thinking about quality but does not permit the systematic assessment of quality across studies. Limited detail of what the items mean is provided in the actual checklists, and the entire paper needs to be read to obtain any guidance.

The Fowkes tool was not judged to be suitable for use in a systematic review.

Hadorn and colleagues, 1996 102

The types of questions which led to the selection of this tool according to our pre-selected core criteria were:

5.3 How allocation occurred
For cohort studies, the study groups were not treated concurrently

5.4 Any attempt to balance groups by design
Item not covered

9.2 Identification of prognostic factors
Known prognostic factors for the outcome of interest or possible confounders were not measured at baseline

9.3 Case-mix adjustment
A significant difference was found in one or more baseline characteristics that are known prognostic factors or confounders, but no adjustment was made for this in the analysis

This tool was developed for the rating of evidence for clinical practice guidelines. It provides a list of eight quality assessment criteria; each criterion lists what the authors consider to be major and minor flaws. The criterion for allocation of patients to treatment groups has separate responses for RCTs and cohort or registry studies. The tool took 15–20 minutes to complete. Although the tool covered a number of validity issues, it was found to be fairly difficult to use in its published format and perhaps concentrated overly on pharmaceutical interventions.

The Hadorn tool was not judged to be suitable for use in a systematic review.
