Evaluating non-randomised intervention studies - NIHR Health ...


Use of quality assessment in systematic reviews of non-randomised studies

TABLE 12 Quality assessment in systematic reviews: coverage of internal validity

Columns: (a) no. with ≥1 pre-specified items; (b) proportion of reviews specifying items (%, n = 124); (c) proportion of reviews claiming QA (%, n = 168); (d) proportion of reviews including NRS (%, n = 511).

                                                   (a)   (b)   (c)   (d)
Coverage of individual domains
  Creation of groups                               103    83    61    20
  Blinding                                          78    63    46    15
  Soundness of information                          35    28    21     7
  Follow-up                                         89    72    53    17
  Comparability                                     68    55    41    13
  Outcome                                           46    37    27     9
Summary of domains covered
  5 or 6 domains                                    34    27    20     7
  All 6 domains                                     11     9     7     2
Summary of core items covered
  Number of core items met:
    1                                               62    50    37    12
    2                                               19    15    11     4
    3                                                5     4     3     1
    4                                                0     0     0     0
  Number meeting each core item:
    5.3. How allocation occurred                    24    19    14     5
    5.4. Any attempt to balance groups by design    17    14    10     3
    9.2. Identification of prognostic factors       12    10     7     2
    9.3. Case-mix adjustment                        33    27    20     7
Reviews using ‘best’ tools
  5 domains and 3 core items                         3     2     2     1

QA, quality assessment; NRS, non-randomised study.

One93 looked at criterion validity but did not present the results of the comparison. The other129 used the judges’ panel technique to examine face and content validity; the results were not reported in detail, only that the preliminary list of over 200 constructs was reduced to 71 critical factors.

Twelve reviews in the sample (eight using modified tools and four using tools developed by the review authors) did not report the quality items assessed.

Content of assessment tools

The quality criteria included in the assessment tools used could be identified for 124 reviews, and these were examined in more detail (Table 12). The majority of these reviews (83%) included at least one item relating to the ‘creation of groups’ domain; 72% considered ‘follow-up’; 63% looked at ‘blinding’; and 55% included items relating to the ‘comparability of groups’ domain. Less than half of the 124 reviews considered ‘analysis of outcome’ or ‘soundness of information’. Only 34 reviews contained items in at least five of the six internal validity domains, and 11 contained items in all six.

Fewer than 21% of all reviews that included non-randomised studies addressed any one of the six internal validity domains (see final column of Table 12); 5% assessed five internal validity domains and 2% assessed all six. Looking more closely at the four quality items that particularly distinguish non-randomised studies from RCTs, only 62 reviews (12%) assessed at least one of the four core items; 19 of these (4%) assessed two of the items and only five (1%) looked at three of the four. No review assessed all four items, and 449 reviews (88%) addressed none of them. Of the four items, reviews were most likely to consider whether the study had conducted any case-mix adjustment (33 reviews, 7% of the sample), followed by how the allocation had occurred (24 reviews, 4.7%), whether any matching had been used to balance groups (17 reviews, 3%) and whether all important prognostic factors had been identified (12 reviews, 2%). Only three reviews109,112,130 used one of what we judged to be the ‘best’ tools, covering five internal validity domains and three of the four core items (Table 12).
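As an illustrative check (not part of the original report), the three proportion columns in Table 12 are simply the count in column (a) divided by the relevant denominator; for example, for the ‘creation of groups’ domain:

\[
\frac{103}{124} \approx 83\%, \qquad
\frac{103}{168} \approx 61\%, \qquad
\frac{103}{511} \approx 20\%,
\]

matching the figures reported for reviews specifying items (n = 124), reviews claiming quality assessment (n = 168) and all reviews including non-randomised studies (n = 511).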
