Evaluating non-randomised intervention studies - NIHR Health ...
© Queen’s Printer and Controller of HMSO 2003. All rights reserved.

| Review | Method of incorporating quality into synthesis | Results of quality investigation |
| --- | --- | --- |
| Littenberg, 1998 (316) | Qualitative | Quality and results of each study were discussed individually, starting with the highest-scoring studies, according to use of control group. Overall quality was stated to be poor, but tentative conclusions were drawn from the results |
| Loblaw, 1998 (223) | Qualitative | Summary of level of evidence available for each procedure/recommendation. Few high-quality papers were found |
| Long, 1995 (380) | Qualitative | Studies discussed in terms of level of evidence and grading of recommendations. The authors conclude that a greater percentage of studies using indirect positioning as an intervention had stronger evidence to support the interventions than studies with direct positioning as the intervention. Sackett’s framework should not be the sole method of treatment evaluation because the infant’s medical status needs to be considered prior to implementing a treatment strategy |
| Lucassen, 1998 (286) | Quantitative: correlation analysis between quality score and effect | Quality score and effect measure were not correlated (r = –0.02; p = 0.92) |
| Lyons, 1991 (319) | Quantitative: subgroup analysis according to quality (high/medium/low) | Studies with high internal validity demonstrated a larger pooled effect size (ES 1.138, SD 1.119) compared with medium- (ES 0.818, SD 0.778) or low- (ES 0.811, SD 0.670) validity studies. The low-validity comparison is significantly different from the medium and high comparisons (p < 0.05) |
| MacLehose, 2000 (26) | Quantitative: correlation analysis to investigate the impact of individual components of the quality score | Neither total quality nor any component of the quality score was significantly associated with effect size |
| MacMillan, 1994 (396) | Qualitative | Some methodological limitations and their implications for study results were discussed narratively. The impact on the conclusions of the review is not clear |
| MacMillan, 1994 (191) | Qualitative | Methodological limitations were discussed narratively. Study outcomes were also ranked by methodological score. There does not appear to be a great deal of difference in effect according to quality |
| Margolis, 1995 (322) | Qualitative | Some narrative discussion of quality was provided. Methodological weaknesses make any conclusions regarding effectiveness impossible |
| Marrs, 1995 (323) | Quantitative: subgroup analyses according to individual quality criteria | A no-treatment control group led to a higher ES than where a placebo control was used. Researcher contact with subjects had little impact on ES, and unpublished studies produced a slightly higher ES than published ones, as did randomised compared with non-randomised studies. A significant correlation was also found between subject retention in the study and ES (r = 0.268) |
| Massy, 1995 (324) | Quantitative: regression model weighted by inverse variance and study quality; results compared with weighting by inverse variance alone, by sample size, and with unweighted models | In each case the results of the regression models were not substantively different (data not presented) |
| Mathews, 1996 (397) | Qualitative | Studies discussed narratively according to grade (only grades A or B included). Methodological weaknesses make it difficult to assess the relative importance of each intervention |

*continued*

Health Technology Assessment 2003; Vol. 7: No. 27
