Clinical Trials: A Practical Guide

Table 3. How to improve precision of the treatment effect in cluster randomized trials.

• Have clear justification for the use of the cluster randomized trial design
• Carefully select the outcome measures
• Adjust the sample size according to the size and number of clusters
• Take into account the clustering aspect of the design in the analysis
• Carry out sensitivity analyses to assess the robustness of results, using various statistical methods specified in the protocol

Sensitivity analysis

Sensitivity analyses assess how estimated treatment effects vary with different statistical methods, in particular methods that do and do not take the cluster effect into consideration. If the estimates of the treatment effect are sensitive to a cluster effect in a CRT, this suggests that the CRT design is important.

A sensitivity study that compared analytical methods in CRTs showed that results from different approaches that address the cluster effect are less sensitive when outcomes are continuous than when they are binary [12,14].

Bias in published cluster randomized trials

Although there is increasing recognition of the methodological issues associated with CRTs, many investigators are still unclear about the impact of this design on sample size requirements and on the results of analysis.

A retrospective review of CRTs published from January 1997 to October 2002 examined the prevalence of risk of bias associated with the design and conduct of CRTs [15]. Of the 36 trials reviewed, 15 (42%) provided evidence of appropriate allocation at the cluster level and 25 (69%) used stratified allocation. Few trials showed evidence of imbalance at the cluster level. However, some evidence of susceptibility to risk of bias at the individual level existed in 14 studies (39%). The authors concluded that some published CRTs might not have taken adequate precautions against threats to the internal validity of their design [15].

Similarly, another review explored the appropriate use of methodological and analytical techniques in reports of CRTs of primary prevention trials [16]. Of the 21 articles identified, only four (19%) included sample size calculations or discussions of power that allowed for clustering, and only 12 (57%) took clustering into account in the statistical analysis. The authors concluded that design and analysis issues associated with CRTs generally remain unrecognized (see Table 3) [16].
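Two of the points above (adjusting the sample size for clustering, and a sensitivity analysis comparing methods that do and do not account for the cluster effect) can be illustrated with a short simulation. The sketch below is not from the original text: it assumes illustrative values for the intracluster correlation coefficient, cluster size, and treatment effect, and uses Python with numpy and statsmodels. It first inflates an individually randomized sample size by the design effect, then compares the standard error of the treatment effect from an ordinary logistic regression, which ignores clustering, with that from a GEE analysis using an exchangeable working correlation, which accounts for it.

```python
# A minimal sketch, not taken from the book. Cluster sizes, the ICC, and the
# treatment effect below are illustrative assumptions.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2015)

# --- (1) Sample size adjustment by the design effect ---
n_individual = 400   # assumed sample size for an individually randomized trial
cluster_size = 20    # assumed average number of individuals per cluster
icc = 0.05           # assumed intracluster correlation coefficient
design_effect = 1 + (cluster_size - 1) * icc
n_cluster_trial = int(np.ceil(n_individual * design_effect))
print(f"Design effect: {design_effect:.2f}, inflated sample size: {n_cluster_trial}")

# --- (2) Simulate a cluster randomized trial with a binary outcome ---
n_clusters = 30  # clusters randomized 1:1 to treatment or control
treat = rng.permutation(np.repeat([0, 1], n_clusters // 2))
cluster_effect = rng.normal(0, 0.4, n_clusters)  # shared cluster-level variation

rows = []
for c in range(n_clusters):
    for _ in range(cluster_size):
        logit = -0.5 + 0.4 * treat[c] + cluster_effect[c]
        y = rng.binomial(1, 1 / (1 + np.exp(-logit)))
        rows.append((y, treat[c], c))
y, x, groups = map(np.array, zip(*rows))
X = sm.add_constant(x)

# Naive analysis: ordinary logistic regression ignoring the cluster effect
naive = sm.GLM(y, X, family=sm.families.Binomial()).fit()

# Cluster-aware analysis: GEE with an exchangeable working correlation
gee = sm.GEE(y, X, groups=groups, family=sm.families.Binomial(),
             cov_struct=sm.cov_struct.Exchangeable()).fit()

print(f"Naive SE of treatment effect: {naive.bse[1]:.3f}")
print(f"GEE (cluster-adjusted) SE:    {gee.bse[1]:.3f}")
```

With a positive intracluster correlation such as the one assumed here, the naive standard error is typically smaller than the cluster-adjusted one, which is why ignoring the cluster effect tends to overstate the precision of the estimated treatment effect.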
