<strong>Clinical</strong> <strong>Trials</strong>: A Practical Guide

complications with this treatment without significant clinical benefit, prompting physicians to weigh the risk–benefit ratio more carefully. Therefore, there are lessons to be learnt from negative/neutral trials.

7. Was the study negative because it was inadequately powered?

With the increased publication of trials with negative or neutral results, it is important to be clear whether a trial was negative because of errors in the sample-size calculation, or whether the new treatment strategy really was no different from the standard treatment. A trial should be large enough to detect a worthwhile effect as statistically significant if it exists, or to give confidence in the conclusion that the new treatment is no better than the control treatment.

Calculation of sample size is based on the expected difference in the primary outcome measure between the two groups being assessed, and on the baseline event rate expected in the standard-therapy group. The expected difference should also be worthwhile in real practice. For example, a 1% reduction in event rates is a useful difference to pursue if the event rate with standard therapy is around 5%–10%, but would be less meaningful if the event rate with standard therapy is 40%.

Underpowered studies are common because expectations are over-ambitious; additional patients might need to be recruited, but the funding to extend the study might not be available. Underpowered studies can lead to a Type II (beta) error, ie, the erroneous conclusion that an intervention has no effect when the trial size is inadequate to allow a comparison. In contrast, a Type I (alpha) error is the conclusion that a difference is significant when in fact it is due to chance.

By convention, the threshold for considering a result as significant is set higher than for considering a study to be nonsignificant, therefore favoring traditional therapies over new therapies that lack established side-effect profiles [13].

8. Were the outcome measures reported appropriately?

A study's outcome measures need to be clearly defined. Standardized measurement criteria for outcomes are needed for the results to have clinical relevance. If multiple outcome measures are being collected, a precise statement should explain how these measures are to be prioritized and reported relative to the study objectives.
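The sample-size reasoning above can be sketched with the standard normal-approximation formula for comparing two proportions. This is a minimal illustration only: the function name and the event rates are chosen for this example (they are not taken from any particular trial), and the formula is the common textbook approximation, with the alpha and power defaults corresponding to the Type I and Type II error thresholds discussed above.

```python
import math
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Approximate per-group sample size to detect event rates p1 vs p2
    as statistically significant (two-sided test, normal approximation).
    Illustrative sketch; real trials use more refined methods."""
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)   # controls the Type I (alpha) error
    z_beta = z(power)            # controls the Type II (beta) error
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p1 - p2) ** 2)

# Halving a 10% event rate to 5% (hypothetical rates):
print(n_per_group(0.10, 0.05))   # → 435 per group
# A 1% absolute reduction from a 40% baseline needs tens of thousands per group:
print(n_per_group(0.40, 0.39))
```

The two calls illustrate why the expected difference and the baseline event rate both drive the calculation: the same nominal alpha and power can demand a modest trial or an enormous one, and a study recruited well below these numbers risks exactly the Type II error described above.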
