Clinical Trials

Chapter 1 | Randomized Clinical Trials

Randomization in conjunction with a large sample size is the most effective way to restrict such confounding, by evenly distributing both known and unknown confounding factors between treatment groups. If, before the study begins, we know which factors may confound the trial, then we can use randomization techniques that force a balance of these factors (stratified randomization) (see Chapter 7). In the analysis stage of a trial, we might be able to restrict confounding using special statistical techniques such as stratified analysis and regression analysis (see Chapter 24).

Random error

Even if a trial has an ideal design and is conducted to minimize bias and confounding, the observed treatment effect could still be due to random error or chance [4,5]. The random error can result from sampling, biologic, or measurement variation in outcome variables. Since the patients in a clinical trial are only a sample of all possible available patients, the sample might still show a chance false result compared to the overall population. This is known as a sampling error. Sampling errors can be reduced by choosing a very large group of patients or by using special analytic techniques that combine the results of several smaller studies, called a meta-analysis (see Chapter 38). Other causes of random error are described elsewhere [5].

Statistical analyses deal with random error by providing an estimate of how likely it is that the measured treatment effect reflects the true effect (see Chapters 18–21). Statistical testing or inference involves an assessment of the probability of obtaining the observed treatment difference (or a more extreme difference) for an outcome, assuming that there is no difference between treatments. This probability is often called the P-value or false-positive rate.
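As a minimal sketch of how such a P-value might be computed, consider a hypothetical trial comparing event proportions between two arms (all counts below are invented for illustration, and the pooled two-proportion z-test is just one of many tests that could be used):

```python
import math

def two_proportion_p_value(events_a, n_a, events_b, n_b):
    """Two-sided P-value for a difference in event proportions,
    using a pooled two-proportion z-test (normal approximation)."""
    p_a, p_b = events_a / n_a, events_b / n_b
    pooled = (events_a + events_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided tail probability of the standard normal distribution
    return math.erfc(abs(z) / math.sqrt(2))

# Hypothetical trial: 30/100 events on treatment vs 45/100 on control
p = two_proportion_p_value(30, 100, 45, 100)
print(round(p, 3))  # below the conventional 0.05 threshold
```

Under the null hypothesis of no treatment difference, a difference at least this extreme would arise by chance in fewer than 5% of trials, so the result would conventionally be called statistically significant.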
If the P-value is less than a specified critical value (eg, 5%), the observed difference is considered to be statistically significant. The smaller the P-value, the stronger the evidence for a true difference between treatments. On the other hand, if the P-value is greater than the specified critical value, then the observed difference is regarded as not statistically significant and is considered to be potentially due to random error or chance. The traditional statistical threshold is a P-value of 0.05 (or 5%), which means that we only accept a result when the likelihood of the conclusion being wrong is less than 1 in 20, ie, we conclude that only one out of a hypothetical 20 trials will show a treatment difference when in truth there is none.

Statistical estimates summarize the treatment differences for an outcome in the form of point estimates (eg, means or proportions) and measures of precision (eg, confidence intervals [CIs]) (see Chapters 18–21). A 95% CI for a treatment difference means that the range presented for the treatment effect is 95% likely to contain the true value of the treatment difference, ie, the value we would obtain if we were to use the entire available patient population (if the CI were calculated in 100 hypothetical trials assessing the same treatment effect, about 95 of the intervals would contain this true value).
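To illustrate such an interval, here is a sketch (not taken from the book) of an approximate 95% CI for a difference in event proportions, using the normal approximation and the conventional 1.96 multiplier; the trial counts are hypothetical:

```python
import math

def diff_ci_95(events_a, n_a, events_b, n_b):
    """Approximate 95% CI for the difference in event proportions
    (normal approximation with unpooled standard error)."""
    p_a, p_b = events_a / n_a, events_b / n_b
    diff = p_a - p_b
    se = math.sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    margin = 1.96 * se  # 1.96 = 97.5th percentile of the standard normal
    return diff - margin, diff + margin

# Hypothetical trial: 45/100 events on control vs 30/100 on treatment
low, high = diff_ci_95(45, 100, 30, 100)
print(round(low, 3), round(high, 3))
# The interval excludes 0, consistent with a significant difference
```

The point estimate (here, the difference in proportions) sits at the center of the interval, and the interval's width conveys the precision of that estimate: larger trials give narrower intervals.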
