…bootstrap method and demonstrated that it is more robust than other methods across varying signal levels and classifiers in this context. For small to moderately sized samples, we suggest using the adjusted bootstrap method because (1) it remains conservative and therefore avoids an overly optimistic assessment of a prediction model, and (2) it does not suffer from extremely large bias or variability compared with the other methods. These features of the adjusted bootstrap method are particularly appealing for small samples, when other prediction error estimation methods encounter difficulties.
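To make the recommendation concrete, the sketch below (Python, illustrative only) estimates misclassification error with the plain leave-one-out bootstrap that the adjusted method builds on: each bootstrap resample trains a classifier, and the cases never drawn into that resample serve as the test set. The paper's specific adjustment is not reproduced here; the function name bootstrap_error, the logistic-regression classifier, and the parameter n_boot are assumptions chosen for illustration, not the authors' implementation.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def bootstrap_error(X, y, n_boot=200, random_state=0):
        # Leave-one-out bootstrap: train on each resample, test on the cases it left out.
        rng = np.random.default_rng(random_state)
        n = len(y)
        errors = []
        for _ in range(n_boot):
            idx = rng.integers(0, n, size=n)           # draw n cases with replacement
            oob = np.setdiff1d(np.arange(n), idx)      # cases never drawn: the test set
            if oob.size == 0 or np.unique(y[idx]).size < 2:
                continue                               # skip degenerate resamples
            clf = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
            errors.append(np.mean(clf.predict(X[oob]) != y[oob]))
        return float(np.mean(errors))

    # Illustration on a small synthetic data set
    rng = np.random.default_rng(1)
    X = rng.normal(size=(40, 5))
    y = (X[:, 0] + rng.normal(size=40) > 0).astype(int)
    print(bootstrap_error(X, y))

Averaging error only over the out-of-resample cases is what keeps the estimate conservative: the classifier is never scored on observations it was trained on, which is the property the conclusion above appeals to.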
