
1.6.3 Leave-One-Out Cross-Validation

Leave-One-Out Cross-Validation (LOOCV) is the extreme version of k-fold cross-validation. In this case, k is equal to N, the total number of samples in the dataset (Duan et al., 2003). Thus, training and testing are repeated N times. During each run, a single sample is used as the test set, while all the remaining N − 1 samples are used in the model's training process, as illustrated in Figure 1-8.
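As a minimal sketch of this procedure in Python (not drawn from the thesis itself), the loop below holds out each of the N samples in turn; the 1-nearest-neighbour classifier is only a hypothetical placeholder so that the example is self-contained:

```python
# Illustrative LOOCV sketch: each of the N samples serves once as the test
# set while the remaining N - 1 samples are used to train the model.
import numpy as np

def loocv_error(X, y, fit, predict):
    """Return the mean test error over all N leave-one-out splits."""
    n = len(X)
    errors = 0
    for i in range(n):
        train = np.arange(n) != i               # mask: all samples except i
        model = fit(X[train], y[train])         # train on the N - 1 samples
        errors += predict(model, X[i]) != y[i]  # test on the held-out sample
    return errors / n

# Placeholder model: a trivial 1-nearest-neighbour classifier.
fit = lambda X, y: (X, y)
predict = lambda m, x: m[1][np.argmin(np.linalg.norm(m[0] - x, axis=1))]

X = np.array([[0.0], [0.1], [1.0], [1.1]])
y = np.array([0, 0, 1, 1])
print(loocv_error(X, y, fit, predict))  # 0.0: every held-out sample is classified correctly
```

The loop also makes the computational cost visible: the model is refitted N times, one fit per held-out sample.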

Even though the LOOCV algorithm produces an almost unbiased estimate of the expected test error, due to its high variance it may often lead to unreliable estimates (Efron, 1983; Kohavi, 1995; Chapelle and Vapnik, 2000; Duan et al., 2003; Glasmachers, 2008; Clarke et al., 2009). Furthermore, LOOCV is a computationally expensive and time-consuming validation method; thus, it is mainly used in cases where the input data are extremely scarce, such that the computational expense is no longer a discouraging factor (Cawley et al., 2007).

Figure 1-8 Leave-One-Out Cross-Validation (LOOCV)

The figure graphically represents the steps of leave-one-out cross-validation. In this method, the number of folds is equal to the number of initial observations. Thus, in every run, all samples but one are used for training, whereas the single remaining sample is kept aside for testing. The figure has been extracted from http://research.cs.tamu.edu/prism/lectures/iss/iss_l13.pdf
