
selection as follows: a wide frequency band is first divided into several sub-bands, which may overlap. For each sub-band, an r²-coefficient is calculated as a score using the observations in that sub-band. Based on these scores, one or several sub-bands can be selected.
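
The exact form of the r²-coefficient is not spelled out here; a common choice in BCI work is the squared (point-biserial) correlation between a band-power feature and the binary class label. Under that assumption, the following Python sketch scores a set of overlapping sub-bands and keeps the highest-scoring ones; the data, band edges, and function names are purely illustrative.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def band_power(trials, fs, band):
    """Mean power of each trial after band-pass filtering (trials: n_trials x n_samples)."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, trials, axis=1)
    return (filtered ** 2).mean(axis=1)

def r2_scores(trials, labels, fs, bands):
    """Squared correlation between band power and binary class labels, one score per sub-band."""
    scores = []
    for band in bands:
        power = band_power(trials, fs, band)
        r = np.corrcoef(power, labels)[0, 1]   # point-biserial correlation with the class label
        scores.append(r ** 2)
    return np.array(scores)

# Illustrative use: split 4-40 Hz into overlapping 4 Hz-wide sub-bands with a 2 Hz step.
fs = 250
rng = np.random.default_rng(0)
trials = rng.standard_normal((60, 2 * fs))     # 60 single-channel trials, 2 s each (synthetic)
labels = rng.integers(0, 2, size=60)           # binary class labels
bands = [(lo, lo + 4) for lo in range(4, 37, 2)]
scores = r2_scores(trials, labels, fs, bands)
selected = [bands[i] for i in np.argsort(scores)[::-1][:3]]   # keep the three best sub-bands
```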

5 Translation Methods

After the features that reflect the intentions of the user are extracted, the next step is to translate these discriminative features into commands that operate a device.

The translation methods employed in BCIs convert the extracted features into device control commands [7]. These commands can be discrete-valued, such as letter selections, or continuous-valued, such as vertical and horizontal cursor movements. The translation methods in BCIs often employ machine learning approaches in order to train a model from the collected training data. Translation methods can be broadly classified into classification and regression methods. Sect. 5.1 describes the former, which translate features into discrete-valued commands, while Sect. 5.2 describes the latter, which translate features into continuous-valued commands.
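
To make the distinction concrete, here is a small, purely illustrative Python sketch (synthetic features; the scikit-learn models are not mentioned in the chapter) in which the same kind of feature vector is mapped either to a discrete command, a letter index produced by a classifier, or to a continuous command, a cursor displacement produced by a regressor.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 8))        # extracted feature vectors, one per trial (synthetic)

# Discrete-valued command: pick one of four letters/targets per trial.
letters = rng.integers(0, 4, size=200)   # synthetic target indices
clf = LinearDiscriminantAnalysis().fit(X, letters)
chosen_letter = clf.predict(X[:1])       # e.g. array([2]) -> select the third letter

# Continuous-valued command: vertical cursor displacement per trial.
dy = rng.standard_normal(200)            # synthetic cursor displacements
reg = LinearRegression().fit(X, dy)
cursor_dy = reg.predict(X[:1])           # a real number telling the cursor how far to move
```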

5.1 Classification Methods

A classification algorithm is defined as one that involves building a model from the training data so that the model can be used to classify new data that are not included in the training data [36].

The “No Free Lunch” theorem states that no single approach is generally superior to the others in pattern classification; furthermore, if one approach seems to outperform another in a particular situation, this is a consequence of its fit to that particular pattern recognition problem [33]. Hence, a wide variety of classification methods are used in BCIs. The Fisher linear discriminant (FLD) [37–41], the support vector machine (SVM) [37, 39, 40, 41], Bayesian classifiers [42], and hidden Markov models (HMM) [43] are among the most commonly used classification methods in BCIs. The FLD and SVM classification algorithms are described in the following subsections.

The classification algorithms use the training data, which comprises n samples denoted x_j with class labels y_j, where j = 1, ..., n, to form the training model. The classifier then uses the training model to estimate the class label y of an unseen sample. These algorithms (as well as many others) are implemented for Matlab in the Statistical Pattern Recognition Toolbox (STPRTool), available from http://cmp.felk.cvut.cz/cmp/software/stprtool/.
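
The toolbox above targets Matlab; as a language-neutral illustration of the same protocol (build a model from n labelled samples x_j, y_j, then estimate the labels of samples not included in the training data), here is a hedged Python sketch using scikit-learn rather than STPRTool, with synthetic two-class data.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(1)
# n samples x_j with class labels y_j in {-1, +1}: two Gaussian clouds in feature space.
X = np.vstack([rng.normal(0.0, 1.0, (100, 4)), rng.normal(1.5, 1.0, (100, 4))])
y = np.r_[-np.ones(100), np.ones(100)]

# Hold out unseen samples so the model is evaluated on data not used for training.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

model = SVC(kernel="linear").fit(X_train, y_train)   # form the training model
y_hat = model.predict(X_test)                        # estimated class labels for unseen samples
accuracy = (y_hat == y_test).mean()
```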

5.1.1 Fisher Linear Discriminant

Linear discrimination is a method that projects high-dimensional feature data onto a lower-dimensional space. The projected data is then easier to separate into two classes. In the Fisher linear discriminant (FLD), the separability of the data is measured by the ratio of the between-class scatter to the within-class scatter of the projected data.
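
As a rough sketch of that idea, assuming the standard two-class Fisher criterion (between-class scatter divided by within-class scatter), the projection direction can be computed in closed form; the code below is illustrative and not taken from the chapter.

```python
import numpy as np

def fisher_direction(X0, X1):
    """Two-class Fisher direction w, maximizing between-class over within-class scatter."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    # Within-class scatter matrix: summed scatter of each class around its own mean.
    S_w = (X0 - m0).T @ (X0 - m0) + (X1 - m1).T @ (X1 - m1)
    # Closed-form optimum: w is proportional to S_w^{-1} (m1 - m0).
    w = np.linalg.solve(S_w, m1 - m0)
    return w / np.linalg.norm(w)

# Projecting the high-dimensional features onto w yields one-dimensional data
# in which the two classes are as separable as this criterion allows.
rng = np.random.default_rng(2)
X0 = rng.normal(0.0, 1.0, (50, 5))    # class 0 feature vectors (synthetic)
X1 = rng.normal(1.5, 1.0, (50, 5))    # class 1 feature vectors (synthetic)
w = fisher_direction(X0, X1)
z0, z1 = X0 @ w, X1 @ w               # projected, one-dimensional features
```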
