
320 Y. Li et al.

5.1.2 Support Vector Machine

The support vector machine (SVM) [45] is a linear discriminant tool that maximizes the margin of separation between two classes, based on the assumption that this improves the classifier's generalization capability. In contrast, the Fisher linear discriminant maximizes the average margin, i.e., the margin between the class means [34]. Figure 6 illustrates a typical linear hyperplane learned by an SVM.
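To make the contrast concrete, the Fisher linear discriminant direction can be computed in closed form as $\mathbf{w}_{\text{FLD}} = S_W^{-1}(\mathbf{m}_1 - \mathbf{m}_0)$, where $S_W$ is the within-class scatter matrix and $\mathbf{m}_0, \mathbf{m}_1$ are the class means. A minimal sketch on hypothetical toy data (not from the chapter):

```python
import numpy as np

rng = np.random.default_rng(1)
X0 = rng.normal(-1.0, 0.5, size=(30, 2))   # class 0 samples (toy data)
X1 = rng.normal(+1.0, 0.5, size=(30, 2))   # class 1 samples (toy data)

m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
# Within-class scatter: sum of both classes' scatter about their own means.
S_W = (X0 - m0).T @ (X0 - m0) + (X1 - m1).T @ (X1 - m1)
# Fisher direction: separates the class means relative to within-class spread.
w_fld = np.linalg.solve(S_W, m1 - m0)

# Projections of each class onto the Fisher direction.
proj0, proj1 = X0 @ w_fld, X1 @ w_fld
print("class means after projection:", proj0.mean(), proj1.mean())
```

Unlike the SVM below, this criterion depends on all samples through the means and scatter, not only on the patterns nearest the decision boundary.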

From articles such as [34, 45], an optimal classifier for unseen data is the one with the largest margin $\frac{1}{\|\mathbf{w}\|_2}$, i.e., of minimal Euclidean norm $\|\mathbf{w}\|_2$. For a linear SVM, the large margin (i.e., the optimal hyperplane $\mathbf{w}$) is realized by minimizing the cost function on the training data

$$J(\mathbf{w}, \boldsymbol{\xi}) = \frac{1}{2}\|\mathbf{w}\|^2 + C \sum_{i=1}^{n} \xi_i, \qquad (20)$$

under the constraints

$$y_i\left(\mathbf{w}^T \mathbf{x}_i + b\right) \geq 1 - \xi_i, \quad \xi_i \geq 0, \quad \forall i = 1, \cdots, n. \qquad (21)$$

Fig. 6 An example that illustrates the training of a support vector machine that finds the optimal hyperplane with maximum margin of separation from the nearest training patterns. The nearest training patterns are called the support vectors
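The soft-margin problem of Eqs. (20)–(21) is what standard SVM libraries solve. A minimal sketch using scikit-learn's linear-kernel `SVC` on hypothetical toy data (the clusters, seed, and `C` value are illustrative assumptions, not from the chapter); `C` is the same regularization constant as in Eq. (20), trading margin width against the slack penalties $\xi_i$:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Two well-separated Gaussian clouds (hypothetical data).
X = np.vstack([rng.normal(-2.0, 0.5, size=(20, 2)),
               rng.normal(+2.0, 0.5, size=(20, 2))])
y = np.array([-1] * 20 + [+1] * 20)

# Linear SVM; C is the slack penalty of Eq. (20).
clf = SVC(kernel="linear", C=1.0)
clf.fit(X, y)

w = clf.coef_[0]          # hyperplane normal w from Eq. (20)
b = clf.intercept_[0]     # bias b from the constraint in Eq. (21)
margin = 2.0 / np.linalg.norm(w)   # width of the separating band

print("number of support vectors:", len(clf.support_vectors_))
print("margin:", margin)
```

Only the patterns nearest the hyperplane (the support vectors stored in `clf.support_vectors_`) determine $\mathbf{w}$ and $b$; the remaining training points could be removed without changing the solution.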
