
components is larger than the number of observed mixtures and the components can be mutually dependent. Interested readers can refer to [17, 18] for more details on SCA.

2.1.6 Common Spatial Patterns (CSP)

In contrast to PCA, which maximizes the variance of the first component in the transformed space, common spatial patterns (CSP) maximizes the variance ratio of two conditions or classes. In other words, CSP finds a transformation that maximizes the variance of the samples of one condition and simultaneously minimizes the variance of the samples of the other condition. This property makes CSP one of the most effective spatial filters for BCI signal processing, provided that the user's intent is encoded in the variance or power of the associated brain signal. BCIs based on motor imagery are typical examples of such systems [19–21]. The terms conditions and classes refer to different mental tasks, for instance, left hand and right hand motor imagery. To compute the linear transformation matrix, the CSP algorithm requires not only the training samples but also the information about which condition each sample belongs to. In contrast, PCA and ICA do not require this additional information. Therefore, PCA and ICA are unsupervised methods, whereas CSP is a supervised method that requires the condition or class label of each individual training sample.

For a more detailed explanation of the CSP algorithm, we assume that $W \in \mathbb{R}^{n \times n}$ is a CSP transformation matrix. Then the transformed signals are $WX$, where $X$ is the data matrix of which each row represents an electrode channel. The first CSP component, i.e., the first row of $WX$, contains most of the variance of class 1 (and least of class 2), while the last component, i.e., the last row of $WX$, contains most of the variance of class 2 (and least of class 1), where classes 1 and 2 represent two different mental tasks.
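As a minimal sketch of how such a transformation is applied, assuming NumPy; the array shapes and the random stand-in for a trained $W$ below are illustrative, not taken from the chapter:

```python
import numpy as np

# Hypothetical setup: X is one trial (n_channels x n_samples),
# and the random W stands in for a trained CSP transformation matrix.
rng = np.random.default_rng(0)
n_channels, n_samples = 4, 1000
X = rng.standard_normal((n_channels, n_samples))   # surrogate EEG segment
W = rng.standard_normal((n_channels, n_channels))  # stand-in for a trained CSP matrix

Z = W @ X               # transformed signals: one CSP component per row
var_first = Z[0].var()  # large for class-1 trials, small for class-2 trials
var_last = Z[-1].var()  # large for class-2 trials, small for class-1 trials
```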

The columns of $W^{-1}$ are the common spatial patterns [22]. As explained earlier, the values of the columns represent the contribution of the CSP components to the channels, and thus can be used to visualize the topographic distribution of the CSP components. Figure 4 shows two common spatial patterns of an EEG data analysis example on left and right motor-imagery tasks, which correspond to the first and the last columns of $W^{-1}$, respectively. The topographic distributions of these components correspond to the expected contralateral activity of the sensorimotor rhythms induced by the motor imagery tasks. That is, left hand motor imagery induces sensorimotor activity patterns (ERD/ERS) over the right sensorimotor areas, while right hand motor imagery results in activity patterns over the left sensorimotor areas.
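A short sketch of how the patterns could be read off, again with a random stand-in for $W$ (the variable names are illustrative):

```python
import numpy as np

# Continuing the sketch above: W is an n x n CSP matrix (random stand-in here).
W = np.random.default_rng(1).standard_normal((4, 4))

A = np.linalg.inv(W)     # columns of W^{-1} are the common spatial patterns
first_pattern = A[:, 0]  # topography of the component with maximal class-1 variance
last_pattern = A[:, -1]  # topography of the component with maximal class-2 variance
# Each entry gives the contribution of that CSP component to one electrode
# channel, so these vectors can be drawn as scalp maps (e.g. with
# mne.viz.plot_topomap, given electrode positions).
```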

CSP and PCA are both based on the diagonalization of covariance matrices. However, PCA diagonalizes one covariance matrix, whereas CSP simultaneously diagonalizes two covariance matrices $R_1$ and $R_2$, which correspond to the two different classes. Solving the standard eigenvalue problem is sufficient for PCA. For CSP, the generalized eigenvalue problem with $R_1^{-1} R_2$ has to be solved [12] to give the transformation matrix $W$ that simultaneously diagonalizes the two covariance matrices:

$$W R_1 W^{\top} = D_1, \qquad W R_2 W^{\top} = D_2, \qquad D_1 + D_2 = I,$$

where $D_1$ and $D_2$ are diagonal matrices and $I$ is the identity matrix.
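A minimal sketch of this computation, assuming NumPy/SciPy; the function name, trial format, and trace normalization below are illustrative choices, not prescribed by the chapter. One common route is to solve the generalized eigenvalue problem against the composite covariance $R_1 + R_2$:

```python
import numpy as np
from scipy.linalg import eigh

def csp(trials_1, trials_2):
    """Compute a CSP matrix W from labeled trials of two classes.

    trials_1, trials_2: lists of (n_channels x n_samples) arrays, one list
    per class. Name and normalization are illustrative, not from the chapter.
    """
    def mean_cov(trials):
        # Average the trace-normalized spatial covariance over trials
        covs = [(x @ x.T) / np.trace(x @ x.T) for x in trials]
        return np.mean(covs, axis=0)

    R1, R2 = mean_cov(trials_1), mean_cov(trials_2)
    # Generalized eigenvalue problem R1 w = lambda (R1 + R2) w. eigh returns
    # ascending eigenvalues with eigenvectors V satisfying
    # V.T (R1 + R2) V = I, so W = V.T diagonalizes R1 and R2 simultaneously.
    _, V = eigh(R1, R1 + R2)
    W = V.T[::-1]  # reorder so the first row captures most class-1 variance
    return W

# Usage sketch with random surrogate data:
rng = np.random.default_rng(0)
class1 = [rng.standard_normal((4, 500)) for _ in range(20)]
class2 = [rng.standard_normal((4, 500)) for _ in range(20)]
W = csp(class1, class2)
```

Because $W (R_1 + R_2) W^{\top} = I$ under this normalization, the diagonal entries of $W R_1 W^{\top}$ and $W R_2 W^{\top}$ sum to one, so a component that captures much of the class-1 variance necessarily captures little of the class-2 variance, which is exactly the variance-ratio property described above.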
