
whereby $T$ indicates the transpose operator. The variances are the diagonal elements of the variance-covariance matrix, and the off-diagonal elements indicate the covariance

$$\sigma_{i,j} = \frac{1}{N} \sum_{t=1}^{N} \left( x_i(t) - \mu_i \right) \cdot \left( x_j(t) - \mu_j \right)$$

between the $i$-th and $j$-th element.
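As a minimal numeric illustration of this definition (a sketch, not part of the original text; the data matrix `X` and all names are illustrative), the covariance can be computed with the $1/N$ normalization used above:

```python
import numpy as np

def covariance_matrix(X):
    """Variance-covariance matrix of X (rows = samples t, columns = elements i).

    Uses the 1/N normalization of the definition above: diagonal entries are
    the variances, off-diagonal entries the covariances sigma_{i,j}.
    """
    N = X.shape[0]
    mu = X.mean(axis=0)                 # element-wise means mu_i
    Xc = X - mu                         # centered samples (x_i(t) - mu_i)
    return (Xc.T @ Xc) / N

X = np.random.randn(1000, 3)
print(np.allclose(covariance_matrix(X),
                  np.cov(X, rowvar=False, bias=True)))   # True
```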

We also define the so-called extended covariance matrix (ECM) $E$ as

$$\mathrm{ECM}(x) = E_x = \sum_{t=1}^{N_x} [1, x(t)]^T \cdot [1, x(t)] = \begin{bmatrix} a & b \\ c & D \end{bmatrix} = N_x \cdot \begin{bmatrix} 1 & \mu_x \\ \mu_x^T & \Sigma_x + \mu_x^T \mu_x \end{bmatrix} \qquad (16)$$

One can obtain from the ECM $E$ the number of samples $N = a$, the mean $\mu = b/a$, as well as the variance-covariance matrix $\Sigma = D/a - (c/a) \cdot (b/a)$. This decomposition will be used later.

The adaptive version of the ECM estimator is
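The following sketch (assuming NumPy; the data and variable names are illustrative, not from the chapter) builds the ECM of Eq. (16) and recovers $N$, $\mu$, and $\Sigma$ from its blocks as described above:

```python
import numpy as np

def ecm(X):
    """Extended covariance matrix E_x = sum_t [1, x(t)]^T [1, x(t)]  (Eq. 16)."""
    Z = np.hstack([np.ones((X.shape[0], 1)), X])   # each row is [1, x(t)]
    return Z.T @ Z

X = np.random.randn(500, 3)
E = ecm(X)

# Decompose the ECM into N, mu, and Sigma.
a = E[0, 0]                              # number of samples N
b = E[0, 1:]                             # top row block
c = E[1:, 0]                             # left column block
D = E[1:, 1:]
mu = b / a                               # mean
Sigma = D / a - np.outer(c / a, b / a)   # variance-covariance matrix

print(np.allclose(mu, X.mean(axis=0)))                           # True
print(np.allclose(Sigma, np.cov(X, rowvar=False, bias=True)))    # True
```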

$$E_x(t) = (1 - UC) \cdot E_x(t-1) + UC \cdot [1, x(t)]^T \cdot [1, x(t)] \qquad (17)$$

where $t$ is the sample time and $UC$ is the update coefficient. The decomposition of the ECM $E$ into the mean $\mu$, the variance $\sigma^2$, and the covariance matrix $\Sigma$ is the same as for the stationary case; typically, $N = a = 1$.
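A minimal sketch of this exponentially weighted update (the update coefficient `UC = 0.01` and all names are illustrative assumptions, not values prescribed by the text):

```python
import numpy as np

def update_ecm(E_prev, x, UC=0.01):
    """One step of the adaptive ECM estimator (Eq. 17)."""
    z = np.concatenate(([1.0], x))       # [1, x(t)]
    return (1 - UC) * E_prev + UC * np.outer(z, z)

# With these weights the (0, 0) entry converges to a = N = 1, so mu and Sigma
# can be read off the blocks exactly as in the stationary case.
rng = np.random.default_rng(0)
E = np.eye(4)                            # illustrative initial value
for _ in range(5000):
    E = update_ecm(E, rng.normal(size=3))
mu = E[0, 1:] / E[0, 0]
Sigma = E[1:, 1:] / E[0, 0] - np.outer(E[1:, 0] / E[0, 0], mu)
```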

1.2.4 Adaptive Inverse Covariance Matrix Estimation

Some classifiers like LDA or QDA rely on the inverse $\Sigma^{-1}$ of the covariance matrix $\Sigma$; therefore, adaptive classifiers require an adaptive estimation of the inverse covariance matrix. The inverse covariance matrix $\Sigma^{-1}$ can be obtained from Eq. (16) with

$$\Sigma^{-1} = a \cdot \left( D - c \cdot a^{-1} \cdot b \right)^{-1}. \qquad (18)$$
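As a quick check of Eq. (18) (an illustrative sketch; data and names are assumptions), the inverse covariance can be computed directly from the ECM blocks:

```python
import numpy as np

# Eq. (18): Sigma^{-1} = a * (D - c * a^{-1} * b)^{-1}, applied to the blocks
# of an ECM built from random data.
X = np.random.randn(500, 3)
Z = np.hstack([np.ones((X.shape[0], 1)), X])
E = Z.T @ Z
a, b, c, D = E[0, 0], E[0:1, 1:], E[1:, 0:1], E[1:, 1:]

Sigma_inv = a * np.linalg.inv(D - c @ b / a)          # explicit matrix inversion
Sigma = np.cov(X, rowvar=False, bias=True)
print(np.allclose(Sigma_inv, np.linalg.inv(Sigma)))   # True
```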

This requires an explicit matrix inversion. The following formula shows how the inverse covariance matrix $\Sigma^{-1}$ can be obtained without an explicit matrix inversion. For this purpose, the block matrix decomposition [1] and the matrix inversion lemma (20) are used. Let us also define the inverse extended covariance matrix $\mathrm{iECM} = E^{-1}$. According to the block matrix decomposition [1],

$$E_x^{-1} = \begin{bmatrix} A & B \\ C & D \end{bmatrix}^{-1} = \begin{bmatrix} A^{-1} + A^{-1} B S^{-1} C A^{-1} & -A^{-1} B S^{-1} \\ -S^{-1} C A^{-1} & S^{-1} \end{bmatrix} = \begin{bmatrix} 1 + \mu_x \Sigma_x^{-1} \mu_x^T & -\mu_x \Sigma_x^{-1} \\ -\Sigma_x^{-1} \mu_x^T & \Sigma_x^{-1} \end{bmatrix} \qquad (19)$$

with $S = D - C A^{-1} B$.
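The sketch below (not from the chapter; data and names are illustrative) verifies the rightmost form of Eq. (19) numerically for a normalized ECM, i.e. for the case $N = a = 1$ noted above:

```python
import numpy as np

# Inverse of a normalized ECM (a = N = 1) versus the block form of Eq. (19).
X = np.random.randn(2000, 3)
Z = np.hstack([np.ones((X.shape[0], 1)), X])
E = Z.T @ Z / X.shape[0]                       # normalized ECM, E[0, 0] = 1

mu = X.mean(axis=0)
iSigma = np.linalg.inv(np.cov(X, rowvar=False, bias=True))

rhs = np.empty_like(E)
rhs[0, 0] = 1 + mu @ iSigma @ mu               # 1 + mu_x Sigma_x^{-1} mu_x^T
rhs[0, 1:] = -mu @ iSigma                      # -mu_x Sigma_x^{-1}
rhs[1:, 0] = -iSigma @ mu                      # -Sigma_x^{-1} mu_x^T
rhs[1:, 1:] = iSigma                           # Sigma_x^{-1}
print(np.allclose(np.linalg.inv(E), rhs))      # True
```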

The inverse extended covariance matrix $\mathrm{iECM} = E^{-1}$ can be obtained adaptively by applying the matrix inversion lemma (20) to Eq. (17). The matrix inversion lemma (also known as the Woodbury matrix identity) states that the inverse $A^{-1}$ of a given matrix $A = (B + UDV)$ can be determined by

$$A^{-1} = (B + UDV)^{-1} = B^{-1} - B^{-1} U \left( D^{-1} + V B^{-1} U \right)^{-1} V B^{-1}. \qquad (20)$$
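A minimal sketch of this adaptive inverse update (assuming NumPy; the update coefficient `UC = 0.01` and all names are illustrative): applying Eq. (20) to the rank-one update of Eq. (17), with $B = (1 - UC) \cdot E(t-1)$, $U = [1, x(t)]^T$, $D = UC$, and $V = [1, x(t)]$, yields the inverse ECM without any explicit matrix inversion.

```python
import numpy as np

def update_iecm(iE_prev, x, UC=0.01):
    """Rank-one update of the inverse ECM via the matrix inversion lemma (Eq. 20)."""
    z = np.concatenate(([1.0], x))[:, None]        # column vector [1, x(t)]^T
    iB = iE_prev / (1.0 - UC)                      # inverse of (1 - UC) * E(t-1)
    v = iB @ z
    return iB - (v @ v.T) / (1.0 / UC + (z.T @ iB @ z).item())

# Illustrative check against explicitly inverting the adaptively updated ECM.
rng = np.random.default_rng(1)
E = np.eye(4)
iE = np.eye(4)
for _ in range(1000):
    x = rng.normal(size=3)
    z = np.concatenate(([1.0], x))
    E = 0.99 * E + 0.01 * np.outer(z, z)           # Eq. (17) with UC = 0.01
    iE = update_iecm(iE, x, UC=0.01)               # Eq. (20) applied to Eq. (17)
print(np.allclose(iE, np.linalg.inv(E)))           # True
```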
