
Advances in E-Learning: Experiences and Methodologies


Knowledge Discovery from E-Learning Activities



Annex 1

This annex contains the formulation of the nonparametric ICA algorithm applied in the chapter.

The probability density function of the data x can be expressed as:

p(\mathbf{x}) = |\det \mathbf{W}| \, p(\mathbf{y}) \qquad (3)

where p(y) can be expressed as the product of the marginal distributions, since it is the estimate of the independent components:

p(\mathbf{y}) = \prod_{i=1}^{M} p_i(y_i) \qquad (4)

If we assume a nonparametric model for p(y), we can estimate the source pdfs from a set of training samples obtained from the original dataset using (2). We propose a kernel density estimation technique (Scott & Sain, 2004; Duda, Hart, & Stork, 2000), where the marginal distribution of a reconstructed component is approximated as:

p(y_m) = a \sum_{n'} e^{-\frac{1}{2} \left( \frac{y_m - y_m^{(n')}}{h} \right)^2}, \qquad m = 1, \ldots, M \qquad (5)
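Equation (5) is a standard Gaussian kernel density estimate. A minimal sketch, assuming the usual normalization a = 1 / (n' h sqrt(2*pi)) — the text only describes a as a normalization constant, so this choice is ours:

```python
import math

def kde_pdf(y, samples, h):
    """Gaussian kernel density estimate of a marginal pdf, as in Eq. (5).

    Assumption: a = 1 / (n' * h * sqrt(2*pi)), which makes the
    estimate integrate to one; the chapter leaves a unspecified."""
    a = 1.0 / (len(samples) * h * math.sqrt(2.0 * math.pi))
    return a * sum(math.exp(-0.5 * ((y - s) / h) ** 2) for s in samples)
```

Here h plays the usual bandwidth role: a small h follows the training samples closely, a large h smooths the estimated pdf.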

where a is a normalization constant and h is a constant that adjusts the degree of smoothing of the estimated pdf. The learning algorithm can be derived using maximum-likelihood estimation. In a probabilistic context it is usual to maximize the log-likelihood of (3) with respect to the unknown matrix W:

\frac{\partial L(\mathbf{W})}{\partial \mathbf{W}} = \frac{\partial \log |\det \mathbf{W}|}{\partial \mathbf{W}} + \frac{\partial \log p(\mathbf{y})}{\partial \mathbf{W}} \qquad (6)

where

\frac{\partial \log |\det \mathbf{W}|}{\partial \mathbf{W}} = \left( \mathbf{W}^{T} \right)^{-1} \qquad (7)
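The identity in (7) is easy to check numerically. A small sketch for the 2x2 case, with (W^T)^{-1} written out explicitly (function names are our own):

```python
import math

def logabsdet2(W):
    """log |det W| for a 2x2 matrix W = [[a, b], [c, d]]."""
    return math.log(abs(W[0][0] * W[1][1] - W[0][1] * W[1][0]))

def grad_logabsdet2(W):
    """Analytic gradient from Eq. (7): (W^T)^{-1}, written out for 2x2."""
    det = W[0][0] * W[1][1] - W[0][1] * W[1][0]
    return [[ W[1][1] / det, -W[1][0] / det],
            [-W[0][1] / det,  W[0][0] / det]]
```

A central finite difference on any single entry of W should agree with the corresponding entry of `grad_logabsdet2(W)`.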

Imposing independence on y and using (4):

\frac{\partial \log p(\mathbf{y})}{\partial \mathbf{W}} = \sum_{m=1}^{M} \frac{\partial \log p(y_m)}{\partial \mathbf{W}} = \sum_{m=1}^{M} \frac{1}{p(y_m)} \frac{\partial p(y_m)}{\partial \mathbf{W}} \qquad (8)

where, using (5):

\frac{\partial p(y_m)}{\partial \mathbf{W}} = \frac{\partial p(y_m)}{\partial y_m} \cdot \frac{\partial y_m}{\partial \mathbf{W}}

\frac{\partial p(y_m)}{\partial y_m} = -a \sum_{n'} e^{-\frac{1}{2} \left( \frac{y_m - y_m^{(n')}}{h} \right)^2} \left( \frac{y_m - y_m^{(n')}}{h} \right) \frac{1}{h}, \qquad m = 1, \ldots, M \qquad (9)
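Combining (6)-(9): since y_m = sum_j W_mj x_j, the data term of the gradient for entry W_mj is phi(y_m) x_j, where phi(y_m) = p'(y_m)/p(y_m), and the normalization constant a cancels in that ratio. A minimal sketch of one ascent step for a 2x2 W, assuming previously reconstructed values are kept as the kernel centres y_m^{(n')} (all names here are our own, not the chapter's):

```python
import math

def score(y_m, centres, h):
    """phi(y) = p'(y)/p(y) for the kernel estimate of Eqs. (5) and (9);
    the normalization constant a cancels in the ratio."""
    w = [math.exp(-0.5 * ((y_m - c) / h) ** 2) for c in centres]
    return -sum(wi * (y_m - c) / h ** 2 for wi, c in zip(w, centres)) / sum(w)

def ica_gradient_step(W, x, centres, h, lr=0.01):
    """One gradient-ascent step on the log-likelihood (6), for a 2x2 W.

    `centres` holds, per component, the training values y_m^{(n')}
    used as kernel centres in Eq. (5)."""
    y = [W[0][0] * x[0] + W[0][1] * x[1],
         W[1][0] * x[0] + W[1][1] * x[1]]
    det = W[0][0] * W[1][1] - W[0][1] * W[1][0]
    # (W^T)^{-1} from Eq. (7)
    winv_t = [[ W[1][1] / det, -W[1][0] / det],
              [-W[0][1] / det,  W[0][0] / det]]
    # Eqs. (8)-(9): since y_m = sum_j W_mj x_j, d log p(y_m)/dW_mj = phi(y_m) x_j
    return [[W[m][j] + lr * (winv_t[m][j] + score(y[m], centres[m], h) * x[j])
             for j in range(2)]
            for m in range(2)]
```

Iterating this step over the training samples drives W toward a maximum of the log-likelihood, i.e. toward reconstructed components whose kernel-estimated marginals are as independent as the model allows.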
