2.5. NON-LINEAR 33

forced to be centered, $\sum_i x_i = 0$. The optimal embedding $\hat{X}$ can be found through an eigenvalue problem.
Laplacian Eigenmaps
The proximity graph is also the starting point for Laplacian eigenmaps [5]. Each node in the graph is connected to its neighbors by an edge whose weight represents the locality of the points. Several different measures of locality can be used. In the original paper either a heat kernel, $w_{ij} = e^{-\frac{\|y_i - y_j\|_2^2}{t}}$, or a constant weight $w_{ij} = 1$ was applied. Once the graph has been constructed, the objective is to find an embedding $X$ of the data such that points that are connected in the graph stay as close together as possible.
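The construction above can be sketched in a few lines of NumPy. This is a minimal illustration, not the reference implementation: the function name, the choice of a symmetrized $k$-nearest-neighbour graph, and the parameter values are assumptions, and the final step solves the generalized eigenproblem $Lv = \lambda D v$ (with $L = D - W$ as defined below) that the method reduces to, discarding the trivial constant eigenvector.

```python
import numpy as np

def laplacian_eigenmaps(Y, n_neighbors=5, t=1.0, n_components=2):
    """Minimal Laplacian eigenmaps sketch (names and parameters are
    illustrative): heat-kernel weights on a symmetrized
    k-nearest-neighbour graph, then the generalized eigenproblem
    L v = lambda D v."""
    n = Y.shape[0]
    # Pairwise squared Euclidean distances ||y_i - y_j||_2^2.
    sq = ((Y[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    # k-nearest-neighbour adjacency (column 0 of argsort is the
    # point itself), symmetrized so W is symmetric.
    idx = np.argsort(sq, axis=1)[:, 1:n_neighbors + 1]
    A = np.zeros((n, n), dtype=bool)
    A[np.repeat(np.arange(n), n_neighbors), idx.ravel()] = True
    A |= A.T
    # Heat-kernel edge weights w_ij = exp(-||y_i - y_j||^2 / t).
    W = np.where(A, np.exp(-sq / t), 0.0)
    # D_ii = sum_j W_ji and the graph Laplacian L = D - W.
    d = W.sum(axis=0)
    L = np.diag(d) - W
    # Solve L v = lambda D v via the symmetric matrix
    # D^{-1/2} L D^{-1/2}; the smallest eigenvalue is 0 with a
    # constant eigenvector (the trivial solution), which is skipped.
    d_isqrt = 1.0 / np.sqrt(d)
    M = d_isqrt[:, None] * L * d_isqrt[None, :]
    vals, U = np.linalg.eigh(M)
    V = d_isqrt[:, None] * U          # back to generalized eigenvectors
    return V[:, 1:n_components + 1]   # one column per embedding dimension
```

Each returned column is one dimension of the embedding; connected points in the graph map to nearby coordinates.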
For the first dimension,
$$\hat{x} = \operatorname*{argmin}_{x} \frac{1}{2}\sum_{i,j} (x_i - x_j)^2 W_{ij} = \operatorname*{argmin}_{x}\; x^T L x, \qquad (2.37)$$
where $L$ is referred to as the Laplacian, defined as $L = D - W$, and $D$ is a diagonal matrix such that $D_{ii} = \sum_j W_{ji}$. The objective Eq. 2.37 has a trivial zero-dimensional solution that represents the embedding using a single point. To remove this solution, the embedding is forced to be orthogonal to the constant vector $\mathbf{1}$, $x^T D \mathbf{1} = 0$. Further, to prevent the embedding from shrinking, a constraint on the scale, $x^T D x = 1$, is appended to the objective. The diagonal matrix $D$ provides a scaling of each point with respect to its locality to the other points in the data. For a multi-dimensional embedding of the data this leads