

prior over the parameters leads to the following objective,

L = L_r + \sum_i \ln\theta_i + \frac{1}{2}\sum_i \|\mathbf{x}_i\|^2 .   (2.58)

For a covariance function specifying a distribution over linear functions, a closed-form solution to Eq. 2.56 exists [33]. However, for general covariance functions the solution is found through gradient-based optimization.

As previously discussed, infinitely many solutions to the latent variable formulation of dimensionality reduction exist; to proceed, the solution needs to be constrained by prior information. The GP-LVM solution is constrained by the GP marginal likelihood's trade-off between smooth solutions and a good data fit, Eq. 2.50. By fixing the dimensionality of the latent representation a solution can be found.
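As a minimal illustrative sketch (not the thesis implementation), the following Python code minimizes the objective of Eq. 2.58 with respect to the latent locations X and the kernel parameters by gradient-based optimization. The RBF covariance, the function names, the random initialization, and the use of scipy's L-BFGS-B with finite-difference gradients are all assumptions made here for illustration; in practice one would use analytic gradients and a PCA initialization.

```python
# Sketch: gradient-based optimization of the GP-LVM objective (Eq. 2.58),
# assuming an RBF covariance with variance alpha, inverse width gamma and
# noise precision beta. All names below are illustrative.
import numpy as np
from scipy.optimize import minimize

def rbf_kernel(X, alpha, gamma, beta):
    # k(x_n, x_m) = alpha * exp(-gamma/2 * ||x_n - x_m||^2) + delta_nm / beta
    sq = np.sum(X**2, 1)[:, None] + np.sum(X**2, 1)[None, :] - 2.0 * X @ X.T
    return alpha * np.exp(-0.5 * gamma * sq) + np.eye(len(X)) / beta

def gplvm_objective(params, Y, q):
    """Eq. 2.58, treated as a quantity to be minimized (L_r is taken to be the
    GP negative log marginal likelihood, up to an additive constant)."""
    N, D = Y.shape
    X = params[:N * q].reshape(N, q)
    theta = np.exp(params[N * q:])                 # alpha, gamma, beta kept positive
    K = rbf_kernel(X, *theta)
    _, logdet = np.linalg.slogdet(K)
    Lr = 0.5 * D * logdet + 0.5 * np.sum(Y * np.linalg.solve(K, Y))
    # priors: p(theta_i) ∝ 1/theta_i on the parameters, unit Gaussians on the x_i
    return Lr + np.sum(np.log(theta)) + 0.5 * np.sum(X**2)

def fit_gplvm(Y, q=2, seed=0):
    N = Y.shape[0]
    rng = np.random.default_rng(seed)
    X0 = 0.1 * rng.standard_normal((N, q))         # PCA would be the usual choice
    params0 = np.concatenate([X0.ravel(), np.zeros(3)])
    res = minimize(gplvm_objective, params0, args=(Y, q), method="L-BFGS-B")
    return res.x[:N * q].reshape(N, q)             # MAP latent locations
```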

2.8.1 Latent Constraints

The GP-LVM objective seeks the locations of the latent coordinates X that maximize the marginal likelihood of the data. One advantage of directly optimizing the latent locations is that additional constraints on X can easily be incorporated into the GP-LVM framework. In the following section we review some of the extensions, in terms of latent constraints, that have been applied to the GP-LVM.
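As a purely hypothetical illustration of how easily such constraints enter, any differentiable penalty on X can simply be added to the objective before optimization. The pair list, the weight lam, and the quadratic penalty below are assumptions for illustration, not a constraint proposed in the thesis; the sketch reuses gplvm_objective from the previous listing.

```python
# Hypothetical sketch: a latent constraint enters as an extra additive term.
def constrained_objective(params, Y, q, pairs, lam=1.0):
    N = Y.shape[0]
    X = params[:N * q].reshape(N, q)
    # e.g. encourage chosen pairs of latent points to lie close together
    penalty = sum(np.sum((X[i] - X[j]) ** 2) for i, j in pairs)
    return gplvm_objective(params, Y, q) + lam * penalty
```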

The fundamental difference between spectral and generative dimensionality reduction is the assumption made by the spectral algorithms that the latent coordinates can be found as a smooth mapping from the observed data. This means that we are interested in finding latent locations such that the locality in the observed data is preserved. Further, this assumption implies that a smooth inverse to the generative mapping exists.
