

Based on the general ideas and methods developed by the SYSTeMS-IDSIA teams and described recently in [62,63], we will derive formulae for the probabilities of such sequences in imprecise HMMs that lend themselves to backwards recursion, and therefore to generalisations of the backward algorithms used for precise HMMs, with complexity essentially linear in the length of the Markov chain.

SubTask 1.2.2 - Computation of lower and upper likelihoods

These are the lower and upper probabilities of the observation sequences. Backwards recursion calculations for these quantities should also be possible, based on the seminal work in [62,63].
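To fix ideas, the following minimal sketch (our own illustration, not the algorithm of [62,63]) shows how such a backward recursion could compute a lower likelihood under simplifying assumptions: finite state and observation spaces, a precise emission model, and each transition credal set given by a finite list of candidate rows (its extreme points), with local lower expectations obtained by minimising over those rows at every step.

```python
# A minimal sketch of a backward recursion for the lower likelihood of an
# observation sequence in an imprecise HMM.  Assumptions (ours, for
# illustration): precise emissions, transition credal sets represented by
# their extreme points, and local minimisation at every step of the chain.

def lower_likelihood(obs, init_vertices, trans_vertices, emission):
    """Lower probability of the observation sequence `obs`.

    init_vertices  : list of candidate initial distributions (length-n lists)
    trans_vertices : trans_vertices[x] = list of candidate rows P(. | X=x)
    emission       : emission[x][o] = precise P(o | X=x)
    """
    n = len(emission)                      # number of hidden states
    T = len(obs)

    # beta[x] = lower expected "value of the future" given X_t = x
    beta = [emission[x][obs[-1]] for x in range(n)]

    # Backward recursion over t = T-2, ..., 0 (linear in T)
    for t in range(T - 2, -1, -1):
        beta = [
            emission[x][obs[t]]
            * min(sum(row[y] * beta[y] for y in range(n))
                  for row in trans_vertices[x])
            for x in range(n)
        ]

    # Close the recursion with the (imprecise) initial model
    return min(sum(p0[x] * beta[x] for x in range(n))
               for p0 in init_vertices)
```

In this simplified setting, replacing each `min` by `max` in the same recursion yields the corresponding upper likelihood.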

SubTask 1.2.3 - Most probable explanation

The "most probable explanation" for precise HMMs consists on finding the state sequence (explanation)<br />

with the highest probability, conditional on the observation sequence. In an imprecise probabilities context,<br />

there are basically two ways of formulating this problem [94]. The "maximin" method is a pessimistic<br />

approach which seeks the state sequence with the highest lower probability, conditional on the observations.<br />

In the "maximality" method, one state sequence is considered to be a better explanation than another if it has<br />

a higher posterior probability in all the precise models that are consistent with a given imprecise model.<br />

Preliminary analyses [64] hint that the latter method is most likely to be able to be implemented efficiently.<br />
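The contrast between the two criteria can be illustrated with a toy sketch (hypothetical code, with the credal set crudely represented by a finite list of compatible precise models; all names are ours, not the proposal's):

```python
# posteriors[s] lists the posterior probability of state sequence s under
# each compatible precise model (one entry per model in the credal set).

def maximin_explanation(posteriors):
    """Sequence whose worst-case (lower) posterior is highest."""
    return max(posteriors, key=lambda s: min(posteriors[s]))

def maximal_explanations(posteriors):
    """Sequences not dominated by another sequence in *every* compatible model."""
    def dominates(a, b):
        return all(pa > pb for pa, pb in zip(posteriors[a], posteriors[b]))
    return [s for s in posteriors
            if not any(dominates(t, s) for t in posteriors if t != s)]

# Example: three candidate explanations, credal set with two extreme models
posteriors = {
    "s1": [0.50, 0.20],
    "s2": [0.30, 0.45],
    "s3": [0.10, 0.05],   # dominated by s1 in both models
}
print(maximin_explanation(posteriors))    # 's2' (best lower posterior, 0.30)
print(maximal_explanations(posteriors))   # ['s1', 's2']
```

As the example shows, maximality generally returns a set of undominated explanations rather than a single one.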

Task 1.3 - Imprecise EM for hidden Markov models

Task 1.2 is about making inferences on imprecise HMMs, however learnt. Yet learning imprecise HMMs from video sequences is not trivial, as the number of operations to be performed at each iteration of the algorithm can grow exponentially with the input size. This task involves:

SubTask 1.3.1 - Specialisation of the EM algorithms to the HMM topology

The imprecise EM developed in Task 1.1 first has to be specialised to the case of HMMs, based on the inference algorithms developed in Task 1.2.

SubTask 1.3.2 - Implementation of recursive formulae for parameter updating

Recursive formulae describing the revision of model parameters have to be determined.
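For orientation, the sketch below recalls the re-estimation step of the precise EM (Baum-Welch) that these recursive formulae would have to generalise to interval-valued parameters; variable names and the use of NumPy are illustrative only, not a design choice of the proposal.

```python
import numpy as np

def baum_welch_step(obs, pi, A, B):
    """One precise Baum-Welch re-estimation step for a single sequence.

    obs : list of observation indices
    pi  : (n,) initial distribution;  A : (n, n) transition matrix
    B   : (n, m) emission matrix, B[x, o] = P(o | X = x)
    """
    T, n = len(obs), len(pi)

    # Forward / backward passes (linear in T)
    alpha = np.zeros((T, n)); beta = np.zeros((T, n))
    alpha[0] = pi * B[:, obs[0]]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])

    # Posterior state and transition responsibilities
    gamma = alpha * beta
    gamma /= gamma.sum(axis=1, keepdims=True)
    xi = alpha[:-1, :, None] * A[None] * (B[:, obs[1:]].T * beta[1:])[:, None, :]
    xi /= xi.sum(axis=(1, 2), keepdims=True)

    # Re-estimated parameters (the precise counterparts of the interval updates)
    new_pi = gamma[0]
    new_A = xi.sum(axis=0) / gamma[:-1].sum(axis=0)[:, None]
    new_B = np.zeros_like(B)
    for t, o in enumerate(obs):
        new_B[:, o] += gamma[t]
    new_B /= gamma.sum(axis=0)[:, None]
    return new_pi, new_A, new_B
```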

SubTask 1.3.3 - Convergence and initialisation: empirical study

The convergence properties of the EM algorithm, specialised to the HMM topology, have to be assessed. Its robustness and the quality of the estimates with respect to parameter initialisation (a crucial issue even in the precise case) need to be considered as well.

Task 1.4 - Imprecise Dynamical Graphical Models

HMMs are an example of probabilistic graphical models with a dynamic component, in a sense the simplest example of dynamic Bayesian networks. A general theory for dynamic BNs is well established, but a similarly comprehensive framework is still missing for credal networks, the imprecise-probabilistic generalisation of Bayesian networks. Contributing to the development of such a general theory is the final aim of this work package, paying special attention to higher-order HMM topologies able to provide a better model of the correlation between the frames of a video sequence depicting an action. We also intend to explore the possibility of modelling these relations by means of undirected graphs, providing a generalisation of Markov random fields (which are already widely used in computer vision) to the imprecise-probabilistic framework.

SubTask 1.4.1 - Dynamical (and object-oriented) credal networks

A crucial step consists of investigating whether the ideas pioneered in [62,63] for inference in credal trees can be extended to more general types of probabilistic graphical models. Methods will then have to be devised for (recursively) constructing joint models from the local imprecise-probabilistic models attached to the nodes of such credal networks.

SubTask 1.4.2 - Inference algorithms and complexity results for dynamic credal networks

Based on those (recursive) methods for constructing joints, recursive or message-passing algorithms need to be developed for updating these networks in the light of new observations, and for computing lower and upper probabilities, likelihoods, and most probable explanations.

SubTask 1.4.3 - Imprecise Markov random fields

Markov random fields are graphical models based on undirected graphs, particularly suited for modelling pixel-to-pixel correlation in image analysis [85]. The extension of our inference techniques to imprecise Markov random fields is potentially of enormous impact and will be pursued towards the end of WP1.

WP2 – Classification of generative dynamical models

Once video sequences are represented as either precise or imprecise-probabilistic graphical models (for instance of the class of imprecise hidden Markov models), gesture and action recognition reduce to classifying such generative models. Different competing approaches to this issue can be foreseen.
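One conceivable classification rule under imprecision (a hypothetical sketch, not a commitment of the proposal) compares the likelihood intervals that the inference routines of Task 1.2 would provide for a new sequence under each class's imprecise model, retaining every class that is not interval-dominated:

```python
# A hypothetical sketch of interval-dominance classification: each action
# class has been summarised by an imprecise HMM, and the new sequence's
# lower/upper likelihoods under each class model have already been computed
# (e.g. with the backward recursions of Task 1.2).

def interval_dominance(intervals):
    """intervals: {class_label: (lower_likelihood, upper_likelihood)}.
    Returns the labels that are not dominated, i.e. the candidate actions."""
    return [c for c, (lo_c, hi_c) in intervals.items()
            if not any(lo_o > hi_c
                       for o, (lo_o, _) in intervals.items() if o != c)]

# Example: 'wave' dominates 'point' (0.30 > 0.20) but not 'clap'
print(interval_dominance({
    "wave":  (0.30, 0.60),
    "clap":  (0.25, 0.55),
    "point": (0.05, 0.20),
}))   # ['wave', 'clap']
```

Note that, as with maximality in Task 1.2, such a rule may return a set of plausible action labels rather than a single one.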

Task 2.1 – Dissimilarity measures for imprecise graphical models

SubTask 2.1.1 - Modelling similarity between sets of probability distributions
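As one candidate among many (a hedged sketch, not a choice made by the proposal), the Hausdorff distance between two credal sets represented by their extreme points, with total variation as the underlying metric between single distributions, gives a concrete dissimilarity of this kind:

```python
# A hypothetical dissimilarity between two credal sets K1, K2, each given by
# a finite list of extreme points (distributions over the same finite space).

def total_variation(p, q):
    return 0.5 * sum(abs(pi - qi) for pi, qi in zip(p, q))

def hausdorff_dissimilarity(K1, K2, d=total_variation):
    def directed(A, B):
        return max(min(d(p, q) for q in B) for p in A)
    return max(directed(K1, K2), directed(K2, K1))

# Example: two small credal sets on a binary space
K1 = [[0.2, 0.8], [0.4, 0.6]]
K2 = [[0.5, 0.5], [0.7, 0.3]]
print(hausdorff_dissimilarity(K1, K2))   # 0.3
```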

