
Learning manifolds of dynamical models for activity recognition


[16] F. Cuzzolin, Manifold learning for multi-dimensional autoregressive dynamical models, Machine Learning for Vision-based Motion Analysis (L. Wang, G. Zhao, L. Cheng, and M. Pietikäinen, eds.), Springer, 2010.

[17] F. Cuzzolin, D. Mateus, D. Knossow, E. Boyer, and R. Horaud, Coherent Laplacian protrusion segmentation, CVPR'08.

[18] F. Cuzzolin, A. Sarti, and S. Tubaro, Action modeling with volumetric data, ICIP'04, vol. 2, pp. 881–884.

[19] N. Dalal and B. Triggs, Histograms of oriented gradients for human detection, CVPR'05, pp. 886–893.

[20] M. N. Do, Fast approximation of Kullback-Leibler distance for dependence trees and hidden Markov models, IEEE Signal Processing Letters 10 (2003), no. 4, 115–118.

[21] O. Duchenne, I. Laptev, J. Sivic, F. Bach, and J. Ponce, Automatic annotation of human actions in video, ICCV'09.

[22] C. F. Eick, A. Rouhana, A. Bagherjeiran, and R. Vilalta, Using clustering to learn distance functions for supervised similarity assessment, Machine Learning and Data Mining in Pattern Recognition (MLDM), 2005.

[23] R. Elliott, L. Aggoun, and J. Moore, Hidden Markov models: estimation and control, Springer-Verlag, 1995.

[24] J. M. Wang et al., Gaussian process dynamical models, NIPS'05.

[25] L. Ralaivola et al., Dynamical modeling with kernels for nonlinear time series prediction, NIPS'04.

[26] A. Galata, N. Johnson, and D. Hogg, Learning variable-length Markov models of behavior, CVIU 81 (2001), no. 3, 398–413.

[27] A. Gilbert, J. Illingworth, and R. Bowden, Scale invariant action recognition using compound features mined from dense spatio-temporal corners, 2008, pp. I: 222–233.

[28] A. Gupta, P. Srinivasan, J. Shi, and L. S. Davis, Understanding videos, constructing plots: learning a visually grounded storyline model from annotated videos, CVPR'09, pp. 2012–2019.

[29] D. Han, L. Bo, and C. Sminchisescu, Selection and context for action recognition, ICCV'09.

[30] N. Ikizler-Cinbis, R. G. Cinbis, and S. Sclaroff, Learning actions from the web, ICCV'09.

[31] M. Itoh and Y. Shishido, Fisher information metric and Poisson kernels, Differential Geometry and its Applications 26 (2008), no. 4, 347–356.

[32] T. K. Kim and R. Cipolla, Canonical correlation analysis of video volume tensors for action categorization and detection, IEEE Tr. PAMI 31 (2009), no. 8, 1415–1428.

[33] S. Kullback and R. A. Leibler, On information and sufficiency, Annals of Math. Stat. 22 (1951), 79–86.

[34] G. Lebanon, Metric learning for text documents, IEEE Tr. PAMI 28 (2006), no. 4, 497–508.

[35] R. N. Li, R. Chellappa, and S. H. K. Zhou, Learning multi-modal densities on discriminative temporal interaction manifold for group activity recognition.

[36] Z. Lin, Z. Jiang, and L. S. Davis, Recognizing actions by shape-motion prototype trees, ICCV'09, pp. 444–451.

[37] J. G. Liu, J. B. Luo, and M. Shah, Recognizing realistic actions from videos 'in the wild', CVPR'09, pp. 1996–2003.

[38] C. C. Loy, T. Xiang, and S. Gong, Modelling activity global temporal dependencies using time delayed probabilistic graphical model, ICCV'09.

[39] M. Marszalek, I. Laptev, and C. Schmid, Actions in context, CVPR'09.

[40] D. Mateus, R. Horaud, D. Knossow, F. Cuzzolin, and E. Boyer, Articulated shape matching using Laplacian eigenfunctions and unsupervised point registration, CVPR'08.

[41] C. Nandini and C. N. Ravi Kumar, Comprehensive framework to gait recognition, Int. J. Biometrics 1 (2008), no. 1, 129–137.

[42] B. North, A. Blake, M. Isard, and J. Rittscher, Learning and classification of complex dynamics, IEEE Tr. PAMI 22 (2000), no. 9.

[43] M. Piccardi and O. Perez, Hidden Markov models with kernel density estimation of emission probabilities and their use in activity recognition, VS'07, pp. 1–8.

[44] K. Rapantzikos, Y. Avrithis, and S. Kollias, Dense saliency-based spatiotemporal feature points for action recognition, CVPR'09, pp. 1454–1461.

[45] K. K. Reddy, J. Liu, and M. Shah, Incremental action recognition using feature-tree, ICCV'09.

[46] G. Rogez, J. Rihan, S. Ramalingam, C. Orrite, and P. H. S. Torr, Randomized trees for human pose detection, CVPR'08.

[47] K. Schindler and L. van Gool, Action snippets: How many frames does human action recognition require?, CVPR'08.

[48] M. Schultz and T. Joachims, Learning a distance metric from relative comparisons, NIPS'04.

[49] H. J. Seo and P. Milanfar, Detection of human actions from a single example, ICCV'09.

[50] N. Shental, T. Hertz, D. Weinshall, and M. Pavel, Adjustment learning and relevant component analysis, ECCV'02.

[51] Q. F. Shi, L. Wang, L. Cheng, and A. Smola, Discriminative human action segmentation and recognition using semi-Markov model, CVPR'08.

[52] A. J. Smola and S. V. N. Vishwanathan, Hilbert space embeddings in dynamical systems, IFAC'03, pp. 760–767.

[53] J. Sun, X. Wu, S. C. Yan, L. F. Cheong, T. S. Chua, and J. T. Li, Hierarchical spatio-temporal context modeling for action recognition, CVPR'09, pp. 2004–2011.

[54] A. Sundaresan, A. K. Roy Chowdhury, and R. Chellappa, A hidden Markov model based framework for recognition of humans from gait sequences, ICIP'03, pp. II: 93–96.

[55] I. W. Tsang and J. T. Kwok, Distance metric learning with kernels, ICAI'03.

[56] Y. Wang and G. Mori, Max-margin hidden conditional random fields for human action recognition, CVPR'09, pp. 872–879.

[57] E. P. Xing, A. Y. Ng, M. I. Jordan, and S. Russell, Distance metric learning with applications to clustering with side information, NIPS'03.

[58] B. Yao and S. C. Zhu, Learning deformable action templates from cluttered videos, ICCV'09.

[59] Y. Hu, L. Cao, F. Lv, S. Yan, Y. Gong, and T. S. Huang, Action detection in complex scenes with spatial and temporal ambiguities, ICCV'09.

[60] J. S. Yuan, Z. C. Liu, and Y. Wu, Discriminative subvolume search for efficient action detection, CVPR'09, pp. 2442–2449.

[61] Z. Zhang, Learning metrics via discriminant kernels and multidimensional scaling: Toward expected Euclidean representation, ICML'03.
