Abstract book (pdf) - ICPR 2010

13:30-16:30, Paper ThBCT8.59

Discriminating Intended Human Objects in Consumer Videos

Uegaki, Hiroshi, Osaka Univ.
Nakashima, Yuta, Osaka Univ.
Babaguchi, Noboru, Osaka Univ.

In a consumer video, there are not only intended objects, which are intentionally captured by the camcorder user, but also unintended objects, which are accidentally framed in. Since the intended objects are essential to presenting what the camcorder user wants to express in the video, discriminating the intended objects from the unintended ones is beneficial for many applications, e.g., video summarization and privacy protection. In this paper, focusing on human objects, we propose a method for discriminating intended human objects from unintended ones. We evaluated the proposed method on 10 videos captured by 3 camcorder users. The results demonstrate that the proposed method successfully discriminates the intended human objects with a recall of 0.45 and a precision of 0.80.
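The reported figures follow directly from raw detection counts. The sketch below is illustrative only; the counts are hypothetical, chosen merely to reproduce the reported precision of 0.80 and recall of 0.45:

```python
def precision_recall(tp, fp, fn):
    """Compute (precision, recall) from true positives, false positives,
    and false negatives."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return precision, recall

# Hypothetical counts: 36 intended humans correctly discriminated,
# 9 unintended humans wrongly accepted, 44 intended humans missed.
p, r = precision_recall(tp=36, fp=9, fn=44)
print(round(p, 2), round(r, 2))  # 0.8 0.45
```

A high precision at moderate recall, as reported here, means most humans flagged as intended really were intended, at the cost of missing some of them.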

13:30-16:30, Paper ThBCT8.60

Detecting Human Activity Profiles with Dirichlet Enhanced Inhomogeneous Poisson Processes

Shimosaka, Masamichi, The Univ. of Tokyo
Ishino, Takahito, The Univ. of Tokyo
Noguchi, Hiroshi, The Univ. of Tokyo
Mori, Taketoshi, The Univ. of Tokyo
Sato, Tomomasa, The Univ. of Tokyo

This paper describes an activity pattern mining method via inhomogeneous Poisson point processes (IPPPs) applied to time series of count data generated by behavior detection with pyroelectric sensors. The IPPP reflects the idea that typical human activity is rhythmic and periodic. We also focus on the idea that activity patterns are affected by exogenous phenomena, such as the day of the week and weather conditions. Because a single IPPP cannot capture this, Dirichlet process mixtures (DPMs) are leveraged in order to discriminate and discover the different activity patterns caused by such factors. The use of a DPM allows us to discover the appropriate number of typical daily patterns automatically. Experimental results using long-term count data show that our model successfully and efficiently discovers typical daily patterns.
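As a rough illustration of the modeling idea (not the authors' implementation; the sinusoidal intensity and its parameters are assumptions for the sketch), an inhomogeneous Poisson process assigns each hourly count a rate lambda(t) that varies rhythmically over the day:

```python
import math

def intensity(t, base=5.0, amp=3.0, period=24.0):
    """Rhythmic daily intensity lambda(t) >= 0, in events per hour.
    The sinusoidal form is an illustrative stand-in for a learned profile."""
    return base + amp * math.sin(2 * math.pi * t / period)

def poisson_log_lik(counts, lam_fn):
    """Log-likelihood of hourly counts under the inhomogeneous Poisson model:
    count at hour t ~ Poisson(lambda(t)), with unit-length bins."""
    ll = 0.0
    for t, n in enumerate(counts):
        lam = lam_fn(t)
        ll += n * math.log(lam) - lam - math.lgamma(n + 1)
    return ll
```

In the spirit of the paper, a Dirichlet process mixture would maintain several such intensity profiles (one per discovered day type, e.g. weekday vs. weekend) and assign each observed day to the profile under which its counts are most likely, without fixing the number of profiles in advance.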

13:30-16:30, Paper ThBCT8.61

I-FAC: Efficient Fuzzy Associative Classifier for Object Classes in Images

Mangalampalli, Ashish, International Inst. of Information Tech. Hyderabad, India
Chaoji, Vineet, Yahoo! Inc
Sanyal, Subhajit, Yahoo! Lab. Bangalore, India

We present I-FAC, a novel fuzzy associative classification algorithm for object class detection in images using interest points. In object class detection, the negative class CN is generally vague (CN = U \ CP, where U and CP are the universal and positive classes, respectively), yet image classification normally requires both positive and negative classes for training. I-FAC is a single-class image classifier that relies only on the positive class for training. Because of its fuzzy nature, I-FAC also handles polysemy and synonymy, common problems in most crisp (non-fuzzy) image classifiers, very well. As associative classification leverages frequent patterns mined from a given dataset, its performance, as adjudged from its false-positive-rate (FPR) versus recall curve, is very good, especially at lower FPRs, where its recall is even better. I-FAC has the added advantage that the rules used for classification have clear semantics and can be comprehended easily, unlike classifiers such as SVM, which act as black boxes. From an empirical perspective (on standard public datasets), the performance of I-FAC is much better, especially at lower FPRs, than that of either bag-of-words (BOW) or SVM (both using interest points).
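The fuzzy-rule idea can be sketched roughly as follows; the Gaussian membership function and the min t-norm below are illustrative choices, not necessarily the paper's formulation. Each interest-point descriptor matches visual words to a degree in [0, 1] rather than crisply, and an associative rule fires with the strength of its weakest antecedent:

```python
import math

def membership(distance, sigma=1.0):
    """Fuzzy membership of a descriptor to a visual word, from its distance
    to the word's center (Gaussian kernel, an assumed choice). Partial
    membership in several words is what lets a fuzzy classifier soften
    polysemy and synonymy among visual words."""
    return math.exp(-(distance ** 2) / (2 * sigma ** 2))

def rule_strength(memberships):
    """Firing strength of an associative rule: min t-norm over the
    memberships of its antecedent visual words."""
    return min(memberships)

# A rule over three visual words fires only as strongly as its weakest match:
mus = [membership(0.0), membership(1.0), membership(2.0)]
strength = rule_strength(mus)  # equals membership(2.0), the weakest antecedent
```

The readability claim in the abstract corresponds to rules like "IF word A AND word B THEN object class", whose firing strength is directly inspectable, unlike an SVM decision value.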

13:30-16:30, Paper ThBCT8.62

Audio-Visual Data Fusion using a Particle Filter in the Application of Face Recognition

Steer, Michael, Otto-von-Guericke-Univ. Magdeburg

This paper describes a methodology by which audio and visual data about a scene can be fused in a meaningful manner in order to locate a speaker in the scene. The fusion is implemented within a particle filter such that a single speaker can be identified in the presence of multiple visual observations. The advantages of this fusion are that weak sensory data from either modality can be reinforced and the presence of noise can be reduced.
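A minimal sketch of the fusion idea, under assumptions not taken from the paper (a 1-D speaker position and Gaussian likelihoods for both modalities): each particle is weighted by the product of an audio likelihood and a visual likelihood, so evidence that is weak in one modality is reinforced by the other.

```python
import math
import random

def gaussian(x, mu, sigma):
    """Unnormalized Gaussian likelihood of position x given a cue at mu."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

def fuse_and_estimate(particles, audio_mu, visual_mu, sigma=0.5):
    """Weight each particle by BOTH modalities (product fusion) and
    return the weighted-mean position estimate."""
    weights = [gaussian(p, audio_mu, sigma) * gaussian(p, visual_mu, sigma)
               for p in particles]
    total = sum(weights)
    return sum(p * w for p, w in zip(particles, weights)) / total

random.seed(0)
particles = [random.uniform(-3.0, 3.0) for _ in range(1000)]
# Audio localization suggests the speaker is near 0.8, face detection near 1.0:
est = fuse_and_estimate(particles, audio_mu=0.8, visual_mu=1.0)
print(est)  # lies between the two cues, near 0.9
```

Multiplying the likelihoods means a visual observation that disagrees with the audio cue (e.g. a non-speaking face) receives little weight, which is how a single speaker can be picked out among multiple visual detections.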

