PhD Thesis: Semi-Supervised Ensemble Methods for Computer Vision

LIST OF FIGURES


6.2 Training a DAS-RF on corrupted unlabeled data (green) and its OOBE (green, dashed), and on non-corrupted data (blue). After 6 iterations the SSL stops and it is trained only on labeled data. Self-learning is depicted in red. . . . . . . . . 96

8.1 Multiple instance learning principle: Positive bags (blue regions) consist of both positive and negative instances; however, the “real” instance labels are unknown to the learner. By contrast, in negative bags (red) all instances are guaranteed to be negative. . . . . . . . . 104

8.2 Some samples of the 20 COREL categories and their corresponding segmentations [Chen et al., 2006]. . . . . . . . . 113

9.1 The original on-line AdaBoost tracking loop as proposed by [Grabner and Bischof, 2006]. . . . . . . . . 117

9.2 Tracking of a textured patch against a difficult background (same texture). As soon as the object becomes occluded, the original tracker of [Grabner and Bischof, 2006] (dotted cyan) drifts away. Our proposed method (yellow) successfully re-detects the object and continues tracking. . . . . . . . . 117

9.3 The semi-supervised tracking loop: as can be seen, the on-line classifier is “aware” that putative update patches are unlabeled data, which are incorporated using prior knowledge in the form of a static classifier. . . . . . . . . 119

9.4 Detection and tracking can in principle be viewed as the same problem, depending on how fast the classifier adapts to the current scene. At one end of the spectrum lies a general object detector (e.g., [Viola and Jones, 2002b]), at the other a highly adaptive tracker (e.g., [Grabner and Bischof, 2006]). Our approach lies somewhere in between and benefits from both: (i) it is sufficiently adaptive to new appearance and lighting changes while keeping the simplification of object vs. background, and (ii) it limits (avoids large) drifting by keeping prior information about the object. . . . . . . . . 119

9.5 Dependency graph between neighboring samples of current and past frames (a), and (b) similarity matrix encoding this relation for 3 frames (block structure due to the grouping of positive and negative samples). . . . . . . . . 121

9.6 Spatial-temporal coherence between samples in tracking: thick points are samples from the current frame, thin circles are samples from previous frames. (Left) classification from the appearance-based prior; (right) the similarity encoding S smooths out individual (wrong) predictions. Color encodes the probability of a sample belonging to the positive class. . . . . . . . . 123
