Weakly supervised classification of objects in images using soft ...

mixtures. Convergence is typically observed within about ten iterations. These results can be explained by the fact that the IP2 procedure distinguishes, at each iteration, separate training and test sets to update the random forests and the class priors.

Fig. 1. Evolution of the performances of the iterative procedures IP1 and IP2 across iterations: dataset D1 (left), dataset D3 (right).

5.3 Semi-supervised experiments

Semi-supervised experiments have been carried out using a procedure similar to that of the previous section. Training and test sets are randomly built for a given dataset. Each training set is composed of labelled and unlabelled samples. We here report results for datasets D2 and D3 with the following experimental setting. For dataset D2, the training set contains 9 labelled examples (3 for each class) and 126 unlabelled examples (42 for each class). For dataset D3, we focus on a two-class example considering only the samples corresponding to the normal and cyclic patterns. Training sets contain 4 labelled samples and 86 unlabelled samples per class. This particular experimental setting is chosen to illustrate the relevance of semi-supervised learning when only very few labelled training samples are available. In any case, the upper bound on the classification performance of a semi-supervised scheme is given by the supervised case. Therefore, only a weak gain can be expected when a representative set of fully labelled samples is provided to the semi-supervised learner.

Five semi-supervised procedures are compared: three based on self-training (ST) strategies [5], with soft random forests (ST-SRF), with standard (hard) random forests (ST-RF), and with a naive Bayes classifier (ST-NBC); an EM-based naive Bayes classifier (EM-NBC) [6]; and the iterative procedure IP2 applied to soft random forests (IP2-SRF). Results are reported in figure 2.

These semi-supervised experiments first highlight the relevance of the soft random forests compared to their standard versions. For instance, when comparing the two within a self-training strategy, the soft random forests lead to a gain of 5% in correct classification on dataset D3. This is regarded as a direct consequence of a reduced propagation of initial classification errors with soft decisions.
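To make this mechanism concrete, the following is a minimal sketch of self-training with soft pseudo-labels in the spirit of ST-SRF, not the authors' implementation. It assumes scikit-learn's RandomForestClassifier and approximates a soft forest by duplicating each unlabelled sample once per class and weighting the copies by the predicted posteriors; the function name and parameters are illustrative.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

def self_train_soft_rf(X_lab, y_lab, X_unlab, n_classes, n_rounds=10):
    # Assumes labels are integers 0..n_classes-1 and every class is
    # represented in the labelled set, so predict_proba columns align.
    forest = RandomForestClassifier(n_estimators=100)
    forest.fit(X_lab, y_lab)
    for _ in range(n_rounds):
        # Soft pseudo-labels: keep the full posteriors instead of an
        # argmax, so uncertain early decisions are down-weighted rather
        # than frozen in as hard labels.
        P = forest.predict_proba(X_unlab)
        # Emulate a soft forest with sample weights: one weighted copy
        # of each unlabelled sample per class.
        X_aug = np.vstack([X_lab] + [X_unlab] * n_classes)
        y_aug = np.concatenate(
            [y_lab] + [np.full(len(X_unlab), c) for c in range(n_classes)])
        w_aug = np.concatenate(
            [np.ones(len(X_lab))] + [P[:, c] for c in range(n_classes)])
        forest = RandomForestClassifier(n_estimators=100)
        forest.fit(X_aug, y_aug, sample_weight=w_aug)
    return forest

A hard self-training variant (ST-RF) would instead commit to P.argmax(axis=1) at each round, which is exactly the error-propagation behaviour the soft decisions are meant to avoid.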
As previously observed, the structure of the feature space for dataset D3 further illustrates the flexibility of the random forest schemes compared to the other approaches, especially the generative models, which behave poorly.
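For completeness, the IP2-style iteration described at the top of this page, with disjoint training and update halves at each step, might be sketched as follows. This is a hedged reconstruction under stated assumptions, not the authors' algorithm: the prior-reweighting step and all names are illustrative, and scikit-learn is assumed.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def ip2_sketch(X, P, n_iter=10):
    """X: features (n, d); P: soft labels (n, n_classes), rows sum to 1."""
    P = P.copy()
    priors = P.mean(axis=0)  # initial class priors from the soft labels
    for _ in range(n_iter):
        idx_fit, idx_upd = train_test_split(np.arange(len(X)), test_size=0.5)
        forest = RandomForestClassifier(n_estimators=100)
        # Fit on one half (hard labels drawn from the current soft labels;
        # assumes every class still appears in this half)...
        forest.fit(X[idx_fit], P[idx_fit].argmax(axis=1))
        # ...and refresh the soft labels and class priors on the *other*
        # half only, so the update never scores the samples the forest
        # was fit on, which is the point made about IP2's convergence.
        post = forest.predict_proba(X[idx_upd]) * priors
        P[idx_upd] = post / post.sum(axis=1, keepdims=True)
        priors = P.mean(axis=0)
    return P, priors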
