anytime algorithms for learning anytime classifiers saher ... - Technion


Technion - Computer Science Department - Ph.D. Thesis PHD-2008-12 - 2008

3.29 Anytime behavior of IIDT on the 10-XOR dataset . . . . . . . . . 61
3.30 Anytime behavior of IIDT on the Tic-tac-toe dataset . . . . . . . 62
3.31 Time steps for a single run of IIDT on 10-XOR . . . . . . . . . . 63
3.32 Learning the XOR-10 problem incrementally . . . . . . . . . . . . 64
3.33 Anytime behavior of modern learners on the XOR-5 dataset . . . 66
3.34 Anytime behavior of modern learners on the Multiplexer-20 dataset 67
3.35 Anytime behavior of modern learners on the Tic-tac-toe dataset . 68

4.1 Attribute selection in SEG2 . . . . . . . . . . . . . . . . . . . . . 72
4.2 Procedure for attribute selection in ACT . . . . . . . . . . . . . . 75
4.3 Attribute evaluation in ACT . . . . . . . . . . . . . . . . . . . . . 75
4.4 Evaluation of tree samples in ACT (uniform costs) . . . . . . . . 76
4.5 Evaluation of tree samples in ACT (nonuniform test costs) . . . . 76
4.6 Evaluation of tree samples in ACT (nonuniform error penalties) . 77
4.7 Average normalized cost as a function of misclassification cost . . 86
4.8 The differences in performance between ACT and ICET . . . . . 95
4.9 Average accuracy as a function of misclassification cost . . . . . . 96
4.10 Average normalized cost as a function of time . . . . . . . . . . . 96
4.11 Average cost when test costs are assigned randomly . . . . . . . . 97
4.12 Comparison of various algorithms when error costs are nonuniform 98

5.1 Exemplifying the recursive invocation of TDIDT$ . . . . . . . . . 101
5.2 Top-down induction of anycost decision trees . . . . . . . . . . . . 102
5.3 Attribute evaluation in Pre-Contract-TATA . . . . . . . . . . . . 104
5.4 Attribute selection in pre-contract-TATA . . . . . . . . . . . . . . 105
5.5 Building a repertoire in contract-TATA with uniform cost gaps . . 106
5.6 Classifying a case using a repertoire in contract-TATA . . . . . . 107
5.7 Building a repertoire in contract-TATA using the hill-climbing approach . . 108
5.8 Using repertoires in interruptible-TATA . . . . . . . . . . . . . . . 108
5.9 An example of applying cost discounts . . . . . . . . . . . . . . . 110
5.10 Procedure for applying discounts . . . . . . . . . . . . . . . . . . 111
5.11 Results for pre-contract classification . . . . . . . . . . . . . . . . 113
5.12 Results for pre-contract classification on Glass, AND-OR, MULTI-XOR, and KRK . . 114
5.13 TATA with different sample sizes on the Multi-XOR dataset . . . 115
5.14 Results for contract classification . . . . . . . . . . . . . . . . . . 116
5.15 Learning repertoires with different time allocations and sample sizes 117
5.16 Results for interruptible classification . . . . . . . . . . . . . . . . 118

6.1 The sensitivity of LSID3 and skewing to irrelevant attributes . . . 123
6.2 Exemplifying the difficulties other methods might face . . . . . . 129
