
CHAPTER 2 CONCEPT LEARNING AND THE GENERAL-TO-SPECIFIC ORDERING 45

up in memory. If the instance is found in memory, the stored classification is returned. Otherwise, the system refuses to classify the new instance.

2. CANDIDATE-ELIMINATION algorithm: New instances are classified only in the case where all members of the current version space agree on the classification. Otherwise, the system refuses to classify the new instance.

3. FIND-S: This algorithm, described earlier, finds the most specific hypothesis consistent with the training examples. It then uses this hypothesis to classify all subsequent instances.
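The contrast among the three learners can be sketched in code. The toy domain below (two attributes, conjunctive hypotheses with "?" wildcards, omitting the all-negative hypothesis) and all function names are illustrative assumptions, not from the text:

```python
from itertools import product

# Toy domain: an instance is a pair of attribute values; a hypothesis
# is a pair whose slots are either a required value or "?" (any value).
ATTRIBUTES = (("sunny", "rainy"), ("warm", "cold"))

def matches(hypothesis, instance):
    return all(h in ("?", x) for h, x in zip(hypothesis, instance))

def rote_learner(train, instance):
    """1. Store examples; return the stored label, or None (refuse)."""
    return dict(train).get(instance)

def candidate_elimination(train, instance):
    """2. Classify only when every hypothesis consistent with the
    training data (the version space) agrees; otherwise refuse."""
    all_hyps = product(*(values + ("?",) for values in ATTRIBUTES))
    version_space = [h for h in all_hyps
                     if all(matches(h, x) == y for x, y in train)]
    votes = {matches(h, instance) for h in version_space}
    return votes.pop() if len(votes) == 1 else None

def find_s(train, instance):
    """3. Generalize the most specific hypothesis just enough to cover
    each positive example, then classify every instance with it
    (unseen instances default to negative)."""
    h = None
    for x, y in train:
        if y:
            h = x if h is None else tuple(
                a if a == b else "?" for a, b in zip(h, x))
    return h is not None and matches(h, instance)

train = [(("sunny", "warm"), True)]
unseen = ("sunny", "cold")
```

On `unseen`, the ROTE-LEARNER and the CANDIDATE-ELIMINATION algorithm both refuse (returning `None`), while FIND-S commits to a negative classification, matching the ordering of bias strength discussed next.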

The ROTE-LEARNER has no inductive bias. The classifications it provides for new instances follow deductively from the observed training examples, with no additional assumptions required. The CANDIDATE-ELIMINATION algorithm has a stronger inductive bias: that the target concept can be represented in its hypothesis space. Because it has a stronger bias, it will classify some instances that the ROTE-LEARNER will not. Of course the correctness of such classifications will depend completely on the correctness of this inductive bias. The FIND-S algorithm has an even stronger inductive bias. In addition to the assumption that the target concept can be described in its hypothesis space, it has an additional inductive bias assumption: that all instances are negative instances unless the opposite is entailed by its other knowledge.†

As we examine other inductive inference methods, it is useful to keep in mind this means of characterizing them and the strength of their inductive bias. More strongly biased methods make more inductive leaps, classifying a greater proportion of unseen instances. Some inductive biases correspond to categorical assumptions that completely rule out certain concepts, such as the bias "the hypothesis space H includes the target concept." Other inductive biases merely rank order the hypotheses by stating preferences such as "more specific hypotheses are preferred over more general hypotheses." Some biases are implicit in the learner and are unchangeable by the learner, such as the ones we have considered here. In Chapters 11 and 12 we will see other systems whose bias is made explicit as a set of assertions represented and manipulated by the learner.
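A preference bias of that second kind can be sketched as a ranking function over hypotheses. The specificity measure below (counting constrained slots in a conjunctive hypothesis with "?" wildcards) is an illustrative assumption:

```python
def specificity(hypothesis):
    # Each slot is either a required value or "?" (matches anything);
    # more constrained slots means a more specific hypothesis.
    return sum(slot != "?" for slot in hypothesis)

hypotheses = [("?", "?"), ("sunny", "?"), ("sunny", "warm")]
# Prefer more specific hypotheses over more general ones.
ranked = sorted(hypotheses, key=specificity, reverse=True)
```

Unlike a categorical bias, nothing here is ruled out: every hypothesis remains in play, and only its priority changes.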

2.8 SUMMARY AND FURTHER READING

The main points of this chapter include:

- Concept learning can be cast as a problem of searching through a large predefined space of potential hypotheses.

- The general-to-specific partial ordering of hypotheses, which can be defined for any concept learning problem, provides a useful structure for organizing the search through the hypothesis space.

†Notice this last inductive bias assumption involves a kind of default, or nonmonotonic reasoning.
