Machine Learning - DISCo

and (b) classify the observed training examples in a single inference step. Thus, the learned rules can be seen as a reformulation of the domain theory into a set of special-case rules capable of classifying instances of the target concept in a single inference step.

• EBL as "just" restating what the learner already "knows." In one sense, the learner in our SafeToStack example begins with full knowledge of the SafeToStack concept. That is, if its initial domain theory is sufficient to explain any observed training examples, then it is also sufficient to predict their classification in advance. In what sense, then, does this qualify as learning? One answer is that in many tasks the difference between what one knows in principle and what one can efficiently compute in practice may be great, and in such cases this kind of "knowledge reformulation" can be an important form of learning. In playing chess, for example, the rules of the game constitute a perfect domain theory, sufficient in principle to play perfect chess. Despite this fact, people still require considerable experience to learn how to play chess well. This is precisely a situation in which a complete, perfect domain theory is already known to the (human) learner, and further learning is "simply" a matter of reformulating this knowledge into a form in which it can be used more effectively to select appropriate moves. A beginning course in Newtonian physics exhibits the same property: the basic laws of physics are easily stated, but students nevertheless spend a large part of a semester working out the consequences so they have this knowledge in more operational form and need not derive every problem solution from first principles come the final exam. PROLOG-EBG performs this type of reformulation of knowledge: its learned rules map directly from observable instance features to the classification relative to the target concept, in a way that is consistent with the underlying domain theory. Whereas it may require many inference steps and considerable search to classify an arbitrary instance using the original domain theory, the learned rules classify the observed instances in a single inference step.

Thus, in its pure form EBL involves reformulating the domain theory to produce general rules that classify examples in a single inference step. This kind of knowledge reformulation is sometimes referred to as knowledge compilation, indicating that the transformation is an efficiency-improving one that does not alter the correctness of the system's knowledge.
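As an illustrative sketch (not from the text), the contrast between multi-step inference under the SafeToStack domain theory and the compiled single-step rule might look as follows. The dictionary encoding of instances, the helper names, and the specific weights are assumptions introduced for illustration; only the rule structure (Volume times Density less than 5, with an end table weighing 5) follows the chapter's example.

```python
# Hedged sketch: multi-step inference under the SafeToStack domain theory
# versus the single compiled rule produced by knowledge compilation.
# Instance encoding as dicts is an assumption for this illustration.

def weight(obj):
    # Domain-theory sub-derivation: Weight from Volume and Density,
    # with the special case that an end table weighs 5.
    if obj.get("type") == "Endtable":
        return 5
    return obj["volume"] * obj["density"]

def safe_to_stack_theory(x, y):
    # Multi-step inference: derive Weight(x) and Weight(y), then
    # establish Lighter(x, y) by comparison.
    return weight(x) < weight(y)

def safe_to_stack_compiled(x, y):
    # Compiled rule: one inference step directly over observable
    # features -- Volume(x) * Density(x) < 5 and Type(y) = Endtable.
    return x["volume"] * x["density"] < 5 and y.get("type") == "Endtable"

box = {"volume": 2, "density": 0.3}
endtable = {"type": "Endtable"}
print(safe_to_stack_theory(box, endtable))    # True
print(safe_to_stack_compiled(box, endtable))  # True
```

Both routes agree on instances the compiled rule covers; the compiled version simply avoids re-deriving the intermediate Weight and Lighter facts, which is the efficiency gain knowledge compilation refers to.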

11.3.1 Discovering New Features

One interesting capability of PROLOG-EBG is its ability to formulate new features that are not explicit in the description of the training examples, but that are needed to describe the general rule underlying the training example. This capability is illustrated by the algorithm trace and the learned rule in the previous section. In particular, the learned rule asserts that the essential constraint on the Volume and Density of x is that their product is less than 5. In fact, the training examples
