
theories to automatically improve performance at several search-intensive planning and optimization problems.

• Second, in many other cases it is unreasonable to assume that a perfect domain theory is available. It is difficult to write a perfectly correct and complete theory even for our relatively simple SafeToStack problem. A more realistic assumption is that plausible explanations based on imperfect domain theories must be used, rather than exact proofs based on perfect knowledge.

Nevertheless, we can begin to understand the role of explanations in learning by considering the ideal case of perfect domain theories. In Chapter 12 we will consider learning from imperfect domain theories.

This section presents an algorithm called PROLOG-EBG (Kedar-Cabelli and McCarty 1987) that is representative of several explanation-based learning algorithms. PROLOG-EBG is a sequential covering algorithm (see Chapter 10). In other words, it operates by learning a single Horn clause rule, removing the positive training examples covered by this rule, then iterating this process on the remaining positive examples until no further positive examples remain uncovered. When given a complete and correct domain theory, PROLOG-EBG is guaranteed to output a hypothesis (set of rules) that is itself correct and that covers the observed positive training examples. For any set of training examples, the hypothesis output by PROLOG-EBG constitutes a set of logically sufficient conditions for the target concept, according to the domain theory. PROLOG-EBG is a refinement of the EBG algorithm introduced by Mitchell et al. (1986) and is similar to the EGGS algorithm described by DeJong and Mooney (1986). The PROLOG-EBG algorithm is summarized in Table 11.2.
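The text gives no code for this covering loop, but it is easy to sketch. In the hypothetical Python fragment below, learn_horn_clause (explain one positive example via the domain theory and generalize the explanation into a rule) and covers (does a rule match an example?) are assumed helper functions standing in for the three steps traced in Section 11.2.1; they are passed in as parameters so the loop itself is self-contained.

```python
def prolog_ebg(target_concept, positive_examples, domain_theory,
               learn_horn_clause, covers):
    """Sketch of PROLOG-EBG's sequential covering loop.

    `learn_horn_clause` and `covers` are hypothetical helpers:
    the former explains one positive example and generalizes the
    explanation into a Horn clause; the latter tests whether a
    learned rule matches a given example.
    """
    hypothesis = []                      # the learned set of rules
    uncovered = list(positive_examples)
    while uncovered:
        # Learn a single Horn clause from one positive example
        # not yet covered by any rule in the hypothesis.
        rule = learn_horn_clause(target_concept, uncovered[0], domain_theory)
        hypothesis.append(rule)
        # Remove the positive examples the new rule covers, then
        # iterate until no positive example remains uncovered.
        uncovered = [ex for ex in uncovered if not covers(rule, ex)]
    return hypothesis
```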

11.2.1 An Illustrative Trace

To illustrate, consider again the training example and domain theory shown in Table 11.1. As summarized in Table 11.2, the PROLOG-EBG algorithm is a sequential covering algorithm that considers the training data incrementally. For each new positive training example that is not yet covered by a learned Horn clause, it forms a new Horn clause by: (1) explaining the new positive training example, (2) analyzing this explanation to determine an appropriate generalization, and (3) refining the current hypothesis by adding a new Horn clause rule to cover this positive example, as well as other similar instances. Below we examine each of these three steps in turn.

11.2.1.1 EXPLAIN THE TRAINING EXAMPLE

The first step in processing each novel training example is to construct an explanation in terms of the domain theory, showing how this positive example satisfies the target concept. When the domain theory is correct and complete, this explanation constitutes a proof that the training example satisfies the target concept. When dealing with imperfect prior knowledge, the notion of explanation must be extended to allow for plausible, approximate arguments rather than perfect proofs.
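Such an explanation can be computed by standard backward chaining over the Horn-clause domain theory. Below is a minimal Python sketch, not from the text: the representation (tuples for literals, lowercase strings for variables) and the simplified two-rule fragment of the SafeToStack theory are illustrative assumptions, and LessThan is tabled as a ground fact here to avoid evaluable predicates. Running it prints a ground proof tree rooted at SafeToStack(Obj1, Obj2), which is the kind of explanation PROLOG-EBG then generalizes.

```python
from itertools import count

_fresh = count()

def is_var(t):
    # Variables are lowercase strings; constants, predicate names,
    # and numbers are anything else.
    return isinstance(t, str) and t[0].islower()

def walk(t, s):
    # Resolve a term through the substitution s.
    while is_var(t) and t in s:
        t = s[t]
    return t

def unify(a, b, s):
    # Unify two flat literals (predicate, arg, ...) under s.
    # Returns an extended substitution, or None on failure.
    if a[0] != b[0] or len(a) != len(b):
        return None
    s = dict(s)
    for x, y in zip(a[1:], b[1:]):
        x, y = walk(x, s), walk(y, s)
        if x == y:
            continue
        if is_var(x):
            s[x] = y
        elif is_var(y):
            s[y] = x
        else:
            return None
    return s

def rename(head, body):
    # Give a rule fresh variable names for each use.
    n = next(_fresh)
    def r(lit):
        return tuple(f"{t}_{n}" if is_var(t) else t for t in lit)
    return r(head), [r(b) for b in body]

def prove(goals, rules, s):
    # Backward-chain over Horn clauses, yielding (substitution,
    # proof trees) for the conjunction of goals.  Each proof tree
    # is (goal, subproofs); together they form the explanation.
    if not goals:
        yield s, []
        return
    first, rest = goals[0], goals[1:]
    for head, body in rules:
        head, body = rename(head, body)
        s2 = unify(first, head, s)
        if s2 is None:
            continue
        for s3, subproofs in prove(body, rules, s2):
            for s4, restproofs in prove(rest, rules, s3):
                yield s4, [(first, subproofs)] + restproofs

def resolve(tree, s):
    # Apply the final substitution so the printed proof is ground.
    goal, subs = tree
    return (tuple(walk(t, s) for t in goal), [resolve(t, s) for t in subs])

# A simplified, illustrative fragment of the SafeToStack domain theory,
# plus ground facts for the training example (facts are rules with
# empty bodies).  The real theory of Table 11.1 derives Weight from
# Volume and Density; the precomputed weights here are assumptions.
rules = [
    (("SafeToStack", "x", "y"), [("Lighter", "x", "y")]),
    (("Lighter", "x", "y"), [("Weight", "x", "w1"),
                             ("Weight", "y", "w2"),
                             ("LessThan", "w1", "w2")]),
    (("Weight", "Obj1", 0.6), []),
    (("Weight", "Obj2", 5.0), []),
    (("LessThan", 0.6, 5.0), []),
]

for s, proof in prove([("SafeToStack", "Obj1", "Obj2")], rules, {}):
    print(resolve(proof[0], s))   # the explanation, as a ground proof tree
    break
```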
