
example that is not yet covered by the current Horn clauses, explains this new example, and formulates a new rule according to the procedure described above.

Notice that only positive examples are covered in the algorithm as we have defined it, and the learned set of Horn clause rules predicts only positive examples. A new instance is classified as negative if the current rules fail to predict that it is positive. This is in keeping with the standard negation-as-failure approach used in Horn clause inference systems such as PROLOG.
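
To make this classification rule concrete, below is a minimal Python sketch (not from the text) of how a learned rule set might label a new instance under negation-as-failure. The representation is an illustrative assumption: each learned clause is reduced to a set of ground preconditions and each instance to a set of ground facts, ignoring the variable binding a real Horn clause interpreter would perform.

def classify(instance_facts, learned_clauses):
    # Positive if some learned clause's preconditions are all satisfied by the
    # instance's facts; otherwise negative, by negation-as-failure.
    for preconditions in learned_clauses:
        if preconditions <= instance_facts:
            return "positive"
    return "negative"

# Hypothetical learned clauses and instances, written as sets of ground facts.
clauses = [
    {"Volume(Obj1, Small)", "Density(Obj1, Light)"},
    {"Material(Obj1, Styrofoam)"},
]
print(classify({"Volume(Obj1, Small)", "Density(Obj1, Light)", "Color(Obj1, Red)"},
               clauses))                                  # positive
print(classify({"Color(Obj1, Red)"}, clauses))            # negative: no clause applies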

11.3 REMARKS ON EXPLANATION-BASED LEARNING

As we saw in the above example, PROLOG-EBG conducts a detailed analysis of individual training examples to determine how best to generalize from the specific example to a general Horn clause hypothesis. The following are the key properties of this algorithm.

• Unlike inductive methods, PROLOG-EBG produces justified general hypotheses by using prior knowledge to analyze individual examples.

• The explanation of how the example satisfies the target concept determines which example attributes are relevant: those mentioned by the explanation.

• The further analysis of the explanation, regressing the target concept to determine its weakest preimage with respect to the explanation, allows deriving more general constraints on the values of the relevant features (a brief sketch of this regression step follows the list).

• Each learned Horn clause corresponds to a sufficient condition for satisfying the target concept. The set of learned Horn clauses covers the positive training examples encountered by the learner, as well as other instances that share the same explanations.

• The generality of the learned Horn clauses will depend on the formulation of the domain theory and on the sequence in which training examples are considered.

• PROLOG-EBG implicitly assumes that the domain theory is correct and complete. If the domain theory is incorrect or incomplete, the resulting learned concept may also be incorrect.
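
The regression step mentioned in the third point above can be illustrated with a small Python sketch. This is an assumed, simplified rendering of the idea rather than the procedure as given in the text: literals are tuples, variables are lowercase strings, and the two rules used in the example (in the spirit of the SafeToStack domain theory) are hypothetical.

def is_var(term):
    # Assumed convention: variables are lowercase strings, constants capitalized.
    return isinstance(term, str) and term[:1].islower()

def substitute(literal, theta):
    pred, *args = literal
    return (pred, *[theta.get(a, a) for a in args])

def match_head(head, goal):
    # Bind the rule head's variables so that the head equals the goal literal.
    if head[0] != goal[0] or len(head) != len(goal):
        return None
    theta = {}
    for h, g in zip(head[1:], goal[1:]):
        h = theta.get(h, h)
        if is_var(h):
            theta[h] = g
        elif h != g:
            return None
    return theta

def regress(frontier, rule_head, rule_body, proved_literal):
    # Replace the literal proved by the rule with the rule's preconditions,
    # under the substitution that matches the rule head to that literal.
    theta = match_head(rule_head, proved_literal)
    remaining = [lit for lit in frontier if lit != proved_literal]
    return remaining + [substitute(lit, theta) for lit in rule_body]

# Regress SafeToStack(x, y) through the rule  SafeToStack(x, y) <- Lighter(x, y)
frontier = regress([('SafeToStack', 'x', 'y')],
                   ('SafeToStack', 'x', 'y'), [('Lighter', 'x', 'y')],
                   ('SafeToStack', 'x', 'y'))
# ...then through  Lighter(x, y) <- Weight(x, wx), Weight(y, wy), LessThan(wx, wy)
frontier = regress(frontier,
                   ('Lighter', 'x', 'y'),
                   [('Weight', 'x', 'wx'), ('Weight', 'y', 'wy'), ('LessThan', 'wx', 'wy')],
                   ('Lighter', 'x', 'y'))
print(frontier)  # weakest preimage so far: Weight and LessThan constraints on x and y

The frontier that remains after regressing through every rule used in the explanation gives the body of the learned Horn clause, i.e., a general sufficient condition for the target concept.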

There are several related perspectives on explanation-based learning that help to understand its capabilities and limitations.

• EBL as theory-guided generalization of examples. EBL uses its given domain theory to generalize rationally from examples, distinguishing the relevant example attributes from the irrelevant, thereby allowing it to avoid the bounds on sample complexity that apply to purely inductive learning. This is the perspective implicit in the above description of the PROLOG-EBG algorithm.

• EBL as example-guided reformulation of theories. The PROLOG-EBG algorithm can be viewed as a method for reformulating the domain theory into a more operational form. In particular, the original domain theory is reformulated by creating rules that (a) follow deductively from the domain theory,
