
Artificial Intelligence<br />

<strong>Knowledge</strong> <strong>Representation</strong> <strong>and</strong> <strong>Reasoning</strong><br />

Propositional Logic<br />

<strong>Knowledge</strong> <strong>Representation</strong> <strong>and</strong> <strong>Reasoning</strong> – 1


Outline<br />

♦ <strong>Knowledge</strong>-based agents<br />

♦ Logic in general—models <strong>and</strong> entailment<br />

♦ Propositional (Boolean) logic<br />

♦ Equivalence, validity, satisfiability<br />

♦ Inference rules <strong>and</strong> theorem proving<br />

– forward chaining<br />

– backward chaining<br />

– resolution<br />

<strong>Knowledge</strong> <strong>Representation</strong> <strong>and</strong> <strong>Reasoning</strong> – 2


Motivation<br />

♦ All our search programs seem quite special purpose.<br />

♦ The search algorithm may be general, but we have to<br />

code up the problem representation <strong>and</strong> heuristic<br />

functions ourselves<br />

♦ Wouldn’t it be nice if we could build a more “general-purpose”<br />

program?<br />

♦ E.g., suppose we could just<br />

♦ tell our program about the rules of chess or checkers, <strong>and</strong><br />

it would be able to play<br />

♦ give it some ideas about how to play well, <strong>and</strong> it would use<br />

them.<br />

<strong>Knowledge</strong> <strong>Representation</strong> <strong>and</strong> <strong>Reasoning</strong> – 3


<strong>Knowledge</strong> <strong>Representation</strong><br />

♦ Such a general program would have to be capable of<br />

"knowing" lots of things:<br />

♦ E.g., the rules of chess<br />

♦ Good strategies<br />

♦ Lots of things needed to make sense out of these rules,<br />

etc. (“Background knowledge”)<br />

♦ We would need to be able to express an open-ended variety<br />

of things, in a form in which they can be properly manipulated.<br />

♦ The means of expressing general “facts”, or beliefs, etc., is<br />

called a knowledge representation language.<br />

<strong>Knowledge</strong> <strong>Representation</strong> <strong>and</strong> <strong>Reasoning</strong> – 4


Procedural vs. Declarative<br />

♦ "Know-how" versus "know-that".<br />

♦ In general, procedural knowledge is<br />

♦ Hard to use generally, reason with, introspect on,<br />

articulate.<br />

♦ Enables fast behavior<br />

♦ Declarative knowledge is<br />

♦ More flexible (multi-use), amenable to introspection,<br />

articulation<br />

♦ This is the focus of most KR work.<br />

♦ There must always be some sort of (procedural)<br />

mechanism around that interprets declarative knowledge<br />

(inference engine).<br />

<strong>Knowledge</strong> <strong>Representation</strong> <strong>and</strong> <strong>Reasoning</strong> – 5


<strong>Knowledge</strong> <strong>Representation</strong> Language<br />

♦ A KRL is a way of writing down beliefs (or other kinds of<br />

mental states)<br />

♦ Needs to be<br />

♦ Expressive<br />

♦ Unambiguous<br />

♦ Context-independent<br />

♦ Compositional<br />

♦ There have been various candidates proposed for KRLs over<br />

the years<br />

<strong>Knowledge</strong> <strong>Representation</strong> <strong>and</strong> <strong>Reasoning</strong> – 6


Logic as KRL<br />

♦ Formal logic is used as a basic framework for KRLs<br />

♦ Logic consists of<br />

♦ A language<br />

how to build up sentences in the language (syntax)<br />

<strong>and</strong> what those sentences mean (semantics)<br />

♦ An inference procedure<br />

which sentences are valid inferences from other<br />

sentences<br />

<strong>Knowledge</strong> <strong>Representation</strong> <strong>and</strong> <strong>Reasoning</strong> – 7


<strong>Knowledge</strong> bases<br />

Inference engine<br />

domain−independent algorithms<br />

<strong>Knowledge</strong> base<br />

domain−specific content<br />

♦ <strong>Knowledge</strong> base = set of sentences in a formal language<br />

♦ Declarative approach to building an agent (or other system):<br />

TELL it what it needs to know<br />

ASK itself what to do (answers follow from the KB)<br />

♦ Agents can be viewed at the<br />

♦ knowledge level<br />

i.e., what they know, regardless of how implemented<br />

♦ implementation level<br />

i.e., data structures <strong>and</strong> algorithms to manipulate KB<br />

<strong>Knowledge</strong> <strong>Representation</strong> <strong>and</strong> <strong>Reasoning</strong> – 8


A simple knowledge-based agent<br />

function KB-AGENT(percept) returns an action
   static: KB, a knowledge base
           t, a counter, initially 0, indicating time

   TELL(KB, MAKE-PERCEPT-SENTENCE(percept, t))
   action ← ASK(KB, MAKE-ACTION-QUERY(t))
   TELL(KB, MAKE-ACTION-SENTENCE(action, t))
   t ← t + 1
   return action

The agent must be able to:<br />

♦ represent states, actions, etc.<br />

♦ incorporate new percepts<br />

♦ update internal representations of the world<br />

♦ deduce hidden properties of the world<br />

♦ deduce appropriate actions<br />

<strong>Knowledge</strong> <strong>Representation</strong> <strong>and</strong> <strong>Reasoning</strong> – 9


Logic in general<br />

♦ Logics are formal languages for representing information<br />

such that conclusions can be drawn<br />

♦ Syntax defines the sentences in the language<br />

♦ Semantics define the “meaning” of sentences;<br />

i.e., define truth of a sentence in a world<br />

E.g., the language of arithmetic<br />

x + 2 ≥ y is a sentence; x2 + y > is not a sentence<br />

x + 2 ≥ y is true iff the number x + 2 is no less than the number y<br />

x + 2 ≥ y is true in a world where x = 7, y = 1<br />

x + 2 ≥ y is false in a world where x = 0, y = 6<br />

<strong>Knowledge</strong> <strong>Representation</strong> <strong>and</strong> <strong>Reasoning</strong> – 10


Entailment<br />

♦ Entailment means that one thing follows from another:<br />

KB |= α<br />

<strong>Knowledge</strong> base KB entails sentence α<br />

if <strong>and</strong> only if<br />

α is true in all worlds where KB is true<br />

♦ e.g., the KB containing “I’m hungry” <strong>and</strong> “I’m tired” entails<br />

“I’m hungry or I’m tired”<br />

♦ e.g., x + y = 4 entails 4 = x + y<br />

♦ Entailment is a relationship between sentences (i.e., syntax)<br />

based on semantics<br />

Note: brains process syntax (of some sort)<br />

<strong>Knowledge</strong> <strong>Representation</strong> <strong>and</strong> <strong>Reasoning</strong> – 11


Models<br />

Logicians typically think in terms of models, which are formally<br />

structured worlds with respect to which truth can be evaluated<br />

♦ we say m is a model of a sentence α if α is true in m<br />

♦ M(α) is the set of all models of α<br />

♦ KB |= α if <strong>and</strong> only if M(KB) ⊆ M(α)<br />

E.g.<br />

KB = I’m hungry <strong>and</strong> I’m tired<br />

α = I’m hungry<br />

[Venn diagram: each x marks a model; the set M(KB) lies inside the larger set M(α).]<br />

<strong>Knowledge</strong> <strong>Representation</strong> <strong>and</strong> <strong>Reasoning</strong> – 12
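The subset test M(KB) ⊆ M(α) can be checked directly by enumerating models. A minimal Python sketch of the hungry/tired example (the symbol names and the helper `models` are illustrative choices, not from the slides):

```python
from itertools import product

# Symbols in this tiny world (assumed from the slide's example).
symbols = ["Hungry", "Tired"]

# Sentences as predicates over a model (a dict mapping symbol -> bool).
KB    = lambda m: m["Hungry"] and m["Tired"]   # "I'm hungry and I'm tired"
alpha = lambda m: m["Hungry"] or m["Tired"]    # "I'm hungry or I'm tired"

def models(sentence):
    """Return the set of models of a sentence, each model represented
    as the frozenset of symbols that are true in it."""
    result = set()
    for values in product([False, True], repeat=len(symbols)):
        m = dict(zip(symbols, values))
        if sentence(m):
            result.add(frozenset(s for s in symbols if m[s]))
    return result

# KB |= alpha  iff  M(KB) is a subset of M(alpha)
print(models(KB) <= models(alpha))   # True
```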


Inference<br />

♦ KB ⊢ i α = α can be derived from KB by procedure i<br />

♦ Consequences of KB are a haystack; α is a needle<br />

Entailment = needle in haystack; inference = finding it<br />

♦ Soundness: i is sound if whenever KB ⊢ i α, it is also true<br />

that KB |= α<br />

♦ Completeness: i is complete if whenever KB |= α, it is also<br />

true that KB ⊢ i α<br />

<strong>Knowledge</strong> <strong>Representation</strong> <strong>and</strong> <strong>Reasoning</strong> – 13


Propositional logic: Syntax<br />

Propositional logic is the simplest logic—illustrates basic ideas<br />

The proposition symbols P 1 , P 2 etc are sentences<br />

♦ If S is a sentence, ¬S is a sentence (negation)<br />

♦ If S 1 , S 2 are sentences, S 1 ∧ S 2 is a sentence (conjunction)<br />

♦ If S 1 , S 2 are sentences, S 1 ∨ S 2 is a sentence (disjunction)<br />

♦ If S 1 , S 2 are sentences, S 1 → S 2 is a sentence (implication)<br />

♦ If S 1 , S 2 are sentences, S 1 ↔ S 2 is a sentence (biconditional)<br />

<strong>Knowledge</strong> <strong>Representation</strong> <strong>and</strong> <strong>Reasoning</strong> – 14


Propositional logic: Semantics<br />

Each model specifies true/false for each proposition symbol<br />

E.g. P 1,2 P 2,2 P 3,1<br />

false false true<br />

(With these symbols, 8 possible models, can be enumerated<br />

automatically.)<br />

Rules for evaluating truth with respect to a model m:<br />

¬S is true iff S is false<br />

S 1 ∧ S 2 is true iff S 1 is true <strong>and</strong> S 2 is true<br />

S 1 ∨ S 2 is true iff S 1 is true or S 2 is true<br />

S 1 → S 2 is true iff S 1 is false or S 2 is true<br />

i.e., is false iff S 1 is true <strong>and</strong> S 2 is false<br />

S 1 ↔ S 2 is true iff S 1 → S 2 is true <strong>and</strong> S 2 → S 1 is true<br />

Simple recursive process evaluates an arbitrary sentence, e.g.,<br />

¬P 1,2 ∧ (P 2,2 ∨ P 3,1 ) = true ∧ (false ∨ true) = true ∧ true = true<br />

<strong>Knowledge</strong> <strong>Representation</strong> <strong>and</strong> <strong>Reasoning</strong> – 15
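The recursive evaluation rules translate directly into code. A Python sketch, representing sentences as nested tuples (this representation and the name `pl_true` are assumptions for illustration):

```python
def pl_true(s, model):
    """Recursively evaluate sentence s (a symbol string or an operator
    tuple) in model, a dict mapping proposition symbols to True/False."""
    if isinstance(s, str):                 # proposition symbol
        return model[s]
    op, *args = s
    if op == "not":
        return not pl_true(args[0], model)
    if op == "and":
        return pl_true(args[0], model) and pl_true(args[1], model)
    if op == "or":
        return pl_true(args[0], model) or pl_true(args[1], model)
    if op == "implies":    # S1 -> S2 is false only when S1 true, S2 false
        return (not pl_true(args[0], model)) or pl_true(args[1], model)
    if op == "iff":
        return pl_true(args[0], model) == pl_true(args[1], model)
    raise ValueError(f"unknown operator {op!r}")

# The slide's example, with P1,2 = false, P2,2 = false, P3,1 = true:
m = {"P12": False, "P22": False, "P31": True}
s = ("and", ("not", "P12"), ("or", "P22", "P31"))
print(pl_true(s, m))   # True
```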


Truth tables for connectives<br />

P Q ¬P P ∧ Q P ∨ Q P → Q P ↔ Q<br />

false false true false false true true<br />

false true true false true true false<br />

true false false false true false false<br />

true true false true true true true<br />

<strong>Knowledge</strong> <strong>Representation</strong> <strong>and</strong> <strong>Reasoning</strong> – 16


Inference by enumeration<br />

Depth-first enumeration of all models is sound <strong>and</strong> complete<br />

function TT-ENTAILS?(KB, α) returns true or false
   symbols ← a list of the proposition symbols in KB and α
   return TT-CHECK-ALL(KB, α, symbols, [ ])

function TT-CHECK-ALL(KB, α, symbols, model) returns true or false
   if EMPTY?(symbols) then
      if PL-TRUE?(KB, model) then return PL-TRUE?(α, model)
      else return true
   else do
      P ← FIRST(symbols); rest ← REST(symbols)
      return TT-CHECK-ALL(KB, α, rest, EXTEND(P, true, model)) and
             TT-CHECK-ALL(KB, α, rest, EXTEND(P, false, model))

O(2^n) for n symbols; the problem is co-NP-complete<br />

<strong>Knowledge</strong> <strong>Representation</strong> <strong>and</strong> <strong>Reasoning</strong> – 17
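For concreteness, the same truth-table enumeration idea behind TT-ENTAILS? can be written in Python (the tuple sentence representation and helper names are my own, not from the slides):

```python
def pl_true(s, model):
    """Minimal evaluator for sentences built from symbol strings and
    ("not" | "and" | "or" | "implies", ...) tuples."""
    if isinstance(s, str):
        return model[s]
    op, *a = s
    if op == "not":     return not pl_true(a[0], model)
    if op == "and":     return pl_true(a[0], model) and pl_true(a[1], model)
    if op == "or":      return pl_true(a[0], model) or pl_true(a[1], model)
    if op == "implies": return not pl_true(a[0], model) or pl_true(a[1], model)

def symbols_in(s, acc=None):
    """Collect all proposition symbols occurring in a sentence."""
    acc = set() if acc is None else acc
    if isinstance(s, str):
        acc.add(s)
    else:
        for part in s[1:]:
            symbols_in(part, acc)
    return acc

def tt_entails(kb, alpha):
    """Truth-table entailment check, mirroring TT-ENTAILS?."""
    syms = sorted(symbols_in(kb) | symbols_in(alpha))

    def check_all(syms, model):
        if not syms:
            # In every model where KB is true, alpha must be true too.
            return pl_true(alpha, model) if pl_true(kb, model) else True
        p, rest = syms[0], syms[1:]
        return (check_all(rest, {**model, p: True}) and
                check_all(rest, {**model, p: False}))

    return check_all(syms, {})

kb = ("and", "A", ("implies", "A", "B"))
print(tt_entails(kb, "B"))   # True: {A, A -> B} |= B
```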


Logical equivalence<br />

Two sentences are logically equivalent iff true in same models:<br />

α ≡ β if <strong>and</strong> only if α |= β <strong>and</strong> β |= α<br />

(α ∧ β) ≡ (β ∧ α) commutativity of ∧<br />

(α ∨ β) ≡ (β ∨ α) commutativity of ∨<br />

((α ∧ β) ∧ γ) ≡ (α ∧ (β ∧ γ)) associativity of ∧<br />

((α ∨ β) ∨ γ) ≡ (α ∨ (β ∨ γ)) associativity of ∨<br />

¬(¬α) ≡ α double-negation elimination<br />

(α → β) ≡ (¬β → ¬α) contraposition<br />

(α → β) ≡ (¬α ∨ β) implication elimination<br />

(α ↔ β) ≡ ((α → β) ∧ (β → α)) biconditional elimination<br />

¬(α ∧ β) ≡ (¬α ∨ ¬β) de Morgan<br />

¬(α ∨ β) ≡ (¬α ∧ ¬β) de Morgan<br />

(α ∧ (β ∨ γ)) ≡ ((α ∧ β) ∨ (α ∧ γ)) distributivity of ∧ over ∨<br />

(α ∨ (β ∧ γ)) ≡ ((α ∨ β) ∧ (α ∨ γ)) distributivity of ∨ over ∧<br />

<strong>Knowledge</strong> <strong>Representation</strong> <strong>and</strong> <strong>Reasoning</strong> – 18
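Each equivalence in the table can be verified mechanically by checking that both sides agree in every model. A small Python sketch (names are illustrative):

```python
from itertools import product

def equivalent(f, g, n):
    """alpha ≡ beta iff they have the same truth value in all 2^n models."""
    return all(f(*vals) == g(*vals)
               for vals in product([False, True], repeat=n))

# de Morgan: ¬(α ∧ β) ≡ (¬α ∨ ¬β)
print(equivalent(lambda a, b: not (a and b),
                 lambda a, b: (not a) or (not b), 2))        # True

# contraposition: (α → β) ≡ (¬β → ¬α), writing X → Y as ¬X ∨ Y
print(equivalent(lambda a, b: (not a) or b,
                 lambda a, b: (not (not b)) or (not a), 2))  # True
```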


Validity <strong>and</strong> satisfiability<br />

A sentence is valid if it is true in all models,<br />

e.g., True, A ∨ ¬A, A → A, (A ∧ (A → B)) → B<br />

Validity is connected to inference via the Deduction Theorem:<br />

KB |= α if <strong>and</strong> only if (KB → α) is valid<br />

A sentence is satisfiable if it is true in some model<br />

e.g., A ∨ B, C<br />

A sentence is unsatisfiable if it is true in no models<br />

e.g., A ∧ ¬A<br />

Satisfiability is connected to inference via the following:<br />

KB |= α if <strong>and</strong> only if (KB ∧ ¬α) is unsatisfiable<br />

i.e., prove α by reductio ad absurdum<br />

<strong>Knowledge</strong> <strong>Representation</strong> <strong>and</strong> <strong>Reasoning</strong> – 19
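Validity, satisfiability, and unsatisfiability can likewise be checked by brute-force enumeration. A Python sketch (illustrative only, and exponential like any truth-table method):

```python
from itertools import product

def truth_values(f, n):
    """Truth value of f in each of the 2^n models over n symbols."""
    return [f(*vals) for vals in product([False, True], repeat=n)]

valid         = lambda f, n: all(truth_values(f, n))   # true in all models
satisfiable   = lambda f, n: any(truth_values(f, n))   # true in some model
unsatisfiable = lambda f, n: not satisfiable(f, n)     # true in no model

print(valid(lambda a: a or not a, 1))                            # True
print(valid(lambda a, b: (not (a and ((not a) or b))) or b, 2))  # (A ∧ (A→B)) → B
print(satisfiable(lambda a, b: a or b, 2))                       # True
print(unsatisfiable(lambda a: a and not a, 1))                   # True
```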


Proof methods<br />

Proof methods divide into (roughly) two kinds:<br />

♦ Application of inference rules<br />

♦ Legitimate (sound) generation of new sentences from old<br />

♦ Proof = a sequence of inference rule applications.<br />

Can use inference rules as operators in a standard search algorithm<br />

♦ Typically require translation of sentences into a normal<br />

form<br />

♦ Model checking<br />

♦ truth table enumeration (always exponential in n)<br />

♦ improved backtracking, e.g.,<br />

Davis–Putnam–Logemann–Loveland<br />

♦ heuristic search in model space (sound but incomplete)<br />

e.g., min-conflicts-like hill-climbing algorithms<br />

<strong>Knowledge</strong> <strong>Representation</strong> <strong>and</strong> <strong>Reasoning</strong> – 20


Forward <strong>and</strong> backward chaining<br />

Horn Form (restricted)<br />

KB = conjunction of Horn clauses<br />

Horn clause =<br />

♦ proposition symbol; or<br />

♦ (conjunction of symbols) → symbol<br />

E.g., C ∧ (B → A) ∧ (C ∧ D → B)<br />

Modus Ponens (for Horn Form): complete for Horn KBs<br />

α 1 ,...,α n ,<br />

α 1 ∧ · · · ∧ α n → β<br />

β<br />

Can be used with forward chaining or backward chaining.<br />

These algorithms are very natural <strong>and</strong> run in linear time<br />

<strong>Knowledge</strong> <strong>Representation</strong> <strong>and</strong> <strong>Reasoning</strong> – 21


Forward chaining<br />

Idea: fire any rule whose premises are satisfied in the KB,<br />

add its conclusion to the KB, until query is found<br />

L ∧ M → P<br />

B ∧ L → M<br />

A ∧ P → L<br />

A ∧ B → L<br />

A<br />

B<br />

P → Q<br />

[AND-OR graph for this KB: the facts A and B at the bottom, the query Q at the top.]<br />

<strong>Knowledge</strong> <strong>Representation</strong> <strong>and</strong> <strong>Reasoning</strong> – 22
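The rule-firing idea can be sketched in Python using the KB above, with each Horn clause stored as a (premises, conclusion) pair (this encoding and the name `fc_entails` are assumptions for illustration):

```python
from collections import deque

def fc_entails(clauses, facts, q):
    """Forward chaining for Horn clauses, following the idea above.
    clauses: list of (premise-set, conclusion) pairs; facts: known symbols."""
    count = [len(prem) for prem, _ in clauses]  # unsatisfied premises per clause
    inferred = set()
    agenda = deque(facts)
    while agenda:
        p = agenda.popleft()
        if p == q:
            return True
        if p in inferred:
            continue
        inferred.add(p)
        for i, (prem, concl) in enumerate(clauses):
            if p in prem:
                count[i] -= 1
                if count[i] == 0:         # all premises satisfied: fire the rule
                    agenda.append(concl)
    return False

# The slide's KB: P→Q, L∧M→P, B∧L→M, A∧P→L, A∧B→L, with facts A and B.
clauses = [({"P"}, "Q"), ({"L", "M"}, "P"), ({"B", "L"}, "M"),
           ({"A", "P"}, "L"), ({"A", "B"}, "L")]
print(fc_entails(clauses, ["A", "B"], "Q"))   # True
```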


Forward chaining algorithm<br />

function PL-FC-ENTAILS?(KB, q) returns true or false
   local variables: count, a table, indexed by clause, initially the number of premises
                    inferred, a table, indexed by symbol, each entry initially false
                    agenda, a list of symbols, initially the symbols known to be true in KB

   while agenda is nonempty do
      p ← POP(agenda)
      unless inferred[p] do
         inferred[p] ← true
         for each Horn clause c in whose premise p appears do
            decrement count[c]
            if count[c] = 0 then do
               if HEAD[c] = q then return true
               PUSH(HEAD[c], agenda)
   return false

<strong>Knowledge</strong> <strong>Representation</strong> <strong>and</strong> <strong>Reasoning</strong> – 23<br />


Forward chaining example<br />

[Animated figure, shown over several slides: the AND-OR graph of the KB above, with each
rule annotated by its count of unsatisfied premises. Starting from the facts A and B, the
counts decrease step by step as L, M, P, and finally Q are inferred (all counts reach 0).]<br />

<strong>Knowledge</strong> <strong>Representation</strong> <strong>and</strong> <strong>Reasoning</strong> – 24


Proof of completeness<br />

FC derives every atomic sentence that is entailed by KB<br />

1. FC reaches a fixed point where no new atomic sentences are<br />

derived<br />

2. Consider the final state as a model m, assigning true/false to<br />

symbols<br />

3. Every clause in the original KB is true in m<br />

Proof: Suppose a clause a 1 ∧ ... ∧ a k → b is false in m<br />

Then a 1 ∧ ... ∧ a k is true in m <strong>and</strong> b is false in m<br />

Therefore the algorithm has not reached a fixed point!<br />

4. Hence m is a model of KB<br />

5. If KB |= q, q is true in every model of KB, including m<br />

<strong>Knowledge</strong> <strong>Representation</strong> <strong>and</strong> <strong>Reasoning</strong> – 25


Backward chaining<br />

Idea: work backwards from the query q:<br />

to prove q by BC,<br />

check if q is known already, or<br />

prove by BC all premises of some rule concluding q<br />

Avoid loops: check if new subgoal is already on the goal stack<br />

Avoid repeated work: check if new subgoal<br />

1) has already been proved true, or<br />

2) has already failed<br />

<strong>Knowledge</strong> <strong>Representation</strong> <strong>and</strong> <strong>Reasoning</strong> – 26
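A Python sketch of backward chaining over the same Horn KB as the forward-chaining example. The goal stack implements the loop check described above; the caching of already-proved and already-failed subgoals is omitted for brevity (names and encoding are illustrative):

```python
def bc_entails(clauses, facts, q, stack=frozenset()):
    """Backward chaining for Horn clauses: try to prove q from facts
    and rules. clauses: list of (premise-set, conclusion) pairs."""
    if q in facts:
        return True
    if q in stack:                     # q is already a subgoal: avoid looping
        return False
    for prem, concl in clauses:
        # Prove q via any rule concluding q, by proving all its premises.
        if concl == q and all(bc_entails(clauses, facts, p, stack | {q})
                              for p in prem):
            return True
    return False

# Same KB: P→Q, L∧M→P, B∧L→M, A∧P→L, A∧B→L, with facts A and B.
clauses = [({"P"}, "Q"), ({"L", "M"}, "P"), ({"B", "L"}, "M"),
           ({"A", "P"}, "L"), ({"A", "B"}, "L")]
print(bc_entails(clauses, {"A", "B"}, "Q"))   # True
```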


Backward chaining example<br />

[Animated figure, shown over several slides: the same AND-OR graph, explored top-down from
the query Q, recursively proving the premises of each rule until the facts A and B are
reached.]<br />

<strong>Knowledge</strong> <strong>Representation</strong> <strong>and</strong> <strong>Reasoning</strong> – 27


Forward vs. backward chaining<br />

FC is data-driven, cf. automatic, unconscious processing,<br />

e.g., object recognition, routine decisions<br />

May do lots of work that is irrelevant to the goal<br />

BC is goal-driven, appropriate for problem-solving,<br />

e.g., Where are my keys? How do I get into a PhD program?<br />

Complexity of BC can be much less than linear in size of KB<br />

<strong>Knowledge</strong> <strong>Representation</strong> <strong>and</strong> <strong>Reasoning</strong> – 28


Resolution<br />

Conjunctive Normal Form (CNF—universal)<br />

conjunction of disjunctions of literals (the disjunctions are called clauses)<br />

E.g., (A ∨ ¬B) ∧ (B ∨ ¬C ∨ ¬D)<br />

Resolution inference rule (for CNF): complete for propositional<br />

logic<br />

l 1 ∨ · · · ∨ l k ,<br />

m 1 ∨ · · · ∨ m n<br />

l 1 ∨ · · · ∨ l i−1 ∨ l i+1 ∨ · · · ∨ l k ∨ m 1 ∨ · · · ∨ m j−1 ∨ m j+1 ∨ · · · ∨ m n<br />

where l i <strong>and</strong> m j are complementary literals. E.g.,<br />

P 1,3 ∨ P 2,2 , ¬P 2,2<br />

P 1,3<br />

Resolution is sound <strong>and</strong> complete for propositional logic<br />

<strong>Knowledge</strong> <strong>Representation</strong> <strong>and</strong> <strong>Reasoning</strong> – 29
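The resolution rule itself is easy to implement once literals have a concrete encoding; here strings are used, with a leading '-' marking negation (an illustrative choice, not from the slides):

```python
def resolve(ci, cj):
    """Return all resolvents of two clauses. A clause is a frozenset of
    literals; a literal is a symbol string, negated with a '-' prefix."""
    resolvents = []
    for lit in ci:
        comp = lit[1:] if lit.startswith("-") else "-" + lit
        if comp in cj:   # complementary pair found: drop it, merge the rest
            resolvents.append((ci - {lit}) | (cj - {comp}))
    return resolvents

# The slide's example: from P1,3 ∨ P2,2 and ¬P2,2, infer P1,3.
print(resolve(frozenset({"P13", "P22"}), frozenset({"-P22"})))
```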


Conversion to CNF<br />

B 1,1 ↔ (P 1,2 ∨ P 2,1 )<br />

1. Eliminate ↔, replacing α ↔ β with (α → β) ∧ (β → α).<br />

(B 1,1 → (P 1,2 ∨ P 2,1 )) ∧ ((P 1,2 ∨ P 2,1 ) → B 1,1 )<br />

2. Eliminate →, replacing α → β with ¬α ∨ β.<br />

(¬B 1,1 ∨ P 1,2 ∨ P 2,1 ) ∧ (¬(P 1,2 ∨ P 2,1 ) ∨ B 1,1 )<br />

3. Move ¬ inwards using de Morgan’s rules <strong>and</strong> double-negation:<br />

(¬B 1,1 ∨ P 1,2 ∨ P 2,1 ) ∧ ((¬P 1,2 ∧ ¬P 2,1 ) ∨ B 1,1 )<br />

4. Apply distributivity law (∨ over ∧) <strong>and</strong> flatten:<br />

(¬B 1,1 ∨ P 1,2 ∨ P 2,1 ) ∧ (¬P 1,2 ∨ B 1,1 ) ∧ (¬P 2,1 ∨ B 1,1 )<br />

<strong>Knowledge</strong> <strong>Representation</strong> <strong>and</strong> <strong>Reasoning</strong> – 30
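The four conversion steps can be sketched as a recursive Python function over nested-tuple sentences (the representation is my own; the result is a nested conjunction rather than a flattened clause list, so the final flattening step is left implicit):

```python
def to_cnf(s):
    """Convert a sentence to CNF: eliminate <->, eliminate ->, push not
    inwards (de Morgan, double negation), distribute or over and."""
    if isinstance(s, str):
        return s
    op, *a = s
    if op == "iff":                                  # step 1
        return to_cnf(("and", ("implies", a[0], a[1]),
                              ("implies", a[1], a[0])))
    if op == "implies":                              # step 2
        return to_cnf(("or", ("not", a[0]), a[1]))
    if op == "not":                                  # step 3
        inner = a[0]
        if isinstance(inner, str):
            return s
        iop, *ia = inner
        if iop == "not":
            return to_cnf(ia[0])                     # double negation
        if iop == "and":
            return to_cnf(("or", ("not", ia[0]), ("not", ia[1])))
        if iop == "or":
            return to_cnf(("and", ("not", ia[0]), ("not", ia[1])))
        return to_cnf(("not", to_cnf(inner)))        # eliminate inner ->/<-> first
    left, right = to_cnf(a[0]), to_cnf(a[1])
    if op == "or":                                   # step 4: distribute
        if isinstance(left, tuple) and left[0] == "and":
            return to_cnf(("and", ("or", left[1], right), ("or", left[2], right)))
        if isinstance(right, tuple) and right[0] == "and":
            return to_cnf(("and", ("or", left, right[1]), ("or", left, right[2])))
    return (op, left, right)

print(to_cnf(("iff", "B11", ("or", "P12", "P21"))))
```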


Resolution algorithm<br />

Proof by contradiction, i.e., show KB ∧ ¬α unsatisfiable<br />

function PL-RESOLUTION(KB, α) returns true or false
   clauses ← the set of clauses in the CNF representation of KB ∧ ¬α
   new ← { }

   loop do
      for each Ci, Cj in clauses do
         resolvents ← PL-RESOLVE(Ci, Cj)
         if resolvents contains the empty clause then return true
         new ← new ∪ resolvents
      if new ⊆ clauses then return false
      clauses ← clauses ∪ new

<strong>Knowledge</strong> <strong>Representation</strong> <strong>and</strong> <strong>Reasoning</strong> – 31


Resolution example<br />

KB = (B 1,1 ↔ (P 1,2 ∨ P 2,1 )) ∧ ¬B 1,1 α = ¬P 1,2<br />

[Resolution proof tree: resolving the CNF clauses of KB with the negated query P 1,2
eventually yields the empty clause, so KB |= ¬P 1,2 .]<br />

<strong>Knowledge</strong> <strong>Representation</strong> <strong>and</strong> <strong>Reasoning</strong> – 32
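A Python sketch of the PL-RESOLUTION loop, run on the clause form of the example above (clauses are frozensets of string literals, '-' marking negation; this encoding is illustrative):

```python
from itertools import combinations

def resolve(ci, cj):
    """All resolvents of two clauses (frozensets of '-'-prefixed literals)."""
    out = []
    for lit in ci:
        comp = lit[1:] if lit.startswith("-") else "-" + lit
        if comp in cj:
            out.append((ci - {lit}) | (cj - {comp}))
    return out

def pl_resolution(clauses):
    """Return True iff the clause set is unsatisfiable, i.e. the empty
    clause can be derived — the refutation loop of PL-RESOLUTION."""
    clauses = set(clauses)
    while True:
        new = set()
        for ci, cj in combinations(clauses, 2):
            for r in resolve(ci, cj):
                if not r:               # empty clause: contradiction found
                    return True
                new.add(r)
        if new <= clauses:              # no progress: satisfiable
            return False
        clauses |= new

# KB ∧ ¬α from the slide: CNF of (B1,1 <-> (P1,2 ∨ P2,1)) ∧ ¬B1,1, plus P1,2.
clauses = [frozenset(c) for c in
           [{"-B11", "P12", "P21"}, {"-P12", "B11"},
            {"-P21", "B11"}, {"-B11"}, {"P12"}]]
print(pl_resolution(clauses))   # True, hence KB |= ¬P1,2
```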


Local search algorithms<br />

♦ local search algorithms can be applied directly to PL-SAT<br />

♦ HILL-CLIMBING, or SIMULATED-ANNEALING<br />

♦ CSP<br />

♦ we need an appropriate evaluation function<br />

♦ goal is to satisfy every clause<br />

♦ just count the number of unsatisfied clauses<br />

♦ steps are defined by flipping the truth value of a variable<br />

♦ e.g. P = true P = false<br />

♦ search space usually contains many local minima (we need<br />

r<strong>and</strong>omness)<br />

<strong>Knowledge</strong> <strong>Representation</strong> <strong>and</strong> <strong>Reasoning</strong> – 33


The WALKSAT algorithm<br />

function WALKSAT(clauses, p, max_flips) returns a satisfying model or failure
   inputs: clauses, a set of clauses in propositional logic
           p, the probability of choosing to do a “random walk” move
           max_flips, number of flips allowed before giving up

   model ← a random assignment of true/false to the symbols in clauses
   for i = 1 to max_flips do
      if model satisfies clauses then return model
      clause ← a randomly selected clause from clauses that is false in model
      with probability p flip the value in model of a randomly selected symbol from clause
      else flip whichever symbol in clause maximizes the number of satisfied clauses
   return failure

♦ algorithm correct but incomplete<br />

♦ would be complete with max_flips = ∞<br />

♦ but with unsatisfiable formulae it would not terminate<br />

♦ useful when we expect the formula to be satisfiable<br />

♦ not good for deduction<br />

<strong>Knowledge</strong> <strong>Representation</strong> <strong>and</strong> <strong>Reasoning</strong> – 34
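A Python sketch of WALKSAT under the same clause encoding as before ('-' marks negation; the parameter defaults and helper names are illustrative). Because the formula below is satisfiable, the search is expected to find a model quickly:

```python
import random

def walksat(clauses, p=0.5, max_flips=10000):
    """WalkSAT sketch: clauses are frozensets of '-'-prefixed literals.
    Returns a satisfying model (dict) or None after max_flips."""
    rng = random.Random(0)
    syms = sorted({lit.lstrip("-") for c in clauses for lit in c})
    model = {s: rng.choice([True, False]) for s in syms}

    def lit_true(lit):
        return not model[lit[1:]] if lit.startswith("-") else model[lit]

    def n_satisfied():
        return sum(any(lit_true(l) for l in c) for c in clauses)

    for _ in range(max_flips):
        false_clauses = [c for c in clauses if not any(lit_true(l) for l in c)]
        if not false_clauses:
            return model                         # model satisfies clauses
        clause = rng.choice(false_clauses)
        flippable = sorted(lit.lstrip("-") for lit in clause)
        if rng.random() < p:                     # random walk move
            sym = rng.choice(flippable)
        else:                                    # greedy move: best flip
            def score(s):
                model[s] = not model[s]
                n = n_satisfied()
                model[s] = not model[s]
                return n
            sym = max(flippable, key=score)
        model[sym] = not model[sym]
    return None                                  # gave up (formula may still be satisfiable)

# (A ∨ ¬B) ∧ (B ∨ ¬C ∨ ¬D), the CNF example from the Resolution slide:
clauses = [frozenset({"A", "-B"}), frozenset({"B", "-C", "-D"})]
m = walksat(clauses)
print(m is not None)   # a model was found: the formula is satisfiable
```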


Summary<br />

Logical agents apply inference to a knowledge base<br />

to derive new information <strong>and</strong> make decisions<br />

Basic concepts of logic:<br />

– syntax: formal structure of sentences<br />

– semantics: truth of sentences wrt models<br />

– entailment: necessary truth of one sentence given another<br />

– inference: deriving sentences from other sentences<br />

– soundness: derivations produce only entailed sentences<br />

– completeness: derivations can produce all entailed<br />

sentences<br />

Forward, backward chaining are linear-time, complete for Horn<br />

clauses<br />

Resolution is complete for propositional logic<br />

Propositional logic lacks expressive power<br />

<strong>Knowledge</strong> <strong>Representation</strong> <strong>and</strong> <strong>Reasoning</strong> – 35
