Lecture Notes in Computer Science 5185

J. Debenham and C. Sierra

A general measure of whether q(µ) is more interesting than p is: K(q(µ) ‖ D(Xi)) > K(p ‖ D(Xi)), where K(x ‖ y) = ∑_j x_j ln(x_j / y_j) is the Kullback-Leibler distance between two probability distributions x and y.
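As an illustration of this test, the sketch below (not taken from the paper; the distributions and the helper kl are invented for illustration) compares the Kullback-Leibler distance of a candidate posterior q(µ) and of the prior p from the decay limit distribution D(Xi).

```python
import numpy as np

def kl(x, y):
    """Kullback-Leibler distance K(x || y) = sum_j x_j * ln(x_j / y_j)."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    mask = x > 0            # terms with x_j = 0 contribute nothing
    return float(np.sum(x[mask] * np.log(x[mask] / y[mask])))

# Invented distributions over three possible values of Xi
D_Xi = np.array([1/3, 1/3, 1/3])   # decay limit distribution (ignorance)
p    = np.array([0.4, 0.4, 0.2])   # current prior P^t(Xi)
q_mu = np.array([0.7, 0.2, 0.1])   # candidate posterior after message mu

# q(mu) is "more interesting" than p if it sits further from the decay limit
print(kl(q_mu, D_Xi) > kl(p, D_Xi))   # True for these example values
```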

Finally, merging Eqn. 3 and Eqn. 1, we obtain the method for updating a distribution Xi on receipt of a message µ:

P^{t+1}(Xi) = Γ_i(D(Xi), P^t(X_{i(µ)}))    (4)

This procedure deals with integrity decay and with two probabilities: first, the probability z in the percept µ, and second, the belief R^t(α,β,µ) that α attached to µ.
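The paper does not spell out Γ_i in this excerpt. Purely as a hedged illustration, the sketch below assumes that the decay of Eqn. 1 is a linear drift toward the decay limit distribution D(Xi) with rate ν, and that P^t(X_{i(µ)}) is the distribution already conditioned on µ; both assumptions are mine, not the authors'.

```python
import numpy as np

def update_on_message(decay_limit, posterior_given_mu, nu=0.05):
    """Sketch of Eqn. 4: combine integrity decay (assumed here to be a
    linear drift toward the decay limit, rate nu) with the posterior
    P^t(X_{i(mu)}) obtained from the message mu."""
    decay_limit = np.asarray(decay_limit, dtype=float)
    posterior_given_mu = np.asarray(posterior_given_mu, dtype=float)
    # Assumed form of Gamma_i: mix the message-conditioned posterior
    # with the decay limit distribution
    p_next = nu * decay_limit + (1.0 - nu) * posterior_given_mu
    return p_next / p_next.sum()   # guard against rounding drift

# Example: uniform decay limit, posterior sharpened by the message mu
print(update_on_message([1/3, 1/3, 1/3], [0.7, 0.2, 0.1]))
```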

The interaction between agents α and β will involve β making contractual commitments and (perhaps implicitly) committing to the truth of information exchanged. No matter what these commitments are, α will be interested in any variation between β's commitment, ϕ, and what is actually observed (as advised by the institution agent ξ), as the enactment, ϕ′. We denote the relationship between commitment and enactment, P^t(Observe(ϕ′) | Commit(ϕ)), simply as P^t(ϕ′|ϕ) ∈ M^t.

In the absence of incoming messages the conditional probabilities, P^t(ϕ′|ϕ), should tend to ignorance as represented by the decay limit distribution and Eqn. 1. We now show how Eqn. 4 may be used to revise P^t(ϕ′|ϕ) as observations are made. Let the set of possible enactments be Φ = {ϕ1, ϕ2, ..., ϕm} with prior distribution p = P^t(ϕ′|ϕ). Suppose that message µ is received; we estimate the posterior p(µ) = (p(µ)_i)_{i=1}^m = P^{t+1}(ϕ′|ϕ).

First, if µ = (ϕk, ϕ) is observed then α may use this observation to estimate p(ϕk)_k as some value d at time t+1. We estimate the distribution p(ϕk) by applying the principle of minimum relative entropy as in Eqn. 4 with prior p, and the posterior p(ϕk) = (p(ϕk)_j)_{j=1}^m satisfying the single constraint: J^{(ϕ′|ϕ)}(ϕk) = {p(ϕk)_k = d}.
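Before turning to the second case, here is a worked instance of this single-constraint update. For a constraint of the form p(ϕk)_k = d, the minimum relative entropy posterior has a closed form: component k is set to d and the remaining mass 1 − d is spread over the other components in proportion to the prior. The sketch below illustrates this with invented numbers.

```python
import numpy as np

def mre_single_constraint(prior, k, d):
    """Minimum relative entropy posterior subject to the single
    constraint p_k = d: component k is fixed at d and the remaining
    mass 1 - d is shared among the other components in proportion
    to the prior."""
    prior = np.asarray(prior, dtype=float)
    post = prior * (1.0 - d) / (1.0 - prior[k])
    post[k] = d
    return post

# Illustrative prior P^t(phi'|phi) over m = 4 possible enactments;
# the enactment phi_2 (index 1) is observed and its probability is
# estimated at d = 0.6
p = np.array([0.5, 0.2, 0.2, 0.1])
print(mre_single_constraint(p, k=1, d=0.6))   # [0.25 0.6 0.1 0.05]
```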

Second, we consider the effect that the enactment φ′ of another commitment φ, also by agent β, has on p = P^t(ϕ′|ϕ). Given the observation µ = (φ′, φ), define the vector t as a linear function of semantic distance by:

t_i = P^t(ϕi|ϕ) + (1 − |δ(φ′,φ) − δ(ϕi,ϕ)|) · δ(ϕ′,φ)

for i = 1,...,m. t is not a probability distribution. The multiplying factor δ(ϕ′,φ) limits the variation of probability to those formulae whose ontological context is not too far away from the observation. The posterior p(φ′,φ) is defined to be the normalisation of t.
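A minimal sketch of this second update, assuming a semantic distance function δ with values in [0, 1]; all numeric values and parameter names below are invented for illustration.

```python
import numpy as np

def cross_update(prior, delta_obs, delta_enactments, delta_factor):
    """Build the vector t and normalise it.

    prior            -- P^t(varphi_i | varphi) for i = 1..m
    delta_obs        -- delta(phi', phi), distance within the observed pair
    delta_enactments -- delta(varphi_i, varphi) for each possible enactment
    delta_factor     -- delta(varphi', phi), the multiplying factor from the text
    """
    prior = np.asarray(prior, dtype=float)
    delta_enactments = np.asarray(delta_enactments, dtype=float)
    t = prior + (1.0 - np.abs(delta_obs - delta_enactments)) * delta_factor
    return t / t.sum()    # t is not a distribution, so normalise it

# Invented example with m = 3 possible enactments
print(cross_update(prior=[0.6, 0.3, 0.1],
                   delta_obs=0.2,
                   delta_enactments=[0.1, 0.5, 0.9],
                   delta_factor=0.3))
```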

3.2 Estimating Reliability

R^t(α,β,µ) is an epistemic probability that takes account of α's personal caution. An empirical estimate of R^t(α,β,µ) may be obtained by measuring the 'difference' between commitment and enactment. Suppose that µ is received from agent β at time u and is verified by ξ as µ′ at some later time t. Denote the prior P^u(Xi) by p. Let p(µ) be the posterior minimum relative entropy distribution subject to the constraints J^{Xi}_s(µ), and let p(µ′) be that distribution subject to J^{Xi}_s(µ′). We now estimate what R^u(α,β,µ) should have been in the light of knowing now, at time t, that µ should have been µ′.

The idea of Eqn. 2 is that R^t(α,β,µ) should be such that, on average across M^t, q(µ) will predict p(µ′), no matter whether or not µ was used to update the distribution.
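Eqn. 2 itself is not reproduced in this excerpt. Purely as a hedged sketch, the code below assumes a reliability-weighted update of the form q(µ) = r · p(µ) + (1 − r) · p and searches for the r that brings q(µ) closest, in Kullback-Leibler distance, to the verified posterior p(µ′); both the functional form and the grid search are assumptions made here for illustration, not the paper's definition.

```python
import numpy as np

def kl(x, y):
    """Kullback-Leibler distance K(x || y)."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    mask = x > 0
    return float(np.sum(x[mask] * np.log(x[mask] / y[mask])))

def estimate_reliability(p, p_mu, p_mu_prime, grid=np.linspace(0.0, 1.0, 101)):
    """Assumed update form: q(mu) = r * p(mu) + (1 - r) * p.
    Return the r for which q(mu) best predicts p(mu'), i.e. the r
    minimising K(p(mu') || q(mu))."""
    p, p_mu, p_mu_prime = (np.asarray(v, dtype=float) for v in (p, p_mu, p_mu_prime))
    scores = [kl(p_mu_prime, r * p_mu + (1.0 - r) * p) for r in grid]
    return float(grid[int(np.argmin(scores))])

# Invented example: the verified enactment mu' only partly confirms mu
p          = [0.4, 0.4, 0.2]    # prior P^u(Xi)
p_mu       = [0.8, 0.1, 0.1]    # MRE posterior subject to the constraints from mu
p_mu_prime = [0.6, 0.25, 0.15]  # MRE posterior subject to the constraints from mu'
print(estimate_reliability(p, p_mu, p_mu_prime))   # 0.5 for these values
```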
