
64 Socially Intelligent Agents

Domain Knowledge. XDM should know all the plans that enable achieving tasks in the application: ∀g ∀p (Domain-Goal g) ∧ (Domain-Plan p) ∧ (Achieves p g) ⇒ (KnowAbout XDM g) ∧ (KnowAbout XDM p) ∧ (Know XDM (Achieves p g)). It should know, as well, the individual steps of every domain plan: ∀p ∀a (Domain-Plan p) ∧ (Domain-Action a) ∧ (Step a p) ⇒ (KnowAbout XDM p) ∧ (KnowAbout XDM a) ∧ (Know XDM (Step a p)).

User Model. The agent should have some hypothesis about: (1) the user's goals, both in general and in specific phases of the interaction [∀g (Goal U (T g)) ⇒ (Bel XDM (Goal U (T g)))]; (2) her abilities [∀a (CanDo U a) ⇒ (Bel XDM (CanDo U a))]; and (3) what the user expects the agent to do, in every phase of the interaction [∀a (Goal U (IntToDo XDM a)) ⇒ (Bel XDM (Goal U (IntToDo XDM a)))]. This may be default, stereotypical knowledge about the user that is settled at the beginning of the interaction. Ideally, the model should be updated dynamically, through plan recognition.

Reasoning Rules. The agent employs this knowledge to take decisions about the level of help to provide in any phase of the interaction, according to its helping attitude, which is represented as a set of reasoning rules. For instance, if XDM-Agent is benevolent, it will respond to all the user's (implicit or explicit) requests to perform actions that it presumes she is not able to do:

Rule R1: ∀a [(Bel XDM (Goal U (IntToDo XDM a))) ∧ (Bel XDM ¬(CanDo U a)) ∧ (Bel XDM (CanDo XDM a))] ⇒ (Bel XDM (IntToDo XDM a)).

If, on the contrary, the agent is a supplier, it will do the requested action only if this does not conflict with its own goals:

Rule R2: ∀a [(Bel XDM (Goal U (IntToDo XDM a))) ∧ (Bel XDM (CanDo XDM a)) ∧ ¬∃g ((Goal XDM (T g)) ∧ (Bel XDM (Conflicts a g)))] ⇒ (Bel XDM (IntToDo XDM a)),

...and so on for the other personality traits.

Let us assume that our agent is benevolent and that the domain goal g is to write a correct email address.
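The two reasoning rules can be sketched as belief checks over a simple belief base. This is only an illustration, not the chapter's implementation: the `XDMAgent` class, the tuple encoding of formulas, and the action name `write_address` are all assumptions made here for concreteness.

```python
from dataclasses import dataclass, field

@dataclass
class XDMAgent:
    # "benevolent" applies rule R1; "supplier" applies rule R2.
    personality: str
    beliefs: set = field(default_factory=set)     # agent's beliefs (Bel XDM ...)
    goals: set = field(default_factory=set)       # agent's own goals (T g)
    conflicts: set = field(default_factory=set)   # (action, goal) pairs believed to conflict

    def intends_to_do(self, a: str) -> bool:
        """Decide (Bel XDM (IntToDo XDM a)) from the rule antecedents."""
        requested = ("Goal", "U", ("IntToDo", "XDM", a)) in self.beliefs
        user_cannot = ("Not", ("CanDo", "U", a)) in self.beliefs
        agent_can = ("CanDo", "XDM", a) in self.beliefs
        if self.personality == "benevolent":
            # R1: the user asks, is believed unable to do it, and the agent can.
            return requested and user_cannot and agent_can
        if self.personality == "supplier":
            # R2: the user asks and the agent can, unless some own goal
            # is believed to conflict with the action.
            no_conflict = not any((a, g) in self.conflicts for g in self.goals)
            return requested and agent_can and no_conflict
        return False

# A benevolent agent helps with an action the user cannot perform herself:
agent = XDMAgent(
    personality="benevolent",
    beliefs={
        ("Goal", "U", ("IntToDo", "XDM", "write_address")),
        ("Not", ("CanDo", "U", "write_address")),
        ("CanDo", "XDM", "write_address"),
    },
)
print(agent.intends_to_do("write_address"))  # True
```

A supplier agent with a conflicting goal would refuse the same request, since the no-conflict conjunct of R2 fails.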
In deciding whether to help the user, it will have to check, first of all, how the goal g may be achieved. Let us assume that no conflict exists between g and the agent's goals. By applying rule R1, XDM will come to the decision to do its best to help the user in writing the address, by directly performing all the steps of the plan. The agent might instead select a level of help to provide to the user; this level of help may also be seen as a personality trait. If, for instance, XDM-Agent is a literal helper, it will only check that the address is correct. If, on the contrary, it is an overhelper, it will go beyond the user's request of help to hypothesize her higher-order goal (for instance, to be helped in correcting the address, if possible). A subhelper
