MIT Encyclopedia of the Cognitive Sciences - Cryptome

MRI

See MAGNETIC RESONANCE IMAGING

Multiagent Systems

Multiagent systems are distributed computer systems in which the designers ascribe to component modules autonomy, mental state, and other characteristics of agency. Software developers have applied multiagent systems to solve problems in power management, transportation scheduling, and a variety of other tasks. With the growth of the Internet and networked information systems generally, separately designed and constructed programs increasingly need to interact substantively; such complexes also constitute multiagent systems.

In the study of multiagent systems, including the field of "distributed AI" (Bond and Gasser 1988) and much of the current activity in "software agents" (Huhns and Singh 1997), researchers aim to relate aggregate behavior of the composite system with individual behaviors of the component agents and properties of the interaction protocol and environment. Frameworks for constructing and analyzing multiagent systems often draw on metaphors, as well as models and theories, from the social and ecological sciences (Huberman 1988). Such social conceptions are sometimes applied within an agent to describe its behaviors in terms of interacting subagents, as in Minsky's society of mind theory (Minsky 1986).

Design of a distributed system typically focuses on the interaction mechanism: specification of agent communication languages and interaction protocols. The interaction mechanism generally includes means to implement decisions or agreements reached as a function of the agents' interactions. Depending on the context, developers of a distributed system may also control the configuration of participating agents, the INTELLIGENT AGENT ARCHITECTURE, or even the implementation of agents themselves. In any case, principled design of the interaction mechanism requires some model of how agents behave within the mechanism, and design of agents requires a model of the mechanism rules, and (sometimes) models of the other agents.

One fundamental characteristic that bears on design of interaction mechanisms is whether the agents are presumed to be cooperative, which in the technical sense used here means that they have the same objectives (they may have heterogeneous capabilities, and may also differ on beliefs and other agent attitudes). In a cooperative setting, the role of the mechanism is to coordinate local decisions and disseminate local information in order to promote these global objectives. At one extreme, the mechanism could attempt to centralize the system by directing each agent to transmit its local state to a central source, which then treats its problem as a single-agent decision. This approach may be infeasible or expensive, due to the difficulty of aggregating belief states, increased complexity of scale, and the costs and delays of communication. Solving the problem in a decentralized manner, in contrast, forces the designer to deal directly with issues of reconciling inconsistent beliefs and accommodating local decisions made on the basis of partial, conflicting information (Durfee, Lesser, and Corkill 1992).

Even among cooperative agents, negotiation is often necessary to reach joint decisions. Through a negotiation process, for example, agents can convey the relevant information about their local knowledge and capabilities necessary to determine a principled allocation of resources or tasks among them. In the contract net protocol and its variants, agents submit "bids" describing their abilities to perform particular tasks, and a designated contract manager assigns tasks to agents based on these bids.
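The bid-and-award cycle of such a protocol can be sketched in a few lines; the agent names, task labels, and cost figures below are hypothetical illustrations, not details from this entry.

```python
# Minimal contract-net-style sketch (hypothetical agents, tasks, and costs):
# a manager announces a task, contractors bid with estimated costs, and the
# manager awards the task to the lowest bidder.

class Contractor:
    def __init__(self, name, cost_table):
        self.name = name
        self.cost_table = cost_table  # task -> estimated cost of performing it

    def bid(self, task):
        """Return an estimated cost, or None if the task is beyond this agent's abilities."""
        return self.cost_table.get(task)

def award(task, contractors):
    """Collect bids and assign the task to the cheapest capable contractor."""
    bids = [(c.bid(task), c) for c in contractors]
    bids = [(cost, c) for cost, c in bids if cost is not None]
    if not bids:
        return None  # no contractor can perform the task
    cost, winner = min(bids, key=lambda b: b[0])
    return winner.name, cost

contractors = [
    Contractor("A", {"deliver": 4, "schedule": 7}),
    Contractor("B", {"deliver": 6}),
]
print(award("deliver", contractors))   # ('A', 4)
print(award("schedule", contractors))  # ('A', 7)
```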

When tasks are not easily decomposable, protocols for managing shared information in global memory are required. Systems based on a blackboard architecture use this global memory both to direct coordinated actions of the agents and to share intermediate results relevant to multiple tasks.
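A blackboard architecture can likewise be sketched as a shared store that knowledge sources watch and post partial results to; the knowledge sources and entries here are illustrative assumptions only, not drawn from the entry.

```python
# Illustrative blackboard sketch: agents (knowledge sources) watch a shared
# store, post intermediate results, and a simple controller keeps cycling
# until no agent has anything new to contribute.

blackboard = {"raw": [3, 1, 2]}

def sorter(bb):
    if "raw" in bb and "sorted" not in bb:
        bb["sorted"] = sorted(bb["raw"])
        return True
    return False

def summarizer(bb):
    if "sorted" in bb and "summary" not in bb:
        bb["summary"] = {"min": bb["sorted"][0], "max": bb["sorted"][-1]}
        return True
    return False

agents = [sorter, summarizer]
while any(agent(blackboard) for agent in agents):
    pass  # continue while some agent contributed a new result

print(blackboard["summary"])  # {'min': 1, 'max': 3}
```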

In a noncooperative setting, objectives as well as beliefs and capabilities vary across agents. Noncooperative systems are the norm when agents represent the interests of disparate humans or human organizations. Note that having distinct objectives does not necessarily mean that the agents are adversarial or even averse to cooperation. It merely means that agents cooperate exactly when they determine that it is in their individual interests to do so.

The standard assumption for noncooperative multiagent systems is that agents behave according to principles of RATIONAL DECISION MAKING. That is, each agent acts to further its individual objectives (typically characterized in terms of UTILITY THEORY), subject to its beliefs and capabilities. In this case, the problem of designing an interaction mechanism corresponds to the standard economic concept of mechanism design, and the mathematical tools of GAME THEORY apply. Much current work in multiagent systems is devoted to game-theoretic analyses of interaction mechanisms, and especially negotiation protocols applied within such mechanisms (Rosenschein and Zlotkin 1994). Economic concepts expressly drive the design of multiagent interaction mechanisms based on market price systems (Clearwater 1996).
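Such a market-based mechanism can be sketched as an auctioneer adjusting a single price until the agents' declared demands balance the available supply; the demand rule and figures below are hypothetical, chosen only to illustrate the price-adjustment idea.

```python
# Sketch of a market price mechanism: each agent reports its demand at the
# quoted price, and the auctioneer nudges the price up or down until
# aggregate demand roughly matches supply. All figures are illustrative.

def demand(budget, price):
    """A simple demand rule: spend the entire budget at the quoted price."""
    return budget / price

def clear_market(budgets, supply, price=1.0, step=0.1, tol=1e-3):
    for _ in range(10000):
        total_demand = sum(demand(b, price) for b in budgets)
        excess = total_demand - supply
        if abs(excess) < tol:
            break
        price += step * excess / supply  # raise price when demand exceeds supply
    return price

budgets = [10.0, 20.0, 30.0]            # three agents' budgets for the resource
print(round(clear_market(budgets, supply=12.0), 2))  # converges near 60 / 12 = 5.0
```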

Both cooperative and noncooperative agents may derive some benefit by reasoning expressly about the other agents. Cooperative agents may be able to propose more effective joint plans if they know the capabilities and intentions of the other agents. Noncooperative agents can improve their bargaining positions through awareness of the options and preferences of others (agents that exploit such bargaining power are called "strategic"; those that neglect to do so are "competitive"). Because direct knowledge of other agents may be difficult to come by, agents typically induce their models of others from observations (e.g., "plan recognition"), within an interaction or across repeated interactions.
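A crude form of such model induction can be sketched as keeping counts over another agent's observed choices and predicting its most frequent action in each situation; the situations, actions, and observation sequence below are invented for illustration.

```python
# Illustrative sketch of inducing a model of another agent from observations:
# tally how often each action is observed in each situation, then predict the
# most frequent action for a new occurrence of that situation.

from collections import Counter, defaultdict

class OtherAgentModel:
    def __init__(self):
        self.counts = defaultdict(Counter)  # situation -> Counter of observed actions

    def observe(self, situation, action):
        self.counts[situation][action] += 1

    def predict(self, situation):
        """Return the most frequently observed action in this situation, if any."""
        if not self.counts[situation]:
            return None
        return self.counts[situation].most_common(1)[0][0]

model = OtherAgentModel()
for situation, action in [("low_price", "buy"), ("low_price", "buy"), ("high_price", "wait")]:
    model.observe(situation, action)

print(model.predict("low_price"))   # 'buy'
print(model.predict("high_price"))  # 'wait'
```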

See also AI AND EDUCATION; COGNITIVE ARTIFACTS; HUMAN-COMPUTER INTERACTION; RATIONAL AGENCY

—Michael P. Wellman

References

Bond, A. H., and L. Gasser, Eds. (1988). Readings in Distributed Artificial Intelligence. San Francisco: Kaufmann.
