Who Needs Emotions? The Brain Meets the Robot

Robot as Partner

In the last paradigm, a person collaborates with the robot in a social manner, as he or she would with a teammate (Grosz, 1996). Robots that interact with people as capable partners need to possess social and emotional intelligence so that they can respond appropriately. A robot that cares for an elderly person should be able to respond appropriately when the patient shows signs of distress or anxiety. It should be persuasive in ways that are sensitive, such as reminding the patient when to take medication, without being annoying or upsetting. It would also need to know when to contact a health professional. Yet many current technologies (animated agents, computers, etc.) interact with us in a manner characteristic of socially or emotionally impaired people. In the best cases, they know what to do but often lack the social–emotional intelligence to do it in an appropriate manner. As a result, they frustrate us, and we quickly dismiss them even though they can be useful. Given that many exciting applications for autonomous robots will place them in long-term relationships with people, robot designers need to address these issues or people will not accept robots into their daily lives.

WHY SOCIAL/SOCIABLE ROBOTS?

In order to interact with others (whether it is a device, a robot, or even another person), it is essential to have a good conceptual model of how the other operates (Norman, 2001). With such a model, it is possible to explain and predict what the other is about to do, its reasons for doing it, and how to elicit a desired behavior from it. The design of a technological artifact, whether it is a robot, a computer, or a teapot, can help a person form this model by "projecting an image of its operation," through either visual cues or continual feedback (Norman, 2001). Hence, there is a very practical side to developing robots that can effectively convey and communicate their internal state to people for cooperative tasks, even when the style of interaction is not social.

For many autonomous robot applications, however, people will most likely use a social model to interact with robots in anthropomorphic terms. Humans are a profoundly social species. Our social–emotional intelligence is a useful and powerful means for understanding the behavior of, and for interacting with, some of the most complex entities in our world: people and other living creatures (Dennett, 1987). Faced with nonliving things of sufficient complexity (i.e., when the observable behavior is not easily understood in terms of physics and its underlying mechanisms), we often apply a social model to explain, understand, and predict their behavior, attributing
