
Who Needs Emotions? The Brain Meets the Robot


to “lubricate” the interface between themselves and their human interlocutors but also to promote survival, self-maintenance, learning, decision making, attention, and more (Breazeal, 2002a, 2003c,d). Hence, social and affective interactions with people are valued not just at the interface but at a pragmatic and functional level for the robot as well.

Humans, however, are the most socially and emotionally advanced of all species. As one might imagine, an autonomous anthropomorphic robot that could interpret, respond to, and deliver human-style social and affective cues is quite a sophisticated machine. We have explored the simplest kind of human-style social interaction (guided and inspired by what occurs between a human infant and its caregiver) and have used this as a metaphor for building a sociable robot, called Kismet (shown in Fig. 10.1). The robot has been designed to support several social and emotive skills and mechanisms that are outlined in the rest of this chapter. Kismet is able to use these capabilities to enter into rich, flexible, dynamic interactions with people that are physical, affective, and social.

Figure 10.1. A sample of Kismet’s facial expressions for basic emotions — surprise, happy, sad, fear, disgust, and interest (see text). Kismet is about 1.5 times the size of an adult human head and has a total of 21 degrees of freedom. The robot perceives a variety of natural social cues from visual and auditory channels. Kismet has four cameras to visually perceive its environment: one behind each eye for postattentive visual processing (e.g., face detection), one between the eyes to provide a wide peripheral view (to track bright colors, skin tone, and movement), and one in the “nose” that is used in stereo with the peripheral-view camera to estimate the distance to targeted objects. A human wears a lavalier microphone to speak to the robot. (Images courtesy of Sam Ogden, © 2000.)
