
Who Needs Emotions? The Brain Meets the Robot


Beware the Passionate Robot

grip of passion, that is, of strong emotion. In other words, while our emotions and our reason may usually be brought into accord, there are other times when “ambition, avarice, desire, hope, fear, love, hatred, joy, grief, anger, [or] revenge” may consume us, effectively banishing all alternatives from our thoughts. In noting this, I frame the question “If emotions conveyed an advantage in biological evolution, why can they be so harmful as well?” We have already noted that Kelley (Chapter 3) examined the role of opioids in addiction and discussed how these may have great adaptive value in certain contexts yet may be maladaptive in others.

Such issues raise the prior question “Did emotions convey a selective advantage?” and the subsequent questions “Are emotions a side effect of a certain kind of cognitive complexity?” (which might imply that robots of a certain subtlety will automatically have emotion as a side effect) and “Were emotions the result of separate evolutionary changes, and if so, do their advantages outweigh their disadvantages in a way that might make it appropriate to incorporate them in robots (whether through explicit design or selective pressure)?”

At the beginning of this chapter, we considered a scenario for a computer designed to effectively teach some body of material to a human student and saw that we might include “providing what a human will recognize as a helpful emotional tone” in the list of criteria for successful program design. However, there is no evolutionary sequence here as charted by the neurobiologists—none of the serotonin or dopamine of Kelley, none of the punishment and reward of Rolls, none of the “fear circuits” of Fellous & LeDoux. This is not to deny that there can be an interesting study of “computer evolution” from the switches of the ENIAC, to the punch cards of the PDP-11, to the keyboard, to the use of the mouse and, perhaps, to the computer that perceives and expresses emotions. My point here is simply that the computer’s evolution to emotion will not have the biological grounding of human emotion. The computer may use a model of the student’s emotions yet may not itself be subject to, for example, reward or punishment. Intriguingly, this is simulation with a vengeance—yet not simulation in the mirror sense employed by Jeannerod in Chapter 6—the simulation is purely of “the other,” not a reflection of the other back onto the self. In the same way, one may have a model of a car to drive it without having an internal combustion engine or wheels. We must then ask whether this is an argument against the simulation theory of human emotion. This also points to a multilevel view. At one level, the computer “just follows the program” and humans “just follow the neural dynamics.” It is only a multilevel view that lets us single out certain variables as drives. What does that imply for robot emotions?
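To make the contrast concrete, the following is a minimal sketch, not anything from the chapter itself: a hypothetical Python fragment (names such as StudentEmotionModel and choose_tone are invented for illustration) in which a tutoring program keeps an estimate of the student’s frustration and engagement and selects an emotionally appropriate tone, while containing no reward, punishment, or drive variables of its own. The simulation is purely of “the other.”

```python
# Hypothetical sketch: a tutor that models the STUDENT's emotional state
# without having any emotional (drive/reward) variables of its own.
# All names here are illustrative assumptions, not drawn from the chapter.

from dataclasses import dataclass


@dataclass
class StudentEmotionModel:
    """The tutor's estimate of the student's state; the tutor itself has no such state."""
    frustration: float = 0.0   # 0 = calm, 1 = very frustrated
    engagement: float = 1.0    # 0 = disengaged, 1 = fully engaged

    def update(self, answer_correct: bool, seconds_taken: float) -> None:
        # Crude heuristics standing in for whatever inference a real tutor might use.
        if answer_correct:
            self.frustration = max(0.0, self.frustration - 0.2)
            self.engagement = min(1.0, self.engagement + 0.1)
        else:
            self.frustration = min(1.0, self.frustration + 0.3)
        if seconds_taken > 60:
            self.engagement = max(0.0, self.engagement - 0.2)


def choose_tone(model: StudentEmotionModel) -> str:
    """Pick a tone the student will recognize as emotionally helpful."""
    if model.frustration > 0.6:
        return "reassuring"     # slow down, encourage
    if model.engagement < 0.4:
        return "enthusiastic"   # try to re-engage
    return "neutral"            # plain instruction suffices


# Usage: the tutor updates its model of the student and selects a tone,
# but nowhere does it represent a reward or "feeling" of its own.
student = StudentEmotionModel()
for _ in range(3):
    student.update(answer_correct=False, seconds_taken=75.0)
print(choose_tone(student))  # prints "reassuring"
```

Nothing in such a program corresponds to the serotonin, dopamine, or fear circuitry discussed by the neurobiologists; the “emotion” lives entirely in the model of the student.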

Suppose, then, that we have a robot that simulates the appearance of emotional behavior but has none of the “heated feeling” that governed so
