
422 THE EVOLUTION OF MEANINGS

get any at all) from indirect reliance on the sense organs, life histories, and purposes of their creators, Al and Bo. The real source of the meaning or truth or semanticity in the artifacts lies in these human artificers. (That was the point of the suggestion that in a certain sense Al and Bo were in their respective boxes.) Now, I might have told the story differently: inside the boxes were two robots, Al and Bo, which had each spent a longish "lifetime" scurrying around in the world gathering facts before getting in their respective boxes. I chose a simpler route, to forestall all the questions about whether box A or box B was "really thinking," but we may now reinstate the issues thereby finessed, since it is finally time to dispose once and for all of the hunch that original intentionality could not emerge in any artifactual "mind" without the intervention of a (human?) artificer. Suppose that is so. Suppose, in other words, that, whatever differences there might be between a simple box of truths like box A and the fanciest imaginable robot, since both would just be artifacts, neither could have real—or original—intentionality, but only the derivative intentionality borrowed from its creator. Now you are ready for another thought experiment, a reductio ad absurdum of that supposition.

4. SAFE PASSAGE TO THE FUTURE⁹

Suppose you decided, for whatever reasons, that you wanted to experience life in the twenty-fifth century, and suppose that the only known way of keeping your body alive that long required it to be placed in a hibernation device of sorts. Let's suppose it's a "cryogenic chamber" that cools your body down to a few degrees above absolute zero. In this chamber your body would be able to rest, suspended in a super-coma, for as long as you liked. You could arrange to climb into the chamber, and its surrounding support capsule, be put to sleep, and then automatically be awakened and released in 2401. This is a time-honored science-fiction theme, of course.

Designing the capsule itself is not your only engineering problem, since the capsule must be protected and supplied with the requisite energy (for refrigeration, etc.) for over four hundred years. You will not be able to count on your children and grandchildren for this stewardship, of course, since they will be long dead before the year 2401, and you would be most unwise to presume that your more distant descendants, if any, will take a lively interest in your well-being. So you must design a supersystem to protect your capsule and to provide the energy it needs for four hundred years.

Here there are two basic strategies you might follow. On one, you should prospect around for the ideal location, as best you can foresee, for a fixed installation that will be well supplied with water, sunlight, and whatever else your capsule (and the supersystem itself) will need for the duration. The main drawback to such an installation or "plant" is that it cannot be moved if harm comes its way—if, say, someone decides to build a freeway right where it is located. The alternative strategy is much more sophisticated and expensive, but avoids this drawback: design a mobile facility to house your capsule, along with the requisite sensors and early-warning devices so that it can move out of harm's way and seek out new sources of energy and repair materials as it needs them. In short, build a giant robot and install the capsule (with you inside it) in it.

These two basic strategies are copied from nature: they correspond roughly to the division between plants and animals. Since the latter, more sophisticated, strategy better fits our purposes, let's suppose that you decide to build a robot to house your capsule. You should try to design this robot so that above all else it "chooses" actions designed to further your interests, of course. Don't call these mere switching points in your robot's control system "choice" points if you think that this would imply that the robot had free will or consciousness, for I don't mean to smuggle any such contraband into the thought experiment. My point is uncontroversial: the power of any computer program lies in its capacity to execute branching instructions, zigging one way or another depending on some test it executes on the data then available to it, and my point is just that, as you plan your robot's control system, you would be wise to try to structure it so that whenever branching opportunities confront it, it will tend to branch down that path that has the highest probability of serving your interests. You are, after all, the raison d'être of the whole gadget. The idea of designing hardware and software that are specifically attuned to the interests of a particular human individual is not even science fiction any more, though the particular design problems facing your robot-builders would be profoundly difficult engineering challenges, somewhat beyond the state of the art today. This mobile entity would need a "vision" system to guide its locomotion, and other "sensory" systems as well, in addition to the self-monitoring capacities to inform it of its needs.
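The "choice" points described above are nothing more exotic than ordinary conditional branches. A minimal, purely illustrative sketch (every sensor reading and action name here is invented for the example, not drawn from the text):

```python
def choose_action(readings):
    """Branch on the data then available, down the path most likely
    to serve the client's interests. Each 'choice' point is just an
    ordinary conditional test; no free will or consciousness implied."""
    if readings.get("threat_detected"):        # e.g., a freeway planned nearby
        return "relocate"
    if readings.get("energy_level", 1.0) < 0.2:  # reserves running low
        return "seek_energy"
    if readings.get("damage", 0.0) > 0.5:        # self-monitoring flags wear
        return "seek_repair_materials"
    return "stay_put"                            # default: conserve resources
```

So `choose_action({"threat_detected": True})` branches to `"relocate"`, while an empty reading yields `"stay_put"`; the robot's whole "deliberation" is a cascade of such tests.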

Since you will be comatose throughout, and thus cannot stay awake to guide and plan its strategies, you will have to design the robot supersystem to generate its own plans in response to changing circumstances over the centuries. It must "know" how to "seek out" and "recognize" and then exploit energy sources, how to move to safer territory, how to "anticipate" and then avoid dangers. With so much to be done, and done fast, you had best rely whenever you can on economies: give your robot no more discriminatory prowess than it will probably need in order to distinguish whatever needs distinguishing in the world—given its particular constitution.

9. An earlier version of this thought experiment appeared in Dennett 1987b, ch. 8.
