Darwin's Dangerous Idea - Evolution and the Meaning of Life

Your task would be made much more difficult if you couldn't count on your robot's being the only such robot around with such a mission. Let us suppose that, in addition to whatever people and other animals are up and about during the centuries to come, there will be other robots, many different robots (and perhaps "plants" as well), competing with your robot for energy and safety. (Why might such a fad catch on? Let's suppose we get irrefutable advance evidence that travelers from another galaxy will arrive on our planet in 2401. I for one would ache to be around to meet them, and if cold storage was my only prospect, I'd be tempted to go for it.) If you have to plan for dealing with other robotic agents, acting on behalf of other clients like yourself, you would be wise to design your robot with enough sophistication in its control system to permit it to calculate the likely benefits and risks of cooperating with other robots, or of forming alliances for mutual benefit. You would be most unwise to suppose that other clients will be enamored of the rule of "live and let live"—there may well be inexpensive "parasite" robots out there, for instance, just waiting to pounce on your expensive contraption and exploit it. Any calculations your robot makes about these threats and opportunities would have to be "quick and dirty"; there is no foolproof way of telling friends from foes, or traitors from promise-keepers, so you will have to design your robot to be, like a chess-player, a decision-maker who takes risks in order to respond to time pressure.

The result of this design project would be a robot capable of exhibiting self-control of a high order. Since you must cede fine-grained real-time control to it once you put yourself to sleep, you will be as "remote" as the engineers in Houston were when they gave the Viking spacecraft its autonomy (see chapter 12). As an autonomous agent, it will be capable of deriving its own subsidiary goals from its assessment of its current state and the import of that state for its ultimate goal (which is to preserve you till 2401). These secondary goals, which will respond to circumstances you cannot predict in detail (if you could, you could hard-wire the best responses to them), may take the robot far afield on century-long projects, some of which may well be ill-advised, in spite of your best efforts. Your robot may embark on actions antithetical to your purposes, even suicidal, having been convinced by another robot, perhaps, to subordinate its own life mission to some other.

This robot we have imagined will be richly engaged in its world and its projects, always driven ultimately by whatever remains of the goal states that you set up for it at the time you entered the capsule. All the preferences it will ever have will be the offspring of the preferences you initially endowed it with, in hopes that they would carry you into the twenty-fifth century, but that is no guarantee that actions taken in the light of the robot's descendant preferences will continue to be responsive, directly, to your best interests. From your selfish point of view, that is what you hope, but this robot's projects are out of your direct control until you are awakened. It will have some internal representation of its currently highest goals, its summum bonum, but if it has fallen among persuasive companions of the sort we have imagined, the iron grip of the engineering that initially designed it will be jeopardized. It will still be an artifact, still acting only as its engineering permits it to act, but following a set of desiderata partly of its own devising.

Still, according to the assumption we decided to explore, this robot will not exhibit anything but derived intentionality, since it is just an artifact, created to serve your interests. We might call this position "client centrism" with regard to the robot: I am the original source of all the derived meaning within my robot, however far afield it drifts. It is just a survival machine designed to carry me safely into the future. The fact that it is now engaged strenuously in projects that are only remotely connected with my interests, and even antithetical to them, does not, according to our assumption, endow any of its control states, or its "sensory" or "perceptual" states, with genuine intentionality. If you still want to insist on this client centrism, then you should be ready to draw the further conclusion that you yourself never enjoy any states with original intentionality, since you are just a survival machine designed, originally, for the purpose of preserving your genes until they can replicate. Our intentionality is derived, after all, from the intentionality of our selfish genes. They are the Unmeant Meaners, not us!

If this position does not appeal to you, consider jumping the other way. Acknowledge that a fancy-enough artifact—something along the lines of these imagined robots—can exhibit real intentionality, given its rich functional embedding in the environment and its prowess at self-protection and self-control.10 It, like you, owes its very existence to a project the goal of

10. In the light of this thought experiment, consider an issue raised by Fred Dretske (personal communication) with admirable crispness: "I think we could (logically) create an artifact that acquired original intentionality, but not one that (at the moment of creation, as it were) had it." How much commerce with the world would be enough to turn the dross of derived intentionality into the gold of original intentionality? This is our old problem of essentialism, in a new guise. It echoes the desire to zoom in on a crucial moment and thereby somehow identify a threshold that marks the first member of a species, or the birth of real function, or the origin of life, and as such it manifests a failure to accept the fundamental Darwinian idea that all such excellences emerge gradually by finite increments. Notice, too, that Dretske's doctrine is a peculiar brand of extreme Spencerism: the current environment must do the shaping of the organism before the shape "counts" as having real intentionality; past environments, filtered through the wisdom of engineers or a history of natural selection, don't count—even if they result in the very same functional structures. There is something wrong and something right in this. More important than any particular past history of individual appropriate commerce
