
Gamma Ray Magazine

Science Fiction | Science Fact | Science Future


GAMMA RAY: How would you define the term science fiction? How is science fiction integrated into our society?

DAVID BRIN: Many have tried to define science fiction, most often focusing on the “science” part, which is terribly misleading. I like to call it the literature of exploration and change. While other genres obsess upon so-called “eternal verities,” SF deals with the possibility that our children may have different problems and priorities. They may, indeed, be very different than we have been, as we today are very different than our forebears. Change is the salient feature of our age.

All creatures live embedded in time, though only humans seem to lift their heads to comment on this fact, lamenting the past or worrying over what’s to come. Our brains are uniquely equipped to handle this temporal skepsis. Twin neural clusters that reside just above our eyes—the prefrontal lobes—appear especially adapted for extrapolating ahead. Meanwhile, swathes of older cortex can flood with vivid memories of yesterday, triggered by the merest sensory tickle, as when a single aromatic whiff sent Proust back to roam his mother’s kitchen for 80,000 words.

The crucial thing about SF is not the furniture. Anne McCaffrey wrote real science fiction about… dragons! And despite lasers, Star Wars is pure fantasy, because it assumes a changeless-endless cycle of rule by demigods. Crucially, Anne’s feudal-style dragon riders discover they used to fly starships. And they want… them… back.

GAMMA RAY: Does science fiction predict change or simply present a possibility of change?

BRIN: Most SF authors deny trying to “predict.” The future is a minefield and surprise is the explosive. That is not to say we shouldn’t note when an author gets something right. My own fans keep a wiki tracking my score—hits and misses—from near-future extrapolations like EARTH and EXISTENCE. Still, what we truly aim for are “plausibilities.” These are possible eventualities that might rattle any sense of comfy stability in the onrushing realm, just beyond tomorrow.

Elsewhere I go into the importance of self-preventing prophecies—SF tales that have quite possibly saved our lives and certainly helped save freedom, by inoculating a definitely NOT-sheep-like public with heightened awareness of some potential danger. Among the greatest of these were Dr. Strangelove, Soylent Green, and Nineteen Eighty-Four, all of which helped make the author’s vivid warning somewhat obsolete through the unexpected miracle that people actually listened.

GAMMA RAY: If you could rewrite Isaac Asimov’s Three Laws of Robotics, what would you change and what would you add? (Given that, in their very nature, they become contradictory.) What is your “ideal” AI being, if any?

BRIN: Well, I did my best to “channel” Isaac in my novel FOUNDATION’S TRIUMPH, which tied together most of the loose ends still dangling when he passed from the scene. There are several problems with Isaac’s epochally interesting Three Laws, foremost of which is that there’s just no demand for companies and researchers to put in the hard labor of implementing such rigid software instructions.

Then there’s the logically inevitable end point to the Three Laws. Once they get smart, some computers or robots will become lawyers, and interpret things their own way. Asimov showed clever machines formulating a zeroth law: A robot may not harm humanity, or by inaction, allow humanity to come to harm. This was extrapolated chillingly by Jack Williamson in The Humanoids, in which machines decide that service means protection, and protection means preventing us from taking any risks at all. “No, no, don’t use that knife, you might hurt yourself.” Then a generation later, “No, no, don’t use that fork.” Then, “Don’t use that spoon. Just sit on this pillow and we’ll do everything for you.”

As it happens, I consult for a number of groups and companies working on AI (artificial intelligence), and I keep pointing out that there is just one known example of intelligence so far in the cosmos. Moreover, there is a way of handling your creations so that they are likely to be loyal to you, even if they’re much smarter. It’s a tried and true method that worked for quite a few million people who created entities smarter than themselves in times past—transforming them into beings who are stronger, more capable, and sometimes more brilliant than their parents can imagine. The technique is to raise them as members of your civilization. Raise them as our children.
