
Gamma Ray Magazine

Science Fiction | Science Fact | Science Future

David Brin on Science Fiction, Fact, and Fantasy

David Brin is one of the “10 authors most-read by AI researchers.” Naturally, he’s the guy to consult before Terminators take over the planet. With an extensive resume and years of research experience under his belt, Brin has become the go-to authority on all things science. Advising us that “criticism is the only known antidote to error” and that the best technique for integrating AI into our civilization is to raise them as our children, David Brin sat down with OMNI to discuss the worlds of science, science fiction, and fantasy. In this exclusive interview, OMNI was fortunate enough to pick the brain of everyone’s favorite scientist. And yes, we did just give him that well-deserved title.

GAMMA RAY: How would you define the term science fiction? How is science fiction integrated into our society?

DAVID BRIN: Many have tried to define science fiction, most often focusing on the “science” part, which is terribly misleading. I like to call it the literature of exploration and change. While other genres obsess upon so-called “eternal verities,” SF deals with the possibility that our children may have different problems and priorities. They may, indeed, be very different than we have been, as we today are very different than our forebears. Change is the salient feature of our age.

All creatures live embedded in time, though only humans seem to lift their heads to comment on this fact, lamenting the past or worrying over what’s to come. Our brains are uniquely equipped to handle this temporal skepsis. Twin neural clusters that reside just above our eyes—the prefrontal lobes—appear especially adapted for extrapolating ahead. Meanwhile, swathes of older cortex can flood with vivid memories of yesterday, triggered by the merest sensory tickle, as when a single aromatic whiff sent Proust back to roam his mother’s kitchen for 80,000 words.

The crucial thing about SF is not the furniture. Anne McCaffrey wrote real science fiction about… dragons! And despite lasers, Star Wars is pure fantasy, because it assumes a changeless-endless cycle of rule by demigods. Crucially, Anne’s feudal-style dragon riders discover they used to fly starships. And they want… them… back.

GAMMA RAY: Does science fiction predict change or simply present a possibility of change?

BRIN: Most SF authors deny trying to “predict.” The future is a minefield and surprise is the explosive. That is not to say we shouldn’t note when an author gets something right. My own fans keep a wiki tracking my score—hits and misses—from near-future extrapolations like EARTH and EXISTENCE.
Still, what we truly aim for are “plausibilities.” These are possible eventualities that might rattle any sense of comfy stability in the onrushing realm, just beyond tomorrow. Elsewhere I go into the importance of self-preventing prophecies—SF tales that have quite possibly saved our lives and certainly helped save freedom, by inoculating a definitely NOT-sheep-like public with heightened awareness of some potential danger. Among the greatest of these were Dr. Strangelove, Soylent Green, and Nineteen Eighty-Four, all of which helped make the author’s vivid warning somewhat obsolete through the unexpected miracle that people actually listened.

GAMMA RAY: If you could rewrite Isaac Asimov’s Three Laws of Robotics, what would you change and what would you add? (Given that, by their very nature, they become contradictory.) What is your “ideal” AI being, if any?

BRIN: Well, I did my best to “channel” Isaac in my novel FOUNDATION’S TRIUMPH, which tied together most of the loose ends still dangling when he passed from the scene. There are several problems with Isaac’s epochally interesting Three Laws, foremost of which is that there’s just no demand for companies and researchers to put in the hard labor of implementing such rigid software instructions.

Then there’s the logically inevitable end point to the Three Laws. Once they get smart, some computers or robots will become lawyers, and interpret things their own way. Asimov showed clever machines formulating a zeroth law: A robot may not harm humanity, or, by inaction, allow humanity to come to harm. This was extrapolated chillingly by Jack Williamson in The Humanoids, in which machines decide that service means protection, and protection means preventing us from taking any risks at all. “No, no, don’t use that knife, you might hurt yourself.” Then a generation later, “No, no, don’t use that fork.” Then, “Don’t use that spoon. Just sit on this pillow and we’ll do everything for you.”

As it happens, I consult for a number of groups and companies working on AI (artificial intelligence), and I keep pointing out that there is just one known example of intelligence so far in the cosmos. Moreover, there is a way of handling your creations so that they are likely to be loyal to you, even if they’re much smarter. It’s a tried-and-true method that worked for quite a few million people who created entities smarter than themselves in times past—transforming them into beings who are stronger, more capable, and sometimes more brilliant than their parents can imagine. The technique is to raise them as members of your civilization. Raise them as our children.