


Notes for The Lifebox, the Seashell, and the Soul, by Rudy Rucker

particular could I drop VR and computer games?

I’m planning such a radical revision of the notes that I think I’ll save off my current outline as “Computers and Reality Notes, July 2, 2003.doc”.

July 4, 2003. Lifebox, Quantum Mind, NKS Triad.

Thesis: the lifebox.

Antithesis: the quantum mind.

Synthesis: NKS-style automata, which possibly are quantum computers.

Maybe just call it The Lifebox.

I see the mind’s churning as being like the eddies in a von Kármán vortex street bouncing off each other. I love Wolfram’s notion of coming up with a higher-order automaton procedure involving the eddies. In this way we are free to ignore sensitive dependence, that is, we can ignore (a) the thermal bath randomization and decoherence, and (b) we don’t have to pay so much attention to the excavation of digits.

July 14, 2003. Popular Interest in AI.<br />

Yesterday I was on a panel about “The Singularity” at the science-fiction convention Readercon 15 in Burlington, Mass.

I was surprised, again, at how naive most people are about AI. You always hear the same arguments for why a machine can’t be like a person: that a machine “doesn’t care” if it wins (but it does “care” if we give it a utility function), that a machine “doesn’t make mistakes” (not only can random errors easily be produced but, in use, machines tend to accumulate dead storage areas that make them run less well), etc.
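The utility-function point can be made concrete with a minimal sketch (my illustration, not from the notes; the names and the toy win/draw/loss scoring are made up): a machine “cares” about winning in the only operational sense available, namely that it ranks its moves by a utility function and takes the best one.

```python
# A minimal, hypothetical sketch of "caring" via a utility function.

def utility(outcome):
    # Toy scoring: winning is worth more than drawing, drawing more than losing.
    return {"win": 1.0, "draw": 0.0, "loss": -1.0}[outcome]

def choose_move(moves, predicted_outcome):
    # The machine "cares" operationally: it ranks the available moves
    # by the utility of their predicted outcomes and picks the best.
    return max(moves, key=lambda m: utility(predicted_outcome[m]))

moves = ["a", "b", "c"]
predicted = {"a": "loss", "b": "draw", "c": "win"}
print(choose_move(moves, predicted))  # picks "c", the move predicted to win
```

Nothing in the sketch requires feelings; the preference for winning is simply built into what the program maximizes.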

As always, the faces light up when I talk about the merged higher consciousness that we know, from the inside, that we have.

My fellow SF writers seemed wholly unable or unwilling to discuss the Singularity argument’s three moves as formulated by me: (a) strong AI will occur, that is, machines equivalent to humans in mental power will be evolved; (b) once we have strong AI, we can easily get superhuman intelligence by running the machines faster and giving them more memory; and (c) each generation of superhuman machines can design a still smarter next generation, setting off a cascade of more and more powerful artificial minds.

Of these three steps, only (b) is unexceptionable.

(a) may in fact never come true; strong AI may forever remain a will-o’-the-wisp. At this point really the only strong general-purpose method for AI we have is neural nets. Training a given net isn’t terribly slow, but trying to evolve toward the correct net architecture is an exponential search problem.
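A back-of-the-envelope sketch of why the architecture search blows up (my numbers, purely illustrative, not from the notes): even if each layer only picks from a handful of widths and nonlinearities, the number of candidate architectures is exponential in the depth, so brute-force search over designs dwarfs the cost of training any one of them.

```python
# Hypothetical illustration: count the candidate net architectures when each
# of `depth` layers independently chooses one of `widths` layer sizes and
# one of `activations` nonlinearities.

def num_architectures(depth, widths=4, activations=3):
    # Independent per-layer choices multiply: (widths * activations) ** depth.
    return (widths * activations) ** depth

for depth in (2, 4, 8, 16):
    print(depth, num_architectures(depth))
```

With just 12 options per layer, the count is already in the hundreds of millions at depth 8, which is the sense in which evolving toward the correct architecture is an exponential search.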

(c) is by no means a given. For if we look at the first step of the cascade, in which we humans design machines able to work as well as (let alone better than) ourselves, we find that this step is an exponential search problem. So why should it necessarily be the case that machine generation n can very easily design generation n+1?

July 23, 2003. Brockman Enters the Fray.

So I’m signed on with John Brockman; he only wants to represent this one book, doesn’t want to touch an SF novel, so we have a single-book agency agreement. Though he

