
Notes for The Lifebox, the Seashell, and the Soul, by Rudy Rucker

modifying regimen such as meditation classes or group therapy. A pathological neurosis can be so deeply ingrained a system bug that one has to dig down quite far to change it.

Good News, Bad News

Depending on how you look at it, this may seem like either good news (we won't be replaced by smart robots) or bad news (we won't figure out how to build smart robots).

Back Propagation as a Computation<br />

In a way, a neural net is something very simple. The net-training process itself has a simple and deterministic description: pick a network architecture, pick a pseudorandomizer and assign starting weights, and do back-propagation training on a sample set. The only complicated part here is the details of what lies in the sample set examples.
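
For concreteness, here is a minimal sketch of that recipe in Python. The architecture, the learning rate, the epoch count, and the choice of the XOR truth table as the sample set are all illustrative assumptions, not anything fixed by the description above.

```python
import numpy as np

# 1. Pick a network architecture: 2 inputs, one hidden layer of 4 units, 1 output.
n_in, n_hid, n_out = 2, 4, 1

# 2. Pick a pseudorandomizer and assign starting weights.
#    A fixed seed makes the whole training run deterministic.
rng = np.random.default_rng(1)
W1 = rng.normal(0.0, 1.0, (n_in, n_hid))
b1 = np.zeros(n_hid)
W2 = rng.normal(0.0, 1.0, (n_hid, n_out))
b2 = np.zeros(n_out)

# 3. The sample set: here, the XOR truth table (an illustrative choice).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# 4. Back-propagation training on the sample set.
lr = 1.0
for epoch in range(10000):
    # forward pass
    H = sigmoid(X @ W1 + b1)            # hidden activations
    Y = sigmoid(H @ W2 + b2)            # network outputs
    # backward pass: gradients of the squared error
    dY = (Y - T) * Y * (1.0 - Y)        # output-layer delta
    dH = (dY @ W2.T) * H * (1.0 - H)    # hidden-layer delta
    W2 -= lr * H.T @ dY
    b2 -= lr * dY.sum(axis=0)
    W1 -= lr * X.T @ dH
    b1 -= lr * dH.sum(axis=0)

# Usually ends up close to the XOR targets 0, 1, 1, 0;
# a different seed may need more epochs or occasionally stall.
print(np.round(Y, 2))
```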

***<br />

Again thinking back to evolution, you might wonder why we don't tune a neural net by using a genetic algorithm approach. You could look at a whole bunch of randomly selected weight sets, repeatedly replacing the less successful weight sets by mutations and combinations of the more successful weight sets.
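
A rough sketch of what that genetic-algorithm approach might look like, with a stand-in fitness function and illustrative population, generation, and mutation settings of my own choosing:

```python
import numpy as np

rng = np.random.default_rng(2)
N_WEIGHTS, POP, GENERATIONS = 10, 30, 200

def fitness(w):
    # Stand-in fitness: prefer weight sets close to some target vector.
    # In the neural-net case this would be the net's score on the sample set.
    target = np.linspace(-1, 1, N_WEIGHTS)
    return -np.sum((w - target) ** 2)

# A whole bunch of randomly selected weight sets.
pop = rng.normal(0.0, 1.0, (POP, N_WEIGHTS))

for gen in range(GENERATIONS):
    scores = np.array([fitness(w) for w in pop])
    order = np.argsort(scores)[::-1]           # best first
    survivors = pop[order[:POP // 2]]          # keep the more successful half
    children = []
    for _ in range(POP - len(survivors)):
        a, b = survivors[rng.integers(len(survivors), size=2)]
        mask = rng.random(N_WEIGHTS) < 0.5     # combination: crossover of two parents
        child = np.where(mask, a, b)
        child += rng.normal(0.0, 0.1, N_WEIGHTS)  # mutation: small random tweak
        children.append(child)
    pop = np.vstack([survivors, np.array(children)])

best = pop[np.argmax([fitness(w) for w in pop])]
print(np.round(best, 2))
```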

You could do this, but in practice the neural net fitness landscapes are smooth enough that the back-propagation hill-climbing method works quite well, and it's faster and simpler. But what about local maxima? Mightn't a hill-climbing method end up on the top of a foothill rather than on the top of a mountain? In practice this tends not to happen, largely because so many weights are involved in a typical real-world neural net. Each additional weight to tweak adds another dimension to the fitness landscape, and these extra dimensions can act like "ridges" that lead towards the sought-for global maximum, sloping up from a location that's maximal for some but not all of the weight dimensions. Also, as mentioned in the last chapter, we don't really need absolute optimality. Reasonably good performance is often enough.

Technical Problem with Running BZ on a Network Instead of on a CA<br />

A network doesn't have the neighborhood structure of a CA: if A connects to B and A connects to C, that doesn't imply that B is "near" C on a network, though it does in space. Well, actually, if A will promise to act as a relay station, then B and C would be near in that sense.

To go more towards a network, how about giving each cell A its nearest neighbors N_A plus one remote neighbor r_A? And say we pick the remote neighbors so they aren't near each other. That is, if A and B are close, then r_A and r_B aren't close. Maybe there could be a nice canonical way to do this. If I wanted to program this, I could have a CA with an extra real-number field dist, and it would take the usual nearest-neighbor inputs plus an input from a cell whose index is dist*N, where N is the size of the CA array. I could fill the dist fields at startup with a randomizer. Oh, I did this years ago with John Walker, when I wrote the ZipZap and XipXap series of CAs in assembly language. The remote neighbors didn't change much.
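
A sketch of that remote-neighbor scheme in Python, assuming a one-dimensional CA and a placeholder averaging rule; the actual update rules of the ZipZap and XipXap programs aren't specified here, and the remote neighbors are simply randomized rather than chosen in any canonical way.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 200                                  # size of the CA array

state = rng.random(N)                    # the cell states
dist = rng.random(N)                     # extra real-number field, filled at startup
remote = (dist * N).astype(int) % N      # each cell's fixed remote neighbor index

def step(s):
    left = np.roll(s, 1)                 # usual nearest-neighbor inputs
    right = np.roll(s, -1)
    far = s[remote]                      # one remote input per cell
    return 0.25 * (left + right + far + s)   # placeholder update rule

for _ in range(100):
    state = step(state)

print(np.round(state[:10], 3))
```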

