
Darwin's Dangerous Idea - Evolution and the Meaning of Life


THE EMPEROR'S NEW MIND, AND OTHER FABLES: The Library of Toshiba

That is interesting, but it doesn't help us much. Lots of interesting things can be proved, mathematically, about each and every member of various sets of algorithms. Applying that knowledge in the real world is another matter, and that is the blind spot that led Penrose to overlook AI altogether, instead of refuting it, as he hoped. This has come out quite clearly in his subsequent attempts at reformulation of his claim in response to his critics.

Given any particular algorithm, that algorithm cannot be the procedure whereby human mathematicians ascertain mathematical truth. Hence humans are not using algorithms at all to ascertain truth. [Penrose 1990, p. 696.]

Human ma<strong>the</strong>maticians are not using a knowably sound algorithm in order<br />

to ascertain ma<strong>the</strong>matical truth. [Penrose 1991.]<br />

In <strong>the</strong> more recent <strong>of</strong> <strong>the</strong>se, he goes on to consider <strong>and</strong> close various<br />

"loopholes," <strong>of</strong> which two in particular concern us. ma<strong>the</strong>maticians might be<br />

using "a horrendously complicated unknowable algorithm X" or "an unsound<br />

(but presumably approximately sound) algorithm Y." Penrose presents <strong>the</strong>se<br />

loopholes as if <strong>the</strong>y were ad hoc responses to <strong>the</strong> challenge <strong>of</strong> Godel's<br />

Theorem, instead <strong>of</strong> <strong>the</strong> st<strong>and</strong>ard working assumptions <strong>of</strong> AI. Of <strong>the</strong> first he<br />

says:<br />

This seems to be totally at variance with what mathematicians seem actually to be doing when they express their arguments in terms that can (at least in principle) be broken down into assertions that are 'obvious', and agreed by all. I would regard it as far-fetched in the extreme to believe that it is really the horrendous unknowable X, rather than these simple and obvious ingredients [emphasis added], that lies lurking behind all our mathematical understanding. [Penrose 1991]

These "ingredients" are indeed wielded by us all in an apparently nonalgorithmic way, but this phenomenological fact is misleading. Penrose pays careful attention to what it is like to be a mathematician, but he overlooks a possibility—indeed, a likelihood—that is familiar to AI researchers: the possibility that underlying our general capacity to deal with such "ingredients" is a heuristic program of mind-boggling complexity. Such a complicated algorithm would approximate the competence of the perfect understander, and be "invisible" to its beneficiary. Whenever we say we solved some problem "by intuition," all that really means is we don't know how we solved it. The simplest way of modeling "intuition" in a computer is simply denying the computer program any access to its own inner workings. Whenever it solves a problem, and you ask it how it solved the problem, it should respond: "I don't know; it just came to me by intuition" (Dennett 1968).
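The modeling trick Dennett describes can be sketched in a few lines. This toy is my own illustration, not code from the text or from any real AI system: the problem-solving machinery lives in a function the self-reporting layer simply cannot inspect, so every explanation bottoms out in "intuition." The list-summing "problem" is a stand-in for whatever mind-bogglingly complicated heuristics would really be there.

```python
def _hidden_solver(problem):
    # Stand-in for a heuristic program of arbitrary complexity.
    # The reporting layer below has no access to this function's workings.
    return sum(problem)


class IntuitiveSolver:
    """Solves problems but cannot say how it did so."""

    def solve(self, problem):
        # The answer is produced by machinery that is "invisible"
        # to the part of the system that answers questions about itself.
        return _hidden_solver(problem)

    def explain(self):
        # With no access to its own inner workings, the only honest
        # self-report is the one Dennett predicts:
        return "I don't know; it just came to me by intuition."


solver = IntuitiveSolver()
answer = solver.solve([2, 3, 5])      # -> 10
report = solver.explain()
```

The design choice is the whole point: competence and self-knowledge are separate modules, and nothing requires the second to have a window into the first.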

He goes on to dismiss his second loophole (the unsound algorithm) by claiming (1991): "Mathematicians require a degree of rigour that makes such heuristic arguments unacceptable—so no such known procedure of this kind can be the way that mathematicians actually operate." This is a more interesting mistake, for with it he raises the prospect that the crucial empirical test would be not to put a single mathematician "in the box" but the whole mathematical community! Penrose sees the theoretical importance of the added power that human mathematicians obtain by pooling their resources, communicating with each other, and hence becoming a sort of single giant mind that is hugely more reliable than any one homunculus we might put in the box. It is not that mathematicians have fancier brains than the rest of us (or than chimpanzees) but that they have mind-tools—the social institutions in which mathematicians present each other their proofs, check each other out, make mistakes in public, and then count on the public to correct those mistakes. This does indeed give the mathematics community powers to discern mathematical truth that dwarf the powers of any individual human brain (even an individual brain with paper-and-pencil peripherals, a hand calculator, or a laptop!). But this does not show that human minds are not algorithmic devices; on the contrary, it shows how the cranes of culture can exploit human brains in distributed algorithmic processes that have no discernible limits.
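The reliability gain from pooling fallible checkers can be made vivid with a back-of-the-envelope probability sketch. The error rate and committee size below are assumptions of mine, not figures from the text, and the checkers are idealized as independent: if each errs with probability p, a majority verdict goes wrong only when more than half err at once.

```python
from math import comb


def majority_error(n, p):
    """Probability that a strict majority of n independent checkers,
    each erring with probability p, all go wrong together."""
    k_min = n // 2 + 1
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(k_min, n + 1))


single = 0.10                           # one checker's assumed error rate
community = majority_error(25, 0.10)    # 25 checkers, majority vote

# A lone checker errs one time in ten; the committee's majority errs
# less than one time in a million under these (idealized) assumptions.
```

Real mathematical checking is not independent voting, of course; the point of the sketch is only the direction and scale of the effect, which is why a community "dwarfs the powers of any individual human brain."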

Penrose doesn't quite see it that way. He goes on to say that "it is our general (non-algorithmic) ability to understand" that accounts for our mathematical abilities, and then he concludes: "It was not an algorithm x that was favoured, in Man (at least) by natural selection, but this wonderful ability to understand!" (Penrose 1991). Here he commits the fallacy I just exposed using the chess example. Penrose wants to argue:

x can understand;

there is no feasible algorithm for understanding;

therefore: what natural selection selected, the whatever-it-is that accounts for understanding, is not an algorithm.

This conclusion is a non sequitur. If the mind is an algorithm (contrary to Penrose's claim), surely it is not an algorithm that is recognizable to, or accessible to, those whose minds it creates. It is, in his terms, unknowable. As a product of biological design processes (both genetic and individual), it is almost certainly one of those algorithms that are somewhere or other in the Vast space of interesting algorithms, full of typographical errors or "bugs," but good enough to bet your life on—so far. Penrose sees this as a "far-fetched" possibility, but if that is all he can say against it, he has not yet come to grips with the best version of "strong AI."
