2. Philosophy - Stefano Franchi


ing hardware. It is no different in software design: many decisions are forced by the “raw material” the programmer has to deal with, and many design decisions have to be taken in order to keep the problem tractable, e.g. to make the simulation work. As a consequence, it becomes difficult to determine what counts as an explanation, i.e. what and where the psychological theory is. A simple example makes the problem clear: to keep the complexity of the problem within the limits of the hardware available at the time, some chess-playing programs were forced to play on a 6x6 board, with no bishops, no en passant captures, and no castling allowed. 33 It might be imagined that a chess expert would have serious objections to the plausibility of the resulting simulation, since the game played differs substantively, and perhaps radically, from the game of chess.

Newell is well aware of this difficulty and calls it the irrelevant specification problem. His proposed solution, however, is almost paradoxical, since he suggests a unified theory of cognition in which every specification is relevant because the “theory comes closer to specifying all of the details of the simulation, which is to say, all that is needed to make the simulation run”. 34 This suggestion seems to imply that the problem will go away once the engineering aspect has taken over completely, that is, once the simulation is indistinguishable from the original. How, however, are we to determine at which level the similarity should stop? In particular, how would it be possible, on such a view, to claim that simulations run on computers can exhibit psychological plausibility? Sooner or later, some decision or other will have to do with a “substratum” that is irrelevant to the subject matter of psychology: the materiality of the computer will not go away.

This last remark brings me to the second point of tension between AI-engineering and AI-psychology, a problem that can be seen as the reverse of the first. It may be claimed that AI works in abstracto (e.g. it is not empirical enough) because it does not study “actual people” in “actual situations” but rather relies on idealized models. As Daniel Dennett has remarked 35 —and countless texts could be provided from AI’s “founding fathers”—at least part of the work done in Artificial Intelligence must be conducted ‘top-down,’

33. Allen Newell, J. C. Shaw, and Herbert Simon, “Chess Playing Programs…,” 47.

34. Allen Newell, Unified Theories…, 23.
