2. Philosophy - Stefano Franchi

BETWEEN ENGINEERING, SCIENCE, AND PHILOSOPHY

since it takes its lead from necessarily simplified models of intelligence in order to proceed to their implementation as computer programs. This procedure has prompted some scientists to raise the charge of “idealism” against AI—not by chance a philosophical term—because of its disregard of the empirical data. This remark, regardless of the truth of the objection, is extremely telling of the difficulty faced by Artificial Intelligence in its efforts at scientific self-justification: the constraints imposed by its engineering-like component—the necessity of keeping models simple so that programs can actually be written—exert a pressure on the discipline that tends to pull it away from empirical science (psychology) and move it closer to philosophy, although the latter is most often interpreted in a derogatory sense.

The third difficulty is related to the first two as well, and it represents just another side of the tension between the engineering and the scientific souls of AI. Artificial Intelligence has often been accused by psychologists of performing “wild generalizations” in order to keep its “theories” tractable and formalizable. In other words, psychologists of different specializations and theoretical orientations have objected that even before it reaches the design phase, and in fact in order to reach it, AI has to abstract, from the outset, from all the interesting psychological facts. Thus, its models are based on “ideal competencies” that are so far from the real cases as to bear no relevance to them. One of the most trenchant critiques of this kind has been offered by John Marshall in a review of Marvin Minsky’s The Society of Mind. 36 Marshall’s strategy is simple: he examines several of the ideas proposed by Minsky about memory, perception, etc., and in each instance he quotes the relevant psychological literature, pointing out that the issue tackled by Minsky has already been addressed—with the important difference that the psychological treatment of, say, memory, did in fact thematize specific problems, extract specific data, and propose specific, testable, repeatable explanations instead of what he takes to be just vague intuitions on ill-defined sub-

35. Daniel Dennett, “Artificial Intelligence as Philosophy and as Psychology,” Brainstorms (Cambridge: MIT Press, 1978) 109-126. See also Daniel Dennett, “When Philosophers Encounter Artificial Intelligence,” Daedalus, 117, 1 (1988) 283-295, for a defense of the engineeristic virtues of AI vs. its alleged philosophical (in the negative sense) and scientific (in the strong sense) shortcomings.

36. John Marshall, “Close enough for AI?” Journal of Semantics, 5, 169-173.

