
inferences from observed to unobserved objects, then, Hume says, ‘I wou’d renew my question, why from this experience we form any conclusion beyond those past instances, of which we have had experience’. In other words, Hume points out that we get involved in an infinite regress if we appeal to experience in order to justify any conclusion concerning unobserved instances—even mere probable conclusions, as he adds in his Abstract. For there we read: ‘It is evident that Adam, with all his science, would never have been able to demonstrate that the course of nature must continue uniformly the same. ... Nay, I will go farther, and assert that he could not so much as prove by any probable arguments that the future must be conformable to the past. All probable arguments are built on the supposition that there is conformity betwixt the future and the past, and therefore can never prove it.’⁶ Thus (+) is not justifiable by experience; yet in order to be logically valid, it would have to be of the character of a tautology, valid in every logically possible universe. But this is clearly not the case.

Thus (+), if true, would have the logical character of a synthetic a priori principle of induction, rather than of an analytic or logical assertion. But it does not quite suffice even as a principle of induction. For (+) may be true, and p(a) = 0 may be valid none the less. (An example of a theory which accepts (+) as a priori valid—though, as we have seen, (+) must be synthetic—and which at the same time accepts p(a) = 0, is Carnap’s.⁷)
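Why p(a) = 0 is fatal to any probabilistic principle of induction can be made explicit by a short calculation (a sketch in standard probability notation, assuming only the general definition of relative probability, p(a, b) = p(ab)/p(b) for p(b) > 0):

\[
p(a) = 0 \;\Longrightarrow\; p(a, b) \;=\; \frac{p(ab)}{p(b)} \;\le\; \frac{p(a)}{p(b)} \;=\; 0
\qquad \text{for every evidence statement } b \text{ with } p(b) > 0,
\]

since the conjunction ab entails a, so that p(ab) ≤ p(a). If the absolute probability of a universal theory a is zero, then no accumulation of singular evidence can raise its relative probability above zero; this is the sense in which, on Jeffreys’s view, p(a) = 0 would make learning from experience impossible (cf. note 7 below).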

An effective probabilistic principle of induction would have to be even stronger than (+). It would have to allow us, at least, to conclude that for some fitting singular evidence b, we may obtain p(a, b) > 1/2, or in words, that a may be made, by accumulating evidence in its

6. Cf. An Abstract of a Book lately published entitled A Treatise of Human Nature, 1740, ed. by J. M. Keynes and P. Sraffa, 1938, p. 15. Cf. note 2 to section 81. (The italics are Hume’s.)

7. Carnap’s requirement that his ‘lambda’ (which I have shown to be the reciprocal of a dependence measure) must be finite entails our (+); cf. his Continuum of Inductive Methods, 1952. Nevertheless, Carnap accepts p(a) = 0, which according to Jeffreys would entail the impossibility of learning from experience. And yet, Carnap bases his demand that his ‘lambda’ must be finite, and thus that (+) is valid, on precisely the same transcendental argument to which Jeffreys appeals—that without it, we could not learn from experience. See his Logical Foundations of Probability, 1950, p. 565, and my contribution to the Carnap volume of the Library of Living Philosophers, ed. by P. A. Schilpp, especially note 87. This is now also in my Conjectures and Refutations, 1963.
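The role of Carnap’s ‘lambda’ can be sketched with the singular predictive inference of the Continuum of Inductive Methods, in its usual presentation. (The symbols in this sketch are not taken from the text above: κ is the number of Q-predicates, s the size of the observed sample, and s_i the number of instances in the sample bearing the predicate in question.)

\[
c(h, e) \;=\; \frac{s_i + \lambda/\kappa}{s + \lambda}.
\]

With λ finite, this value approaches the observed relative frequency s_i/s as the sample grows, so that experience can alter it; with λ infinite it remains at the ‘logical’ value 1/κ whatever is observed. A finite λ is thus what allows learning from experience in Carnap’s system, which is the point of the transcendental argument mentioned in note 7.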
