popper-logic-scientific-discovery
(5)

Now let h be the statement

P(a, b) = r

and let e be the statement 'In a sample which has the size n and which satisfies the condition b (or which is taken at random from the population b), a is satisfied in n(r ± δ) of the instances'.*1 Then we may put, especially for small values of δ,

(6)

P(e) ≈ 2δ*2

appendix *ix 429<br />

We may even put P(e) = 2δ; for this would mean that we assign equal probabilities—and therefore, the probabilities 1/(n + 1)—to each of the n + 1 proportions, 0/n, 1/n, ..., n/n, with which a property a may occur in a sample of the size n. It follows that we should have to assign the probability, P(e) = (2d + 1)/(n + 1), to a statistical report e telling us that m ± d members of a population of the size n have the property a; so that putting δ = (d + 1/2)/(n + 1), we obtain P(e) = 2δ. (The equidistribution here described is the one which Laplace assumes in the derivation of his rule of succession. It is adequate for assessing the absolute probability, P(e), if e is a statistical report about a sample. But it is inadequate for assessing the relative probability P(e, h) of the same report, given a hypothesis h according to which the sample is the product of an n times repeated experiment whose possible results occur each with a certain probability. For in this case, it is adequate to assume a combinatoric, i.e. a Bernoullian rather than a Laplacean, distribution.)
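The arithmetic of this paragraph can be checked directly. The following sketch assumes illustrative values for n, m and d (none are given in the text) and verifies that Laplace's equidistribution over the n + 1 possible counts yields P(e) = (2d + 1)/(n + 1) = 2δ:

```python
from fractions import Fraction

def p_e_laplace(n, m, d):
    """Absolute probability of the report e ('m +/- d of the n members
    have the property a') under Laplace's equidistribution, which gives
    each of the n + 1 possible counts 0, 1, ..., n the probability
    1/(n + 1)."""
    admissible = [k for k in range(n + 1) if abs(k - m) <= d]
    return Fraction(len(admissible), n + 1)

# Illustrative values (not from the text): n = 100, report '50 +/- 3'.
n, m, d = 100, 50, 3
delta = Fraction(2 * d + 1, 2 * (n + 1))   # delta = (d + 1/2)/(n + 1)
assert p_e_laplace(n, m, d) == Fraction(2 * d + 1, n + 1) == 2 * delta
```

Note that the identity holds as long as the window m ± d lies wholly within 0, ..., n; for a window clipped at 0 or at n the number of admissible proportions, and hence P(e), is smaller.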

We see from (6) that, if we wish to make P(e) small, we have to make δ small.

On the other hand, P(e, h)—the likelihood of h—will be close to 1

* 1 It is here assumed that if the size of the sample is n, the frequency within this sample can be determined at best with an imprecision of ± 1/2n; so that for a finite n, we have δ ≥ 1/2n. (For large samples, this simply leads to δ > 0.)

* 2 Formula (6) is a direct consequence of the fact that the informative content of a statement increases with its precision, so that its absolute logical probability increases with its degree of imprecision; see sections 34 to 37. (To this we have to add the fact that in the case of a statistical sample, the degree of imprecision and the probability have the same minima and maxima, 0 and 1.)
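The contrast drawn in the main text, that P(e) can be made small by choosing δ small while the likelihood P(e, h) stays close to 1, can be sketched numerically. The sample size and window below are illustrative assumptions, not values from the text:

```python
from math import comb

def likelihood(n, m, d, r):
    """P(e, h): the probability of the report 'm +/- d of n instances
    show property a', given the hypothesis h that each of n independent
    trials yields a with probability r (the Bernoullian, i.e. binomial,
    distribution)."""
    return sum(comb(n, k) * r**k * (1 - r)**(n - k)
               for k in range(max(0, m - d), min(n, m + d) + 1))

# Illustrative assumption: n = 1000 trials, hypothesis r = 1/2, and a
# report centred on the expected count m = 500 with half-width d = 31.
n, m, d, r = 1000, 500, 31, 0.5
delta = (d + 0.5) / (n + 1)        # delta = (d + 1/2)/(n + 1)
print(2 * delta)                   # P(e) ~ 0.063: small
print(likelihood(n, m, d, r))      # likelihood of h close to 1
```

Here the absolute probability P(e) = 2δ is a few per cent, while the Bernoullian likelihood of the same report exceeds 0.9, since the window covers roughly two standard deviations of the binomial distribution.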
