Counterexamples to a Likelihood Theory of Evidence
comparing hypotheses in different models. There the standard notion of statistical sufficiency breaks down, and the relevant statistics may not be sufficient in the sense required by the LTE.

The Asymmetry of Regression

The same challenge to LTE extends to the linear regression problem ('regression' is the statistician's name for curve fitting). In these examples, you are also told that one of the two hypotheses or models is true, and you are invited to say which one is true on the basis of the data. They are examples in which anyone can tell from the full data (with moral certainty) which is true, but nobody can tell from a knowledge solely of the likelihoods. It is not because Bayesians, or anyone else, are using likelihoods in the wrong way. It's because the relevant information is not there!

Suppose that data are generated by the 'structural' or 'causal' equation Y = X + U, where X and Y are observed variables, and U is a normal, or Gaussian, random variable with mean zero and unit variance, where U is probabilistically independent of X.¹² To use this to generate pairs of values (x, y), we must also provide a generating distribution for X, represented by the equation X = µ + W, where µ is the mean value of X, and W is another standard Gaussian random variable, probabilistically independent of U, also with zero mean and unit variance. Two hundred data points are shown in Fig. 3. The vertical bar centered on the line Y = X represents the probability density of y given a particular value of x.

Figure 3: There are two ways of generating the same data: the Forward method and the Backward method (see text). It is impossible to tell which method was used from the data alone.

Example 1: Consider two hypotheses about how the data are generated. The Forward method randomly chooses an x value, and then determines the y value by adding a Gaussian error above or below the Forward line (Y = X). This is the method described in the previous paragraph. The Backward method randomly chooses a y value according to the equation Y = µ + √2 Z, where Z is a Gaussian variable with zero mean and unit variance. Then the x value is determined by adding a Gaussian error (with half the variance) to the left or the right of the Backward line Y = 2X (slope = 2). This probability density is represented by the horizontal bar centered on the line Y = 2X (see Fig. 3). In this

¹² Causal modeling of this kind has received a great deal of attention in recent years. See Pearl (2000) for a comprehensive survey of recent results, as well as Woodward (2003) for an introduction that is more accessible to philosophers.
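To make the observational equivalence of the two methods concrete, here is a minimal simulation sketch. It is not part of the paper: NumPy, the choice µ = 0 (under which the Backward line is exactly Y = 2X, as labeled in Fig. 3), the random seed, and the variable names are all assumptions made here for illustration. Under the forward equations, (X, Y) is bivariate Gaussian with mean (µ, µ), Var(X) = 1, Var(Y) = 2, and Cov(X, Y) = 1; the Backward method, drawing Y = µ + √2 Z and then placing X around the Backward line with error variance 1/2, yields exactly the same distribution.

```python
import numpy as np

# Illustrative assumptions: mu = 0, seed 0, and n = 200 points (as in Fig. 3).
rng = np.random.default_rng(0)
mu, n = 0.0, 200

# Forward method: draw X = mu + W, then Y = X + U,
# with W and U independent standard Gaussians.
w, u = rng.standard_normal(n), rng.standard_normal(n)
x_f = mu + w
y_f = x_f + u

# Backward method: draw Y = mu + sqrt(2) Z, then place X around the
# Backward line (x = y/2 when mu = 0) with a Gaussian error of half
# the variance, i.e. V ~ N(0, 1/2).
z = rng.standard_normal(n)
y_b = mu + np.sqrt(2.0) * z
x_b = mu + (y_b - mu) / 2.0 + np.sqrt(0.5) * rng.standard_normal(n)

# Either way the pair is bivariate Gaussian with mean (mu, mu),
# Var(X) = 1, Var(Y) = 2, Cov(X, Y) = 1.
for label, x, y in (("Forward", x_f, y_f), ("Backward", x_b, y_b)):
    print(label, np.cov(x, y).round(2).tolist())
```

On any run, the two printed sample covariance matrices agree up to ordinary sampling error, which is the sense in which the caption of Fig. 3 says that the data alone cannot reveal which method was used.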
