Pedagogical example: Two observations, $X_1$ and $X_2$, are to be taken, where
$$
X_i = \begin{cases} \theta + 1 & \text{with probability } 1/2,\\ \theta - 1 & \text{with probability } 1/2. \end{cases}
$$
Consider the confidence set for the unknown $\theta \in \mathbb{R}$:
$$
C(X_1, X_2) = \begin{cases} \text{the singleton } \{(X_1 + X_2)/2\} & \text{if } X_1 \neq X_2,\\ \text{the singleton } \{X_1 - 1\} & \text{if } X_1 = X_2. \end{cases}
$$
The frequentist coverage of this confidence set is
$$
P_\theta\{C(X_1, X_2) \text{ contains } \theta\} = .75,
$$
which is not at all a sensible report once the data is at hand. Indeed, if $x_1 \neq x_2$, then we know for sure that $(x_1 + x_2)/2$ is equal to $\theta$, so that the confidence set is then actually 100% accurate. On the other hand, if $x_1 = x_2$, we do not know whether $\theta$ is the data's common value plus 1 or their common value minus 1, and each of these possibilities is equally likely to have occurred; the confidence interval is then only 50% accurate. While it is not wrong to say that the confidence interval has 75% coverage, it is obviously much more scientifically useful to report 100% or 50%, depending on the data. And again, this conditional report is still fully frequentist, averaging over the sets of data $\{(x_1, x_2) : x_1 \neq x_2\}$ and $\{(x_1, x_2) : x_1 = x_2\}$, respectively.

23.3 Likelihood and stopping rule principles

Suppose an experiment $E$ is conducted, which consists of observing data $X$ having density $f(x \mid \theta)$, where $\theta$ is the unknown parameter of the statistical model. Let $x_{\text{obs}}$ denote the data actually observed.

Likelihood Principle (LP): The information about $\theta$, arising from just $E$ and $x_{\text{obs}}$, is contained in the observed likelihood function
$$
L(\theta) = f(x_{\text{obs}} \mid \theta).
$$
Furthermore, if two observed likelihood functions are proportional, then they contain the same information about $\theta$.

The LP is quite controversial, in that it effectively precludes use of frequentist measures, which all involve averages of $f(x \mid \theta)$ over $x$ that are not observed. Bayesians automatically follow the LP because the posterior distribution of $\theta$ follows from Bayes' theorem (with $p(\theta)$ being the prior density for $\theta$) as
$$
p(\theta \mid x_{\text{obs}}) = \frac{p(\theta)\, f(x_{\text{obs}} \mid \theta)}{\int p(\theta')\, f(x_{\text{obs}} \mid \theta')\, d\theta'},
$$
which depends on the data only through the observed likelihood function $L(\theta)$.
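As a quick numerical check of the proportionality claim, the following sketch discretises Bayes' theorem on a grid. The flat prior, the grid, and the particular likelihood $\theta^3 (1-\theta)^7$ (three successes in ten Bernoulli($\theta$) trials, written up to a constant) are illustrative assumptions, not taken from the text; the point is only that the normalising integral absorbs any constant multiplying the likelihood, so a likelihood and a proportional copy of it yield the same posterior.

import numpy as np

# Grid of theta values and a flat prior p(theta); both are assumptions
# made purely for illustration.
theta = np.linspace(0.001, 0.999, 999)
prior = np.ones_like(theta)

# An assumed likelihood L(theta) and a proportional copy c * L(theta).
lik = theta**3 * (1.0 - theta)**7
lik_scaled = 120.0 * lik

def posterior(prior, lik, grid):
    # Discretised Bayes' theorem: p(theta | x_obs) = p(theta) L(theta) / integral.
    unnorm = prior * lik
    return unnorm / np.trapz(unnorm, grid)

post1 = posterior(prior, lik, theta)
post2 = posterior(prior, lik_scaled, theta)
print(np.max(np.abs(post1 - post2)))  # ~0 (floating-point rounding only)

The constant cancels between numerator and denominator, which is exactly why proportional likelihoods lead a Bayesian to the same conclusions about $\theta$.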
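Returning to the pedagogical example above, the 75% unconditional coverage and the 100%/50% conditional accuracies can also be checked by simulation. This is a minimal sketch; the fixed value $\theta = 3.0$, the seed, and the number of replications are arbitrary choices for illustration.

import numpy as np

rng = np.random.default_rng(0)
theta = 3.0          # arbitrary "true" value for the simulation
n = 200_000          # number of simulated data sets

# X_i = theta + 1 or theta - 1, each with probability 1/2.
x1 = theta + rng.choice([-1.0, 1.0], size=n)
x2 = theta + rng.choice([-1.0, 1.0], size=n)

# C(X1, X2) = {(X1 + X2)/2} if X1 != X2, and {X1 - 1} if X1 = X2.
guess = np.where(x1 != x2, (x1 + x2) / 2.0, x1 - 1.0)
covered = np.isclose(guess, theta)

print(covered.mean())              # unconditional coverage, ~0.75
print(covered[x1 != x2].mean())    # conditional on x1 != x2: 1.0
print(covered[x1 == x2].mean())    # conditional on x1 == x2: ~0.5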
