Conditioning is the issue

test to be .5. It was decided that a discovery should be reported if $O_{\text{pre}} \ge 10$, which from (23.6) would require $\alpha \le 5 \times 10^{-7}$; this became the recommended standard for significance in GWAS studies. Using this standard for a large dataset, the paper found 21 genome/disease associations, virtually all of which have been subsequently verified.

An alternative approach that was discussed in the paper is to use the posterior odds rather than pre-experimental odds, i.e., to condition. The posterior odds are
\[
O_{\text{post}}(x) = \frac{\pi_1}{\pi_0} \times \frac{m(x \mid H_1)}{f(x \mid 0)}, \tag{23.7}
\]
where $m(x \mid H_1) = \int f(x \mid \theta)\,p(\theta)\,d\theta$ is the marginal likelihood of the data $x$ under $H_1$. (Again, this prior could be a point mass at $\theta^*$ in a frequentist setting.) It was noted in the paper that the posterior odds for the 21 claimed associations ranged between $1/10$ (i.e., evidence against the association being true) and $10^{68}$ (overwhelming evidence in favor of the association). It would seem that these conditional odds, based on the actual data, are much more scientifically informative than the fixed pre-experimental odds of 10/1 for the chosen $\alpha$, but the paper did not ultimately recommend their use because it was felt that a frequentist justification was needed.

Actually, use of $O_{\text{post}}$ is as fully frequentist as is use of $O_{\text{pre}}$, since it is trivial to show that $E\{O_{\text{post}}(x) \mid H_0, R\} = O_{\text{pre}}$, i.e., the average of the conditional reported odds equals the actual pre-experimental reported odds, which is all that is needed to be fully frequentist. So one can have the much more scientifically useful conditional report, while maintaining full frequentist justification. This is yet another case where, upon getting the conditioning right, a frequentist completely agrees with a Bayesian.

23.6 Final comments

Lots of bad science is being done because of a lack of recognition of the importance of conditioning in statistics.
Overwhelmingly at the top of the list is the use of p-values while acting as if they are actually error probabilities. The common approach to testing a sequence of hypotheses is a new addition to the list of bad science arising from a lack of conditioning. The use of pre-experimental odds rather than posterior odds in GWAS studies is not so much bad science as a failure to recognize a conditional frequentist opportunity that is available to improve science. Violation of the stopping rule principle in sequential (or interim) analysis is in a funny position. While it is generally suboptimal (for instance, one could do conditional frequentist testing instead), it may be necessary if one is committed to certain inferential procedures such as fixed Type I error probabilities. (In other words, one mistake may require the incorporation of another mistake.)
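The identity $E\{O_{\text{post}}(x) \mid H_0, R\} = O_{\text{pre}}$ invoked above can be checked by simulation. The sketch below is a toy illustration, not the GWAS analysis: it assumes a single normal observation, a point null $\theta = 0$, a point alternative $\theta^* = 2$, a one-sided rejection region at roughly $\alpha = 0.05$, and prior odds $\pi_1/\pi_0 = 1$; since (23.6) itself is not restated in this excerpt, $O_{\text{pre}}$ is taken to be the prior odds times power$/\alpha$, a form consistent with the numbers quoted above.

```python
# Monte Carlo sketch of the identity E{O_post(x) | H0, R} = O_pre.
# Toy setting (illustrative assumptions, not the GWAS values in the text):
# one observation x ~ N(theta, 1), H0: theta = 0, point-mass alternative
# at theta* = 2, one-sided rejection region R = {x > z_alpha},
# prior odds pi1/pi0 = 1.
import math
import random

random.seed(1)

theta_star = 2.0   # point mass at theta* (the frequentist setting above)
z_alpha = 1.6449   # upper 5% quantile of N(0, 1)
prior_odds = 1.0   # pi1 / pi0

def upper_tail(z):
    """P(Z > z) for Z ~ N(0, 1)."""
    return 0.5 * math.erfc(z / math.sqrt(2.0))

alpha = upper_tail(z_alpha)               # P(R | H0)
power = upper_tail(z_alpha - theta_star)  # P(R | theta*)

# Pre-experimental odds, taking (23.6) as prior odds * power / alpha
# (an assumption; the equation is not restated in this excerpt):
o_pre = prior_odds * power / alpha

# Posterior odds (23.7): with a point-mass prior, m(x | H1) = f(x | theta*),
# so O_post(x) is the prior odds times the likelihood ratio.
def o_post(x):
    return prior_odds * math.exp(theta_star * x - 0.5 * theta_star ** 2)

# Generate data under H0, keep only rejections, and average O_post over them.
draws = (random.gauss(0.0, 1.0) for _ in range(1_000_000))
rejected = [x for x in draws if x > z_alpha]
avg_conditional = sum(o_post(x) for x in rejected) / len(rejected)

print(f"O_pre             = {o_pre:.2f}")
print(f"E[O_post | H0, R] = {avg_conditional:.2f} (Monte Carlo)")
```

The conditional average reproduces $O_{\text{pre}}$ because $E\{O_{\text{post}}(x) \mid H_0, R\} = (\pi_1/\pi_0)\,P(R \mid H_1)/P(R \mid H_0)$, which is exactly the prior odds times power$/\alpha$.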
