A career in statistics

At Stanford, I worked on many research projects which involved optimization and asymptotic results. Many seemed to come easily with the use of Taylor's theorem, the Central Limit Theorem and the Mann–Wald results. A more difficult case was in the theorem of Chernoff and Savage (1958) where we established the Hodges–Lehmann conjecture about the efficiency of the nonparametric normal scores test. I knew very little about nonparametrics, but when Richard Savage and M. Dwass mentioned the conjecture, I thought that the variational argument would not be difficult, and it was easy. What surprised me was that the asymptotic normality, when the hypothesis of the equality of the two distributions is false, had not been established. Our argument approximating the relevant cumulative distribution function by a Gaussian process was tedious but successful. The result apparently opened up a side industry in nonparametric research which was a surprise to Jimmie Savage, the older brother of Richard.

One side issue is the relevance of optimality and asymptotic results. In real problems the asymptotic result may be a poor approximation to what is needed. But, especially in complicated cases, it provides a guide for tabulating finite-sample results in a reasonable way with a minimum of relevant variables. Also, for technical reasons optimality methods are not always available, but what is optimal can reveal how much is lost by using practical methods and when one should search for substantially better ones, and often how to do so.

Around 1958, I proved that for the case of a finite number of states of nature and a finite number of experiments, an asymptotically optimal sequential design consists of solving a game where the payoff for the statistician using the experiment e against nature using θ is I(θ̂, θ, e) and I is the Kullback–Leibler information, assuming the current estimate θ̂ is the true value of the unknown state (Chernoff, 1959).
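A minimal numerical sketch of that game may help fix ideas. The states, the two Bernoulli experiments, their success probabilities, and the grid search below are all illustrative assumptions of mine, not Chernoff's construction: the statistician mixes over experiments to maximize the worst-case Kullback–Leibler information against nature's choice of a rival state.

```python
import math

def kl_bernoulli(p, q):
    """Kullback-Leibler divergence KL(Bern(p) || Bern(q))."""
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

# Hypothetical setup: three states of nature, two Bernoulli experiments.
# q[e][theta] = success probability of experiment e under state theta.
q = [
    [0.5, 0.8, 0.5],   # experiment 0 distinguishes state 1 from state 0, not state 2
    [0.5, 0.5, 0.2],   # experiment 1 distinguishes state 2 from state 0, not state 1
]
theta_hat = 0          # current estimate, assumed to be the true state
rivals = [1, 2]        # states nature may play against the statistician

def info(e, theta):
    """Payoff I(theta_hat, theta, e): KL information provided by experiment e."""
    return kl_bernoulli(q[e][theta_hat], q[e][theta])

# Solve the game on a grid of mixtures: weight w on experiment 0, 1 - w on
# experiment 1; maximize the minimum information over nature's rival states.
best_w, best_value = 0.0, -1.0
for step in range(1001):
    w = step / 1000
    value = min(w * info(0, t) + (1 - w) * info(1, t) for t in rivals)
    if value > best_value:
        best_w, best_value = w, value

print(f"optimal weight on experiment 0: {best_w:.3f}, game value: {best_value:.4f}")
```

In this symmetric toy setup each experiment is informative against exactly one rival state, so the maximin mixture splits evenly between them; the point of the result is that repeatedly playing such a mixture, recomputed at the current estimate θ̂, is asymptotically optimal.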
This result was generalized to infinitely many experiments and states by Bessler (1960) and Albert (1961), but Albert's result required that the states corresponding to different terminal decisions be separated.

This raised the simpler non-design problem of how to handle the test that the mean of a Normal distribution with known variance is positive or negative. Until then the closest approach to this had been to treat the case of three states of nature a, 0, −a for the means and to minimize the expected sample size for 0 when the error probabilities for the other states were given. This appeared to me to be an incorrect statement of the relevant decision problem, which I asked G. Schwarz to attack. There the cost was a loss for the wrong decision and a cost per observation (no loss when the mean is 0). Although the techniques in my paper would work, Schwarz (1962) did a beautiful job using a Bayesian approach. But the problem where the mean could vary over the entire real line was still not done.

I devoted much of the next three years to dealing with the non-design problem of sequentially testing whether the mean of a Normal distribution with known variance is positive or negative. On the assumption that the payoff for each decision is a smooth function of the mean µ, it seems natural to
