1 Studies in the History of Statistics and Probability ... - Sheynin, Oscar



P{max ξ(t) ≥ x} = 0.01, 0 ≤ t ≤ 1,

where t is measured in years and the left part of the inequality concerns the maximal yearly wind velocity.

Suppose that we know the values ξ_1, ξ_2, ..., ξ_n of the maximal velocity during the first, the second, ..., the n-th year in which meteorological observations were made. However, wind velocities had not been recorded continuously but only several times a day, so that those maximal yearly velocities are in essence unknown. For the time being, let us nevertheless abstract ourselves from this extremely essential difficulty.

And so, we have those observations of the random variable ξ, the maximal yearly wind velocity, and we wish to assign an x such that

P{ξ ≥ x} = 0.01. (4.11)

Had the number n been very large, we would have been obliged to select such an x that about a hundredth part of the ξ_i will be larger than it. The trouble, however, is that n, the number of years during which observations are available, is much less than 100. Then, if x is such that (4.11) is fulfilled, that is,

P{ξ_i ≥ x} = 0.01 for each i,

the number of variables ξ_i larger than x will obey the Poisson law with parameter λ = 0.01n < 1.
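The Poisson argument above can be checked numerically. A minimal sketch (the function name and the sample values of n are illustrative, not from the text): with λ = 0.01n, the probability that not a single ξ_i exceeds x is P(Poisson(λ) = 0) = e^(−λ).

```python
import math

# Sketch of the argument in the text: if each yearly maximum exceeds x
# with probability 0.01, the number of exceedances among n years is
# approximately Poisson with parameter lam = 0.01 * n.
def prob_no_exceedance(n: int, p: float = 0.01) -> float:
    """P(Poisson(n*p) = 0) = exp(-n*p): chance that no ξ_i exceeds x."""
    return math.exp(-n * p)

# Even with 50 years of records (lam = 0.5 < 1), the most likely outcome
# is that no observation at all exceeds the sought level x:
for n in (10, 30, 50):
    print(n, round(prob_no_exceedance(n), 3))
```

For n = 50 this probability is e^(−0.5) ≈ 0.61, which is exactly why the sample alone gives no upper boundary for x.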
It will follow that most likely all of our ξ_i will be less than x, so that we are only able to say that x should be larger than each of the ξ_i's, with no upper boundary available.

Therefore, we are tempted to smooth our ξ_1, ..., ξ_n by some law, for example by the normal law N(x; ξ̄, s), and to determine x from the equation

N(x; ξ̄, s) = 1 − 0.01 = 0.99.

Or, we will propose to identify the tail areas of the unknown function F(x) with those of the normal law.

We turn the readers' attention to the fact that such a procedure should not be trusted, whether applying the normal or any other law, and that there exist both theoretical grounds and considerations based on statistical experiments for that inference. The theoretical grounds consist in that the central limit theorem only states that the difference between the exact distribution function P{s_n* < x} and the normal law is small:

P{s_n* < x} − N(x) → 0.

For example, if that probability P{s_n* < x} = 0.95 and N(x) = 0.99, the difference is only 0.04, which is sufficiently small. However, the relative error of the tail areas,

{[1 − P{s_n* < x}] − [1 − N(x)]} ÷ [1 − N(x)] = 400%.
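The numeric point above is easy to verify. A minimal check, using the two probability values quoted in the text (0.95 and 0.99): the absolute difference guaranteed small by the central limit theorem is 0.04, yet the true tail area is five times the approximated one, a relative error of 400%.

```python
# Tail-error check for the example in the text: the CLT bounds only the
# absolute difference P{s_n* < x} - N(x); for small tail probabilities
# the relative error can nevertheless be enormous.
P = 0.95   # exact probability P{s_n* < x}  (value from the text)
N = 0.99   # normal approximation N(x)      (value from the text)

abs_error = N - P                              # absolute difference, 0.04
rel_error = ((1 - P) - (1 - N)) / (1 - N)      # relative error of tail areas

print(f"absolute difference: {abs_error:.2f}")   # 0.04
print(f"relative tail error: {rel_error:.0%}")   # 400%
```

The design point: an approximation judged by absolute error alone can be useless for the very quantity of interest, the small tail probability 1 − F(x).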
