Book 7 - Probability and Statistics 1 - Sheynin, Oscar

Theorem 3. If the prior density $\gamma_3(a; h)$ has bounded first derivatives with respect to $a$ and $h$, and $\gamma_3(a; h) > 0$, then, uniformly with respect to the normed deviations $\alpha_1 = h_1\sqrt{n}\,(a - \bar x)$ and $\beta_1 = \sqrt{n}\,(h - h_1)/h_1$,

$$\gamma_3(\alpha_1; \beta_1 \mid x_1, x_2, \ldots, x_n) = \frac{1}{\pi}\exp(-\alpha_1^2 - \beta_1^2)\,\bigl[1 + (1 + |\alpha_1| + |\beta_1|)\,O(1/n)\bigr]. \quad (47)$$

As in Theorem 2, $O(1/n)$ denotes a magnitude having the order of $1/n$ uniformly with respect to $\alpha_1$ and $\beta_1$ if $\gamma_3(a; h)$, $\bar x$ and $h_1$ are constant and $n \to \infty$. The proof is quite similar to that of Theorem 2 and we do not adduce it.

It follows from (47) that

$$E(a \mid x_1, x_2, \ldots, x_n) = \bar x + (1/h_1)\,O(1/n), \quad (48)$$

$$E[(a - \bar x)^2 \mid x_1, x_2, \ldots, x_n] = \frac{1}{2n h_1^2}\,[1 + O(1/n)], \quad (49)$$

$$P\Bigl[\,|a - \bar x| \le \frac{c}{h_1\sqrt{n}} \,\Big|\, x_1, x_2, \ldots, x_n\Bigr] = \frac{2}{\sqrt{\pi}}\int_0^c \exp(-z^2)\,dz + O(1/n), \quad (50)$$

$$E(h \mid x_1, x_2, \ldots, x_n) = h_1\,[1 + O(1/n)], \quad (51)$$

$$E[(h - h_1)^2 \mid x_1, x_2, \ldots, x_n] = \frac{h_1^2}{2n}\,[1 + O(1/n)], \quad (52)$$

$$P\Bigl[\,|h - h_1| \le \frac{c\,h_1}{\sqrt{n}} \,\Big|\, x_1, x_2, \ldots, x_n\Bigr] = \frac{2}{\sqrt{\pi}}\int_0^c \exp(-z^2)\,dz + O(1/n). \quad (53)$$

Neglecting magnitudes of the order of $1/n$ in Problem 3, it is natural to assume, on the strength of formulas (48) and (51), that $\bar x$ is an approximate value of $a$, and $h_1$, an approximate value of $h$. And, issuing from formulas (49) and (52), we may approximately consider $h_1\sqrt{n}$ and $\sqrt{n}/h_1$ as the measures of precision of these approximations respectively. Formulas (50) and (53) allow us to determine, to within magnitudes of the order of $1/n$, the confidence limits for $a$ and $h$ corresponding to a given probability.

As in Problem 2, the conditional expectation $E(h \mid x_1, x_2, \ldots, x_n)$ is only determined by formula (51) to within the factor $[1 + O(1/n)]$. Therefore, in keeping with the viewpoint adopted in this section, discussions about choosing $h_1$ or, for example, $h_2 = n/S_1^2$ as the approximate value of $h$ are meaningless.

From the practical point of view, Theorems 1, 2 and 3 are not equally important. According to Theorem 1, the precision of the approximate formulas (29) and (17) increases, for a constant $\gamma_1(a)$, not only with increasing $n$, but also with an increase in the measure of precision $h$. Therefore, if the mean square deviation $\sigma = 1/(h\sqrt{2})$ is small as compared with the a priori admissible region of $a$, we are somewhat justified in applying these formulas for small values of $n$ (and even for $n = 1$) as well. However, in the case of Theorems 2 and 3 the remainder terms of formulas (42) and (47) only decrease with increasing $n$, so that they do not offer anything for small values of $n$.

5. The Fisherian Confidence Limits and Confidence Probabilities. As stated in the Introduction, the problem of approximately estimating a parameter $\theta$ given (1) can be formulated, in particular, thus: it is required to determine such confidence limits $\underline{\theta}(x_1, x_2, \ldots, x_n)$ and $\bar\theta(x_1, x_2, \ldots, x_n)$ for $\theta$ that it would be practically possible to neglect the case in which $\theta$ is situated beyond the interval (the confidence interval) $[\underline{\theta}; \bar\theta]$.

In order to judge whether certain confidence limits $\underline{\theta}$, $\bar\theta$ for the parameter $\theta$ are suitable for a given (1), it is natural to consider the conditional probability

$$P(\underline{\theta} \le \theta \le \bar\theta \mid x_1, x_2, \ldots, x_n). \quad (54)$$
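To make the practical use of formulas (48)-(53) concrete, the following minimal Python sketch (an illustration added here, not taken from the paper) computes the approximate values $\bar x$ and $h_1$ from a sample and the confidence limits for $a$ and $h$ suggested by (50) and (53). The explicit form taken for $h_1$, namely $\sqrt{n/(2\sum_i (x_i - \bar x)^2)}$, is an assumption consistent with (49) and (52); $h_1$ itself is introduced earlier in the paper, outside this excerpt. The synthetic sample and the confidence probability 0.95 are likewise chosen only for the example.

```python
# Minimal illustration of formulas (48)-(53): point estimates and
# confidence limits for the centre a and the measure of precision h.
# The form of h1 below and the synthetic sample are assumptions of the example.
import math
import random

def erf_inverse(p, tol=1e-12):
    """Find c with (2/sqrt(pi)) * integral_0^c exp(-z^2) dz = erf(c) = p, by bisection."""
    lo, hi = 0.0, 10.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if math.erf(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Synthetic sample: centre a = 2.0, measure of precision h = 1.5,
# so the mean square deviation is sigma = 1/(h*sqrt(2)).
random.seed(0)
a_true, h_true, n = 2.0, 1.5, 50
sample = [random.gauss(a_true, 1.0 / (h_true * math.sqrt(2.0))) for _ in range(n)]

xbar = sum(sample) / n                        # approximate value of a, cf. (48)
s1sq = sum((x - xbar) ** 2 for x in sample)
h1 = math.sqrt(n / (2.0 * s1sq))              # assumed form of h1, consistent with (49), (52)

omega = 0.95                                  # chosen confidence probability
c = erf_inverse(omega)

# Confidence limits from (50) for a and from (53) for h,
# valid up to terms of the order of 1/n.
a_lo, a_hi = xbar - c / (h1 * math.sqrt(n)), xbar + c / (h1 * math.sqrt(n))
h_lo, h_hi = h1 * (1.0 - c / math.sqrt(n)), h1 * (1.0 + c / math.sqrt(n))

print(f"a: {xbar:.3f}, limits [{a_lo:.3f}, {a_hi:.3f}]")
print(f"h: {h1:.3f}, limits [{h_lo:.3f}, {h_hi:.3f}]")
```

The precision measures $h_1\sqrt{n}$ and $\sqrt{n}/h_1$ mentioned above appear here implicitly: the half-widths of the two intervals are $c$ divided by exactly these quantities.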
