SAMPLE MOMENTS

1. POPULATION MOMENTS

1.1. Moments about the origin (raw moments). The rth moment about the origin of a random variable $X$, denoted by $\mu'_r$, is the expected value of $X^r$; symbolically,
\[
\mu'_r = E(X^r) = \sum_x x^r f(x) \tag{1}
\]
for $r = 0, 1, 2, \ldots$ when $X$ is discrete, and
\[
\mu'_r = E(X^r) = \int_{-\infty}^{\infty} x^r f(x)\, dx \tag{2}
\]
when $X$ is continuous. The rth moment about the origin is defined only if $E[X^r]$ exists. A moment about the origin is sometimes called a raw moment. Note that $\mu'_1 = E(X) = \mu_X$, the mean of the distribution of $X$, or simply the mean of $X$. The rth moment is sometimes written as a function of $\theta$, where $\theta$ is a vector of parameters that characterize the distribution of $X$.

If there is a sequence of random variables $X_1, X_2, \ldots, X_n$, we will call the rth population moment of the ith random variable $\mu'_{i,r}$ and define it as
\[
\mu'_{i,r} = E(X_i^r) \tag{3}
\]

1.2. Central moments. The rth moment about the mean of a random variable $X$, denoted by $\mu_r$, is the expected value of $(X - \mu_X)^r$; symbolically,
\[
\mu_r = E[(X - \mu_X)^r] = \sum_x (x - \mu_X)^r f(x) \tag{4}
\]
for $r = 0, 1, 2, \ldots$ when $X$ is discrete, and
\[
\mu_r = E[(X - \mu_X)^r] = \int_{-\infty}^{\infty} (x - \mu_X)^r f(x)\, dx \tag{5}
\]
when $X$ is continuous. The rth moment about the mean is defined only if $E[(X - \mu_X)^r]$ exists. The rth moment about the mean of a random variable $X$ is sometimes called the rth central moment of $X$. The rth central moment of $X$ about $a$ is defined as $E[(X - a)^r]$. If $a = \mu_X$, we have the rth central moment of $X$ about $\mu_X$. Note that $\mu_1 = E[X - \mu_X] = 0$ and $\mu_2 = E[(X - \mu_X)^2] = \operatorname{Var}(X) = \sigma_X^2$.

Date: December 7, 2005.
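As a numerical illustration (a sketch, not part of the original notes), the defining sums in equations (1) and (4) can be computed directly for a small discrete distribution; a fair six-sided die is used here as the example.

```python
# Raw and central moments of a fair six-sided die, f(x) = 1/6 for x = 1..6,
# computed straight from the defining sums in equations (1) and (4).
xs = [1, 2, 3, 4, 5, 6]
f = {x: 1 / 6 for x in xs}

def raw_moment(r):
    """mu'_r = E(X^r) = sum_x x^r f(x)."""
    return sum(x**r * f[x] for x in xs)

def central_moment(r):
    """mu_r = E[(X - mu_X)^r], with mu_X = mu'_1."""
    mu = raw_moment(1)
    return sum((x - mu)**r * f[x] for x in xs)

mean = raw_moment(1)               # mu'_1 = 3.5
var = central_moment(2)            # mu_2 = 35/12
first_central = central_moment(1)  # mu_1 = 0 for any distribution
```

The last line checks the note above: the first central moment is identically zero, since $E[X - \mu_X] = \mu_X - \mu_X = 0$.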


2.2.2. Variance of $\bar{X}^r_n$. First consider the case where we have a sample $X_1, X_2, \ldots, X_n$.
\[
\operatorname{Var}(\bar{X}^r_n) = \operatorname{Var}\!\left(\frac{1}{n}\sum_{i=1}^n X_i^r\right) = \frac{1}{n^2}\operatorname{Var}\!\left(\sum_{i=1}^n X_i^r\right) \tag{14}
\]
If the $X$'s are independent, then
\[
\operatorname{Var}(\bar{X}^r_n) = \frac{1}{n^2}\sum_{i=1}^n \operatorname{Var}(X_i^r) \tag{15}
\]
If the $X$'s are independent and identically distributed, then
\[
\operatorname{Var}(\bar{X}^r_n) = \frac{1}{n}\operatorname{Var}(X^r) \tag{16}
\]
where $X$ denotes any one of the random variables (because they are all identical). In the case where $r = 1$, we obtain
\[
\operatorname{Var}(\bar{X}_n) = \frac{1}{n}\operatorname{Var}(X) = \frac{\sigma^2}{n} \tag{17}
\]

3. SAMPLE CENTRAL MOMENTS

3.1. Definitions. Assume there is a sequence of random variables $X_1, X_2, \ldots, X_n$. We define the sample central moments as
\[
C^r_n = \frac{1}{n}\sum_{i=1}^n \left(X_i - \mu'_{i,1}\right)^r, \quad r = 1, 2, 3, \ldots,
\]
\[
\Rightarrow\; C^1_n = \frac{1}{n}\sum_{i=1}^n \left(X_i - \mu'_{i,1}\right), \qquad C^2_n = \frac{1}{n}\sum_{i=1}^n \left(X_i - \mu'_{i,1}\right)^2 \tag{18}
\]
These are defined only if $\mu'_{i,1}$ is known.

3.2. Properties of Sample Central Moments.

3.2.1. Expected value of $C^r_n$. The expected value of $C^r_n$ is given by
\[
E(C^r_n) = \frac{1}{n}\sum_{i=1}^n E\left(X_i - \mu'_{i,1}\right)^r = \frac{1}{n}\sum_{i=1}^n \mu_{i,r} \tag{19}
\]
The last equality follows from equation 7. If the $X_i$ are identically distributed, then
\[
E(C^r_n) = \mu_r, \qquad E(C^1_n) = 0 \tag{20}
\]
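A quick Monte Carlo sketch of equations (17) and (20) (my own illustration, assuming NumPy is available; not part of the original notes): for i.i.d. draws with a known mean, the sample central moment $C^2_n$ should average to $\mu_2 = \sigma^2$, and the sample mean should have variance $\sigma^2/n$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 25, 200_000
mu, sigma2 = 2.0, 4.0
x = rng.normal(mu, np.sqrt(sigma2), size=(reps, n))

# C^2_n uses the known population mean mu, per equation (18)
c2 = ((x - mu) ** 2).mean(axis=1)
xbar = x.mean(axis=1)

mean_c2 = c2.mean()    # approx mu_2 = sigma^2 = 4, equation (20)
var_xbar = xbar.var()  # approx sigma^2 / n = 0.16, equation (17)
```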


3.2.2. Variance of $C^r_n$. First consider the case where we have a sample $X_1, X_2, \ldots, X_n$.
\[
\operatorname{Var}(C^r_n) = \operatorname{Var}\!\left(\frac{1}{n}\sum_{i=1}^n \left(X_i - \mu'_{i,1}\right)^r\right) = \frac{1}{n^2}\operatorname{Var}\!\left(\sum_{i=1}^n \left(X_i - \mu'_{i,1}\right)^r\right) \tag{21}
\]
If the $X$'s are independently distributed, then
\[
\operatorname{Var}(C^r_n) = \frac{1}{n^2}\sum_{i=1}^n \operatorname{Var}\!\left[\left(X_i - \mu'_{i,1}\right)^r\right] \tag{22}
\]
If the $X$'s are independent and identically distributed, then
\[
\operatorname{Var}(C^r_n) = \frac{1}{n}\operatorname{Var}\!\left[\left(X - \mu'_1\right)^r\right] \tag{23}
\]
where $X$ denotes any one of the random variables (because they are all identical). In the case where $r = 1$, we obtain
\[
\operatorname{Var}(C^1_n) = \frac{1}{n}\operatorname{Var}[X - \mu'_1] = \frac{1}{n}\operatorname{Var}[X - \mu] = \frac{1}{n}\left(\sigma^2 - 2\operatorname{Cov}[X, \mu] + \operatorname{Var}[\mu]\right) = \frac{1}{n}\,\sigma^2 \tag{24}
\]

4. SAMPLE MOMENTS ABOUT THE AVERAGE

4.1. Definitions. Assume there is a sequence of random variables $X_1, X_2, \ldots, X_n$. Define the rth sample moment about the average as
\[
M^r_n = \frac{1}{n}\sum_{i=1}^n \left(X_i - \bar{X}_n\right)^r, \quad r = 1, 2, 3, \ldots \tag{25}
\]
This is clearly a statistic of which we can compute a numerical value. We denote the numerical value by $m^r_n$ and define it as
\[
m^r_n = \frac{1}{n}\sum_{i=1}^n \left(x_i - \bar{x}_n\right)^r \tag{26}
\]
In the special case where $r = 1$ we have
\[
M^1_n = \frac{1}{n}\sum_{i=1}^n \left(X_i - \bar{X}_n\right) = \frac{1}{n}\sum_{i=1}^n X_i - \bar{X}_n = \bar{X}_n - \bar{X}_n = 0 \tag{27}
\]

4.2. Properties of Sample Moments about the Average when r = 2.


4.2.1. Alternative ways to write $M^r_n$. We can write $M^2_n$ in an alternative useful way by expanding the squared term and then simplifying as follows
\[
M^r_n = \frac{1}{n}\sum_{i=1}^n \left(X_i - \bar{X}_n\right)^r
\]
\[
\begin{aligned}
\Rightarrow\; M^2_n &= \frac{1}{n}\sum_{i=1}^n \left(X_i - \bar{X}_n\right)^2 \\
&= \frac{1}{n}\sum_{i=1}^n \left[X_i^2 - 2 X_i \bar{X}_n + \bar{X}_n^2\right] \\
&= \frac{1}{n}\sum_{i=1}^n X_i^2 - \frac{2\bar{X}_n}{n}\sum_{i=1}^n X_i + \frac{1}{n}\sum_{i=1}^n \bar{X}_n^2 \\
&= \frac{1}{n}\sum_{i=1}^n X_i^2 - 2\bar{X}_n^2 + \bar{X}_n^2 \\
&= \frac{1}{n}\sum_{i=1}^n X_i^2 - \bar{X}_n^2
\end{aligned} \tag{28}
\]

4.2.2. Expected value of $M^2_n$. The expected value of $M^2_n$ is then given by
\[
\begin{aligned}
E(M^2_n) &= \frac{1}{n}\,E\!\left[\sum_{i=1}^n X_i^2\right] - E\!\left[\bar{X}_n^2\right] \\
&= \frac{1}{n}\sum_{i=1}^n E[X_i^2] - \left[E(\bar{X}_n)\right]^2 - \operatorname{Var}(\bar{X}_n) \\
&= \frac{1}{n}\sum_{i=1}^n \mu'_{i,2} - \left(\frac{1}{n}\sum_{i=1}^n \mu'_{i,1}\right)^2 - \operatorname{Var}(\bar{X}_n)
\end{aligned} \tag{29}
\]
The second line follows from the alternative definition of variance
\[
\operatorname{Var}(X) = E(X^2) - [E(X)]^2
\;\Rightarrow\; E(X^2) = [E(X)]^2 + \operatorname{Var}(X)
\;\Rightarrow\; E(\bar{X}_n^2) = \left[E(\bar{X}_n)\right]^2 + \operatorname{Var}(\bar{X}_n) \tag{30}
\]
and the third line follows from equation 12. If the $X_i$ are independent and identically distributed, then


\[
\begin{aligned}
E(M^2_n) &= \frac{1}{n}\,E\!\left[\sum_{i=1}^n X_i^2\right] - E\!\left[\bar{X}_n^2\right] \\
&= \frac{1}{n}\sum_{i=1}^n \mu'_{i,2} - \left(\frac{1}{n}\sum_{i=1}^n \mu'_{i,1}\right)^2 - \operatorname{Var}(\bar{X}_n) \\
&= \mu'_2 - (\mu'_1)^2 - \frac{\sigma^2}{n} \\
&= \sigma^2 - \frac{1}{n}\,\sigma^2 \\
&= \frac{n-1}{n}\,\sigma^2
\end{aligned} \tag{31}
\]
where $\mu'_1$ and $\mu'_2$ are the first and second population moments, and $\mu_2$ is the second central population moment for the identically distributed variables. Note that this obviously implies
\[
E\!\left[\sum_{i=1}^n \left(X_i - \bar{X}_n\right)^2\right] = n\,E(M^2_n) = n\left(\frac{n-1}{n}\right)\sigma^2 = (n-1)\,\sigma^2 \tag{32}
\]

4.2.3. Variance of $M^2_n$. By definition,
\[
\operatorname{Var}(M^2_n) = E\!\left[\left(M^2_n\right)^2\right] - \left(E(M^2_n)\right)^2 \tag{33}
\]
The second term on the right of equation 33 is easily obtained by squaring the result in equation 31.
\[
E(M^2_n) = \frac{n-1}{n}\,\sigma^2
\;\Rightarrow\; \left(E(M^2_n)\right)^2 = \frac{(n-1)^2}{n^2}\,\sigma^4 \tag{34}
\]
Now consider the first term on the right hand side of equation 33. Write it as
\[
E\!\left[\left(M^2_n\right)^2\right] = E\!\left[\left(\frac{1}{n}\sum_{i=1}^n \left(X_i - \bar{X}_n\right)^2\right)^2\right] \tag{35}
\]
Now consider writing $\frac{1}{n}\sum_{i=1}^n \left(X_i - \bar{X}_n\right)^2$ as follows


\[
\frac{1}{n}\sum_{i=1}^n \left(X_i - \bar{X}_n\right)^2
= \frac{1}{n}\sum_{i=1}^n \left(\left(X_i - \mu\right) - \left(\bar{X}_n - \mu\right)\right)^2
= \frac{1}{n}\sum_{i=1}^n \left(Y_i - \bar{Y}\right)^2 \tag{36}
\]
where $Y_i = X_i - \mu$ and $\bar{Y} = \bar{X}_n - \mu$. Obviously,
\[
\sum_{i=1}^n \left(X_i - \bar{X}_n\right)^2 = \sum_{i=1}^n \left(Y_i - \bar{Y}\right)^2, \quad \text{where } Y_i = X_i - \mu,\; \bar{Y} = \bar{X}_n - \mu \tag{37}
\]
Now consider the properties of the random variable $Y_i$, which is a transformation of $X_i$. First the expected value:
\[
E(Y_i) = E(X_i) - E(\mu) = \mu - \mu = 0 \tag{38}
\]
The variance of $Y_i$ is
\[
\operatorname{Var}(Y_i) = \operatorname{Var}(X_i) = \sigma^2 \quad \text{if the } X_i \text{ are independently and identically distributed} \tag{39}
\]
Also consider $E(Y_i^4)$. We can write this as
\[
E(Y^4) = E\left[(X - \mu)^4\right] = \int_{-\infty}^{\infty} (x - \mu)^4 f(x)\, dx = \mu_4 \tag{40}
\]
Now write equation 35 as follows


\[
E\!\left[\left(M^2_n\right)^2\right] = E\!\left[\left(\frac{1}{n}\sum_{i=1}^n \left(X_i - \bar{X}_n\right)^2\right)^2\right] \tag{41a}
\]
\[
= E\!\left[\left(\frac{1}{n}\sum_{i=1}^n \left(Y_i - \bar{Y}\right)^2\right)^2\right] \tag{41b}
\]
\[
= \frac{1}{n^2}\,E\!\left[\left(\sum_{i=1}^n \left(Y_i - \bar{Y}\right)^2\right)^2\right] \tag{41c}
\]
Ignoring $\frac{1}{n^2}$ for now, expand equation 41 as follows
\[
E\!\left[\left(\sum_{i=1}^n \left(Y_i - \bar{Y}\right)^2\right)^2\right]
= E\!\left[\left(\sum_{i=1}^n \left(Y_i^2 - 2 Y_i \bar{Y} + \bar{Y}^2\right)\right)^2\right] \tag{42a}
\]
\[
= E\!\left[\left(\sum_{i=1}^n Y_i^2 - 2\bar{Y}\sum_{i=1}^n Y_i + \sum_{i=1}^n \bar{Y}^2\right)^2\right] \tag{42b}
\]
\[
= E\!\left[\left(\sum_{i=1}^n Y_i^2 - 2n\bar{Y}^2 + n\bar{Y}^2\right)^2\right] \tag{42c}
\]
\[
= E\!\left[\left(\sum_{i=1}^n Y_i^2 - n\bar{Y}^2\right)^2\right] \tag{42d}
\]
\[
= E\!\left[\left(\sum_{i=1}^n Y_i^2\right)^2 - 2n\bar{Y}^2\sum_{i=1}^n Y_i^2 + n^2\bar{Y}^4\right] \tag{42e}
\]
\[
= E\!\left[\left(\sum_{i=1}^n Y_i^2\right)^2\right] - 2n\,E\!\left[\bar{Y}^2\sum_{i=1}^n Y_i^2\right] + n^2\,E\!\left(\bar{Y}^4\right) \tag{42f}
\]
(the step to 42c uses $\sum_{i=1}^n Y_i = n\bar{Y}$). Now consider the first term on the right of 42, which we can write as


\[
E\!\left[\left(\sum_{i=1}^n Y_i^2\right)^2\right]
= E\!\left[\sum_{i=1}^n Y_i^2 \sum_{j=1}^n Y_j^2\right] \tag{43a}
\]
\[
= E\!\left[\sum_{i=1}^n Y_i^4 + \mathop{\sum\sum}_{i \neq j} Y_i^2 Y_j^2\right] \tag{43b}
\]
\[
= \sum_{i=1}^n E\!\left(Y_i^4\right) + \mathop{\sum\sum}_{i \neq j} E\!\left(Y_i^2\right) E\!\left(Y_j^2\right) \tag{43c}
\]
\[
= n\mu_4 + n(n-1)\,\mu_2^2 \tag{43d}
\]
\[
= n\mu_4 + n(n-1)\,\sigma^4 \tag{43e}
\]
Now consider the second term on the right of 42 (ignoring $2n$ for now), which we can write as
\[
E\!\left[\bar{Y}^2\sum_{i=1}^n Y_i^2\right]
= \frac{1}{n^2}\,E\!\left[\sum_{j=1}^n\sum_{k=1}^n Y_j Y_k \sum_{i=1}^n Y_i^2\right] \tag{44a}
\]
\[
= \frac{1}{n^2}\,E\!\left[\sum_{i=1}^n Y_i^4 + \mathop{\sum\sum}_{i \neq j} Y_i^2 Y_j^2 + \mathop{\sum\sum}_{j \neq k}\sum_{i=1}^n Y_j Y_k Y_i^2\right] \tag{44b}
\]
\[
= \frac{1}{n^2}\left[\sum_{i=1}^n E\!\left(Y_i^4\right) + \mathop{\sum\sum}_{i \neq j} E\!\left(Y_i^2\right) E\!\left(Y_j^2\right) + \mathop{\sum\sum}_{j \neq k}\sum_{i=1}^n E\!\left(Y_j Y_k Y_i^2\right)\right] \tag{44c}
\]
\[
= \frac{1}{n^2}\left[n\mu_4 + n(n-1)\,\mu_2^2 + 0\right] \tag{44d}
\]
\[
= \frac{1}{n}\left[\mu_4 + (n-1)\,\sigma^4\right] \tag{44e}
\]
The last sum on the penultimate line is zero because every term in it (with $j \neq k$) contains at least one $Y$ raised to the first power, and by independence $E(Y_j) = E(Y_k) = 0$.
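The two expectations just derived can be spot-checked by simulation (a sketch of my own, assuming NumPy; normal draws are used, for which $\mu_4 = 3\sigma^4$): equation (43) gives $E[(\sum Y_i^2)^2] = n\mu_4 + n(n-1)\sigma^4$ and equation (44) gives $E[\bar{Y}^2 \sum Y_i^2] = (\mu_4 + (n-1)\sigma^4)/n$.

```python
import numpy as np

rng = np.random.default_rng(4)
n, reps = 5, 400_000
sigma2 = 1.0
y = rng.normal(0.0, np.sqrt(sigma2), size=(reps, n))  # Y_i already centered

s = (y**2).sum(axis=1)
ybar = y.mean(axis=1)
mu4 = 3 * sigma2**2                     # fourth central moment of a normal

lhs_43 = (s**2).mean()                  # empirical E[(sum Y_i^2)^2]
rhs_43 = n * mu4 + n * (n - 1) * sigma2**2   # = 35 for n=5, sigma^2=1

lhs_44 = (ybar**2 * s).mean()           # empirical E[Ybar^2 sum Y_i^2]
rhs_44 = (mu4 + (n - 1) * sigma2**2) / n     # = 1.4 for n=5, sigma^2=1
```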


Now consider the third term on the right side of 42 (ignoring $n^2$ for now), which we can write as
\[
E\!\left(\bar{Y}^4\right) = \frac{1}{n^4}\,E\!\left[\sum_{i=1}^n Y_i \sum_{j=1}^n Y_j \sum_{k=1}^n Y_k \sum_{l=1}^n Y_l\right] \tag{45a}
\]
\[
= \frac{1}{n^4}\,E\!\left[\sum_{i=1}^n Y_i^4 + \mathop{\sum\sum}_{i \neq k} Y_i^2 Y_k^2 + \mathop{\sum\sum}_{i \neq j} Y_i^2 Y_j^2 + \mathop{\sum\sum}_{i \neq j} Y_i^2 Y_j^2 + \cdots\right] \tag{45b}
\]
where the first double sum collects the terms with $i = j \neq k = l$, the second those with $i = k \neq j = l$, the third those with $i = l \neq j = k$, and $\cdots$ indicates that all other terms include some $Y_i$ in non-squared form, the expected value of which is zero. Given that the $Y_i$ are independently and identically distributed, the expected value of each of the double sums is the same, which gives
\[
E\!\left(\bar{Y}^4\right) = \frac{1}{n^4}\left[\sum_{i=1}^n E\!\left(Y_i^4\right) + 3\mathop{\sum\sum}_{i \neq j} E\!\left(Y_i^2\right) E\!\left(Y_j^2\right)\right] \tag{46a}
\]
\[
= \frac{1}{n^4}\left[n\mu_4 + 3n(n-1)\left(\mu_2\right)^2\right] \tag{46b}
\]
\[
= \frac{1}{n^4}\left[n\mu_4 + 3n(n-1)\,\sigma^4\right] \tag{46c}
\]
\[
= \frac{1}{n^3}\left[\mu_4 + 3(n-1)\,\sigma^4\right] \tag{46d}
\]
Now combining the information in equations 43, 44, and 46 we obtain


\[
E\!\left[\left(\sum_{i=1}^n \left(Y_i - \bar{Y}\right)^2\right)^2\right]
= E\!\left[\left(\sum_{i=1}^n Y_i^2\right)^2\right] - 2n\,E\!\left[\bar{Y}^2\sum_{i=1}^n Y_i^2\right] + n^2\,E\!\left(\bar{Y}^4\right) \tag{47a}
\]
\[
= n\mu_4 + n(n-1)\,\mu_2^2 - 2n\cdot\frac{1}{n}\left[\mu_4 + (n-1)\,\mu_2^2\right] + n^2\cdot\frac{1}{n^3}\left[\mu_4 + 3(n-1)\,\mu_2^2\right] \tag{47b}
\]
\[
= n\mu_4 + n(n-1)\,\mu_2^2 - 2\left[\mu_4 + (n-1)\,\mu_2^2\right] + \frac{1}{n}\left[\mu_4 + 3(n-1)\,\mu_2^2\right] \tag{47c}
\]
\[
= \left(n - 2 + \frac{1}{n}\right)\mu_4 + \left(n(n-1) - 2(n-1) + \frac{3(n-1)}{n}\right)\mu_2^2 \tag{47d}
\]
\[
= \frac{n^2 - 2n + 1}{n}\,\mu_4 + \frac{(n-1)\left(n^2 - 2n + 3\right)}{n}\,\mu_2^2 \tag{47e}
\]
\[
= \frac{n^2 - 2n + 1}{n}\,\mu_4 + \frac{(n-1)\left(n^2 - 2n + 3\right)}{n}\,\sigma^4 \tag{47f}
\]
Now rewrite equation 41 including $\frac{1}{n^2}$ as follows
\[
E\!\left[\left(M^2_n\right)^2\right] = \frac{1}{n^2}\,E\!\left[\left(\sum_{i=1}^n \left(Y_i - \bar{Y}\right)^2\right)^2\right] \tag{48a}
\]
\[
= \frac{1}{n^2}\left(\frac{n^2 - 2n + 1}{n}\,\mu_4 + \frac{(n-1)\left(n^2 - 2n + 3\right)}{n}\,\sigma^4\right) \tag{48b}
\]
\[
= \frac{n^2 - 2n + 1}{n^3}\,\mu_4 + \frac{(n-1)\left(n^2 - 2n + 3\right)}{n^3}\,\sigma^4 \tag{48c}
\]
\[
= \frac{(n-1)^2}{n^3}\,\mu_4 + \frac{(n-1)\left(n^2 - 2n + 3\right)}{n^3}\,\sigma^4 \tag{48d}
\]
Now substitute equations 34 and 48 into equation 33 to obtain
\[
\operatorname{Var}(M^2_n) = E\!\left[\left(M^2_n\right)^2\right] - \left(E(M^2_n)\right)^2
= \frac{(n-1)^2}{n^3}\,\mu_4 + \frac{(n-1)\left(n^2 - 2n + 3\right)}{n^3}\,\sigma^4 - \frac{(n-1)^2}{n^2}\,\sigma^4 \tag{49}
\]
We can simplify this as


\[
\begin{aligned}
\operatorname{Var}(M^2_n) &= E\!\left[\left(M^2_n\right)^2\right] - \left(E(M^2_n)\right)^2 \\
&= \frac{(n-1)^2}{n^3}\,\mu_4 + \frac{(n-1)\left(n^2 - 2n + 3\right)}{n^3}\,\sigma^4 - \frac{n(n-1)^2}{n^3}\,\sigma^4 \\
&= \frac{\mu_4 (n-1)^2 + (n-1)\,\sigma^4\left(n^2 - 2n + 3 - n(n-1)\right)}{n^3} \\
&= \frac{\mu_4 (n-1)^2 + (n-1)\,\sigma^4\left(n^2 - 2n + 3 - n^2 + n\right)}{n^3} \\
&= \frac{\mu_4 (n-1)^2 + (n-1)\,\sigma^4\,(3 - n)}{n^3} \\
&= \frac{\mu_4 (n-1)^2 - (n-1)(n-3)\,\sigma^4}{n^3} \\
&= \frac{(n-1)^2}{n^3}\,\mu_4 - \frac{(n-1)(n-3)}{n^3}\,\sigma^4
\end{aligned} \tag{50}
\]

5. SAMPLE VARIANCE

5.1. Definition of sample variance. The sample variance is defined as
\[
S^2_n = \frac{1}{n-1}\sum_{i=1}^n \left(X_i - \bar{X}_n\right)^2 \tag{51}
\]
We can write this in terms of moments about the average as
\[
S^2_n = \frac{1}{n-1}\sum_{i=1}^n \left(X_i - \bar{X}_n\right)^2 = \frac{n}{n-1}\,M^2_n, \quad \text{where } M^2_n = \frac{1}{n}\sum_{i=1}^n \left(X_i - \bar{X}_n\right)^2 \tag{52}
\]

5.2. Expected value of $S^2$. We can compute the expected value of $S^2$ by substituting in from equation 31 as follows
\[
E(S^2_n) = \frac{n}{n-1}\,E(M^2_n) = \frac{n}{n-1}\cdot\frac{n-1}{n}\,\sigma^2 = \sigma^2 \tag{53}
\]

5.3. Variance of $S^2$. We can compute the variance of $S^2$ by substituting in from equation 50 as follows


\[
\operatorname{Var}(S^2_n) = \frac{n^2}{(n-1)^2}\operatorname{Var}(M^2_n)
= \frac{n^2}{(n-1)^2}\left(\frac{(n-1)^2}{n^3}\,\mu_4 - \frac{(n-1)(n-3)}{n^3}\,\sigma^4\right)
= \frac{\mu_4}{n} - \frac{(n-3)\,\sigma^4}{n(n-1)} \tag{54}
\]

5.4. Definition of $\hat{\sigma}^2$. One possible estimate of the population variance is $\hat{\sigma}^2$, which is given by
\[
\hat{\sigma}^2 = \frac{1}{n}\sum_{i=1}^n \left(X_i - \bar{X}_n\right)^2 = M^2_n \tag{55}
\]

5.5. Expected value of $\hat{\sigma}^2$. We can compute the expected value of $\hat{\sigma}^2$ by substituting in from equation 31 as follows
\[
E\!\left(\hat{\sigma}^2\right) = E(M^2_n) = \frac{n-1}{n}\,\sigma^2 \tag{56}
\]

5.6. Variance of $\hat{\sigma}^2$. We can compute the variance of $\hat{\sigma}^2$ by substituting in from equation 50 as follows
\[
\operatorname{Var}\!\left(\hat{\sigma}^2\right) = \operatorname{Var}(M^2_n) = \frac{(n-1)^2}{n^3}\,\mu_4 - \frac{(n-1)(n-3)}{n^3}\,\sigma^4 \tag{57}
\]
We can also write this in an alternative fashion
\[
\begin{aligned}
\operatorname{Var}\!\left(\hat{\sigma}^2\right) &= \frac{(n-1)^2}{n^3}\,\mu_4 - \frac{(n-1)(n-3)}{n^3}\,\mu_2^2 \\
&= \frac{n^2\mu_4 - 2n\mu_4 + \mu_4}{n^3} - \frac{n^2\mu_2^2 - 4n\mu_2^2 + 3\mu_2^2}{n^3} \\
&= \frac{n^2\left(\mu_4 - \mu_2^2\right) - 2n\left(\mu_4 - 2\mu_2^2\right) + \mu_4 - 3\mu_2^2}{n^3} \\
&= \frac{\mu_4 - \mu_2^2}{n} - \frac{2\left(\mu_4 - 2\mu_2^2\right)}{n^2} + \frac{\mu_4 - 3\mu_2^2}{n^3}
\end{aligned} \tag{58}
\]
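A simulation sketch (my own, assuming NumPy) of the contrast between the two estimators: $S^2_n$ (divisor $n-1$) is unbiased, equation (53), while $\hat{\sigma}^2 = M^2_n$ (divisor $n$) has expectation $\frac{n-1}{n}\sigma^2$, equation (56).

```python
import numpy as np

rng = np.random.default_rng(5)
n, reps = 8, 400_000
sigma2 = 4.0
x = rng.normal(0.0, np.sqrt(sigma2), size=(reps, n))

s2 = x.var(axis=1, ddof=1)       # S^2_n: divisor n-1
sighat2 = x.var(axis=1, ddof=0)  # sigma-hat^2 = M^2_n: divisor n

mean_s2 = s2.mean()              # approx sigma^2 = 4 (unbiased)
mean_sighat2 = sighat2.mean()    # approx ((n-1)/n) sigma^2 = 3.5 (biased down)
```

NumPy's `ddof` argument implements exactly this choice of divisor: `ddof=1` gives $S^2_n$, `ddof=0` gives $\hat{\sigma}^2$.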


6. NORMAL POPULATIONS

6.1. Central moments of the normal distribution. For a normal population we can obtain the central moments by differentiating the moment generating function. The moment generating function for the central moments (i.e., the moment generating function of $X - \mu$) is
\[
M_{X-\mu}(t) = e^{\frac{t^2\sigma^2}{2}} \tag{59}
\]
The moments are then as follows. The first central moment is
\[
E(X - \mu) = \frac{d}{dt}\left(e^{\frac{t^2\sigma^2}{2}}\right)\Big|_{t=0}
= \left(t\sigma^2\, e^{\frac{t^2\sigma^2}{2}}\right)\Big|_{t=0}
= 0 \tag{60}
\]
The second central moment is
\[
\begin{aligned}
E\!\left[(X - \mu)^2\right] &= \frac{d^2}{dt^2}\left(e^{\frac{t^2\sigma^2}{2}}\right)\Big|_{t=0} \\
&= \frac{d}{dt}\left(t\sigma^2\, e^{\frac{t^2\sigma^2}{2}}\right)\Big|_{t=0} \\
&= \left(t^2\sigma^4\, e^{\frac{t^2\sigma^2}{2}} + \sigma^2\, e^{\frac{t^2\sigma^2}{2}}\right)\Big|_{t=0} \\
&= \sigma^2
\end{aligned} \tag{61}
\]
The third central moment is
\[
\begin{aligned}
E\!\left[(X - \mu)^3\right] &= \frac{d^3}{dt^3}\left(e^{\frac{t^2\sigma^2}{2}}\right)\Big|_{t=0} \\
&= \frac{d}{dt}\left(t^2\sigma^4\, e^{\frac{t^2\sigma^2}{2}} + \sigma^2\, e^{\frac{t^2\sigma^2}{2}}\right)\Big|_{t=0} \\
&= \left(t^3\sigma^6\, e^{\frac{t^2\sigma^2}{2}} + 2t\sigma^4\, e^{\frac{t^2\sigma^2}{2}} + t\sigma^4\, e^{\frac{t^2\sigma^2}{2}}\right)\Big|_{t=0} \\
&= \left(t^3\sigma^6\, e^{\frac{t^2\sigma^2}{2}} + 3t\sigma^4\, e^{\frac{t^2\sigma^2}{2}}\right)\Big|_{t=0} \\
&= 0
\end{aligned} \tag{62}
\]
The fourth central moment is


\[
\begin{aligned}
E\!\left[(X - \mu)^4\right] &= \frac{d^4}{dt^4}\left(e^{\frac{t^2\sigma^2}{2}}\right)\Big|_{t=0} \\
&= \frac{d}{dt}\left(t^3\sigma^6\, e^{\frac{t^2\sigma^2}{2}} + 3t\sigma^4\, e^{\frac{t^2\sigma^2}{2}}\right)\Big|_{t=0} \\
&= \left(t^4\sigma^8\, e^{\frac{t^2\sigma^2}{2}} + 3t^2\sigma^6\, e^{\frac{t^2\sigma^2}{2}} + 3t^2\sigma^6\, e^{\frac{t^2\sigma^2}{2}} + 3\sigma^4\, e^{\frac{t^2\sigma^2}{2}}\right)\Big|_{t=0} \\
&= \left(t^4\sigma^8\, e^{\frac{t^2\sigma^2}{2}} + 6t^2\sigma^6\, e^{\frac{t^2\sigma^2}{2}} + 3\sigma^4\, e^{\frac{t^2\sigma^2}{2}}\right)\Big|_{t=0} \\
&= 3\sigma^4
\end{aligned} \tag{63}
\]

6.2. Variance of $S^2$. Let $X_1, X_2, \ldots, X_n$ be a random sample from a normal population with mean $\mu$ and variance $\sigma^2 < \infty$. We know from equation 54 that
\[
\operatorname{Var}(S^2_n) = \frac{n^2}{(n-1)^2}\operatorname{Var}(M^2_n)
= \frac{n^2}{(n-1)^2}\left(\frac{(n-1)^2}{n^3}\,\mu_4 - \frac{(n-1)(n-3)}{n^3}\,\sigma^4\right)
= \frac{\mu_4}{n} - \frac{(n-3)\,\sigma^4}{n(n-1)} \tag{64}
\]
If we substitute in for $\mu_4$ from equation 63 we obtain
\[
\begin{aligned}
\operatorname{Var}(S^2_n) &= \frac{\mu_4}{n} - \frac{(n-3)\,\sigma^4}{n(n-1)} \\
&= \frac{3\sigma^4}{n} - \frac{(n-3)\,\sigma^4}{n(n-1)} \\
&= \frac{\left(3(n-1) - (n-3)\right)\sigma^4}{n(n-1)} \\
&= \frac{(3n - 3 - n + 3)\,\sigma^4}{n(n-1)} \\
&= \frac{2n\,\sigma^4}{n(n-1)} \\
&= \frac{2\sigma^4}{n-1}
\end{aligned} \tag{65}
\]

6.3. Variance of $\hat{\sigma}^2$. It is easy to show that
\[
\operatorname{Var}\!\left(\hat{\sigma}^2\right) = \frac{2\sigma^4\,(n-1)}{n^2} \tag{66}
\]
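The normal-population formulas can be spot-checked by simulation (a sketch of my own, assuming NumPy): for standard normal draws, equation (65) gives $\operatorname{Var}(S^2_n) = 2\sigma^4/(n-1)$ and equation (66) gives $\operatorname{Var}(\hat{\sigma}^2) = 2\sigma^4(n-1)/n^2$.

```python
import numpy as np

rng = np.random.default_rng(6)
n, reps = 10, 400_000
x = rng.normal(0.0, 1.0, size=(reps, n))  # sigma^2 = 1, so sigma^4 = 1

# Empirical variance of each estimator across replications
var_s2 = x.var(axis=1, ddof=1).var()       # approx 2/(n-1) = 2/9
var_sighat2 = x.var(axis=1, ddof=0).var()  # approx 2(n-1)/n^2 = 18/100
```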
