Chapter 4: Continuous Random Variables and Probability Distributions

116.
a. f(x) = λe^(−λx) and F(x) = 1 − e^(−λx), so r(x) = f(x)/(1 − F(x)) = λe^(−λx)/e^(−λx) = λ, a constant (independent of x); this is consistent with the memoryless property of the exponential distribution.

b. r(x) = (α/β^α)·x^(α−1); for α > 1 this is increasing, while for α < 1 it is a decreasing function.

c. ln(1 − F(x)) = −∫ α(1 − x/β) dx = −α[x − x²/(2β)], so F(x) = 1 − e^(−α(x − x²/(2β))) and f(x) = F′(x) = α(1 − x/β)·e^(−α(x − x²/(2β))) for 0 ≤ x ≤ β.

117.
a. F_X(x) = P(−(1/λ)·ln(1 − U) ≤ x) = P(ln(1 − U) ≥ −λx) = P(1 − U ≥ e^(−λx)) = P(U ≤ 1 − e^(−λx)) = 1 − e^(−λx), since F_U(u) = u (U is uniform on [0, 1]). Thus X has an exponential distribution with parameter λ.

b. By taking successive random numbers u_1, u_2, u_3, … and computing x_i = −(1/10)·ln(1 − u_i), we obtain a sequence of values generated from an exponential distribution with parameter λ = 10.

118.
a. E(g(X)) ≈ E[g(µ) + g′(µ)(X − µ)] = E(g(µ)) + g′(µ)·E(X − µ), but E(X) − µ = 0 and E(g(µ)) = g(µ) (since g(µ) is constant), giving E(g(X)) ≈ g(µ).
V(g(X)) ≈ V[g(µ) + g′(µ)(X − µ)] = V[g′(µ)(X − µ)] = (g′(µ))²·V(X − µ) = (g′(µ))²·V(X).

b. g(I) = v/I and g′(I) = −v/I², so µ_R = E(g(I)) ≈ v/µ_I = v/20, and V(g(I)) ≈ (−v/µ_I²)²·σ_I², giving σ_R ≈ (v/µ_I²)·σ_I = (v/400)(0.5) = v/800.

119. g(µ) + g′(µ)(x − µ) ≤ g(x) implies that E[g(µ) + g′(µ)(X − µ)] = E(g(µ)) = g(µ) ≤ E(g(X)), i.e. that g(E(X)) ≤ E(g(X)).
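
As a quick numerical check of 116(c), the short Python sketch below (not part of the original solution; the values α = 2 and β = 5 are arbitrary illustrative choices) confirms that the cdf F(x) = 1 − e^(−α(x − x²/(2β))) reproduces the linear hazard rate r(x) = α(1 − x/β):

    import numpy as np

    alpha, beta = 2.0, 5.0                            # arbitrary illustrative parameters
    x = np.linspace(0.01, beta - 0.01, 400)

    F = 1 - np.exp(-alpha * (x - x**2 / (2 * beta)))  # cdf derived in part (c)
    f = np.gradient(F, x)                             # numerical density, f = F'
    r = f / (1 - F)                                   # hazard rate r(x) = f(x) / (1 - F(x))

    # should match the assumed linear form alpha*(1 - x/beta) up to discretization error
    print(np.max(np.abs(r - alpha * (1 - x / beta))))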
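
Part 117(b) is the inverse-transform method for the exponential distribution; a minimal Python sketch of that recipe, assuming λ = 10 as in the solution, might look like this:

    import numpy as np

    rng = np.random.default_rng(0)
    lam = 10.0                       # rate parameter lambda from part (b)

    u = rng.uniform(size=100_000)    # u_1, u_2, ... drawn from Uniform(0, 1)
    x = -np.log(1 - u) / lam         # x_i = -(1/lambda) * ln(1 - u_i)

    print(x.mean())                  # should be close to E(X) = 1/lambda = 0.1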
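
The propagation-of-error approximations in 118(b) can be checked by simulation; the sketch below assumes, purely for illustration, that I is normally distributed with the stated mean and standard deviation, and takes v = 120 as an arbitrary voltage:

    import numpy as np

    rng = np.random.default_rng(1)
    v, mu_I, sigma_I = 120.0, 20.0, 0.5      # v = 120 is an arbitrary choice

    I = rng.normal(mu_I, sigma_I, size=1_000_000)   # normality of I assumed only for this check
    R = v / I

    print(R.mean(), v / mu_I)                # simulated mean vs. delta-method value v/20 = 6.0
    print(R.std(), v / 800)                  # simulated sd vs. delta-method value v/800 = 0.15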
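
Problem 119 is Jensen's inequality for a differentiable convex g; as an illustration (with g(x) = x² and a standard exponential X, both arbitrary choices), a small simulation shows g(E(X)) ≤ E(g(X)):

    import numpy as np

    rng = np.random.default_rng(2)
    x = rng.exponential(scale=1.0, size=1_000_000)   # any distribution with finite mean works

    def g(t):                                        # a convex function chosen for illustration
        return t ** 2

    print(g(x.mean()), g(x).mean())                  # g(E(X)) <= E(g(X)); roughly 1.0 vs 2.0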
