Stat 5101 Lecture Notes - School of Statistics

5.3 Bernoulli Random Vectors

Since Y is categorical, you can't talk about expectations or moments: $E(Y)$ is defined only for numerical (or numerical-vector) random variables, not for categorical random variables. However, if we number the categories
\[
    S = \{ s_1, s_2, \ldots, s_5 \}
\]
with $s_1$ = strongly agree, and so forth, then we can identify the categorical random variable $Y$ with a Bernoulli random vector $\mathbf{X}$ having components
\[
    X_i = I_{\{s_i\}}(Y),
\]
that is, $X_i = 1$ if and only if $Y = s_i$.

Thus Bernoulli random variables are an artifice. They are introduced to inject some numbers into categorical problems. We can't talk about $E(Y)$, but we can talk about $E(\mathbf{X})$. A thorough analysis of the properties of the distribution of the random vector $\mathbf{X}$ will also tell us everything we want to know about the categorical random variable $Y$, and it will do so while allowing us to use the tools (moments, etc.) that we already know.

5.3.2 Moments

Each of the $X_i$ is, of course, univariate Bernoulli; write
\[
    X_i \sim \text{Ber}(p_i)
\]
and collect these parameters into a vector
\[
    \mathbf{p} = (p_1, \ldots, p_k).
\]
Then we abbreviate the distribution of $\mathbf{X}$ as
\[
    \mathbf{X} \sim \text{Ber}_k(\mathbf{p})
\]
if we want to indicate the dimension $k$, or just as $\mathbf{X} \sim \text{Ber}(\mathbf{p})$ if the dimension is clear from the context (the boldface type indicating a vector parameter makes it clear this is not the univariate Bernoulli).

Since each $X_i$ is univariate Bernoulli,
\begin{align*}
    E(X_i) &= p_i \\
    \text{var}(X_i) &= p_i (1 - p_i).
\end{align*}
That tells us
\[
    E(\mathbf{X}) = \mathbf{p}.
\]
To find the variance matrix we need to calculate covariances. For $i \neq j$,
\[
    \text{cov}(X_i, X_j) = E(X_i X_j) - E(X_i) E(X_j) = - p_i p_j,
\]
because $X_i X_j = 0$ with probability one ($Y$ cannot fall in two categories at once), so $E(X_i X_j) = 0$.
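As a quick numerical check (not part of the original notes), the following sketch simulates a categorical $Y$, builds the indicator vector $\mathbf{X}$, and compares sample moments with the formulas above. The five category probabilities and the NumPy-based simulation are assumptions made purely for illustration.

\begin{verbatim}
import numpy as np

# Hypothetical category probabilities for the five responses
# (strongly agree, ..., strongly disagree); made up for illustration.
p = np.array([0.10, 0.25, 0.30, 0.20, 0.15])
k = len(p)
rng = np.random.default_rng(42)

# Simulate n draws of the categorical Y (coded 0, ..., k-1) and convert
# each draw to its indicator vector X, so that X_i = I_{s_i}(Y).
n = 100_000
y = rng.choice(k, size=n, p=p)
x = np.eye(k)[y]              # n-by-k matrix, one indicator vector per row

# Theoretical moments from this section:
#   E(X) = p, var(X_i) = p_i(1 - p_i), cov(X_i, X_j) = -p_i p_j for i != j,
# which pack together as the matrix diag(p) - p p^T.
theoretical_var = np.diag(p) - np.outer(p, p)

print(x.mean(axis=0))             # should be close to p
print(np.cov(x, rowvar=False))    # should be close to theoretical_var
print(theoretical_var)
\end{verbatim}

With large $n$, the sample mean and sample covariance matrix should agree with $\mathbf{p}$ and the theoretical values to within simulation error.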
