
[Figure 13.2 appears here: two panels, each showing a normal population distribution over "value of X"; the left panel is labelled "null hypothesis" (σ = σ₀, μ = μ₀) and the right is labelled "alternative hypothesis" (σ = σ₀, μ ≠ μ₀).]

Figure 13.2: Graphical illustration of the null and alternative hypotheses assumed by the one sample z-test (the two sided version, that is). The null and alternative hypotheses both assume that the population distribution is normal, and additionally assume that the population standard deviation is known (fixed at some value σ₀). The null hypothesis (left) is that the population mean μ is equal to some specified value μ₀. The alternative hypothesis is that the population mean differs from this value, μ ≠ μ₀.

Okay, if that's true, then what can we say about the distribution of X̄? Well, as we discussed earlier (see Section 10.3.3), the sampling distribution of the mean X̄ is also normal, and has mean μ. But the standard deviation of this sampling distribution, se(X̄), which is called the standard error of the mean, is

se(X̄) = σ / √N

In other words, if the null hypothesis is true then the sampling distribution of the mean can be written as follows:

X̄ ∼ Normal(μ₀, se(X̄))
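
As an aside, this relationship is easy to check by simulation. Here's a minimal R sketch, using made-up values (μ₀ = 100, σ = 15, N = 25, none of which come from the text), showing that the standard deviation of a large collection of sample means comes out very close to σ/√N:

```r
# Simulation sketch with hypothetical values (invented for illustration):
# population mean mu0 = 100, known sd sigma = 15, sample size N = 25
set.seed(1)
mu0   <- 100
sigma <- 15
N     <- 25

# draw 10,000 samples of size N, and record the mean of each one
sample.means <- replicate(10000, mean(rnorm(N, mean = mu0, sd = sigma)))

sd(sample.means)   # empirical standard error of the mean
sigma / sqrt(N)    # theoretical value: 15 / 5 = 3
```

The two numbers printed at the end should agree to roughly two decimal places, which is the sense in which the sampling distribution of X̄ has standard deviation σ/√N.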

Now comes the trick. What we can do is convert the sample mean X̄ into a standard score (Section 5.6). This is conventionally written as z, but for now I'm going to refer to it as z_X̄. (The reason for using this expanded notation is to help you remember that we're calculating a standardised version of a sample mean, not a standardised version of a single observation, which is what a z-score usually refers to.) When we do so, the z-score for our sample mean is

z_X̄ = (X̄ − μ₀) / se(X̄)

or, equivalently,

z_X̄ = (X̄ − μ₀) / (σ/√N)
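
To make the formula concrete, here's a small R sketch. The numbers (μ₀ = 100, σ = 15, and the five observations) are invented for illustration, not taken from the text:

```r
# Hypothetical example (all values invented for illustration):
# null hypothesis says mu0 = 100, and sigma is known to be 15
mu0   <- 100
sigma <- 15
x     <- c(105, 98, 112, 107, 95)   # five made-up observations
N     <- length(x)

sem <- sigma / sqrt(N)              # standard error of the mean
z   <- (mean(x) - mu0) / sem        # the z statistic
z                                   # about 0.51
```

That is, this particular sample mean sits about half a standard error above the value that the null hypothesis predicts.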

This z-score is our test statistic. The nice thing about using this as our test statistic is that, like all z-scores, it has a standard normal distribution:

z_X̄ ∼ Normal(0, 1)

(again, see Section 5.6 if you've forgotten why this is true). In other words, regardless of what scale the original data are on, the z-statistic itself always has the same interpretation: it's equal to the number of standard errors that separate the observed sample mean X̄ from the population mean μ₀ predicted by the null hypothesis.

