Modeling and Multivariate Methods - SAS

Chapter 3 Fitting Standard Least Squares Models 75

Estimates

Parameter Power

Suppose that you want to know how likely it is that your experiment will detect some difference at a given α-level. The probability of getting a significant test result is called the power. The power is a function of the unknown parameter values tested, the sample size, and the unknown residual error variance.

Alternatively, suppose that you already did an experiment and the effect was not significant. You think that the effect might have been significant if you had more data. How much more data do you need?

JMP offers the following calculations of statistical power and other details related to a given hypothesis test:

• LSV, the least significant value, is the value of some parameter or function of parameters that would produce a certain p-value alpha.
• LSN, the least significant number, is the number of observations that would produce a specified p-value alpha if the data has the same structure and estimates as the current sample.
• Power values are also available. See “The Power” on page 81 for details.
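To make the power idea concrete, here is a minimal sketch (not JMP's own code) of how the power of an F-test can be computed from the noncentral F distribution; the noncentrality parameter here is assumed to be given directly.

```python
# Hedged sketch: power of an F-test via the noncentral F distribution.
# This illustrates the standard textbook calculation, not JMP's implementation.
from scipy.stats import f, ncf

def f_test_power(alpha, df_num, n, n_params, noncentrality):
    """Probability that the F-test rejects H0 at level alpha.

    df_num: numerator (effect) degrees of freedom
    n: total sample size; n_params: parameters in the fitted model
    noncentrality: lambda, the noncentrality parameter (assumed known here)
    """
    df_den = n - n_params                       # denominator (error) df
    f_crit = f.ppf(1 - alpha, df_num, df_den)   # critical value under H0
    # Power = P(noncentral F exceeds the central-F critical value)
    return 1 - ncf.cdf(f_crit, df_num, df_den, noncentrality)
```

For example, `f_test_power(0.05, 1, 20, 2, 10.0)` gives the chance of detecting an effect whose noncentrality is 10 in a 20-observation, two-parameter model; larger noncentrality or larger n pushes the power toward 1.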

The LSV, LSN, and power values are important measuring sticks that should be available for all test statistics, especially when the test statistics are not significant. If a result is not significant, you should at least know how far from significant the result is in the space of the estimate (rather than in the probability). You should also know how much additional data is needed to confirm significance for a given value of the parameters.

Sometimes a novice confuses the role of the null hypothesis, thinking that failure to reject the null hypothesis is equivalent to proving it. For this reason, it is recommended that the test be presented alongside these other aspects (power and LSN) that show the test’s sensitivity. If an analysis shows no significant difference, it is useful to know the smallest difference that the test is likely to detect (LSV).
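For a single parameter tested with a t-test, the LSV idea reduces to a simple calculation: the smallest estimate (in absolute value) that would reach two-sided significance is the critical t-value times the estimate's standard error. The sketch below assumes that framing and is not JMP's code.

```python
# Hedged sketch: least significant value (LSV) for a two-sided t-test
# on a single parameter, assuming LSV = t_crit * SE(estimate).
from scipy.stats import t

def least_significant_value(alpha, df, std_error):
    """Smallest |estimate| that would be significant at level alpha
    (two-sided), given the estimate's standard error and its df."""
    t_crit = t.ppf(1 - alpha / 2, df)  # two-sided critical value
    return t_crit * std_error
```

For instance, with 30 error degrees of freedom and a standard error of 2.0, any estimate smaller in magnitude than about 4.1 cannot reach significance at α = 0.05, no matter how the test comes out.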

The power details provided by JMP are for both prospective and retrospective power analyses. In the planning stages of a study, a prospective analysis helps determine how large your sample size must be to obtain a desired power in tests of hypothesis. During data analysis, however, a retrospective analysis helps determine the power of hypothesis tests that have already been conducted.
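A prospective analysis of the kind described above can be sketched as a search for the smallest sample size that reaches a target power. The sketch below assumes a hypothetical per-observation squared effect size `delta2`, so that the noncentrality grows as n × delta2 for a one-degree-of-freedom effect; it is an illustration, not JMP's implementation.

```python
# Hedged sketch: prospective sample-size search for a 1-df effect,
# assuming noncentrality = n * delta2 (delta2 is a hypothetical
# per-observation squared effect size). Not JMP's code.
from scipy.stats import f, ncf

def sample_size_for_power(alpha, target_power, delta2, n_params, max_n=10000):
    """Smallest n whose F-test power meets target_power, or None."""
    for n in range(n_params + 2, max_n):
        df_den = n - n_params
        f_crit = f.ppf(1 - alpha, 1, df_den)
        power = 1 - ncf.cdf(f_crit, 1, df_den, n * delta2)
        if power >= target_power:
            return n
    return None
```

With α = 0.05, target power 0.80, and delta2 = 0.2, this search lands near n ≈ 40, consistent with the familiar noncentrality requirement of about 7.85 for 80% power in a one-degree-of-freedom test.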

Technical details for power, LSN, and LSV are covered in the section “Power Calculations” on page 683 in the “Statistical Details” appendix.

Calculating retrospective power at the actual sample size and estimated effect size is somewhat non-informative, even controversial [Hoenig and Heisey, 2001]. The calculation does not provide additional information about the significance test, but rather shows the test from a different perspective. However, we believe that many studies fail to detect a meaningful effect size due to insufficient sample size. Power calculations can help guide the design of the next study for specified effect sizes and sample sizes.

For more information, see Hoenig, J. M., and Heisey, D. M. (2001), “The Abuse of Power: The Pervasive Fallacy of Power Calculations for Data Analysis,” The American Statistician, 55(1), 19–24.
