Contextual Determinants of Electoral System Choice - Åbo Akademi


between independent and dependent variables. Particularly large samples can produce significant values for otherwise inconsiderable effects.

The Bayesian information criterion (BIC), constructed by A. E. Raftery (1995), is in this respect a more satisfying test of significance. The BIC value is obtained by subtracting the logarithm of the population from the Wald value. First of all, the BIC value should exceed zero to reach some level of significance. For coefficients above zero, Raftery specifies rules of thumb to evaluate different grades of evidence in the following way: a BIC difference of 0-2 is weak, 2-6 is positive, 6-10 is strong, and larger than 10 is considered very strong. However, when variables in a model are compared with each other, the BIC value does not explain more than other coefficients, because it is, after all, dependent on the square of the logged odds relative to the standard error. In the following analyses, consequently, I shall present B-coefficients, standard errors, and values of significance. All independent variables are coded on a measurement scale ranging from 0 to 1. At first glance, presenting the odds would seem a more appropriate solution than the logged odds, since the odds represent the factor by which the probability of having an event is multiplied as a consequence of a one-unit change in the independent variable. However, a preliminary analysis reveals that the standard errors vary to a great extent, which in turn implies that the odds have little concrete meaning.

The model chi-square value, the -2 log likelihood value, and the Nagelkerke R-square value for the model as a whole are also presented.
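The BIC computation and evidence grades described above can be sketched as follows; the function names and thresholds are my own rendering of the rules of thumb stated in the text, not code from the original study.

```python
import math

def raftery_bic(wald: float, n: int) -> float:
    """BIC for a single coefficient, as described in the text:
    the Wald value minus the logarithm of the sample size."""
    return wald - math.log(n)

def evidence_grade(bic: float) -> str:
    """Raftery's rules of thumb for grading the evidence."""
    if bic <= 0:
        return "not significant"
    if bic <= 2:
        return "weak"
    if bic <= 6:
        return "positive"
    if bic <= 10:
        return "strong"
    return "very strong"

# A Wald value of 15.0 in a sample of 1,000 cases yields a BIC of
# roughly 15.0 - 6.91 = 8.09, which Raftery would grade as strong.
```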
The model chi-square value tests the null hypothesis that all coefficients other than the constant equal 0. The larger the chi-square value, the greater the model improvement above the baseline. A test of significance on the basis of this value is provided. The -2 log likelihood value reflects the likelihood that the data would be observed given the parameter estimates. It can be thought of as the deviation from a perfect model in which the log likelihood equals 0. The closer the -2 log likelihood value is to zero, the better the parameters do in producing the observed data. The Nagelkerke R-square is a measure referred to as the pseudo-variance explained, and has a minimum of 0 and a maximum of 1. The value cannot decrease when another variable is added to the model.
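The three model-level statistics can be computed directly from the fitted and baseline log likelihoods. The following is a minimal sketch, assuming a binary outcome and a vector of predicted probabilities from some fitted model; the intercept-only baseline is the sample mean of the outcome, and the Nagelkerke value rescales the Cox-Snell pseudo R-square to a maximum of 1.

```python
import math

def log_likelihood(y, p):
    """Log likelihood of binary outcomes y given predicted probabilities p."""
    return sum(math.log(pi) if yi == 1 else math.log(1 - pi)
               for yi, pi in zip(y, p))

def model_summary(y, p_model):
    """Model chi-square, -2 log likelihood, and Nagelkerke R-square for a
    fitted binary model, compared against the intercept-only baseline."""
    n = len(y)
    p_null = sum(y) / n                       # constant-only model
    ll_null = log_likelihood(y, [p_null] * n)
    ll_model = log_likelihood(y, p_model)
    chi_sq = 2 * (ll_model - ll_null)         # model chi-square
    minus_2ll = -2 * ll_model                 # deviation from a perfect model
    cox_snell = 1 - math.exp(2 * (ll_null - ll_model) / n)
    nagelkerke = cox_snell / (1 - math.exp(2 * ll_null / n))
    return chi_sq, minus_2ll, nagelkerke
```

A model whose predicted probabilities track the outcomes closely yields a large chi-square, a -2 log likelihood near zero, and a Nagelkerke value near 1; the baseline model yields zero on the first and last of these.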
