Evaluating organizational stress-management interventions using ...
36 RANDALL, GRIFFITHS, COX
involved in the intervention reported lower exhaustion scores than the group not involved in the intervention, but this difference only approached significance, F(1, 30) = 2.24, p = .14. Taken together, however, the effect of the changes within the two groups was significant: Interaction terms are more sensitive than separate within-groups analyses (Tabachnick & Fidell, 2001). As in Study 1, these results suggested that exposure to the intervention impacted on participants' well-being: This was only apparent when measures of exposure were used to adapt the design of the study during analysis.
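The greater sensitivity of the interaction term can be illustrated with a small simulation on hypothetical data (this sketch is not the studies' actual analysis; the group sizes, means, and standard deviations are illustrative assumptions). In a two-group, two-wave design, the Group × Time interaction is equivalent to a between-groups test on the Time 2 − Time 1 change scores:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 16  # hypothetical group size

# Simulated exhaustion scores at two time points.
# The exposed group improves slightly; the control group worsens slightly.
exposed_t1 = rng.normal(3.0, 0.5, n)
exposed_t2 = exposed_t1 + rng.normal(-0.15, 0.4, n)
control_t1 = rng.normal(3.0, 0.5, n)
control_t2 = control_t1 + rng.normal(0.15, 0.4, n)

# Separate within-group (paired) tests each see only a small shift.
_, p_exposed = stats.ttest_rel(exposed_t1, exposed_t2)
_, p_control = stats.ttest_rel(control_t1, control_t2)

# The Group x Time interaction pools the two opposite-direction shifts:
# it amounts to a between-groups test on the change scores, so the
# effect it tests is roughly twice the size of either within-group shift.
change_exposed = exposed_t2 - exposed_t1
change_control = control_t2 - control_t1
t_int, p_int = stats.ttest_ind(change_exposed, change_control)
```

Because the two small shifts run in opposite directions, the interaction test can reach significance even when neither within-group change does on its own, which mirrors the pattern of results described above.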
DISCUSSION
The significant Measured exposure × Time interaction effects in both studies indicated that exposure to the intervention predicted changes in exhaustion scores over time. The central hypothesis of this article was confirmed: Nonadaptive designs underestimated the impact of the intervention on well-being, with significant change only becoming apparent when adapted study designs were used. Both studies indicated that Type III error (Dobson & Cook, 1980; Harachi et al., 1999) may be minimized by using the results of a robust process evaluation (in this study one that centred on the triangulation of exposure data) to adapt outcome evaluation. This protection against Type III error is particularly important given that the psychological components of work design may exert a relatively modest influence over general well-being in the short term (Zapf, Dormann, & Frese, 1996). Moreover, in both studies measures of intervention exposure identified hidden and "unintended" between-groups designs that facilitated much-needed and significant improvements in methodological adequacy (see Beehr & O'Hara, 1987; Murphy, 1996). In Study 1, the between-group differences found at Time 2 reflected a worsening of the situation for the "not aware" group co-occurring with stability in the "aware" group, suggesting that the return of responsibilities protected supervisors from the effects of problems associated with not being able to report faults. This "protective effect" has been observed in other intervention studies (e.g., Terra, 1995). The pattern of change in Study 2 indicated a significant intervention effect when the small changes in the intervention and control groups co-occurred.
Clearly, and for good reasons, the methodological adequacy of both studies reflected the constraints placed on them by the research setting. Like almost all stress management intervention evaluation studies they do not play exactly by the methodological rules (Kompier et al., 2000a). However, measuring and capitalizing on uncontrolled and unpredictable exposure patterns did extend the established principles of the "natural experiment". Indeed, in the majority of situations where complete control over