
METHOD

PARTICIPANTS

The DEOCS had been administered to 522 participants at the time of the current study. A random sample of 522 respondents to the MEOCS-EEO was selected for comparison.

MATERIALS

Items had been taken from 14 scales of the MEOCS-EEO and revised for the DEOCS: Sexual Harassment and Discrimination, Differential Command Behavior toward Minorities and Women, Positive Equal Opportunity (EO) Behavior, Racist Behavior, Religious Discrimination, Disability Discrimination, Age Discrimination, Commitment, Trust in the Organization, Effectiveness, Work Group Cohesion, Leadership Cohesion, Satisfaction, and General EO Climate. Two to five items from each scale were chosen that previous research had shown to have good psychometric qualities (i.e., item-total correlations, reliability, and discriminability).
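As an illustration of these selection criteria, the short Python sketch below computes corrected item-total correlations and Cronbach's alpha for a small item-response matrix. The data, seed, and function name are invented for illustration only; this is not the original DEOCS or MEOCS-EEO analysis.

import numpy as np

def item_quality(responses):
    """responses: (n_respondents, n_items) array of Likert-type scores."""
    responses = np.asarray(responses, dtype=float)
    n_items = responses.shape[1]
    total = responses.sum(axis=1)

    # Corrected item-total correlation: each item vs. the sum of the other items.
    item_total = np.array([
        np.corrcoef(responses[:, j], total - responses[:, j])[0, 1]
        for j in range(n_items)
    ])

    # Cronbach's alpha as an internal-consistency estimate of reliability.
    item_var = responses.var(axis=0, ddof=1)
    total_var = total.var(ddof=1)
    alpha = (n_items / (n_items - 1)) * (1 - item_var.sum() / total_var)

    return item_total, alpha

# Example with random 5-point responses to 6 hypothetical items.
rng = np.random.default_rng(0)
demo = rng.integers(1, 6, size=(200, 6))
r_it, alpha = item_quality(demo)
print(r_it.round(2), round(alpha, 2))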

PROCEDURE

A three-step process was followed in these analyses. First, Thissen's (1991, 2003) MULTILOG program was used to obtain discriminability (a) and difficulty (b) parameters for the MEOCS-EEO and the DEOCS. Because these parameters for the two versions were calculated separately, a common metric was needed. Then, Baker's (1995) EQUATE program was used to link the two versions. For each of the scales presented below, the parameters from the revised form of the MEOCS were equated to those of the MEOCS-EEO. The transformation constants (A and K) are also presented. Finally, following the transformation, DIF analyses were performed using Raju, van der Linden, and Fleer's (1995) DFIT program adapted for polytomous items (Flowers, Oshima, & Raju, 1999; Raju, 2001) and Shealy and Stout's (1993) SIBTEST program adapted for polytomous items (Chang, Mazzeo, & Roussos, 1994).
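To make the linking and DIF steps concrete, the sketch below illustrates the linear transformation that the EQUATE constants represent (a* = a / A, b* = A·b + K) together with a simplified NCDIF-style comparison of expected item scores in the spirit of DFIT. All parameter values and function names are hypothetical; the actual analyses were carried out with the programs cited above, not with this code.

import numpy as np

def link(a, b, A, K):
    """Place (a, b) from the focal form on the reference metric using
    the linear transformation a* = a / A, b* = A * b + K."""
    return a / A, A * np.asarray(b) + K

def expected_score(theta, a, b):
    """Expected item score under Samejima's graded response model;
    b holds the between-category threshold parameters."""
    theta = np.asarray(theta, dtype=float)[:, None]
    p_at_or_above = 1.0 / (1.0 + np.exp(-a * (theta - np.asarray(b))))
    return p_at_or_above.sum(axis=1)

# Hypothetical item: reference (MEOCS-EEO) vs. focal (DEOCS) parameters.
a_ref, b_ref = 1.4, [-1.0, 0.0, 1.2]
a_foc, b_foc = 1.1, [-1.3, -0.1, 1.0]
A, K = 1.08, -0.15  # hypothetical transformation constants from the linking step

a_foc_eq, b_foc_eq = link(a_foc, b_foc, A, K)

# NCDIF-style summary: mean squared difference in expected scores,
# evaluated on a theta grid standing in for the focal-group distribution.
theta = np.linspace(-3, 3, 61)
ncdif = np.mean((expected_score(theta, a_foc_eq, b_foc_eq)
                 - expected_score(theta, a_ref, b_ref)) ** 2)
print(round(ncdif, 4))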

RESULTS

A summary of the results can be seen in Table 1. Listed are the scales, the number of items in each scale, whether any items were reworded for the DEOCS, and whether DIF was detected by DFIT or SIBTEST. Each of the scales is discussed below.

SIBTEST appears to be more sensitive to possible DIF than DFIT. Examination of BRFs in this report suggests that SIBTEST is overly sensitive. This also appears to be true in previous research (Truhon, 2002). Chang et al. (1996) have reported that their polytomous adaptation of SIBTEST is more likely to exhibit Type I error when there is nonuniform DIF, which occurs with many items. This would help to explain the seeming contradiction with Bolt's (2002) finding that SIBTEST had less power than DFIT.
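For readers unfamiliar with the distinction, the sketch below (illustrative only, using a dichotomous 2PL item with invented parameters rather than the polytomous DEOCS items) shows how nonuniform DIF differs from uniform DIF: the group difference changes sign across the trait range instead of staying in one direction.

import numpy as np

def p_2pl(theta, a, b):
    # 2PL item response function with discrimination a and difficulty b.
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

theta = np.linspace(-3, 3, 7)

# Uniform DIF: same discrimination, shifted difficulty -> a gap in one direction.
uniform_gap = p_2pl(theta, 1.2, 0.0) - p_2pl(theta, 1.2, 0.4)

# Nonuniform DIF: different discriminations -> the curves cross and the gap
# changes sign across the theta range (the condition under which Chang et al.
# report inflated Type I error for their polytomous SIBTEST adaptation).
nonuniform_gap = p_2pl(theta, 0.8, 0.0) - p_2pl(theta, 1.6, 0.0)

print(np.round(uniform_gap, 2))
print(np.round(nonuniform_gap, 2))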

Whether one uses the stricter criteria for DIF in DFIT or the looser criteria in SIBTEST, it is noteworthy that there is greater DIF in the reworded items than in the items whose wording was left unchanged. These reworded items allow respondents a different interpretation than the original versions of the items. For example, in the reworded version sexual harassment can involve women harassing men, and racist behavior can involve nonwhites discriminating against whites. Overall this suggests that
