Teaching and Assessing Soft Skills - MASS - Measuring and ...
Factors for choosing an assessment
There are several and varied methods for assessing soft skills, and no single system will suit all (Dewson, Eccles, Tackey, & Jackson, 2000). What works well for a particular situation and institution may not work for another. The choice of system and its implementation depend very much on the activities and objectives, the target group, the course duration, and the available resources.
Assessment design and soft skills definitions
The importance of an appropriate and meaningful definition of the skills to be assessed cannot be overstated (Wilson et al., 2010). The success of any attempt to assess them will rely on these definitions and their elaboration into descriptions of emerging understandings, which guide the design and selection of the assessment instruments and activities, and the appraisal of the products of the assessments.
The task of defining the different soft skills is not an easy one. As mentioned above, the definitions will have to address questions such as: the unit of analysis (are they intended to reflect individuals, large groups, or both?); the age span of these skills (will they be confined to compulsory schooling, upper secondary education, higher education, or beyond?); whether the definitions are to be universal or susceptible to cultural differences; and whether the skills are to be defined as domain-general or closely associated with specific contexts or disciplines.
These are just some of the questions that need to be addressed by the definition of each skill, and the responses to these questions will play a determining role in delineating the inferences that can be drawn from the assessment process. In other words, the definition of the constructs will determine the kind of information that is collected, and will therefore constrain the inferences that different stakeholders can make based on the results of the assessment process.
Given the overwhelming number of possible elements involved in each definition, where can we begin constructing models of proficiency that can serve as a solid base for assessment?
Current literature in the field of educational assessment stresses that any measurement should be rooted in a robust cognitive theory and a model of the learner that informs not only what counts as evidence of mastery, but also what kinds of tasks can be used to elicit that evidence (Pellegrino et al., 2001).
Domain dependence vs. domain independence
When defining constructs for measurement, a key question is the degree to which any particular context will influence measures of the construct (Wilson et al., 2010). With soft skills, however, the context can be quite distal from the constructs. Communication, for instance, can be measured within numerous subject-matter areas. Communication skills in mathematics, involving a quantitative symbolic system, representations of data patterns, and so forth, differ from communication skills in, for instance, second-language acquisition, where mediation and meaning-making must occur across languages. On the other hand, some underlying aspects may be the same for