Some indicators build on standard data, whereas others are ‘experimental’ and use information that is not included in the set of commonly reported data. For these indicators it might be the case that the data reported depend on the person or department reporting the data. To find out whether this reliability problem is perceived to exist, the responding higher education institutions were asked to respond to the statement: ‘the information is reliable’.

The responses are very positive about the reliability of the information provided. For 25 indicators at least five out of six responding higher education institutions reported that they (strongly) agreed with the statement that ‘the information is reliable’. The indicators on which slightly more responding higher education institutions had some doubts regarding reliability are: 3a and 3b (orientation of degrees), 6d (revenues from private contracts) and 14b and 14c (regional engagement).
3.3.3 Feasibility
To assess the feasibility of the process of collecting and reporting the data we used four indications: the time needed to collect data on the indicator; the score on the scale ‘easy to collect’; whether the data were collected from an existing source; and the total number of valid cases.
Based on this information an overall rank score was calculated. Calculating an overall rank score is a tricky exercise: there is no clear conceptual basis for weighting the rank scores on the individual feasibility indications. Yet there is an argument to be made for weighting the first two indications more heavily than the latter two. The first two are self-reported by the respondents, whereas at least the last indication is indirectly derived from the sample.
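To make the aggregation concrete, the following Python sketch shows one way such a weighted overall rank score could be computed. The weights and the example ranks are hypothetical illustrations; the report does not specify the actual weighting scheme.

```python
# A minimal sketch of a weighted rank aggregation of the kind described
# above. Weights and example ranks are hypothetical, not the report's values.

# Each indicator's rank on the four feasibility indications
# (rank 1 = most feasible), in the order: time needed to collect,
# 'easy to collect' score, existing source, number of valid cases.
feasibility_ranks = {
    "3a":  (5, 4, 2, 3),
    "6d":  (1, 2, 6, 5),
    "14b": (7, 6, 4, 1),
}

# The self-reported indications (time, ease) weighted more heavily than
# the indications derived from the sample (source, valid cases).
weights = (2, 2, 1, 1)

def overall_rank_score(ranks, weights):
    """Weighted sum of rank scores; a lower total means higher feasibility."""
    return sum(r * w for r, w in zip(ranks, weights))

scores = {name: overall_rank_score(r, weights) for name, r in feasibility_ranks.items()}
# List indicators from most to least feasible overall.
for name, score in sorted(scores.items(), key=lambda kv: kv[1]):
    print(name, score)
```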
3.3.4 Challenging dimensions
One of the reasons to organise the survey was to find out which dimensions and indicators would be useful in the classification and which would not. To find an answer to that question we combined the information on validity, feasibility and reliability of the indicators selected for each dimension. We do not use the scores on the perceived relevance of the dimensions, since a high proportion of responding higher education institutions strongly disagreeing with the relevance of a dimension is not an indication of the quality of the dimension. We see such a lack of consensus as an indication of the diversity of the missions and profiles of the higher education institutions. Only if the vast majority of the responding higher education institutions disagreed with a dimension’s relevance would we reconsider the choice of this dimension. This was not the case for any of the fourteen dimensions.
To identify potential ‘challenging’ dimensions we selected those dimensions for which at least one indicator scored more than 5% ‘strongly disagree’ on the validity and reliability items and was in the bottom five of the overall feasibility ranking.
Using these criteria, there are only two ‘challenging’ dimensions: dimension 4, ‘Involvement in lifelong learning’, and dimension 6, ‘innovation intensiveness’.
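As an illustration, the sketch below applies this selection rule to hypothetical data. The indicator names, disagreement shares, ranks and cutoff are invented for the example and do not reproduce the survey results.

```python
# A minimal sketch of the selection rule for 'challenging' dimensions.
# All shares and ranks here are hypothetical illustrations.

# Per indicator: the highest share of 'strongly disagree' responses on the
# validity and reliability items, and its overall feasibility rank
# (rank 1 = most feasible).
indicators = {
    "4a": {"dimension": 4, "strongly_disagree": 0.07, "feasibility_rank": 28},
    "6d": {"dimension": 6, "strongly_disagree": 0.06, "feasibility_rank": 27},
    "1a": {"dimension": 1, "strongly_disagree": 0.01, "feasibility_rank": 3},
}

N_RANKED = 29                      # hypothetical number of ranked indicators
bottom_five_cutoff = N_RANKED - 5  # ranks above this are in the bottom five

# A dimension is 'challenging' if at least one of its indicators exceeds
# 5% 'strongly disagree' AND sits in the bottom five of the feasibility ranking.
challenging = sorted({
    info["dimension"]
    for info in indicators.values()
    if info["strongly_disagree"] > 0.05
    and info["feasibility_rank"] > bottom_five_cutoff
})
print(challenging)  # -> [4, 6]
```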