
Mapping Diversity
Developing a European Classification of Higher Education Institutions

Project identification number: 2006 – 1742 / 001 – 001 SO2 81 AWB

This project has been funded with support from the European Commission. The content of this publication reflects the views only of the authors. The Commission cannot be held responsible for any use which may be made of the information contained therein.


Contents

1. Introduction
Part I
2. Building a European Classification of Higher Education Institutions: Concepts and Approach
3. Analyzing the Results of the Classification Survey
4. Conclusions
Part II
5. Operational Implementation
References
Annex I: Exploratory Analysis of Existing Data Sources
Annex II: The Case Studies
Annex III: The Pilot Survey
Annex IV: The CEIHE II Survey

Colophon
Enschede, 2008
CHEPS (Center for Higher Education Policy Studies)
University of Twente
P.O. Box 217
7500 AE Enschede
The Netherlands
Design & production: VMMP


1. Introduction

In August 2005 the report 'Institutional Profiles, towards a Typology of Higher Education Institutions in Europe' was published. This report was the result of the first phase of a research project on the development of a European classification of higher education institutions. In general terms, the objectives of this first project were:

• to assess the need for a European classification of higher education institutions;
• to develop a conceptual model upon which such a classification could be based;
• to propose an appropriate set of dimensions and indicators for such a classification.

The first phase of the research project resulted in a set of principles for designing a classification as well as a first draft of the components of such a classification (the draft-classification). Both were produced in an elaborate process of consultation with identified stakeholders. A wide range of stakeholders showed interest in the project and contributed to a constructive and fruitful exchange of ideas and views regarding the classification.

This report, 'Mapping Diversity: Developing a European Classification of Higher Education Institutions', is an output of the second phase of the research project. The overall objectives of this second phase were:

• to test the draft-classification developed in phase I and to adapt it to the realities and needs of the various stakeholders;
• to explore and enhance the legitimacy of a European classification of higher education institutions.

This report addresses these objectives. The first part discusses the research instruments used to test the draft-classification and presents their outcomes. It also presents the adapted second draft of the classification. The second part discusses the process followed to explore and enhance the legitimacy of the classification and makes a number of suggestions regarding its possible operational introduction.

As in the first phase, the second phase of this research project was granted funding in the framework of the EU Socrates programme. It should be pointed out, however, that the research was carried out by an independent team of researchers.


The second phase of the project was carried out by the Center for Higher Education Policy Studies (CHEPS), University of Twente, the Netherlands, in partnership with the University of Strathclyde, Glasgow, Scotland; the University of Aveiro, Portugal; and the German Rectors' Conference (HRK).


The research project team consisted of the following members:

Mr. Prof. Dr. Frans van Vught (project leader) (1)
Mr. Dr. Jeroen Bartelse (1)
Mr. David Bohmert (5)
Mr. Jon File (1)
Mrs. Dr. Christiane Gaethgens (3)
Mrs. Saskia Hansen (2)
Mr. Frans Kaiser (1)
Mr. Dr. Rolf Peter (3)
Mrs. Dr. Sybille Reichert (5)
Mr. Prof. Dr. Jim Taylor (†) (4)
Mr. Dr. Peter West (2)
Mrs. Prof. Dr. Marijk van der Wende (1)

1: CHEPS
2: Strathclyde
3: HRK
4: Aveiro
5: Independent expert

In October 2008 the third and final phase of the project will start, with financial support within the framework of the EU Socrates Lifelong Learning programme. In this phase we will evaluate and fine-tune the dimensions and their indicators and bring them into line with other relevant indicator initiatives; finalise a working on-line classification tool; articulate this with the classification tool operated by the Carnegie Foundation; develop a final organisational model for the implementation of the classification; and continue the process of stakeholder consultation and discussion that has been a hallmark of the project since its inception in 2005.

The major output of the Mapping Diversity project will be a firm proposal for a European classification of higher education institutions. The finalisation and implementation of this classification will be a major step in promoting the attractiveness of European higher education. It will create far greater transparency and reveal the rich diversity of the European higher education landscape; this in turn will help create a stronger profile for European higher education on a global stage and contribute to the realisation of the goals of the Lisbon strategy and the Bologna process.

For more information about the project please see: www.cheps.org/ceihe

This project has been funded with support from the European Commission. This publication reflects the views only of the authors, and the Commission cannot be held responsible for any use which may be made of the information contained therein.


Part I


2. Building a European Classification of Higher Education Institutions: Concepts and Approach

2.1 Relevant concepts

2.1.1 Diversity

The concept of diversity has risen rapidly on the political agenda of European higher education during the last few years. The development of the European Higher Education Area (EHEA) and the European Research Area (ERA) have clearly contributed to the growing attention given to diversity. In addition, the global debates about international competition in higher education, world class universities, and rankings and league tables have triggered an awareness that the diversity of European higher education may be seen as a strength, but that a better understanding of that strength is needed. The creation of a European classification of higher education institutions is an attempt to contribute to better understanding the diversity of the European higher education landscape.

In general, 'diversity' is a term indicating the variety of entities within a system. 'Diversity' is to be distinguished from 'differentiation', which can be defined as a process in which new entities emerge in a system. While differentiation denotes a dynamic process, diversity refers to the level of variety of the entities in a system at a specific point in time.

In the higher education literature several forms of diversity have been distinguished (Birnbaum 1983; Huisman 1995; van Vught 2008). Some crucial forms of diversity are:

• systemic, structural or institutional diversity, referring to differences in types of institutions within higher education systems;
• programmatic diversity, relating to the differences between programmes provided by higher education institutions;
• reputational diversity, which refers to perceived differences in the prestige or status of higher education institutions.

It is important to maintain a clear distinction between these different forms of diversity and to be clear about the form of diversity a specific analysis focuses on. In this report the focus is on the various differences between higher education institutions (one of which might be perceived differences in prestige). In order to underline this focus, we will use the term institutional diversity.

Institutional diversity is often seen as one of the major factors associated with the positive performance of higher education systems. The following arguments are developed in the literature regarding the positive impact of institutional diversity (van Vught 2008):

• First, it is often argued that an increase in the institutional diversity of a higher education system is an important strategy to meet student needs. A more diversified system is assumed to be better able to offer access to higher education to students with different educational backgrounds and with a range of academic and professional achievements.
• A second and related argument is that institutional diversity provides for social mobility. By offering different modes of entry into higher education and by providing multiple forms of transfer, a diversified system stimulates upward mobility as well as providing for honourable downward mobility. A diversified system allows for corrections of errors of choice; it provides extra opportunities for success; it rectifies poor motivations; and it broadens educational horizons.
• Third, institutional diversity is seen to meet the needs of the labour market. The point of view here is that in modern society an increasing variety of specialisations on the labour market is necessary to allow further economic and social development. A homogeneous higher education system is thought to be less able to respond to the diverse needs of the labour market than a diversified system.
• A fourth argument is that institutional diversity serves the political needs of interest groups. The idea is that a diverse system serves the needs of different groups in society to have their own identity and their own political legitimation. In less diversified higher education systems the needs of specific groups may remain unaddressed, which may cause internal debates in a higher education system and various kinds of disruptions.
• A fifth, and well-known, argument is that institutional diversity permits the crucial combination of elite and mass higher education. Generally speaking, mass systems tend to be more diversified than elite systems, as mass systems absorb a more heterogeneous clientele and attempt to respond to a wide range of demands from the labour market.
• A sixth reason why institutional diversity is an important objective for higher education systems is that diversity is assumed to increase the level of effectiveness of higher education institutions. The argument is that institutional specialization allows higher education institutions to focus their attention and energy, thus producing higher levels of effectiveness.
• Finally, institutional diversity offers opportunities for experimenting with innovation. In a diversified higher education system, institutions have the option to assess the viability of innovations created by other institutions, without necessarily having to implement these innovations themselves. Diversity offers the possibility to explore the effects of innovative behaviour without the need to implement the innovations at all institutions at the same time. Diversity permits low-risk experimentation.

These various arguments in favour of institutional diversity show that diversity is usually assumed to be a worthwhile objective for higher education systems. Diversified higher education systems are believed to produce higher levels of client-orientation (both regarding the needs of students and of the labour market), social mobility, effectiveness, flexibility, innovativeness, and stability. More diversified systems, generally speaking, are thought to be 'better' than less diversified systems. Many governments have designed and implemented policies to increase the level of diversity of their higher education systems.

In the European context diversity is seen as both an important characteristic of the overall higher education system and a worthwhile policy objective. The diversity of the European higher education system is assumed to be large and this is argued to be a highly relevant condition for the future development of the system. Unfortunately, however, the level of this diversity has not yet been made transparent. It seems that our empirical knowledge about the institutional diversity of European higher education is still limited. The development of a European classification of higher education institutions will address this lack of knowledge and transparency.

2.1.2 Rankings

One of the most debated recent developments in higher education worldwide concerns the application of rankings of higher education institutions. In the academic literature on higher education, too, rankings are now being widely examined from conceptual, methodological and statistical points of view. The general consensus seems to be that although there are still serious problems with university rankings, rankings are 'here to stay'. We should try to improve them rather than fight them (Dill and Soo 2005; Van Dyke 2005; Marginson 2007; Marginson and van der Wende 2007; Sadlak and Liu 2007; Centre for Higher Education Research and Information, Open University et al. 2008; van der Wende 2008). Several issues have been identified that should be addressed when improving the current ranking approaches.

A first issue regards the distinction between aggregated and multi-dimensional rankings. In an aggregated ranking the information on a number of indicators regarding institutional performance is combined to create an overall institutional league table. In this approach certain weights are assigned to each indicator according to their perceived importance and a straight hierarchical ranking is produced. A multi-dimensional ranking provides multiple scores for each individual higher education institution, offering information on a set of different aspects without necessarily combining these in a unique hierarchy. The problem with aggregated rankings is that they hide significant differences between higher education institutions and they cannot address the specific interests of stakeholders. In addition, the choices regarding indicators and their weights in the overall score are made by the ranking organisation and the underlying rationale for these choices is often unclear. Multi-dimensional rankings, on the other hand, recognize the diversity of higher education institutions and acknowledge that a single hierarchy cannot reflect this diversity. In addition, multi-dimensional rankings tend to accept that the choices of indicators and weights should usually relate to the users' or stakeholders' points of view and that hence these users/stakeholders should be involved in making these choices.
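To make the contrast concrete, the following sketch (a minimal illustration with invented indicator names, weights and scores, not the project's own data) shows how an aggregated ranking collapses several indicators into a single weighted league table, while a multi-dimensional approach keeps each institution's per-indicator profile intact:

```python
# Minimal sketch contrasting an aggregated ranking with a multi-dimensional
# profile. Indicator names, weights and scores are invented for illustration
# and are not part of the classification described in this report.

institutions = {
    "Institution A": {"research": 0.9, "teaching": 0.5, "international": 0.4},
    "Institution B": {"research": 0.4, "teaching": 0.9, "international": 0.7},
}

# Aggregated ranking: the ranking organisation fixes the weights and produces
# one hierarchical league table, hiding the underlying differences.
weights = {"research": 0.6, "teaching": 0.3, "international": 0.1}

def aggregated_score(profile):
    return sum(weights[indicator] * value for indicator, value in profile.items())

league_table = sorted(institutions,
                      key=lambda name: aggregated_score(institutions[name]),
                      reverse=True)
print("Aggregated league table:", league_table)

# Multi-dimensional view: each institution keeps a separate score per indicator,
# so users can compare institutions on the aspects that matter to them.
for name, profile in institutions.items():
    print(name, profile)
```

The fixed weights in the aggregated variant are exactly the kind of choice that, as argued above, is usually made by the ranking organisation rather than by the users or stakeholders.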

A second issue with respect to rankings concerns the fact that rankings usually appear to capture the prestige or reputation of higher education institutions, rather than their actual performance. Most international rankings are prestige rankings, largely focused on criteria like 'excellence in research' and subjective peer reputation. Particularly when prestige surveys amongst academics are used, the problems with these rankings are manifold. Academic peers cannot be assumed to have a comprehensive overview of the academic quality of all relevant institutions. In addition, misleading 'halo-effects' will result from quality judgements based on reputations (with world famous universities being rated higher because of their reputation rather than their performance). Furthermore, circularity effects occur as a result of historical reputation (with historically highly reputed institutions receiving positive judgements regarding their present or future rating).

A third issue regards the selection of the indicators to be used in rankings. Such a selection should satisfy attributes like relevance, comprehensiveness, comparability, validity and reliability. The indicators should reflect the dimensions that are judged to be important by various stakeholders and provide reliable information. To date, rankings are confronted with a lack of indicators that sufficiently capture the performance of higher education institutions more widely, especially in areas other than research, notably teaching and other forms of knowledge transfer, lifelong learning and innovation.

A fourth and final issue regarding rankings concerns their impact on the behaviour of higher education institutions and on the dynamics of higher education systems. Rankings appear to trigger reactions by various stakeholders, often producing unintended effects. Higher education institutions, for instance, react to their ranking positions by increasing their investments in costly programmes and creating higher access selectivity barriers. Policy-makers stimulate institutions to improve their position on particular prestige rankings. Rankings are in this respect not neutral information instruments but rather highly 'political' tools that produce various reactions and effects (Salmi and Saroyan 2006).


While rankings are criticized for their conceptual and methodological problems and for their potentially dysfunctional effects, they are nevertheless seen as 'here to stay'. The challenge therefore is to offer constructive contributions to the process of improving the quality and effectiveness of rankings. This is one of the intentions of this project.

2.1.3 Classifications

'A classification is a spatial, temporal or spatio-temporal segmentation of the world' (Bowker and Star 2000, p.10). Or, in simpler terms, it is '… the general process of grouping entities by similarity' (Bailey 1994, p.4).

In the literature on classifications, a number of related terms are used, sometimes interchangeably, which can lead to confusion. In order to be explicit about the concepts used in this project we provide a short resumé of the relevant terms.

A classification should be distinguished from a typology. A typology is a conceptual classification. A classification orders empirical cases while a typology addresses conceptual entities. The cells in a typology represent concepts rather than empirical cases. These concepts are generally defined in a monothetic way: they comprise entities that are all identical on all variables or dimensions measured.

A taxonomy is a special case of classification, with the main difference being that each cell (taxon) comprises an empirical case. This term is generally used in the biological sciences.

In this project we are building a classification: we develop a set of grouping criteria and use it to group empirical cases (in our case: higher education institutions). Classifications can be unidimensional or multi-dimensional. In this project a multi-dimensional classification is aimed for.

Generally speaking, classifications help to describe a field. They may contribute to the reduction of complexity and to increasing transparency. In addition, they may be used to identify similarities and differences between entities. A classification is also an instrument for information and communication. It intends to assist stakeholders in their decisions and actions.


As is the case with rankings, in classifications the selection of the entities to be classified and particularly of the 'grouping criteria' to categorize these entities are crucial decisions. Building a classification should therefore be a user-oriented process. The most crucial aspect of a classification is to determine who the potential or intended users (stakeholders) are and what they want to use the classification for.

Classifying entities is a process that consists of a number of steps. The first one is to identify the entities to be classified. The user-oriented perspective provides sufficient guidance here.

Once the entities for the classification have been identified, the second step can be taken: the definition of relevant and adequate grouping criteria. The choice of the dimensions (as we shall call the grouping criteria or key characteristics from now on) should allow the users of the classification to group the entities the way they want. The more dimensions are selected, the more the entities can be grouped and described in detailed and different ways. Here again, the user-oriented perspective is crucial. Only when the relevant stakeholders are able to contribute to the selection and definition of dimensions can relevant classifications be produced.

The final step is to identify how the entities score on the different dimensions. During this step the entities are allocated to the cells of the classification on the basis of empirical information.
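As a rough illustration of these three steps (a sketch only: the institution names, dimension names, category boundaries and figures below are invented, while the actual dimensions are presented later in this chapter), the allocation of entities to cells might look like this:

```python
# Minimal sketch of the three steps described above: (1) identify the entities,
# (2) define dimensions with category boundaries, (3) allocate the entities to
# the cells of the classification on the basis of empirical information.
# All institution names, dimensions, boundaries and figures are invented.

# Step 1: the entities to be classified.
institutions = {
    "Institution A": {"size": 32000, "research_intensiveness": 2.4},
    "Institution B": {"size": 4500, "research_intensiveness": 0.3},
}

# Step 2: dimensions defined as ordered categories with upper bounds.
dimensions = {
    "size": [(10000, "small"), (25000, "medium"), (float("inf"), "large")],
    "research_intensiveness": [(0.5, "low"), (1.5, "medium"), (float("inf"), "high")],
}

def categorize(value, categories):
    """Return the label of the first category whose upper bound is not exceeded."""
    for upper_bound, label in categories:
        if value <= upper_bound:
            return label

# Step 3: allocate each institution to a cell (one category per dimension).
for name, scores in institutions.items():
    cell = {dim: categorize(scores[dim], cats) for dim, cats in dimensions.items()}
    print(name, cell)
```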


Classifications use the principles of ordering and comparison to categorize. A European classification of higher education institutions allows categorizations of these institutions according to the number of dimensions being applied in the classification. As was indicated before, the classification to be developed here is a multi-dimensional instrument, providing a number of categories in which institutions are grouped that show similar 'scores' on characteristics.

The European classification of higher education institutions thus differs from aggregated rankings in that it allows multiple scores for individual institutions. It also differs from rankings in general because it does not intend to create hierarchical comparisons leading to one 'league table'. However, this will not stop users from developing their own rankings of tailor-made subsets of institutions within the classification. This is not necessarily a bad thing. At least the use of subsets of institutions reduces the diversity within the group of institutions ranked and therefore reduces the risk that incomparable institutions are compared and unfairly ranked. In this sense the European classification of higher education institutions is a relevant and significant prerequisite for better rankings in higher education.

2.2 The draft-classification

During the first phase of the classification project the option of designing and constructing a European classification of higher education institutions was explored. The conclusion was that Europe would certainly profit from a classification of its many and diverse higher education institutions. As the Carnegie Classification has done in the US since the early 1970s, a European classification of higher education institutions would create a level of transparency in the European higher education area which would support the various stakeholders in this area.

• Business and industry will be better able to identify the institutions they wish to relate to with respect to hiring graduates, commissioning research, organizing knowledge transfer, etc.
• Policy makers (at various levels) will be better able to target policies and programmes.
• Students will be better able to identify their preferred higher education institutions and to make better choices regarding their study programmes and labour market perspectives.
• Higher education institutions will be better able to develop their missions and profiles and to engage in partnerships, benchmarking and networking.

A European classification of higher education institutions will create transparency and reveal the rich diversity of European higher education. As was indicated before, we see the European classification of higher education institutions as a descriptive tool, using principles of measurement, ordering and comparing to categorize higher education institutions in multi-dimensional ways.

During the first phase of the project a set of so-called 'design principles' was formulated. These principles were the result of extensive communication with the various stakeholders. The design principles were the following:

• the classification should be inclusive of all European higher education institutions;
• the classification should be based on a posteriori information, describing the actual conditions and behaviour of higher education institutions;
• the classification should be multi-dimensional and allow several ways of categorizing higher education institutions;
• the classification should be non-hierarchical in terms of dimensions, criteria and categories;
• the classification should be based as much as possible on 'objective', empirical and reliable data;
• the classification should be descriptive not prescriptive;
• the classification should allow flexibility in the sense that institutions can 'move' between categories and that dimensions, criteria and categories can be adapted;
• the classification should be parsimonious regarding extra data-gathering needs;
• the classification should be related to the European policy on quality assurance, in particular the European Quality Assurance Register in Higher Education (EQAR).

Based on these principles a draft-classification was developed that consists of 14 dimensions and a set of indicators per dimension. The dimensions and indicators were selected in an interactive process with the stakeholders and experts and were developed to cover the crucial characteristics of higher education institutions in Europe and to allow relevant differentiation between these institutions.

Regarding the relationship between the European classification and quality assurance, the following suggestions were made:

• the classification should not be seen as an instrument for ranking higher education institutions. The multi-dimensional and non-hierarchical characteristics of the classification imply that a number of different comparisons and categorizations of higher education institutions can be made, which cannot lead to one 'league table'. However, the classification instrument cannot prevent users from ranking institutions per dimension. Such rankings may be assumed to be more useful and fair than aggregated rankings;
• the classification is not an instrument for institutional quality measurement. It does not generate quality judgments about higher education institutions, nor about their educational and research programmes. It describes the 'profiles' of these institutions on the basis of 'objective', empirical and reliable data. These descriptions can of course be used in quality measurement and assurance processes, but in order to be able to do so criteria for the judgment of quality have to be added to the descriptions;
• in order to create a clear relationship with the European Register of Quality Assurance Agencies (EQAR), only those higher education institutions whose programmes are successfully reviewed by a registered quality assurance or accreditation agency should be included in the classification. In this way only reputable higher education providers will be included.

Table 1 offers an overview of the draft-classification. As was indicated earlier, this classification consists of 14 dimensions and a set of indicators per dimension. The indicators make it possible to differentiate between institutions and to construct different categories per dimension. In the draft-classification these categories were only provisionally developed.


Table 1: Draft classification (result of project phase I)

• Types of degrees offered: a) highest level of degree offered; b) number of qualifications granted in each type
• Range of subjects offered: number of subject areas covered by an institution, using the UNESCO ISCED subject areas
• Orientation of degrees: institutions themselves indicate to what extent their institutional profile corresponds to the categories 'academic orientation', 'professional orientation', 'mixed orientation', 'not relevant'
• European educational profile: an institution's financial turnover in European higher education programmes related to total turnover
• Research intensiveness: number of peer reviewed publications relative to the total number of staff
• Innovation driven research: a) number of start-up firms; b) number of patents; c) volume of research contracts
• European research profile: an institution's financial turnover in European research programmes (Framework Programmes and European Research Council) related to the total turnover
• International orientation: a) proportion of international students related to the total number of students in each type of degree; b) proportion of European students related to the total number of students in each type of degree; c) proportion of international staff members related to total number of staff members
• Involvement in life long learning: the proportion of adult learners (e.g. older than thirty years) per type of degree related to total student body
• Size: a) number of students enrolled at the institution; b) number of staff members employed by the institution
• Mode of delivery: a) campus-based versus distance learning; b) domestic versus abroad mode of delivering educational programmes
• Community services: the percentage of staff time attributed to community services
• Public/private character: the proportion of an institution's private funding related to its total funding base
• Legal status: the legal status of a higher education institution can either be public or private

2.3 Elaborating the draft-classification

During the second phase of the research project the draft-classification has been elaborated and tested. The following activities have been undertaken:

• an exploratory analysis of the existing (European) data sources in order to find out whether the relevant information for 'filling the classification' could be collected from these sources;
• a number of in-depth case-studies have been undertaken in order to better understand the needs and expectations of individual higher education institutions regarding the classification;
• a survey was conducted amongst a number of higher education institutions in order to test the relevance, validity and reliability of the elements of the classification and to learn whether the necessary information can be supplied by the institutions.

2.3.1 Exploratory analysis of existing data sources

In an ideal world, a European classification of higher education institutions would be based on readily available, trustworthy data that are defined and gathered at a European level or are at least comparable at that level. The advantages are obvious: definitions are spelt out, data are gathered and checked, consistency of analysis is ensured, and legitimacy secured. We explored to what extent this situation is already real. The availability, quality and relevance of the data required for the classification's indicators was explored. This analysis followed a three-step procedure.

The first step was the inventory of an extensive number of data sources.

The second step was to determine whether the data sources were relevant. We used the following criteria:
• Does the data source comprise information on any of the indicators of the draft-classification?
• Is the information presented at the institutional level?
• Does the data source comprise underlying data at the institutional level?
• May the underlying data be used?
• Can the conditions for use (privacy, costs etc.) be met?

Third, once the relevance of the data source was determined, we assessed the quality of the data on the basis of the following criteria:
• the data must be up to date;
• consistency through time/reliability;
• cost of data retrieval.
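Schematically, this two-stage screening amounts to a relevance filter followed by a quality assessment. The sketch below illustrates the idea with invented, hypothetical data source records; the field names are ours and do not reflect the project's actual inventory:

```python
# Minimal sketch of the two-stage screening described above: data sources are
# first checked against the relevance criteria, and only the relevant ones are
# then assessed on quality. The source records and their values are invented
# placeholders, not the project's actual inventory.

data_sources = [
    {"name": "Source X", "covers_indicators": True, "institution_level": True,
     "underlying_data_available": True, "may_be_used": True, "conditions_met": True,
     "up_to_date": True, "consistent_over_time": True, "retrieval_cost": "low"},
    {"name": "Source Y", "covers_indicators": True, "institution_level": False,
     "underlying_data_available": False, "may_be_used": True, "conditions_met": True,
     "up_to_date": False, "consistent_over_time": False, "retrieval_cost": "high"},
]

def is_relevant(source):
    # Relevance: indicator coverage, institution-level information, usable
    # underlying data and acceptable conditions of use.
    return all([source["covers_indicators"], source["institution_level"],
                source["underlying_data_available"], source["may_be_used"],
                source["conditions_met"]])

def assess_quality(source):
    # Quality: up-to-date data, consistency through time, cost of retrieval.
    return {"up_to_date": source["up_to_date"],
            "consistent_over_time": source["consistent_over_time"],
            "retrieval_cost": source["retrieval_cost"]}

relevant_sources = [s for s in data_sources if is_relevant(s)]
for source in relevant_sources:
    print(source["name"], assess_quality(source))
```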

Views and opinions, as expressed by experts and in the Advisory Board and Stakeholder Group meetings, were used to complement the information regarding the most relevant data sources. The results of this analysis are reported in Annex I.

The conclusion of the analysis is that international databases are only to a very limited extent available and suitable for a European classification of higher education institutions. The major bottleneck is that these databases usually comprise system-level data or aggregate data that are not sufficiently institution-specific. Therefore, part of the data would have to be collected from national data sources. A first estimate is that about one third of the data can be retrieved that way. Most of the data thus has to be collected at the institutional level.

2.3.2 Case-studies and pilot-survey

For the in-depth case-studies two levels were distinguished. In two institutions an elaborate on-site investigation took place into the potential strategic benefits of a European classification. These institutions were:

• the Norwegian University of Science and Technology in Trondheim, Norway;
• the University of Strathclyde in Glasgow, Scotland, UK.

The case study reports on these two institutions can be found in Annex II.

In addition to the two elaborate case-studies another six higher education institutions were analyzed regarding specific issues and aspects of the possible use of the classification. These institutions were:

• Budapest Tech, Hungary;
• Fachhochschule Osnabrück, Germany;
• Fachhochschule Vorarlberg, Austria;
• Fontys Hogescholen, the Netherlands;
• Ruprecht-Karls-Universität Heidelberg, Germany;
• Universiteit Twente, the Netherlands.

For this analysis a pilot survey was developed that was sent to these six institutions as well as to the two in-depth case study institutions. The report on the pilot survey of the eight institutions is included as Annex III.

The case studies provided very positive reactions to the possible use of the classification. All institutions appeared to be convinced that they would be able to work with the classification as a tool for their own strategic management processes. The classification was judged to be a relevant instrument for sharpening an institution's mission and profile. By focusing on the relevant dimensions and indicators of the classification, the institutions indicated that they would be able to strengthen their strategic orientation and develop and communicate their profile. In addition, the institutions in the case studies indicated that they would be highly interested in identifying and learning from other institutions comparable to them on a number of relevant dimensions and indicators. Developing and expanding partnerships and networks with these colleague institutions and setting up benchmarking processes were seen as important benefits of the classification.

The case-studies also provided a number of suggestions for the adaptation and elaboration of the dimensions and indicators of the draft-classification. These suggestions were incorporated in the adaptation of the classification used in the survey amongst a larger number of higher education institutions.

2.3.3 The classification survey

The survey amongst a number of higher education institutions was the major element of the second phase of the research project. The survey was intended to test the (adapted) draft-classification and to assess the relevance, validity, reliability and feasibility of the classification instrument. The outcomes of this survey provide a clear set of indications for the further development of the classification. These outcomes are reported in the next chapter of this report (chapter 3) and in Annex IV.


3. Analyzing the Results of the Classification Survey

3.1 Rationale for the survey

The classification is intended to be based on the actual behaviour of higher education institutions. The relevant aspects of that behaviour are organized in 14 dimensions and measured with 32 indicators. The information on these indicators at the institutional level is difficult to find in international databases. National data sources usually have more relevant information, but the use of such data sources is limited because of various practical, legal and methodological problems.

Therefore a survey among higher education institutions was designed. This survey served three purposes:

• to assess the relevance of the dimensions selected;
• to assess the quality of the indicators selected;
• to provide data that will allow further analyses of the dimensions and their clustering and of the indicators and their potential and pitfalls.

A fuller report on the survey can be found in Annex IV.

3.1.1 Survey design

The survey consisted of two questionnaires: a questionnaire on the dimensions, querying the relevance of the dimensions and the indicators selected, and a questionnaire on the indicators. The latter comprised questions regarding data on the indicators selected as well as an assessment of the indicators.

Draft questionnaires were developed based on the dimensions and indicators identified and selected at the end of phase I of the project. These draft questionnaires were tested and discussed in the two sets of case studies, as described in chapter two. Based on the results of these tests, the questionnaires were adjusted and placed on-line for the survey [1].

The intended size of the sample for the survey was 100 higher education institutions. To keep the non-response rate as low as possible, networks of higher education institutions as represented in the Advisory Board were asked to introduce the project and identify contact persons. Around 160 higher education institutions were contacted. A second channel through which potential participants in the survey were identified was an open web-based procedure. On the project website (www.cheps.org/ceihe) higher education institutions could express their interest in participating. Based on the information provided, the project team decided whether an interested institution could participate. In total 16 higher education institutions were selected this way. A final way to invite institutions to participate was through national and international conferences. On a number of occasions the project was presented and a call for participation was made.

To create the required diversity in the experimental data set, the sample was stratified. The strata in age and size were based on the information on over 3000 higher education institutions in the database of the International Association of Universities (IAU). For the identification of regions, the United Nations classification of regions was used [2]. In this classification Europe is divided into Eastern, Northern, Southern and Western Europe.

[1] For pdf versions of the questionnaires see www.cheps.org//ceihe_dimension.pdf and www.cheps.org//ceihe_indicators.pdf.
[2] http://unstats.un.org/unsd/methods/m49/m49regin.htm#europe
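A minimal sketch of this kind of stratification, with invented institutions and strata boundaries (the real strata were based on the IAU database and the UN regional classification), might look as follows:

```python
# Minimal sketch of stratifying a pool of institutions by age, size and region,
# as described above. The institutions and the strata boundaries are invented
# for illustration.
import random
from collections import defaultdict

pool = [
    {"name": "Institution 1", "founded": 1365, "students": 28000, "region": "Western Europe"},
    {"name": "Institution 2", "founded": 1972, "students": 6000, "region": "Northern Europe"},
    {"name": "Institution 3", "founded": 1992, "students": 14000, "region": "Eastern Europe"},
    {"name": "Institution 4", "founded": 1911, "students": 2500, "region": "Southern Europe"},
]

def stratum(institution):
    # Assign each institution to a stratum based on age, size and region.
    age = "older" if institution["founded"] < 1945 else "younger"
    size = "large" if institution["students"] >= 10000 else "small"
    return (age, size, institution["region"])

groups = defaultdict(list)
for institution in pool:
    groups[stratum(institution)].append(institution)

# Draw (at most) one institution per stratum so that the sample reflects the
# diversity of the pool rather than its most common profile.
random.seed(0)
sample = [random.choice(members) for members in groups.values()]
print([institution["name"] for institution in sample])
```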


3.1.2 Response to the survey

In total, 67 responses were received for the indicator questionnaire and 85 responses for the dimensions questionnaire.

In terms of institutional age, the response appears to be skewed towards the younger categories. Compared to the IAU-based size strata, the sample is skewed towards larger higher education institutions. Apparently, larger higher education institutions had more resources, commitment or opportunities to participate in the survey. The responding higher education institutions are evenly distributed across the four European regions as distinguished in the UN classification of European regions.

3.2 The dimensions

Table 2 presents an overview of the adapted list of dimensions and indicators of the classification, as used in the survey. The changes to the original list (as presented in Table 1) have resulted from the findings of the case studies and the pilot survey.


Table 2: Overview of adapted indicators and dimensions

1: types of degrees offered
  1a: highest level of degree programme offered
  1b: number of qualifications granted in each type of degree programme
2: range of subjects offered
  2a: number of subject areas covered by an institution, using the UNESCO/ISCED subject areas
3: orientation of degrees
  3a: the number of programmes leading to certified/regulated professions as a % of the total number of programmes
  3b: the number of programmes offered that answer to a particular demand from the labour market or professions (as % of the total number of programmes)
4: involvement in life long learning
  4a: number of adult learners as a % of total number of students by type of degree
5: research intensiveness
  5a: number of peer reviewed publications per fte academic staff
  5b: the ISI based citation indicator, also known as the 'crown indicator' (see http://www.socialsciences.leidenuniv.nl/cwts/)
6: innovation intensiveness
  6a: the number of start-up firms
  6b: the number of patent applications filed
  6c: annual licensing income
  6d: the revenues from privately funded research contracts as a % of total research revenues
7: international orientation: teaching and staff
  7a: the number of degree seeking students with a foreign nationality, as % of total enrolment
  7b: the number of incoming students in European exchange programmes, as % of total enrolment
  7c: the number of students sent out in European exchange programmes
  7d: international staff members as % of total number of staff members
  7e: number of programmes offered abroad
8: international orientation: research
  8a: the institution's financial turnover in European research programmes as % of total financial research turnover
9: size
  9a: number of students enrolled (headcount)
  9b: number of staff members employed (fte)
10: mode of delivery
  10a: number of distance learning programmes as % of total number of programmes
  10b: number of part-time programmes as % of total number of programmes
  10c: number of part-time students as % of total number of students
11: public/private character
  11a: income from (competitive and non-competitive) government funding as % of total revenues
  11b: income from tuition fees as % of total income
12: legal status
  12a: legal status
13: cultural engagement
  13a: number of official concerts and performances (co-)organised by the institution
  13b: number of official exhibitions (co-)organised by the institution
14: regional engagement
  14a: annual turnover in EU structural funds as % of total turnover
  14b: number of graduates remaining in the region as % of total number of graduates
  14c: number of extracurricular courses offered for regional labour market
  14d: importance of local/regional income sources
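Most of the adapted indicators are straightforward counts or ratios of institution-level figures. As a minimal sketch (with invented figures; only the indicator numbering is taken from Table 2), a few of them could be computed as follows:

```python
# Minimal sketch computing a few of the Table 2 indicators from raw
# institution-level figures. All numbers are invented for illustration;
# the indicator codes follow the numbering used in Table 2.

raw = {
    "peer_reviewed_publications": 1450,
    "fte_academic_staff": 820,
    "foreign_degree_students": 1900,
    "total_enrolment": 21500,
    "government_funding": 110_000_000,
    "total_revenues": 165_000_000,
}

indicators = {
    # 5a: peer reviewed publications per fte academic staff
    "5a": raw["peer_reviewed_publications"] / raw["fte_academic_staff"],
    # 7a: degree seeking students with a foreign nationality as % of total enrolment
    "7a": 100 * raw["foreign_degree_students"] / raw["total_enrolment"],
    # 11a: income from government funding as % of total revenues
    "11a": 100 * raw["government_funding"] / raw["total_revenues"],
}

for code, value in indicators.items():
    print(f"{code}: {value:.1f}")
```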


The question 'this dimension is essential for the profile of our institution' is a central question for the project.

For eight of the 14 dimensions (dimensions 1, 2, 3, 5, 7, 9, 11 and 12) more than 80% of the responding higher education institutions agreed on the relevance of the dimension. There was only one dimension (13) which less than 60% of respondents rated as being relevant.

A lack of consensus on the relevance of a dimension is not a disqualifying characteristic. It merely means that the responding higher education institutions differ in their opinion regarding the relevance of this dimension for the profile of their institution.

3.3 The indicators

In order to 'score' higher education institutions on the dimensions, 32 indicators were selected. These indicators can be seen as (quantitative) information that can be used to assess the positions of a higher education institution on the dimensions. In this section we focus on these indicators.

First we look into the validity of the indicators: do the responding higher education institutions think that the selected indicators measure the phenomena we are investigating? Do the indicators convey a 'correct' picture of the dimension?

The focus then shifts to the question of whether the information reported is trustworthy: the perceived reliability of the information reported. Since there are significant differences in the status of the indicators (some are based on widely accepted standard statistics, whereas others have a more experimental character), the project team thought it imperative to check the perceived reliability of the information reported.

The final characteristic of the indicators discussed is whether it is feasible for the responding higher education institutions to collect the required information. This issue was one of the main reasons for the survey. A large part of the information underlying the classification has to come from individual higher education institutions. Given the growing survey fatigue and administrative burdens higher education institutions have to face, it is crucial to know how higher education institutions perceive the burden that a classification might place on them. Four indications for feasibility are included: the time needed to find and report the information, the perceived ease of finding the information, the use of existing sources and the percentage of valid responses received.

3.3.1 Validity

Validity is assessed by a question in the dimensions questionnaire. The higher education institutions were asked to give their opinion regarding the statement: ‘indicator a is a valid indicator for this dimension’.

There are five dimensions where the validity of the indicators selected raises some doubts: 3 (orientation of degrees), 4 (involvement in life long learning), 6 (innovation intensiveness), 13 (cultural engagement), and 14 (regional engagement). These five dimensions have something of an experimental status and need further development.

3.3.2 Reliability

The indicators selected differ in status. Some indicators are already used in different contexts and build on standard data, whereas others are ‘experimental’ and use information that is not included in the set of commonly reported data. For these indicators it might be the case that the data reported depend on the person or department reporting the data. To find out whether this reliability problem is perceived to exist, the responding higher education institutions were asked to respond to the statement: ‘the information is reliable’.

The responses are very positive about the reliability of the information provided. For 25 indicators at least five out of six responding higher education institutions reported that they (strongly) agreed with the statement that ‘the information is reliable’. The indicators on which slightly more responding higher education institutions had some doubts regarding reliability are: 3a and 3b (orientation of degrees), 6d (revenues from private contracts) and 14b and 14c (regional engagement).

3.3.3 Feasibility

To assess the feasibility of the process of collecting and reporting the data we used four indications: the time needed to collect data on the indicator; the score on the scale ‘easy to collect’; whether the data were collected from an existing source; and the total number of valid cases.

Based on this information an overall rank score was calculated. Calculating an overall rank score is a tricky exercise: there is no clear conceptual basis for weighting the rank scores on the individual feasibility indications. Yet there is an argument for weighting the first two indications more heavily than the latter two, since the first two are self-reported by the respondents, whereas at least the last indication is indirectly derived from the sample.
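As an illustration of how such a weighted overall rank score could be combined, a minimal sketch follows. The weights and the example ranks are hypothetical; the report does not prescribe a particular weighting scheme, only that the two self-reported indications could carry more weight.

    # Illustrative only: combine four per-indicator feasibility rank scores (1 = best)
    # into one overall rank score. The weights below are hypothetical and simply give
    # more weight to the two self-reported indications (time needed, ease of collection).
    def overall_rank_score(rank_time, rank_ease, rank_existing_source, rank_valid_cases,
                           weights=(0.3, 0.3, 0.2, 0.2)):
        ranks = (rank_time, rank_ease, rank_existing_source, rank_valid_cases)
        return sum(w * r for w, r in zip(weights, ranks))

    # Example: an indicator ranking 5th on time needed, 3rd on ease of collection,
    # 10th on use of existing sources and 8th on valid responses.
    print(round(overall_rank_score(5, 3, 10, 8), 2))  # 6.0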

3.3.4 Challenging dimensions

One of the reasons to organise the survey was to find out which dimensions and indicators would be useful in the classification and which would not. To answer that question we combined the information on the validity, feasibility and reliability of the indicators selected for each dimension. We do not use the scores on the perceived relevance of the dimensions, since a high proportion of responding higher education institutions strongly disagreeing with the relevance of a dimension is not an indication of the quality of the dimension. We see such a lack of consensus as an indication of the diversity of the missions and profiles of the higher education institutions. Only if the vast majority of the responding higher education institutions disagreed with a dimension’s relevance would we reconsider the choice of that dimension. This was not the case for any of the fourteen dimensions.

To identify potential ‘challenging’ dimensions we selected those dimensions for which at least one indicator scored more than 5% ‘strongly disagree’ on the validity and reliability items and was in the bottom five of the overall feasibility ranking.

Using these criteria, there are only two ‘challenging’ dimensions: dimension 4, ‘involvement in life long learning’, and dimension 6, ‘innovation intensiveness’.
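Read as a simple filter, the selection rule can be sketched as below. The data structure and field names are illustrative, and the sketch reflects one reading of the criterion (the feasibility condition applies to the same indicator that fails the validity and reliability thresholds).

    # Hypothetical sketch of the selection rule for 'challenging' dimensions: a dimension
    # is flagged if at least one of its indicators draws more than 5% 'strongly disagree'
    # on both the validity and the reliability items and sits in the bottom five of the
    # overall feasibility ranking.
    def is_challenging(indicators, bottom_five_feasibility):
        return any(
            ind["strongly_disagree_validity_pct"] > 5
            and ind["strongly_disagree_reliability_pct"] > 5
            and ind["name"] in bottom_five_feasibility
            for ind in indicators
        )

    # Example with made-up survey figures for one dimension:
    example = [{"name": "4a",
                "strongly_disagree_validity_pct": 7.1,
                "strongly_disagree_reliability_pct": 6.3}]
    print(is_challenging(example, {"4a", "6d"}))  # True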


4. Conclusions

In this chapter we draw conclusions on what we have learned from the case studies, the review of the existing databases and the survey. We particularly report the suggestions and remarks offered by the stakeholders and the institutions involved in the project. The issues are presented in three categories:

• general issues on the development and use of the classification;
• the validity and feasibility of the indicators (Which indicators should be redefined, omitted or added?);
• the relevance of the dimensions (Which dimensions should be retained or merged?).

4.1 General Issues

We first present an overview of a number of general suggestions regarding the further development of the classification. These suggestions will lead to further adaptations of the current draft classification.

First of all, several stakeholders and higher education institutions suggested including an open question regarding the mission of the institution. Such a question, preferably in the dimensions questionnaire, will give the institution an opportunity to state its intentions and, where there is a large discrepancy with its ‘empirical’ profile, to use this as a starting point for its further strategic development. This information should not be used to classify institutions but should be presented as additional contextual information.

Secondly, there were several comments and suggestions that referred to the influence the national context has on the answers provided by the institution (1b, 2a, 3a, 4a, 5a, 6, 7d, 7e, 8a, 9b, 10a, 10b, 11a, 12a, 14a). There may also be some confusion or bias caused by national differences in the reference period. Academic years are not always the same, and the academic year (most frequently used for student-related data) differs in many countries from the calendar year (most frequently used for financial data). To address this issue, the use of country-specific background information will be considered in the next version of the classification. We conclude that the questions should remain the same for all countries, but the information behind the info-buttons (see the questionnaires) can be made country specific. The national information will be developed and checked with national experts.

Another general issue that was mentioned was the relation of the project to existing institution-based comparative initiatives. On the one hand there are projects related to student surveys and students’ opinions on programmes (such as the German CHE ranking, see http://www.che-ranking.de/cms/?getObject=2&getName=CHE-Ranking&getLang=de). The suggestion was not to integrate this information into the classification but to present it as relevant background information. Such linkages may increase the use and usefulness of the classification for students.

A similar recommendation followed from the analysis of existing data sources. Based on the results of that analysis it was recommended that the development of comprehensive, systematic and comparable data on the main functions and characteristics of higher education institutions should be fostered. To this end existing initiatives and centers of expertise in Europe should be stimulated to cooperate, thus enhancing the ongoing work on European frameworks and instruments that enable diversity to become more transparent. This will be a major and indispensable contribution to the strengthening of Europe’s performance in the areas of education, research and innovation (the knowledge triangle).

A fifth issue, how to ensure that the data provided by institutions are correct, was raised both by responding higher education institutions and stakeholders. The project team underlines the importance of this issue, but concrete action to develop procedures to ensure the reliability of the information provided has been postponed to the third phase of the project, which will focus on operational aspects.

A final general issue refers to the question ‘Who owns the data?’. In phase two of the project, the data provided by the institutions are owned by the project team. The project team has made it clear to the respondents that the data provided will only be used to develop the classification. In a later stage of the project the data could also be used to classify institutions, but the project team will only do so with specific consent from the individual institutions.

4.2 The indicators (by dimension)

In this section we present the conclusions regarding the various indicators. The conclusions and remarks are presented in the sequence of dimensions and indicators as presented in table 2.

1. Types of degree offered
In addition to the two original indicators, two new indicators were suggested for this dimension. The first was ‘dominant degree level’: the degree level at which more than 50% of all degrees at the institution are awarded. Because a substantial number of higher education institutions had no dominant degree level under this definition, an alternative indicator was calculated using 40% as the cut-off point. The second new indicator was ‘graduate intensity’: the sum of master and doctorate degrees as a percentage of overall degrees.
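A minimal sketch of how these two suggested indicators could be computed from degree counts is given below. The degree counts and level names are hypothetical, and the sketch uses an ‘at least’ comparison for the cut-off, following the definition given later in table 3.

    # Hypothetical degree counts per level for one institution.
    degrees_awarded = {"bachelor": 800, "master": 750, "doctorate": 150}
    total = sum(degrees_awarded.values())

    def dominant_degree_level(counts, cutoff=0.5):
        # Return the degree level accounting for at least `cutoff` of all degrees,
        # or None if no level reaches the cut-off (the alternative cut-off is 40%).
        total = sum(counts.values())
        for level, n in counts.items():
            if n / total >= cutoff:
                return level
        return None

    # Graduate intensity: master plus doctorate degrees as a share of all degrees.
    graduate_intensity = (degrees_awarded["master"] + degrees_awarded["doctorate"]) / total * 100

    print(dominant_degree_level(degrees_awarded))              # None (no level reaches 50%)
    print(dominant_degree_level(degrees_awarded, cutoff=0.4))  # bachelor
    print(round(graduate_intensity, 1))                        # 52.9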

2. Range of subjects offered
It was suggested that checking what subject areas are offered is not specific enough, although it provides a general idea of the scope of the institution’s activities. This might become more precise if information on the number of graduates per subject area were included, allowing the determination of predominant fields of study. This may be of particular interest to students.


3. Orientation of degrees
The link to the European list of regulated and certified professions (used in the indicator questionnaire) did not work properly for all countries. It was suggested to include the lists for each country in the background information.
Furthermore it was advised to include the number of student placements in firms, hospitals etc. as an indicator for this dimension. A high number of placements signals a strong professional orientation.

4. Involvement in life long learning (LLL)
The breakdown of enrolment by age group and level of programme proved to be problematic in terms of feasibility. It was suggested to take out the breakdown by level of degree and to include a breakdown by mode of enrolment (full-time versus part-time).
From the comments, we deduced that many LLL activities take place outside degree programmes. By limiting the questions to degree-granting activities, a substantial part of LLL activities might become invisible. However, anticipated problems in the comparability and interpretation of information on non-degree offerings have convinced the project team to leave out non-degree offerings.
In some systems, higher education institutions provide special products for the LLL market. It was therefore suggested to include a question on students enrolled in specific LLL offerings.


5. Research intensiveness
The music and arts sector suggested including an indicator that would be more in line with the research activities undertaken in this sector. There will be no follow-up of this suggestion because the introduction of such an indicator would reduce the legitimacy of the dimension for other institutions (especially the traditional research universities).
It was strongly suggested to use ‘total research revenues’ as an additional indicator of research intensiveness. The information is already included, but in many national systems direct government funding is provided as a lump sum for both teaching and research activities. Calculating total research income requires the research income part of the lump sum to be determined, which proves to be difficult.

6. Innovation intensiveness
It was suggested to use the indicator on start-up firms as an indicator for regional engagement as well.
The way an institution costs its activities may influence the results on the indicator on research contracts. Full academic costing (FAC) is not yet common practice in all countries. It was suggested to introduce a checkbox on whether FAC is used.
There were also comments on the narrow focus of the indicators chosen for this dimension. It was suggested that some indicators should be included to signal innovative activities in the set-up of teaching, curricula and research, as well as the innovative character of artistic activities. For the latter the use of a community will be considered. A community is a group of institutions that is willing to invest in developing a more comprehensive set of indicators for a particular dimension. Such a community of interested institutions could play an active role in developing indicators and advise the project team on these particular indicators. Participation would be on a voluntary basis. Working with such a community could enhance the validity, feasibility and legitimacy of the indicators used.

7. International orientation: teaching and staff
It was suggested to introduce ‘academic staff by time spent abroad (study/work)’ as an additional indicator, since nationality does not say enough about the real international orientation.
The indicators on student mobility are mainly focused on EU exchange programmes. Broadening the scope of these indicators to all international exchange programmes would reduce this ‘EU bias’.
It was furthermore suggested to use the ‘nationality of the qualifying diploma’ (where the diploma of secondary education was awarded) instead of the ‘nationality of the student’ to distinguish national from international students.
It was recommended that the project team set up a community of institutions willing to invest in developing a more comprehensive set of indicators for this dimension. The project team was also advised to include an indicator on joint degree programmes or double degrees awarded.

8. International orientation: research
The scope of this indicator was seen by many as too limited. Expanding the scope from EU research programmes to all international research programmes would enhance the relevance and validity of this indicator.
The indicator on the importance of regional sources of income also provides information on the relative importance of international sources of income.

9. Size
No comments were made regarding this dimension.

10. Mode of delivery
The issue of blended learning (combining on-campus and distance learning elements in one programme) was discussed by some respondents. However, due to the methodological problems this may cause, it was decided not to include an indicator on this item.
In the questionnaire there is a question on the provision of distance learning programmes, but there is no question on the size of those programmes in terms of enrolment. Including such an indicator would enhance the validity of the set of indicators for this dimension.

11. Public/private character
It was suggested to break down public funding into direct lump sum public funding and indirect competitive public funding. It was suggested that the former is a better indicator of public character than the latter; using direct lump sum public funding would therefore increase the validity of the indicator.
There were suggestions to include revenues from donations as an additional indicator in this dimension. Although this source of income may not yet be relevant for many higher education institutions, the project team decided to include the indicator in the list of suggested indicators. It is expected that the relative importance of this indicator (as an indicator that clearly differentiates groups of institutions from one another) will increase in the future.

12. Legal status
The few comments made regarding this dimension did not lead to any changes to the dimension or the indicators used.

13. Cultural engagement
The current indicators were criticized by the music and arts sector. It was recommended that the project team set up a community of institutions willing to invest in developing a more adequate set of indicators for cultural engagement.


14. Regional engagement
The indicator ‘graduates in the region’ was dropped because of methodological problems. The other indicators were challenged, but it was decided to keep them in order to be able to distinguish institutions that invest in these activities (as part of their profile). It was recommended that the project team set up a community of institutions willing to invest in developing a more adequate set of indicators for regional engagement.
It was suggested to use the indicator ‘number of extracurricular courses’ as an indicator for both the dimension ‘LLL’ and the dimension ‘mode of delivery’.
It was furthermore suggested to include the number of partnerships with business and industry as an indicator in this dimension.


4.3 Reduction of the number of dimensions

In the survey 14 dimensions were distinguished. This was seen as too many by quite a number of respondents. When the higher education institutions are classified on all 14 dimensions, the use of the classification becomes very tedious and (for many intended users) too time-consuming and confusing. It was also argued that when the classification is used as a ‘filtering device’, the selection of benchmark institutions based on all dimensions will very rarely result in a reasonable number of “hits” (if any).

In contrast to this ‘push’ towards a reduction of the number of dimensions, there were also some comments in favour of keeping all dimensions, at least at this stage of the project. Reducing the number of dimensions leads to a reduction of information, which should be avoided during the developmental stage of the classification. Diversity is best captured by as many (relevant) dimensions as possible. In a later stage of the project, there will always be the option of reducing the number of dimensions.

The survey scores on perceived relevance were not used to rearrange the dimensions or to reduce their number. ‘Low’ scores on relevance are seen as an indication of diversity among the responding higher education institutions: for many of the responding institutions a particular dimension may be irrelevant, but for a (limited) number of institutions it is relevant and distinguishes them from others.

There were some doubts regarding three dimensions. ‘Involvement in life long learning’ turned out to be a ‘challenging’ dimension: the validity, reliability and feasibility of its indicators were considered to be problematic. The dimensions ‘cultural engagement’ and ‘regional engagement’ were challenged by a number of respondents. Instead of labelling these dimensions as ‘challenged’ and deleting them from the list of dimensions, the project team decided to label them as ‘challenging’. The relevance of these dimensions for particular groups of institutions is the main reason for keeping them and investing in developing better indicators for them. That is why the creation of ‘communities’ in the next phase of the project was suggested. Several groups of institutions (arts and music schools, universities of applied sciences) have already expressed their interest and willingness to form and join such communities. Depending on the outcomes of these communities, all dimensions will be reviewed in the next phase of the project, which may possibly lead to a reduction of the number of dimensions.

4.4 The adapted classification

Table 3 presents an overview of the classification as it has been adapted as a result of the outcomes of the survey and the comments made on those results. The table shows the draft classification at the end of phase II of the project. In the final phase III, further adaptations are to be expected, resulting from new stakeholders’ inputs as well as further statistical analyses.

Table 3: Overview of dimensions and indicators in the adapted classification

1: types of degrees offered
  Indicators:
    1a: highest level of degree programme offered
    1b: number of qualifications granted in each type of degree programme
  New and suggested indicators:
    1c: dominant degree level: degree level in which at least 50% of the degrees were awarded (an alternative definition using a 40% cut-off is considered)
    1d: graduate intensity: the number of graduate degrees awarded as a percentage of all degrees awarded

2: range of subjects offered
  Indicators:
    2a: number of subject areas covered by an institution using UNESCO/ISCED subject areas
  New and suggested indicators:
    2b: number of degrees awarded by level of degree and by subject area

3: orientation of degrees
  Indicators:
    3a: the number of programmes leading to certified/regulated professions as a % of the total number of programmes
    3b: the number of programmes offered that answer to a particular demand from the labour market or professions (as % of the total number of programmes)
  New and suggested indicators:
    3c: the number of student placements in firms, hospitals etc. as a % of total enrolment

4: involvement in life long learning
  Indicators:
    4a: number of adult learners as a % of total number of students by type of degree
  New and suggested indicators:
    4a1: number of adult learners as a % of total number of students (all degree levels combined)
    4b: number of part-time adult learners as a % of total number of part-time students
    4c: number of students enrolled in specific LLL programmes
    4d: number of extracurricular courses offered for regional labour market (see 14b)

5: research intensiveness
  Indicators:
    5a: number of peer reviewed publications per fte academic staff
    5b: the ISI based citation indicator, also known as the ‘crown indicator’
  New and suggested indicators:
    5c: total research income as a percentage of total income

6: innovation intensiveness
  Indicators:
    6a: the number of start-up firms
    6b: the number of patent applications filed
    6c: annual licensing income
    6d: the revenues from privately funded research contracts as a % of total research revenues
  New and suggested indicators:
    6e: use of full academic costing (yes/no)

7: international orientation: teaching and staff
  Indicators:
    7a: the number of degree seeking students with a foreign nationality, as % of total enrolment
    7b: the number of incoming students in European exchange programmes, as % of total enrolment
    7c: the number of students sent out in European exchange programmes
    7d: international staff members as % of total number of staff members
    7e: number of programmes offered abroad
  New and suggested indicators:
    7a1: the number of degree seeking students with a foreign qualifying diploma, as % of total enrolment
    7b1: the number of incoming students in international exchange programmes, as % of total enrolment
    7c1: the number of students sent out in international exchange programmes
    7f: the number of students in joint degree programmes as a % of total enrolment

8: international orientation: research
  Indicators:
    8a: the institution’s financial turnover in European research programmes as % of total financial research turnover
  New and suggested indicators:
    8a1: the institution’s financial income from international research programmes as % of total financial research income
    8b: the importance of international sources of income (see 14d)

9: size
  Indicators:
    9a: number of students enrolled (headcount)
    9b: number of staff members employed (fte)

10: mode of delivery
  Indicators:
    10a: number of distance learning programmes as % of total number of programmes
    10b: number of part-time programmes as % of total number of programmes
    10c: number of part-time students as % of total number of students
  New and suggested indicators:
    10a1: number of students enrolled in distance learning programmes as % of total number of students
    10d: number of extracurricular courses offered for regional labour market (see 14b)

11: public/private character
  Indicators:
    11a: income from (competitive and non-competitive) government funding as % of total revenues
    11b: income from tuition fees as % of total income
  New and suggested indicators:
    11a1: income from direct government funding (lump sum) as a % of total income
    11c: income from donations as a % of total income

12: legal status
  Indicators:
    12a: legal status

13: cultural engagement
  Indicators:
    13a: number of official concerts and performances (co)-organised by the institution
    13b: number of official exhibitions (co)-organised by the institution

14: regional engagement
  Indicators:
    14a: annual turnover in EU structural funds as % of total turnover
    14b: number of graduates remaining in the region as % of total number of graduates (this indicator will be dropped)
    14c: number of extracurricular courses offered for regional labour market
    14d: importance of local/regional income sources
  New and suggested indicators:
    14e: the number of start-up firms (see 6a)
    14f: the number of partnerships with business and industry


Part II


5. Operational implementation

Part II contains the findings of the project on the operational implementation, or institutionalisation, of the European classification of higher education institutions. The question of institutionalising the classification has been discussed intensively with different stakeholders and experts on several occasions during the project. The views of the stakeholders and other relevant actors are reported in section 5.1. On the basis of these views, a set of indications for the institutionalisation of the classification is formulated in section 5.2. These indications or design principles for institutionalisation are then related to four theoretical models of operational implementation (section 5.3). We conclude with a presentation of the most appropriate model for institutionalisation and some final considerations in section 5.4.


5.1 The views of stakeholders and experts

From the very beginning, the project team has been acutely aware that the views and interests of stakeholders are crucial to successfully conceptualising and implementing a classification. Bearing this in mind, the project team discussed issues related to institutionalising the classification on several occasions. The feedback and views of the participants at these events are presented in this section. Finally, the views on the classification project of the presidency of the Council of the European Union, the EC and the Carnegie Foundation are reported.

The project team discussed issues related to the institutionalisation of the classification at the following events:

− 1st Advisory Board meeting on 12th December 2006 in Brussels (Belgium);
− 2nd Advisory Board meeting on 31st March 2007 in Lisbon (Portugal);
− 3rd Advisory Board meeting on 25th April 2008 in Santander (Spain);
− 1st Stakeholder Group meeting on 12th December 2006 in Brussels (Belgium);
− 2nd Stakeholder Group meeting on 25th April 2008 in Santander (Spain);
− a visit to the Carnegie Foundation from 7th to 9th April 2008 in Stanford (United States of America);
− project conference ‘Building a typology of higher education institutions in Europe’ on 24th April 2008 in Santander (Spain);
− Bologna Seminar ‘Unlocking Europe’s potential – Contributing to a better world’ on 19th and 20th May 2008 in Ghent (Belgium);
− project conference ‘Transparency in Diversity – Towards a Classification of European Higher Education Institutions’ on 10th and 11th July 2008 in Berlin (Germany).

5.1.1 Advisory Board

The participants in the Advisory Board meetings underlined that the primary purpose of the classification should be to serve the needs of higher education institutions. They advised that the project should make the difference with ranking clear and should not result in (one-dimensional) rankings itself. The Advisory Board pointed out that the organisation carrying out the classification has to be clearly independent from both market forces and governmental influence. The Board also stressed the need for voluntary participation. Taking notice of the way the Carnegie classification is organised, the Board expressed a preference for an independent body to operate the classification in the long run. Moreover, it expressed the need to place the classification in a global context and to explore further cooperation with the Carnegie Foundation.


5.1.2 Stakeholder Group

The participants in the Stakeholder Group meetings suggested that the legitimacy of the classification depends particularly on its acceptance among higher education institutions. They highlighted that the classification is a prerequisite for better rankings. Many participants supported the project in its attempts to link the classification to the creation of a European Higher Education Area (EHEA) and the aim of the Bologna process to create and enhance the transparency of the European higher education system. Moreover, they underlined its clear link to the external dimension of the EHEA and expressed their support for exploring further cooperation with the Carnegie Foundation.

5.1.3 Conference participants

The participants in the various conferences advised the project team to further specify and underline the benefits for stakeholders of the development of a European classification. The higher education institutions were perceived as the most important stakeholders, not least because they provide the data. Many institutions participating in the survey confirmed their interest in the classification. They identified the following four advantages for higher education institutions and pointed to the need to communicate these as crucial opportunities for the institutions:

− to mirror and verify institutional ambitions and perceptions;
− to identify relevant partners for benchmarking on the European level;
− to design institutional development strategies;
− to make their specific institutional profiles explicit on the European level.

Although students will be able to make better informed choices about enrolment, it was generally felt that the classification is not primarily designed from their perspective. Students seem to be more interested in information at the programme level. The ranking of the Centrum für Hochschulentwicklung (CHE) was mentioned as a good tool and some participants advised linking CHE data to the classification (see above).

The participants confirmed that governments could develop better policies if they took into account the differences between higher education institutions. Participants also stated that a classification would need the support of European governments if it were to be a success.


The institutionalisation and application of the classification was seen to be clearly linked to its usefulness. The need to engage the relevant stakeholders was generally underlined, particularly when it comes to linking the classification to the Bologna Process. In this respect, the proposal to only classify institutions that are accredited by agencies registered in the European Quality Assurance Register for Higher Education (EQAR) received much support. However, the participants perceived a stakeholder-based organisation as unsuitable for the institutionalisation of the European classification, given the complexity of finding common ground and the complex issue of the representation of institutions at the European level.

5.1.4 Bologna Follow-Up Group

The project team emphasised on several occasions the project’s goal to create more transparency within the European higher education system. Given the clear reference to such an undertaking in the Bologna Declaration, the project team argued for embedding the classification within the Bologna process beyond 2010 during the BFUG seminar ‘Unlocking Europe’s potential – Contributing to a better world’ from 19th to 20th May 2008 in Ghent (Belgium). There, the importance of the institutional diversity of the European higher education area and the value of the classification as an instrument were clearly recognised. Several participants underlined the need for an independent body to operate the classification rather than a model of ownership by the stakeholders. Whether the classification will be included in the next phase of the Bologna process beyond 2010 will be on the agenda of the following ministerial meeting in April 2009 in Leuven (Belgium).


5.1.5 Presidency of the Council of the European Union

The French presidency of the Council of the European Union has launched a discussion on benchmarks and indicators for better rankings in higher education and research. Through several contacts it has become clear that the French presidency is interested in a classification project and it has invited the CEIHE project team to speak at the presidency conference in November 2008 in Nice (France).

5.1.6 European Commission

The European Commission provided the funding for both the first and the second phase of this classification project and has thus participated in the development of the classification from the very beginning. The project team, however, has always emphasised its (scientific) independence and recognises that financial support from the Directorate-General for Education, Youth and Culture (EAC) does not imply unconditional support of the EC for the classification. The project team interprets the EC’s position as support for the notions underlying the project: the importance of the institutional diversity of higher education institutions in Europe and the need for more transparency.

Many participants in the various meetings and conferences have underlined the importance of continuing the close cooperation with the EC in general and with EAC in particular when further developing and implementing the classification. The project team agrees with this view and has expressed this intention on various occasions. Cooperation is also necessary with regard to the intention of the EC to explore additional data collection on higher education and research institutions in Europe by the Statistical Office of the European Communities (EUROSTAT).

5.1.7 Carnegie Foundation

The Carnegie classification in the US higher education system is entirely run and funded by the Carnegie Foundation. The success of the Carnegie classification is due to the fact that the Carnegie Foundation has generally accepted authority as the implementing organisation of the US classification. The Carnegie Foundation is not responsible for the data collection; the data are freely available at the federal level in the US. Carnegie also does not undertake any auditing of the quality of the data that higher education institutions provide to the relevant federal bodies.

The project team has established a fruitful working relationship with experts and managers of the Carnegie Foundation. The Carnegie colleagues appreciate the collaboration with the project team, particularly from a scholarly perspective. Relevant issues for further collaboration include the comparability of the two classifications, the classification dimensions, and the relationships between classifications and rankings. The current management also indicated that the Carnegie Foundation is interested in exploring further collaboration concerning the implementation of a European classification.



5.2 Criteria for institutionalisation

Taking into account the views, recommendations and concerns that were mentioned during the consultation process, the project team defined five criteria as essential requirements for the institutional implementation of the classification: inclusiveness, independence, professionalism, sustainability and legitimacy. These criteria are defined as follows:

Inclusiveness
The classification must be open to recognised higher education institutions of all types and in all participating countries, irrespective of their membership of associations, networks or conferences.

Independence
The classification must be administered independently of governments, funding organisations, representative organisations or business interests.

Professional approach
The classification must be run by a professional, reliable and efficient organisation. This will guarantee appropriate standards in the planning, implementation, communication and further development of the classification, hence contributing to an impeccable reputation for the classification, which is essential to its success.

Sustainability
The administration of the classification must be properly funded on the basis of a long-term financial commitment. This will secure sufficient capacity for carrying out the work at the required high level.

Legitimacy
The classification must have the trust of participating institutions and stakeholders. This means that the organisation managing the classification will be held accountable and will be subject to continuous evaluation and assessment.

5.3 Models for institutionalisation

Throughout the meetings in the early stages of the project, four theoretical options evolved as possible models for implementation: the market, government, stakeholder and independent organisation models. In the following section the four models and the views of stakeholders on these models are presented in relation to the criteria mentioned in the previous section.

Market model
In this model a (consortium of) private organisation(s) would implement the classification. Products and services would be made available to users at market-based tariffs. The strategy, further development and use of the classification would be driven by market demands.

In a market model, the stakeholders we consulted assumed that the provider would only offer classification services on those dimensions for which it expects sufficient institutional demand. Hence some dimensions of the classification are likely not to be included, and full inclusiveness cannot be guaranteed. On the criterion of sustainability, stakeholders argued that in a market model the continuation of the classification will depend on demand and will be subject to the volatility of the market. Hence, sustainability cannot be guaranteed. Finally, the stakeholders raised concerns about the perceived legitimacy of this model.


Government model
In this model governments use their authority over higher education to organise the classification of higher education institutions as an integral instrument of their steering capacity. As the tool to be developed is a Europe-wide classification, it would operate either at the supranational level or within the framework of an inter-governmental agreement.

According to the stakeholders, governments could use their authority to ensure full participation of higher education institutions, therefore potentially ensuring a high level of inclusiveness. However, the stakeholders voiced clear concerns about the legitimacy of the classification in such a model, given the danger of a lack of ownership by the institutions.

Stakeholder model
In this model all major stakeholders, i.e. business, governments, students and institutions, would co-own the operation and administration of the classification.

According to the stakeholders, this model might provide a good basis for a high level of legitimacy. Nevertheless, finding common ground in the stakeholder model is likely to be difficult. In addition, the lack of coherent representation of some types of institutions at the European level could lead to a bias in favour of better represented institutions. This would present a serious challenge to inclusiveness.

Independent organisation model
In this model an existing or new organisation, independent of governmental or direct stakeholder interests, would administer the classification.

According to the stakeholders, this model in principle best meets the conditions necessary to fulfil all five criteria. One should, however, consider carefully what the responsibilities of the implementing organisation will be, as its (additional) activities could greatly enhance its legitimacy in the larger societal context.

In the table below, we summarise the stakeholders’ assessment of the four models against the criteria developed above.

Table 4: Assessment of four models for institutionalising the classification

Model                      Inclusiveness   Independence   Professionalism   Sustainability   Legitimacy
Market                     -               +/-            +/-               -                -
Government                 +               +/-            +/-               +/-              -
Stakeholder                -               +/-            +/-               +/-              +
Independent organisation   +/-             +              +/-               +/-              +/-

The table shows that, according to the stakeholders, the independent organisation model scores highest, i.e. a ‘+’ on independence without a ‘–’ on any other criterion. In addition, the stakeholder model would complement this model with a ‘+’ on legitimacy.



5.4 Conclusion and considerations

In this section we present our conclusion on the preferred model for the operational implementation of the multi-dimensional classification and offer some final considerations.

The conclusions and considerations reached so far have only a provisional character, given that the research on the classification is still ongoing. The project team will only be able to come to more substantial and concrete recommendations with regard to institutionalisation after further development of the whole concept of the classification and more feedback from stakeholders. Nonetheless, a first indication of the preferred model for the implementation of the classification can be presented.

As a consequence of the above assessment, the project team recommends combining the independent organisation and stakeholder models for the institutionalisation and implementation of the classification. A way to operationalise this would be to create a legally independent organisation in which stakeholders have an important advisory role to play.

We propose the creation of a non-governmental and not-for-profit organisation that operates independently from its funding constituencies or stakeholders (or the use of an existing organisation of this nature). Funding could come from public or private sources as long as independence from these sources and sustainability are guaranteed.

The operating organisation would have a board consisting of independent members and would be managed by a director supported by professional staff. The board of the organisation would be advised by a stakeholder advisory council and a scientific advisory committee. This organisation is reflected in the organisational chart below.

Figure 1: Proposed organisational chart of the organisation responsible for the classification. (The chart shows a Board advised by a Scientific Advisory Committee and a Stakeholder Advisory Council, with a Director and Professional Staff below the Board.)

The project team also recommends the creation of communities to further develop ‘challenging’ dimensions and to seek better indicators (see section 4.3). The classification organisation would interact with these communities of specific higher education institutions in designing its operational procedures and processes.

In addition, several aspects of the implementation of the classification need further attention. Regarding the operational area of the classification, it is advised to focus on the European Higher Education Area (EHEA) and thus to relate the classification to the Bologna Process. However, we also underline the need for cooperation with other relevant organisations, not least the Carnegie Foundation.

Furthermore, the limited availability of data at the European level (see section 2.3.1) must be taken into account when assessing the staffing needs regarding implementation. This also applies to the necessity of organising audits and monitoring the quality of the data provided.

Finally, the project team recommends that all necessary provisions be made to ensure that the classification organisation has the intellectual property rights regarding the data and the infrastructure for collecting, processing and presenting the classification.

References


Bailey, K. D. (1994). Typologies and taxonomies: An introduction to classification techniques. Thousand Oaks: Sage Publications.

Birnbaum, R. (1983). Maintaining diversity in higher education. San Francisco: Jossey-Bass.

Bowker, G. C. and S. L. Star (2000). Sorting things out: Classification and its consequences. Cambridge: MIT Press.

Centre for Higher Education Research and Information, Open University, et al. (2008). Counting what is measured or measuring what counts? League tables and their impact on higher education institutions in England. London: HEFCE.

Dill, D. D. and M. Soo (2005). "Academic quality, league tables and public policy: a cross-national analysis of university ranking systems." Higher Education 49(4): 495-534.

Europa Publications (2006). The World of Learning 2007. London.

Huisman, J. (1995). Differentiation, diversity and dependency in higher education. Utrecht: Lemma.

Jongbloed, B., B. Lepori, et al. (2006). Changes in university incomes and their impact on university-based research and innovation.

Kelo, M., U. Teichler, et al., Eds. (2006). EURODATA – Student mobility in European higher education. Bonn: Lemmens.

Marginson, S. (2007). "Global university rankings: implications in general and for Australia." Journal of Higher Education Policy and Management 29(2): 131-142.

Marginson, S. and M. van der Wende (2007). "To rank or to be ranked: the impact of global rankings in higher education." Journal of Studies in International Education 11(3-4): 306-329.

OECD (2004). Handbook for Internationally Comparative Education Statistics. Paris.

Sadlak, J. and L. N. Liu, Eds. (2007). The world-class university and ranking: Aiming beyond status. Paris: UNESCO-CEPES.

Salmi, J. and A. Saroyan (2006). "League tables as policy instruments: uses and misuses." Higher Education Management and Policy 19(2): 24-62.

Van Dyke, N. (2005). "Twenty years of university report cards." Higher Education in Europe 30(2): 103-125.

van Vught, F. (2008). "Mission diversity and reputation in higher education." Higher Education Policy 21: 151-174.

van der Wende, M. C. (2008). Rankings and classifications in higher education: A European perspective. In J. Smart (Ed.), Higher Education: Handbook of Theory and Research, Vol. XXIII. Springer: 49-73.


Annexes

Annex I: Exploratory analysis of existing data sources

In this annex we present the results of the exploration of existing data sources. The information is presented by dimension and by indicator.

Dimension 1: Types of degrees offered

Indicator 1a: Highest degree offered
Information may be found in the Ploteus database, although extracting all information may be rather tedious. http://ec.europa.eu/ploteus/portal/searchcustom.jsp

Indicator 1b: Number of qualifications granted
The number of graduates (which is not the same as the number of qualifications, because of double degrees etc.) needs to be broken down by level of programme. It is therefore unlikely that international databases will include this information (although it may be retrieved through international networks like NARIC). National data sources will, however, have this information in most countries.

Dimension 2: Range of subjects offered

Indicator 2a: Number of subjects offered
This indicator relates to the scope of the educational profile of the institution. In the World of Learning (Europa Publications 2006) the departments are listed, which gives an indication of the scope, but the comparability of those data is problematic because of the coexistence of broad, multi-disciplinary departments and single-subject departments.

Counting the number of subjects is also problematic. In many higher education systems a strong process of differentiation has led to a huge number of subjects offered. Standardising these subjects (for comparative purposes) is difficult at the national level, and even more so at the international level. There is an international classification of disciplines (in the ISCED framework). Although the ISCED classification is used by a growing number of countries, it is still not a common classification, leaving us with the problem of how to compare different disciplines across countries.

Dimension 3: Orientation of degrees

In the draft classification no indicator was specified for this dimension. The rationale for this dimension is similar to the rationale behind the binary divide in many higher education systems: to differentiate between the vocationally oriented sector on the one hand and the academically oriented sector on the other. However, this dichotomy is heavily debated and the line between the sectors is not always clear. One way to look at this dimension is to use the formal structural divide between university and non-university sectors wherever that applies; information on this is available in national databases listing the institutions by sector. Another way is to ask the institutions what proportion of the subjects offered is vocationally oriented (or to what extent the programmes are vocationally oriented). This requires a clear idea of what a vocational orientation is, but even if that condition is met, the reliability of data collected this way is questionable.


Indicator 3a: Number of programmes leading to certified/regulated professions as a % of the total number of programmes
In the questionnaire used in the survey, references are listed to websites at which (national) lists of regulated professions can be found.

Indicator 3b: Number of programmes offered that answer to a particular demand from the labour market or professions (as a % of the total number of programmes)
This indicator asks for a subjective assessment of the proportion; the question on existing data sources is therefore not applicable.

Dimension 4: Involvement in lifelong learning

Indicator 4a: Number of adult learners as a % of total number of students, by type of degree
International databases do not break down enrolment data by institution. Even national agencies very often present enrolment data by age only at the system or sector level.

Dimension 5: Research intensiveness

Indicator 5a: Number of peer reviewed publications per fte academic staff
This indicator is self-reported; the question on existing data sources is therefore not applicable.

Indicator 5b: The ISI based citation indicator, also known as the 'crown indicator'
For this indicator, the ISI databases are the obvious international data sources to use. There are some methodological issues to resolve, and there is the issue of the licence to use data at the institutional level. Co-operation with CWTS, Leiden University is envisaged.
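For readers unfamiliar with it, the CWTS 'crown indicator' is commonly described as a field-normalised citation impact measure, often written CPP/FCSm (citations per publication relative to the mean field citation score). This report does not spell out the exact operationalisation, so the following is only an illustrative sketch of the commonly cited ratio-of-averages form:

\[
  \mathrm{CPP/FCSm} \;=\; \frac{\tfrac{1}{n}\sum_{i=1}^{n} c_i}{\tfrac{1}{n}\sum_{i=1}^{n} e_i} \;=\; \frac{\sum_{i=1}^{n} c_i}{\sum_{i=1}^{n} e_i}
\]

where n is the number of publications of the institution, c_i the number of citations received by publication i, and e_i the mean number of citations of all publications worldwide in the same field, of the same document type and publication year. On this reading, a value above 1 indicates citation impact above the world average for the fields in which the institution publishes.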

Dimension 6: Innovation intensiveness

The EU administers the Community Innovation Survey, a biennial survey among companies and (in some countries) higher education institutions. Based on interviews with the Dutch statistical office, it seems highly unlikely that this source will yield data on higher education institutions (at the institutional level) for a reasonable range of countries.

Indicator 6a: Number of start-up firms
There is no encompassing European database on start-ups, although a number of initiatives and research projects have addressed the issue. One of the disturbing results of the analysis of such projects is that there is no clear, unambiguous definition of start-ups. This means that the counting of start-ups may be biased by 'deviant' national, institutional or even departmental practices.

OECD DSTI
The Industry Science Relationships Outlook report (2002): in this biennial report ten ISRs are identified, of which spin-offs are one and licensing is another. The OECD is working towards a generic model allowing international comparison. This information base may be used as context information at the national level, but it contains no data at the institutional level.


EU
The Rebaspinoff projects (part of the PRIME Network of Excellence) focus on research-based spin-offs. Within these projects data on spin-offs have been collected, but no consistent database has been built.

List of specific studies on this issue

Germany. The ZEW (Zentrum für Europäische Wirtschaftsforschung) has performed a study on spin-offs. The report (Gründungen aus Hochschulen und der öffentlichen Forschung) provides quantitative data on spin-offs of universities and research institutes in Germany in the period 1996 to 2000 and gives information on their economic success. 20,000 companies participated in the survey.

The Netherlands. The Ministry of Economic Affairs published a report on spin-offs. The information in that study refers mainly to the Dutch situation, although a number of international benchmark institutions are described. There are no hard data: the figures are based on self-reported institutional data and interviews with experts and stakeholders.

UK. HEFCE administers an annual Higher Education-Business and Community Interaction survey. In the latest version, data on a number of knowledge transfer indicators are presented for each institution separately:
1. Income from collaborative research
2. Number of consultancy contracts
3. Income from regeneration and development programmes
4. Patents filed
5. Spin-offs
6. Free events (exhibitions etc.)
http://www.hefce.ac.uk/pubs/hefce/2006/06_25/

Indicator 6b: Number of patent applications filed
There is a European database on patents (http://nl.espacenet.com). This database is not limited to higher education institutions and the patents awarded to them or to researchers working there; the industry-based entries need to be filtered out. There is also an issue of national context: the ownership of patents differs between countries. Whether a patent is owned by the university or by the individual researcher may have an impact on the results of the analyses, or on the effort needed to extract the correct information from the database.

The Support Group for Research and Innovation at the Catholic University of Leuven, Belgium (SIOO) has databases on this dimension.

Indicator 6c: Annual licensing income
The SIOO has databases on this dimension.

Indicator 6d: Revenues from privately funded research contracts as a % of total research revenues
Most international databases that have information on this indicator present it at the aggregated (national) level. There is one research project, funded by the EC, in which the income of a sample of universities in six countries is analysed: the CHINC project, "Changes in University Incomes: Their Impact on University-Based Research and Innovation", commissioned by the European Commission as represented by the Institute for Prospective Technological Studies of the Joint Research Centre (contract no. 22537-2004-12 F1ED SEV NO). The dataset comprises data on 100 institutions for the period 1995 to 2003, on income by source; the data originate from national data sources.

The report (Jongbloed, Lepori et al. 2006) has a one-time character: it is not planned to repeat the exercise. The study may nevertheless be an important input for assessing the pitfalls and potential of the data sources used in the CHINC project.

Dimension 7: International orientation: teaching and staff

Again, the international databases do not provide information at the institutional level. The World of Learning does not provide information on international students. There are data available on participants in EU programmes (Kelo, Teichler et al. 2006), but these students reflect only part of the international orientation of a higher education institution. Data on other exchange activities may be found through the NARIC network.

Indicator 7a: Number of degree-seeking students with a foreign nationality, as a % of total enrolment
The general remarks made above apply.

Indicator 7b: Number of incoming students in European exchange programmes, as a % of total enrolment
The general remarks made above apply.

Indicator 7c: Number of students sent out in European exchange programmes
The general remarks made above apply.

Indicator 7d: International staff members as a % of total staff
The question of what 'international' means applies here as well: are staff members with a foreign nationality international, or only those who obtained their degree abroad? The indicator is formulated in terms of all staff, whereas it is more interesting to look at academic staff. It is highly unlikely that this information can be found in international databases; even in national databases it is questionable whether it is available at the institutional level.

Indicator 7e: Number of programmes offered abroad
We have not come across international or national databases comprising information broken down along this dimension. There may be some research reports on this issue.

Dimension 8: International orientation: research

Indicator 8a: Revenues from EU research programmes as a % of total research revenues
We could not identify any international database comprising this information at the institutional level. In EU research programmes higher education institutions are in most cases part of consortia, so identifying the turnover realised by an individual institution is rather tedious. This is probably the reason why no international database on this indicator was found.

Dimension 9: Size

Indicator 9a: Number of students enrolled (headcount)
Whether data on enrolment are available in international databases depends on the required breakdown of the data. If no further breakdown is needed, data are available in sources like the World of Learning. If a further breakdown by level of programme, discipline or mode of enrolment is required, we shall have to turn to national sources.

Indicator 9b: Number of staff (fte)
International data sources again fall short of information on staff. The World of Learning provides information on numbers of teachers, but it is not clear whether this refers to teaching staff only or to all academic staff. If it is teaching staff only, the information needs to be complemented with data on other, non-teaching academic staff. There is also the issue of headcount data versus fte: with the rise of part-time professorships, there may be significant differences between the two.

Dimension 10: Mode of delivery

Indicator 10a: Number of distance learning programmes as a % of total number of programmes
We have not come across international or national databases comprising information broken down along this dimension.

Indicator 10b: Number of part-time programmes as a % of total number of programmes
-

Indicator 10c: Number of part-time students as a % of total number of students
-

Dimension 11: Public/private character

Indicator 11a: Income from (competitive and non-competitive) government funding as a % of total revenues
International databases have some information on this indicator, but again it is not broken down by individual institution. The data from the CHINC project may prove valuable in obtaining data on some countries and in identifying major obstacles in working with national databases.

Indicator 11b: Income from tuition fees as a % of total income
-

Dimension 12: Legal status

Indicator 12a: Legal status (private or public)
Information on this indicator is available in international databases (World of Learning) and national databases.


Dimension 13: Cultural engagement

Indicator 13a: Number of official concerts and performances (co-)organised by the institution
-

Indicator 13b: Number of official exhibitions (co-)organised by the institution
-

Dimension 14: Regional engagement

Indicator 14a: Annual turnover in EU structural funds as a % of total turnover
-

Indicator 14b: Number of graduates remaining in the region as a % of total number of graduates
-

Indicator 14c: Number of extracurricular courses offered for the regional labour market
-

Indicator 14d: Importance of local/regional income sources
-


Annex II: The case studies

Case Study: Norwegian University of Science and Technology (NTNU)

Introduction
The purpose of the case study was:
• to assess the potential use of a European classification;
• to find out to what extent the dimensions and indicators are seen to be relevant and feasible;
• to find out to what extent the necessary data would be available or could be produced.

General issues

NTNU is a university with a broad academic scope that has its main focus on technology and the natural sciences. The university has about 20,000 students and 4,800 staff. NTNU has been given the national responsibility for graduate engineering education in Norway and offers an extensive range of subjects in the natural sciences, technology, the humanities, aesthetic studies, health studies, the social sciences and financial and economic disciplines. NTNU also offers education in the professions: technology, medicine, psychology, architecture, fine art, music, pictorial art and teacher education.

The classification was judged by NTNU as highly relevant. It would help NTNU to get to know itself better, to better formulate its profile and its mission, to further develop its identity and to create more visibility. Internationally it would allow NTNU to create visibility as a specific type of higher education institution, and it would allow better and more focused benchmarking. Nationally it would mark the differences between existing types of institutions. Internally it would stimulate strategic discussions and (as mentioned) assist in creating the institutional identity.

But the classification will also be a political tool, according to NTNU. Governmental and other actors will use it to target and differentiate their policies. NTNU accepts this.

The relationship between the classification and quality assessments should be made more explicit. The dimensions and indicators may be descriptive, but they also imply judgments in terms of quality. External actors will use the indicators to assess the 'quality' of an institution on the various dimensions, and the public may judge the quality and relevance of an institution using the classification. Thresholds per indicator will be interpreted as minimum quality levels.

The success of the classification will to a large extent depend on the robustness of the indicators. Precise definitions and a strong and convincing standardisation are crucial. In addition, since the institutions will have to provide the data, the time and energy to be spent on data-gathering should be kept to a minimum.

Special attention should be given to the validity and reliability of the data. Data should be available in public repositories. Special 'data audits' (e.g. by national agencies, perhaps existing accreditation agencies or statistical offices) could be undertaken to assess the data provision. Data audits should be undertaken by 'accredited auditors'.


A crucial issue which needs further analysis is the way interdisciplinarity (both in education and in research) can be addressed. In education, the 'range of subjects' may imply a disciplinary bias; in research, citation analysis and related approaches may show a similar bias. It is important to find indicators that capture interdisciplinarity.

Dimensions and Indicators

During four different sessions the dimensions and indicators were discussed, covering:
• education;
• research;
• international orientation;
• institutional aspects.

In the text below, the various remarks and conclusions are presented per indicator and per dimension.

Dimension 1: Types of degrees offered

Indicator 1.1.1: Highest degree offered
• a good and important indicator;
• data are easy to provide.

Indicator 1.1.2: Number of qualifications granted
• important to distinguish level and year;
• data are easy to provide.

Dimension 2: Range of subjects offered

Indicator 1.2.1: Number of subjects offered
• subjects should be understood as 'areas in which degree programmes are offered';
• in Norway the Bureau of Statistics makes a distinction between:
  o study areas (defined by government for all higher education institutions);
  o programmes (per study area; higher education institutions are free to design these);
  o courses (forming the programmes);
• does ISCED provide an internationally shared list of study areas? Is this comparable with the Norwegian list?
• in Norway, in addition to the list of areas, a group of usually interdisciplinary programmes exists as well; these should also be captured in this indicator;
• the indicator is important;
• the indicator appears to be insufficiently clear;
• data can only be provided if a generally accepted list of study areas exists.

Dimension 3: Orientation of degrees

This dimension appears to create much confusion. It is nevertheless judged to be important, to enable the classification to offer categories of institutions. Could it be related to qualification frameworks (European and national)?


Indicator 1.3.1: Number of programmes offered for licensed professions
• there are also professional training programmes in Norway that are not formally licensed (engineering, architecture, teacher training);
• professional licensing will differ between countries;
• what about lifelong learning programmes (e.g. experience-based master programmes)?
• perhaps two categories of professional programmes:
  o formally licensed and/or accredited by professional organisations;
  o implicitly licensed by 'acceptance in the professional field' (engineers, teachers);
• data may be difficult to compare.

Indicator 1.3.2: Number of programmes considered by the institution to be professional programmes
• will create much confusion and strategic behaviour;
• data will be difficult to compare.

Dimension 4: European educational profile

This dimension is incorporated in 'International orientation'.

Dimension 5: Research intensiveness

Indicator 2.2.1: Number of peer reviewed publications (per fte staff? per headcount of staff?)
• important indicator;
• normalise per field/discipline; it is also important to know the number of researchers per field in the institution;
• data can be provided; in Norway the national registration system (Frida) offers this information.

Extra indicator 2.2.1a: CWTS crown indicator
• too little information about it;
• if it addresses scientometric problems, it should be preferred;
• particular attention is needed for research outcomes in engineering (design results) and in the arts (concerts, performances, exhibitions, etc.); also make sure interdisciplinarity is addressed in a fair manner.

Dimension 6: Innovation intensiveness (research)

The most important indicator here is revenue generated from privately funded research as a % of total institutional funding (or total research funding? data on this are not available). This indicator can incorporate all research-related income, i.e. revenues from:
• licensing agreements;
• start-ups and spin-offs;
• contract research;
• selling shares.

An extra indicator under this dimension could be: number of invention disclosures. NTNU's Transfer Office files these data.


Indicator 2.2.2a: Number of start-ups
• important indicator;
• data can be provided.

Indicator 2.2.2b: Number of patents applied for
• important indicator;
• focus on filed patents;
• data can be provided.

Indicator 2.2.3: Financial volume of privately funded research contracts as a % of total research revenues
• most important indicator, but should be broadened to all research-related revenues;
• 'as a % of total research income' is difficult to show; better as a % of the total income of the institution;
• data can be provided, but this will be difficult.

Indicator 2.2.4: Turnover from licensing agreements
• unclear indicator;
• incorporate income from licensing agreements in indicator 2.2.3;
• data can be provided.

Extra dimension: cultural intensiveness of research
This dimension has been suggested (in the Advisory Board) as an additional one to address research outcomes related to the arts and humanities. It is assumed to cover the socio-cultural exploitation of research. So far no indicators have been specified. This dimension as such is not relevant for NTNU. However, NTNU would like to have design-related and artistic research outcomes integrated in the crown indicator (2.2.1).

Dimension 7: European research profile

Indicator 2.3.1: Financial turnover in EU research programmes as a % of total financial turnover
• important indicator;
• data available;
• perhaps an extra indicator: total number of industrial partners in EU-funded research projects.

Dimension 8: International orientation

Indicator 3.1.1: Number of international students as a % of total number enrolled
• important indicator;
• important to distinguish 'degree seeking students' (per level, particularly masters and PhD) and 'exchange students';
• important to distinguish 'incoming' and 'outgoing' exchange students; for 'degree seeking students' focus only on 'incoming';
• data can be easily provided.
• data can be easily provided.


Indicator 3.1.2a: Number of European students as a % of total enrolment
• important indicator;
• to be related to 3.1.1 (similar definitions);
• data available.

Indicator 3.1.2b: Number of programmes offered abroad
• not very relevant for NTNU; NTNU is interested in this only as development aid.

Extra indicator: Number of joint international programmes as a % of total number of programmes offered
• needs clear definitions and minimum standards for:
  o number of international partners (one is acceptable for NTNU);
  o total duration of stay (of students) in partner institution(s);
• data can be provided.

Indicator 3.1.3: Number of international staff as a % of total staff
• difficult to define; probably best by citizenship (but this creates confusion);
• important indicator;
• data available.

Dimension 9: Involvement in lifelong learning

This dimension needs a clear definition of lifelong learning. If it is defined by the age of students, confusion is created: the proportion of mature students is a characteristic of the student body, but not necessarily an indicator of involvement in lifelong learning. Also, the dimension should be transferred to education.

Indicator 3.2.1: Number of mature students (> 30 years) as a % of total enrolment
• unclear indicator;
• alternatives: number of programmes/courses offered to lifelong learning students; number of students with serious (?) work experience;
• data on age of students available.

Dimension 10: Size

Indicator 4.1.1: Enrolment
• important indicator;
• data available.

Indicator 4.1.2.1: Staff (headcount? fte?)
• important indicator;
• PhD students in Norway are staff; correct for this category;
• data available.


New indicator: Budget/total turnover
• needed, to relate to other indicators;
• data available.

Dimension 11: Mode of delivery

Indicator 4.2.1: Campus vs. distance (% of programmes? % of students?)
• this dimension should be transferred to 'education';
• unclear indicator, because more and more programmes are 'dual mode'; distributed learning is becoming more important.

Indicator 4.2.2: Number of part-time programmes offered as a % of total number of programmes
• this additional indicator was suggested by the Advisory Board;
• unclear indicator;
• perhaps relate it to Dimension 9 (lifelong learning)?

Dimension 12: Community services

This dimension is vague. Perhaps it is better not to use it.

Dimension 13: Public/private character

Indicator 4.4.1: % of private income in total income
• important indicator;
• private funding should include funding by public organisations that contract out specific tasks to the institution;
• data available.

Dimension 14: Legal status

Indicator 4.5.1: Public/private status
• to be defined in legal terms;
• in Norway universities are state institutions.


Case Study: University of Strathclyde

Introduction
The purpose of the case study was:
• to assess the potential use of a European classification;
• to find out to what extent the dimensions and indicators are seen to be relevant and feasible;
• to find out to what extent the necessary data would be available or could be produced.

General issues

The University of Strathclyde is an old Scottish university with its roots in the technological fields. During the last half century the scope of the university expanded, and it now has five faculties (Education; Engineering; Law, Arts & Social Science; Science; and a Business School). The university has a focus on entrepreneurship and on 'useful learning'.

The potential role of a European classification of higher education institutions was discussed in the context of the use and abuse of international league tables. These league tables, especially the Shanghai ranking, are very much focused on the American model of a research university. In many Asian higher education systems (the systems from which most international students potentially come) this model is seen as the dominant and best model. A European classification of higher education institutions, if it is robust and trustworthy, may serve to challenge the dominance of the American model and put the European model on the international stage.

The main use of a classification would be to contribute to the identification of robust benchmarks. Nowadays, benchmarks are identified on the basis of potentially outdated views and perceptions. Specialised higher education institutions, like the University of Strathclyde, have a problem identifying suitable benchmark institutions, and the use of league tables is not very helpful in this respect. The main impact the league tables have is on the recruitment of international (undergraduate) students and of international MBA students. A robust classification would help to overcome the problems of the league tables.

But the issue is not only an international one: it also has a national/UK dimension. The bulk of the competitors and potential benchmark institutions are located within the UK. The international student market is becoming more and more important, but a robust instrument to identify benchmark institutions is also essential for the internal market.

The university is developing a new strategy in which the performance of the departments and academic services is under scrutiny. The perception that the university's position was deteriorating was partly fed by information from the national rankings. The use of a classification as a benchmark-finding tool would improve the institution's capacity to position itself and act on that information.

Another area where a trustworthy classification would be very welcome is the international recruitment of students. The university's experience is that applications from international students, and more specifically Asian students, are very much influenced by (changes in) the position of the university in international rankings (like the Shanghai and Times Higher rankings).

A classification may also play a role in fundraising, but here its impact is expected to be limited. The bulk of fundraising activities is related to specific projects and has a predominantly regional or local focus.


Dimensions and Indicators

During six different sessions the dimensions and indicators were discussed, covering:
• education;
• research;
• international orientation;
• institutional aspects.

General practical aspects

The university has to comply with very detailed data-reporting requirements from both HESA and the Scottish higher education funding council. Most of the data elements of the classification questionnaire are covered in those data-reporting activities. However, the exact definitions and breakdowns requested in the draft-classification questionnaire are not always fully compatible with the definitions and breakdowns used in the national data-reporting activities.

The administrative burden for the university would be reduced, and the response rate might be raised, if the questionnaire were to follow the national data reporting as closely as possible. In addition, this may improve the reliability of the data, since the national data reporting is extensively validated; reworking the data increases the risk of errors and different interpretations.

The suggestions developed here were to check for national data repositories and to add a question to the questionnaire on whether the data requested are already reported to a national data agency (and if so, to which agency).

In the text below, the various remarks and conclusions are presented per indicator and per dimension.

Dimension 1: Types of degrees offered

Indicator 1.1.1: Highest degree offered
• a good and important indicator;
• data are easy to provide.

Indicator 1.1.2: Number of qualifications granted
• data are easy to provide.

Dimension 2: Range of subjects offered

Indicator 1.2.1: Number of subjects offered
• it was suggested to use national conversion tables for the indicator on subjects. University staff do not use the ISCED categories. The data reporting to the international organisations is done by national agencies (the ministry or the central statistical office), which use national conversion tables to convert the national subject categories into the ISCED categories. If such conversion tables could be presented (in an extra info field), institutions may be able to complete the question (see the illustrative sketch below).
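To make the conversion-table suggestion concrete, the following sketch shows how programme counts reported in national subject categories could be aggregated into ISCED-style broad fields. It is illustrative only: the national codes, field names and counts are invented for demonstration, and an actual implementation would rely on the official national conversion tables referred to above.

# Illustrative sketch (Python): the national subject codes, the broad field
# names and the counts below are invented; real use would load the official
# national conversion tables mentioned in the text.
national_counts = {
    "N110 Business studies": 12,
    "N421 Mechanical engineering": 8,
    "N612 Nursing": 5,
}
conversion_table = {
    "N110 Business studies": "Social sciences, business and law",
    "N421 Mechanical engineering": "Engineering, manufacturing and construction",
    "N612 Nursing": "Health and welfare",
}

def to_broad_fields(counts, table):
    """Aggregate programme counts from national categories into broad fields."""
    fields = {}
    for national_code, n in counts.items():
        field = table[national_code]
        fields[field] = fields.get(field, 0) + n
    return fields

print(to_broad_fields(national_counts, conversion_table))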

Dimension 3: Orientation of degrees

Indicator 1.3.1: Number of programmes offered for licensed professions
• data may be difficult to compare.

Indicator 1.3.2: Number of programmes considered by the institution to be professional programmes
• data will be difficult to compare.

Dimension 4: European educational profile

This dimension is incorporated in 'International orientation'.

Dimension 5: Research intensiveness

Indicator 2.2.1: Number of peer reviewed publications (per fte staff? per headcount of staff?)
• important indicator;
• the project team should be very clear in defining academic staff. In the RAE, teaching-only staff and research-only staff are separated from staff involved in both teaching and research. It was debated whether only the traditional academic staff (combining research and teaching) should be counted, or whether teaching assistants and research assistants should be counted as well; given international practice, the latter is the preferred option;
• an additional indicator for this dimension was suggested: 2.2.1a: CWTS crown indicator.

Dimension 6: Innovation intensiveness (research)

It was suggested that the indicator for the innovativeness of research (number of contracts with business and industry) could be refined. Looking at the number of new contract partners is a more telling indicator of innovativeness, as it takes a more dynamic view of innovation. Whether this information will be readily available for all UK higher education institutions can be questioned.

The university uses four indicators or instruments for the commercialisation of research activities: patents, licensing, royalties and spin-out companies. It also has information on the interaction of the university with SMEs.

Indicator 2.2.2a: Number of start-ups
• important indicator;
• data can be provided.

Indicator 2.2.2b: Number of patents applied for
• important indicator;
• data can be provided.

Indicator 2.2.3: Financial volume of privately funded research contracts as a % of total research revenues
• important indicator, but should be broadened to all research-related revenues;
• data can be provided;
• attention was focused on the way resources and costs are accounted for. The principle of full academic costing is relatively new to the university and it has raised the overall costs. When financial data on resources or turnover are used in an international comparison, it has to be clear that the same pricing method is used in all countries.

Indicator 2.2.4: Turnover from licensing agreements
• data can be provided;
• it was furthermore suggested to expand the indicator on the innovativeness of research: in addition to the volume of the contracts with industry, the institution should indicate to what extent the contracts are with local, national or international industry.

Dimension 7: European research profile

Indicator 2.3.1: Financial turnover in EU research programmes as a % of total financial turnover
• important indicator;
• data available;
• it was suggested to add an indicator or to expand this indicator; the new or expanded indicator should refer to international research income (not limited to European).

Dimension 8: International orientation

Indicator 3.1.1: Number of international students as a % of total number enrolled
• data can be easily provided.

Indicator 3.1.2a: Number of European students as a % of total enrolment
• data available.

Indicator 3.1.2b: Number of programmes offered abroad
• data can be provided.

Indicator 3.1.3: Number of international staff as a % of total staff
• data on nationality are not readily available.

Dimension 9: Involvement in lifelong learning

Indicator 3.2.1: Number of mature students (> 30 years) as a % of total enrolment
• unclear indicator;
• data on age of students available.

Dimension 10: Size

Indicator 4.1.1: Enrolment
• important indicator;
• data available.

Indicator 4.1.2.1: Staff (headcount? fte?)
• important indicator;
• data available.

Dimension 11: Mode of delivery

Indicator 4.2.1: Campus vs. distance

Indicator 4.2.2: Number of part-time programmes offered as a % of total number of programmes
• this additional indicator was suggested by the Advisory Board;
• data are available.

Dimension 12: Community services

The University of Strathclyde has conducted a study to assess the impact of the university on the region (Ursula Kelly, Donald McLellan and Iain McNicoll: Strathclyde means business: The impact of the University of Strathclyde on the economy of Scotland and on the City of Glasgow). The information from that study may be a valuable input for the discussions regarding the dimension on community services, even though the study is a quite intensive econometric analysis.

Dimension 13: Public/private character

Indicator 4.4.1: % of private income in total income
• important indicator;
• the fact that information on tuition fees is missing is seen as an omission;
• data available.

Dimension 14: Legal status

Indicator 4.5.1: Public/private status
• to be defined in legal terms.

Extra dimension: cultural intensiveness of research
This dimension has been suggested (in the Advisory Board) as an additional one to address research outcomes related to the arts and humanities. It is assumed to cover the socio-cultural exploitation of research. So far no indicators have been specified. This dimension as such is not relevant for the University of Strathclyde.


Annex III: The pilot survey

Introduction

Purpose of the pilot survey
The main purpose of the pilot survey was to test the questionnaires that were to be sent out to a larger group of higher education institutions. Identification of flaws in the questionnaires in the eyes of respondents in higher education institutions, and collection of their suggestions for amendments, were the major goals set for the pilot. A secondary purpose was to create a first version of a database of the indicators and dimensions selected.
version <strong>of</strong> a data base on the indicators and dimensions selected.<br />

Set-up
In co-operation with the Advisory Board, eleven higher education institutions were identified that volunteered to be test cases for the pilot questionnaires. In July 2007 two questionnaires were sent out to these higher education institutions: one on the dimensions and one on the indicators. By the end of August 2007 eight valid responses had been received. These eight included the two in-depth case study institutions presented in Annex II.

The questionnaires
Two questionnaires were sent out to the test case institutions, both as on-line versions only.

In the questionnaire on dimensions, two questions were asked for each individual dimension:
1. Is the dimension essential for profiling your own higher education institution? (probing the perceived relevance of the dimensions)
2. Is the indicator described a valid indicator? (probing the validity of the indicators: do they measure the phenomenon central to the dimension?)
Respondents had to use a slide bar to indicate to what extent they agreed or disagreed with the statements given.

In the questionnaire on the indicators, respondents had to answer two blocks of questions. The first block referred to the actual data and to information on reference data, such as totals and reference years. The second block referred to an assessment of the indicator in terms of feasibility and reliability. Respondents were furthermore invited to comment on the choice of indicators and the way they were measured.

The results

The dimensions
The classification phase I report concluded with 14 dimensions. Based on the discussions in the Advisory Board and the Stakeholder Group (12 December 2006), the dimension 'community engagement' was replaced by two other dimensions: 'regional engagement' and 'cultural engagement'.

The dimension 'cultural engagement' was considered to be the least essential for the profiles of the test cases. Comments on this dimension indicate that the way cultural engagement was defined raises some concern: the definitions are not very precise and the results may be interpreted in various ways. It was also mentioned that the scores on this dimension may be affected by the systemic and social context.

'Involvement in lifelong learning' and 'European research profile' did not score well either. The comments on the latter dimension questioned the narrow European focus and suggested broadening it.

International orientation, research intensiveness and size scored relatively well.

Figure 2: Responses to the statement 'this dimension is essential for the profile of our institution'

[Stacked bar chart, omitted here: for each dimension (type of highest degree, range of subjects, academic orientation, LLL, research intensiveness, innovation intensiveness, international orientation, European research profile, size, mode of delivery, public/private, cultural engagement, regional engagement), the share of respondents answering 'strongly agree', 'agree', 'neutral', 'disagree' or 'strongly disagree', on a 0-100% scale.]


The respondents were also asked to identify the three 'most important' and the three 'least important' dimensions. The responses are not very consistent with the previous results: 'cultural engagement' is most often considered 'least important', and 'involvement in LLL' is also relatively often mentioned as least important. In contrast with the previous results, 'European research profile' is not considered to be 'least important', whereas 'regional engagement' and 'public/private character' are. Research intensiveness and highest type of degree are most often considered most important. The low score of 'international orientation' and 'size' is not in line with the previous results.

Dimensions may be seen as essential for the profile of a higher education institution, but the dimensions that are essential for one institution are not necessarily the most important ones for another. Some comments on the ranking of dimensions corroborated this conclusion.

Figure 3: Scores on the three 'most important' and the three 'least important' dimensions

[Bar chart, omitted here: for each dimension, the number of respondents (0-7) naming it among the three 'most important' and among the three 'least important' dimensions.]

As a result of the pilot survey, the project team developed an instrument for showing the different profiles of the higher education institutions involved. In the figure below, the responses are presented by case, thus providing institutional profiles that may be read as 'mission-driven' profiles.


Figure 4: Institutional profiles based on the responses to the statement ‘this dimension is essential for the profile of our institution’
[One response-profile chart per case institution.]
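The profile charts in Figure 4 are essentially radar-style plots of the Likert responses per case. A minimal sketch of how such a chart could be produced is given below; the numeric coding of the answer categories, the shortened dimension labels and the response data are illustrative assumptions, not the project's actual instrument or data.

```python
# Minimal sketch (not the project's actual instrument): draw a 'mission-driven'
# profile per case from Likert responses on the fourteen dimensions.
# The example responses below are invented for illustration.
import numpy as np
import matplotlib.pyplot as plt

LIKERT = {"strongly disagree": 1, "disagree": 2, "neutral": 3,
          "agree": 4, "strongly agree": 5}

dimensions = ["degrees", "subjects", "orientation", "LLL", "research",
              "innovation", "international", "EU research", "size",
              "delivery", "public/private", "legal", "cultural", "regional"]

# Hypothetical responses for two cases (14 answers each).
cases = {
    "case 8":  ["strongly agree"] * 5 + ["agree"] * 4 + ["neutral"] * 5,
    "case 10": ["agree", "agree", "strongly agree"] + ["neutral"] * 11,
}

angles = np.linspace(0, 2 * np.pi, len(dimensions), endpoint=False)
fig, ax = plt.subplots(subplot_kw={"polar": True})
for name, answers in cases.items():
    scores = [LIKERT[a] for a in answers]
    # close the polygon by repeating the first point
    ax.plot(np.append(angles, angles[0]), scores + [scores[0]], label=name)
ax.set_xticks(angles)
ax.set_xticklabels(dimensions, fontsize=7)
ax.set_yticks(range(1, 6))
ax.legend(loc="upper right", bbox_to_anchor=(1.3, 1.1))
plt.savefig("profiles.png", dpi=150, bbox_inches="tight")
```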


The shapes of the profiles differ substantially. This is caused partly by differences in the outspokenness of the respondents: case 1 very often scores ‘neutral’, whereas cases 5 and 6 have ‘agree’ as their standard score. Despite this effect, cases do differ in their opinions on what is essential for the institutional profile. Case 8, for instance, is much more ‘research oriented’ than case 10.

5.4.1 Indicators
The focus of the pilot survey was to find out whether the higher education institutions could provide data (in terms of feasibility and reliability), whether the presentation and formulations used were adequate, and whether the respondents considered the selected indicators to be valid indicators for the dimension.

Dimension 1: highest degree offered
For this dimension two indicators were selected: the highest degree programme offered and the number of degrees granted by type of degree. The validity of these indicators was not challenged. There were some comments on feasibility. These comments referred to the predefined categories of types of degrees (doctorate, master and bachelor), which did not fit all higher education systems and programmes; the pre-Bologna programmes in particular caused difficulties. The second indicator proved open to misunderstanding: the number of degree programmes was sometimes reported instead of the number of qualifications awarded (i.e. the number of graduates).

Dimension 2: Range of subjects offered
For this dimension a list of nine subject areas was used, based on the ISCED classification of subjects 7. The use of the ISCED list raised some questions, since institutions use national classifications when reporting to their national agencies, not the international ISCED classification. The validity of the indicator, as well as its feasibility and reliability, was not challenged.

Dimension 3: Professional orientation of programmes
In the process of drafting the questionnaire it proved difficult to find adequate indicators for this dimension. Two indicators were chosen: the number of programmes leading to a certified or regulated profession and the number of programmes that respond to a specific demand. For the first indicator a link to an EU list of regulated professions was provided 8, but respondents appeared to be confused about the concepts used in this list. The validity of the first indicator was challenged by only one respondent, but the validity of the second indicator was questioned by almost all respondents. Feasibility and reliability did not score high either.


Dimension 4: Involvement in LLL
Life long learning is an issue that has been high on many political agendas for a number of years. In the higher education sector, LLL is discussed quite often, but what higher education institutions actually do in this area is not well documented. Finding an adequate indicator was therefore a tricky operation, in which the project team apparently did not fully succeed. The percentage of mature students (30+) enrolled was challenged as a valid indicator for involvement in LLL. For some the cut-off point (30 years) was too high, while others questioned the relation between age and life long learning. Feasibility and reliability were also ‘below standard’. Surprisingly, six out of ten institutions could provide data, although in half of the cases special calculations had to be made.

7 In ISCED-97 (the International Standard Classification of Education) programmes are classified into fields of education according to a 2-digit classification. The classification is consistent with the fields defined in the manual ‘Fields of Education and Training’ (Eurostat, 1999). For further information see OECD (2004), Handbook for Internationally Comparative Education Statistics, Paris.
8 The EU has developed guidelines for the recognition of professional qualifications. A list of European regulations and national lists of regulated professions can be found at: http://ec.europa.eu/internal_market/qualifications/regprof/index.cfm


Dimension 5: research intensiveness
Research intensiveness is measured by two indicators: the CWTS ‘crown’ indicator 9 (a citation-based composite indicator) and the number of peer reviewed publications per fte academic staff. Although the ‘crown’ indicator is seen as a state-of-the-art indicator as far as citation scores are concerned, only half of the respondents agreed that it is a valid indicator for research intensiveness. From the comments we conclude that the reluctance towards this indicator is based on the argument that the social sciences, humanities and arts are poorly represented when citation scores are used as an indicator. The term ‘peer reviewed’ evoked some comments. Some institutions equate peer reviewed with refereed, and in some institutions (especially universities of applied sciences) no distinction is made between normal publications and peer reviewed publications. The reliability of the data from this indicator is questioned by half of the respondents, whereas the other half strongly supports it as a valid one. Feasibility scores relatively low.

Dimension 6: Innovation intensiveness
Four indicators were selected for this dimension: the number of start-up firms (annual average over the last three years), the number of patent applications filed per fte academic staff, licensing income as a percentage of total income, and the financial volume of private research contracts as a percentage of total research revenues. Patents and private research contracts are seen by most respondents as valid indicators, although some respondents question the validity of patents as indicators of innovation intensiveness. There were furthermore some comments on the scope of patents and licensing income: whether or not it included medicine.

Dimension 7: International orientation
This dimension covers the international orientation of a higher education institution in teaching and training. The first indicator addresses the ‘free movers’: students who enrol abroad with the intention of obtaining a full degree. The second indicator addresses the European mobility programmes and the activities of the institution in that area. Programmes offered abroad (off-shore teaching) is the third indicator in this dimension. The fourth indicator refers to the international character of the academic (teaching) staff. The validity of the first and fourth indicators is not or only marginally challenged. The validity of the European mobility indicator is challenged more, because of its focus on Europe only. There are also some comments on practical issues such as the use of the academic year, which European programmes to include, and which degree levels of students to include. The indicator ‘programmes delivered abroad’ also raised some questions: should joint degree programmes be excluded? What does delivery abroad actually mean? Feasibility and reliability of this indicator did not score very well.

Dimension 8: European research profile
The financial turnover in European research programmes as a percentage of total turnover in research programmes was not considered to be a very valid indicator for this dimension. There was furthermore a technical comment on the difference between total turnover and total revenues. Data are available, but they proved not all that easy to collect.

9 The field-normalised citation score, developed by the Centre for Science and Technology Studies (CWTS) at Leiden University, is better known as the ‘crown indicator’. This bibliometric indicator focuses on the impact of the publications of a research group or institute and relates it to a worldwide field-specific reference value. Further information on this crown indicator can be found at the website of CWTS (www.cwts.nl).
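For readers unfamiliar with this indicator, a commonly used formulation of such a field-normalised score is sketched below. The notation is ours and only illustrates the general idea (observed citations per publication divided by the expected, field-specific citation rate); it is not a definition taken from this report or from CWTS documentation.

\[
\text{CPP/FCSm} \;=\; \frac{\frac{1}{P}\sum_{i=1}^{P} c_i}{\frac{1}{P}\sum_{i=1}^{P} e_i}
\]

where \(P\) is the number of publications of the unit, \(c_i\) the number of citations received by publication \(i\), and \(e_i\) the worldwide average number of citations for publications of the same field, document type and year. A value above 1 would indicate above-average impact in the fields in which the unit publishes.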



Dimension 9: Size
The two indicators in this dimension (students enrolled and staff volume) are more or less standard data that are readily available for all institutions. The validity of these indicators for this dimension is not questioned either. One institution raised the issue of academic year versus calendar year: the former would make life much easier.

Dimension 10: Mode of delivery
Two indicators were identified: the number of distance programmes offered as a percentage of the total number of programmes offered, and the number of part-time programmes offered as a percentage of the total number of programmes offered.
Some cases did not provide distance programmes; others had difficulty providing data on them. It was also not clear to what extent blended learning had to be included. The percentage of part-time programmes is seen as a valid indicator in this dimension, although in some systems part-time programmes do not exist (or were only recently allowed). Information on part-time programmes proved easier to collect and slightly more reliable than information on distance learning programmes.

Dimension 11: Public private character
The income from competitive and non-competitive government funding as a percentage of total revenues was the main indicator in this eleventh dimension. The comments made clear that the distinction between competitive and non-competitive funds caused some confusion. In two cases the funding from national research councils was excluded because the definition was not clear (enough). Two other cases also commented on the vague definitions. The validity of this indicator was, however, unchallenged.
The second indicator in this dimension concerns tuition fees: the annual tuition fees by category of student and level of degree. The validity of this indicator was challenged, mainly because of the lack of information on the total volume of tuition-related income.

Dimension 12: Legal status
Two cases reported some difficulty in understanding what information was asked for. The omission of an info-screen may have contributed to this. The information proved easy to collect.


Dimension 13: Cultural engagement
The respondents in the pilot survey were not very enthusiastic about the two indicators in this dimension (the number of concerts and the number of exhibitions). The validity is challenged and data are difficult to collect.

Dimension 14: Regional engagement
Regional engagement is one of the more experimental dimensions. The literature does not give any clear-cut indicators for this dimension. The project team produced three indicators: the annual turnover in EU structural funds, the number of graduates staying in the region, and the number of extracurricular courses offered for the regional labour market.
The latter two indicators suffered from the lack of a clear definition of the region. The validity of the ‘structural funds’ indicator was severely challenged, as was the validity of the indicator on graduates in the region. The reliability of the data was low for all indicators in this dimension and it proved difficult to collect information on them.


Conclusions
The pilot survey showed that the questionnaires were a suitable instrument for collecting information and data for the project. However, the time and effort needed to complete the questionnaires, especially the one on indicators, proved to be an obstacle that may keep higher education institutions from completing the questionnaires and participating in the classification.
Based on the results of the pilot survey, a number of changes were made to the questionnaires:
• The purpose of the survey was clarified and the definitions of the indicators were upgraded.
• The assessment panel (the lower part of the pages on the indicators) was upgraded: the ‘sliders’ were replaced by 4-point clickable scales, the time-indication item was improved, and an additional question was added to the questions regarding the use of existing sources.
• Respondents were no longer ‘forced’ to use predefined types of degree programmes, but were invited to use self-reported (national) types of programmes.
• An additional indicator on the legal status of the institution was developed. The actual information was complemented with a question regarding the perceived status of the institution (using OECD definitions).


Annex IV: The CEIHE II survey

Contents

Introduction
  Rationale of the survey
Set up and response
  Set up
  Response
The dimensions; scores on relevance
  Overview of the opinions
The indicators
  Validity of indicators
  Reliability
  Feasibility of indicators
    Time needed to collect information
    Ease to collect
    Data from existing source
    Valid cases
    Overview
  Results
    Indicator 1a: Highest level of degree offered
    Indicator 1b: Number of degrees awarded in each type of degree
    Indicator 2a: Number of subject areas offered
    Indicator 3a and 3b: Orientation of programs
    Indicator 4a: Enrolment by age
    Indicator 5a: Annual number of peer reviewed publications relative to the total number of academic staff
    Indicator 6a: The number of start-up firms
    Indicator 6b: Number of patent applications filed per fte academic staff
    Indicator 6c: The annual licensing income
    Indicator 6d: Financial volume of privately funded research contracts as a percentage of total research revenues
    Indicator 7a: Foreign degree seeking students as a percentage of total enrolment in degree programs
    Indicator 7b: Incoming EU exchange students as a percentage of the total number of students, by level of degree
    Indicator 7c: EU exchange students sent out as a percentage of the total number of students, by level of degree
    Indicator 7d: International academic staff as a percentage of total staff (all headcount)
    Indicator 7e: Programs delivered abroad
    Indicator 8a: Financial turnover in EU research programs as a percentage of total research turnover
    Indicator 9a: Enrolment
    Indicator 9b: Number of staff
    Indicator 10a: Percentage of programs offered as distance learning program
    Indicator 10b: The percentage of programs offered as part time programs
    Indicator 10c: The percentage of students enrolled as part time students
    Indicator 11a: Percentage of funding from government funding
    Indicator 11b: Income from tuition fees
    Indicator 12a: Legal status
    Indicator 13a: Concerts and performances
    Indicator 13b: Exhibitions
    Indicator 14a: Annual turnover in EU structural funds
    Indicator 14b: Graduates in the region
    Indicator 14c: Extracurricular courses
    Indicator 14d: Importance of regional sources
Discussion
  ‘Challenging’ dimensions
  Clustering dimensions
  In conclusion
References
Appendix 1: Comments


List of figures

Figure 1: ‘this dimension is essential for the profile of our institution’
Figure 2: Most and least important dimensions
Figure 3: Opinions regarding the statement ‘this indicator is a valid indicator’
Figure 4: Opinions on the statement ‘information is reliable’
Figure 5: Minutes needed to report data; average plus and minus 1 standard error
Figure 6: Percentage of total time needed to report data; average plus and minus one standard error
Figure 7: Scores on ‘the information is easy to find’
Figure 8: Number of responding higher education institutions using existing sources, by indicator
Figure 9: Number of valid responses, by indicator
Figure 10: Responding higher education institutions by highest degree program offered
Figure 11: Percentages of degrees awarded, by type of degree
Figure 12: Graduate intensity (graduate degrees awarded as % of total degrees awarded)
Figure 13: Dominant degree level (degrees awarded; 40% cut-off point)
Figure 14: Number of responding higher education institutions by number of subject areas offered
Figure 15: Higher education institutions by percentage of professionally oriented programs offered
Figure 16: Higher education institutions by ratio of programs for certified profession/programs with professional orientation (subjectively assessed)
Figure 17: Higher education institutions by the percentage of mature students enrolled, by type of degree program; mature=30+
Figure 18: Higher education institutions by the percentage of mature students enrolled, by type of degree program; mature=25+
Figure 19: Higher education institutions by the number of peer reviewed publications per academic staff member
Figure 20: Higher education institutions by research income as % of total income
Figure 21: Higher education institutions by number of start-up firms (annual average over last three years)
Figure 22: Higher education institutions by patent applications per fte academic staff
Figure 23: Higher education institutions by the percentage of licensing income
Figure 24: Higher education institutions by privately funded research contracts as % of total research revenues
Figure 25: Higher education institutions by proportion of foreign degree seeking students, by type of program
Figure 26: Higher education institutions by the percentage of incoming EU exchange students, by type of degree
Figure 27: Higher education institutions by the percentage of EU exchange students sent out, by type of degree
Figure 28: Higher education institutions by % of international academic staff
Figure 29: Higher education institutions by % of programs offered abroad, by level of program
Figure 30: Higher education institutions by turnover in EU research programs as % of total research revenues
Figure 31: Higher education institutions by number of students enrolled
Figure 32: Higher education institutions by fte academic staff
Figure 33: Higher education institutions by ratio non-academic/academic staff
Figure 34: Higher education institutions by % of programs offered as distance learning program, by level of program
Figure 35: Higher education institutions by % of programs offered as part-time program, by level of program
Figure 36: Higher education institutions by % of part-time students, by level of program
Figure 37: Higher education institutions by % of government funding
Figure 38: Higher education institutions by tuition fee income as % of total income
Figure 39: Higher education institutions by public private status
Figure 40: Higher education institutions by concerts and performances per staff member
Figure 41: Higher education institutions by exhibitions per staff member
Figure 42: Higher education institutions by annual turnover in EU structural funds as % of total income
Figure 43: Higher education institutions by extracurricular courses offered
Figure 44: Higher education institutions by score on importance of different sources of income
Figure 45: Mapping of the dimensions and the correlations between the scores on relevance


List of tables

Table 1: Sampling strata
Table 2: Age strata
Table 3: Size strata
Table 4: Higher education institutions by region (in IAU database and CEIHE II survey)
Table 5: Overview of indicators and dimensions
Table 6: Percentage of strongly disagree or disagree on statement ‘this indicator is a valid indicator’
Table 7: Average time spent on collecting and reporting data, per indicator
Table 8: Opinions on the statement ‘information is easy to find’
Table 9: Percentage of the responding higher education institutions using existing sources, by indicator
Table 10: Percentage of valid responses, by indicator
Table 11: Grouping of indicators by feasibility score
Table 12: Correlations between dimensions


Introduction

Rationale of the survey
The classification is intended to be based on the actual behavior of higher education institutions. The relevant aspects of that behavior are organized in 14 ‘dimensions’ and measured with 32 indicators. Information on these indicators at the institutional level is difficult to find in international databases. National data sources usually contain more relevant information, but their use is limited because of various practical, legal, and even methodological problems.
Therefore a survey among higher education institutions was set up. This survey serves three purposes:
• to assess the relevance of the dimensions selected
• to assess the quality of the indicators selected
• to provide data that will allow further analyses of the dimensions and their clustering, and of the indicators and their potential and pitfalls.
The results of the survey are presented in this annex. In the conclusion, some analyses and ideas are discussed on how to proceed with the clustering of the dimensions and the transformation of the survey results into a classification tool.

Set up and response

Set up
The survey consists of two questionnaires: a questionnaire on the dimensions, querying the relevance of the dimensions and of the indicators selected, and a questionnaire on the indicators. The latter asks for data on the 32 indicators selected as well as for an assessment of the quality of the indicators.
Draft questionnaires were developed based on the dimensions and indicators identified and selected at the end of phase I of the project (van Vught and Bartelse 2005). These draft questionnaires were tested and discussed in the pilot survey of eight cases (including the two case studies reported in annex II). Based on the results of these tests, the questionnaires were adjusted and put on-line for the survey (for a pdf version of the questionnaires see www.cheps.org//ceihe_dimension.pdf and www.cheps.org//ceihe_indicators.pdf).
The intended size of the sample for the survey was 100 higher education institutions. To keep the non-response rate as low as possible, the networks of higher education institutions represented in the Advisory Board were asked to introduce the project and identify contact persons. Around 160 higher education institutions were contacted in this way. A second channel through which potential participants in the survey were identified was an open web-based procedure. On the project website (www.cheps.org/ceihe) higher education institutions could express their interest in participating. Based on the information provided on the expression-of-interest form, the project team decided whether an applicant could participate. In total 16 higher education institutions were selected this way. A last way to invite institutions to participate was through national and international conferences. On a number of occasions the project was presented and a call for participation was made. Although it is not possible to determine how many responding higher education institutions came through that channel, it was obvious that the strong participation of Polish and Turkish institutions was triggered through it.

The rationale of the project is to show the diversity of European higher education. To achieve that goal we are developing a classification tool. To ensure that the tool can capture this diversity, we have to make sure that the data on which the tool is developed and tested are sufficiently diverse. If the data are too homogeneous, we cannot be sure that the classification tool developed using those data can capture the full diversity of European higher education. To create the required diversity in the experimental data set, the sample needs to be stratified.
Based on the results of the first phase of the project, six stratification criteria were selected: size, age of the institution, scope (comprehensive versus specialised), highest degree offered, research orientation, and country/region. To determine where the boundaries between the strata should lie, we needed some information on all higher education institutions in Europe (Moors and Muilwijk 1975, pp. 63-65). This is problematic, since our previous analyses showed that there is no comprehensive database comprising all higher education institutions. The most comprehensive database is that of the International Association of Universities (IAU). However, there is no reliable information in that database on scope, highest degree offered and research orientation. These stratification criteria therefore had to be dropped.
The strata in age and size were based on the information on 1634 universities and 1498 non-university higher education institutions in the IAU database. For the identification of the regions, the UN classification of regions was used 1. In this classification Europe is divided into Eastern, Northern, Southern and Western Europe. Turkey is categorized by the UN as Asia, but in this project it is categorized in Southern Europe.

1 http://unstats.un.org/unsd/methods/m49/m49regin.htm#europe
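The age and size boundaries in Table 1 below correspond to quartiles of the IAU population. As a rough sketch of how such stratum boundaries could be derived, one might proceed as follows; the file name, column names and reference year are hypothetical and only illustrate the idea, they do not describe the project's actual procedure.

```python
# Sketch: derive stratum boundaries as quartiles of the IAU population and
# assign each institution to an (age, size) stratum. 'iau.csv' and its
# column names are hypothetical.
import pandas as pd

iau = pd.read_csv("iau.csv")            # columns assumed: year_founded, enrolment, country
iau["age"] = 2007 - iau["year_founded"]  # reference year is an assumption

# Quartile boundaries for age and size (three cut points -> four strata each)
age_cuts = iau["age"].quantile([0.25, 0.5, 0.75])
size_cuts = iau["enrolment"].quantile([0.25, 0.5, 0.75])

iau["age_stratum"] = pd.cut(iau["age"], [-1, *age_cuts, float("inf")],
                            labels=["youngest", "young", "old", "oldest"])
iau["size_stratum"] = pd.cut(iau["enrolment"], [-1, *size_cuts, float("inf")],
                             labels=["small", "medium", "large", "largest"])

# The region stratum would follow the UN classification (Turkey re-coded to South).
print(pd.crosstab(iau["age_stratum"], iau["size_stratum"]))
```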

Table 1: Sampling strata

Age (year founded)   Size (students enrolled)   Region
1816 or earlier      1572 or less               North (Denmark, Finland, Sweden, Norway, Ireland, UK, Latvia, Estonia, Lithuania)
1817-1917            1573-6400                  West (Austria, France, Germany, Belgium, the Netherlands, Luxembourg, Switzerland)
1918-1972            6401-15539                 South (Greece, Italy, Spain, Portugal, Malta, Cyprus, Slovenia, Turkey)
1973 or later        15540 or more              East (Bulgaria, Czech Republic, Slovakia, Hungary, Poland, Romania, Russia)

Response
67 higher education institutions submitted a valid response to the indicator questionnaire and 85 responded to the dimensions questionnaire.
Age
In terms of age, the response is skewed towards the younger categories (see table below). A possible explanation for this may be found in the underrepresentation of non-university institutions in the IAU data. Since non-university institutions are on average younger than universities, it is plausible that (part of) the ‘skewed sample’ is due to this factor. In the table we also present an alternative definition of the age classes, based on the responses to the survey.


Table 2: Age strata

IAU-based strata                Survey-based strata
Older than 190     14.1%        Older than 95      28.2%
91-190             15.3%        41-95              23.5%
35-90              31.8%        20-40              23.5%
Younger than 35    38.8%        Younger than 20    24.7%

Size
Based on the IAU-based size strata we may conclude that the sample is skewed towards the larger higher education institutions. Apparently, larger higher education institutions have more resources, commitment or opportunities to participate in the survey. Whether this conclusion will also hold with a larger sample remains to be seen.

Table 3: Size strata

IAU-based strata                 Survey-based strata
Less than 1,573     9.0%         Less than 7,500     23.9%
1,573-6,400        10.4%         7,500-15,000        20.9%
6,401-15,539       25.4%         15,000-30,000       31.3%
More than 15,540   55.2%         More than 30,000    23.9%

Region
In the IAU database the Western Europe category is relatively big and the Southern category relatively small. The responding higher education institutions are fairly evenly distributed across the UN regions. This discrepancy is to a large extent caused by the Turkish institutions in the sample (which are not in the IAU database).

Table 4: Higher education institutions by region (in IAU database and CEIHE II survey)

            IAU                                       Survey
        non-univ       univ          total            total
East    374 (42%)    319 (21%)    693 (28%)         16 (19%)
North   240 (27%)    304 (20%)    544 (22%)         20 (23%)
South   114 (13%)    252 (16%)    366 (15%)         23 (27%)
West    172 (19%)    659 (43%)    831 (34%)         26 (31%)

The dimensions; scores on relevance
The question whether ‘this dimension is essential for the profile of our institution’ is a central question in the project. It probes the opinion of one of the key stakeholders in the debate on the classification of higher education institutions regarding the issues that are essential for profiling their higher education institution.
The results of this question can be used in two ways. First of all, they can be used to produce an overview of the opinions regarding the relevance of the fourteen dimensions described.
The scores on relevance can also be used to cluster the dimensions. For a classification tool, fourteen dimensions might be judged to be too many, which calls for a reduction of the number of dimensions. One way to do this is by analysing the correlations between the scores on relevance of the fourteen dimensions and seeing whether clusters of dimensions emerge.
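A minimal sketch of such a correlation-based clustering is given below. The data layout, the use of Spearman correlations and average-linkage hierarchical clustering are our own illustrative assumptions; the report does not prescribe a particular algorithm at this point.

```python
# Sketch: cluster the fourteen dimensions on the basis of the correlations
# between their relevance scores. 'relevance' is assumed to be a DataFrame with
# one row per responding institution and one column per dimension (scores 1-4).
import numpy as np
import pandas as pd
from scipy.cluster.hierarchy import linkage, fcluster

def cluster_dimensions(relevance: pd.DataFrame, n_clusters: int = 5) -> dict:
    corr = relevance.corr(method="spearman")        # 14 x 14 correlation matrix
    distance = (1 - corr).values                    # high correlation -> small distance
    condensed = distance[np.triu_indices_from(distance, k=1)]  # upper triangle as vector
    tree = linkage(condensed, method="average")
    labels = fcluster(tree, t=n_clusters, criterion="maxclust")
    return dict(zip(relevance.columns, labels))
```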



Overview of the opinions
The questions regarding relevance were on average completed by almost 95% of the responding higher education institutions. For eight out of the 14 dimensions, more than 80% of the responding higher education institutions agreed on their relevance: dimensions 1, 2, 3, 5, 7, 9, 11, and 12. There was only one dimension (13, cultural engagement) on which less than 60% agreed as being relevant.

Figure 1: ‘this dimension is essential for the profile of our institution’
[Stacked bar chart per dimension (1: types of degrees offered through 14: regional engagement), showing the shares of ‘strongly disagree’, ‘disagree’, ‘agree’ and ‘strongly agree’ responses.]
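The agreement percentages quoted above can be derived directly from the dimensions questionnaire. A minimal sketch, assuming the responses are available as lists of answer categories per dimension (the data layout is an assumption, not the project's database):

```python
# Sketch: share of responding institutions that (strongly) agree that a dimension
# is essential for their profile. 'responses' maps each dimension to a list of
# answer strings; missing answers are represented by empty strings.
def agreement_share(responses: dict) -> dict:
    shares = {}
    for dimension, answers in responses.items():
        valid = [a for a in answers if a]                    # ignore missing answers
        agreeing = sum(a in ("agree", "strongly agree") for a in valid)
        shares[dimension] = 100 * agreeing / len(valid)
    return shares

# Example with invented data:
example = {"cultural engagement": ["agree", "disagree", "disagree", "strongly agree"]}
print(agreement_share(example))   # {'cultural engagement': 50.0}
```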

The relative relevance of the dimensions was also measured by the final question of the dimensions questionnaire: the ranking question. For this question the respondents were asked to list the three most important and the three least important dimensions (see Figure 2).
Dimensions 1 (types of degrees), 5 (research intensiveness), 7 (international orientation: teaching and staff), and 2 (range of subjects) were mentioned as most important by more than one third of the responding higher education institutions. Dimensions 10 (mode of delivery) and 13 (cultural engagement) were mentioned as least important by more than one third of the respondents.

To find out whether there is consensus among the responding higher education institutions regarding relative importance, we also compared the number of times a dimension was rated least important to the number of times it was rated most important. The scores on three dimensions (involvement in LLL, international research orientation and size) are mixed: the ratios between the most important and least important scores are roughly around 65%. There is clearly no consensus regarding the relative importance of these three dimensions. On the remaining dimensions the consensus is much higher. Six dimensions are seen more frequently as most important than as least important: 1, 2, 3, 5, 6, and 7. The balance is most negative for the dimensions mode of delivery, cultural engagement, public private character and legal status.
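The comparison described above can be expressed as a simple balance per dimension. The sketch below assumes the rankings are available as lists of three dimension names per respondent; the input format and the ratio used are illustrative assumptions only.

```python
# Sketch: compare how often each dimension is listed among the three most
# important versus the three least important dimensions.
from collections import Counter

def importance_balance(most_lists, least_lists):
    """most_lists / least_lists: one list of three dimension names per respondent."""
    most = Counter(d for triple in most_lists for d in triple)
    least = Counter(d for triple in least_lists for d in triple)
    balance = {}
    for dim in set(most) | set(least):
        m, l = most[dim], least[dim]
        # a ratio close to 1 (e.g. around 0.65 either way) signals a lack of consensus
        ratio = min(m, l) / max(m, l) if max(m, l) else 0.0
        balance[dim] = {"most": m, "least": l, "ratio": ratio}
    return balance
```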

A lack of consensus is not a disqualifying characteristic. It merely means that the responding higher education institutions differ in their opinion regarding the relevance of those dimensions for the profile of their institution.
What is more problematic (or more difficult to interpret) is that the results of the overall ranking and the relevance scores on the individual dimensions are not completely consistent. Public private character, legal status and, above all, size score high on the relevance scale, whereas they are very frequently mentioned as the least important dimensions. Innovation intensiveness scores relatively low on the relevance scale but is frequently mentioned as a most important dimension.

Figure 2: Most and least important dimensions
[Bar chart showing, per dimension (from type of degree to regional engagement), the number of times it was listed among the three most important and the three least important dimensions.]


The indicators
The positions of the higher education institutions, i.e. how they score on the fourteen dimensions, cannot be determined using the abstract descriptions presented in the previous chapter. For that purpose 32 indicators were selected. These indicators can be seen as quantitative information that can be used to assess the position of a higher education institution on the dimensions. Many of the indicators specified are composite indicators, combining two or more data elements.
In this report we often use a shorthand ‘code’ when referring to the indicators. This ‘code’ consists of a number (referring to the dimension the indicator belongs to) and a letter (see Table 5).
This chapter consists of two parts. In the first part we elaborate on three characteristics of the 30 indicators that were used in the on-line questionnaires. First we look into the validity of the indicators: do the responding higher education institutions think that the indicators we have selected measure the phenomena we are investigating? Do the indicators convey a ‘correct’ picture of the dimension?
After answering that question, the focus shifts to the question of whether the information reported is trustworthy: the perceived reliability of the information reported. Since there are significant differences in the status of the indicators (some are based on widely accepted standard statistics, whereas others have a more experimental character), the project team thought it imperative to check the perceived reliability of the information reported.
The final characteristic of the indicators discussed is whether it is feasible for responding higher education institutions to collect the information on the indicators. This issue was one of the main reasons for the survey. A large part of the information underlying the classification has to come from the individual higher education institutions. Given growing survey fatigue and the administrative burdens higher education institutions have to face, it is crucial to know how higher education institutions think about the burden this questionnaire puts on them. Four indications of feasibility are described: the time needed to find and report the information, the perceived ease of finding the information, the use of existing sources, and the percentage of valid responses received.
The second part of the chapter comprises the scores on the indicators.


Table 5: Overview of indicators and dimensions

Dimension 1: types of degrees offered
  1a: highest level of degree program offered
  1b: number of qualifications granted in each type of degree program
Dimension 2: range of subjects offered
  2a: number of subject areas covered by an institution, using the UNESCO/ISCED subject areas
Dimension 3: orientation of degrees
  3a: the number of programs leading to certified/regulated professions as a % of the total number of programs
  3b: the number of programs offered that answer to a particular demand from the labour market or professions (as % of the total number of programs)
Dimension 4: involvement in life long learning
  4a: number of adult learners as a % of total number of students, by type of degree
Dimension 5: research intensiveness
  5a: number of peer reviewed publications per fte academic staff
  5b: the ISI-based citation indicator, also known as the ‘crown indicator’
Dimension 6: innovation intensiveness
  6a: the number of start-up firms
  6b: the number of patent applications filed
  6c: the annual licensing income
  6d: the revenues from privately funded research contracts as a % of total research revenues
Dimension 7: international orientation: teaching and staff
  7a: the number of degree seeking students with a foreign nationality, as % of total enrolment
  7b: the number of incoming students in European exchange programs, as % of total enrolment
  7c: the number of students sent out in European exchange programs
  7d: international staff members as % of total number of staff members
  7e: number of programs offered abroad
Dimension 8: international orientation: research
  8a: the institution’s financial turnover in European research programs as % of total financial research turnover
Dimension 9: size
  9a: number of students enrolled (headcount)
  9b: number of staff members employed (fte)
Dimension 10: mode of delivery
  10a: number of distance learning programs as % of total number of programs
  10b: number of part-time programs as % of total number of programs
  10c: number of part-time students as % of total number of students
Dimension 11: public/private character
  11a: income from (competitive and non-competitive) government funding as a % of total revenues
  11b: income from tuition fees as % of total income
Dimension 12: legal status
  12a: legal status
Dimension 13: cultural engagement
  13a: number of official concerts and performances (co-)organised by the institution
  13b: number of official exhibitions (co-)organised by the institution
Dimension 14: regional engagement
  14a: annual turnover in EU structural funds as % of total turnover
  14b: number of graduates remaining in the region as % of total number of graduates
  14c: number of extracurricular courses offered for the regional labour market
  14d: importance of local/regional income sources*

* did not appear in the dimensions questionnaire


Validity of indicators
For each of the 14 dimensions one or more indicators have been selected. The scores on these indicators have to convey a correct, or at least plausible, picture of the dimension they belong to. This validity was assessed by a question in the dimensions questionnaire: the higher education institutions were asked to give their opinion on the statement ‘indicator a is a valid indicator for this dimension’.
The perceived validity varied substantially between indicators. For eight indicators, fewer than 15% of the responding higher education institutions (strongly) disagreed with the statement that the indicator was a valid one. For 12 indicators the respondents had some doubts regarding the validity: between 30% and 50% of the responding higher education institutions indicated that they did not consider those indicators to be valid (within the dimension they are presented in).

Table 6: Percentage of strongly disagree or disagree on the statement ‘this indicator is a valid indicator’

Less than 15%:  1a, 2a, 7a, 7b, 7c, 7d, 9a, 9b
15%-29%:        1b, 3a, 5a, 5b, 8a, 10a, 10b, 10c, 11a, 11b, 12a
30%-50%:        3b, 4a, 6a, 6b, 6c, 6d, 7e, 13a, 13b, 14a, 14b, 14c

There are five dimensions where the validity of the indicators selected raises some doubts: 3 (orientation of degrees) 2, 4 (involvement in life long learning) 3, 6 (innovation intensiveness) 4, 13 (cultural engagement) 5, and 14 (regional engagement) 6. These five dimensions have a more experimental status than the other dimensions, and because of that this outcome is very much what could be expected.

2 Comments referred to the subjective and ‘vague’ character of indicator 3b. There were furthermore some comments that the indicators could not differentiate between academic and non-academic or professional institutions. The project team deliberately avoided this ‘traditional’ dichotomy in the definitions, to break free of these highly institutionalized labels.
3 Comments were on the cut-off point. In some systems other definitions of ‘mature’ students are used (e.g., over 21 years at entrance in the UK), which may lead to confusion. It was also mentioned that national differences in entrance age and the different ways in which programs are organized may lead to different age structures of the student body. In those cases the indicator does not identify differences in involvement in LLL but systemic differences.
4 Comments mainly referred to national differences in patenting practices.
5 The indicators are considered to be too ‘simplistic’ and not to cover the full width of cultural activities.
6 Comments revealed some problems regarding the demarcation of the region, and the weak link between the eligibility of the region for structural funds and the regional engagement of a higher education institution. It was furthermore suggested to use the indicator on start-ups (6a) as an indicator for this dimension as well.
suggested to use the indicator on start-ups (6a) as an indicator for this dimension as well.


Figure 3: Opinions regarding the statement ‘this indicator is a valid indicator’
[Stacked bar chart per indicator (1a through 14d), showing the shares of ‘strongly disagree’, ‘disagree’, ‘agree’ and ‘strongly agree’ responses.]


Reliability
The indicators selected differ in status. Some are already used in different contexts and build on standard data, whereas others are ‘experimental’ and use information that is not part of the set of commonly reported data. This has consequences for the perceived validity of the indicators (see above), but it may also have consequences for the perceived reliability of the information reported.
In most higher education systems the definitions and data collection procedures for standard data, such as the number of students enrolled or the number of staff, are harmonized. Because of this, it is more than likely that the data reported are not influenced by the person or department that provides them. As long as the procedures and definitions are followed, the data will be trustworthy. For the ‘experimental’ indicators, definitions and procedures are not (yet) harmonized. For these indicators the data reported may depend on the person or department that reports them. To find out whether the responding higher education institutions perceive such a reliability problem, they were asked to respond to the statement ‘the information is reliable’.
The responses are very positive about the reliability of the information provided. For 25 indicators, at least five out of six responding higher education institutions (strongly) agreed with the statement ‘information is reliable’. The indicators on which slightly more responding higher education institutions had some doubts regarding reliability are 3a and 3b (orientation of degrees), 6d (revenues from private contracts) and 14b and 14c (regional engagement).
This very positive result may be biased, because the person giving the opinion has most likely put a lot of effort into collecting the information. Avoiding cognitive dissonance may lead the respondent to an ‘over-positive’ assessment.


Figure 4: Opinions on the statement ‘information is reliable’
[Stacked bar chart per indicator (1a through 14d), showing the shares of ‘strongly disagree’, ‘disagree’, ‘agree’ and ‘strongly agree’ responses.]

Feasibility <strong>of</strong> indicators<br />

The main outcome <strong>of</strong> the analysis <strong>of</strong> national and international data sources on higher education<br />

was that higher education institutions will have a leading role in the collection <strong>of</strong> data. (Inter)national<br />

data sources simply do not breakdown information by individual institutions or if they do, privacy<br />

regulations prevent these sources from publishing data at the individual institution level. This put<br />

the heavy burden <strong>of</strong> data provision on the higher education institutions. Given survey fatigue among<br />

higher education institutions it is most important to know whether it is feasible for an institution to<br />

collect and report the information and data asked for in the questionnaires.<br />

To assess the feasibility <strong>of</strong> the process <strong>of</strong> collecting and reporting the data we used four indications:<br />

the time needed to collect data on the indicator; the score on the scale ‘easy to collect’; whether the<br />

data were collected from an existing source; and the total number <strong>of</strong> valid cases.<br />


Time needed to collect information

The time needed to collect the information on an indicator is a crucial indication of the feasibility of the data collection. Time is scarce for higher education institution administrators, so if the time needed is limited, feasibility is considered to be high. First, the overall time spent on the indicator questionnaire was calculated: 25% of the responding institutions spent less than an hour on the indicator questionnaire, 25% between one and three hours, another 25% between three hours and a day, and the remaining 25% between a day and a week.

Second, the average time needed was calculated for each indicator. For nine indicators the average time reported is less than ten minutes. For ten indicators respondents took on average around half an hour or longer to collect and report the data (see Table 7). The dispersion around the average score was relatively high. This may mean that there are huge differences between institutions in the way they have the information available, or it may mean that some institutions take much longer to collect and report the data than others (possibly for size and capacity reasons). To pick up on the latter explanation, a new variable was calculated, taking the time needed for a particular indicator as a percentage of the total time needed to collect and report data on all indicators (see Figure 6 and the sketch below).
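As an illustration, a minimal sketch (with hypothetical indicator columns and minutes) of this normalisation step: each institution's time per indicator is expressed as a share of that institution's total reporting time before averaging across institutions.

import pandas as pd

# one row per responding institution, one column per indicator (minutes, illustrative)
times = pd.DataFrame({
    "1a": [5, 10, 8],
    "4a": [60, 240, 45],
    "9a": [10, 30, 20],
})

share_of_total = times.div(times.sum(axis=1), axis=0) * 100  # row-wise percentages
avg_share = share_of_total.mean()                            # Figure 6-style averages
std_error = share_of_total.sem()                             # plus/minus one standard error
print(avg_share, std_error)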

Table 7: Average time spent on collecting and reporting data, per indicator

Less than 10 minutes: 1a (highest degree offered), 1b (degrees awarded), 2a (disciplines), 3a (certified/regulated professions), 3b (professional programs), 6a (start-up firms), 7e (programs offered abroad), 9a (enrolment), 12a (legal status)

Around 30 minutes and longer: 4a (enrolment by age and type of degree), 6c (licensing income), 6d (income from research contracts), 7d (international staff), 8a (income from EU research contracts), 10a (distance programs), 11a (public income), 13a (concerts and performances), 13b (exhibitions), 14b (graduates in the region)

Figure 5: Minutes needed to report data; average plus and minus one standard error (bar chart per indicator, 1a-14d)

The graph of the average proportion of the total time spent on each indicator shows a different outcome. A number of indicators appear in both graphs as little time consuming (1a: highest degree offered, 2a: disciplines, 6a: number of start-up firms, 7e: number of programs offered abroad, 9a: enrolment and 12a: legal status) or as time consuming (4a: enrolment by age and level of degree, 14b: graduates in the region, 6d: income from research contracts, 11a: public income and 13a: number of concerts). The scores on indicators 6b (number of patent applications), 6c (licensing income), 7a (international and foreign degree seeking students) and 7b (exchange students) differ remarkably between the two approaches.

Figure 6: Percentage of total time needed to report data; average plus and minus one standard error (bar chart per indicator, 1a-14d)

Ease to collect

The time needed to collect and report the information is only one aspect of feasibility. Respondents may perceive the effort needed to find the information differently. Therefore the respondents were asked whether they agreed or disagreed with the statement 'The information is easy to find'.

There are two groups of indicators. The first group comprises those indicators for which more than 60% of the respondents reported that they strongly agree with the statement. The second group comprises indicators for which at least 20% of the respondents disagreed (strongly) with the statement.

There is a significant overlap between the lists of 'feasible' indicators in Table 7 and Table 8, and between the lists of less feasible indicators in both tables. The only exception is indicator 3b (professional programs).

Table 8: Opinions on the statement 'information is easy to find'

More than 60% strongly agree: 1a, 1b, 2a, 7e, 9a, 9b, 10a, 10b, 10c, 11a, 11b, 12a

At least 20% (strongly) disagree: 3b, 4a, 6c, 6d, 7d, 8a, 14b, 14c, 14d


Figure 7: Scores on the statement 'the information is easy to find' (stacked bar chart per indicator, 1a-14d; scale: strongly disagree, disagree, agree, strongly agree)


Data from existing sources

The question whether the information was taken from existing sources and, if so, from what source, serves a double purpose. First, it is assumed that the use of existing sources has a positive influence on the feasibility of collecting and reporting data.

For eight indicators, more than 75% of the responding higher education institutions reported that they used existing sources. For 10 of the 31 indicators (see note 7), less than 50% of the responding higher education institutions reported the use of existing sources.

Table 9: Percentage of the responding higher education institutions using existing sources, by indicator

More than 75%: 1a, 1b, 2a, 7b, 9a, 9b, 11a, 11b

Less than 50%: 4a, 6c, 6d, 10a, 10c, 13a, 13b, 14a, 14b, 14c, 14d

Again there is a considerable overlap between this table and the previous two tables categorising the indicators as feasible and less feasible.

Note 7: For indicator 5b, no questions were asked.


Figure 8: Number of responding higher education institutions using existing sources, by indicator (bar chart per indicator, 1a-14d)

The second purpose of the question on the sources used refers to the potential use of national and international data sources. In an early stage of the project, national and international databases were analysed to find out whether data from these existing sources could be used to fill the database underlying the classification. The result of that analysis was rather negative: there are reliable data on only a few elements in international data sources, and the use of national sources will be very time consuming. The latter is caused by the many questions to be answered at the institutional level, legal constraints, methodological constraints and the differences in scope. The question of what existing source was used gives us the opportunity to find out what existing sources the higher education institution uses and/or trusts. It is seen as a first step in the quest for usable national sources, in order to reduce the survey load for higher education institutions.

Only a few responding higher education institutions reported the use of a specific agency as a data source. For indicator 1b eight agencies were reported, for 1a and 9a six, and for 2a, 7a and 11a five.

The most commonly mentioned agencies are ministries and statistical agencies. In some countries, higher education (funding) councils were mentioned.


Valid cases

The fourth indication of the feasibility of data collection is not derived from a question in the 'assessment' part of the questionnaire. This indication, the percentage of valid responses, builds on 'the proof of the pudding is in the eating'. If many responding higher education institutions have been able to provide valid responses for an indicator, we assume that this is an indication that collecting the data on that indicator is highly feasible. A low valid response points to low feasibility. For a number of indicators it proved impossible to distinguish invalid responses from a '0'-response.

Table 10: Percentage of valid responses, by indicator

50%-75%: 4a, 6d, 10c

Less than 50%: 7e, 10a, 10b

The remaining indicators had a score higher than 75%.

Figure 9: Number of valid responses, by indicator (bar chart per indicator, 1a-14d)

Overview

Calculating an overall rank score (see note 8) is a tricky exercise. There is no clear conceptual basis for weighting the rank scores on the individual feasibility scores. Yet there is an argument for weighting the first two indications more strongly than the latter two: the first two are self-reported by the respondents, whereas at least the last indication is indirectly derived from the sample.

Based on the weighted rank scores (see note 9) we may distinguish three broad categories: indicators with no or only minor feasibility problems, indicators with some feasibility problems, and indicators with significant feasibility problems. To determine which indicators go into which category, we may either use the list of indicators (sorted by rank score) and make three equally sized groups, or we may look in this list for relatively large differences in the scores of consecutive indicators. The result of these groupings of overall feasibility scores is presented in the table below.

Table 11: Grouping of indicators by feasibility score

Equal size / high: 2a, 9a, 1a, 12a, 1b, 11b, 7e, 9b, 6b, 6a, 5
Equal size / medium: 10b, 13b, 13a, 10a, 14a, 7a, 6c, 3b, 10c, 11a
Equal size / low: 14d, 14c, 3a, 7b, 7c, 7d, 8a, 6d, 14b, 4a

Differences between consecutive scores / high: 2a, 9a, 1a, 12a, 1b, 11b, 7e, 9b, 6b
Differences between consecutive scores / medium: 6a, 5, 10b, 13b, 13a, 10a, 14a, 7a, 6c, 3b, 10c, 11a, 14d, 14c, 3a
Differences between consecutive scores / low: 7b, 7c, 7d, 8a, 6d, 14b, 4a

Note 8: The components are: % time (average time needed to collect information on the indicator as a percentage of the time needed to collect information on all indicators); % disagree (percentage of respondents who disagreed or strongly disagreed with the statement 'easy to collect data'); no existing source (percentage of respondents that used new / not readily available sources of information); invalid cases (the number of respondents who did not report valid information for the indicator).

Note 9: Weighted rank score: the sum of the rank scores (with the rank scores for % time and % disagree counted double), divided by four.
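A minimal sketch of this weighted rank score, following notes 8 and 9 literally and using hypothetical component values for three illustrative indicators (higher rank = larger feasibility problem):

import pandas as pd

components = pd.DataFrame({
    "pct_time":        [0.02, 0.12, 0.05],   # share of total reporting time
    "pct_disagree":    [0.05, 0.30, 0.10],   # share disagreeing with 'easy to collect'
    "no_existing_src": [0.10, 0.60, 0.20],   # share not using an existing source
    "invalid_cases":   [0.00, 0.40, 0.05],   # share of invalid responses
}, index=["2a", "4a", "10a"])                # illustrative indicators only

ranks = components.rank()                    # rank score per component
weighted = (2 * ranks["pct_time"] + 2 * ranks["pct_disagree"]
            + ranks["no_existing_src"] + ranks["invalid_cases"]) / 4
print(weighted.sort_values())                # low scores = few feasibility problems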


Results

Indicator 1a: Highest level of degree offered

In the questionnaire, four levels of degree programs were specified:

• doctor or equivalent third cycle degree programs;
• master or equivalent second cycle degree programs;
• bachelor or equivalent first cycle degree programs;
• other levels of degree programs.

Almost 73% of the responding higher education institutions offer a doctorate program. 17% offer a master degree as highest level, 6% offer a bachelor degree as the highest level and 5% offer another type of degree as highest degree program.

Figure 10: Responding higher education institutions by highest degree program offered (N=66; categories: doctorate (3rd cycle), master (2nd cycle), bachelor (1st cycle), other)

Indicator 1b: Number of degrees awarded in each type of degree

In the questionnaire, the types of degrees awarded were not predefined. In the testing of the questionnaire it proved that a fixed set of four categories was seen as too restrictive by some of the respondents. Therefore the description of the type of degree was left open for the respondent to complete. Although many respondents used the 'standard' types, there was a substantial group who used the original names of the degrees. This led to the need for recoding. Based on the recoded categories, percentages were calculated, representing the proportion of each of the degree levels in the total number of degrees awarded.

Not surprisingly, the bachelor and master degree programs are the largest programs in terms of degrees awarded. Doctorate programs, offered by almost 75% of the responding higher education institutions, are, with an average share of 6%, the third 'largest' program.

Figure 11: Percentages of degrees awarded, by type of degree (N=65; cumulative distribution curves for doctorate, master, bachelor, sub-degree, other PG and other degrees; horizontal axis: cumulative % of valid responses)

How to read this graph? 30% of the responding higher education institutions (the horizontal axis) reported a percentage of bachelor degrees awarded that was 30% or lower (vertical axis); 20% (100-80) reported a percentage higher than 90%.

The number of data points for bachelor is higher than for sub-degree programs because fewer higher education institutions reported that they awarded sub-degrees.

The graph gives an impression of how the responding higher education institutions scored on the indicator and whether certain groups or categories may be identified. A sketch of how such a curve is constructed is given below.
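A minimal sketch (with made-up percentages) of how a cumulative '% valid response' curve of this kind can be constructed: sort the valid scores and plot each value against its cumulative share of the valid responses.

import numpy as np
import matplotlib.pyplot as plt

bachelor_share = np.array([10, 25, 30, 45, 60, 92, 95])   # % of degrees awarded (illustrative)
x = np.arange(1, len(bachelor_share) + 1) / len(bachelor_share) * 100

plt.plot(x, np.sort(bachelor_share), drawstyle="steps-post", label="bachelor")
plt.xlabel("cumulative % valid response")
plt.ylabel("% of degrees awarded")
plt.legend()
plt.show()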

Based on the discussions of the draft survey report a new indicator was included: the number of graduate degrees awarded as a percentage of the total number of degrees awarded.

Figure 12: Graduate intensity (graduate degrees awarded as % of total degrees awarded) (N=65; cumulative distribution curve)

The scores on this 'graduate intensity' indicator suggest the existence of three categories: low (0-40%), medium (40-60%) and high intensity (60% and higher).

Another way to look at these data is to determine the dominant level at which degrees are awarded. If 40% is used as a cut-off point, the bachelor and master degree programs emerge as the most frequent dominant programs. Less than 5% of the responding higher education institutions do not have a dominant program.

If this indicator were used to classify higher education institutions, we would discern three groups: bachelor dominated, master dominated, and other/no dominant program. A sketch of the cut-off rule is given below.
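A minimal sketch of the 40% cut-off rule, using hypothetical degree shares per institution; any level (or combination of levels) reaching the cut-off is taken as dominant.

def dominant_level(shares: dict, cutoff: float = 40.0) -> str:
    # shares: degree level -> % of degrees awarded at that level
    dominant = [level for level, pct in shares.items() if pct >= cutoff]
    if not dominant:
        return "no dominant level"
    return "+".join(sorted(dominant))  # e.g. 'bachelor+master'

print(dominant_level({"bachelor": 55, "master": 35, "doctorate": 10}))   # bachelor
print(dominant_level({"bachelor": 45, "master": 45, "doctorate": 10}))   # bachelor+master
print(dominant_level({"bachelor": 35, "master": 35, "other": 30}))       # no dominant level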

Figure 13: Dominant degree level (degrees awarded; 40% cut-off point) (N=65; categories: master, bachelor, sub-degree, other PG, bachelor+master, other, no dominant level)

Indicator 2a: Number of subject areas offered

The number of subject areas offered varies between 1 and 9. Most higher education institutions offer 5 or 6 subject areas (the average is 5.4) and around one out of five higher education institutions can be characterized as comprehensive (offering 8 or 9 subject areas).

Figure 14: Number of responding higher education institutions by number of subject areas offered (N=66; bar chart, 1 to 9 subject areas)

Indicators 3a and 3b: Orientation of programs

The third dimension (orientation of degrees) comprises two indicators: an 'objective' indicator – the number of programs leading to a certified or regulated degree – and a subjective assessment of the professional orientation of the degrees offered. The concept of the orientation of degrees proved to be difficult to capture. In the early versions of the list of indicators various formulations were used, but neither a comprehensive and acceptable statistic nor a generally acceptable qualitative indicator could be found. In the final questionnaire, both a more objective and a subjective indicator were included. If the results on both indicators prove to be consistent, the combined indicator may continue to be used to convey an indication of the orientation of the programs. If not, the choice of indicators needs to be reconsidered.

The objective indicator is the percentage of programs leading to certified/regulated professions. It is quite remarkable that one out of every six higher education institutions provides only programs that lead to regulated professions. On average the percentage is 39%.

The subjective assessment of the proportion of professionally oriented programs leads to a higher score: the average is 56%.

Figure 15: Higher education institutions by percentage of professionally oriented programs offered (3a: proportion of programs leading to certified professions, N=52; 3b: subjective assessment of the proportion of professionally oriented programs, N=48; cumulative distribution curves)

Around 23% of the responding higher education institutions reported a similar number of programs for both indicators; 65% reported more programs for the subjective indicator than for the objective indicator.


Figure 16: Higher education institutions by ratio of programs for certified professions to programs with a professional orientation (subjectively assessed) (N=47, one case scored higher than 5; cumulative distribution curve)

Indicator 4a: Enrolment by age

Enrolment by age is assumed to give an indication of involvement in life long learning. The assumption is that an institution that enrolls a large proportion of mature students is more involved in life long learning than an institution that enrolls only a small number of mature students.

There has been some debate on what a mature student is. Some argue that any student aged 30 or over is a mature student, while others set the threshold five years lower. In the questionnaire, age was categorized in a number of broad age categories and broken down by type of degree. For this report the results are based on two definitions of mature students: students aged 30 and older, and students aged 25 and older (see the sketch below).
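A small sketch (with hypothetical age bands and enrolment figures) of computing the mature-student share under the two definitions used in this report:

import pandas as pd

# enrolment by broad age category for one institution and one degree level (illustrative)
enrolment = pd.Series({"<21": 800, "21-24": 900, "25-29": 400, "30+": 300})

total = enrolment.sum()
mature_30 = enrolment["30+"] / total * 100
mature_25 = (enrolment["25-29"] + enrolment["30+"]) / total * 100
print(f"mature (30+): {mature_30:.1f}%   mature (25+): {mature_25:.1f}%")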

Figure 17: Higher education institutions by the percentage of mature students enrolled, by type of degree program; mature = 30+ (doctorate: N=26, master: N=34, bachelor: N=40; cumulative distribution curves)


Figure 18: Higher education institutions by the percentage of mature students enrolled, by type of degree program; mature = 25+ (doctorate: N=26, master: N=34, bachelor: N=40; cumulative distribution curves)

It is not surprising to see that doctorate programs have a higher proportion of mature students than master programs, and master programs a higher proportion than bachelor programs. It is furthermore not surprising that the proportion of mature students increases if a wider definition is used. The graphs show that there are substantial differences between responding higher education institutions in the proportion of mature students enrolled.

Indicator 5a: Annual number of peer reviewed publications relative to the total number of academic staff

The number of academic staff reported may, in some higher education systems, include doctoral 'students'. In those systems doctoral students are not considered to be students but are seen and reported as academic staff (research trainees). This proved to be the case for one third of the responding higher education institutions. To correct for this systemic influence, the number of academic staff used to calculate the indicator excludes the number of doctoral students, as sketched below.
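A minimal sketch (with hypothetical figures) of this correction: doctoral students reported as academic staff are removed from the staff count before computing publications per academic staff member.

def publications_per_staff(publications: float,
                           academic_staff: float,
                           doctoral_students: float = 0.0) -> float:
    # academic staff excluding doctoral students is used as the denominator
    staff = academic_staff - doctoral_students
    if staff <= 0:
        raise ValueError("academic staff excluding doctoral students must be positive")
    return publications / staff

print(publications_per_staff(publications=850, academic_staff=600, doctoral_students=150))
# roughly 1.9 publications per academic staff member (illustrative numbers)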

Figure 19: Higher education institutions by the number of peer reviewed publications per academic staff member (N=67; cumulative distribution curve)


As a result of the discussion on the draft report, a new indicator was included: the total amount of research income as a percentage of total income.

Figure 20: Higher education institutions by research income as % of total income (N=39; cumulative distribution curve)

For around half of the institutions that provided a valid response, research income is not a large part of their income (less than 10%). There is also a group of institutions (6 out of 38) for which research income is the major component of their income.

Indicator 6a: The number of start-up firms

What a start-up firm is, and whether it is a valid indication of innovation intensiveness, was discussed intensively. Although a definition was given in the questionnaire, only 27 out of 67 higher education institutions reported having 'produced' start-up firms. Whether the 36 institutions that did not respond to this question have no information on this item, or actually have no start-up firms, cannot be determined from the results.

Figure 21: Higher education institutions by number of start-up firms (annual average over the last three years) (N=67, two cases scored higher than 30; cumulative distribution curve)


Indicator 6b: Number of patent applications filed per fte academic staff

The second indicator in the innovation intensiveness dimension is the ratio of patent applications filed to fte academic staff. As denominator, the number of academic staff excluding doctoral students is used. Only half of the institutions reported data on patent applications filed.

Figure 22: Higher education institutions by patent applications per fte academic staff (N=67, five cases scored higher than 0.02; cumulative distribution curve)

Indicator 6c: Annual licensing income

The third innovation intensiveness indicator is formulated as an absolute figure. As such it may give an indication of the size of the higher education institution and its (past) performance regarding licensing contracts. However, to measure the intensiveness of the innovation activities of an institution, the absolute amounts are not very telling. That is the reason why an alternative indicator was calculated: the annual licensing income as a percentage of total income.

There are only 15 non-zero responses on the annual licensing income question. Again it cannot be determined whether the other 52 institutions have no information on this item, or actually do not have any licensing income.

Figure 23: Higher education institutions by the percentage of licensing income (N=59; cumulative distribution curve)


Indicator 6d: Financial volume of privately funded research contracts as a percentage of total research revenues

The number of non-zero responses for this fourth indicator on innovation intensiveness is much higher than for the previous two: 40 higher education institutions reported valid data, ranging from nearly 0 to 100%. For 56% of the responding higher education institutions, revenues from privately funded research contracts are less than 10% of total research revenues.

Figure 24: Higher education institutions by privately funded research contracts as % of total research revenues (N=46; cumulative distribution curve)

Indicator 7a: Foreign degree seeking students as a percentage of total enrolment in degree programs

If relatively many students come to an institution from abroad to take a degree program, it is assumed here that this institution has a high international orientation (in the field of teaching and education).

On average, the percentage of foreign degree seeking students at the bachelor level is substantially lower than the percentage at the master and doctorate levels. In addition to these three levels, the graph below comprises an 'all levels' category, in which students for all levels of programs are aggregated before the ratio is calculated.

At the bachelor level, 75% of the responding higher education institutions reported less than 5% foreign degree seeking students.
Figure 25: Higher education institutions by proportion of foreign degree seeking students, by type of program (doctorate: N=22, master: N=30, bachelor: N=36, all levels: N=54; cumulative distribution curves)

Indicator 7b: Incoming EU exchange students as a percentage of the total number of students, by level of degree

The European Union runs a number of programs to stimulate the international mobility of higher education students. We assume that a relatively high percentage of incoming students and/or students sent out within the framework of these exchange programs is an indication of a strong international orientation (in teaching and education).

Figure 26: Higher education institutions by the percentage of incoming EU exchange students, by type of degree (master: N=13, bachelor: N=24, bachelor+master: N=11, all levels: N=46; cases scoring more than 10%: master 3, bachelor 5, all levels 7; cumulative distribution curves)


Indicator 7c: EU exchange students sent out as a percentage of the total number of students, by level of degree

Figure 27: Higher education institutions by the percentage of EU exchange students sent out, by type of degree (doctorate: N=12, master: N=16, bachelor: N=25, bachelor+master: N=11, all levels: N=45; cases scoring more than 50%: doctorate 2, master 3, bachelor 3, all levels 3; cumulative distribution curves)

The percentage of exchange students sent out is below 5% for the majority of responding higher education institutions. There is a group of higher education institutions (around 20%) that scores substantially higher than the rest of the responding higher education institutions.

Indicator 7d: International academic staff as a percentage of total staff (all headcount)

Part of the international profile of an institution is the international profile of its staff. One way to assess that profile is by looking at the nationality of academic staff. The data show that one third of the responding higher education institutions did not have any international staff or could not provide information on this indicator.

Figure 28: Higher education institutions by % of international academic staff (N=67, three cases scored higher than 50%; cumulative distribution curve)


Indicator 7e: Programs delivered abroad

The international orientation of a higher education institution shows not only in the attractiveness of its programs to foreign students; it may also show in its program offering abroad. Off-shore higher education is seen by some (non-EU) countries as a booming market, and if an institution plays an active role on that market we assume that this shows a strong international orientation. Around one third of the responding higher education institutions provide programs abroad.

Figure 29: Higher education institutions by % of programs offered abroad, by level of program (master: N=16, bachelor: N=9, all levels: N=25; cases scoring more than 50%: master 2, bachelor 2, all levels 4; cumulative distribution curves)

Indicator 8a: Financial turnover in EU research programs as a percentage of total research turnover

40% of the responding higher education institutions reported no data on revenues from EU research programs as a percentage of total research revenues. Another 40% received just a modest part of their research revenues from EU programs (0-10%). Only 10% of the responding higher education institutions had more than 25% of their research income from EU research programs.

Figure 30: Higher education institutions by turnover in EU research programs as % of total research revenues (N=63; cumulative distribution curve)


Indicator 9a: Enrolment

The number of students enrolled is seen as a basic indicator of the size of the higher education institution.

Figure 31: Higher education institutions by number of students enrolled (N=67, two cases scored higher than 60,000; cumulative distribution curve)

Indicator 9b: Number of staff

Academic staff is the primary production factor in higher education and as such the total number of academic staff is a good indicator of the size of the institution.

Figure 32: Higher education institutions by fte academic staff (N=67; cumulative distribution curve)

Based on the results of the 56 responding higher education institutions, four wider staff classes can be determined.

In addition to academic staff, higher education institutions employ non-academic staff. This staff category comprises a wide variety of support functions (from governance staff to administrative staff to maintenance staff). The relative size of this non-academic staff determines to a substantial extent the overhead on the primary processes (teaching and research). To get an indication of this relative size, the ratio of non-academic to academic staff was calculated. The resulting graph shows a clear variety in the ratio.

Figure 33: Higher education institutions by the ratio of non-academic to academic staff (N=65; cumulative distribution curve)

Indicator 10a: Percentage of programs offered as distance learning programs

We assumed that for an institutional profile it is not only relevant which subjects and levels of programs are offered, but also how these programs are offered. Two aspects of the mode of delivery are discussed: distance learning and part-time programs.

Around one third of the responding higher education institutions offer programs as distance learning programs; distance learning is most frequently offered at the master and bachelor levels.

Figure 34: Higher education institutions by % of programs offered as distance learning programs, by level of program (master: N=21, bachelor: N=21, all levels: N=30; cumulative distribution curves)

Indicator 10b: The percentage of programs offered as part-time programs

32 out of 67 higher education institutions report that they offer part-time programs. Part-time programs are mainly offered at the bachelor and master levels. At the other levels the existence of part-time programs is reported only in incidental cases.

Figure 35: Higher education institutions by % of programs offered as part-time programs, by level of program (master: N=23, bachelor: N=25, all levels: N=32; cumulative distribution curves)


Indicator 10c: The percentage of students enrolled as part-time students

The pattern emerging from the data on part-time students is similar to the picture sketched above. 36 of the responding higher education institutions report students enrolled as part-time students. Apparently there are a few institutions that enroll part-time students but do not offer part-time programs. Part-time students are most frequently enrolled at the bachelor and master levels. At the doctorate level there are 11 higher education institutions enrolling part-time students.

Figure 36: Higher education institutions by % of part-time students, by level of program (doctorate: N=11, master: N=21, bachelor: N=26, all levels: N=36; cumulative distribution curves)

Indicator 11a: Percentage of income from government funding

The public character of a higher education institution is most apparent in the balance between public and private funding. The assumption is that a high percentage of government funding indicates a public character.

The majority of the responding higher education institutions provided data on this indicator: only eight cases are missing. On average the percentage of government funding is 59%. One out of six institutions has no or virtually no government funding, and 65% of the institutions have more than 50% government funding.


Figure 37: Higher education institutions by % of government funding (N=59; cumulative distribution curve)

Indicator 11b: Income from tuition fees

The second indicator of the balance between public and private financial resources is the role tuition fees play in the total income of a higher education institution. More tuition income means a more private character. The scores on this indicator are highly influenced by national systemic characteristics, as in a number of European systems tuition fees are not allowed. In many systems, institutions are not allowed to deviate from nationally set fees, which also limits the variety of the results.

The average income from tuition fees is 15% of total income. Eight higher education institutions report no income from tuition. Based on a visual inspection of the graph we may distinguish four classes (none, low, medium and high).

Figure 38: Higher education institutions by tuition fee income as % of total income (N=59; cumulative distribution curve)


Indicator 12a: Legal status

The open question regarding the legal status produced a long list of specific legal names that needed to be recoded. The question of whether the respondent thinks the institution is public or private (according to the OECD definition) produces a much clearer picture.

Figure 39: Higher education institutions by public/private status (N=60; categories: public, private)

Indicator 13a: Concerts and performances

22 responding higher education institutions did not report any information on this indicator. The absolute number of concerts is not a very telling indicator: it may be more informative to present this information relative to the total number of staff (academic and non-academic).

Figure 40: Higher education institutions by concerts and performances per staff member (N=66, three cases scored higher than 0.25; cumulative distribution curve)


Indicator 13b: Exhibitions

The absolute number of exhibitions (co-)organized by the higher education institution is also not a very telling indicator: it may again be more informative to present this information relative to the total number of staff (academic and non-academic).

Figure 41: Higher education institutions by exhibitions per staff member (N=67, three cases scored higher than 0.05; cumulative distribution curve)

Indicator 14a: Annual turnover in EU structural funds

55% of the responding higher education institutions reported no revenues from EU structural funds, or were unable to provide the information. Again, the absolute amounts are not as telling as the amounts as a percentage of total income.

Figure 42: Higher education institutions by annual turnover in EU structural funds as % of total income (N=59; cumulative distribution curve)


Indicator 14b: Graduates in the region

Only 20 responding higher education institutions reported data here, and even for these higher education institutions the ratio cannot be calculated. The total number of graduates (as calculated from the data for indicator 1b) refers to one year (a flow), whereas the number of graduates in the region refers to an accumulation of graduates over a number of years (a stock). There were many comments regarding the definition of 'region' and the lack of systematically collected data on this item.

Indicator 14c: Extracurricular courses

Offering extracurricular courses, focused on specific (local or regional) labour market needs, is seen as an important indication of the regional engagement of a higher education institution. 40 responding higher education institutions reported that they offered at least one extracurricular course.

Figure 43: Higher education institutions by number of extracurricular courses offered (N=67, nine cases reported offering more than 100 courses; cumulative distribution curve)

Indicator 14d: Importance of regional sources

The responses to the 'importance' and the 'change' questions of indicator 14d have been combined into one variable (per source).

National sources are critical for more than two thirds of the higher education institutions reporting valid data. For another quarter, the national sources have become significant.

International sources have grown in importance. For the majority of responding higher education institutions international sources have become significant or are significant. For one out of every seven, international sources are critical. This information can also serve as an indicator for the dimension international orientation.

Local and regional sources are considered to be far less important than national and international sources: almost three quarters of the respondents consider the regional and local sources to be insignificant.


Figure 44: Higher education institutions by score on the importance of different sources of income (N=51; stacked bars for local and regional, international, and national sources; categories range from insignificant to critical, including whether a source has become significant or critical)


Discussion

In this chapter we discuss the results of the survey and the consequences these may have for the selection and clustering of dimensions and indicators. First we explore the dimensions with 'poor' indicators. The survey showed that a number of indicators scored significantly lower on criteria such as validity and feasibility than others. Are these poor indicators evenly dispersed over the dimensions, or can we identify particularly 'challenging' dimensions (dimensions with predominantly poor indicators)? And what consequences may that have for the selection of dimensions and indicators? The focus then turns to a way to cluster the dimensions, using the scores on the questions on relevance.

'Challenging' dimensions

One of the reasons for organising the survey was to find out which dimensions and indicators would work and which would not. To find an answer to this question we combined the information on the validity, feasibility and reliability of the indicators selected for each dimension. We do not use the scores on the perceived relevance of the dimensions, since a high proportion of responding higher education institutions strongly disagreeing with the relevance of a dimension is not an indication of the quality of that dimension. We see such a lack of consensus as an indication of the diversity of the missions and profiles of the higher education institutions. Only if the vast majority of the responding higher education institutions disagreed with the relevance would we reconsider the choice of the dimension. This was not the case for any of the fourteen dimensions.

To identify potentially challenging dimensions we selected those dimensions for which at least one indicator scores more than 5% 'strongly disagree' on the validity and reliability items and is in the bottom five of the overall feasibility ranking. Using these criteria, there are two 'challenging' dimensions: dimension 4, 'involvement in life long learning', and dimension 6, 'innovation intensiveness'.

If we use only the validity and feasibility criteria, one more dimension emerges as 'challenging': 'regional engagement'. If we use the validity and reliability criteria, there are four 'challenging' dimensions: 'program orientation', 'involvement in life long learning', 'research intensiveness', and 'innovation intensiveness'. If we use the feasibility and reliability criteria, again only the dimensions 'involvement in life long learning' and 'innovation intensiveness' emerge.

Clustering dimensions

The scores on the relevance questions can also be used to cluster the dimensions. First we look at the scores on the question in which the three most and the three least important dimensions were identified. If the scores on two dimensions correlate positively, it is likely (Teeuwen 2004) that whenever a respondent considers one of the dimensions most important, he considers the other dimension most important as well. If a pair of dimensions is negatively correlated, it is likely that whenever a respondent considers one of the dimensions most important, he considers the other dimension least important. There are five pairs of dimensions that appear to correlate positively and seven that correlate negatively.


The clustering is based on the correlation matrix of the scores on these questions (Kendall's tau). The significant correlations were mapped to find out whether clear clusters emerge (see note 10). Using Kendall's tau, ten combinations emerge as statistically significant; a sketch of this type of analysis is given below.
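A minimal sketch (with synthetic ordinal scores) of this correlation analysis: pairwise Kendall's tau between dimension scores, keeping only the statistically significant pairs.

from itertools import combinations
import pandas as pd
from scipy.stats import kendalltau

# rows = respondents, columns = dimensions; ordinal scores (illustrative values only)
scores = pd.DataFrame({
    "dim11": [1, 2, 3, 3, 2, 1, 3, 2],
    "dim12": [1, 2, 3, 2, 2, 1, 3, 2],
    "dim9":  [3, 2, 1, 1, 2, 3, 1, 2],
})

for a, b in combinations(scores.columns, 2):
    tau, p = kendalltau(scores[a], scores[b])
    if p < 0.05:                              # keep statistically significant pairs only
        print(f"{a} - {b}: tau = {tau:.2f} (p = {p:.3f})")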

Table 12: Correlations between dimensions (Kendall's tau)

11 and 12: 0.82**
2 and 9: 0.49*
4 and 10: 0.48*
8 and 14: 0.80*
11 and 13: 0.46*
6 and 9: -0.71**
2 and 14: -0.69**
5 and 10: -0.54**
2 and 11: -0.53*
5 and 11: -0.54*
7 and 11: -0.53*
9 and 13: -0.41*

** significant at the 0.01 level; * significant at the 0.05 level

If the dimension public character (11) is mentioned, it is very likely that the dimension legal status (12) is mentioned in the same way (most or least important). The same goes (to a lesser extent) for the combinations ‘range of subjects’ – ‘size’; ‘life long learning’ – ‘mode of delivery’; ‘international research orientation’ – ‘regional engagement’; and ‘public character’ – ‘cultural engagement’.

For the other pairs listed it is likely that if one is mentioned as most important (e.g. innovation intensiveness) the other is considered to be least important (e.g. size).

The other way to probe for possible clusters of dimensions used the answers to the relevance questions, posed for each dimension. Again we calculated the bivariate correlations between the scores. A high correlation between two dimensions means that if the respondent scores one dimension as highly relevant it is likely that he scores the other dimension this way as well.

[10] Factor analysis is not an option, due to the measurement level (which is ordinal).



We mapped the dimensions and their statistically significant correlations [11]. Visual inspection of the map suggests that there are at least three clusters of dimensions, with one or two sub-clusters.

[11] Using two-tailed Kendall’s tau.

Figure 45: Mapping of the dimensions and the correlations between the scores on relevance
[Figure: network map of the fourteen dimensions connected by their statistically significant relevance correlations (values between .27 and .46).]

1: types of degrees offered
2: range of subjects
3: orientation of degrees
4: involvement in life long learning
5: research intensiveness
6: innovation intensiveness
7: international orientation (students and staff)
8: international orientation: research
9: size
10: mode of delivery
11: public/private character
12: legal status
13: cultural engagement
14: regional engagement

The first cluster comprises five dimensions, with dimension 8 (international research orientation) as the central dimension. The other dimensions in this cluster are 5 (research intensiveness), 1 (types of degrees offered), 2 (range of subjects), 6 (innovation intensiveness), and 7 (international orientation: teaching and staff). This cluster has two faces: one R&D oriented side and one international orientation side.

The second cluster includes dimensions 3 (orientation of degrees) and 14 (regional engagement) as well as dimension 4 (involvement in life long learning). This cluster may be characterised as the orientation towards the regional environment.

The third cluster comprises dimensions 9 (size), 11 (public/private character), 12 (legal status), 13 (cultural engagement) and 10 (mode of delivery). The characterisation of this cluster is not obvious.
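
The visual inspection described above can be mimicked programmatically by treating the significant correlations as edges of a graph and reading off its connected components as candidate clusters. The sketch below uses networkx and an illustrative edge list chosen to reproduce the three clusters; it is not the procedure the project actually followed:

    import networkx as nx

    # Illustrative list of significantly correlated dimension pairs
    # (dimension numbers as in the legend of Figure 45).
    significant_pairs = [
        (8, 5), (8, 1), (8, 7), (5, 2), (5, 6),   # research / international side
        (3, 14), (3, 4),                          # regional orientation side
        (9, 11), (11, 12), (9, 13), (13, 10),     # size / status / mode of delivery side
    ]

    graph = nx.Graph(significant_pairs)
    for component in nx.connected_components(graph):
        print(sorted(component))   # each component is one candidate cluster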


It is clear that there are other ways to reduce the number of dimensions. The scores on the indicators may be exploited in statistical ways to find clusters of indicators that may point at new (clusters of) dimensions. In addition to these quantitative methods one may also consider using more theoretical approaches. Research literature may be used to find different ways to combine information into meaningful clusters. At this stage of the project the decision to reduce the number of dimensions, and how to do so, has not yet been taken. To make such a decision, the material collected in this survey needs to be further analysed.
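
One of the statistical routes alluded to above could, for example, be a hierarchical clustering of indicators on a distance derived from rank correlations. The sketch below is purely exploratory: it uses randomly generated scores and standard scipy/pandas tools, and does not represent a method adopted by the project:

    import numpy as np
    import pandas as pd
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.spatial.distance import squareform

    # Randomly generated ordinal scores: rows are institutions, columns are indicators.
    rng = np.random.default_rng(seed=1)
    scores = pd.DataFrame(rng.integers(1, 6, size=(60, 14)),
                          columns=[f"indicator_{i}" for i in range(1, 15)])

    # Spearman correlation suits ordinal data; convert it into a distance matrix.
    correlation = scores.corr(method="spearman")
    distance = 1.0 - correlation.abs().to_numpy()
    np.fill_diagonal(distance, 0.0)

    # Agglomerative (average linkage) clustering into three groups of indicators.
    condensed = squareform(distance, checks=False)
    labels = fcluster(linkage(condensed, method="average"), t=3, criterion="maxclust")
    print(dict(zip(correlation.columns, labels)))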

In conclusion

The survey among higher education institutions has given empirical substance to the frameworks developed in the classification project so far. After the rounds of consultation in the first phase of the project, we have been able to collect the views of a larger group of stakeholders regarding the relevance of the dimensions proposed. The use of on-line questionnaires forced the project team to become more specific and concrete regarding the choice of indicators and their operationalisation. The responses to the questionnaires showed that some of these choices need to be reconsidered, but for most of the indicators the survey produced valuable results.

One of these results is that the data collected can be used to group higher education institutions on many indicators. For some indicators a visual inspection of the scatter plots of the results suggests certain classes. On a number of indicators the scores break down into three or four classes (none, low, medium, high). Examples are indicators 3a (% of programs leading to certified professions), 5a (peer reviewed publications per fte academic staff), 6d (privately funded research contracts as percentage of total research revenues), 7d (% international academic staff), 11a (% government funding) and 11b (% tuition fee income). For other indicators (such as the two indicators in dimension 9, size) four more or less equally sized groups of institutions emerged. There were also a number of indicators for which it proved to be more challenging to come up with more than two groups (‘with’ and ‘without’). Examples of this category are the indicators on concerts and exhibitions (dimension 13), on patenting (6b) and on licensing income (6c). The data provided by the higher education institutions are clearly a crucial input in the further development of the classification.
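
As a simple illustration of such a grouping (the cut points below are invented; in practice they would follow from inspection of the actual scatter plots), an indicator score can be mapped onto ‘none / low / medium / high’ classes like this:

    import pandas as pd

    # Hypothetical values for an indicator such as 5a
    # (peer reviewed publications per fte academic staff).
    publications_per_fte = pd.Series([0.0, 0.1, 0.4, 0.9, 1.6, 2.3, 3.8, 0.0, 0.7, 1.1])

    # Cut points are illustrative only.
    classes = pd.cut(publications_per_fte,
                     bins=[-0.001, 0.0, 0.5, 1.5, float("inf")],
                     labels=["none", "low", "medium", "high"])
    print(classes.value_counts().sort_index())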

Knowing the results of the survey, we may conclude that the role of higher education institutions is crucial in the process of developing and operating a European classification for higher education institutions. Although there may be opportunities to use more existing data sources that will take part of the burden of data provision off the shoulders of the higher education institutions, it has become clear that a substantial part of the information has to be provided by the individual institutions.

We may also conclude that once institutions have become involved in the process of collecting the data, they will have a strong intrinsic motivation to complete the questionnaires. This involvement was visible in the comments made in the survey and in the willingness of groups of institutions to co-operate in ‘communities’ to further the development of specific indicators.



References

Moors, J. J. A. and J. Muilwijk (1975). Steekproeven, een inleiding tot de praktijk. Amsterdam: Agon Elsevier.

Teeuwen, H. (2004). "The art of prediction." From http://nl.youtube.com/watch?v=PatELskRWWA&feature=related.

van Vught, F. and J. Bartelse (2005). Institutional Profiles: towards a typology of higher education institutions in Europe. Enschede.



Appendix 1: Comments

Comments per dimension

1: types of degrees offered
Only a few comments were made. There was mention of including non-degree program offerings and some confusion regarding the term ‘qualifications’.

2: range of subjects offered
There were very few comments made: two respondents stated that the ISCED classification was not suited for describing the range of subjects.

3: orientation of degrees
Comments referred to the subjective and ‘vague’ character of indicator b. There were furthermore some comments that the indicators could not differentiate between academic and non-academic or professional institutions. The project team deliberately avoided this ‘traditional’ dichotomy in the definitions, to break free of these highly institutionalized labels.

4: involvement in life long learning
Comments were on the cut-off point. In some systems other definitions of ‘mature’ students are used (e.g., over 21 years on entrance in the UK), which may lead to confusion. It was also mentioned that national differences in entrance age and the different ways in which the programs are organized may lead to different age structures of the student body. In those cases the indicator does not identify differences in involvement in LLL but systemic differences.

5: research intensiveness
Some respondents commented that the indicators do not apply to universities of applied sciences or art schools. It was furthermore commented that some obvious indicators are missing (like fte for research, other research outputs, and third party funds for research). Some of these indicators may be derived from the answers to other dimensions.

6: innovation intensiveness
Comments mainly referred to national differences in patenting practices.

7: international orientation: teaching and staff
There were a few comments on the narrow, European scope of the indicators and on the impact the national context has on student mobility (e.g., the UK receives many more students than Finland). Some respondents missed joint degrees or double degrees as indicators.

8: international orientation: research
Most comments were on the EU scope of the indicator: it is considered to be too narrow and should include other European and international funding sources.

9: size
Two respondents suggested including student/staff ratios as an indicator. There was also the suggestion to use the number of degrees awarded (1b) as an indicator in this dimension.

10: mode of delivery
Very few comments here. One respondent missed the number of students participating in distance learning programs.

11: public/private character
Comments touched upon three issues. The outcomes depend on the national public funding mechanism. Public versus private is a legal issue that should not be confused with the dependency on public sources [the name of the dimension is misleading]. Other private income, like donations, is not included.

12: legal status
Very few comments.

13: cultural engagement
The indicators are considered to be too ‘simplistic’ and not covering the full width of cultural activities.

14: regional engagement
Comments revealed some problems regarding the demarcation of the region, and the weak link between the eligibility of the region for structural funds and the regional engagement of a higher education institution. It was furthermore suggested to use the indicator on start-ups (6a) as an indicator for this dimension as well.



Comments per indicator

1a: highest level of degree program offered
Some comments that institutions are revising their degree structure following the Bologna architecture.

1b: number of qualifications granted in each type of degree program
Mainly clarifications of what is reported.

2a: number of subject areas covered by an institution using the UNESCO/ISCED subject areas
Few comments, mainly on ambiguity on what goes into the broad ISCED categories.

3a: the number of programs leading to certified/regulated professions as a % of the total number of programs
Next to clarifications of what is reported, there are some comments on the definition of professional and regulated profession; to some of the respondents the definitions are not clear.

3b: the number of programs offered that answer to a particular demand from the labour market or professions (as % of the total number of programs)
Again some clarifications given. There are two types of comments: ‘professional’ is ill-defined, and ‘all programs answer to a demand from the labour market’.

4a: number of adult learners as a % of total number of students by type of degree
All comments are on the problems of providing the data by specified age group or type of program.

5a: number of peer reviewed publications per fte academic staff
Most comments are clarifications on what was reported or why nothing was reported. It proves that not all institutions have the information available or think the information is relevant.

5b: the ISI based citation indicator, also known as the ‘crown indicator’

6a: the number of start-up firms
Comments referred to the fact that institutions did not have the information.

6b: the number of patent applications filed
Comments are mainly clarification of what was reported.

6c: the annual licensing income
Comments are mainly clarification of what was reported.

6d: the revenues from privately funded research contracts as a % of total research revenues
There are some comments on the difficulty of obtaining total research revenues since part of the research revenues are in the lump sum provided by the government. From the comments we can also learn that respondents treat medical research in different ways (in- or excluding it). The remaining comments are clarifications.

7a: the number of degree seeking students with a foreign nationality, as % of total enrolment
There are two comments on the definition of foreign student, both stating that nationality may not be a good indication of the international orientation. Most other comments are clarifications on the data provided.

7b: the number of incoming students in European exchange programs, as % of total enrolment
Comments clarify the data provided. It proved to be difficult sometimes to report data on bachelor level only.

7c: the number of students sent out in European exchange programs
Comments clarify the data provided. It proved to be difficult sometimes to report data on bachelor level only.

7d: international staff members as % of total number of staff members
Comments clarify the data provided.

7e: number of programs offered abroad
Most comments explain that no programs are offered abroad. There are three comments where the definition of ‘programs offered abroad’ is discussed.

8a: the institution’s financial turn-over in European research programs as % of total financial research turn-over
Comments clarify the data provided.

9a: number of students enrolled (headcount)
For two cases the data reported refer to undergraduate level only. It is also mentioned that using the academic year as reference period would make data provision easier.

9b: number of staff members employed (fte)
A few comments were made on special categories of staff (medical, externally financed) that were in- or excluded.

10a: number of distance learning programs as % of total number of programs
The issue of how to deal with blended learning was raised a few times. Other comments were mere clarifications.

10b: number of part-time programs as % of total number of programs
The comments show that the difference between part-time programs (designed as such) and programs that students may take part-time is a difficult distinction.

10c: number of part-time students as % of total number of students
Comments clarify the data provided.

11a: income from (competitive and non-competitive) government funding as a % of total revenues
Here are some comments on the definitions used. It is not clear what is in competitive government funding (is research council funding in or out?).

11b: income from tuition fees as % of total income
Some comments that fees include not only tuition fees from regular degree programs but also from other courses and other activities.

12a: legal status
A few comments on the definition of ‘legal status’ and the fact that national descriptions may be difficult to compare in an international setting.

13a: number of official concerts and performances (co-)organised by the institution
Some comments on the fact that information is not (readily) available.

13b: number of official exhibitions (co-)organised by the institution
Some comments on the fact that information is not (readily) available.

14a: annual turnover in EU structural funds as % of total turnover
Comments clarify the data provided.

14b: number of graduates remaining in the region as % of total number of graduates
This indicator evoked many comments. Many institutions did not have this information available. There was also a problem with the definition of the region.

14c: number of extracurricular courses offered for the regional labour market
The comments show that the definition of extracurricular courses is not clear to everyone.

14d: importance of local/regional income sources*
A few clarifying comments. One comment referred to the omission of tuition fees as a source of income.


