
The work of the assessment panels will be to judge the impact of each idea on each of the 23 main system attributes. For this, the SP2 process requires a uniform ranking scale, defined both in abstract and in numerical terms, to allow mapping against the top-level issues. The CREATE team defined the ranking scale as a numerical value on a 5-point scale: strongly positive at +3 points, weakly positive at +1 point, neutral at 0, and similar marks on the negative side.

For the purposes of the CREATE assessment process, we consider a “neutral” ranking to be the minimum value required for an idea to be worthy of incubation for each criterion ranked. While the set of criteria presented above was designed to be as comprehensive as possible, given the wide range of ideas likely to be submitted to the IDEA Portal it is possible that some criteria may not be significant with respect to a specific idea. In this case these criteria can be ranked as “not applicable”. This special ranking is similar to “neutral” but can be used to normalise the overall rankings of ideas. Further notes are at Appendices A & B.
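By way of illustration, the following minimal Python sketch shows one way such rankings might be aggregated, with “not applicable” criteria dropped from the denominator so that overall rankings are normalised across ideas judged against different numbers of applicable criteria. The function name, data layout and example values are assumptions for illustration only and are not part of the CREATE tooling.

    # Illustrative sketch only: the report does not prescribe an aggregation
    # formula, so the normalisation below is an assumption.

    # Ranking scale described above: +3 strongly positive, +1 weakly positive,
    # 0 neutral, and similar marks on the negative side; None marks a
    # criterion ranked "not applicable".
    SCALE = {3, 1, 0, -1, -3}

    def normalised_score(rankings):
        """Average an idea's rankings, ignoring "not applicable" criteria.

        `rankings` maps criterion name -> value in SCALE, or None for N/A.
        Dropping N/A criteria from the denominator normalises ideas that
        were judged against different numbers of applicable criteria.
        """
        applicable = [v for v in rankings.values() if v is not None]
        if not applicable:
            return 0.0  # every criterion N/A: treat as neutral overall
        if any(v not in SCALE for v in applicable):
            raise ValueError("ranking outside the defined scale")
        return sum(applicable) / len(applicable)

    # Hypothetical example: one idea ranked against three criteria.
    idea = {"Environment": 3, "Safety": 0, "Ethical constraints": None}
    print(normalised_score(idea))  # -> 1.5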

It is evidently necessary to define a precise interpretation of the marking for each criterion. Appendix B shows examples of the recommended detailed descriptions for the second-level criteria. Such a detailed listing was found to be necessary during the trial assessment activity, when questions arose regarding the semantics and intended interpretations of the criteria. This affected some criteria more than others, which is why the lengths of the descriptions differ greatly. In addition, for each criterion there is an explanation of how to apply the ranking scale. The descriptions of the criteria and the ranking instructions are intended to remove personal bias from the assessment as far as possible. However, there may be cases in which the ranking instructions do not fully match the particulars of a given idea. In these cases the ranking instructions are to be considered guidelines outlining the intentions behind a criterion.

6.6.5. Lessons Learnt

The assessment process and the criteria set have evolved over the duration of the CREATE project. In February 2010, the set of assessment criteria presented above was tested in two assessment activities: first by a group of Bauhaus Luftfahrt scientists and then in a session with academia and industry stakeholders. These two test activities produced useful feedback on the criteria set as well as on the overall process, from which several particularly important issues were identified.

Ethics as a criterion

The assessment criterion “Ethical constraints” caused numerous discussions throughout the assessment workshop, relating both to the scope of applicability of this criterion and to the exact application of the ranking scale. Still, the discussions around the ethics of ideas were taken as an indicator that a criterion of this kind must be included in the criteria set. As a result of these discussions we formulated an extended definition for this criterion. We are well aware that our definition is neither fully exact nor fully comprehensive, but it should serve as a useful guideline for assessment panel members.

Orthogonality of the criteria set

One problem with early versions of the criteria set was an inherent lack of orthogonality. Orthogonality in this context means that every aspect considered relevant for idea assessment should be covered by exactly one criterion. However, some criteria violated this condition. To rectify this, the criteria set was revised after the assessment workshop: a number of criteria found to be redundant or otherwise unsuitable for the process were dropped, while some other subsets of criteria were merged into single criteria.

Assessment panel

The experience gained from the assessment session suggests that an assessment panel consisting entirely of aerospace engineers – as might be expected in an aeronautics activity – is not necessarily ideal. The assessment body should incorporate a balanced mixture of experts from different fields and also a balanced mixture of visionaries and sceptics.

Preparation of the assessment panel

A diligent and thorough preparation of any assessment session and of its participants is of vital importance. This must ensure that all assessors have a common understanding of the assessment process, the assessment criteria and the assessment objectives. The short briefing held for the assessors on the assessment objectives at the assessment workshop turned out to be insufficient. The goals of the CREATE process differ significantly
