
The quality metrics used in Western Australia and Manchester assess both the quality of the experience reported by audience members (eg, 'inquisitiveness', 'captivation', 'connection') and the quality of the product (eg, 'excellence' and 'rigour'). The Manchester pilot found that there was 'more variation in the public [audience] scores for the more personal, subjective measures of "relevance", "challenge" and "meaning" than for the more technical dimensions of "presentation" and "rigour"' (Bunting and Knell 2014, 56). Such systematic discrepancies suggest that the audience's rating of experienced impacts on the one hand and product quality on the other may need to be interpreted differently.

Boerner and Jobst (2013) and Brown and Novak-Leonard (2007) report contradictory findings regarding the relative significance of impact and product quality measures, so the relationship between the two remains an important topic for further research. The public's less favourable scoring on the impact measures (ie, 'quality of experience' measures) that were used in Manchester leads Bunting and Knell to conclude, 'It's one thing to offer a polished, absorbing, highly enjoyable cultural experience; it's another to make a difference to how people think and feel about the world' (56).

The measures of experienced impact that were developed in Western Australia and Manchester focus on the audience's intellectual engagement with the performance. Bunting and Knell note that some respondents in Manchester, in particular the peer evaluators, wished there were a place to report their emotional reactions. Rather than adopting additional quantitative impact questions to capture the emotional or social dimensions of the experience that have been included in other frameworks, Bunting and Knell recommend adding some open-ended qualitative questions to gather feedback on the emotional experience (Bunting and Knell 2014, 62).

One of the major challenges of using assessment scores supplied by audience members at the policy level is ensuring comparability across sites and across various forms of cultural experience. As noted above, some researchers have expressed concerns about comparing self-reported audience experiences across different art forms and contexts (Belfiore and Bennett 2007, 258-60; Brown and Novak-Leonard 2013, 7). However, one of the core objectives of both the DCA and Manchester initiatives is enabling 'comparisons between funding programs and activities from different art forms' (Chappell and Knell 2012, 16), and the integration of peer review and self-assessment in the evaluation frameworks is intended as a buffer against idiosyncrasies of the audiences, the objectives of the organisations, and other contextual factors.

