Local Evaluation of Children's Services: Learning from the Children's ...

These shifts in thinking are widely embraced at local level by area-based initiatives, including the Children's Fund.

This recognition has also led to changes in thinking about evaluation. In the UK, clinical medicine has historically embraced the need for evidence-based decision-making more than many other sectors, although it has tended to adopt a traditional approach to demonstrating 'what works'. Standards of evidence in clinical medicine emphasise experimental methods; 'randomised control trials', and systematic reviews of randomised control trials in particular, are seen as the 'gold standard' of evidence (Becker & Bryman, 2004). Conversely, qualitative methods such as interviewing professionals or service users would be considered the least robust forms of evidence. Whilst 'lay' perspectives such as the views of service users have not historically been valued, such experiences are now more readily acknowledged as legitimate and important forms of evidence (Coote et al., 2004; Becker & Bryman, 2004). Traditional experimental approaches are now widely seen as inappropriate for identifying what works in complex community and area-based initiatives, in which there are no simple cause-and-effect relationships between interventions and outcomes. A Cabinet Office report establishing a framework for assessing qualitative evidence clearly signals that government departments now value the contribution of qualitative methods in social policy research and evaluation (Spencer et al., 2004). Furthermore, it is widely recognised that a combination of evaluation methods should be used, applied appropriately to address different questions in different settings (Coote et al., 2004; Becker & Bryman, 2004).

There are also changes in the ways evaluators are seen as engaging with policy-makers and practitioners. Rather than being equated with traditional auditing and performance management roles, evaluation is frequently expected to provide formative or even dialogic feedback to help policy-makers and practitioners develop programmes of activities, as well as to generate more sophisticated, theoretically driven understandings of concepts such as social exclusion. Some evaluation is now committed to redressing social power imbalances and to embracing diverse and sometimes contradictory experiences, perspectives and accounts of social problems and policy solutions (Donaldson & Scriven, 2003). Some of these potential roles and orientations identified in the literature are summarised in Table One. Whilst some of these roles are well established in evaluation practice, others are starting to emerge as important departures in the ways evaluation is seen as
