Lessons from the Field - Seer Consulting

Part III

Important Considerations and Lessons Learned

PROMISING PRACTICES IN EVALUATION OF CAPACITY BUILDING

This study identified the following characteristics based upon promising practices and/or lessons learned from evaluation reports, interviews with evaluators and capacity builders, and the current literature on capacity building and organizational effectiveness.

Timely and Planned

Evaluation works best when it is incorporated from the beginning of the design of a capacity-building effort. Before startup is the time to question the project’s readiness for evaluation and to plan accordingly.

Timeliness leads to a sound design phase that considers all aspects of the evaluation process: who should be involved, what is being measured and why, how to measure it, for whom the evaluation is intended, and who will receive the findings and how. From the beginning, the planning process should clarify why the evaluation is gathering what information, for whom, and for what purpose. Accountability and evaluation are closely aligned, and this is the opportunity to review the ethical and practical questions underlying an evaluation approach: time, costs, the impact on those providing data, and so forth.

The need to plan the evaluation during the planning and startup of an initiative was echoed by nearly every informant interviewed for this study. Attempting to connect measures and critical questions at the beginning of the design may change the actual design of the project (formative evaluation) and improve it. If a capacity-building intervention is looked at through the lens of how it will be measured (either by an outside evaluator or through self-evaluation), certain flaws in the logic of the design may be caught and corrected.

Unfortunately, many evaluators interviewed for this report experienced being called in after a project was designed and operating. In some cases, all that was left to do was to describe the process and its impact after the fact. Some evaluators had difficulty going back to the project’s beginning and capturing baseline information, which made it difficult to quantify change.

Even for a simple capacity-building activity, such as providing a training, planning for the evaluation at the same time the training is being designed allows for alignment between the trainers’ guide, the learning points, and the tools that measure whether or not the participants attained the learning goals.

Stakeholder-based

If the question is the effectiveness of a capacity-building intervention, then those for whom the intervention is intended should be included in shaping what defines effectiveness (outcomes), how effectiveness might be shown (indicators), and the methods for measuring it (tools).

The various stakeholders—the capacity-building agent, the nonprofit(s) or end users, consumers of nonprofit services, and funders/social investors—will have different perspectives and needs regarding evaluation. The richest evaluative experiences—and the ones that appear to be leading to genuine institutionalization of evaluation for the purpose of ongoing learning—were those that included stakeholders. These were, without exception, the participatory evaluations that by their nature are inclusive of multiple stakeholders (see Case Studies, Part IV for more information).
