Participatory Evaluation of our 2008 - Action Against Hunger
But none of the monitoring conducted in 2008 included data necessary for reporting or analysis. Most of the programme monitoring was in fact conducted on behalf of FAO, concentrating on crop growth, with data collection and entry completed in the field and a spreadsheet emailed to FAO (unfortunately this did not include yield data). ACF also monitored the programme, concentrating entirely on quantitative cumulative progress: for example, the number of groups formed and bank accounts opened, types of enterprises selected, acres planted, number of people trained, additional assets constructed, and number of household animals and symbolic assets. Some of the above might have contributed to an evaluation, but certainly not to analysing changes in purchasing power, for example, which was the strongest statement of impact forecast in the proposal and inception report. Conducting this evaluation closer to the programme conclusion might have facilitated greater access to some of these data, but this proved surprisingly difficult. Aside from the fact that hard-copy files had already been put into storage, staff appeared to know little about what might be found in those files – such as whether they contained copies of group profitability analyses or work plans. Some of the electronic data were found, but it was evident that – like the monitoring system as a whole – they were not stored in a manner that reflected their anticipated access or use. Nor were there final data against which the baseline could be contrasted. For these reasons the evaluation had to be based predominantly on external data and a retrofit framework. This made the exercise possible but compromised its quality and relevance, in that very few conclusions can be drawn on the basis of FFS programme monitoring in 2008.
Greater investment in initial assessment for design, followed by a more strategic commitment to baseline data collection and monitoring, will ensure that information can be utilised more effectively during implementation and evaluation. This can ensure a more efficient use of staff time in the field, a reduction in the costs associated with monitoring (such as additional photocopying, vehicle travel, and data entry), and ultimately better reporting. Planning for M&E also helps field staff understand the content, purpose, and use of the information they are collecting. Recent ACF experience following the FFS programme suggests that greater field staff involvement in all aspects of monitoring has improved the accuracy and reliability of their data collection. Greater foresight in data collection, management, and use will also help support ACF's planned surveillance activities and encourage greater comparability of data within and across departments.
The method of initial assessment and baseline data collection, depending on how it is approached, might even contribute to the originally intended participatory nature of FFS M&E. Any one of the RRA tools used during the evaluation fieldwork could generate participant perspectives on a range of issues pertaining to practices, production, gender, assets, transmission, etc. The specific criteria participants use, the amounts they indicate, and the priorities they establish could all be channelled into more meaningful indicators and monitoring. The same could prove true for a modified version of matrix scoring for profitability analysis and enterprise selection, as the tool by necessity would establish a range of criteria and preferences that could be evaluated for achievement at the end of the programme.
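As a purely illustrative sketch (not drawn from the FFS programme data), a matrix-scoring exercise reduces to a simple tally: each enterprise is scored against participant-chosen criteria, and the totals yield both a ranking for enterprise selection and a set of per-criterion benchmarks that could later be revisited at the end of the programme. All enterprise names, criteria, and scores below are hypothetical.

```python
# Hypothetical matrix-scoring sketch: enterprises, criteria, and scores
# are illustrative only, not actual FFS programme data.
criteria = ["profitability", "labour required", "drought tolerance"]

# Participant scores per enterprise, one value per criterion (higher = better).
scores = {
    "groundnuts": [4, 3, 2],
    "sorghum": [2, 5, 4],
    "goat rearing": [5, 2, 3],
}

def rank_enterprises(scores):
    """Sort enterprises by their total score across all criteria, highest first."""
    return sorted(scores, key=lambda name: sum(scores[name]), reverse=True)

ranking = rank_enterprises(scores)
# The per-criterion scores recorded at selection time double as a baseline
# against which end-of-programme achievement could be compared.
```

The design point is that the same matrix serves two purposes: the column criteria become candidate monitoring indicators, and the recorded cell values become the baseline for later comparison.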
4.11 Targeting based on vulnerability and integrated criteria<br />
It is difficult to establish precisely how the three programme parishes were selected, or how the<br />
participating communities therein were identified. At household level the process and criteria<br />
become clearer and more effective.<br />
Geographic priorities were established partly on the basis <strong>of</strong> FSL field staff experience and where<br />
they ‘knew’ vulnerability to food insecurity to be higher. As the programme map illustrates (Figure<br />
1), there was geographic overlap between the FFS parishes and other ACF activities in nutrition<br />
and WASH. Of particular interest are water user committees and village health teams, as they too<br />
reflect elements <strong>of</strong> group approaches to management and problem solving. The FFS programme<br />
locations were also all within the catchment area <strong>of</strong> ACF feeding centres. The evaluation team<br />
therefore expected more apparent connections between the FFS groups and those from WASH<br />
and nutrition; in practice, however, these were surprisingly few. Although not explored in depth
during each focus group session, there was always a question or two about other ACF activities or<br />
Action Against Hunger Uganda - 42 - Farmer Field School Evaluation