Design, Monitoring, and Evaluation – Capacity Assessment



Global Synthesis Report

A Report prepared for CARE-USA

By Nalin Johri, M.P.H., M.A.

Final version, January 14, 2002


Table of Contents

Executive Summary

  Introduction

  First, a look at some of the interesting numbers

  Observations and Comments

  Some specific recommendations

Acronyms used in this report

1. Introduction

2. Methods

  2.1 Data

  2.2 Limitations

3. Results

  3.1 Concept for Projects

  3.2 Diagnosis, Design, Goals, and Indicators

  3.3 Monitoring and Evaluation

  3.4 Information Processing and Use

  3.5 Training and Use of DME Skills

  3.6 DME CA Process

4. Discussion and Conclusions

5. Recommendations



Executive Summary

Introduction:

During the past two years, as part of its Impact Evaluation Initiative (IEI), CARE-USA asked its Country Offices to undertake a process of assessing the DME capacities of project and Country Office staff in order to:

a. Help project staff learn about the IEI standards and good practices for DME;

b. Guide project staff in reflecting on how their own projects were designed and on their systems for monitoring and evaluating progress towards achieving impact; and

c. Use a standard format to measure the strengths and weaknesses of each project in meeting the IEI DME standards (as contained in the CARE Impact Guidelines).

The DME Capacity Assessment (CA) involved project staff reading the Impact Evaluation Checklist and/or viewing a PowerPoint overview of DME principles, then responding as a group to a project-level discussion guide, completing a project-level questionnaire, and filling in an individual assessment of skills and training needs in DME. This report is based on the quantitative responses to the project-level questionnaire received from 186 projects in 23 Country Offices and on narrative summaries from 12 Country Offices.

The data were processed and analyzed by Nalin Johri (Ph.D. candidate at the University of North Carolina at Chapel Hill), who previously served as Manager (Health Management Information Systems) / Health, Nutrition, and Population in CARE India. The study was commissioned and edited by Jim Rugh, CARE DME Coordinator.

This report, like the DME CA Tool itself, is structured around the project cycle: from initial diagnostic assessment, to project design and proposal writing, to monitoring and evaluation plans, baselines, and M&E information and reporting systems, to evaluations.

First, a look at some of the interesting numbers:

This report contains many statistics. It may be useful simply to list some of the key results:

Are project proposals started as concept papers?

• 67% of projects had a concept paper before the full proposal was submitted.

• 31% of the concept papers originated from a Country Office. Other concepts came from donors (18%), project staff (15%), and others (12%).

• 20% of projects were follow-ons to prior phases / pilots.

Do projects undertake any diagnostic assessments?

• 31% of projects were based on some form of formal diagnostic assessment:

  o 20% were based on a full HLS assessment,

  o 30% were based on other forms of holistic diagnosis.



Do projects have logframes?

• 84% of these projects have logframes.

• 73% of projects have household-level impact as their final goal.

• 75% of projects reportedly contribute to higher program goals.

• 65% of projects say their goals meet the “SMART” criteria.

Who was involved in the design and writing of proposals?

• 44% of project proposal writing involved the Sector Coordinator.

• 35% of project proposal writing involved Project Managers.

• 31% of project proposal writing involved the CD or ACD.

• 24% of project proposal writing involved consultants.

Who reviewed project proposals?

• 53% of project proposals were reviewed by the CD / ACD.

• 29% of project proposals were reviewed by a Sector Coordinator.

• 29% of project proposals were reviewed by a Project Manager.

• Only 36% of the projects say that there are clear guidelines for project quality review in their CO.

• 64% say that there was a helpful review of their proposal.

Do the projects have detailed monitoring and evaluation plans?

• 45% of projects have an evaluation plan or design included in project documentation.

• 36% of projects are reported as having detailed M&E plans.

• 62% of project M&E plans involved implementing staff in their development.

• 34% of project monitoring and evaluation plans involved someone from the Country Office.

How were baselines conducted for these projects?

• 80% of projects were reported as having some form of a baseline.

• 43% of the baselines included a quantitative survey.

• 57% of baselines measured indicators of impact.

• 72% of baselines included indicators of effect.

• 23% of the projects had baselines prior to project startup.

• 59% of the projects say their baselines were different from the initial diagnosis/assessment.

• 57% of the projects say their baselines were conducted with the same level of rigor as that planned for the project final evaluation.

Have the projects been re-designed?

• 48% of the projects have been re-designed and most of the changes were documented.

Are project staff satisfied with indicators?

• 43% of projects are satisfied with indicators of impact.

• 60% of projects are satisfied with indicators of effect.

• 73% of projects are satisfied with indicators of output.

• 65% of projects have project staff measuring or processing outcome indicators.



What methods are used to count and classify beneficiaries?

• 63% of projects have ways to disaggregate beneficiaries by gender.

• 42% of projects say they have a registry to keep track of beneficiaries.

• 15% of projects rely on estimates of the beneficiaries they reach.

What Management Information Systems (MIS) do projects use?

• 35% of projects say they have satisfactory MIS software (without specifying what it is).

• 3 of these projects use MER as their MIS software.

• 24% of projects use some other MIS software.

• 50% of projects generate timely reports through their MIS.

• 53% of projects generate accurate reports through their MIS.

Do project reports meet the needs of key stakeholders?

• 68% of projects have reports that meet the needs of project management.

• 66% of projects have reports that meet the needs of donors.

• 44% of projects have reports that meet the needs of partners.

• 40% of projects have reports that meet the needs of the CO.

• 34% of projects have reports that meet the needs of field supervisors.

• 27% of projects have reports that meet the needs of sector coordinators.

• 11% of projects have reports that meet the needs of CARE USA HQ.

Has there been adequate DME training?

• 38% of projects received adequate training in DME during the past two years.

• 65% of projects have plans for future training in DME.

Observations and Comments:

The above results provide rich data. Putting it all together, what is strikingly evident is the discord in the process from Design to Monitoring to Evaluation. The leap of faith from the ‘D’ to the ‘M&E’ of projects is occasioned by having rich technical inputs at the proposal development stage, through the Sector Coordinators and Project Managers, while detailed M&E plans or designs exist for only 45% of projects and responsibility for DME is divided. The resulting ad hoc monitoring and evaluation systems then “respond to key questions formulated by those involved, not to an evaluation plan written during the design phase” (comment from CARE-Bolivia). Though 65% of projects said their staff were measuring outcomes, one wonders whether they are actually involved in evaluating the loftier goals of the projects.

A possible area of weakness is that only 36% of project proposals were reviewed against clear guidelines for the sound review of proposals. The review process also appears to be ad hoc across regions, without any sound system for rigorously reviewing proposals to strengthen the various elements within them. Where such guidelines and processes exist, they should be reviewed, compiled, and shared with other Country Offices. As a result of the DME CA, Kenya has already embarked upon developing DME guidelines.



Amidst all this, it is encouraging to note that concept papers as precursors to full proposals are now the norm (67% of projects), regardless of the source of the initial concept for the project. Country Offices are becoming more rigorous in the design of proposals, increasingly resorting to logframes (84% of projects) and turning out SMART goals (65% of projects). The development of good logframes requires attention to the clarity of the logical links in the continuum from inputs to impact. This appears to be a reality in many projects across the CARE regions, with project staff feeling confident that this logical linkage is clear. Projects are also increasingly tuned to bringing about lasting changes at the household level (73% of projects). Projects do not appear to be performing in a vacuum but to be part of a larger program strategy, with 75% of project goals reportedly contributing to higher program goals. Being part of a larger program strategy, and having sound projects in countries across the regions, should go a long way toward building on the successes and lessons of projects, and may to an extent mitigate the discontinuity in learning that results from the relatively few prior phases / pilots.

The results of the cross-tabulation of projects that used the full HLS Assessment and logframes suggest that rigorous pre-project assessment and the use of logframes with clear logical linkages are an increasing reality in the CARE world. The use of HLS and other assessments is at times hampered by the availability of funds or the emergency nature of projects, thus perpetuating sectoral foci. This needs to be addressed by rigor in pre-project assessment and in the development of M&E plans, with the details therein to measure and track the indicators contained in the proposal.

In spite of 80% of projects having a baseline of some form or other, only 47% of those baselines used a quantitative survey. On applying this ‘quantitative’ filter to the indicators measured during the baseline and to satisfaction with indicators, there is a sharp, almost fifty percent reduction in the number of projects that had baselines measuring impact/effect or that had satisfactory indicators at the various levels. In a world driven by numbers, and with donors driven by measurement of impact, this scarcity of good baselines needs to be reversed.

Sound results are also built on systems with the wherewithal to track and keep count of project beneficiaries. Sixty-three percent of projects reportedly registered beneficiaries by gender; however, far fewer projects (46%) were able to register total beneficiaries. This may imply varying interpretations of what constitutes a system for registering beneficiaries.

Existing data management systems need to be better understood. In the rush to automate, attention to detail, and to systematizing the logical link between proposal, design, and monitoring and evaluation, may be overlooked. Although only 36% of projects reportedly had satisfactory software for data management, at least 65% of projects feel they are able to fulfill the reporting needs of project management and donors. Far fewer projects (27%-40%) were fulfilling the management needs of Country Office Senior Management, Sector Coordinators, or Field Supervisors. If this truly is the case, then the building blocks of data at the field level need to be strengthened. Data also appear not to be analyzed at the point of collection but to be pushed up the monitoring ladder for compilation and reporting. Attention to sharing information across projects and across the CARE world still requires impetus.



The self-ratings and measures of quality for many items, such as the proposal, detailed implementation plan, M&E plans and systems, baseline, and adequacy of training, do not appear to be borne out by many of the results of the analyzed data. These ratings and quality measures may be clouded by socially desirable responses or other subjectivity; they need to be viewed with caution and interpreted with skepticism.

Finally, the DME CA was envisioned as a process to aid reflection and to gather lessons from the successes and weaknesses of project design, monitoring, and evaluation. The objectives of this exercise also included a refresher on the principles of DME, including reading the Impact Evaluation Checklist as part of a larger structured group exercise. The data suggest that in many projects this exercise may have been short-circuited, probably by pressing project needs. At the same time, many Country Offices (as evidenced by the narrative summaries from 12 Country Offices) have used the results from the DME CA to undertake a review, reflection, and analysis of their approach to DME. This has resulted in the identification of specific areas for improvement, the reassignment of roles and responsibilities pertaining to DME, and even the development of a DME manual (as in Kenya) as they march onwards to “realistic and adequate, but not necessarily perfect systems to do M&E activities” (comment from CARE-Kenya). This ‘evaluation as intervention’ has partially achieved its potential to raise consciousness of, and knowledge about, DME.

Some specific recommendations include:

• Encourage better continuity from the ‘D’ (proposal) to the ‘M&E’ (design and detail of the M&E plan)

• Systematize before you automate the M&E system

• Institutionalize the process of DME through Country Office teams / individuals

• Encourage the adoption of full HLS or other forms of formal holistic diagnosis

• Systematize the process of reviewing all elements of proposals, with clear and helpful guidelines for the review of proposals

• Encourage the use of quantitative surveys as one of the essential elements of baseline studies

• Undertake a review of MIS systems in CARE projects, and systematize and strengthen them before computerization

• Refocus reports to balance reporting requirements with the immediacy of management needs

• Further study the results of this DME CA exercise through the inclusion of:

  o Sector-level information on projects,

  o Information on the remaining projects, and

  o A more in-depth qualitative analysis of the narrative reports.




Acronyms used in this report

ACD Assistant Country Director

CD Country Director

CI CARE International

CIHQ CI Headquarters (in this case, CARE USA/Atlanta)

CO Country Office

DIP Detailed Implementation Plan

DME Design, Monitoring and Evaluation

DME CA DME Capacity Assessment

DME CAT DME Capacity Assessment Toolkit

IEI Impact Evaluation Initiative

M&E Monitoring and Evaluation

MIS Management Information System



1. Introduction

CARE’s work in over 60 countries is carried out under a variety of situations that straddle the continuum from relief to development. The projects that CARE undertakes in these countries reflect, to varying degrees, a mix of region/country-specific needs and aspirations and donor priorities. Integral to the success of these projects is a rigorous process of reflecting on the practices used, learning from past lessons, and building capacity to develop, plan, and implement better projects. CARE-USA is at the forefront of these initiatives and has at periodic intervals built capacity in Country Offices to design and manage more effective projects. Continuing this initiative is the present Design, Monitoring, and Evaluation Capacity Assessment (DME CA).

During 1999, CARE-USA produced the CARE Impact Guidelines, [1] which included an ‘Impact Evaluation Checklist’ of best-practice DME principles. The full IEI packet included an overview of the DME principles contained in a PowerPoint® presentation, and a DME Capacity Assessment Toolkit (DME CAT) to help projects and COs assess their ability to meet the IEI standards. Guidelines accompanying the DME CAT positioned it as a group assessment process requiring the involvement of “all project staff with any responsibilities related to design and/or monitoring and evaluation”. The DME CAT included a project-level discussion guide and questionnaire and an individual-level assessment of skills and training needs in DME.

There were two stated purposes in using the DME CAT:

a. Guide project staff in reflecting on how their project was designed and on its systems for monitoring and evaluating progress towards achieving impact; and

b. Use a standard format to measure the strengths and weaknesses of each project in meeting the DME standards.

This report is based on the quantitative data from project-level DME CAT reports received from 23 Country Offices and narrative summaries from 12 Country Offices. Others are still coming in, but were not received in time for inclusion in this global synthesis.

2. Methods

Project-level information from the DME CA Tool was collected on a Microsoft Excel® template. This project-level information came from at least three different versions of the Tool; however, to make the data comparable, version 2.05 of the DME CAT [2] was used as the standard data entry protocol for all projects. Data entry and analysis were done using EPI INFO version 6.04. Univariate analysis and selected cross-tabulations using analysis filters were the primary quantitative analyses, while the narrative summaries were content-analyzed. Results are presented at the CARE Region/Country Office and project levels.
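As an illustration of the kind of univariate tabulation and filtered cross-tabulation described above, here is a minimal sketch in Python/pandas. It assumes the project-level responses were consolidated into one table; the column names are hypothetical, and the actual analysis was done in EPI INFO 6.04, not pandas.

```python
# Minimal sketch of univariate analysis, cross-tabulation, and an analysis
# filter over a hypothetical project-level table (illustrative data only).
import pandas as pd

projects = pd.DataFrame({
    "country_office": ["Peru", "Kenya", "India", "Mali"],
    "has_logframe":   [True,  True,   True,   False],
    "full_hls":       [True,  False,  False,  True],
})

# Univariate analysis: frequencies (as percentages) for one question.
print(projects["has_logframe"].value_counts(normalize=True) * 100)

# Select cross-tabulation: one question against another.
print(pd.crosstab(projects["full_hls"], projects["has_logframe"]))

# Analysis filter: restrict the records before tabulating, e.g. logframe
# prevalence among only those projects with a full HLS assessment.
with_hls = projects[projects["full_hls"]]
print(with_hls["has_logframe"].mean() * 100)
```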

[1] The CARE Impact Guidelines are based on work of the Impact Evaluation Initiative Working Group (IEI WG) at a workshop held at the CARE-USA Headquarters, Atlanta, 27-30 April 1999.

[2] The updated version of the DME CA Tool was developed with CARE-Bangladesh during the DME CA workshop, Bangladesh, 14-17 May 2000.



2.1 Data

Data were entered for 186 projects from 23 CARE Country Offices located in four CARE regions. [3] The spread of COs and projects is depicted in Figure 1. The distribution of projects by region ranged from a high of 70 projects in East Africa to a low of 30 projects in the Asia region. By country, the distribution ranged from 17 projects in India to two projects in Malawi.

[3] These CAs were conducted before the new Mid-East Regional Management Unit had been formed. Therefore the results from Egypt and West Bank/Gaza are listed under East Africa in this report.

Figure 1: Distribution of Projects by Country Office and Region

Latin America & Caribbean (46): Bolivia (10), Haiti (10), Nicaragua (13), Peru (13)
East Africa (70): Egypt (4), Ethiopia (9), Kenya (11), Rwanda (5), Somalia (9), N Sudan (9), S Sudan (9), Tanzania (8), West Bank/Gaza (6)
South & West Africa (40): Benin/Ghana/Togo (5), Malawi (2), Mali (8), Mozambique (6), Niger (13), Sierra Leone (3), Zambia (3)
Asia (30): Afghanistan (4), Bangladesh (9), India (17)

The results section of the report follows a ‘project lifecycle’ approach to project development and assessment.

2.2 Limitations

Before proceeding further with this report, certain caveats in the analysis and interpretation of results should be borne in mind. These are:

• Different versions of the questionnaire required choosing the ‘closest’ response to at times somewhat differently worded questions.





• On one occasion, respondent-generated codes had to be altered to fit the codes used by other Country Offices. Likewise, some orderings of responses had to be modified prior to data entry.

• The sample is too small for some of the analyses; hence only numbers, and not percentages, are presented by country.

• The spread of projects by region / Country Office tends to give greater weight to the results from select regions / Country Offices.

• Sector information for projects is currently unavailable to use as an analysis filter; consequently, all the projects are viewed as a whole.

• The tremendous spread of projects by beneficiaries reached, and the lack of this information for all projects, does not allow the assignment of weights.

• It appears that at times the questionnaire was filled in by individuals rather than as part of a group exercise. (See section 3.6: numbers ranged from one individual to 50 project staff participating in the DME CA.)

• Definitive response categories considered only responses that were an unqualified ‘Yes’. This may tend to present a pessimistic picture by not counting “possibly” or “to some extent”.

• Tremendous variation among projects within a country in terms of their Design, Monitoring, and Evaluation makes it difficult to paint a true Country Office-specific picture.

• These were self-assessments by project staff, and the answers represent their own perceptions. In some cases, outside experts facilitated DME CA sessions; it is hoped that they helped to provide some objectivity. Nevertheless, all of these results need to be interpreted as “according to those participating in these DME CA exercises”.

• It is apparent that CARE staff have a variety of interpretations of some DME terms. One example is “registry” for counting participants/beneficiaries. While many projects say their M&E MIS uses a “registry”, it is doubtful they all keep track of individuals, especially given the size of some projects (with as many as 16.5 million beneficiaries).

• Narrative summaries were available for only a sub-sample of the 23 Country Offices.

3. Results

Some of the key results of this analysis were that 84% of projects have a logframe; 20% of projects were based on a full Household Livelihood Security (HLS) assessment; 73% of projects say their final goal is addressed at the household impact level; 45% of projects have some form of evaluation plan; 43% of projects had a quantitative survey as a baseline; and in 65% of projects, staff were measuring or processing outcome data. It is evident that the DME CA has led to the systematization of reflection and to varying levels of analysis of the state of DME in the Country Offices.

Summary results were reported in the Executive Summary. Detailed results are divided into the following sub-sections: Concept for Projects (3.1); Diagnosis, Design, Goals, and Indicators (3.2); Monitoring and Evaluation (3.3); Information Processing and Use (3.4); Training and Use of DME Skills (3.5); and DME CA Process (3.6).



3.1 Concept for Projects

Sixty-seven percent of projects were started with a concept paper (see Table 3.1.1). Across regions, the share of projects with a concept paper ranged from 60% in East Africa to 76% in Latin America. Except for Afghanistan and West Bank/Gaza, all Country Offices reported many projects being initiated through a concept paper. The existence of concept papers was identified as an area of strength on the DME continuum by the Sudan Country Office.

The source of the initial concept was predominantly the Country Office (31% of all projects), followed by donors (18% of all projects). Only 5% of project concepts came from the CARE USA Headquarters in Atlanta. Bear in mind that many projects reported more than one source of initial concept.

By region, it appears that Country Offices and donors in the South and West Africa and Asia regions initiated more concepts for projects. All eight projects in Mali had the Country Office as one of the sources of initial concept, while in Bangladesh donors were a source of the concept for eight of the nine projects there.

It was reported that 20% of the projects had a prior phase or pilot. Again, there were apparently more projects with pilots / prior phases in the Asia and South and West Africa regions.

A cross-tabulation of the data on concept papers and sources of initial concept is shown in Table 3.1.2. The data do not suggest any association between having Donors, CIHQ, or Project Staff as a source of initial concept and whether or not a concept paper was developed for the project. It appears, however, that projects with the Country Office or ‘Others’ as a source of initial concept were somewhat less likely to have a concept paper prior to proposal development.

The period from initial concept to proposal for the 115 projects with available information ranged from zero months (individual projects in Tanzania and West Bank) to 72 months (a project in Niger). The median duration from concept to proposal by Country Office is depicted in Figure 2; medians ranged from zero (West Bank) to 12 months (Mali and Niger).



Table 3.1.1: Concept Paper, Source of Initial Concept, and Prior Phase of Project

| Country Office and Region | Concept Paper | Donor* | CI HQ* | Country Office* | Project Staff* | Others* | Prior Phase | N |
|---|---|---|---|---|---|---|---|---|
| Latin America and the Caribbean | 35 | 3 | 2 | 6 | 1 | 0 | 1 | 46 |
| Row (%) | 76.1% | 6.5% | 4.3% | 13.0% | 2.2% | 0.0% | 2.2% | 100.0% |
| Column (%) | 28.2% | 8.8% | 20.0% | 10.5% | 3.6% | 0.0% | 2.7% | 24.7% |
| Bolivia | 10 | 0 | 0 | 0 | 0 | 0 | 0 | 10 |
| Haiti | 8 | 3 | 2 | 6 | 1 | 0 | 1 | 10 |
| Nicaragua | 6 | 0 | 0 | 0 | 0 | 0 | 0 | 13 |
| Peru | 11 | 0 | 0 | 0 | 0 | 0 | 0 | 13 |
| East Africa | 42 | 7 | 1 | 11 | 5 | 4 | 7 | 70 |
| Row (%) | 60.0% | 10.0% | 1.4% | 15.7% | 7.1% | 5.7% | 10.0% | 100.0% |
| Column (%) | 33.9% | 20.6% | 10.0% | 19.3% | 17.9% | 17.4% | 18.9% | 37.6% |
| Egypt | 2 | 2 | 0 | 3 | 3 | 1 | 0 | 4 |
| Ethiopia | 4 | 1 | 1 | 7 | 2 | 0 | 6 | 9 |
| Kenya | 11 | 0 | 0 | 0 | 0 | 0 | 0 | 11 |
| Rwanda | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
| Somalia | 5 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
| Sudan (North) | 5 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
| Sudan (South) | 8 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
| Tanzania | 4 | 1 | 0 | 0 | 0 | 0 | 0 | 8 |
| West Bank / Gaza | 1 | 3 | 0 | 1 | 0 | 3 | 1 | 6 |
| South and West Africa | 28 | 12 | 4 | 24 | 11 | 15 | 17 | 40 |
| Row (%) | 70.0% | 30.0% | 10.0% | 60.0% | 27.5% | 37.5% | 42.5% | 100.0% |
| Column (%) | 22.6% | 35.3% | 40.0% | 42.1% | 39.3% | 65.2% | 45.9% | 21.5% |
| Benin, Ghana, & Togo | 2 | 1 | 1 | 1 | 2 | 1 | 0 | 5 |
| Malawi | 2 | 0 | 1 | 1 | 0 | 0 | 0 | 2 |
| Mali | 6 | 2 | 0 | 8 | 4 | 7 | 5 | 8 |
| Mozambique | 2 | 1 | 1 | 4 | 1 | 1 | 0 | 6 |
| Niger | 12 | 8 | 1 | 6 | 0 | 6 | 8 | 13 |
| Sierra Leone | 2 | 0 | 0 | 2 | 2 | 0 | 2 | 3 |
| Zambia | 2 | 0 | 0 | 2 | 2 | 0 | 2 | 3 |
| Asia | 19 | 12 | 3 | 16 | 11 | 4 | 12 | 30 |
| Row (%) | 63.3% | 40.0% | 10.0% | 53.3% | 36.7% | 13.3% | 40.0% | 100.0% |
| Column (%) | 15.3% | 35.3% | 30.0% | 28.1% | 39.3% | 17.4% | 32.4% | 16.1% |
| Afghanistan | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 4 |
| Bangladesh | 7 | 8 | 0 | 4 | 6 | 3 | 6 | 9 |
| India | 12 | 4 | 3 | 12 | 5 | 1 | 6 | 17 |
| Total | 124 | 34 | 10 | 57 | 28 | 23 | 37 | 186 |
| Row (%) | 66.7% | 18.3% | 5.4% | 30.6% | 15.1% | 12.4% | 19.9% | 100.0% |

* multiple responses. The Donor through Others columns give the source of the initial concept.



Table 3.1.2: Concept Paper and Source of Initial Concept

| Source of Initial Concept | Concept Paper Yes (n=124): Frequency | Percent | Concept Paper No (n=23): Frequency | Percent |
|---|---|---|---|---|
| Donor (n=34) | 26 | 21.0% | 5 | 21.7% |
| CARE International Headquarters (n=10) | 8 | 6.5% | 1 | 4.3% |
| Country Office (n=57) | 37 | 29.8% | 10 | 43.5% |
| Project Staff (n=28) | 21 | 16.9% | 4 | 17.4% |
| Others (n=23) | 16 | 12.9% | 5 | 21.7% |

Figure 2: Concept to Proposal (in months)
[Bar chart of the median months from initial concept to proposal, by Country Office and region; numbers represent medians; not available for some Country Offices.]

3.2 Diagnosis, Design, Goals, and Indicators

Thirty-one percent of all projects had some form of formal diagnosis to identify the problems that would be addressed by the projects (see Table 3.2.1). Over 50% of all projects from the Asia and South and West Africa regions had some form of formal diagnosis. In ascertaining the form that this diagnosis took, it emerges that 20% of projects had a full Household Livelihood Security Assessment (HLSA) and 30% of projects had some other form of holistic diagnosis. More projects in the Latin America and East Africa regions had full HLS assessments, while more projects in the Asia region had some other form of holistic diagnosis. Projects in Peru and Niger accounted for 44% of projects with full HLSAs. One project in Angola used a Participatory Approach Needs Assessment (PANA) as its pre-project assessment.




Eighty-four percent of projects have logframes; across regions, at least 75% of projects in every region have them.

Seventy-three percent of all projects report that they have final goals aimed at the household impact level; 73% or more of projects in the Asia, East Africa, and Latin America regions have final goals at this level. Eighty-one percent of projects have intermediate goals at the effect level; across regions, at least 75% of projects in every region do. Concern was voiced in one project from the Gulf of Guinea that the “choice of indicators (is) not a good representation of the measure of goals and objectives of the project”. Seventy-five percent of projects reported that they contribute to a higher program goal; 80% or more of projects in the Asia, East Africa, and Latin America regions report doing so.

Sixty-five percent of projects have goals that staff feel meet the SMART criteria (Specific, Measurable, Appropriate, Reliable, and Time-bound). Across regions, the percentage of projects with SMART goals was fairly consistent.

When asked about the clarity of the logical linkage between outputs and outcomes, the ratings for the 177 projects with available information ranged from a low of 1.0 (individual projects in Bolivia, Mali, and Peru) to the highest rating of 5.0 (individual projects in Afghanistan, Bolivia, India, Kenya, Mozambique, Nicaragua, Rwanda, North Sudan, South Sudan, Tanzania, and West Bank / Gaza). In Figure 3, the median ratings on clarity of logical linkage by Country Office ranged from a low of 1.0 (Peru) to a high of 4.0 (Afghanistan, Bangladesh, Egypt, Benin/Ghana/Togo, India, Kenya, Mali, Mozambique, North Sudan, and West Bank / Gaza).

Table 3.2.2 depicts the associations between full HLS assessment and the goals, satisfactory indicators, and monitoring and evaluation plans of projects. There is not much variation between projects with and without a full HLS assessment in their goals, indicators, or detailed monitoring and evaluation plans. However, fewer projects with full HLS assessments have documented evaluation designs/plans than projects without them.

Taking this analysis further, Table 3.2.3 depicts the associations between logframes and the source of initial concept, goals, indicators, and their monitoring and evaluation. There is not much variation between projects with and without logframes in the source of initial concept, contribution to higher program goals, final goals addressing household-level impact, satisfactory indicators at the effect and output levels, or full HLS assessment. More projects with logframes had intermediate goals at the effect level, SMART goals, satisfactory indicators at the impact level, or sufficiently detailed monitoring and evaluation plans than projects without logframes. Fewer projects with logframes have a project document with an evaluation plan/design, or some other form of holistic diagnosis, than projects without logframes. The “links between log frame, strategy, and means of measurement” (comment from CARE-Gulf of Guinea) were recognized as areas needing improvement by many projects.



Table 3.2.1: Initial Diagnosis, Log Frame, and Goals

| Country Office and Region | Formal Diagnosis | Full HLS* | Other Diagnosis* | Log Frame | Final Goal at HH Impact | Int. Goal at Effect | Higher Program Goal | SMART Goal | N |
|---|---|---|---|---|---|---|---|---|---|
| Latin America and the Caribbean | 4 | 15 | 10 | 35 | 40 | 37 | 37 | 29 | 46 |
| Row (%) | 8.7% | 32.6% | 21.7% | 76.1% | 87.0% | 80.4% | 80.4% | 63.0% | 100.0% |
| Column (%) | 6.9% | 39.5% | 18.2% | 22.3% | 29.4% | 24.7% | 26.6% | 24.0% | 24.7% |
| Bolivia | 0 | 0 | 2 | 9 | 9 | 9 | 8 | 3 | 10 |
| Haiti | 2 | 3 | 3 | 4 | 8 | 6 | 9 | 3 | 10 |
| Nicaragua | 0 | 0 | 4 | 9 | 10 | 9 | 7 | 10 | 13 |
| Peru | 2 | 12 | 1 | 13 | 13 | 13 | 13 | 13 | 13 |
| East Africa | 14 | 8 | 20 | 57 | 51 | 56 | 56 | 50 | 70 |
| Row (%) | 20.0% | 11.4% | 28.6% | 81.4% | 72.9% | 80.0% | 80.0% | 71.4% | 100.0% |
| Column (%) | 24.1% | 21.1% | 36.4% | 36.3% | 37.5% | 37.3% | 40.3% | 41.3% | 37.6% |
| Egypt | 4 | 0 | 3 | 4 | 2 | 4 | 4 | 3 | 4 |
| Ethiopia | 5 | 3 | 2 | 8 | 9 | 9 | 8 | 8 | 9 |
| Kenya | 0 | 1 | 2 | 8 | 4 | 6 | 4 | 7 | 11 |
| Rwanda | 0 | 0 | 4 | 5 | 5 | 5 | 5 | 0 | 5 |
| Somalia | 0 | 0 | 0 | 8 | 6 | 7 | 8 | 6 | 9 |
| Sudan (North) | 0 | 2 | 3 | 9 | 8 | 9 | 9 | 9 | 9 |
| Sudan (South) | 0 | 0 | 3 | 8 | 9 | 9 | 9 | 9 | 9 |
| Tanzania | 0 | 2 | 0 | 5 | 5 | 3 | 5 | 2 | 8 |
| West Bank / Gaza | 5 | 0 | 3 | 2 | 3 | 4 | 4 | 6 | 6 |
| South and West Africa | 20 | 12 | 9 | 37 | 20 | 34 | 19 | 22 | 40 |
| Row (%) | 50.0% | 30.0% | 22.5% | 92.5% | 50.0% | 85.0% | 47.5% | 55.0% | 100.0% |
| Column (%) | 34.5% | 31.6% | 16.4% | 23.6% | 14.7% | 22.7% | 13.7% | 18.2% | 21.5% |
| Benin, Ghana, & Togo | 0 | 1 | 1 | 4 | 1 | 5 | 3 | 3 | 5 |
| Malawi | 2 | 2 | 2 | 2 | 2 | 2 | 0 | 1 | 2 |
| Mali | 5 | 1 | 0 | 7 | 3 | 8 | 4 | 5 | 8 |
| Mozambique | 0 | 1 | 1 | 6 | 5 | 6 | 4 | 5 | 6 |
| Niger | 7 | 5 | 3 | 12 | 7 | 11 | 8 | 8 | 13 |
| Sierra Leone | 3 | 1 | 1 | 3 | 1 | 1 | 0 | 0 | 3 |
| Zambia | 3 | 1 | 1 | 3 | 1 | 1 | 0 | 0 | 3 |
| Asia | 20 | 3 | 16 | 28 | 25 | 23 | 27 | 20 | 30 |
| Row (%) | 66.7% | 10.0% | 53.3% | 93.3% | 83.3% | 76.7% | 90.0% | 66.7% | 100.0% |
| Column (%) | 34.5% | 7.9% | 29.1% | 17.8% | 18.4% | 15.3% | 19.4% | 16.5% | 16.1% |
| Afghanistan | 0 | 1 | 1 | 4 | 3 | 3 | 4 | 4 | 4 |
| Bangladesh | 7 | 1 | 2 | 9 | 9 | 4 | 9 | 5 | 9 |
| India | 13 | 1 | 13 | 15 | 13 | 16 | 14 | 11 | 17 |
| Total | 58 | 38 | 55 | 157 | 136 | 150 | 139 | 121 | 186 |
| Row (%) | 31.2% | 20.4% | 29.6% | 84.4% | 73.1% | 80.6% | 74.7% | 65.1% | 100.0% |

* Multiple responses. Note: the Sudan (South) values are derived from the regional totals.



Figure 3: Clarity of Logical Linkage
[Bar chart of median ratings by Country Office and region; numbers represent medians.]

Table 3.2.2: Full HLS, Goals, Indicators, and their M&E

| | Full HLS Yes (n=38): Frequency | Percent | Full HLS No (n=59): Frequency | Percent |
|---|---|---|---|---|
| Goals | | | | |
| Contribute to Higher Program Goal (n=139) | 32 | 84.2% | 43 | 72.9% |
| Final Goal addresses Household Impact (n=136) | 32 | 84.2% | 42 | 71.2% |
| Intermediate Goal at Effect level (n=150) | 32 | 84.2% | 47 | 79.7% |
| Indicators | | | | |
| Satisfactory Indicators at Impact level (n=80) | 21 | 55.3% | 27 | 45.8% |
| Satisfactory Indicators at Effect level (n=111) | 22 | 57.9% | 35 | 59.3% |
| Satisfactory Indicators at Output level (n=136) | 26 | 68.4% | 47 | 79.7% |
| M&E | | | | |
| Project document containing evaluation design or plan (n=83) | 14 | 36.8% | 33 | 55.9% |
| Sufficiently detailed Monitoring and Evaluation Plans (n=67) | 10 | 26.3% | 19 | 32.2% |




Table 3.2.3: Logframe, Initial Concept, Goals, Indicators, and their M&E

| | Logframe Yes (n=157): Frequency | Percent | Logframe No (n=15): Frequency | Percent |
|---|---|---|---|---|
| Source of Initial Concept | | | | |
| Donor (n=34) | 29 | 18.5% | 3 | 20.0% |
| CARE International Headquarters (n=10) | 10 | 6.4% | 0 | 0.0% |
| Country Office (n=57) | 49 | 31.2% | 6 | 40.0% |
| Assessment | | | | |
| Full Household Livelihood Security Assessment (n=38) | 35 | 22.3% | 3 | 20.0% |
| Other form of holistic diagnosis (n=55) | 45 | 28.7% | 6 | 40.0% |
| Goals | | | | |
| Contribute to Higher Program Goal (n=139) | 120 | 76.4% | 10 | 66.7% |
| Final Goal addresses Household Impact (n=136) | 119 | 75.8% | 11 | 73.3% |
| Intermediate Goal at Effect level (n=150) | 133 | 84.7% | 10 | 66.7% |
| SMART Goals (n=121) | 108 | 68.8% | 8 | 53.3% |
| Indicators | | | | |
| Satisfactory Indicators at Impact level (n=80) | 72 | 45.9% | 5 | 33.3% |
| Satisfactory Indicators at Effect level (n=111) | 96 | 61.1% | 9 | 60.0% |
| Satisfactory Indicators at Output level (n=136) | 116 | 73.9% | 12 | 80.0% |
| M&E | | | | |
| Project document containing evaluation design or plan (n=83) | 71 | 45.2% | 9 | 60.0% |
| Sufficiently detailed Monitoring and Evaluation Plans (n=67) | 62 | 39.5% | 2 | 13.3% |

Table 3.2.4 depicts the primary authors and the other persons / offices directly involved in authoring project proposals. For most project proposals, the Sector Coordinator and/or Project Manager and/or Country Director / Assistant Country Director were the primary authors (between 31% and 44% of projects). Consultants were involved in writing 24% of project proposals. CARE International member Headquarters were reported as directly involved in 12% of project proposals.

The authorship of project proposals varied by region. Sector Coordinators predominated in Asia (77%), while in East Africa more of the proposals involved the Country Director / Assistant Country Director (46%) or Partners (23%). Fewer of the proposals from East Africa involved the Sector Coordinators (13%), and fewer of the proposals from Latin America involved the Country Director / Assistant Country Director. None of the projects in Asia reported the direct involvement of partners in proposal development, while fewer of the projects in South and West Africa had the direct involvement of consultants than did proposals from the other regions.



Table 3.2.4: Authorship of Project Proposals

| Country Office and Region | Project Manager* | Sector Coordinator* | CD / ACD* | CI HQ* | Partner* | Consultant* | N |
|---|---|---|---|---|---|---|---|
| Latin America and the Caribbean | 17 | 30 | 4 | 7 | 4 | 14 | 46 |
| Row (%) | 37.0% | 65.2% | 8.7% | 15.2% | 8.7% | 30.4% | 100.0% |
| Column (%) | 25.8% | 37.0% | 6.9% | 30.4% | 18.2% | 31.1% | 24.7% |
| Bolivia | 3 | 2 | 0 | 1 | 3 | 7 | 10 |
| Haiti | 2 | 9 | 1 | 3 | 0 | 2 | 10 |
| Nicaragua | 4 | 10 | 3 | 3 | 1 | 4 | 13 |
| Peru | 8 | 9 | 0 | 0 | 0 | 1 | 13 |
| East Africa | 26 | 9 | 32 | 7 | 16 | 18 | 70 |
| Row (%) | 37.1% | 12.9% | 45.7% | 10.0% | 22.9% | 25.7% | 100.0% |
| Column (%) | 39.4% | 11.1% | 55.2% | 30.4% | 72.7% | 40.0% | 37.6% |
| Egypt | 3 | 0 | 2 | 0 | 2 | 2 | 4 |
| Ethiopia | 7 | 1 | 6 | 0 | 7 | 3 | 9 |
| Kenya | 4 | 0 | 7 | 3 | 0 | 2 | 11 |
| Rwanda | 4 | 0 | 3 | 0 | 0 | 0 | 5 |
| Somalia | 0 | 4 | 6 | 0 | 0 | 2 | 9 |
| Sudan (North) | 3 | 4 | 0 | 0 | 0 | 2 | 9 |
| Sudan (South) | 5 | 0 | 1 | 0 | 1 | 2 | 9 |
| Tanzania | 0 | 0 | 3 | 4 | 2 | 3 | 8 |
| West Bank / Gaza | 0 | 0 | 4 | 0 | 4 | 2 | 6 |
| South and West Africa | 13 | 19 | 13 | 4 | 2 | 5 | 40 |
| Row (%) | 32.5% | 47.5% | 32.5% | 10.0% | 5.0% | 12.5% | 100.0% |
| Column (%) | 19.7% | 23.5% | 22.4% | 17.4% | 9.1% | 11.1% | 21.5% |
| Benin, Ghana, & Togo | 2 | 2 | 1 | 1 | 2 | 0 | 5 |
| Malawi | 0 | 0 | 1 | 1 | 0 | 0 | 2 |
| Mali | 3 | 6 | 4 | 1 | 0 | 1 | 8 |
| Mozambique | 2 | 0 | 5 | 1 | 0 | 1 | 6 |
| Niger | 6 | 7 | 2 | 0 | 0 | 3 | 13 |
| Sierra Leone | 0 | 2 | 0 | 0 | 0 | 0 | 3 |
| Zambia | 0 | 2 | 0 | 0 | 0 | 0 | 3 |
| Asia | 10 | 23 | 9 | 5 | 0 | 8 | 30 |
| Row (%) | 33.3% | 76.7% | 30.0% | 16.7% | 0.0% | 26.7% | 100.0% |
| Column (%) | 15.2% | 28.4% | 15.5% | 21.7% | 0.0% | 17.8% | 16.1% |
| Afghanistan | 2 | 0 | 3 | 0 | 0 | 0 | 4 |
| Bangladesh | 5 | 8 | 2 | 0 | 0 | 3 | 9 |
| India | 3 | 15 | 4 | 5 | 0 | 5 | 17 |
| Total | 66 | 81 | 58 | 23 | 22 | 45 | 186 |
| Row (%) | 35.5% | 43.5% | 31.2% | 12.4% | 11.8% | 24.2% | 100.0% |

* Multiple responses. CD / ACD = Country Director / Assistant Country Director.



The proposal review process is depicted in Table 3.2.5. Country Directors / Assistant Country Directors were reported as involved in reviewing 53% of all project proposals. Project Managers and Sector Coordinators each reviewed 29% of project proposals, and CIHQ was involved in 11% of proposal reviews. Analyzing by region, we find that in the Latin America region more project proposals were reviewed by the Project Manager (46%) and/or Sector Coordinator (67%), while in the East Africa region more project proposals were reviewed by the Country Director / Assistant Country Director (76%) and/or consultants (21%) than in any of the other regions.

Table 3.2.6 depicts the helpfulness of the review process and the changes occasioned by it. Sixty-four percent of projects were reported as having benefited from the process of proposal review, while 36% of projects say they (or their CO) had clear guidelines for review. More projects in Latin America and East Africa reported finding the review of proposals helpful and having clear guidelines for the review of proposals than in any of the other regions. In Sudan, an identified area of weakness was the “project design being ever modified”, in spite of strengths in the “existence of concept papers”, “realistic proposals”, and “quality of project design”.

Fifty-five percent of projects had a separate Detailed Implementation Plan (DIP). All of the projects from India had a separate DIP.

Major changes in the project design (including the log frame) were initiated in 48% of proposals, while changes in the project design were systematically documented in 43% of projects. (The projects that documented changes were not necessarily those that reported changes in their design.) There were fewer major redesigns of projects in South and West Africa.

The self-rated quality of the submitted proposal for the 164 projects with available information ranged from a low of 0 (individual projects in Bolivia and West Bank / Gaza) to a high of 5 (individual projects in Afghanistan, Egypt, Benin/Ghana/Togo, India, Kenya, Malawi, Mozambique, Niger, Somalia, and North Sudan). The median ratings of the quality of submitted proposals by Country Office (Figure 4) ranged from a low of 1.0 (Bolivia, Nicaragua, Peru, and South Sudan) to 4.5 (Afghanistan and Malawi).

The quality of Detailed Implementation Plans for the 157 projects with available information ranged from a low of 0 (individual projects in Bangladesh, Sierra Leone, and Zambia) to a high of 5 (projects in Afghanistan, Bangladesh, Bolivia, India, Kenya, Malawi, Mali, Niger, Rwanda, Somalia, North Sudan, and West Bank / Gaza). The median ratings of the quality of DIPs by Country Office (Figure 5) ranged from a low of 1.5 (West Bank/Gaza) to a high of 4.5 (Malawi).



Table 3.2.5: Review of Proposals

| Country Office and Region | Project Manager* | Sector Coordinator* | CD / ACD* | CI HQ* | Consultant* | N |
|---|---|---|---|---|---|---|
| Latin America and the Caribbean | 21 | 31 | 19 | 2 | 3 | 46 |
| Row (%) | 45.7% | 67.4% | 41.3% | 4.3% | 6.5% | 100.0% |
| Column (%) | 38.9% | 57.4% | 19.2% | 10.0% | 13.6% | 24.7% |
| Bolivia | 2 | 6 | 9 | 0 | 0 | 10 |
| Haiti | 4 | 6 | 3 | 1 | 1 | 10 |
| Nicaragua | 2 | 6 | 7 | 1 | 2 | 13 |
| Peru | 13 | 13 | 0 | 0 | 0 | 13 |
| East Africa | 24 | 10 | 53 | 10 | 15 | 70 |
| Row (%) | 34.3% | 14.3% | 75.7% | 14.3% | 21.4% | 100.0% |
| Column (%) | 44.4% | 18.5% | 53.5% | 50.0% | 68.2% | 37.6% |
| Egypt | 0 | 1 | 2 | 1 | 2 | 4 |
| Ethiopia | 7 | 0 | 6 | 0 | 1 | 9 |
| Kenya | 1 | 0 | 11 | 1 | 2 | 11 |
| Rwanda | 0 | 0 | 4 | 0 | 1 | 5 |
| Somalia | 1 | 4 | 9 | 1 | 3 | 9 |
| Sudan (North) | 6 | 5 | 9 | 3 | 0 | 9 |
| Sudan (South) | 7 | 0 | 8 | 0 | 2 | 9 |
| Tanzania | 1 | 0 | 4 | 4 | 0 | 8 |
| West Bank / Gaza | 1 | 0 | 0 | 0 | 4 | 6 |
| South and West Africa | 6 | 9 | 13 | 4 | 3 | 40 |
| Row (%) | 15.0% | 22.5% | 32.5% | 10.0% | 7.5% | 100.0% |
| Column (%) | 11.1% | 16.7% | 13.1% | 20.0% | 13.6% | 21.5% |
| Benin, Ghana, & Togo | 0 | 0 | 2 | 1 | 0 | 5 |
| Malawi | 0 | 0 | 0 | 0 | 0 | 2 |
| Mali | 0 | 6 | 4 | 2 | 0 | 8 |
| Mozambique | 0 | 0 | 4 | 0 | 1 | 6 |
| Niger | 6 | 3 | 1 | 1 | 0 | 13 |
| Sierra Leone | 0 | 0 | 1 | 0 | 1 | 3 |
| Zambia | 0 | 0 | 1 | 0 | 1 | 3 |
| Asia | 3 | 4 | 14 | 4 | 1 | 30 |
| Row (%) | 10.0% | 13.3% | 46.7% | 13.3% | 3.3% | 100.0% |
| Column (%) | 5.6% | 7.4% | 14.1% | 20.0% | 4.5% | 16.1% |
| Afghanistan | 0 | 0 | 2 | 2 | 0 | 4 |
| Bangladesh | 3 | 1 | 7 | 0 | 0 | 9 |
| India | 0 | 3 | 5 | 2 | 1 | 17 |
| Total | 54 | 54 | 99 | 20 | 22 | 186 |
| Row (%) | 29.0% | 29.0% | 53.2% | 10.8% | 11.8% | 100.0% |

* Multiple responses. CD / ACD = Country Director / Assistant Country Director.



Table 3.2.6: Process of Review of Proposals and Changes Therein

| Country Office and Region | Review Helpful | Clear Guidelines | Separate DIP | Redesign | Changes Documented | N |
|---|---|---|---|---|---|---|
| Latin America and the Caribbean | 35 | 24 | 31 | 29 | 22 | 46 |
| Row (%) | 76.1% | 52.2% | 67.4% | 63.0% | 47.8% | 100.0% |
| Column (%) | 29.4% | 35.8% | 30.4% | 32.6% | 27.8% | 24.7% |
| Bolivia | 3 | 2 | 5 | 9 | 7 | 10 |
| Haiti | 8 | 2 | 9 | 7 | 2 | 10 |
| Nicaragua | 11 | 7 | 5 | 0 | 0 | 13 |
| Peru | 13 | 13 | 12 | 13 | 13 | 13 |
| East Africa | 50 | 30 | 38 | 32 | 27 | 70 |
| Row (%) | 71.4% | 42.9% | 54.3% | 45.7% | 38.6% | 100.0% |
| Column (%) | 42.0% | 44.8% | 37.3% | 36.0% | 34.2% | 37.6% |
| Egypt | 3 | 2 | 2 | 4 | 3 | 4 |
| Ethiopia | 6 | 1 | 6 | 6 | 6 | 9 |
| Kenya | 6 | 10 | 6 | 1 | 3 | 11 |
| Rwanda | 5 | 0 | 2 | 4 | 1 | 5 |
| Somalia | 6 | 0 | 6 | 5 | 5 | 9 |
| Sudan (North) | 9 | 9 | 9 | 0 | 0 | 9 |
| Sudan (South) | 6 | 6 | 0 | 6 | 5 | 9 |
| Tanzania | 4 | 2 | 5 | 4 | 2 | 8 |
| West Bank / Gaza | 5 | 0 | 2 | 2 | 2 | 6 |
| South and West Africa | 20 | 5 | 16 | 11 | 13 | 40 |
| Row (%) | 50.0% | 12.5% | 40.0% | 27.5% | 32.5% | 100.0% |
| Column (%) | 16.8% | 7.5% | 15.7% | 12.4% | 16.5% | 21.5% |
| Benin, Ghana, & Togo | 1 | 0 | 1 | 1 | 4 | 5 |
| Malawi | 2 | 0 | 2 | 0 | 0 | 2 |
| Mali | 4 | 3 | 4 | 1 | 1 | 8 |
| Mozambique | 2 | 2 | 1 | 2 | 2 | 6 |
| Niger | 7 | 0 | 8 | 5 | 2 | 13 |
| Sierra Leone | 2 | 0 | 0 | 1 | 2 | 3 |
| Zambia | 2 | 0 | 0 | 1 | 2 | 3 |
| Asia | 14 | 8 | 17 | 17 | 17 | 30 |
| Row (%) | 46.7% | 26.7% | 56.7% | 56.7% | 56.7% | 100.0% |
| Column (%) | 11.8% | 11.9% | 16.7% | 19.1% | 21.5% | 16.1% |
| Afghanistan | 2 | 3 | 0 | 0 | 1 | 4 |
| Bangladesh | 3 | 1 | 0 | 6 | 7 | 9 |
| India | 9 | 4 | 17 | 11 | 9 | 17 |
| Total | 119 | 67 | 102 | 89 | 79 | 186 |
| Row (%) | 64.0% | 36.0% | 54.8% | 47.8% | 42.5% | 100.0% |



Figure 4: Quality of Proposal
[Bar chart of median self-ratings by Country Office and region; numbers represent medians; not available for some Country Offices.]

Figure 5: Quality of Detailed Implementation Plan
[Bar chart of median ratings by Country Office and region; numbers represent medians; not available for some Country Offices.]


3.3 Monitoring and Evaluation

3.3a Design of Monitoring and Evaluation Systems

Forty-five percent of all projects have a project monitoring and evaluation plan/design that includes key questions to be addressed by, and preliminary plans for, the baseline, midterm, and final evaluations (see Table 3.3.1). By region, fewer projects (28% or less) in Latin America and South and West Africa had such a document or plan; all projects in Egypt, Ethiopia, South Sudan, and West Bank / Gaza had one.

Thirty-six percent of projects had detailed monitoring and evaluation (M&E) plans that could guide their implementation. More projects in Asia (53%) and East Africa (44%) have detailed M&E plans. In one project in Bolivia, “the plans are normally elaborated by the project implementation personnel, which reflects that their elaboration does not respond to the design policies, nor are they a component that is required in the proposals”.

Box 1: How are we doing DME?

“M&E activities are the product of contingencies and are generated ad hoc.”
(Comment from CARE-Bolivia)

“The evaluation responds to key questions formulated by those involved, not to an evaluation plan written during the design phase.”
(Comment from CARE-Bolivia)

“Quality and accuracy of M&E plans and their implementation vary greatly, from being extremely detailed to almost non-existent. Most monitoring and evaluation activities are very output driven, with little attention for impact measurement.”
(Comment from CARE-Nicaragua)

“Though all projects have M&E plans, the quality of the M&E systems in all projects (is) still poor; project staff have limited knowledge and experience in developing and effectively using M&E plans for tracking project progress, effect, and impact.”
(Comment from CARE-Sudan)

“Emergency relief projects do not have M&E plans or logframes; there is of course monitoring of receipt and distribution of foods.”
(Comment from CARE-Angola)

Questions were asked pertaining to the authorship of M&E plans. Sixty-two percent of projects involved the implementing staff in the development of M&E plans, while between 25% and 34% of projects reported the involvement of someone from the Country Office and/or the main proposal authors. Eighteen percent of monitoring and evaluation plans were written with the involvement of consultants, 10% with the involvement of partners, and 4% with the involvement of someone from CIHQ. In the Asia region, more M&E plans were authored with the involvement of implementing staff (77%), someone from the Country Office (57%), someone from CIHQ (13%), consultants (33%), or partners (20%) than in any other region. In the South and West Africa region, the main proposal authors (43%) authored more M&E plans than in any of the other regions.



Table 3.3.1: Monitoring and Evaluation - Plan and Design

| Country Office and Region | Eval. Design | Detailed M&E Plan | Proposal Author* | Implementing Staff* | Country Office* | CI Member HQ* | Consultant* | Partner* | N |
|---|---|---|---|---|---|---|---|---|---|
| Latin America and the Caribbean | 13 | 9 | 6 | 25 | 17 | 0 | 7 | 1 | 46 |
| Row (%) | 28.3% | 19.6% | 13.0% | 54.3% | 37.0% | 0.0% | 15.2% | 2.2% | 100.0% |
| Column (%) | 15.7% | 13.4% | 12.8% | 21.6% | 27.0% | 0.0% | 20.6% | 5.3% | 24.7% |
| Bolivia | 1 | 3 | 0 | 3 | 0 | 0 | 0 | 0 | 10 |
| Haiti | 7 | 0 | 5 | 5 | 7 | 0 | 2 | 1 | 10 |
| Nicaragua | 5 | 6 | 1 | 7 | 0 | 0 | 5 | 0 | 13 |
| Peru | 0 | 0 | 0 | 10 | 10 | 0 | 0 | 0 | 13 |
| East Africa | 41 | 31 | 17 | 43 | 18 | 1 | 8 | 8 | 70 |
| Row (%) | 58.6% | 44.3% | 24.3% | 61.4% | 25.7% | 1.4% | 11.4% | 11.4% | 100.0% |
| Column (%) | 49.4% | 46.3% | 36.2% | 37.1% | 28.6% | 14.3% | 23.5% | 42.1% | 37.6% |
| Egypt | 4 | 2 | 1 | 4 | 1 | 0 | 1 | 2 | 4 |
| Ethiopia | 9 | 6 | 4 | 7 | 3 | 0 | 2 | 2 | 9 |
| Kenya | 3 | 2 | 3 | 6 | 3 | 0 | 2 | 2 | 11 |
| Rwanda | 2 | 2 | 0 | 0 | 3 | 0 | 0 | 0 | 5 |
| Somalia | 1 | 3 | 2 | 8 | 4 | 0 | 1 | 0 | 9 |
| Sudan (North) | 5 | 7 | 0 | 7 | 1 | 0 | 0 | 0 | 9 |
| Sudan (South) | 9 | 4 | 4 | 6 | 2 | 0 | 0 | 0 | 9 |
| Tanzania | 2 | 2 | 0 | 3 | 1 | 0 | 1 | 0 | 8 |
| West Bank / Gaza | 6 | 3 | 3 | 2 | 0 | 1 | 1 | 2 | 6 |
| South and West Africa | 10 | 11 | 17 | 25 | 11 | 2 | 9 | 4 | 40 |
| Row (%) | 25.0% | 27.5% | 42.5% | 62.5% | 27.5% | 5.0% | 22.5% | 10.0% | 100.0% |
| Column (%) | 12.0% | 16.4% | 36.2% | 21.6% | 17.5% | 28.6% | 26.5% | 21.1% | 21.5% |
| Benin, Ghana, & Togo | 0 | 2 | 1 | 5 | 4 | 0 | 1 | 2 | 5 |
| Malawi | 0 | 1 | 0 | 2 | 0 | 0 | 0 | 0 | 2 |
| Mali | 3 | 2 | 4 | 4 | 3 | 0 | 1 | 0 | 8 |
| Mozambique | 3 | 3 | 2 | 4 | 1 | 0 | 4 | 1 | 6 |
| Niger | 4 | 3 | 6 | 10 | 3 | 2 | 3 | 1 | 13 |
| Sierra Leone | 0 | 0 | 2 | 0 | 0 | 0 | 0 | 0 | 3 |
| Zambia | 0 | 0 | 2 | 0 | 0 | 0 | 0 | 0 | 3 |
| Asia | 19 | 16 | 7 | 23 | 17 | 4 | 10 | 6 | 30 |
| Row (%) | 63.3% | 53.3% | 23.3% | 76.7% | 56.7% | 13.3% | 33.3% | 20.0% | 100.0% |
| Column (%) | 22.9% | 23.9% | 14.9% | 19.8% | 27.0% | 57.1% | 29.4% | 31.6% | 16.1% |
| Afghanistan | 2 | 3 | 3 | 3 | 1 | 0 | 0 | 0 | 4 |
| Bangladesh | 7 | 7 | 2 | 7 | 4 | 2 | 4 | 3 | 9 |
| India | 10 | 6 | 2 | 13 | 12 | 2 | 6 | 3 | 17 |
| Total | 83 | 67 | 47 | 116 | 63 | 7 | 34 | 19 | 186 |
| Row (%) | 44.6% | 36.0% | 25.3% | 62.4% | 33.9% | 3.8% | 18.3% | 10.2% | 100.0% |

* Multiple responses. The Proposal Author through Partner columns indicate who was involved in writing the monitoring and evaluation plans.



3.3b Baseline

Table 3.3.2 presents information pertaining to the baseline studies for the projects. Eighty percent of projects have a baseline of some form or other. Forty-three percent of projects had a quantitative baseline, while in 59% of projects the baseline was reported to be different from the initial diagnosis/needs assessment. In 23% of projects, the baseline studies were conducted before start-up of the project. By region, at least 70% of projects in Asia and South and West Africa had quantitative surveys under their baselines, and at least 75% of projects in Asia and Latin America had baselines different from the initial needs assessment/diagnosis. More projects in East Africa had baselines conducted before project start-up (33%) than in any of the other regions.

More projects collected indicators of effect (72%) than indicators of impact (57%) under their baselines. At least 60% of the projects in Asia and Latin America have baselines that collected indicators of impact, while 75% or more of projects in Asia and Latin America have baselines that collected indicators of effect.

Project data suggest that 57% of projects conducted baseline studies with the same level of rigor as that planned for the final evaluation. At least 70% of projects in Asia and Latin America have methods for baselines as rigorous as those planned for the final evaluation.

Table 3.3.3 presents the results pertaining to outcomes and satisfactory indicators at the impact, effect, and output levels. Sixty-five percent of projects have project staff involved in the collection or processing of outcome-level data; at least 70% of the projects in Asia, East Africa, and South and West Africa do.

When asked about satisfaction with the indicators of impact, effect, and output, the responses suggest that the fewest projects had satisfactory indicators of impact (43%), followed by satisfactory indicators of effect (60%) and of output (73%). At least 47% of projects in Asia, Latin America, and East Africa have satisfactory indicators of impact, while at least 80% of projects in Asia and East Africa have satisfactory indicators of output. For satisfactory indicators of effect, there is not much variation by region.

A cross-tabulation of projects with and without quantitative baselines against the indicators of the projects is shown in Table 3.3.4. Of the 169 projects with baseline information, 47% had baselines that included a quantitative survey, while 62% had baselines that were different from the initial diagnosis/needs assessment.



Table 3.3.2: Pattern and Rigor of Baseline and Indicators Measured

| Country Office and Region | Had a Baseline | Baseline before Startup | Indicators of Impact | Indicators of Effect | Rigor of Baseline | Quantitative Survey under Baseline | Baseline different from Assessment | N |
|---|---|---|---|---|---|---|---|---|
| Latin America and the Caribbean | 40 | 5 | 28 | 35 | 32 | 7 | 37 | 46 |
| Row (%) | 87.0% | 10.9% | 60.9% | 76.1% | 69.6% | 15.2% | 80.4% | 100.0% |
| Column (%) | 26.8% | 11.6% | 26.4% | 26.3% | 30.2% | 8.8% | 33.9% | 24.7% |
| Bolivia | 9 | 1 | 4 | 6 | 6 | 0 | 6 | 10 |
| Haiti | 9 | 3 | 6 | 8 | 6 | 7 | 9 | 10 |
| Nicaragua | 9 | 1 | 5 | 8 | 7 | 0 | 9 | 13 |
| Peru | 13 | 0 | 13 | 13 | 13 | 0 | 13 | 13 |
| East Africa | 53 | 23 | 37 | 46 | 33 | 21 | 23 | 70 |
| Row (%) | 75.7% | 32.9% | 52.9% | 65.7% | 47.1% | 30.0% | 32.9% | 100.0% |
| Column (%) | 35.6% | 53.5% | 34.9% | 34.6% | 31.1% | 26.3% | 21.1% | 37.6% |
| Egypt | 4 | 3 | 1 | 2 | 3 | 3 | 3 | 4 |
| Ethiopia | 7 | 6 | 3 | 7 | 2 | 7 | 1 | 9 |
| Kenya | 9 | 2 | 6 | 5 | 4 | 4 | 1 | 11 |
| Rwanda | 2 | 1 | 2 | 2 | 1 | 0 | 0 | 5 |
| Somalia | 6 | 1 | 5 | 7 | 1 | 0 | 4 | 9 |
| Sudan (North) | 7 | 4 | 5 | 7 | 7 | 5 | 6 | 9 |
| Sudan (South) | 9 | 2 | 8 | 8 | 8 | 0 | 7 | 9 |
| Tanzania | 5 | 1 | 4 | 6 | 3 | 0 | 0 | 8 |
| West Bank / Gaza | 4 | 3 | 3 | 2 | 4 | 2 | 1 | 6 |
| South and West Africa | 30 | 7 | 18 | 27 | 20 | 30 | 26 | 40 |
| Row (%) | 75.0% | 17.5% | 45.0% | 67.5% | 50.0% | 75.0% | 65.0% | 100.0% |
| Column (%) | 20.1% | 16.3% | 17.0% | 20.3% | 18.9% | 37.5% | 23.9% | 21.5% |
| Benin, Ghana, & Togo | 3 | 0 | 2 | 3 | 3 | 3 | 0 | 5 |
| Malawi | 2 | 0 | 2 | 2 | 2 | 2 | 2 | 2 |
| Mali | 7 | 1 | 3 | 7 | 6 | 7 | 8 | 8 |
| Mozambique | 5 | 1 | 4 | 5 | 3 | 5 | 3 | 6 |
| Niger | 9 | 1 | 3 | 6 | 4 | 9 | 9 | 13 |
| Sierra Leone | 2 | 2 | 2 | 2 | 1 | 2 | 2 | 3 |
| Zambia | 2 | 2 | 2 | 2 | 1 | 2 | 2 | 3 |
| Asia | 26 | 8 | 23 | 25 | 21 | 22 | 23 | 30 |
| Row (%) | 86.7% | 26.7% | 76.7% | 83.3% | 70.0% | 73.3% | 76.7% | 100.0% |
| Column (%) | 17.4% | 18.6% | 21.7% | 18.8% | 19.8% | 27.5% | 21.1% | 16.1% |
| Afghanistan | 4 | 2 | 3 | 4 | 3 | 0 | 4 | 4 |
| Bangladesh | 9 | 0 | 9 | 9 | 8 | 9 | 8 | 9 |
| India | 13 | 6 | 11 | 12 | 10 | 13 | 11 | 17 |
| Total | 149 | 43 | 106 | 133 | 106 | 80 | 109 | 186 |
| Row (%) | 80.1% | 23.1% | 57.0% | 71.5% | 57.0% | 43.0% | 58.6% | 100.0% |




Table 3.3.3: Outcomes and Satisfaction with Indicators

| Country Office and Region | Staff measuring outcomes | Satisfactory Indicators: Impact | Effect | Output | N |
|---|---|---|---|---|---|
| Latin America and the Caribbean | 19 | 22 | 27 | 32 | 46 |
| Row (%) | 41.3% | 47.8% | 58.7% | 69.6% | 100.0% |
| Column (%) | 15.7% | 27.5% | 24.3% | 23.5% | 24.7% |
| Bolivia | 1 | 3 | 4 | 4 | 10 |
| Haiti | 8 | 6 | 5 | 6 | 10 |
| Nicaragua | 10 | 3 | 8 | 12 | 13 |
| Peru | 0 | 10 | 10 | 10 | 13 |
| East Africa | 49 | 33 | 46 | 56 | 70 |
| Row (%) | 70.0% | 47.1% | 65.7% | 80.0% | 100.0% |
| Column (%) | 40.5% | 41.3% | 41.4% | 41.2% | 37.6% |
| Egypt | 3 | 2 | 4 | 4 | 4 |
| Ethiopia | 8 | 5 | 7 | 8 | 9 |
| Kenya | 5 | 2 | 3 | 5 | 11 |
| Rwanda | 0 | 1 | 1 | 5 | 5 |
| Somalia | 9 | 7 | 8 | 7 | 9 |
| Sudan (North) | 9 | 8 | 8 | 8 | 9 |
| Sudan (South) | 9 | 5 | 9 | 9 | 9 |
| Tanzania | 4 | 1 | 2 | 4 | 8 |
| West Bank / Gaza | 2 | 2 | 4 | 6 | 6 |
| South and West Africa | 30 | 9 | 22 | 23 | 40 |
| Row (%) | 75.0% | 22.5% | 55.0% | 57.5% | 100.0% |
| Column (%) | 24.8% | 11.3% | 19.8% | 16.9% | 21.5% |
| Benin, Ghana, & Togo | 2 | 2 | 3 | 4 | 5 |
| Malawi | 1 | 0 | 1 | 1 | 2 |
| Mali | 8 | 2 | 5 | 7 | 8 |
| Mozambique | 4 | 1 | 2 | 3 | 6 |
| Niger | 9 | 4 | 9 | 4 | 13 |
| Sierra Leone | 3 | 0 | 1 | 2 | 3 |
| Zambia | 3 | 0 | 1 | 2 | 3 |
| Asia | 23 | 16 | 16 | 25 | 30 |
| Row (%) | 76.7% | 53.3% | 53.3% | 83.3% | 100.0% |
| Column (%) | 19.0% | 20.0% | 14.4% | 18.4% | 16.1% |
| Afghanistan | 4 | 2 | 2 | 4 | 4 |
| Bangladesh | 8 | 7 | 7 | 8 | 9 |
| India | 11 | 7 | 7 | 13 | 17 |
| Total | 121 | 80 | 111 | 136 | 186 |
| Row (%) | 65.1% | 43.0% | 59.7% | 73.1% | 100.0% |



Between 62% and 78% of all projects have baselines that measured indicators of impact or effect. Between 42% and 67% of all projects with baselines had satisfactory indicators of impact, effect, or output. In the same table, projects with baselines were filtered to include only those whose baselines were quantitative surveys. Adding this filter reduced by almost 50% the number of projects that had satisfactory indicators of impact, output, or effect, or that even measured indicators of impact or effect.

Analyzing the data this way reveals a marked association between the use of quantitative surveys in baselines and both the inclusion of, and satisfaction with, indicators of impact, effect, and even output.

Table 3.3.4: Baseline, Source of Data, and Indicators

Frequency (Percent): Baseline (N=169) | Quantitative (N=169)

Source of Data
Included a Quantitative Survey (n=80): 80 (47.3%) | n/a
Baseline different from Assessment (n=107): 105 (62.1%) | n/a

Indicators
Baseline Measured Indicators of Impact (n=106): 105 (62.1%) | 57 (33.7%)
Baseline Measured Indicators of Effect (n=133): 131 (77.5%) | 74 (43.8%)
Satisfactory Indicators at Impact level (n=80): 71 (42.0%) | 37 (21.9%)
Satisfactory Indicators at Effect level (n=111): 95 (56.2%) | 50 (29.6%)
Satisfactory Indicators at Output level (n=136): 113 (66.9%) | 60 (35.5%)
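The filter described in the surrounding text is, in effect, a conditional cross-tabulation. The following is a minimal sketch of that logic in Python/pandas, assuming a project-level table; the column names and the few illustrative rows are hypothetical, not the actual DME CAT variables:

```python
import pandas as pd

# Hypothetical project-level data; column names and values are illustrative only.
projects = pd.DataFrame({
    "has_baseline":              [True, True, True, False],
    "quantitative_survey":       [True, False, True, False],
    "impact_indic_measured":     [True, True, False, False],
    "impact_indic_satisfactory": [True, False, False, False],
})

# Counts among all projects with a baseline (left columns of Table 3.3.4)
with_baseline = projects[projects["has_baseline"]]
print(with_baseline[["impact_indic_measured", "impact_indic_satisfactory"]].sum())

# The 'quantitative' filter: keep only baselines that included a quantitative survey
quantitative = with_baseline[with_baseline["quantitative_survey"]]
print(quantitative[["impact_indic_measured", "impact_indic_satisfactory"]].sum())
```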

The length of time between project startup and baseline for the 28 projects with available information ranged from zero months (an individual project in West Bank/Gaza) to 24 months (an individual project in India). The median duration from startup to baseline by Country Office (Figure 6) ranged from 0.5 months (Sierra Leone and Zambia) to 8 months (Ethiopia).

3.3c Quality of Monitoring and Evaluation

A rating of the quality and utility of the baseline of projects was undertaken. This ranged for 143

projects with available information from a low of 0 (individual project in Nicaragua) to a high of

5 (individual projects in Bolivia, Benin/Ghana/Togo, India, Mali, Nicaragua, and Peru). The

median rating of quality and utility of baselines by Country Offices (Figure 7) ranged from a low

of 2.0 (Niger) to a high of 5.0 (Benin/Ghana/Togo).
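The Country Office medians reported here and plotted in the figures are simple group-wise medians of the project-level ratings. A minimal sketch of the computation in Python/pandas, using hypothetical ratings rather than the actual DME CA data:

```python
import pandas as pd

# Hypothetical project-level ratings (0-5 scale); names and values illustrative only.
ratings = pd.DataFrame({
    "country_office":   ["Niger", "Niger", "Benin/Ghana/Togo", "Peru"],
    "baseline_quality": [2, 2, 5, 5],
})

# Median rating per Country Office, as plotted in Figure 7
print(ratings.groupby("country_office")["baseline_quality"].median())
```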



Figure 6: Months between Baseline and Startup of Project
(Bar chart of medians by Country Office, grouped by region: Latin America, East Africa, South & West Africa, Asia; data not available for several Country Offices. Note: numbers represent the median.)

Figure 7: Quality and Utility of Baseline
(Bar chart of median ratings by Country Office, grouped by region. Note: numbers represent the median.)

The rating of overall quality of monitoring and evaluation plans for 162 projects with available

information ranged from 0 (individual projects in Peru and Rwanda) to a high of 5 (individual

projects in Afghanistan, Bangladesh, and Benin/Ghana/Togo). The median rating of overall

quality of monitoring and evaluation plans by Country Office (Figure 8) ranged from a low of



2.0 (Ethiopia, Rwanda, Sierra Leone, West Bank, and Zambia) to a high of 4.0 (Afghanistan,

Bangladesh, Peru, and North Sudan).

Figure 8: Overall Quality of Monitoring and Evaluation Plans
(Bar chart of median ratings by Country Office, grouped by region: Latin America, East Africa, South & West Africa, Asia. Note: numbers represent the median.)

Respondents were asked to rate the overall quality of the monitoring and evaluation system for

their projects. The overall rating for 165 projects for which information was available ranged

from a low of 0 (individual project in Peru) to a high of 5 (individual projects in Bangladesh and

India). The median rating of the overall quality of the monitoring and evaluation system (Figure

9) ranged from a low of 2.0 (projects in Sierra Leone, West Bank / Gaza, and Zambia) to a high

of 4.0 (projects in Bangladesh, Bolivia, Peru and North Sudan).

When asked to rate the quality of project evaluation reports they had read, the ratings from 68

projects with available information were found to range from a low of 0 (individual project in

West Bank/Gaza) to a high of 5 (individual projects in India and Nicaragua). The median rating

of quality of evaluation reports by Country Office (Figure 10) ranged from a low of 1.5 (Sierra

Leone and Zambia) to a high of 4.0 (India, Nicaragua, Niger, North Sudan, West Bank/Gaza).



Figure 9: Overall Quality of the Monitoring and Evaluation System
(Bar chart of median ratings by Country Office, grouped by region; data not available for some Country Offices. Note: numbers represent the median.)

Figure 10: Quality of Project Evaluation Reports
(Bar chart of median ratings by Country Office, grouped by region; data not available for several Country Offices. Note: numbers represent the median.)


3.3d Tracking Project Participants

Projects use a variety of ways to count and keep track of project participants and beneficiaries (Table 3.3.5). Of the 186 projects surveyed, 63% have disaggregated beneficiaries by gender; at least 65% of projects in Asia, East Africa, and South and West Africa did so. According to project staff, gender disaggregation of participants was achieved largely (63%) through actual registration of participants by gender, fairly consistently across all regions. Fourteen percent or fewer of projects had to rely on ratios drawn from samples, or on estimates, to determine how many girls or women they reached.
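The ratio-based approach mentioned above amounts to applying a sample proportion to a total count. A minimal sketch, with entirely hypothetical numbers (no project reported these particular figures):

```python
# Hypothetical numbers: a sample of participants registered by gender,
# with the sample's female ratio applied to the total participant count.
sample_size = 200          # participants in the gender-registered sample
women_in_sample = 120      # women found in that sample
total_participants = 5000  # total participants counted by the project

female_ratio = women_in_sample / sample_size            # 0.60
estimated_women_reached = round(total_participants * female_ratio)
print(estimated_women_reached)                          # 3000
```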

Participants (regardless of gender) were counted by way of registration (46%) or by keeping records of beneficiaries for each intervention (42%). Fifteen percent of projects relied upon estimates to report project beneficiaries, and 5% used occasional samples to estimate beneficiaries under their projects.

More projects report having registries of project participants in Asia (83%) than in any other

region, while in East Africa and Latin America more projects keep records of beneficiaries by

each intervention (at least 57%) than in the other regions. None of the projects in Asia admit that

they have used estimates or occasional samples to count project participants.

In spite of the large number of projects that say they have systems for keeping track of participants, fewer than a third of these 186 projects reported the actual numbers of beneficiaries reached during the past year. (One hopes that they do better when reporting beneficiaries on the API.)

The Target Direct Beneficiaries to be reached by 63 projects in 10 Country Offices ranged from 800 (an individual project in Ethiopia) to 1,660,800 (one project in India 4 ). The median number of Target Direct Beneficiaries by Country Office ranged from 8,700 (Mali) to 54,000 (Malawi) and is depicted in Figure 11.

Net Direct Beneficiaries reported to have been reached during the past year by 54 projects in 10 Country Offices ranged from 200 (an individual project in Mali) to 1,660,800 (the same project in India). The median number of Net Direct Beneficiaries by Country Office (Figure 12) ranged from 900 (Egypt) to 60,000 (Malawi).

4 This same project in India (INHP) reported 11,737,500 beneficiaries on the API’01.



Table 3.3.5: Project participants - System for Counting Beneficiaries and Gender

Country Office and Region | Gender Disaggregation | Tracking by Gender: Registration*, Ratio*, Estimates* | Tracking project participants: Registry*, Record*, Count*, Estimates* | N

Latin America and the Caribbean 17 29 4 5 15 27 5 7 46

Row (%) 37.0% 63.0% 8.7% 10.9% 32.6% 58.7% 10.9% 15.2% 100.0%

Column (%) 14.5% 24.8% 17.4% 19.2% 17.6% 34.6% 55.6% 25.0% 24.7%

Bolivia 10 9 1 0 5 5 0 0 10

Haiti 7 4 3 4 3 7 3 4 10

Nicaragua 0 11 0 1 7 2 2 3 13

Peru 0 5 0 0 0 13 0 0 13

East Africa 54 40 10 16 18 40 1 16 70

Row (%) 77.1% 57.1% 14.3% 22.9% 25.7% 57.1% 1.4% 22.9% 100.0%

Column (%) 46.2% 34.2% 43.5% 61.5% 21.2% 51.3% 11.1% 57.1% 37.6%

Egypt 4 3 1 0 2 3 0 0 4

Ethiopia 6 5 5 3 3 7 0 3 9

Kenya 8 5 2 1 3 3 1 3 11

Rwanda 3 2 0 0 4 1 0 1 5

Somalia 9 3 1 5 1 3 0 5 9

Sudan (North) 7 6 0 3 1 6 0 2 9

Sudan (South) 6 6 0 3 2 8 0 0 9

Tanzania 5 4 1 1 1 4 0 2 8

West Bank / Gaza 6 6 0 0 1 5 0 0 6

S. and W. Africa 26 29 7 1 27 8 3 5 40

Row (%) 65.0% 72.5% 17.5% 2.5% 67.5% 20.0% 7.5% 12.5% 100.0%

Column (%) 22.2% 24.8% 30.4% 3.8% 31.8% 10.3% 33.3% 17.9% 21.5%

Benin, Ghana, & Togo 2 3 1 0 3 2 1 0 5

Malawi 2 2 0 0 1 1 0 0 2

Mali 8 8 5 0 4 1 1 1 8

Mozambique 3 4 1 1 3 2 1 3 6

Niger 5 6 0 0 12 0 0 1 13

Sierra Leone 3 3 0 0 2 1 0 0 3

Zambia 3 3 0 0 2 1 0 0 3

Asia 20 19 2 4 25 3 0 0 30

Row (%) 66.7% 63.3% 6.7% 13.3% 83.3% 10.0% 0.0% 0.0% 100.0%

Column (%) 17.1% 16.2% 8.7% 15.4% 29.4% 3.8% 0.0% 0.0% 16.1%

Afghanistan 1 2 1 0 3 0 0 0 4

Bangladesh 9 9 0 0 9 0 0 0 9

India 10 8 1 4 13 3 0 0 17

Total 117 117 23 26 85 78 9 28 186

Row (%) 62.9% 62.9% 12.4% 14.0% 45.7% 41.9% 4.8% 15.1% 100.0%

* Multiple Responses



Figure 11: Target Direct Beneficiaries (in Thousands)
(Bar chart of medians by Country Office, grouped by region; data not available for many Country Offices. Note: numbers represent the median.)

Figure 12: Net Direct Beneficiaries reached last year (in Thousands)
(Bar chart of medians by Country Office, grouped by region; data not available for many Country Offices. Note: numbers represent the median.)


Indirect Beneficiaries reportedly reached during the past year by 32 projects in eight Country

Offices ranged from 800 (individual project in Malawi) to 247,500 (individual project in Mali).

The median number of Indirect Beneficiaries for Country Offices (Figure 13) ranged from 2,500

(Malawi) to 58,800 (Mali).

Figure 13: Indirect Beneficiaries reached last year (in Thousands)
(Bar chart of medians by Country Office, grouped by region; data not available for many Country Offices. Note: numbers represent the median.)

3.4 Information Processing and Use

Respondents were asked how information from monitoring and evaluation was processed and used, and which information needs it met (Tables 3.4.1 and 3.4.2). Three of the 186 projects used MER® for collecting and processing data from monitoring and evaluation (two projects in Ethiopia and one project in Afghanistan). One project in Somalia "was using the MER software which it later abandoned due to non-conformity of the intervention logic and data collection requirements with those required by the software". Twenty-four percent of all projects have some other Management Information System (MIS) to collect and process data. More projects have some form of MIS for collecting and processing data in South and West Africa (45%) and Asia (43%) than in any other region. In spite of the low use of MER® or other MIS, 36% of projects reported satisfaction with their existing software for collecting and processing data (without specifying what software they use). More projects say they have satisfactory software programs in Latin America (52%) and Asia (47%) than in any other region. Through analysis of DMECA data, MIS was identified as an area for improvement in projects in Angola, Bolivia, Somalia, Kenya, Nicaragua, Sudan, and South Sudan.



Box 2: The State of MIS

"MIS needs attention." (Comment from CARE-Gulf of Guinea)

"In one of the projects, the information system has made the write-up of reports more efficient and this is one of the main uses for specialized information software." (Comment from CARE-Bolivia)

"Some information systems had to be reconstructed several times, because they did not respond to the project indicators." (Comment from CARE-Bolivia)

"MIS is an area for improvement." (CARE-Sudan)

Fifty percent of all projects have data management systems that contribute to accurate and timely reports; in Asia, East Africa, and Latin America, at least 50% of projects do.

The needs met by reports from the data management systems are shown in Table 3.4.2. At least 65% of projects have systems that generate reports fulfilling the needs of Project Management and Donors. Thirty-three to 44% of projects have data management systems that generated reports fulfilling the needs of Partners, Country Office Senior Management, Field Supervisors, or Participants. Twenty-seven percent of project reports met the needs of Sector Coordinators, and only 11% of projects generated reports that met the needs of CARE International Member Headquarters. Compared to other regions, more projects in Asia and Latin America produce reports that they say met the needs of Project Management (at least 76%), Donors (at least 70%), or Participants (at least 39%). More projects in East Africa generated reports that met the needs of Partners (60%) and Country Office Senior Management (64%) than did projects in the other regions, while more projects in Asia met the needs of Sector Coordinators (60%) and more projects in Latin America met the needs of Field Supervisors (44%) than in any other region. Reports and the information generated were not fully meeting the needs of stakeholders, as reported in the analyses of the DMECA from Bolivia, Sudan, and South Sudan.



Table 3.4.1: Use of Management Information System

Country Office and Region | Use MER | Use MIS | Satisfactory software | Accurate Reports | Timely Reports | N

Latin America and the Caribbean 0 6 24 27 26 46

Row (%) 0.0% 13.0% 52.2% 58.7% 56.5% 100.0%

Column (%) 0.0% 13.3% 36.4% 27.3% 28.0% 24.7%

Bolivia 0 0 4 1 1 10

Haiti 0 6 4 4 3 10

Nicaragua 0 0 6 9 9 13

Peru 0 0 10 13 13 13

East Africa 2 8 14 37 37 70

Row (%) 2.9% 11.4% 20.0% 52.9% 52.9% 100.0%

Column (%) 66.7% 17.8% 21.2% 37.4% 39.8% 37.6%

Egypt 0 4 2 4 4 4

Ethiopia 2 4 2 6 6 9

Kenya 0 0 0 1 1 11

Rwanda 0 0 0 3 3 5

Somalia 0 0 4 5 5 9

Sudan (North) 0 0 1 5 5 9

Sudan (South) 0 0 4 9 9 9

Tanzania 0 0 1 1 1 8

West Bank / Gaza 0 0 0 3 3 6

S. and W. Africa 0 18 14 15 13 40

Row (%) 0.0% 45.0% 35.0% 37.5% 32.5% 100.0%

Column (%) 0.0% 40.0% 21.2% 15.2% 14.0% 21.5%

Benin, Ghana, & Togo 0 0 0 2 2 5

Malawi 0 2 2 1 0 2

Mali 0 7 2 3 3 8

Mozambique 0 0 2 2 2 6

Niger 0 9 8 7 6 13

Sierra Leone 0 0 0 0 0 3

Zambia 0 0 0 0 0 3

Asia 1 13 14 20 17 30

Row (%) 3.3% 43.3% 46.7% 66.7% 56.7% 100.0%

Column (%) 33.3% 28.9% 21.2% 20.2% 18.3% 16.1%

Afghanistan 1 1 3 2 3 4

Bangladesh 0 9 8 9 7 9

India 0 3 3 9 7 17

Total 3 45 66 99 93 186

Row (%) 1.6% 24.2% 35.5% 53.2% 50.0% 100.0%



Table 3.4.2: Project Reports and the needs they fulfill

Project reports fulfilling needs of:
Country Office and Region | Field Supervisors* | Project Management* | Sector Coordinator* | CO Sr. Mgmt* | CI Member HQ* | Donors* | Partners* | Participants* | Others* | N

Latin America and the Caribbean 20 35 3 2 0 34 17 18 0 46

Row (%) 43.5% 76.1% 6.5% 4.3% 0.0% 73.9% 37.0% 39.1% 0.0% 100.0%

Column (%) 31.3% 27.6% 5.9% 2.7% 0.0% 27.9% 21.0% 29.0% 0.0% 24.7%

Bolivia 2 3 0 0 0 3 2 1 0 10

Haiti 5 7 3 2 0 6 0 0 0 10

Nicaragua 0 12 0 0 0 12 2 4 0 13

Peru 13 13 0 0 0 13 13 13 0 13

East Africa 24 44 16 45 5 44 42 18 15 70

Row (%) 34.3% 62.9% 22.9% 64.3% 7.1% 62.9% 60.0% 25.7% 21.4% 100.0%

Column (%) 37.5% 34.6% 31.4% 60.8% 23.8% 36.1% 51.9% 29.0% 57.7% 37.6%

Egypt 4 4 2 4 0 4 4 4 0 4

Ethiopia 6 6 5 6 2 6 6 3 3 9

Kenya 3 3 0 4 0 3 3 3 0 11

Rwanda 0 4 0 4 0 4 3 2 0 5

Somalia 0 5 0 6 0 5 4 0 0 9

Sudan (North) 9 9 9 9 0 9 6 1 9 9

Sudan (South) 0 9 0 9 0 9 9 0 0 9

Tanzania 0 1 0 0 0 1 4 2 0 8

West Bank / Gaza 2 3 0 3 3 3 3 3 3 6

S. and W. Africa 12 24 14 16 7 23 13 12 6 40

Row (%) 30.0% 60.0% 35.0% 40.0% 17.5% 57.5% 32.5% 30.0% 15.0% 100.0%

Column (%) 18.8% 18.9% 27.5% 21.6% 33.3% 18.9% 16.0% 19.4% 23.1% 21.5%

Benin, Ghana, & Togo 3 3 0 2 0 2 2 2 0 5

Malawi 0 0 0 0 0 0 0 0 0 2

Mali 0 5 4 3 0 4 3 3 0 8

Mozambique 2 2 0 2 0 4 2 1 1 6

Niger 5 10 8 7 5 9 6 4 5 13

Sierra Leone 1 2 1 1 1 2 0 1 0 3

Zambia 1 2 1 1 1 2 0 1 0 3

Asia 8 24 18 11 9 21 9 14 5 30

Row (%) 26.7% 80.0% 60.0% 36.7% 30.0% 70.0% 30.0% 46.7% 16.7% 100.0%

Column (%) 12.5% 18.9% 35.3% 14.9% 42.9% 17.2% 11.1% 22.6% 19.2% 16.1%

Afghanistan 0 4 0 4 0 4 0 2 0 4

Bangladesh 0 9 9 4 7 9 3 8 4 9

India 8 11 9 3 2 8 6 4 1 17

Total 64 127 51 74 21 122 81 62 26 186

Row (%) 34.4% 68.3% 27.4% 39.8% 11.3% 65.6% 43.5% 33.3% 14.0% 100.0%

* Multiple Responses


3.5 Training and Use of DME skills

During the past two years, 38% of projects had adequate formal training of project staff in DME

(Table 3.5.1) and 65% of projects have plans for future training in DME. Compared to other

regions, more projects in Latin America had adequate formal training of project staff in DME

during the past two years (57%), while at least 67% of projects in Latin America, East Africa, and

South and West Africa have plans for future training in DME.

Thirty-one percent of respondents have read evaluation reports on a previous phase of the project

or of other similar projects in their country. There was no information available from the Latin

America region for this section. 5

The rating of the overall adequacy of DME training currently provided by CARE, for the 156 projects with available information, ranged from a low of 0 (individual projects in India) to a high of 5 (individual projects in Haiti, Malawi, and North Sudan). The median rating of the adequacy of DME training by Country Office (Figure 14) ranged from a low of 1.0 (India, Mali, Sierra Leone, West Bank/Gaza, and Zambia) to a high of 4.0 (Haiti, Malawi, and North Sudan).

Figure 14: Adequacy of Design, Monitoring, and Evaluation Training
(Bar chart of median ratings by Country Office, grouped by region. Note: numbers represent the median.)

5 This question did not appear on the earlier version of the DME CAT that was translated into Spanish.



Table 3.5.1: Training, DME Skills, and Capacity Assessment

Country Office and Region | Adequate Training | Future Training | Read Eval. Report | Read IEI Checklist | View PowerPoint | N

Latin America and the Caribbean 26 33 0 0 0 46

Row (%) 56.5% 71.7% 0.0% 0.0% 0.0% 100.0%

Column (%) 36.6% 27.5% 0.0% 0.0% 0.0% 24.7%

Bolivia 4 2 0 0 0 10

Haiti 0 10 0 0 0 10

Nicaragua 9 8 0 0 0 13

Peru 13 13 0 0 0 13

East Africa 32 47 14 23 11 70

Row (%) 45.7% 67.1% 20.0% 32.9% 15.7% 100.0%

Column (%) 45.1% 39.2% 24.1% 31.9% 17.5% 37.6%

Egypt 2 4 3 4 0 4

Ethiopia 8 4 7 4 5 9

Kenya 0 4 0 9 0 11

Rwanda 1 2 0 0 0 5

Somalia 4 9 0 0 0 9

Sudan (North) 9 9 0 0 0 9

Sudan (South) 8 5 0 0 0 9

Tanzania 0 7 0 0 0 8

West Bank / Gaza 0 3 4 6 6 6

S. and W. Africa 4 28 24 32 26 40

Row (%) 10.0% 70.0% 60.0% 80.0% 65.0% 100.0%

Column (%) 5.6% 23.3% 41.4% 44.4% 41.3% 21.5%

Benin, Ghana, & Togo 0 5 0 4 0 5

Malawi 2 2 0 2 1 2

Mali 2 4 7 8 8 8

Mozambique 0 4 0 3 0 6

Niger 0 13 13 13 13 13

Sierra Leone 0 0 2 1 2 3

Zambia 0 0 2 1 2 3

Asia 9 12 20 17 26 30

Row (%) 30.0% 40.0% 66.7% 56.7% 86.7% 100.0%

Column (%) 12.7% 10.0% 34.5% 23.6% 41.3% 16.1%

Afghanistan 0 2 0 0 0 4

Bangladesh 8 7 9 0 9 9

India 1 3 11 17 17 17

Total 71 120 58 72 63 186

Row (%) 38.2% 64.5% 31.2% 38.7% 33.9% 100.0%



3.6 DME CA Process

Thirty-nine percent of projects reported that respondents in the DME CA process either read the

IEI Checklist or viewed the PowerPoint © presentation on the overview of principles of DME (see

Table 3.5.1). No information on the DME CA process was available from the Latin America

region. 6

6 These questions were added to the DME CAT after it had been translated into Spanish.

The time to complete the DME CA process ranged from 0.3 hours to 48 hours for the 106 projects for which information was available. Ninety-six projects provided information on the number of staff who filled out the individual assessment of DME Skills / Needs; the number ranged from one to 50 staff per project.

Many Country Offices (e.g. Angola, Bolivia, Gulf of Guinea, Kenya, Somalia, South Sudan, and Sudan) used the DMECA analysis to reflect upon the state of their DME and to plan specific steps to improve DME in their Country Offices. These include plans for a DME manual in Kenya, reassigning key DME roles and responsibilities among country staff, and identifying specific areas for improvement in Angola, Bolivia, Gulf of Guinea, Kenya, and Sudan.

4. Discussion and Conclusions

The above results provide rich data. In putting it all together, what is strikingly evident is the discord in the process from Design to Monitoring to Evaluation. The leap of faith from the 'D' to the 'M&E' of projects arises because, on the one hand, there are rich technical inputs at the proposal development stage through the Sector Coordinators and Project Managers; on the other hand, monitoring and evaluation plans exist for only 45% of projects. There is little continuity from the Design phase (only 25% of projects had the involvement of the main proposal authors), and M&E plans are in many cases (over 60%) developed by Implementing Staff. Another source of discontinuity is the divided responsibility for DME, since this is "one among many tasks … (leading to) a sensation of having to perform M&E" (comment from CARE-Bolivia). The lack of continuity also leads to ad hoc monitoring and evaluation systems that "respond to key questions formulated by those involved, not to an evaluation plan written during the design phase" (comment from CARE-Bolivia). Though 65% of projects said their staff were measuring outcomes, one wonders if they are actually involved in evaluating the loftier goals of the projects; involvement of staff in measuring outcomes included maintaining weekly records of informal interviews and observations that fed into reports, and disseminating reports to participants. An added layer of complexity is the long duration from initial concept to the completion of a proposal, which in many cases has taken a year or more. When viewed across regions, it does appear that countries in the Latin America and East Africa regions have further to go in bridging the gap between the 'D' and 'M&E'.

Twenty percent of the projects reportedly had their origins in a prior phase or pilot. This low share suggests a possible discontinuity in learning on account of shifting donor / country priorities.

Another possible area of weakness is that only 36% of project proposals were reviewed against clear guidelines for the sound review of proposals. The process of review also appears to be ad hoc across regions, without any sound system for a rigorous review of proposals to strengthen the

various elements within them. In the Latin America and East Africa regions there were more

instances of clear and helpful proposal review guidelines. These guidelines and processes could usefully be reviewed, compiled, and shared with other Country Offices. Some Country Offices,

including Kenya, have already embarked on a process leading to the development of their own

guidelines for DME.

Amidst all this, it is encouraging to note that concept papers as precursors to full proposals are now the norm (67% of projects), regardless of the source of the initial concept for the project. Country Offices are becoming more rigorous in the design of proposals, increasingly resorting to logframes (84% of projects) and turning out SMART goals (65% of projects). The development of good logframes requires attention to the clarity of the logical link in the continuum from inputs to impact. This appears to be a reality in some projects across the CARE regions, with project staff feeling confident that this logical linkage is clear. There are still others, including a project in Gulf of Guinea, that have identified this logical linkage as an area for improvement. Projects are also increasingly attuned to bringing about lasting changes at the household level (73% of projects). There appears to be a move toward projects that perform not in a vacuum but as part of a larger, well-knit program strategy: 75% of project goals reportedly contribute to higher program goals. Being part of a larger program strategy, and having sound projects in countries across the regions, will go a long way toward building on the successes and lessons of projects, and may to an extent mitigate the discontinuity in learning caused by the scarcity of prior phases and pilots.

The cross-tabulation of projects with full HLS assessments against goals, indicators, and M&E systems (Table 3.2.2) did show fewer projects with full HLS assessments having some form of evaluation plan, or even detailed M&E plans. The variation may appear muted on account of the small samples. This suggests at least a weak association between the conduct of full HLS assessments and the use of logframes, the positioning of goals and indicators, and plans for their monitoring and evaluation. At times, Country Offices such as South Sudan report that "the scope of such assessments is definitely determined by the funds available and in our case they have been inadequate. Another limiting factor has been the short term and emergency nature of some of our projects." From the analysis in Bolivia, we learn that "in none of the cases has the holistic focus on HLS been applied for the situational analysis. The deficiency clearly shows the marked sector orientation of the projects … and there is a certain tendency towards the continuity of specialized sector focuses".

The sound use of logframes can be furthered through planned and meticulous initial diagnosis and needs assessment. The cross-tabulation of logframes with sources of initial concept, assessment, goals, indicators, and M&E plans (Table 3.2.3) showed more projects with logframes having intermediate goals at the effect level, satisfactory indicators of impact, or detailed M&E plans. Among projects with logframes, however, fewer projects have some form of evaluation plan or were sourced from Country Offices.

The results on the full HLS Assessment and logframes discussed above suggest that the process

of rigorous pre-project assessment and the use of logframes with clear logical linkages is an

increasing reality in the CARE world. This needs to be matched with rigor in the pre-project



assessment and in the development of monitoring and evaluation plans and the details therein to

measure and track indicators contained in the proposal.

Many of the projects are striving to achieve household-level impact and to contribute to higher program goals, with stated SMART goals and satisfactory indicators at various levels. Measuring and achieving this is difficult when so few projects have quantitative baselines: in spite of 80% of projects having a baseline of some form or other, only 47% of the baselines used a quantitative survey. On applying this 'quantitative' filter to the indicators measured during the baseline and to satisfaction with indicators, there is a sharp, almost fifty percent, reduction in the number of projects that had baselines measuring impact or effect, or that had satisfactory indicators at various levels (a quick check of this arithmetic follows this paragraph). What is surprising is that, according to these data, the indicators were poorer in the projects that had quantitative baselines. In a world driven by numbers, and with donors driven by the measurement of impact, the absence of good baselines needs to be reversed. Projects with quantitative baselines need further strengthening, especially in the Latin America and East Africa regions. These were also the very regions that had good guidelines for the review of proposals, implying a possible disconnect between good design and sound M&E systems.
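The "almost fifty percent" reduction can be checked directly against the counts in Table 3.3.4. A short sketch of the arithmetic, using the figures from that table:

```python
# Counts from Table 3.3.4: (all projects with a baseline,
# only those whose baseline included a quantitative survey).
rows = {
    "Measured indicators of impact":    (105, 57),
    "Measured indicators of effect":    (131, 74),
    "Satisfactory indicators (impact)": (71, 37),
    "Satisfactory indicators (effect)": (95, 50),
    "Satisfactory indicators (output)": (113, 60),
}
for label, (all_baselines, quantitative_only) in rows.items():
    reduction = 1 - quantitative_only / all_baselines
    print(f"{label}: {reduction:.0%} reduction")
# Each count drops by roughly 44-48%, i.e. 'almost fifty percent'.
```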

Sound results are also built on systems with the wherewithal to track and count project beneficiaries. Sixty-three percent of projects reportedly registered beneficiaries by gender; however, far fewer projects (46%) were able to register total beneficiaries. This may imply varying interpretations of what constitutes a system for registering beneficiaries. The establishment of sound systems to track and count participants is, of course, subject to varying local conditions, the number of participants, and the geographical spread and complexity of individual projects.

Existing data management systems need to be better understood. The results suggest that 25% of projects used some form of MIS (including three projects using MER®), yet 36% of projects reportedly had "satisfactory software" for data management (without specifying what kind). In the rush to automate, attention to detail, and with it the systematization of the logical link between proposal, design, and monitoring and evaluation, may be overlooked. This was articulated in the example from Somalia, where the use of MER had to be withdrawn "due to non-conformity of the intervention logic and data collection requirements with those required by the software". At least 65% of projects have data management systems that reportedly were fulfilling the reporting needs of project management and donors. A comment from the Nicaragua Country Office suggests that "there is little information sharing between projects … even within the mission hierarchy. Information flow is ill defined and very much driven by the information needs of donors and headquarters." Far fewer projects (27% - 40%) were fulfilling the management needs of Country Office Senior Management, Sector Coordinators, or Field Supervisors. The narrative reports based on the analysis of the DMECA by Country Offices (e.g. from Angola, Bolivia, Gulf of Guinea, Nicaragua, South Sudan, and Sudan) seem to suggest that very few projects have a good system for preparing reports that meet the needs of the projects' stakeholders. If this truly is the case, then the building blocks of data at the field level need to be strengthened. Apparently, data are not being analyzed at the point of collection but are being pushed up the monitoring ladder for compilation and reporting.

The self-ratings and the measures of quality of many items, such as the proposal, DIP, monitoring plans and systems, baseline, and adequacy of training, do not appear to be borne out by the many results that the analysis of the data has brought up. These ratings and quality measures may be clouded by socially desirable responses or other subjectivity, and the results on these measures need to be viewed with caution and interpreted with skepticism.

Finally, the DME CA was envisioned as a process to aid reflection and to gather lessons from the successes and weaknesses of project design, monitoring, and evaluation. The objectives of this exercise also included a refresher on the principles of DME, including reading the Impact Evaluation Checklist (within the CARE Impact Guidelines) as part of a larger structured group exercise. The data suggest that in many projects this exercise may have been short-circuited, probably by pressing project needs. At the same time, many Country Offices (as evidenced by the narrative summaries from 12 Country Offices) have used the results from the DMECA to undertake a review, reflection, and analysis of their approach to DME. They have used SWOT analysis to identify specific areas for improvement, reassign roles and responsibilities in DME, and even develop a DME manual (as in Kenya) as they march onwards to "realistic and adequate, but not necessarily perfect, systems to do M&E activities" (comment from CARE-Kenya). Consequently, this 'evaluation as intervention' may have partially achieved its potential to raise consciousness of, and knowledge about, DME. 7

These are the results that emerge from the analysis of project-level information from 186 projects in 23 countries and the narrative summaries from 12 Country Offices. The results would be more complete with information from the remaining projects and the inclusion of more of the narrative reports. The DME CA is a welcome attempt to further the understanding of DME. Through measuring and reflecting on skills and experiences in DME, in CARE and in the larger world, project quality and effectiveness are bound to increase. As Stan Foster has rightly said, "You get what you inspect … not what you expect".

7 There were notable exceptions, including CARE Bangladesh, which reported that "this wasn't just an assessment, it was capacity building in DME!" (as reported to the Asia DME Workshop).



5. Recommendations

The analysis of the DME CA and this report would be incomplete without the recommendations that the findings suggest. Briefly, these are as follows:

• Encourage continuity from the ‘D’ (proposal) to ‘M&E’ (detail of M&E plan and

functionality of M&E system)

• Systematize before automating the M&E system

• Institutionalize the process of DME through Country Office teams / individuals

• Encourage the adoption of full HLS or other forms of formal holistic diagnosis

• Further strengthen the use of rigorous logic models (logframes), clear logical linkage, and

SMARTer goals

• Systematize the process of review of all elements of proposals with clear and helpful

guidelines for proposal review

• Encourage the use of quantitative surveys as one of the essential elements of baseline

studies

• Strengthen the process of tracking participants and beneficiaries

• Undertake a review of DME-IS systems in CARE projects; systematize and strengthen

before computerization

• Refocus reports to balance the reporting requirement with the immediacy of management

• Share project proposals and key reports among countries to learn from each other’s

experiences and successes. This may be facilitated through the Internet.

• DME training plans need to address staff needs and build up capacity

• Revise and standardize the DME CAT and repeat periodically to refresh DME skills and

measure progress from this baseline

• Further study of the results from this round of DME CAs through the inclusion of:

o Sector-level information on projects

o Beneficiaries’ numbers

o Duration of project

o Information on remaining projects

o Further analysis of the narrative reports
