
Integrating Evaluative Capacity into Organizational Practice

A Guide for Nonprofit and Philanthropic Organizations and Their Stakeholders

Anita Baker, Beth Bruner
The Bruner Foundation, 2012
Bruner Foundation Effectiveness Initiatives

[Cover graphic: the evaluation cycle (Ask Questions, Gather Data, Analyze Data, Use Findings) surrounded by organizational areas: Mission, Strategy, Finance, Fund Raising, Evaluation, Program Development, Marketing and Communications, Technology, Staff Development, Human Resources, Collaboration]




Acknowledgments

This guidebook was developed by Anita Baker and Beth Bruner based on 16 years of collaboration targeted at helping organizations build evaluative capacity—the capacity to do evaluation, to commission evaluation, and to think evaluatively.

The authors' efforts to increase knowledge and use of participatory program evaluation among nonprofit service providers and grantmakers began in 1996 with a pilot initiative, the Rochester Effectiveness Partnership (REP). Evaluations of each phase of the initiative provided clear evidence that through REP, participants learned about, could do, and could commission better evaluations of their programs. As the initiative evolved, alumni study groups were created to help participants sustain their evaluation capacity over time. Additions were made to the curriculum to address how evaluation skills could extend or "ripple" beyond participants into other programs, departments, or organizations. At the conclusion of the REP in 2003, Baker, Bruner, and a number of the former REP partners were curious about how increased evaluation skills might be useful beyond the development, implementation, and improvement of programs. In other words, could evaluation play a role in human resources or technology acquisition or marketing? A group of 12 organizations spent one year (2003–2004) thinking about transferring evaluation skills into multiple organizational areas through the use of what they defined as Evaluative Thinking. The Evaluative Thinking in Organizations Study (ETHOS) documents this effort.

The Hartford Foundation for Public Giving's Building Evaluation Capacity (BEC) began in 2006 and continues to this day. BEC builds upon the REP work, is also delivered by Anita Baker, and contributes to the work of integrating increased evaluation skills and organizational evaluative practice.

It is the belief of the authors and our many partners over time that building a culture of evaluation—one in which evaluation skills are firmly embedded and used for development, improvement, and decision making about programs, strategies, and initiatives—is fundamentally important in creating a learning organization, a data-driven organization, an organization sensitive to making informed decisions in a timely way. We also believe that evaluation skills are highly transferrable beyond evaluating an organization's programs, strategies, and initiatives. Evaluative Thinking is just that—a way of thinking, a process of acquiring and using relevant data in any aspect of an organization's work.

That is what this guidebook is about—Evaluative Capacity—the integrating of evaluation skills and evaluative thinking into everyday organizational practice to ensure not only stronger programs but also stronger, more effective organizations better able to deliver on their missions.

The contents of this guidebook were influenced by the former REP partners, the former and current BEC partners, as well as the work of Michael Quinn Patton, Paul Connolly, Peter York, Paul Light, Hallie Preskill, the Grantmakers for Effective Organizations (GEO), Mark Kramer, and others who regularly share their insights through American Evaluation Association and GEO blogs and publications.

A special thanks to our reviewers whose suggestions helped to frame this guidebook: independent evaluators Victoria Dougherty and Jodi Paroff; grantmakers Amy Studwell from the Hartford Foundation for Public Giving and Rebecca Donham from the MetroWest Health Foundation; and our nonprofit sector representatives, Executive Directors Tracy Hobson from the Center for Anti-Violence Education (CAE) in Brooklyn, New York, Ann Marie Cook from Lifespan of Greater Rochester, Inc., and Allison Overseth, Partnership for After School Education (PASE), New York City.


Foreword

Preliminary work on the Rochester Effectiveness Partnership (REP) began in 1995. Between 1996 and 2003, over 150 people in Rochester, NY, participated directly in the initiative, whose outcome was to build individual and organizational evaluation capacity using a participatory model. A second interactive initiative, the Evaluative Thinking in Organizations Study (ETHOS), was conducted in Rochester throughout 2004 with the purpose of understanding more about the relationship between increased evaluation capacity and the use of evaluative thinking—thinking grounded in evaluation skills but focused beyond the program level.

Three publications were developed from this work. All are available for download on the Bruner Foundation website, http://www.brunerfoundation.org/. Those who want a hard copy can make a request through the website as well.

Participatory Evaluation Essentials: A Guide for Nonprofit Organizations and their Evaluation Partners. Published in 2004 and updated in 2010, this guidebook provides a detailed curriculum for building participatory evaluation capacity. An automated version of the guidebook is available in PowerPoint format.

Participatory Evaluation Essentials: A Guide for Funders and their Evaluation Partners. Published in 2004, this guidebook provides a detailed curriculum for use by funders. The funder materials were updated and enhanced as a series of PowerPoint guides.

Evaluation Capacity and Evaluative Thinking in Organizations. Published in 2006, this monograph includes the history behind the development of the Evaluative Thinking Assessment tool as well as examples of its practical use in an organization. An example report of Evaluative Thinking Assessment results is also available.

This publication, Integrating Evaluative Capacity into Organizational Practice, was developed in response to the continuing need expressed by nonprofit trainees to further assess and operationalize evaluative thinking. It extends information first provided in 2006 in a series of short, electronic articles called Evaluative Thinking Bulletins. The guidebook is intended to answer the following questions:
• What is evaluative thinking, why is it important, and how can it be assessed?
• What is the relationship between evaluation and evaluative thinking?
• How can organizations use evaluative thinking in multiple areas of their work?
• What evaluation skills are critical to evaluative thinking, and how can organizations enhance evaluation capacity?
• How can organization board members enhance and learn from evaluation?
• How is evaluative information packaged and distributed?
• How can an organization sustain its evaluative capacity?

These materials are for the benefit of any 501(c)(3) organization and may be used in whole or in part provided that credit be given to the Bruner Foundation. They MAY NOT be sold or redistributed in whole or in part for a profit.


Contents

1. Evaluation and Evaluative Thinking
   Key Definitions
   Organizational Effectiveness, Evaluative Capacity, and Evaluative Thinking
   Indicators of Evaluation in Multiple Organizational Areas
   Why and How to Assess Evaluative Thinking in Your Organization
   Evaluative Capacity and Organizational Leaders

2. Evaluative Thinking at Work
   Evaluation and Technology
   When Does an Organization Need Technology Help to Support Evaluation and Evaluative Thinking?
   Using Your Existing Technology—What Else Is Out There?
   Evaluative Thinking, Staff Development, and HR
   Evaluative Thinking and Collaboration
   Using Evaluative Thinking to Improve Communications and Marketing

3. Evaluation Essentials
   Evaluation Planning
   Evaluation Strategy Clarifications
   Purposes of Evaluation
   The Importance of Evaluation Stakeholders
   Evaluation Design
   Evaluation Questions Get You Started
   Criteria of Good Evaluation Questions
   Additional Evaluation Design Considerations
   Data Collection Overview
   What Happens After Data Are Collected?
   Evaluation Reports
   Commissioning Evaluation

4. Building Evaluation Capacity
   Data Collection Details
   Tips for Improving Data Collection
   Data Analysis Basics
   Things to Avoid When Reporting the Results of Analyses

5. Board Members, Evaluation, and Evaluative Thinking
   Involvement of Board Members in Evaluation
   Working with Your Board to Define Program Outcomes, Indicators, and Targets
   Board Involvement in Securing Evaluation Consultants
   What About Governance and Evaluative Thinking?

6. Using Findings
   Uses and Packaging for Evaluation Findings
   Evaluation Reports Revisited
   Components of a Strong Evaluation Report
   Preparing Findings to Share
   What Should Be Done If Evaluation Results Are Negative?
   What Should Be Done If Evaluation Results Are Inaccurate or Inconclusive?
   What Should Be Done If Evaluation Results Are Positive?

7. Sustaining Evaluative Capacity and Evaluative Thinking
   Extending and Sustaining Evaluative Capacity and Evaluative Thinking
   Thinking About Evaluation-Related Policies
   Organizational Evaluation Plan Development
   Getting Started With an Evaluation Inventory
   Next Steps: Increasing and Integrating Evaluative Capacity


Appendices

1. Useful Evaluation Terms
2. Evaluative Thinking Assessment
3. History of Evaluation
4. Different Evaluation Purposes Require Different Evaluation Questions
5. Distinguishing Between Evaluation and Research
6. Making Data Collection Decisions
7. Evaluation Report Outline
8. Commissioning Evaluation
9. Getting Funding for Evaluation
10a. Notes on Sampling and Representativeness
10b. How Big Should the Sample Be?


1. Evaluation* and Evaluative Thinking

Key Definitions

Evaluation is a process that applies systematic inquiry to program management, improvement, and decision making. Evaluation is also used to assess the status or progress of a strategy (i.e., a group of meaningfully connected programs, not just the simple aggregation of multiple programs) or an initiative (a grouping of strategies).

*Note that evaluation in the context of this guidebook is not meant to include experimental design or research processes such as randomized controlled trials or multi-year longitudinal research projects.

Evaluation Capacity is the ability of staff and their organizations to do evaluation. Because evaluation is systematic, it involves proficiency in a particular set of skills:
• Asking questions of substance
• Determining what data are required to answer specific questions
• Collecting data using appropriate strategies
• Analyzing collected data and summarizing findings
• Using the findings

Evaluation is an active, dynamic process. It is the ability to do evaluation—to assess what is taking place with a program, a strategy, or an initiative—that allows an organization to use data to inform its decision making. For evaluation to be relevant, to reach its potential, it must respond to important questions, be timely, and provide a springboard for action. Evaluation is what grounds learning, continuous improvement, and data-driven decision making.

Organizations with evaluation capacity are those with staff who are able:
• To do, commission, or contribute to (such as through participatory evaluation) meaningful and timely evaluations
• To understand information collected through evaluation
• To use information from an evaluation to make decisions

Evaluation capacity used well leads to programs, strategies, and initiatives that allow organizations to better deliver on their missions and to better meet the needs of those they serve. But because of its focus on service delivery, it does not necessarily lead to more effective organizations.

Evaluative Thinking is a type of reflective practice that uses the same five evaluation skills listed above in areas other than programs, strategies, and initiatives. It is an approach that fully integrates systematic questioning, data, and action into an organization's work practices. It can contribute to improved learning and decision making.

Evaluative thinking is grounded in the same approach as evaluation: better planning and decision making happen when they are based on systematically collected data. (For example, an organization that applies evaluative thinking to HR strategies determines needs and existing levels of competence before procuring training, assesses both the delivery of the training and the use of materials that were introduced, and then completes the process by using the information to plan future trainings.)

Evaluative Capacity is the combination of evaluation skills and evaluative thinking. It requires a commitment to doing and using evaluation in programs, strategies, and initiatives as well as a commitment to using those same skills in other aspects of organizational work.**

**This guidebook provides examples of and strategies for increasing evaluative thinking in the following areas: mission development, strategy planning, governance, finance, leadership, fund development/fund raising, evaluation, program development, client relationships, communication and marketing, technology acquisition and planning, staff development, human resources, business venture development, alliances and collaborations.

Organizations with high evaluative capacity:
• Use the five evaluation skills in programmatic AND organizational work
• Frame questions which need to be asked and can be answered with data
• Have staff who have and hone their evaluation skills
• Have staff who use evaluation skills to collect and analyze data
• Have a clear plan of what to do with the information they collect
• Actually use data for making decisions
• Act in a timely way

Organizational Effectiveness, Evaluative Capacity, and Evaluative Thinking

Organizational effectiveness has been operationalized by the Grantmakers for Effective Organizations (GEO) (www.geofunders.org). They state that it is the ability of an organization to fulfill its mission through a blend of sound management, strong governance, and a persistent rededication to achieving results.

For an organization to be able to determine the soundness of its management, the strength of its governance, and its capacity/status regarding achievement of results, that organization must have evaluative capacity: both evaluation skills and the ability to transfer those skills to organizational competencies. Evaluative capacity does not automatically bring about organizational effectiveness. But without evaluative capacity, it is difficult for an organization to understand or assess its level of effectiveness or to determine when strategies designed to enhance organizational effectiveness are in fact doing so.

The remainder of this guidebook clarifies ways that organizations can develop and strengthen their evaluative capacity by identifying and enhancing evaluative thinking and by building additional evaluation capacity.

Indicators of Evaluation in Multiple Organizational Areas

Through intensive work with leaders and staff from nonprofit organizations in Rochester, New York, the Bruner Foundation identified 15 organizational areas where evaluative thinking is particularly important. Multiple indicators for each area are listed in the following charts, and details about how to achieve them are addressed throughout the rest of this guidebook.

The charts present multiple indicators within each organizational capacity area. The list is not exhaustive, nor does the absence of an individual indicator imply that evaluative thinking is absent or even substantially reduced at the organization. Rather, this is a starting list of indicators, shown to promote reflection about evaluative thinking in a systematic way. This set of indicators was identified through work with multiple organizations and has been changed, upgraded, and enhanced over time. Users of this guidebook are encouraged to think about these and other ways that evaluative thinking is demonstrated (or absent) in their own organizations.


Indicators of Evaluative Thinking

Organization Mission
Organizations that use Evaluative Thinking will:
• Develop mission statements specific enough to provide a basis for goals and objectives.
• Review and revise the mission statement on a scheduled basis (e.g., annually) with input from key stakeholders as appropriate.
• Regularly assess compatibility between programs and mission.
• Act on findings of compatibility assessments (in other words, if a program is not compatible with the mission, it is changed or discontinued).

Strategic Planning
Organizations that use Evaluative Thinking will:
• Have a formal process for strategic planning.
• Obtain input from key stakeholders (staff, board, community, and clients) about strategic direction where appropriate, using evaluative strategies such as interviews and surveys.
• Assess activities related to the strategic process at least annually and involve key stakeholders (staff, board, community, and clients) in assessment where appropriate.
• Use strategic plans to inform decision making.

Leadership
Organizational leaders who use Evaluative Thinking will:
• Support and value evaluation and evaluative thinking. Use evaluation findings in decision making.
• Include attention to evaluation as an important part of a succession plan. New leaders will be expected to value and be knowledgeable about evaluation.
• Educate staff about the value of evaluation and how to participate effectively in evaluation efforts.
• Motivate staff to regularly use specific evaluation strategies.
• Modify the organizational structure as needed to embrace change in response to evaluation findings.
• Foster use of technology to support evaluation and evaluative thinking.
• Use data to set staff goals and evaluate staff performance.
• Use data to make staffing decisions (e.g., to decide which staff works on which projects, which staff members are eligible for promotions or advancements, or which staff members need more assistance).
• Include attention to evaluation in all management-level succession planning. Managers should be expected to value and, where possible, be knowledgeable about evaluation.

Governance
In Organizations where there is Evaluative Thinking:
• The board uses appropriate data in defining its goals, work plan, and structure to develop plans summarizing strategic direction.
• The board regularly evaluates its progress relative to its own goals, work plan, and structure.
• The relationship between the organization mission and plans for strategic direction is assessed.
• There is a systematic process and timeline for identifying, recruiting, and electing new board members.
• Specific expertise needs are identified and used to guide board member recruitment.
• The board regularly (e.g., annually) evaluates the executive director's performance based on the established goals and work plan, including, where appropriate, the use of some program evaluation results.
• The board assesses the organization's progress relative to long-term financial plans.
• The board assesses the organization's progress relative to evaluation results.


Indicators of Evaluative Thinking

Finance
• Systems are in place to provide appropriate financial information to staff and board members, and these systems are monitored to ensure they inform sound financial decisions.
• A comprehensive operating budget, which includes costs for all programs, management, and fundraising, and identifies sources of all funding, is developed and reviewed annually.
• Unit costs of programs and services are monitored through the documentation of staff time and direct expenses.
• Financial status is assessed regularly (at least quarterly) by board and executive leaders.
• Year-end revenues and expenses are periodically forecast to inform sound management decisions.
• Financial statements are prepared on a budget-versus-actual and/or comparative basis to achieve a better understanding of finances.
• A review process exists to monitor whether appropriate and accurate financial information is being received from either a contracted service or internal processing.
• Capital needs are reviewed annually.
• The organization has a plan identifying actions to take in the event of a reduction or loss in funding.

Fund Raising/Fund Development
Organizations that use Evaluative Thinking will:
• Conduct research on potential fund development opportunities (grants, contracts) and assess which to pursue.
• Develop a written fund development plan that clarifies which grants and contracts will be pursued. Assess whether the plan is being followed and why changes and exceptions are made. Revise the plan as needed based on assessments.
• Involve program staff in proposal writing, especially sections on program design and outcomes on which the program will be assessed.
• Assess the costs and benefits of fund-raising events and activities.

Business Venture Development
Organizations that use Evaluative Thinking will:
• Systematically identify gaps in community service.
• Assess whether they have the capacity to bring in new types of business.
• Research new business venture opportunities.
• Base new venture development on capacity findings, results of gap studies, and business venture development research.

Technology Acquisition & Training
In Organizations where there is Evaluative Thinking:
• Assessment processes will be in place to make decisions about technology maintenance, upgrades, and acquisition.
• Technology systems include software that can be used to manage and analyze evaluation data (e.g., Excel, SPSS).
• Technology systems provide data to evaluate client outcomes.
• Technology systems provide data to evaluate organizational management.
• Technology systems are regularly assessed to see if they support evaluation.
• Staff technology needs are regularly assessed.

Client Interaction
• Client needs assessments are conducted regularly, and client services are designed in response to determined needs.
• Client satisfaction is regularly assessed, and the results of client outcome assessments and client satisfaction are used in development of new programs.


Indicators of Evaluative Thinking

Marketing & Communications
Organizations that use Evaluative Thinking will:
• Have a marketing and communications plan. The plan will be linked to the organization's strategic plan and will be used to help the organization achieve its mission.
• Involve multiple stakeholders, including staff, board members, and technical assistance providers, as needed, to develop and assess the marketing and communications plan.
• Assess the effectiveness of the organization's marketing and communications plan (i.e., determine whether an accurate message is getting out and whether delivery of the message is furthering the mission of the organization).

Program Development
Organizations that use Evaluative Thinking will:
• Identify gaps in community services before planning new programs.
• Incorporate findings from evaluation into the program planning process.
• Involve multiple stakeholders in developing and revising program plans.
• Develop written program plans including a logical formulation of each program.
• Follow program plans where possible and ensure that there are strategies in place to modify program plans if needed.

Program Evaluation
Organizations that use Evaluative Thinking will:
• Regularly conduct evaluations that include attention to characteristics, activities, and outcomes of selected programs.
• Involve program staff, organization leaders, and clients (as appropriate) in developing and revising program evaluation plans as well as collecting and analyzing program evaluation data.
• Share results of program evaluations, including findings about client outcomes, as appropriate, with leaders, staff, clients, board members, and funders.
• Use results of program evaluation to drive continuous improvement of programs.
• Ensure that there are key staff with evaluation expertise to address the organization's evaluation needs and that there are staff members whose jobs or components of their jobs are dedicated to evaluation.
• Hire evaluation consultants when needed.
• Provide or obtain training in evaluation for program staff members and make sure that the training is current, well delivered, and provided for enough staff members to ensure that evaluation use is a standard practice.

Staff Development
Organizations that use Evaluative Thinking will:
• Conduct a formal staff development needs assessment annually.
• Develop a plan for staff development based on needs assessment data.
• Evaluate the staff development plan.
• Provide opportunities for staff to assess staff development training sessions.
• Use results of staff training assessments to influence future staff development.

Human Resources
Organizations that use Evaluative Thinking will:
• Have an established personnel performance review process.
• Use performance reviews to provide feedback relative to performance expectations.
• Collect and update information on credentials, training, and cultural competencies of staff, and then use the results to recruit, hire, and train culturally competent staff.
• Conduct regular (e.g., annual or biannual) staff satisfaction surveys and use the results to inform modification of policies and procedures.

Alliances & Collaboration
Organizations that use Evaluative Thinking will:
• Evaluate existing partnerships, alliances, and collaborations based on organization mission and strategic plans.
• Identify additionally needed partnerships, alliances, and collaborations.
• Regularly assess partnerships, alliances, and collaborations in which the organization is involved to determine if they are functioning effectively and continue to meet organization mission and strategic direction.


Why and How to Assess Evaluative Thinking in Your Organization

In 2005, Bruner Foundation evaluation consultants and representatives from 12 nonprofit organizations in Rochester, New York, created the Evaluative Thinking Assessment Tool to assess the extent to which evaluative thinking is present in various organizational capacity areas. The tool grew out of the Bruner Foundation's Evaluative Thinking in Organizations Study (ETHOS). For more information about ETHOS, please see the Effectiveness Initiatives section of the Bruner Foundation website (www.brunerfoundation.org). In 2007 the tool was automated, and in 2010–11 it was updated again after additional study.

The modified version of the Evaluative Thinking Assessment Tool was designed to capture leader perceptions about evaluative thinking in a critical subset of organizational capacities at a particular point in time. Both the original and the modified version of the Evaluative Thinking Assessment Tool include multiple questions to capture information about indicators of evaluative thinking in the 15 different organizational capacity areas. For each item on the modified version of the assessment tool, the respondent is asked to report whether an indicator of evaluative thinking is present using codes shown in drop-down boxes next to each indicator. It is expected that summarizing the organization's best projections about evaluative thinking will help the organization recognize whether and to what extent it is incorporating specific evaluative thinking strategies into its work, and in which organizational areas. It will also help the organization to prioritize strategy changes related to evaluative thinking.

Those completing the Bruner Foundation Evaluative Thinking Assessment Tool are advised to complete all 15 worksheets in the instrument and then view the summary table and summary graph that are generated automatically. Summaries and all pages in the Evaluative Thinking Assessment workbook can be printed if desired; users of the tool are encouraged to generate their own Evaluative Thinking Assessment report from these worksheets. Please see the actual Evaluative Thinking Assessment tool for remaining detailed instructions for its use, as well as the example report, which presents Evaluative Thinking Assessment results for a fictitious organization. [These tools can be found in the Effectiveness Initiatives section of the Bruner Foundation website: Featured Resources on the Evaluative Thinking page.]

The Evaluative Thinking Assessment Tool was designed to facilitate discussions about:
• Perceptions of evaluative thinking in multiple organizational areas
• Changes in evaluative thinking
• Challenge areas where additional evaluative thinking might be incorporated into organizational work

Evaluative Thinking Assessment scores can also inform the setting of priorities regarding incorporation or enhancement of evaluative thinking in organizational practice.
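The workbook produces its summary table and graph automatically, so no programming is required to use it. Purely as an illustration of the kind of roll-up such a summary involves, the minimal sketch below tallies indicator codes by capacity area in Python; the capacity areas, codes, and percent-present score shown here are invented for the example and are not taken from the Bruner Foundation tool.

```python
# Hypothetical illustration of rolling up indicator codes into a summary table.
# The real Evaluative Thinking Assessment Tool is an Excel workbook with
# drop-down codes; nothing below reproduces its actual areas, codes, or scoring.
import pandas as pd

# One row per indicator, coded by the respondent.
ratings = pd.DataFrame({
    "capacity_area": ["Mission", "Mission", "Governance", "Governance", "Technology"],
    "code":          ["present", "absent", "present", "partial", "present"],
})

# Summary table: how many indicators received each code, by capacity area.
print(pd.crosstab(ratings["capacity_area"], ratings["code"]))

# One possible per-area score: the percentage of indicators coded "present",
# which an organization could compare against its own thresholds.
percent_present = (ratings["code"].eq("present")
                   .groupby(ratings["capacity_area"]).mean() * 100)
print(percent_present.round(0))
```

In practice the workbook's built-in summary serves this purpose; the sketch is only meant to show what the summary table and per-area comparisons represent.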
Users of the tool are also encouraged to think about score thresholds for their own organizations—what is ideal, what is expected, and what is unacceptable—and to think of responses to challenge areas that are identified through its use.

Evaluative Capacity and Organizational Leaders

In order for an organization to develop and sustain evaluative capacity, it must have leaders who, at the very least, not only value evaluation, but more importantly, practice evaluative thinking. Specifically, organizational leaders should:
• Support evaluation and evaluative thinking
• Provide or obtain training in evaluation and evaluative thinking for themselves and key staff
• Use data to set staff goals, evaluate staff performance, and make staffing decisions
• Include attention to evaluation as an important part of succession planning
• Foster the use of technology to support evaluation and evaluative thinking
• Use evaluation findings in decision making


Leaders and key staff should also have regular involvement in evaluation actions, including some or all of the ways identified in the list below.

Evaluative Roles of Organization Leaders and Key Staff
• Identify programs that will benefit from evaluation
• Identify and support staff in all aspects of their efforts to design and conduct evaluations
• Promote the widespread practice of assessment and learning within the organization
• Identify questions of consequence (based on expected outcomes)
• Carefully select evaluation strategies that will produce usable evaluation findings
• Thoughtfully conduct evaluation activities and analyses of data
• Use evaluation results to strengthen programs and enhance program decision making
• Share evaluation training and other evaluation learning with colleagues
• Promote clear communication about the purposes of the evaluation
• Design, conduct, and support evaluations that are honest and that help inform decision making
• Promote the continuous improvement of programs, as needed
• Set short-term measures and milestones, but also identify longer-term opportunities for organizations to use information and produce stronger programs and desired outcomes
• Identify outcome targets in advance of initiating evaluation (use prior efforts, external standards, or agreed-upon expectations to determine reasonable targets)
• Do peer education with colleagues
• Support policies that clarify:
  — How evaluation results should be used (i.e., in context, and in conjunction with other findings and documentation)
  — Who is responsible for evaluation (including quality assurance, training, compensation)
  — How evaluative capacity can be sustained (e.g., through succession planning that includes attention to evaluation knowledge, through technology planning that supports evaluation)

As stated in the acknowledgments to this guidebook, it is the belief of the authors and their many partners over time that building a culture of evaluation—one in which evaluation skills are firmly embedded and used for development, improvement, and decision making about programs, strategies, and initiatives—is fundamentally important in creating a learning organization, a data-driven organization, an organization sensitive to making informed decisions in a timely way. We also believe that evaluation skills are highly transferrable beyond evaluating an organization's programs, strategies, and initiatives. Evaluative thinking is just that—a way of thinking, a process of acquiring and using relevant data in any aspect of an organization's work. The following section provides additional examples of the application of evaluative thinking to important organizational competency areas beyond program evaluation.




2. Evaluative Thinking at Work

As stated in the Introduction, evaluative thinking can be used in many areas of organizational work. Here we address technology, staff development and HR, and collaboration. In any area, the process is the same: ask important questions, systematically collect and analyze data, summarize and share information, and act on the findings.

Evaluation and Technology

Organizations typically need technology systems for four main purposes: (1) management support (e.g., for use with financial, HR, or procedural information); (2) internal and external communication (e.g., e-mail capability, proposal or report development and submission, contact with program participants and others); (3) program operations (e.g., client tracking, archiving and recordkeeping, best practice research, program descriptions, logic model development); and (4) evaluation (for example, of programs, clients, overall organizational effectiveness).

An organization that is committed to evaluative thinking uses it to inform decisions related to technology and embraces supportive technology use. In other words, such organizations ask questions about whether the organization's hardware and software truly inform organizational and program practices, they systematically collect data about technology systems in use to ensure that those systems truly are supportive, and they analyze and act on the data that are collected (i.e., technology changes are made based on information). In addition, they use technology to support evaluation work. The list below enumerates the questions that should be answered to determine whether an organization is using evaluative thinking to support technology and using technology to support evaluation.


Questions to Consider Regarding Use of Evaluative Thinking to Determine Technology Needs and Use
• Do staff who need them have individual computers or access to computers? Do most or all staff members use the Internet as a research tool? Do most or all staff members use e-mail for internal communication?
• Does the organization have technical support for computers and software (e.g., assistance for computer problems)? Is there a technology systems plan and budget in place to guide decisions about technology, maintenance, upgrades, and acquisitions?
• Are the technology needs of staff and/or programs regularly assessed?
• Are the organization's technology systems regularly assessed to see if they support organizational effectiveness and evaluation in particular?
• Are the organization's technology systems changed or upgraded if assessment data suggest there are unmet needs?
• Do staff members receive training to use technology to track clients and support program delivery and evaluation?
• Do staff members have opportunities to influence design of or use of technology for tracking information about their programs?
• If the organization is using technology to support evaluation, do the organization's technology systems:
  — Store and manage data to evaluate organizational effectiveness overall, including financial, governance, communications, and staffing information, as well as data used to evaluate client outcomes?
  — Include software (such as SPSS or SAS) that can be used to manage and analyze data?

When Does an Organization Need Technology Help to Support Evaluation and Evaluative Thinking?

Most program evaluation and evaluative thinking technology needs are straightforward—the technology must only help to manage small databases and to perform simple calculations such as frequencies (i.e., counts of occurrences or responses) and cross-tabulations (analyses of combined data); a short illustration follows the list below. Staff can use existing technology (hardware, software, Internet connections) to manage and store information and to conduct analyses of data to inform decision making. There are at least four conditions, however, when additional support is beneficial:
• When there are no staff members with expertise in the use of software to manage and analyze data;
• When the evaluations, plans, or assessments being conducted require the development of, or access to, large or complex databases;
• When a funder or collaborative agreement requires that information be stored in and analyzed from customized databases;
• When the planned analyses cannot be conducted by hand or with the use of readily available software such as Excel.
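To make the two calculations just named concrete, here is a minimal sketch using Python and pandas as a stand-in for a spreadsheet. It is illustrative only; the column names, sites, and response values are invented for the example, and the same counts can be produced with an Excel pivot table.

```python
# Minimal sketch of the two simple calculations named above: frequencies and
# cross-tabulations. All column names and values here are hypothetical.
import pandas as pd

# Hypothetical survey records: one row per respondent.
responses = pd.DataFrame({
    "site":         ["North", "North", "South", "South", "South"],
    "satisfaction": ["High", "Medium", "High", "High", "Low"],
})

# Frequencies: counts of occurrences of each response.
print(responses["satisfaction"].value_counts())

# Cross-tabulation: satisfaction responses broken out by site.
print(pd.crosstab(responses["site"], responses["satisfaction"]))
```

Analyses at this scale run comfortably on existing hardware and software; the four conditions listed above describe the situations where something more than this is likely to be needed.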


It is very important to factor in evaluation needs when considering new technological support. In organizations where there is evaluative thinking, technology planning processes include attention to evaluation and ways that technology can support it.

Using Your Existing Technology—What Else Is Out There?

Most organizations have multiple computer workstations for staff member use. These computers are commonly equipped with the Microsoft Office suite of software that includes word processing, spreadsheet (Excel), and relational database (Access) applications, with connections to the Internet. (Less common are staff members who are able to use all of these applications specifically to support program evaluation and other evaluative thinking needs.)

Excel, in particular, is a versatile and readily available tool for managing and analyzing both survey and record review (administrative) data. For simple instructions and a sample Excel file including multiple worksheets, visit the resources and tools pages of the Bruner Foundation website.

There are four other types of software that can address data management and analysis needs to help support regular use of evaluative thinking and effective evaluation. Consider whether any of these are necessary for your continued efforts to use technology evaluatively:
1. Statistical analysis software (such as SPSS, Minitab, or SAS)
2. Software to help conduct electronic/web-based surveys (such as Survey Monkey or Zoomerang)
3. Qualitative data analysis software (such as QSR N6, Atlas.ti, E6, QSR NVivo)
4. Database software (such as FileMaker Pro, QuickBase)

Again, in terms of organizational priorities, it is important both to use evaluative thinking to enhance technology and to use available technology systems to enhance your evaluation.

How Do Nonprofit Organizations Get Technology Help?
1. Use or develop internal capacity. Be sure to develop clear job descriptions and/or needs statements, and to assess the effectiveness of technology staff.
2. Seek technical assistance from the nonprofit or for-profit sectors in the form of interns, consultants, or shared services. Be sure to assess the usefulness of the services.
3. Seek capacity-building grants from funders, offered exclusively or as part of a larger effort. Be sure the grant's intent matches the organization's technology needs.

Evaluative Thinking, Staff Development, and HR

In addition to the various data collected about programs and participants, most organizations manage data about staff. This includes information about staff background and qualifications, compensation, and performance. Many organizations also provide ongoing training for their staff, both internally and via other providers, including some professional development that is mandatory and some that is elective. Evaluative thinking can be useful in both of these areas of organizational function.

Organizations that incorporate Evaluative Thinking into their Human Resources practices:
• Have an established personnel performance review process
• Use performance reviews to provide feedback relative to performance expectations
• Collect and update information on credentials, training, and cultural competencies of staff and use the results to recruit, hire, and train culturally competent staff
• Conduct regular (e.g., annual or biannual) staff satisfaction surveys and use the results to inform modification of policies and procedures


Evaluative Thinking and Collaboration

It is often advantageous (sometimes even required) for organizations to work together. The combined efforts of two or more organizations are identified by a variety of terms with overlapping definitions. Common forms of working together include:
• Alliances: Formal agreements establishing an association between organizations to achieve a particular aim or advance common interests.
• Partnerships: Cooperative, usually voluntary, relationships between groups that agree to share responsibility for achieving some specific goal(s). Partnerships can also include the sharing of resources, risks, and rewards according to terms reached through (often prespecified, sometimes legal) agreement.
• Collaboration: Though it is often used as a synonym for the above and to describe other joint or multiple efforts, this term is primarily meant to signify durable and extensive relationships where participants bring separate organizations into a new structure with full commitment to a common mission. Partners pool or jointly secure resources and share the results and rewards.

Regardless of what the relationship is called, working together requires decision making. Decision making is strengthened by evaluative thinking.

Should you join in or go it alone? When forming alliances, partnerships, or collaborations, always consider the following.
• Alliances, partnerships, and collaborations are not efficient ways of doing business.
• All partners must do and get something, but their efforts are not necessarily equal.
• True collaboration is inherently interactive, relationship-based, and voluntary.
• It takes time, is difficult to achieve, and cannot be mandated, forced, or faked.

To apply evaluative thinking to consideration of partnership formation, try collecting and analyzing data to answer the following questions.
• Why is a combined approach better? Are there additional or better outcomes that will result from the combined approach?
• Are partners necessary to achieve the desired outcomes?
• Which organizations and which specific staff or positions from each organization will be involved, and how?
• What is each organization's history of and commitment to working with others?
• How will working together work? Are there written, agreed-upon guidelines and responsibilities for key collaborative functions such as oversight, communication, recordkeeping, and decision making?
• What is the timeframe for collaborative work? How much time will be required for start-up, maintenance, and dissolution of the collaboration?
• How will the effectiveness of work done with others be evaluated? Is the group committed to using evaluation findings to strengthen its work?

Once partnerships have formed, evaluative thinking can be used again to determine the effectiveness of partnerships and collaborations and inform course corrections.
• Are partnerships, alliances, and collaborations embraced by the organization?
• Do partners articulate a shared vision for their work? Can they articulate how they benefit from participating in the collaboration? Do partners support the partnership agenda?


• The effectiveness of the organization's communications and marketing plan (i.e., whether an accurate message is getting out and whether delivery of that message is furthering the mission of the organization) is regularly assessed.

Further, communications and marketing plans will include and be linked to evaluation results. In organizations that regularly use evaluative thinking, the content of what is marketed and included in communication is driven, in part, by evaluation results.




3. Evaluation Essentials

As stated in Section 1, it is the belief of the authors and their many partners over time that building a culture of evaluation is fundamentally important for data-driven learning organizations. Knowledge about essential evaluation skills is key. Further, it is our belief that evaluation skills are highly transferrable beyond evaluating an organization's programs, strategies, and initiatives and can be applied to any aspect of an organization's work. This section provides an important overview regarding essential evaluation skills.

Evaluation is a science with a relatively short history. It became a distinctive field of professional social science practice in the late 1960s (Patton 1982) and is widely used in conjunction with the work of nonprofit organizations today. Many types and classifications of evaluation exist; many terms are associated with the practice. (For a brief list of important evaluation terms and a brief summary of the history of evaluation, see Appendices 1 and 3.) Because evaluation is the cornerstone of the development of evaluative capacity, this guidebook includes two sections (this and the following section) detailing basic content about evaluation and strategies for enhancing evaluation capacity.

It is important to start any discussion of evaluative capacity with two key definitions.

Working Definition of Program Evaluation: The practice of evaluation involves the thoughtful, systematic collection and analysis of information about the activities, characteristics, and outcomes of programs for use by specific people to reduce uncertainties, improve effectiveness, and inform decisions regarding those programs (adapted from Patton 1982).

Working Definition of Evaluative Thinking: A type of reflective practice that incorporates use of systematically collected data to inform organizational actions and other decisions.

Evaluation capacity is the ability of staff and their organizations to do evaluation. Evaluative capacity is the combination of evaluation skills and evaluative thinking. It requires a commitment to doing and using evaluation in programs, strategies, and initiatives as well as a commitment to using those same skills in various aspects of the organization.

For more content, activities, and a step-by-step guide to plan for and conduct program evaluation or to commission external evaluation, see Participatory Evaluation Essentials: An Updated Guide for Nonprofit Organizations and Their Evaluation Partners (Baker 2010). This guidebook can be downloaded at no cost from the Effectiveness Initiatives section of the Bruner Foundation website, www.brunerfoundation.org.


Using Participatory Evaluation Essentials

The Participatory Evaluation Essentials guidebook is organized to help evaluation trainees walk through the process of designing an evaluation, collecting and analyzing evaluation data, projecting timelines and budgets, and even writing an evaluation report. Each section of the guidebook includes pullout materials that can also be used for training activities. An evaluation planning summary can be used to help trainees fully design an evaluation to implement in their own programs. The guidebook can also be used as an evaluation reference source for those who want to know more about a specific topic. It contains an evaluation bibliography with updated text and online references about evaluation and examples of evaluation tools such as surveys, interviews, and observation protocols.

Evaluation Planning

Planning an evaluation requires forethought and attention to strategic and technical steps. It also requires recognition of assumptions about the work being evaluated and existing biases held by those who will conduct the evaluation. Good evaluation planning also includes front-end analysis planning and thought about use of evaluation results. There is a need to establish neutrality to the degree possible by those making evaluation decisions. A written plan or design is required for the evaluation overall. Specific administration and analysis plans are strongly recommended for each data collection strategy.

Evaluation Strategy Clarifications

In addition to basic definitions, it is useful to understand current and common thinking about evaluation strategy. To that end, it is important to recognize the following:
• All evaluations are partly social, because they involve human beings; partly political, because knowledge is power and decisions about what gets asked and how information is used are political decisions at the organizational level; and partly technical, because choices always have to be made about how to collect and analyze data (Herman, Morris, Fitz-Gibbons 1996).
• Both qualitative (descriptive) and quantitative (numerical) data can be collected using multiple methods (e.g., observations, interviews, surveys, or statistical analyses of practical assessments from review of program records). Although there has been much debate about which strategies and types of data are best, current thinking indicates that both qualitative and quantitative data are valuable and both can be collected and analyzed rigorously.
• There are multiple ways to address most evaluation needs. While there are things you should definitely avoid, there is no single or best way to design and conduct evaluation. Different evaluation needs and resources call for different designs, types of data, and data collection strategies. A rigorous evaluation will include:
  — Multiple data collection strategies (e.g., some combination of surveys, interviews, observations, and record reviews, though not necessarily all in one evaluation)
  — Collection of data from or about people from multiple perspectives (e.g., program participants, their caregivers, staff of the program, or other service providers)
  — Collection of data at multiple points in time

In brief, evaluation involves five steps:
1. Specifying questions
2. Developing an evaluation design
3. Applying evaluation logic (i.e., identifying outcomes, indicators, and targets, and determining how and when to measure them)
4. Collecting and analyzing data, making action recommendations
5. Summarizing and sharing findings

Purposes of Evaluation

Evaluations are typically conducted to accomplish one or more of the following:
• Render judgments
• Inform data-driven decision making
• Facilitate improvements
• Generate knowledge

It is critical to carefully specify why evaluation is being conducted. This should be done at the earliest stages of evaluation planning and with the input of multiple stakeholders (e.g., organization leaders, staff, and possibly board members, program participants, and funders).

The Importance of Evaluation Stakeholders

Evaluation stakeholders are people who have a stake—a vested interest—in evaluation findings (Patton 1997, p. 41). Stakeholders include anyone who makes decisions about a program, desires information about a program, and/or is involved directly or indirectly with a program.
• Most programs have multiple stakeholders.
• Stakeholders typically have diverse and often competing interests (e.g., if you were to ask youth participants, parents, teachers, and program staff what was important about an afterschool program, the answers and related program interests and needs would be different).
• In program evaluation, stakeholders typically include organization officials (e.g., the executive director), program staff, program clients or their caregivers, and ultimately program funders. Sometimes organization board members, community members, or other organizations are also stakeholders. It is critical for an evaluation to involve some of these key stakeholders in the evaluative process, especially in evaluation planning. When appropriate, there can also be important roles for stakeholders in data collection, analysis, and reporting. When stakeholders are not involved at all in evaluation, there will likely be misunderstandings and perhaps dissatisfaction regarding evaluation design, implementation, and use.

Evaluation Design

Developing an evaluation design helps you think about and structure an evaluation. Evaluation designs communicate evaluation plans to evaluators, program officials, and other stakeholders.

Evaluation is not free! But evaluation should not be viewed as in competition with program resources. Evaluations can be funded as a component of a program (using program funds) or as a separate project (using earmarked or additional funds). A common rule of thumb is to set aside at least 10% of the cost of the program for evaluation (for example, a program that costs $200,000 a year to run would budget at least $20,000 for its evaluation). For additional details about evaluation funding, see Appendix 9.

A good design should include the following:
• Summary information about the selected program, why it is to be evaluated, and what the underlying logic is regarding the program (i.e., how the program is expected to use resources to deliver services that contribute or lead to outcomes, and how it will be known when outcomes have occurred)
• Between two and five questions to be addressed by the evaluation


• Clear description of the data collection and data analysis strategies that will be used

• Identification of the individuals who will be responsible for collecting data

• Estimated timeline for evaluation activities and projected costs to do the evaluation

• Description of the products of the evaluation (usually including a report and executive summary with formatting/layout information), with notation regarding who will receive them and probable plans for their use

Evaluation Questions Get You Started

Evaluation questions ideally should be determined by the service providers (e.g., executive directors, program directors, and managers, and maybe line staff and other stakeholders), together with the evaluator, in accordance with the purpose of the evaluation. Evaluation questions should:

• Focus and drive the evaluation. They should clarify what will and will not be evaluated.

• Be carefully specified (and agreed upon) in advance of other evaluation design work.

• Generally represent a critical subset of the information that is desired to address the purpose of the evaluation. (While it's easy to identify many questions that could and maybe should be answered, usually only a subset can be addressed in any single evaluation due to resource constraints and the need to focus attention on a manageable set of requests.)

For more information and examples of evaluation questions as they relate to different evaluation purposes, see Appendix 4.

Criteria of Good Evaluation Questions

It is important to keep the number of evaluation questions manageable. The exact number depends on the purpose of the evaluation and the resources available to conduct the evaluation; however, limiting the evaluation to between two and five questions is strongly advised.

The following are criteria of good evaluation questions (adapted from Patton 1997):

• It is possible to obtain data to address the questions. The data must be available for use by those undertaking evaluation.

• Questions should focus on current or recent (but not future) events, skill development, or response to programs.

• There is more than one possible answer to the question, i.e., the findings are not predetermined by the phrasing of the question.

• Those conducting the evaluation want and need the information derived from addressing the questions and know how it will be used internally and, where appropriate, externally.

• The questions are aimed at changeable aspects of programmatic activity (i.e., they should focus on those things that can be modified where findings warrant change).

Evaluation does not focus on cause or attribution, nor is it conducted to prove anything. That requires more of a research approach, including experimental design (see Appendix 5). Answering evaluation questions that meet the above criteria allows for data-driven decision making about important, select programs or program features.

Additional Evaluation Design Considerations

As evaluation is planned, the following should be determined:

1. Is assistance from a professional evaluator necessary, useful, or affordable? (See the following section on commissioning evaluation.)

2. What descriptive information should be collected to clarify context (e.g., staffing plans, activity schedules, history)?

3. How will service delivery and program implementation be documented? What must be tracked (e.g., intake, attendance, activity schedules)?

4. What specific outcomes will be assessed? What are indicators of those outcomes? What targets or levels of outcome attainment are desired? How good is good enough?

Different terms are used to describe the results and outcomes of programs, what is expected, and how you know if meaningful outcomes are achieved. The following lexicon, adapted from the United Way of America, is useful (though you are reminded that these terms, like all others in evaluation, are used differently by different program stakeholders and evaluators).

Outcomes are changes in behavior, skills, knowledge, attitudes, condition, or status. Outcomes are related to the core business of the program; are realistic, attainable, and within the program's sphere of influence; and are appropriate. Outcomes are what a program is held accountable for.*

Indicators are specific characteristics or changes that represent achievement of an outcome. Indicators are directly related to the outcome and help define it. They can be seen, heard, or read. Indicators must be measurable and make sense in relation to the outcome whose achievement they signal.

Targets specify the amount or level of outcome attainment that is expected, hoped for, or required. Targets or levels of outcome attainment can be determined relative to external standards (when they are available) or internal agreement (based on best professional hunches, past performance, or similar programs). Targets for outcome attainment should be established in advance of data collection and analysis whenever possible.

* Note: Though outcomes are commonly described as changes, sometimes outcomes are the maintenance of a desired or precondition status.

Data Collection Overview

There are four primary evaluation data collection strategies, which can (and should) be combined to address evaluation questions and allow for multiple sources of data. All have both benefits and limitations and require preparation on the front end. All can be used to collect either qualitative or quantitative data.

• Record and document review
• Surveys
• Interviews
• Observation

Remember, mixed methodologies and multiple sources of data or respondents, collected at multiple points in time, increase evaluation rigor and the usefulness of findings. Data do not have to be collected for all participants in every program cycle; determining findings from samples (subgroups) of participants or point-in-time (snapshot) estimates can be useful. Section IV of this guide provides many additional details on data collection.

What Happens After Data Are Collected?

Once evaluation data have been collected via surveys, interviews, observations, and/or record reviews, there are three remaining important stages of evaluation.

1. Data must be analyzed and results must be summarized. Organizations are great at collecting data, but they often forget to analyze and review the results. Analysis must be done to obtain results and to inform learning and decision making.

2. Findings must be converted into a format (e.g., a report) that can be shared with others.


The report may take different forms for different stakeholder audiences (e.g., a memo with key findings or an oral presentation). The most common form used to summarize evaluation results is the Evaluation Report (see the following and Section VI).

3. Action steps must be developed from findings. This is probably the most important step—it moves evaluation from perfunctory compliance into the realm of usefulness.

At the organization or program level, making changes based on findings would be next (step 4), often followed by another round of question specification, design development, collection and analysis of data, conversion of findings into a sharable format, and development of new action steps. Both evaluation and evaluative thinking follow these same steps (specify questions, develop designs, collect and analyze data, identify action steps and recommendations, summarize and share findings).

Evaluation Reports

As stated previously, in order to facilitate usefulness, the results of evaluation (or an evaluative inquiry) must be shared. Evaluation reports are the most common strategy for sharing evaluation results. Strong reports are written using straightforward terms in a style mutually agreed upon by evaluators and stakeholders. They typically include:

• A description of the subject program
• A clear statement about the evaluation questions and the purpose of the evaluation
• A description of the actual data collection methods
• A summary of key findings and a discussion or explanation of the meaning and importance of key findings
• Suggested action steps for the program and next steps or issues for further consideration which clarify continued evaluation or program-related follow-up (as appropriate)

It is important to note that while the report should conclude with suggested action steps, it is not the job of the evaluator to make specific recommendations based on findings (although specific recommendations regarding future evaluation are appropriate). Evaluation stakeholders, i.e., those commissioning the evaluation and those responsible for operations, must determine what actions should be proposed or taken in response to evaluation findings. Their decisions can and should be included in an evaluation report and duly attributed to them. Section VI contains more details about evaluation reporting.

Commissioning Evaluation

Sometimes consultants are needed to conduct an evaluation or to provide coaching and assistance to an organization conducting a participatory evaluation.*** Tips for commissioning and paying for evaluation are provided in Appendices 8 and 9 of this guidebook. Most importantly, organizations that commission evaluation should be sure that the consultants they work with have:

• Basic knowledge of the substantive area being evaluated
• Knowledge about and experience with program evaluation (especially with nonprofit organizations)
• Good references from trusted sources
• A personal style and approach that fits with their organization

*** Working Definition of Participatory Evaluation: Trained evaluation personnel and practice-based decision makers (i.e., organizational members with program responsibility—service providers) working in partnership. Participatory evaluation brings together seasoned evaluators with seasoned program staff to provide training and then to design, conduct, and use results of evaluation (Cousins 1998).


Things to Determine Before Initiating Evaluation Design

1. Program and Evaluation Purpose: What is the general purpose of the selected program, and how does it contribute to the organization's mission? What are the specific reasons why this program was selected for evaluation? How does this project contribute to the broader field? What is the purpose of this evaluation, and what are the possible lessons learned?

2. Implementation and Feasibility: Does the target population of the selected program know about and want to participate in the program? If applicable, have all necessary collaborative agreements been secured? How will the organization guard against implementation impediments such as insufficient recruitment or participation, staff turnover, or insufficient program funds?

3. Program Design and Staging: What are the key components of the program, how do they fit together, and how are they expected to contribute to participant outcomes? Has a reasonable program logic model been developed? How will an evaluation project best be staged over time?

4. Outcomes: What program and participant outcomes are expected? How does the organization know when these outcomes are achieved? If applicable, how are program and participant data tracked? What other reporting is currently being conducted?

5. External Assistance and Finances: Will an evaluation consultant be needed to complete the evaluation? If so, how will the consultant be selected? How will the consultant's work be overseen? Will external funding be needed and available to support evaluation of this program? If yes, how can that funding be acquired, and how does the proposed evaluation fit into the mission of the proposed funder?

Note: You don't have to know all the answers to these questions to design an evaluation, but gathering as much information as possible before initiating the design is useful.

Things to Avoid When Working with Evaluation Consultants

• Agreeing to an evaluation design that you do not understand.
• Agreeing to an evaluation where fee payments are not attached to deliverables.
• Commissioning an evaluation on a timetable that is inappropriate for the program.
• Commissioning an overly complicated evaluation design or one for which there is insufficient stakeholder involvement.
• Allowing evaluation consultants to conduct overly complicated or onerous data collection strategies or to exclude the use of data collected directly from respondents. At least some primary or direct data collection is preferable.
• Assuming that an evaluation design must ALWAYS include measurement of outcomes. Sometimes implementation assessment (i.e., determining whether program activities are being delivered effectively) is all that can or should be done.
• Forcing evaluation of outcomes that are inappropriate or beyond the scope of the program.
• Developing an unrealistic timetable for evaluation reporting. There will always be a lag between final data collection and reporting.




4. Building Evaluation Capacity

As clarified in Section I, the key components of evaluative thinking—asking questions, systematically gathering data, analyzing data and sharing results, and developing action steps from the results—can be applied to most aspects of organizational practice. We believe that increased capacity to conduct evaluation helps organizations think and act evaluatively. For example, when organization administrators and board members use evaluation skills to define, collect data about, and determine the "compatibility" of programs with the mission, and then change or discontinue those identified as incompatible, they are using evaluative thinking to inform organizational direction. This section of the guidebook provides additional detail for strengthening evaluation capacity, with special attention to effectively collecting and analyzing data.

Data Collection Details

There are four primary ways to collect evaluation data. All have benefits and limitations, and many decisions must be made before data collection can commence. The following are basic distinctions.

Surveys have a series of questions (items) with predetermined response choices. They can include only independent questions or groups of related questions (scales) that can be summarized. Surveys can also include open-ended items for write-in responses or clarification. Surveys can be completed by respondents or by the person(s) administering the survey. They can be administered in write-in, mail-in, electronic, and other alternative formats (e.g., as responses that are cast by holding up a certain colored card, applying colored stickers that signify response categories onto easel sheets with questions, or using candy or marbles of different colors to represent responses).

Interviews are one-sided conversations between an interviewer and a respondent. They can be conducted in person or by phone, with individuals or with groups. Questions are (mostly) predetermined but open-ended. Respondents are expected to answer using their own terms.

Observations are conducted in person to view and hear actual program activities. They can be focused on programs overall or on participants in programs.

Record review is a catch-all category that involves accessing existing information. Record review data are obtained from program records, including those designed for evaluation, those used for other purposes, and those used by other agencies (e.g., report card grades might be a source of data for evaluation of an after-school program; data collected as part of a drug screening at a health clinic might be used as part of the evaluation of a prevention program).

The table below presents a summary of how different data collection strategies can be used when conducting evaluations. It also provides examples of when various strategies could be used to collect data to inform evaluative thinking for the organization overall (shown in blue).


Use Record Reviews to . . .
• Collect behavioral reports.
• Test knowledge.
• Verify self-reported data.
• Determine changes over time.
• Monitor unit costs for a program through documentation of staff time and direct program expense.

Use Surveys to . . .
• Study attitudes and perceptions.
• Collect self-reported assessment of changes in response to a program.
• Collect program assessments.
• Collect some behavioral reports.
• Test knowledge.
• Determine changes over time.
• Ascertain whether staff members are satisfied with their new professional development program, or their medical and dental benefits, and/or still have unmet needs.

Use Observations to . . .
• Document program implementation.
• Witness levels of skill or ability, program practices, and behaviors.
• Determine changes over time.
• Determine whether program modification strategies are being followed and are helping to respond to challenges.

Use Interviews to . . .
• Study attitudes and perceptions using the respondent's own language.
• Collect self-reported assessment of changes in response to a program.
• Collect program assessments.
• Document program implementation.
• Determine changes over time.
• Determine whether existing technology systems are supporting evaluation needs.

For more details about these data collection methods and practice activities, be sure to review the Participatory Evaluation Essentials guide (Baker and Bruner 2010).

Tips for Improving Data Collection

Maximizing Survey Response

The number one way to maximize survey response is to write or use a good survey instrument and tailor the administration to the desired respondent group. The right set of well-written questions, appropriate answer choices, and brevity are all important. The following are additional tips to increase the number of people who respond.

• Advertise the survey's purpose and importance, as well as administration details, in advance.
• Unless the survey is anonymous, carefully document who receives and completes surveys. Conduct follow-up and send reminders (a simple tracking sketch follows this list).
• Consider using incentives, including lotteries drawn from respondent lists, to encourage responses, but be sure to think carefully about what would "incentivize" your target response group.
• Make it easy for a respondent to answer (e.g., use program time when possible, include postage, be sure return directions are clear, use electronic survey options, send reminder emails, etc.).
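The tip above about documenting who receives and completes surveys can be supported with very simple tooling. The following Python sketch is illustrative only and is not from the guidebook; it assumes two hypothetical lists, one of everyone who was sent the survey and one of those who returned it, and uses them to compute a response rate and flag who still needs a reminder. The 70% threshold is an arbitrary illustration, not a standard from this guide.

```python
# Hypothetical survey-administration log (invented names, not real respondents):
# everyone who was sent the survey, and everyone who returned it.
invited = {"A. Lopez", "B. Chen", "C. Okafor", "D. Smith", "E. Rivera", "F. Kim"}
responded = {"B. Chen", "D. Smith", "E. Rivera"}

response_rate = len(responded) / len(invited)
needs_reminder = sorted(invited - responded)

print(f"Response rate: {response_rate:.0%}")       # 3 of 6 invited -> 50%
print("Send reminders to:", ", ".join(needs_reminder))

# A low response rate signals possible nonresponse bias (discussed below).
if response_rate < 0.70:  # illustrative cutoff only
    print("Caution: report findings as 'of survey respondents,' not all participants.")
```

Whether findings can be generalized beyond respondents depends on how sufficient and representative the response actually is, as discussed next and in Appendix 10; the sketch only automates the bookkeeping.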


Nonresponse bias, which happens when large numbers of targeted respondents do not answer, can severely limit your use of response data. If only some of those who should answer a survey actually do, you must limit survey findings to the group of respondents (e.g., 80% of the survey respondents rated the activity favorably). When there is sufficient and representative response, there can be generalization to larger groups (e.g., 80% of participants rated the activity favorably). See Appendix 10 for an explanation of how to sample to get sufficient response and how many responses are needed to generalize.

Survey Questionnaire Development: Step-by-Step

1. Identify the key issues you wish to explore or "test" via the survey. Review available literature, including proprietary sources, to determine if there are good surveys or items that already exist to measure the key issues. Start with any known research organizations or projects in the field of interest (e.g., the Harvard Family Research Project [HFRP]).

2. Convert the key issues into questions and remember to:
• State the question in very specific terms, using language that is appropriate for the target population
• Use multiple questions to sufficiently cover the topic
• Avoid double negatives
• Avoid asking multiple questions in one
• Be sure response categories match the question, are exhaustive, and don't overlap

3. Determine what other data are needed for analysis (demographics, other background such as how long the respondent has been affiliated with the agency or program, contact information).

4. Determine how the questions will be ordered and formatted.

5. Include directions (such as mark one for each, mark one, or clarify here) for responses.

6. Have the survey instrument reviewed by others, including representatives from the targeted respondent group.

7. Develop an analysis plan and an administration plan. That way, you will be prepared for data entry and to summarize findings. Thinking through the analysis plan will also help with survey review—if you can't figure out how you will use data from the survey, you may not need the questions or they may need to be rephrased. The administration plan will help you figure out in advance how best to get the survey conducted and the completed survey retrieved from as many of the targeted respondents as possible.

Improving the Effectiveness of Interviews (adapted from Patton 1987)

An interview is not a conversation; rather, it is a series of questions designed to elicit answers in the respondent's own words. An interviewer should not interrupt the respondent (other than to regain control or move the interview along), and an interviewer should not share opinions about the questions or comment on the responses.


The following ten tips can make evaluation interviews more useful.

1. Select the type of interview (e.g., structured, where all questions are asked in the same way and the same order; semi-structured, where order and exact wording are more flexible; unstructured, where only topics are identified) or the combination of types that is most appropriate to the purposes of the evaluation. Develop an analysis plan, sampling strategy, and reporting template.

2. Communicate clearly what information is desired and why that information is important. Let the respondent know how the interview is progressing.

3. Remember to ask single questions and to use clear and appropriate language. Check (or summarize) occasionally to be sure you are hearing and recording the respondent's responses accurately. Avoid leading questions.

4. Don't conduct the interview like an interrogation, demanding a response. Ask questions and obtain answers, being sure to listen attentively and respond appropriately to let the person know she or he is being heard.

5. Recognize when the respondent is not clearly answering the question and press for a full response.

6. Maintain neutrality toward the specific content of responses. You are there to collect information, not to make judgments about the respondent.

7. Observe while interviewing. Be aware of and sensitive to how the person is affected by and responds to different questions.

8. Maintain control of the interview.

9. Treat the person being interviewed with respect. Keep in mind that it is a privilege and responsibility to learn about another person's experience.

10. Practice interviewing. Develop your skills.

Using Observations Effectively for Evaluation Data Collection

The purpose of conducting observations is to describe a program thoroughly and carefully and in sufficient detail so that users of the observation report will know what has occurred and how it has occurred. Observations involve looking and listening. A particular strength of observations is that data are collected in the field, where the action is, as it happens.

Effective observations require careful preparation. It is useful to know what is supposed to happen and to develop a protocol that includes cues to remind the observer what to focus on, that facilitates recording what is seen and heard, and that incorporates the use of codes to facilitate the analysis process where possible (see the Appendix of the Participatory Evaluation Guide for examples of observation protocols). Additionally, analysis plans and reporting templates should be developed along with any protocols.

During the observation, be sure to use the protocol. At the conclusion of an observation, always ask, "Was this a typical session?"

Making the Most of Record Reviews Using Data that Are Already Available

Review of program records involves accessing existing internal information or information that was collected for other purposes. Data are obtained from a program's own records (e.g., intake forms, program attendance); from records used by other agencies (e.g., report cards, drug screening results, or hospital birth data); or by adding questions to standard recordkeeping strategies (e.g., a question for parents about program value added to an enrollment form).

• Remember, record reviews require that you extract data that were collected for other purposes. Make sure that the use of other records makes sense given your evaluation questions, and that any data collection limitations and inconsistencies in the existing data will not also cause major limitations to your study.

• Be sure to address confidentiality and access issues long before you actually need the data. How does your source collect and maintain the data? Often, record review data can be retrieved via electronic submission if you know how to ask. Develop standard confidentiality agreements and request protocols as needed. Beware! Securing access, extracting, and modifying data for use in your study may take considerable time, and the steps are dependent on the progress of others.

RECORD REVIEW CAUTION

It is rare that you will be able to influence how data are collected by the original users. For example: If you were evaluating a program providing access and support services for high school graduates about to be first-generation college enrollees, and you wanted to follow up on their matriculation and progress as undergraduates, you would need access to records from all the colleges your participants ultimately attended. Each would have its own system of tracking enrollment, etc. Or, you might access a central repository of information on college enrollment and retention such as the National Student Clearinghouse. But you must keep in mind that the system was developed originally to provide lending organizations with enrollment verifications and deferments of financial aid. Its accuracy is limited by timing requirements for high school graduation and credit accumulation updates, by FERPA blocks, and by nonparticipation in the system (more than 92% of post-secondary institutions participate, but others do not).

Data Analysis Basics

Staff members at nonprofit organizations often believe this is where their expertise regarding evaluation ends. It can be helpful to bring in special assistance to conduct data entry and analysis, but it is not required. The Rochester Effectiveness Partnership (REP) and Hartford Building Evaluation Capacity (BEC) projects, and others like them, have shown conclusively that staff within nonprofit organizations have the ability to do good data analysis, i.e., to become "analysts," using readily available tools (such as Microsoft Excel).

Use the following steps to guide analyses of any data:

1. Organize and enter data into databases. Although the term database often is used to mean an electronic file (such as an Excel spreadsheet or an Access, SPSS, or customized information management software file), it can also be a neat stack of completed surveys, interviews, or enrollment forms, or a handwritten grid that captures results. The important thing is that the data are arranged in such a way that it will be easy to inspect them and look for trends.

2. Clean and verify information to store in electronic databases. Visually inspect the database and double-check a sample to make sure there were no systematic data entry errors (e.g., review about 10% of the electronic data records to be sure they are the same as the paper records). If needed, calculate minimum, maximum, and mean values to see if there are outliers (i.e., extremes or exceptional cases) in the data set. Determine whether outliers should be removed or included with the rest of the data.


3. Develop a plan before analyzing data.

• Specify what you need to know in advance. It is helpful to go back to the evaluation questions and reclarify what you need to know and how the results of the analysis will answer those questions.

• Specify how good is good enough (i.e., clarify targets and how it will be decided whether results—differences between targets and actual outcomes—are favorable/positive, unfavorable/negative, or neutral).

• Specify how data will be encoded (e.g., when categories like Excellent and Very Good will be combined as Positive).

• Specify what analytical strategies or calculations will be performed (e.g., frequencies—counts or percentages of events or responses—or cross tabulations, where two or more variables are reviewed together, such as gender and age, or status by year). Clarify how missing data will be handled.

• Clarify how data will be disaggregated. Most of the time, participants in a program are not all the same, and there is a need to look at the outcomes or responses of groups of participants separately or comparatively (i.e., to cross tabulate or look at more than one variable at a time). It is common, for example, to present separate and combined results for those of different genders or age groups or from groups with differing levels of experience with the program.

• Determine whether and which tests and analytic strategies (e.g., chi-square tests, t-tests, analysis of variance [ANOVA]) are needed to determine statistically significant change, and whether effect size must be calculated to determine the magnitude of change. (Refer to the bibliography in Evaluation Essentials for examples of references that clarify the use of different statistical tests. Keep in mind that most basic program evaluation and use of evaluative thinking to inform organizational decisions depend on relevance and meaningfulness rather than statistical tests.)

Example 1: An organization served a large group of youth in a summer learning loss prevention program. Pre-program and post-program reading test scores were not significantly different (although on average, post-program scores increased slightly). The program still concluded it had achieved something important, because the expectation was that there might be significant decline over the summer.

Example 2: An organization serving female offenders with high rates of recidivism compared the number of days participants remained unincarcerated to the number of days unincarcerated for those in a similarly sized population with similar profiles that had not received any post-incarceration support. The difference in the average number of days participants stayed free was very small, but significant (121 days on average for the target group as compared to 119.5 days on average for the comparison group). However, the organization did not consider this strong evidence that their program worked, because they were trying to promote freedom for more sustained periods, six months or more, in an effort to break the cycle of recidivism. While their participants did significantly better than those with no services, they missed the 180-day minimum target by a substantial amount. In response, the organization went back to their participants, staff, and other consultants to determine what else their program needed to help support participants.

• Determine how to present results (e.g., as numbers, percentages, in tables, or as graphs). See also the final section of this manual for additional suggestions regarding communication of results.

A brief, illustrative sketch of several of these analysis steps follows.
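To make the plan above concrete, the short sketch below shows how a few of these steps (encoding response categories, frequencies and a cross tabulation, comparison against a preset target, and a paired t-test on pre/post scores) might look in Python with pandas and SciPy. It is not from the guidebook, which recommends readily available tools such as Microsoft Excel; all data, column names, and the 75% target are invented for illustration.

```python
import pandas as pd
from scipy import stats

# Invented example data: one row per participant (not real program data).
df = pd.DataFrame({
    "gender":  ["F", "M", "F", "F", "M", "M", "F", "M"],
    "rating":  ["Excellent", "Very Good", "Good", "Excellent",
                "Fair", "Very Good", "Excellent", "Good"],
    "pre_score":  [62, 58, 71, 65, 55, 60, 68, 59],
    "post_score": [66, 57, 74, 70, 54, 63, 72, 61],
})

# Encode: combine Excellent and Very Good as "Positive", everything else as "Other".
df["rating_group"] = df["rating"].apply(
    lambda r: "Positive" if r in ("Excellent", "Very Good") else "Other")

# Frequencies (counts and percentages) and a cross tabulation by gender.
print(df["rating_group"].value_counts())
print(df["rating_group"].value_counts(normalize=True).round(2))
print(pd.crosstab(df["gender"], df["rating_group"]))

# Compare results to a preset (invented) target: at least 75% positive ratings.
pct_positive = (df["rating_group"] == "Positive").mean()
print(f"Positive ratings: {pct_positive:.0%} (target: 75%) ->",
      "target met" if pct_positive >= 0.75 else "target not met")

# Paired t-test on pre/post scores (the kind of test weighed in Example 1 above).
t_stat, p_value = stats.ttest_rel(df["post_score"], df["pre_score"])
mean_change = (df["post_score"] - df["pre_score"]).mean()
print(f"Mean change: {mean_change:.1f} points, p = {p_value:.3f}")
```

As the examples above stress, whether an average gain like this "matters" is a judgment about relevance and meaningfulness against targets set in advance, not something a p-value alone can settle.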


It is the analyst's job to show how information from a variety of data collection sources revealed important, actionable findings, or to clarify when and why there may be discrepancies in data collected via different sources.

For example, the evaluation may ask participants and some significant others about the impact of a program on them (e.g., teenagers in a program and their parents or school teachers). The findings could show that teenagers thought the program was critically important while their parents weren't aware of this at all, or the reverse; or you could find that parents corroborated or expanded on the importance of the program described by the participants. Both are important and actionable findings.

7. Feeling compelled to use all of the information collected. Sometimes evaluation data collection does not work out as planned. In fact, research has shown that most evaluation studies vary from original plans. Don't use partial information or results of data collection activities that were problematic. For example, if you administered a batch of surveys and a few were filled out incorrectly or by people who should not have been in the target population, eliminate them. Explain, if needed, why the information was discarded, but concentrate on findings that answer the key questions of the evaluation work.

8. Including any action steps or conclusions that are not clearly developed from the findings. Any conclusions or summary statements should be illustrated by actual data collection results.

9. Introducing totally new topics into the report in the final sections. Do not use the report to explain why the design was changed, what wasn't done, or what should be happening with a program, regardless of the findings presented in the report.

For more information about how to strengthen evaluation capacity in your organization, consider reviewing available texts on the subject. The Bruner Foundation's Participatory Evaluation Essentials: An Updated Guide for Nonprofit Organizations and Their Evaluation Partners (Baker and Bruner 2010) contains specific guidance for conducting evaluation and building evaluation capacity as well as a bibliography of other evaluation resources. (Appendix 8 in this guidebook also includes details on commissioning and paying for evaluation.)

Organizations that use evaluative thinking:

• Regularly conduct evaluations that include attention to characteristics, activities, and outcomes of selected programs.
• Involve multiple evaluation stakeholders, including staff, organization leaders (sometimes even board members), and clients (as appropriate), in developing and revising evaluation plans as well as collecting and analyzing evaluation data.
• Share results of evaluations, including findings about client outcomes (as appropriate), with multiple evaluation stakeholders, including leaders, staff, clients, board members, and funders.
• Use results of evaluation to generate knowledge, render judgments, inform decisions, and improve programs where necessary.
• Ensure that key staff with evaluation expertise address the organization's evaluation needs and that there are staff members whose jobs or components of their jobs are dedicated to evaluation.
• Hire evaluation consultants when needed (see Appendix 8 for details).
• Provide or obtain training in evaluation for program staff members and make sure that the training is current, well delivered, and provided for enough staff members to ensure that evaluation use is a standard practice.


BUILDING EVALUATION CAPACITY PROJECTS

In projects devoted to evaluation capacity building (e.g., the Rochester Effectiveness Partnership [REP] conducted by the Bruner Foundation, 1996–2003; Building Evaluation Capacity [BEC] conducted by the Hartford Foundation for Public Giving, 2006–present; and the Evaluation Institute [MWEI] conducted by the MetroWest Health Foundation in Framingham, Massachusetts), all of the information in the Participatory Evaluation Essentials guidebook was delivered in a series of comprehensive multi-hour training sessions. Each session included a presentation of information, hands-on activities about the session topic, opportunities for discussion and questions, and homework for trainees. By the end of the training sessions, trainees had developed their own evaluation designs. For REP and BEC, the trainee organizations also implemented their designs during an additional 10 months of evaluation coaching and review. As projects concluded, trainees summarized and presented the findings from their evaluations. The REP nonprofit partners and other nonprofit provider trainee organizations in Hartford agreed that the up-front training helped prepare them to do solid evaluation work and provided opportunities for them to increase participation in evaluation within their organizations. We recommend this approach for those who are interested in building evaluation capacity.

Representatives from the trainee organizations in Hartford and Framingham also reported that their increased evaluation capacity and opportunities to assess evaluative thinking in their organizations helped them understand and apply evaluative thinking in multiple areas of their organizational work. For example, trainee organizations shifted their organizational structure to ensure that there are key staff with evaluation expertise to address the organizations' needs and that there are staff members whose jobs or components of their jobs are dedicated to evaluation.

The Bruner Foundation is developing an automated matrix/guide that describes key features (such as context, convener, timeframe, deliverable, and cost) of the Evaluation Capacity Building projects it has been involved with over the past 15+ years. The guide is designed to help both grantmakers and nonprofit organizations consider different strategies for enhancing evaluative capacity.




5. Board Members, Evaluation, and Evaluative Thinking

As those responsible for the governance of the organization, board members have a significant role to play in ensuring that organizational leaders use data-driven decision making. Therefore, they can also play a significant role in supporting, encouraging, and sometimes driving evaluative thinking in order to strengthen the organization. Nonprofit leaders and staff are routinely involved with multiple stakeholders. It is our belief that evaluators, funders, program providers, and their board members can and often should be meaningfully engaged in evaluation.

Involvement of Board Members in Evaluation

Board members, though often seen only as recipients of evaluation information, can also play important roles in the process of evaluation. Involvement of board members who collaborate with staff, organization leaders, and/or evaluators can help to:

• Contribute to a sense of shared responsibility for and ownership of evaluation
• Increase the quality of evaluation by including different perspectives
• Provide opportunities to discuss and clarify expectations for the evaluation
• Provide opportunities for board members and staff to interact
• Surface or highlight the multiple perspectives that exist (e.g., among staff, between staff and program managers or executive leaders, between direct clients and their caregivers)

This requires a collaborative approach by selected board members and usually additional training for them regarding effective evaluation practice. To facilitate meaningful involvement of board members:

• Determine who from the board should be part of the stakeholder group
• Make sure selected board members understand effective evaluation practices (don't assume they all will understand, as evaluation is still a relatively new field; don't assume that management or even research expertise is equivalent to knowledge about evaluation)
• Clarify specific roles for individual board members based on the evaluation plans, board member capabilities, and timeframe
• Be clear about and get agreement on time commitment, responsibilities, and working in partnership with leaders, staff, and evaluators (avoid creating scenarios where board members' interests, ideas, or responses are deemed more valuable than those of others)
• Specify ways and a timeframe for reporting progress to the whole board
• Reduce fears and misunderstandings about the evaluation


When and How Board Members Can Be Involved in Evaluation

Depending on their skill sets and availability, board members can be part of a group that helps focus and conceptualize the evaluation, interpret data and develop action steps, and communicate about evaluation and evaluation results.

During the evaluation planning stage, board members can help:
• Clarify the evaluation questions and the purpose for the evaluation
• Select evaluation consultants
• Review and refine the program logic model or theory of change, including clarification of outcomes identification, specification of indicators, and setting of targets
• Select data collection strategies, specify timelines, and determine products
• Review data collection instruments, administration and analysis plans, and report outlines

While the evaluation is being conducted, board members can help:
• Hear and reflect on updates on evaluation progress and preliminary findings
• Begin to think about communications plans
• Connect evaluation team members to other resources or expertise as needed

When the evaluation is completed, board members can help:
• Discuss and interpret results from summarized findings and develop action steps
• Determine audiences and formats for evaluation reporting

Remember again, not all members of the board need to have comprehensive knowledge about evaluation, but it is helpful for everyone to learn or know basic information, and for those involved more seriously in evaluation actions to learn or know specifics like those identified in Sections III and IV. The following are requisites. Board members must understand and have consensus on:

• The working definitions of evaluation and evaluative thinking, and what is expected of organization representatives engaged in participatory evaluation
• The purpose of the specific evaluation they are working on
• Reasonable outcomes and related indicators and analytical strategies
• How to commission and pay for evaluation
• How to interpret and report evaluation findings
• How evaluation findings will be used
• Why evaluative thinking is important and how they can incorporate evaluative thinking into their own work

Because it is important that stakeholders are "on the same page," the following recaps important information summarized in Section III.


• All evaluations are partly social, because they involve human beings; partly political, because knowledge is power and decisions about what gets asked and how information is used are political decisions at the organizational level; and partly technical, because choices always have to be made about how to collect and analyze data (Herman, Morris, Fitz-Gibbons 1996).

• Having a clear evaluation design ensures better, more timely information.

• Both qualitative (descriptive) and quantitative (numerical) data can be collected using multiple methods (e.g., observations, interviews, surveys, statistical analyses of practical assessments from review of program records). Although there has been much debate about which strategies and types of data are best, current thinking indicates that both qualitative and quantitative data are valuable and both can be collected and analyzed rigorously.

• There is no "silver bullet" or "one size fits all" approach to addressing evaluation needs. Different evaluation needs and available resources dictate different designs, types of data, and data collection strategies. Comparison groups, experimental designs, and even "pre and post" testing are not required or even appropriate in many program evaluation designs. (Be sure to review Appendix 5 on the differences between evaluation and research for more clarification about evaluation standards.)

Evaluations are typically conducted to accomplish one, two, or all of the following:
• To account for or document grant or program actions or expenses
• To generate new knowledge
• To render judgments
• To inform data-driven decision making and facilitate improvements as needed

It is critical to carefully specify what program (or program component) is to be evaluated, why the evaluation is being conducted, and what will be done with the findings (see also Section VI for more information about evaluation findings).

Review all sections in this guidebook (especially III and IV) together with board members to help prepare them for their roles.

Working With Your Board to Define Program Outcomes, Indicators, and Targets

Clarifying outcomes, indicators, and targets for programs is some of the hardest work of evaluation and a common focus of stakeholder (including board member) involvement (see Section III). Keep in mind the following:

• Outcomes, especially long-term outcomes, should not go beyond the program's purpose (i.e., don't project educational outcomes for an employment and training program).

• Outcomes should not go beyond the scope of the target audience (i.e., don't project change throughout the county if you are only serving a small proportion of county residents in a particular neighborhood).

• Avoid holding a program accountable for outcomes that are tracked and influenced largely by another system, unless there is meaningful interaction with that system regarding outcome change (e.g., don't hold an afterschool program accountable for the outcomes of students at school, unless the afterschool and day school programs are integrated).

• Do not assume that all subpopulations will have similar outcomes (e.g., outcomes may be very different for those with longer program experience; different subgroups may require different targets).


• Be sure to measure outcomes on a timetable that corresponds to logical projections of when outcomes could be accomplished. (For example, do not assume you will be able to report about school outcomes until after the end of the school year.)

• Most outcomes have more than one indicator, and indicators may not capture all aspects of an outcome. Identify the set that is believed (or can be agreed) to adequately and accurately signal achievement of an outcome. For example:

Initial outcome: Teens are knowledgeable about prenatal nutrition and health guidelines.
Indicators: Program participants are able to identify food items that are good sources of major dietary requirements.

Intermediate outcome: Teens follow proper nutrition and health guidelines.
Indicators: Participants are within proper ranges for prenatal weight gain. Participants abstain from smoking. Participants take prenatal vitamins.

Longer-term outcome: Teens deliver healthy babies.
Indicators: Newborns weigh at least 5.5 pounds and score 7 or above on the APGAR scale.

Outcome: Participants will be actively involved in afterschool activities.
Indicators: At least 500 students will participate each month. Students will attend 70% or more of all available sessions. At least half of the participants will participate in 100 or more hours per semester.

Outcome: Clients show slowed or prevented disease progression at six and 12 months.
Indicators: A total of 66% or more of all clients will have:
— Sustained CD4 counts within 50 cells
— Viral loads

• To make judgments about a program or facilitate improvement of a program, you need not measure all indicators for all participants. Selection of sample data for key indicators can provide ample information for decision making (see also Participatory Evaluation Essentials: An Updated Guide for Nonprofit Organizations and Their Evaluation Partners, Baker and Bruner 2010, for more information about sampling strategies and sample size).

• Set targets in advance based on best professional hunches, external standards (when they are available), and past performance (when baseline or initial data are available). Do not agree to targets that are unrealistically high or embarrassingly low.

Board Involvement in Securing Evaluation Consultants

Evaluation assistance can be obtained from independent technical assistance or evaluation consultants, evaluation or other technical assistance consulting firms, and sometimes universities with graduate programs that include training or projects in evaluation (especially program evaluation). Along with key organizational decision makers, board members can be collaboratively involved in the identification and even selection of evaluation consultants.

Before hiring any consultant or organization, be sure to find out whether they have:
• Experience with evaluation, especially with nonprofit organizations


What About Governance and Evaluative Thinking?

Governance is the primary role of the board of directors in nonprofit organizations. In addition to the roles in evaluation described above, board members can and should incorporate evaluative thinking into their governance roles.

As stated in Section I, in organizations where there is evaluative thinking:

• The board is aware of and understands evaluations of the organization's programs, initiatives, and strategies.
• The board uses appropriate data in defining the goals, work plan, and structure used to develop plans summarizing strategic direction.
• The board regularly evaluates progress relative to its own goals, work plan, and structure.
• The relationship between the organization's mission and plans for strategic direction is assessed.
• There is a systematic process and timeline for identifying, recruiting, and electing new board members.
• Specific expertise needs are identified and used to guide board member recruitment.
• The board regularly (e.g., annually) evaluates the executive director's performance based on established goals and work plan.
• The board assesses the organization's progress relative to long-term financial plans.
• The board includes appropriate evaluation results in the assessment of the organization's progress.


6. Using Findings

Effective evaluative thinking and evaluation depend on successful, timely communication of results and, ultimately, use of findings to inform decision making. The following are important considerations related to information sharing to promote findings use.

Uses and Packaging for Evaluation Findings

Most evaluation findings are presented in formal evaluation reports. These are usually written documents that include a cover page; a table of contents (optional); and sections describing what was done (methodology), what was found (results), what it means (discussion/interpretations), and what can be done (action steps/recommendations). These can be produced with word processing or presentation software (such as PowerPoint). Internal and external evaluation reports can be developed, as well as a brief executive summary with minimal description of the work and a discussion and presentation of key findings. It is common for multiple reports to be developed when there are multiple stakeholder audiences such as staff, board, clients, or a larger community.

In addition to written reports, evaluation findings and results of evaluative inquiry are also commonly disseminated via oral presentations, again tailored differently for different audiences, and sometimes via more creative means such as display posters, flyers, and brochures. Any of these dissemination forms can also be posted to websites, Facebook pages, or blogs and linked electronically to other informative sites or reports.

Organizations that regularly use evaluative thinking will do the following regarding development and dissemination of evaluation findings:

• Develop a reporting strategy in the evaluation design stage. Determine who needs to see and hear findings—i.e., who the audiences are. (It may also be helpful to establish general guidelines about reporting—e.g., executive summaries are always developed, or never developed, and findings always go to involved program staff before they are presented either in writing or orally to other stakeholders.)

• If draft communication is needed, be sure to determine who should be involved, how they should respond to draft information, and how much time is available for response. You also need to figure out what you will do with the feedback (e.g., incorporate it, make changes based on it, or just note it).

• Don't assume reporting is only about compliance (i.e., reporting out to stakeholders such as funders, board members, or senior managers). For reporting to exemplify evaluative thinking, it must inspire action.

• Figure out the best way to communicate evaluation findings to various audiences. For example, if written reports are never used and unnecessary for documentation, find other ways—such as oral presentations—to present findings. Or, if board members always want both a written report and an oral presentation, be sure to factor in the timing and skills required to get both of those accomplished. Don't forget about translation if any versions need to be shared with audiences who use languages other than English.

• Plan for and allocate both time and financial resources to effectively communicate findings. Development of reports and/or presentations takes both time and skill. It is often the most time-consuming aspect of evaluation. It should not, however, be seen as a task that should always be farmed out. The process of developing written or oral reports is usually very instructive regarding what the findings are and what they mean.

• Revisit the planned reporting strategy and make necessary changes. Once evaluation projects are underway, it is common to recognize new audiences and/or additional reports that should be developed.

As findings are readied for reporting, also consider the following:

• Do the findings accurately reflect the data that were collected and analyzed?
• How might the interpretation be inaccurate?
• Are there any unintended consequences that might result from sharing the findings?
• Are there any missing or overlooked voices?

Lastly, timing and venue are also important considerations. Along with audience identification and presentation strategy selection, those responsible for communicating evaluation or inquiry results should consider whether there are:

• Natural opportunities for sharing findings, such as regularly occurring or preplanned meetings focused on the evaluation or inquiry subject
• Opportunities to host a special convening for the purpose of presenting findings
• Opportunities during regular work interactions (e.g., clinical supervision, staff meetings, board meetings) where selected findings could be presented
• Informal discussions including review of results

"Ate" Steps to Promote Effective Information Sharing

1. Deliberate—Spend time with people close to the evaluation work and confirm the findings. You must convince yourself/ves first.
2. Anticipate—Determine how you want to use the findings and what value might be derived from the findings.
3. Investigate—Once you have findings, test them with key stakeholders. They will shed light on the perceived value of the findings.
4. Calibrate—Develop a result-sharing mechanism that can convey the message you want to convey to your chosen audience.
5. Illuminate—Remove any unnecessary details and highlight the key findings.
6. Substantiate—Take a step away from the work and come back to it later with fresh eyes. Ask yourself, "Do the findings still resonate?"
7. Annotate—Proofread the final draft. Misteaks can distract from results.
8. Communicate—Share the results!

Modified with permission from a presentation made by Howard Walters, OMG Center for Collaborative Learning, to nonprofit organization evaluation capacity building trainees.

Evaluation Reports Revisited

When a full evaluation is conducted (e.g., of a specific program), the evaluation results are usually summarized into an evaluation report. The following provides guidelines for completing a formal evaluation report:


<strong>Integrating</strong> <strong>Evaluative</strong> <strong>Capacity</strong> Into <strong>Organizational</strong> <strong>Practice</strong> • Section 62. Develop a report outline (be sure to include thecomponents identified in the next section).3. Determine the desired reporting formats (writtendocument, electronic document, written or electronicpresentation materials, executive summaries,consumer reports, etc.) and develop a reportproduction timeline with writing assignments.4. Assign someone (or a team) the task of primaryauthor(s) and the responsibility of final production.Assign others specific writing assignmentsas needed.5. Develop a report production timeline.6. Develop a dissemination plan. (It is common anddesirable to develop multiple evaluation productsfor different stakeholders. For example, staffmay need a very detailed report while executiveleaders or boards need only a summary of keyfindings.)7. Share the report outline, audience list, suggestedreporting formats, proposed timeline, and disseminationplan with key stakeholders.8. Revise the report outline and all other reportplans to incorporate key stakeholder suggestions.9. Complete writing assignments so that allthe report sections (according to the revisedreport outline) can be compiled by the primaryauthor(s).10. Combine all sections and develop the first reportdraft. Share the draft with appropriate stakeholders.11. Make revisions as needed and produce all necessaryversions of the report.Components of a Strong Evaluation ReportA strong evaluation report should include the following:• Program description• Purpose of the evaluation• Evaluation questions• Description of actual data collection methods used• Summary of key findings (including tables, graphs,vignettes, quotes, etc.) and a discussion or explanationof the meaning and importance of key findings• Suggested action steps based on the findings• Next Steps (for the program and the evaluation)• Issues for Further Consideration (loose ends)When writing an evaluation report, be sure to followthe guidelines in the previous section or at leastdevelop an outline first and pass it by some stakeholders.When commissioning an evaluation report,ask to see a report outline in advance. When reviewingothers’ evaluation reports, don’t assume they arevaluable just because they are in a final form. In allcases, review the report carefully for the above, anddetermine whether the report achieves its purpose.Important Things to Remember About Report Writing• Follow the report outline described above. Feelfree to be somewhat flexible with the order, butdon’t leave out whole sections. Make sure findingsare featured prominently.• As suggested in the guidelines, make aninternal outline including who is responsiblefor which sections. Be sure to leave time forstakeholders to help you with editing and makingrevisions.• Be economical in decisions about what toinclude in the report. Shorter is better. Avoidexcessive use of jargon.• Think, in simple terms, about what you aretrying to say, and then write that. Use completesentences and standard English grammar conventions.You can use bulleted statements, butbe sure your reader can follow your logic.• Use tables and graphs to help illustratefindings. All tables and graphs must havetitles, labels and legends, or footnotes so thatthey stand alone.A Bruner Foundation Guidebook for Nonprofit Organizations and Their Stakeholders • Copyright © by the Bruner Foundation 201243


<strong>Integrating</strong> <strong>Evaluative</strong> <strong>Capacity</strong> Into <strong>Organizational</strong> <strong>Practice</strong> • Section 6• Use quotes and vignettes or snippets fromfield notes to illustrate the findings. Remember,quotes should have quote marks aroundthem and be attributed to the speaker or writer(e.g., “a participant”). If you are presentingfield notes, be sure they are clearly identified(e.g., show the date and time the observationor interview were conducted) and in context.• Use formatting to increase clarity. Headersand sections will help the reader know what ishappening in your report. Be consistent aboutwhere and how they appear (centered, bold,underlined, side headings, etc.). Number thepages. When generating a draft, think aboutdouble-spacing.• Be consistent in the use of language, capitalization,punctuation, etc. For the most part,evaluation reports should be written in thepast tense—only report what you actually didand what you found. The action steps or Issuesfor Further Consideration sections can includereferences to future actions.• Read and re-read your work—if you can’tunderstand it, chances are others won’t either.Preparing Findings to ShareEvaluation findings often include both positive and lesspositive (negative) information. It is rare when programresults are totally good or exceptionally awful. It is usefulto think with stakeholders on the front end, before resultsare available, about what to do with different possibleresults. The following are steps to employ when resultsare negative, inaccurate, inconclusive, or positive.What Should Be Done If EvaluationResults Are Negative?Explain with as much certainty as possible what theresults are and what they mean.• If targets have been missed, clarify how negativethe results are. For example: If it was expectedthat 85% of your participants would remain in theprogram through the six-month period, but only40% did, it should be explained that rather thanhaving most participants complete the program,most did not.• If possible, use available data to explain why thenegative results occurred.• Don’t blame bad program results on bad evaluationdata collection techniques unless the evaluationwas badly designed or implemented. If so, redo itand then report about results.• Clarify the next course of action based on theresults (e.g., the program will be discontinued,staff will change, more staff training will be incorporated,the target population will shift, etc.).• If possible and useful, clarify what did work andwho it worked for—so that can be built upon infuture efforts.• After summarizing findings, as described in the sectionon basic data analysis, the analyst must try togive the stakeholders some overall judgments. Avoidthe milquetoast approach of throwing in a goodfinding for each negative. Clarify if the findings aremostly positive, or mostly negative, or mixed, andthen use examples to support the statement.What Should Be Done If EvaluationResults Are Inaccurate Or Inconclusive?Inaccurate—determine whether inaccurate resultshave been caused by faulty data collection, faultydata analysis, or other evaluation missteps. If areport is released with information that is later deter-44 A Bruner Foundation Guidebook for Nonprofit Organizations and Their Stakeholders • Copyright © by the Bruner Foundation 2012


<strong>Integrating</strong> <strong>Evaluative</strong> <strong>Capacity</strong> Into <strong>Organizational</strong> <strong>Practice</strong> • Section 6mined to be inaccurate, errata can be released orreports can be recalled. It is uncommon to completea full report when it is known that results are inaccurate.Strategies for communicating with stakeholdersabout why results will not be available should beundertaken.Inconclusive—sometimes results do not indicate withclarity what judgments can be made or what nextsteps can be undertaken.• Present results in an unbiased fashion and indicatethat no conclusions could be drawn from the information.• Allow the reader to determine the significance ofthe information.• If there is concern that the lack of conclusivenessis due to problems with data collection or analysis,develop a plan to correct any evaluation proceduresfor future efforts.• If the lack of conclusiveness stems from programchallenges (such as major changes in staffing orservice delivery), report that carefully and clarifywhat could be done to remediate the problem inthe future.What Should Be Done If EvaluationResults Are Positive?As with negative results, explain with as muchcertainty as possible what the results are and whatthey mean.• If targets have been exceeded, clarify how positivethe results are. If possible, use available data toexplain why you think the positive results occurred.• Some organizations believe that evaluation has toreveal problems and negative facts about programs,so they automatically distrust positive results.Some programs set up designs that are so biasedthere is no possibility that anything but positiveresults will occur. Neither condition is valuable.If, after careful data collection and analysis, it isclear that the results are positive, report them andcelebrate the accomplishment.• As with negative results, it is important to clarifythe next course of action based on the results (e.g.,the program will be continued without changes,new strategies, staff or target populations will betried, etc.)• Resist assuming that positive results suggest thatthe next iteration of the program can be conductedwith many more participants (ramped up) or withnew or different groups of participants.• Design careful follow-ups. If possible and useful,clarify what did work and who it worked for so thatcan be built upon in future efforts. As with negativereports, it is most instructive to report as carefullyas possible when things have gone well so thatsimilar programming choices (approach, staffing,target population selection, etc.) can be tried again.As stated in the previous section, it is important toincorporate evaluation results <strong>into</strong> a larger organizationalcommunications plan that specifies what messagesare important to circulate, as well as when andhow that circulation happens. It is also very importantto use results of evaluations to inform next stepsin the subject programs and to factor program results<strong>into</strong> the personnel reviews of key staff and executiveleaders responsible for the programs as appropriate.A Bruner Foundation Guidebook for Nonprofit Organizations and Their Stakeholders • Copyright © by the Bruner Foundation 201245




7. Sustaining Evaluative Capacity and Evaluative Thinking

Organizations that regularly use evaluative thinking encourage the extension and use of evaluative capacity throughout their organizations. Organization leaders support evaluation and have key roles. Additionally, multiple staff members are involved in evaluation capacity-building activities (such as training) and in conducting evaluation.

Extending and Sustaining Evaluation Capacity and Evaluative Thinking

Organizations that have enhanced their evaluative capacity also need to focus on sustaining and extending that capacity. The following activities can help accomplish this:
• Apply the knowledge obtained through evaluation training to multiple evaluation needs in the organization
• Work to ensure that staff understand how evaluative capacity (and evaluation in particular) is relevant to their work and why data collection and analysis are important
• Engage multiple staff in sequential, brief training sessions that address planning, data collection, data analysis, and reporting, and that include hands-on learning opportunities
• Involve multiple staff in the processes of designing evaluations or evaluative inquiries, collecting data, analyzing data, and summarizing findings
• Assess and continue to model evaluative thinking in all organizational practice

What does it look like when executive leaders and management staff use evaluative thinking?

In addition to helping to spread evaluative capacity throughout the organization, executive leaders and management staff should also incorporate evaluative thinking, whenever possible, into their daily roles. The following are indicators that this is happening. Leaders and managers in organizations that regularly use evaluative thinking:
• Support and value evaluation and evaluative thinking, and use evaluation findings and the results of evaluative thinking assessments in decision making. (For example: when a program evaluation shows the work has not hit desired targets, actions are taken to modify or discontinue the program; when development opportunities occur, the organization decides whether to respond to a request or RFP based on a review of mission compatibility, organizational capacity and interest in taking on the new program, and past competitive success with the RFP sponsor.)
• Educate staff about the value of evaluation and how to participate effectively in evaluation efforts (e.g., by identifying training needs among staff, developing or commissioning evaluation training for whole staff groups, and making evaluation-related tasks part of the job descriptions of key staff).
• Motivate staff to regularly use specific evaluation strategies (e.g., not only to use surveys but also to ...)


Thinking About Evaluation-Related Policies

Organizations use policies to clarify their guiding principles and to identify acceptable and unacceptable procedures. In addition to calling on leaders to help sustain evaluative thinking, organizations can also address it through policies and plans. The development of specific, evaluation-related policies about, for example, leadership succession or staff development can help to extend or sustain evaluative capacity. The following questions can inform evaluation-related policy development.

Succession (i.e., the transfer of responsibility or power)
• What must the organization do to ensure that a person filling an executive director or management position will be knowledgeable about and support evaluation and evaluative thinking in the organization?

Staff Development
• What evaluation training is needed and provided?
• How often is evaluation training provided, and by whom?
• How does the organization ascertain that trainees have effectively learned about evaluation?

Responsibility for Evaluation
• Who conducts which evaluation tasks (i.e., what roles are fulfilled by the executive director, managers, staff, and others)? Are specific work descriptions provided and time set aside to address evaluation tasks?
• Is there an "Officer in Charge" to ensure quality for, and oversight of, all evaluation work?
• Are consultants used, and who oversees their work?

Compensation
• How are staff who contribute to evaluation compensated for their time?
• Are their levels of compensation based on the quality of evaluation work?

General
• Approach: Is evaluation that includes multiple stakeholders always strived for? Are there types of evaluation that will not be undertaken (e.g., randomized controlled experiments)?
• Subjects: Are there some areas of organizational work that are off-limits?
• Schedules: How much evaluation can or should be done in a selected time frame (e.g., one year)?

Organizational Evaluation Plan Development

Developing an evaluation plan is an important first step toward conducting effective evaluation. Organization-wide evaluation plans can further clarify how, when, why, and which evaluations will be conducted. The following are suggested steps to inform organizational evaluation planning.

1. Inventory current evaluation efforts and/or conduct annual reviews of all evaluation work. Be sure to clarify the status of all evaluations currently underway, including what questions are being addressed, how data are being collected and analyzed, when reporting is expected, and to whom reports are being presented.
2. Identify current evaluation needs. Determine whether there are programs that need evaluations initiated, revised, or discontinued.
3. Specify the organizational evaluation plan, accounting for all programs.
   • Identify which programs are to be the subjects of new comprehensive evaluation. Have staff, along with stakeholders and/or consultants, develop full evaluation plans for those programs. Plans should include: purpose; evaluation questions; identification of outcomes, indicators, and targets; selection of data collection strategies; specification of analysis plans; proposed work schedule; budget (if applicable); and deliverables, reports, and dissemination plans.
   • Identify which programs are the subjects of ongoing evaluation. Develop full evaluation plans for those programs.
   • Identify any other documentation or reporting required for the term of the organizational evaluation plan.

Getting Started With an Evaluation Inventory

An evaluation inventory is a way to document evaluation activity at an organization. Most organizations have evaluation work underway at all times because they are required to and/or they are committed to using evaluation to inform decision making.

To conduct an inventory, start by answering the following questions:
• What evaluations are currently underway, and who, if anyone, is helping to conduct them?
• How are data being collected, and on what schedule?
• How are evaluation findings being used?
• Who receives any evaluation reports, and in what form?

The following is an example of a chart that could be used to record the results of an evaluation inventory. For each program being evaluated, the chart records:
• Program Being Evaluated
• Key Evaluation Questions*
• Expected Uses
• Evaluated By (internal or external; specify who)
• Data Collection Strategies (surveys, interviews, observation, record review)
• Overall Dates* (from month/year to month/year)
• Products/Dates
• Reporting To

*Obtain the full evaluation design for each selected program, including work plans and descriptions of outcomes, indicators, and targets.
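For organizations that keep this inventory electronically, the same columns can live in a spreadsheet or a small script. The sketch below is one minimal way to represent an inventory row in Python; the field names and the sample entry are illustrative assumptions, not part of the guidebook's template.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class InventoryEntry:
    """One row of an evaluation inventory (hypothetical field names)."""
    program: str
    key_questions: List[str]
    expected_uses: str
    evaluated_by: str                 # "internal" or "external", plus who
    data_collection: List[str]        # surveys, interviews, observation, record review
    start: str                        # month/year
    end: str                          # month/year
    products_and_dates: str
    reporting_to: List[str] = field(default_factory=list)

# Sample entry for illustration only; not taken from the guidebook.
inventory = [
    InventoryEntry(
        program="After-school tutoring",
        key_questions=["Are participants' reading scores improving?"],
        expected_uses="Program improvement; funder report",
        evaluated_by="internal (program manager)",
        data_collection=["surveys", "record review"],
        start="09/2012",
        end="06/2013",
        products_and_dates="Interim memo (01/2013); final report (07/2013)",
        reporting_to=["program staff", "board", "funder"],
    )
]

# A quick view of data collection activity and timing across the inventory.
for entry in inventory:
    print(f"{entry.program}: {entry.start} to {entry.end} | {', '.join(entry.data_collection)}")
```

Filtering or sorting such records by date range is one quick way to spot the peak data collection periods mentioned below.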


Uses for Evaluation Inventories

The results of an evaluation inventory can be used to accomplish the following:
• Inform stakeholders (including board members, funders, and staff) about evaluation work that is already being done
• Identify areas where there is insufficient attention to evaluation
• Help reduce duplication of effort so that the same data are not collected more than once or managed in multiple databases
• Inform decisions about using evaluation resources
• Assist with projections of needed evaluation resources
• Clarify schedules and highlight peak data collection and analysis times

Next Steps: Increasing and Integrating Evaluative Capacity

Organizations that want to enhance their evaluative capacity should consider the following:
1. Conduct an evaluative thinking assessment to identify where there is substantial or less use of evaluative thinking. (The Evaluative Thinking Assessment tool can be found in the Effectiveness Initiatives section of the Bruner Foundation website, under Featured Resources on the Evaluative Thinking page.)
2. Use the results of the assessment to develop an action plan.
3. Form a task force or working group to champion development of an evaluative thinking action plan. Develop a schedule and create opportunities for the team to interact regularly with other leadership groups (including representatives from the board).
4. Identify specific planned actions for each capacity area where more evaluative thinking is needed, and prioritize the list. Be sure questions like the following can be answered.
   • Do the organization's scores match perceptions of evaluative thinking in the organization?
   • In which areas are there strong indications of evaluative thinking? What can be learned from these areas of strength?
   • In which areas are there limitations that could be addressed? What are the priorities?
   • Is it better to tackle the organizational areas with the least evidence of evaluative thinking, or to address the areas where it would be easier to make progress?
   • What changes would produce the most benefit to the organization?
5. As evaluative thinking is assessed and plans are developed to enhance its use, organizations must also stay vigilant regarding evaluation capacity, making sure that leaders promote the use of evaluation, that staff are charged with conducting evaluation, and that sufficient training exists.

Taken together, this should lead to the integration of evaluation skills and evaluative thinking into multiple organizational practices, ensuring both stronger programs and stronger organizations.

Please visit the Effectiveness Initiatives pages of the Bruner Foundation website, http://www.evaluativethinking.org/, for tools that can help you conduct an evaluative thinking assessment of your organization and move from assessment to action plans.


Appendix 1: Useful Evaluation Terms

Assessment: Synonym for evaluation, but often used to refer to a technique (e.g., practical assessment) or a mini-study.

Benchmarks: Performance data used for comparison purposes. They can be identified from your program's own prior data or relative to performance in the field.

Compliance/Monitoring: A type of evaluation in which the evaluation questions focus on adherence to pre-specified procedures.

Comparison Groups: Nonparticipants who are identified as a reference for comparison (e.g., individuals at different sites).

Control Groups: Nonparticipants who are usually identified in the use of an experimental design, ideally for an over-subscribed program (i.e., where there are more eligible participants than slots). The treatment or experimental group actually participates in the program; the control group, although eligible and similar, does not receive or participate in the program. Outcomes of the treatment and control groups are then compared to determine the program's contribution to outcomes.

*** WARNING *** Comparisons must be conducted very carefully.

Extrapolation: Modest speculation on the likely applicability of findings to other situations under similar, but not identical, conditions. Extrapolations are logical, thoughtful, and problem-oriented rather than purely empirical, statistical, and probabilistic.

Formative Evaluations: Focus on ways of improving and enhancing programs; they are conducted in the early or ongoing stages of a program.

Generalize: To assign qualities based upon group membership, or to make inferences about groups or programs based on the outcomes of a sample or subset of members.

Goals: Conditions (usually broad) that programs are working toward (e.g., to promote well-being).

Indicators: Observable, measurable characteristics of changes that represent elements of an outcome (e.g., normal birth weight is an indicator of a healthy-baby outcome).

Needs Assessments: Determine whether existing services are meeting needs, where there are gaps in services, and where there are available resources. These are often conducted prior to initiation of an evaluation or in response to evaluation findings.

Objectives: Something that is worked toward or strived for and that can be observed or measured.

Outcomes: Results for participants, during and/or after participation in a program.

Outputs: Products of a program's activity (e.g., number of sessions held, number of participants served).

Qualitative Data: Consist of detailed descriptions of situations, events, people, interactions, and observed behaviors; direct quotations from people about their experiences, attitudes, beliefs, and thoughts; and excerpts or entire passages from documents, correspondence, records, and case histories. Qualitative data collection methods permit the evaluator to study selected issues in depth and detail, and they typically produce a wealth of detailed data about a much smaller number of people and cases.

Quantitative Data: Come from questionnaires, tests, standardized observation instruments, and program records. Quantitative data collection methods permit the complexities of the world to be broken into parts and assigned numerical values. To obtain quantitative data it is necessary to be able to categorize the object of interest in ways that permit counting.

Random Assignment: A technique that allows program providers to randomly divide participants into a treatment group (those who get services) and a control group (those who don't).

Reliable Measures: Those that can be repeated under similar conditions.

Research: In social science, research is also a systematic collection of information, but it is undertaken to discover new knowledge, test theories, establish universal truths, and generalize across time and space.

Summative Evaluations: Are aimed at determining the essential effectiveness of a program. They are especially important in making decisions about terminating, maintaining, or extending a program.

Triangulation: Multiple streams of information obtained by collecting different kinds of data about the same subject, using different workers to complete the same tasks, using multiple methods to obtain data, or using multiple perspectives to analyze information.

Valid Measures: Those that accurately measure what they are intended to measure. (Warning: this is difficult to test. For most social and behavioral variables, no agreed-upon testing standards exist.)


Appendix 2: Evaluative Thinking Assessment

ORGANIZATION MISSION
a. The mission statement is specific enough to provide a basis for developing goals and objectives. Assessment: 1
b. The mission is reviewed and revised on a scheduled basis (e.g., annually), with input from key stakeholders as appropriate. Assessment: 1
c. The organization regularly assesses compatibility between programs and mission. Assessment: 1
d. The organization acts on the findings of compatibility assessments (in other words, if a program is not compatible with the mission, it is changed or discontinued). Assessment: 1
Comments:
Score: 100
Please proceed to the next worksheet.

STRATEGIC PLANNING
a. There is a formal process for strategic planning. Assessment: 1
b. Using evaluative strategies such as interviews and surveys, input is obtained from key stakeholders (staff, board, community, and clients) about strategic direction where appropriate. Assessment: 0. Priority: This is a lower priority this year.
c. Activities related to the strategic process are assessed at least annually. Assessment: 0. Priority: This is a high priority this year.
d. Activities related to the strategic process involve key stakeholders (staff, board, community, and clients) where appropriate. Assessment: 0. Priority: This is a lower priority this year.
e. Strategic plans inform decision making. Assessment: 1
Comments:
Score: 40
Please proceed to the next worksheet.
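In these completed worksheets, the number shown after "Comments" reads as the area's summary score, and it tracks the share of indicators rated 1 (4 of 4 for Organization Mission gives 100; 2 of 5 for Strategic Planning gives 40). A minimal sketch of that calculation, under that assumption:

```python
def area_score(ratings):
    """Summary score for one assessment area.

    `ratings` is the list of 0/1 assessments, one per indicator.
    Assumes the score is the share of indicators rated 1 (present),
    expressed as a whole-number percentage.
    """
    if not ratings:
        return 0
    return round(100 * sum(ratings) / len(ratings))

# Ratings transcribed from the two worksheets above.
organization_mission = [1, 1, 1, 1]
strategic_planning = [1, 0, 0, 0, 1]

print(area_score(organization_mission))  # 100
print(area_score(strategic_planning))    # 40
```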


GOVERNANCE
a. The board uses evaluation data in defining goals/work plan/structure to develop plans summarizing strategic direction. Assessment: 0. Priority: This is a high priority this year.
b. The board regularly evaluates progress relative to its own goals/work plan/structure. Assessment: 1
c. There is a systematic process for identifying, recruiting, and electing new board members. Assessment: 1
d. Specific expertise needs are identified and used to guide board member recruitment. Assessment: 0. Priority: This is not a priority at all.
e. The board regularly (e.g., annually) evaluates the executive director's performance based on established goals/work plan. Assessment: 1
f. The relationship between the organization's mission and plans for strategic direction is assessed. Assessment: 0. Priority: This is a high priority this year.
g. The board assesses the organization's progress relative to long-term financial plans. Assessment: 1
h. The board assesses the organization's progress relative to program evaluation results. Assessment: 0. Priority: This is a high priority this year.
Comments:
Score: 50
Please proceed to the next worksheet.


FINANCE
a. The organization has systems in place to provide appropriate financial information to staff and board members. Assessment: 1
b. The organization monitors its financial information systems to ensure they inform sound financial decisions. Assessment: 1
c. The organization annually develops a comprehensive operating budget that includes costs for all programs, management, and fundraising, and identifies the sources of all funding. Assessment: 1
d. The organization annually reviews the comprehensive budget that includes costs for all programs, management, and fundraising, and identifies the sources of all funding. Assessment: 1
e. The organization monitors unit costs of programs and services through the documentation of staff time and direct expenses. Assessment: 1
f. The financial status of the organization is assessed regularly (at least quarterly) by the board and executive leaders. Assessment: 1
g. The organization prepares financial statements on a budget-versus-actual and/or comparative basis to achieve a better understanding of finances. Assessment: 0. Priority: This is a high priority this year.
h. The organization periodically forecasts year-end revenues and expenses to inform sound management decisions. Assessment: 1
i. The organization has a review process to monitor whether it is receiving appropriate and accurate financial information, whether from a contracted service or internal processing. Assessment: 1
j. Capital needs are reviewed at least annually. Assessment: 1
k. The organization has established a plan identifying actions to take in the event of a reduction or loss in funding. Assessment: 1
Comments:
Score: 91
Please proceed to the next worksheet.

Copyright © 2003 ActKnowledge and the Aspen Institute Roundtable on Community Change. All rights reserved.


LEADERSHIP
a. Executive leaders support and value program evaluation and evaluative thinking. Assessment: 1
b. Executive leaders use evaluation findings in decision making. Assessment: 0. Priority: This is a high priority this year.
c. Plans for executive leadership succession include attention to evaluation: new executive leaders are expected to value and be knowledgeable about evaluation. Assessment: 0. Priority: This is a high priority this year.
d. Executive leaders educate staff about the value of evaluation and how to participate effectively in evaluation efforts. Assessment: 0. Priority: This is a high priority this year.
e. Executive leaders motivate staff to regularly use specific evaluation strategies. Assessment: 0. Priority: This is a high priority this year.
f. Executive leaders modify the organizational structure as needed to embrace change in response to evaluation findings. Assessment: 0. Priority: This is a high priority this year.
g. Executive leaders foster the use of technology to support evaluation and evaluative thinking. Assessment: 1
h. Management uses data to set staff goals and evaluate staff performance. Assessment: 1
i. Plans for management succession include attention to evaluation: new managers are expected to value evaluation and, where possible, are knowledgeable about evaluation. Assessment: 1
j. Staffing decisions (e.g., deciding which staff work on which projects, which staff are eligible for promotions or advancement, or which need more assistance) are based on data. Assessment: 0. Priority: This is a high priority this year.
Comments:
Score: 40
Please proceed to the next worksheet.

FUND RAISING/FUND DEVELOPMENT
a. The organization conducts research on potential fund development opportunities (grants and contracts) and assesses which to pursue. Assessment: 0. Priority: This is a high priority this year.
b. The organization develops a written fund development plan that clarifies which grants and contracts will be pursued. Assessment: 0. Priority: This is a high priority this year.
c. The organization assesses the written fund development plan to be sure it is being followed and to determine why changes and exceptions are made.
d. The organization revises the plan based on assessments.
e. Staff (as appropriate) are involved in writing grant proposals (particularly sections on program design and outcomes). Assessment: 1
f. The costs and benefits of fundraising events and activities are assessed. Assessment: 0. Priority: This is a lower priority.
Comments:
Score: 25
Please proceed to the next worksheet.


EVALUATION
a. The organization involves program staff, organizational leaders, and clients (as appropriate) in developing/revising program evaluation plans. Assessment: 0. Priority: This is a high priority this year.
b. The organization involves program staff, organizational leaders, and clients (as appropriate) in collecting program evaluation data. Assessment: 1
c. The organization involves program staff, organizational leaders, and clients (as appropriate) in analyzing program evaluation data. Assessment: 0. Priority: This is a high priority this year.
d. The organization ensures that there are key staff with evaluation expertise to address the organization's evaluation needs. Assessment: 0. Priority: This is a lower priority.
e. The organization ensures that there are staff members whose jobs, or components of their jobs, are dedicated to evaluation. Assessment: 0. Priority: This is a lower priority.
f. The organization provides (or obtains) training in evaluation for program staff members and makes sure that the training is current, well delivered, and provided for enough staff members to ensure that evaluation use is a standard practice. Assessment: 0. Priority: This is a high priority this year.
g. The organization hires evaluation consultants when needed. Assessment: 0. Priority: This is a high priority this year.
h. Evaluations that include attention to characteristics, activities, and program and client outcomes are regularly conducted for organization programs. Assessment: 1
i. Results of program evaluations, including findings about client outcomes, are shared as appropriate with leaders, staff, clients, board members, and funders. Assessment: 1
j. Results of program evaluation drive continuous improvement of programs. Assessment: 0. Priority: This is a high priority this year.
Comments:
Score: 30
Please proceed to the next worksheet.

CLIENT RELATIONSHIPS
a. Client needs assessments are conducted regularly. Assessment: 1
b. Client services are designed in response to determined needs. Assessment: 1
c. Client satisfaction is regularly assessed. Assessment: 1
d. Client outcomes are regularly assessed. Assessment: 0. Priority: This is a high priority this year.
e. Results of client satisfaction assessments are used in developing new programs. Assessment: 1
f. Results of client outcome assessments are used in developing new programs. Assessment: 0. Priority: This is a high priority this year.
Comments:
Score: 67
Please proceed to the next worksheet.


PROGRAM DEVELOPMENT
a. The organization identifies gaps in community services before planning new programs. Assessment: 1
b. Findings from program evaluation are incorporated into the program planning process. Assessment: 1
c. Multiple stakeholders are involved in developing/revising program plans. Assessment: 1
d. Program plans are followed where possible, and there are strategies in place to modify program plans if needed. Assessment: 1
e. Each program has a written program plan, including a logical formulation. Assessment: 1
Comments:
Score: 100
Please proceed to the next worksheet.

COMMUNICATIONS AND MARKETING
a. The organization has a marketing and communications plan that is linked to the organization's strategic plan, which is used to help the organization achieve its mission. Assessment: 1
b. Multiple stakeholders, including staff and board members and technical assistance providers as needed, are involved in developing the marketing and communications plan. Assessment: 1
c. The organization assesses the effectiveness of its marketing and communications planning (i.e., determines whether an accurate message is getting out and whether delivery of the message is furthering the mission of the organization). Assessment: 1
d. Multiple stakeholders, including staff and board members and technical assistance providers as needed, are involved in assessing the marketing and communications plan. Assessment: 1
Comments:
Score: 100
Please proceed to the next worksheet.


TECHNOLOGY ACQUISITION PLANNING AND TRAINING
a. An assessment process is in place to make decisions about technology maintenance, upgrades, and acquisition. Assessment: 1
b. Technology systems include software that can be used to manage and analyze evaluation data (e.g., Excel, SPSS). Assessment: 0. Priority: This is a lower priority.
c. Technology systems provide data to evaluate client outcomes. Assessment: 1
d. Technology systems provide data to evaluate organizational management. Assessment: 1
e. Technology systems are regularly assessed to see if they support evaluation. Assessment: 0. Priority: This is a lower priority.
f. Staff technology needs are regularly assessed. Assessment: 1
Comments:
Score: 67
Please proceed to the next worksheet.

STAFF DEVELOPMENT
a. A formal staff development needs assessment is done annually. Assessment: 1
b. There is a plan for staff development, based on needs assessment data. Assessment: 1
c. The staff development plan is evaluated. Assessment: 1
d. There are opportunities for staff to assess staff development training sessions. Assessment: 1
e. Results of staff training assessments influence future staff development. Assessment: 1
Comments:
Score: 100
Please proceed to the next worksheet.


HUMAN RESOURCES
a. The organization has an established personnel performance review process. Assessment: 1
b. Performance reviews are used (at least annually) to provide feedback relative to performance expectations. Assessment: 1
c. The organization collects and updates information on the credentials, training, and cultural competencies of staff. Assessment: 0. Priority: This is a lower priority.
d. The organization uses the results of data collected regarding staff credentials, training, and cultural competencies to recruit culturally competent staff. Assessment: 0. Priority: This is a lower priority.
e. The organization uses the results of data collected regarding staff credentials, training, and cultural competencies to train culturally competent staff. Assessment: 1
f. The organization conducts regular (e.g., annual, biannual) staff satisfaction surveys. Assessment: 1
g. The organization uses the results of staff satisfaction surveys to inform modification of policies and procedures. Assessment: 0. Priority: This is a higher priority this year.
Comments:
Score: 57
Please proceed to the next worksheet.

BUSINESS VENTURE DEVELOPMENT
a. The organization systematically identifies gaps in community services. Assessment: 1
b. The organization assesses whether it has the capacity to bring in new types of business. Assessment: 0. Priority: This is a lower priority.
c. The organization researches new business venture opportunities. Assessment: 1
d. Organization strategies regarding new business ventures are based on capacity findings, results of gap studies, and business venture development research. Assessment: 0. Priority: This is a lower priority.
Comments:
Score: 50
Please proceed to the next worksheet.


ALLIANCES AND COLLABORATION
a. Existing partnerships/alliances/collaborations are assessed regularly to determine if they fit the organization's mission and strategic direction. Assessment: 0. Priority: This is not a priority for this year.
b. Planning is conducted to identify additional needed partnerships/alliances/collaborations. Assessment: 1
c. Existing partnerships/alliances/collaborations are assessed regularly to determine if they are functioning effectively. Assessment: 0. Priority: This is not a priority for this year.
Comments:
Please proceed to the Summary Table to review assessment scores.
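The worksheets close by pointing to a Summary Table of assessment scores. One simple way to assemble it, and to surface candidates for the action planning described in Section 7, is to list areas from lowest to highest score. The sketch below uses the scores transcribed from the completed worksheets above (Alliances and Collaboration shows no summary score in this excerpt, so it is omitted); the sorting rule and the 60-point cutoff are illustrative assumptions, not the guidebook's method.

```python
# Area scores transcribed from the completed worksheets above.
area_scores = {
    "Organization Mission": 100,
    "Strategic Planning": 40,
    "Governance": 50,
    "Finance": 91,
    "Leadership": 40,
    "Fund Raising/Fund Development": 25,
    "Evaluation": 30,
    "Client Relationships": 67,
    "Program Development": 100,
    "Communications and Marketing": 100,
    "Technology Acquisition Planning and Training": 67,
    "Staff Development": 100,
    "Human Resources": 57,
    "Business Venture Development": 50,
}

# List the lowest-scoring areas first and flag likely action-plan candidates.
# The 60-point cutoff is an arbitrary illustration, not a guidebook rule.
for area, score in sorted(area_scores.items(), key=lambda item: item[1]):
    note = "candidate for the action plan" if score < 60 else "relative strength"
    print(f"{score:>3}  {area}  ({note})")
```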


Appendix 3: History of Evaluation

Late 1950s to early 1960s: Evaluation mainly focused on educational assessment, conducted by social science researchers in a small number of universities and organizations.

Mid-1960s, the Johnson era: The War on Poverty and the Great Society programs of the 1960s spurred a large investment of resources in social and educational programs. Senator Robert Kennedy, concerned that federal money would be misspent and not be used to help disadvantaged children, delayed passage of the Elementary and Secondary Education Act (ESEA) until an evaluation clause was included. The resulting bill required submission of an evaluation plan by local education agencies and summary reports by state agencies. As a result, evaluation requirements became part of every federal grant. (Early expectations were that evaluation would illuminate the causes of social problems and the clear and specific means with which to fix such problems.)

Mid-1970s: Two US-based professional evaluation associations emerged in 1976: the Evaluation Network (mostly university professors and school-based evaluators) and the Evaluation Research Society (primarily government-based and university evaluators). In 1985, these two organizations merged to form the American Evaluation Association (AEA, www.eval.org), which now has more than 3700 members worldwide.

Throughout the 1970s and 1980s:
• Growing concerns were voiced about the utility of evaluation findings and the use of experimental and quasi-experimental designs.
• Huge cuts in social programs during the Reagan presidency resulted in less government involvement and diminished or removed evaluation requirements from federal grants.
• Many school districts, universities, private companies, state education departments, the FBI, the FDA, and the General Accounting Office (GAO) developed internal evaluation units.

1990s: Increased emphasis on government program accountability and a movement for organizations to be lean, efficient, global, and more competitive. Evaluation became commonly used not only as part of government mandates but also to improve program effectiveness, enhance organizational learning, and inform allocation decisions in a wide variety of both public and private organizations. A number of foundations created internal evaluation units, provided support for evaluation activities, or both.

2000 to present: Increasing and sustained interest in participatory, collaborative, and learning-oriented evaluations. National evaluation associations are being established throughout the world.

Preskill and Russ-Eft 2005


Appendix 4: Different Evaluation Purposes Require Different Evaluation Questions

Rendering Judgments (some need met, some goal attained, some standard achieved; criteria for judgment must be specified in advance):
• To what extent did the program work? To what extent did it attain its goals?
• Should the program be continued or ended?
• Was implementation in compliance?
• Were funds used appropriately, for intended purposes?
• To what extent were desired client outcomes achieved?

Facilitating Improvements (using information to monitor program efforts and outcomes regularly over time to provide feedback to improve implementation, to fine-tune strategies, and to make sure that participants are progressing toward desired outcomes):
• What are the program's strengths and weaknesses?
• How and to what extent are participants progressing toward desired outcomes?
• Which types of participants are making good progress, and which aren't?
• What kinds of implementation problems have emerged, and how are they addressed?
• What's happening that wasn't expected?
• What are staff and participant perceptions of the program?
• Where can efficiencies be realized?
• What new ideas are emerging that can be tested?

Generating Knowledge (conceptual rather than instrumental use of findings):
• How is the program model actually working?
• What types of interventions are being used?
• What types of outcomes can be expected? How do you measure them?
• What are the lessons learned?
• What policy options are suggested by the findings?


Appendix 5: Distinguishing Between Evaluation and Research

Program evaluation and research are similar, but they have different objectives and data standards. They also differ in size and scope. The following comparisons show important differences between them.

• Objectives. Program evaluation: change- and action-oriented, aimed at determining impact. Research: aimed at causality and testing theory.
• Data. Program evaluation: evidentiary data. Research: very precise measurements.
• Numbers of subjects. Program evaluation: program target groups or samples of target groups (sometimes very small). Research: usually studies of samples from fairly large populations.
• Standards. Program evaluation: usefulness, practicality, accuracy, ethicalness. Research: truth, causality, generalizability, theory.
• Costs. Program evaluation: range from minimal to fairly substantial. Research: usually high costs.
• Stakes/scope. Program evaluation: fairly low stakes, fairly narrow scope (the program). Research: very high stakes (e.g., human life or health).
• Focus. Program evaluation: whether something is being done well, not necessarily better; should focus on context, activities, and outcomes of participants. Research: determining the best treatments, solutions, etc.; can include community indicators where appropriate.
• Use. Program evaluation: should not be conducted unless there is a real opportunity to use the results. Research: sometimes conducted when the use is uncertain.


Appendix 6: Making Data Collection Decisions

SURVEYS (several are commercially available, or unique instruments can be developed)
• Advantages: Easy to quantify and summarize results; the quickest and cheapest way to gather new data rigorously; useful for large samples, repeated measures, and comparisons between units and norms/targets. Good for studying attitudes and perceptions; can also collect some behavioral reports.
• Disadvantages: Hard to obtain data on behavior and the context shaping behavior (attribution). Not suited for subtle, sensitive issues. Surveys are impersonal and difficult to construct. Language and administration challenges must be addressed; nonresponse, biased or invalid answers, and overinterpretation with small samples must be avoided.
• Decisions: Who gets surveyed (sampling)? How will confidentiality be maintained? How valid is self-assessment? What are the standards of desirability? Is there a need for repeated measures, and at what intervals?

INTERVIEWS (structured, semi-structured, intercept)
• Advantages: Readily cover many topics and features; can be modified before or during the interview; can convey empathy and build trust; provide rich data and an understanding of respondents' viewpoints and interpretations. Good for studying attitudes and perceptions; can also collect some behavioral reports.
• Disadvantages: Expensive; sampling problems in large programs; respondent and interviewer bias; noncomparable responses; time-consuming to analyze and interpret responses to open-ended questions. Training and protocols are required to conduct.
• Decisions: Who gets interviewed (sampling)? How will confidentiality be maintained? How valid is self-assessment? What are the standards of desirability? Is there a need for repeated measures, and at what intervals?

OBSERVATIONS (participants during program sessions, participants in other settings)
• Advantages: Rich data on hard-to-measure topics (e.g., actual practices, behaviors). Behavioral data independent of self-descriptions, feelings, and opinions; data on situational and contextual effects. Good for studying program implementation and some behavioral changes.
• Disadvantages: Constraints on access (timing, distance, objections to intrusion, confidentiality, safety); costly and time-consuming; observer bias and low inter-observer reliability; may affect the behavior of people observed; data can be hard to analyze, interpret, and report; may seem unscientific. Training and protocols are required to conduct.
• Decisions: What subjects will be observed? How many, and at which levels? Is there a need for repeated measures, and at what intervals?

RECORD REVIEW (e.g., program records, school records, case management records)
• Advantages: Nonreactive; often quantifiable; repeated measures show change; credibility of familiar or standardized measures (e.g., birth weight, arrest incidents, drug test results, staff or parent assessment results); often cheaper and faster than gathering new data; can include data from other independent sources. Good for determining (behavioral) status.
• Disadvantages: Access, retrieval, and analysis problems can raise costs and time requirements; the validity and credibility of sources and measures can be low. Definitions must be determined prior to use, are often externally determined, and cannot be customized; data must be analyzed in context; limited data on many topics.
• Decisions: Which documents? How can access be obtained? Is there a need for repeated measures, and at what intervals?


Appendix 6: Making Data Collection Decisions (continued)

SURVEYS
• Validity: Low. There is no opportunity for clarification; participants often choose responses other than those provided; participants may not want to report private behavior; participants may not be aware of their own actions, behaviors, or attitudes.
• Reliability: High. Administration is consistent from one individual to the next; standard response choices provide a consistent range of responses; there is little opportunity for the data collector to influence results.
• Available resources: Economical. Mass distributed; costs are based on the number of mailings, use of phone or mail, and incentives.
• Cultural appropriateness: Varied. Best for literate, middle-class, American-born populations; particularly bad for immigrants and refugees.

INTERVIEWS
• Validity: High. The interviewer can clarify questions and probe for more in-depth responses; personal interaction can establish rapport for open discussion; focus groups can foster discussion and sharing and can clarify individual viewpoints through dialogue with others.
• Reliability: Low. Interviews are unique, based on the comments of respondents; different questions and probes are likely to be used.
• Available resources: Moderate. Individual interviews involve moderate expense; focus groups involve low to moderate expense.
• Cultural appropriateness: Strong. Individualized interviews work well when paper formats are threatening or invasive and when behavior or attitudes pose a problem; focus groups work well when the group opinion is the cultural norm.

OBSERVATIONS
• Validity: High. Observers can directly observe behavior that may not be accurately reported otherwise, as well as behaviors that have standards developed by professionals or institutions.
• Reliability: Moderate. Observers need structured protocols for coding their observations; less structured observer formats reduce reliability because different observers may reach different conclusions.
• Available resources: Moderate to expensive. Time is required to observe behaviors; this can be mitigated by using "natural observers."
• Cultural appropriateness: Moderate. Cultural differences in behavior may be misinterpreted.

RECORD REVIEW
• Validity: Low to moderate. Records are not really designed to measure; rather, they document and record.
• Reliability: Low to high. Depends on whether there are standards for record keeping.
• Available resources: Economical. Data are part of the service delivery process and usually already exist (use of case records for evaluation requires up-front planning); there are some issues of access and confidentiality.
• Cultural appropriateness: Varied. Depends on service delivery and the appropriateness of the program; records may over- or under-represent certain groups due to bias.


Appendix 7: Evaluation Report Outline

I. Introduction
   A. Introduction to agency and/or program (mission, goals, and main activities)
   B. Purpose of evaluation
   C. Evaluation Questions
      1. Clear and concise questions that are in alignment with program mission and goals.
      2. If there are several evaluation questions, group them into categories (e.g., service delivery, staff outcomes, participant outcomes, programmatic impacts).
   D. Report Organizer (optional and unnecessary if the report is short)

II. Methods/Data Collection Strategies
   A. Description of selected methods (in narrative or table form); this describes what actually happened during data collection, not what the evaluator set out to or attempted to do.*
   B. Relationship between questions and data collection strategies (usually done as a table)*
   C. Data collection rationale: explanation of why methods were chosen, including clarification regarding use of participatory data collection.*
   D. Data collection respondents and missing data: description of the target populations for each data collection activity, including why they were selected and whether there are any missing data.*
   E. Data collection challenges*
   F. Description of targets for analysis (this can also be addressed in the findings section).
   * Note: Sections B–F can all be addressed as part of the description of selected methods.

III. Evaluation Findings
   A. Summaries of the results of data collection and analysis
   B. Response to evaluation questions (where feasible)
   C. Comparison of findings to targets (where feasible and appropriate)

IV. Conclusions
   A. Summary of Key Findings
   B. Final Analysis
      • How does the author understand the data?
      • How does the author believe the data will impact the program?
      • Strengths and weaknesses of the program as revealed by evaluation findings
   C. Action Steps/Recommendations: what will be done with the report, what could or should be done with the program
   D. Issues for Further Consideration (any outstanding issues raised by the evaluation)


Appendix 8: Commissioning Evaluation

What to Look for in an Evaluator
• Basic knowledge of the substantive area being evaluated
• Experience with program evaluation (especially with nonprofit organizations)
• Good references (from sources you trust)
• Personal style and approach fit (probably most important)

Before You Commission an Evaluation
• Talk to a few trusted colleagues about experiences they have had. If possible, also talk to some grantmakers and a professional evaluator. Gather some basic advice and determine if it is relevant for your project.
• Think about how you will identify evaluators. Is sole sourcing an option for your project, or will a competitive process be more appropriate or advantageous? There are merits to either approach. If you want to or are required to use a competitive approach, determine how broad the competition should or must be. (Determine answers to these questions.)
   – Will an invitational approach work, or do you want unrestricted competition?
   – Are there any geographic limitations or advantages?
   – Are there tax or business requirements (for-profit/nonprofit, private firm, individual, university institute or department, etc.)?
   – What sources will you use to inform evaluators about your project and attract bidders (e.g., RFPs posted on your website, in publications, through associations)?
• Determine the best strategy and requirements for proposals. Can you go ahead with a letter proposal, a letter of interest (LOI), or an interview-only process, or is a Request for Quotes/Qualifications (RFQ) or a full Request for Proposal (RFP) best? Whatever you decide, be sure to make the request specific enough that you make your needs known, but not so detailed that the evaluator or other stakeholders have no real input into how to proceed.
• Determine the timeline for finding evaluators. If you do a full, competitive proposal process, you will need to determine your sequence for announcing your competition, releasing the actual request (RFP or RFQ), conducting a bidders conference or responding to clarification requests, collecting responses, making selections (including possible "best and final" competitions or invited interviews/presentations), and notification. Remember that good responses, especially those that are written or presented through a meeting, take some time to develop; be sure to give potential contractors adequate time.
• Determine the format for response. Do you need a written or oral response, or a combination? What categories of information are required and what additional materials will help? (It is always helpful to provide specific questions of interest, such as those that follow, and parameters for response.)
• Determine who will be involved in making the selection and how. Do you need external reviewers? How will you manage multiple (conflicting?) reviews? If you host an interview, who needs to participate? What role can/should program staff and leaders play?

Questions to Ask Evaluators (in RFPs, RFQs, or interviews)
• What do you need to know to properly design an evaluation for this program/initiative?
(The evaluator should minimally need to know about the purpose for commissioning the work, as well as details about service delivery or other organizational structure, and the scope of the project, including timeframe, ballpark program budget, and size of the target population. A smart evaluator will also request other background materials or perhaps even a preliminary visit.)
• What evaluation questions would guide your effort? (It may also be valuable to have a preliminary conversation about outcomes and indicators, or to specify a whole task where the evaluator works together with program staff to clarify expected outcomes, timeframes, indicators, and important assumptions and contextual issues related to service delivery.)
• What strategies would you use to address the evaluation questions? (Be specific about how you would collect and analyze data and involve agency staff, why this approach makes sense or is common, whether there are any standard instruments, and why they were chosen.)
• How will you handle common challenges? (For example, how will your evaluation design be affected by poor project implementation? Which outcome measures would be appropriate if the program is not well implemented? How will you communicate this to stakeholders?) What will you do to ensure necessary access to subjects and confidentiality of responses?
• What timeline will the evaluation project operate on? (Specify, in chart or calendar form, when key evaluation tasks will be completed.)
• Who will conduct the work and what other relevant experiences do they have? (Identify key staff and clarify their level of involvement; attach resumes and a capacity statement with descriptions of other similar projects. Be sure to get specific directions if website reviews are recommended. Ask about supervision if multiple evaluators are involved: who is ultimately responsible for collecting and analyzing data, verifying accuracy, and reporting results? For multi-site initiatives, will any local evaluators be involved? How will they and any other staff be trained to conduct specific evaluation activities?)
• How and when will the findings from the evaluation work be communicated? What products/deliverables will be developed? (Look for multiple products where appropriate.) Will the products of the evaluation have any greater usefulness? How are program managers expected to use the information obtained through the evaluation?
• How will evaluation resources be used to complete this work, including professional time, travel, other direct costs, and indirect costs? (Be sure to ask for a task-specific budget.)

Evaluation Resources
• Independent technical assistance or evaluation consultants
• Evaluation consulting firms
• Universities with graduate programs that include training or projects in program evaluation

Remember, evaluation should not be viewed as in competition with program resources. Evaluations can be funded as a component of a program (using program funds) or as a separate project (using earmarked or additional funds). A common rule of thumb is to set aside at least 10% of the cost of the program for evaluation.

What's the difference between an RFP and an RFQ? Which is best to use?
1) An RFP is a Request for Proposal. Responses to RFPs should include specific projections about how a contractor will undertake the work (i.e., data collection strategies, analysis plans, instrument development), what it will cost (including types of projected expenditures), when it will get done, who will work on it, and what products will be delivered.
Usually responses to RFPs also include sections where the potential contractor clarifies understanding of the project and the context surrounding the project, identifies key questions that will guide the evaluation work, clarifies who will be working on the project, what qualifies them, and how the project will be managed, and provides specific information about qualifications or previous relevant experiences. Other project-specific sections can also be included depending on needs (e.g., a section on what a contractor thinks are likely evaluation challenges and how those would be handled, a section addressing confidentiality and participatory approaches). Any forms or documentation required of contractors are typically appended.
RFPs are more formal and responses are typically lengthier. They are best used under the following conditions: when the grantor is not very familiar with the potential recipients of the request, or when there are very specific evaluation needs (especially those dictated by another external source or a project/program model). Grantors should be sure to provide ample time for a full response.
2) An RFQ can be either a Request for Qualifications or a Request for Quote, depending on the nature of the solicitation. Responses to RFQs can include many of the same elements as RFPs, but the information is less detailed, focused on approach and why a specific contractor should be selected to do the work. RFQs typically include sections/questions like the following:
   1. What would be your general approach to completing this project?
   2. Who from your staff would be involved in this project and why?
   3. What are the expected timelines for the work and the expected products?
   4. What are the projected costs?
   5. What prior experiences qualify you for this work?
Contractors are also typically asked to provide specific references, and some are invited to participate in follow-up interviews where they come to provide additional responses to specific questions and to clarify their workplans.
RFQs are somewhat less formal and best used when you have some familiarity with the contractors who might respond, when style fit is paramount, and when you have a briefer time span to make a selection. There is less reading and writing involved with RFQs, but more of an onus on both the grantor and the potential contractors to maximize the interview process to inform the selection decision.

Questions to Consider Before Identifying a Project for Evaluation
1) Contributions to Mission or Broader Field
What is the general purpose of the proposed project and how will it contribute to the grantmaker's mission? If it does not, are there other reasons why it should be supported? (This can also be discussed with the grantee as long as they are aware of the grantmaker's mission.)
How does this project contribute to the broader field? What are the likely lessons learned?
2) Implementation and Feasibility
Does the program target population know about and want to participate in this program? Have all necessary collaborative agreements been secured?
How will you guard against implementation impediments?
3) Project Design/Staging
Describe the key components of the project and how they are integrated into the overall project design. Has a reasonable logical formulation been developed for the program? How will the project be staged over time?


4) Finances (ask if necessary)
Please clarify the following details about the projected budget: __________________ _______________
5) Outcomes & Evaluation
How is the project expected to impact participants? How will you know when this has happened?

Things To Avoid When Commissioning Evaluation Projects
• Agreeing to fund a program evaluation design that you do not understand.
• Agreeing to fund a program evaluation where disbursement is not attached to deliverables.
• Commissioning a program evaluation on a timetable that is inappropriate for the program.
• Commissioning an overly complicated evaluation design, or one for which there was insufficient stakeholder involvement in the development.
• Assuming that you must ALWAYS measure outcomes.
• Forcing evaluation of outcomes that are inappropriate or beyond the scope of the program.

How Do You Stay Informed About Evaluation Status After You Agree To Support It?

Periodic or Mid-Point Reports
Request that the evaluator make status reports about:
• What evaluation activities have taken place;
• Any data collection or instrument development problems they have encountered;
• Any proposed changes to data collection timelines or strategies;
• Preliminary findings, when appropriate.
These status reports should be on a regular schedule that matches funder needs and the timeline of the evaluation. Compare each status report with proposed activities and the timeline. Be prepared to request clarification if there have been challenges (e.g., timeline, access).
Note: Do not require evaluator grantees to complete status reports unless there is a real commitment to review them. Be aware that producing them takes time and costs money.

Project Conclusion
Request a final status report or make it part of the final evaluation report. Compare what was actually done with what was proposed. Assume there will be some differences, as not all tasks work as projected.


Appendix 9: Getting Funding for Evaluation

Where does the money come from?
Evaluation, even self-assessment, requires financial resources. Usually the cost to do a good evaluation is equivalent to about 10–15 percent of the cost to operate the program effectively (i.e., a program with an operational budget of $265,000 per year would require about $25,000–$40,000 to evaluate for one year; see the budgeting sketch at the end of this appendix). Actual expenditures may be less if the evaluation is more participatory, but other organizational costs may be associated. For example, if program staff are involved in evaluation data collection, there are still resource needs to cover the costs associated with their program work. Most of the funds for evaluation pay for the professional time of those who develop evaluation designs and data collection tools, collect data, analyze data, and summarize and present evaluation findings. Other expenses include overhead and direct costs associated with the evaluation (e.g., supplies, computer maintenance, communication, software).

It is bad practice to assume there is a standard fixed evaluation cost (e.g., $10,000) regardless of program size or complexity, and it is dangerous to fund an evaluation project that does not clarify how evaluation funds will be used. The Participatory Evaluation Essentials Guide provides instructions for specifying an evaluation timeline, projecting task-specific level of effort (i.e., estimating how much time it will take each person to complete each evaluation-related task), and calculating associated costs. There is also a sample evaluation budget.

There are two ways to project evaluation costs:
1) Identify a reasonable total amount of funds dedicated for evaluation (see the cost percentages above) and then develop the best evaluation design given those resource requirements (i.e., work backwards from a fixed available amount for evaluation).
2) Develop the best evaluation design for the subject program and then estimate the costs associated with implementing the design. Negotiate design changes if costs exceed available funds.

To obtain funds to use for evaluation purposes:
• Write evaluation costs into project development budgets (i.e., add the projected cost of evaluation into the total cost of operating the project). Requests for Proposals (RFPs) often include a section on evaluation. Take it seriously. Rather than inserting a perfunctory fixed amount, try to estimate what it would really cost to do good evaluation* and include that figure as a line item on the project budget. Then use the money accordingly if the project is funded.
• Set aside funds for evaluation on a percentage basis in the organizational budget (e.g., each year set aside 5% of all organizational funds to use for evaluation purposes). Develop a plan to use these funds. For example, each year plan to conduct comprehensive evaluation of some programs (e.g., those that are newer or problematic) and then require more limited data collection/review for others.
• Obtain funds solely for the purposes of evaluation. Some grantmakers will provide dedicated funds for evaluation, or for capacity building, which includes evaluation capacity building or evaluative thinking enhancement. These are usually accessed through response to an RFP or other proposal development.*
• Consider sharing and/or pooling resources with other departments or organizations involved in evaluation.
If you need help to estimate evaluation project costs or to specify a good design, consider contacting an evaluation technical assistance provider. Some evaluation consultants will do this "on-spec" in return for the opportunity to conduct the evaluation if the project gets funded.
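The cost guidance above translates into a quick planning calculation. The following is a minimal sketch, assuming the 10–15 percent rule of thumb described in this appendix; the function names and the $32,000 design-cost figure are illustrative, not part of the Guidebook.

```python
# A minimal budgeting sketch, assuming the 10-15% rule of thumb described above.
# Function names and the example design cost are illustrative assumptions.

def evaluation_budget_range(program_budget, low=0.10, high=0.15):
    """Return a (low, high) evaluation budget range as a share of program costs."""
    return program_budget * low, program_budget * high

def fits_available_funds(design_cost, available):
    """Check whether a drafted evaluation design fits the funds set aside."""
    return design_cost <= available

low, high = evaluation_budget_range(265_000)                      # the program budget used as an example above
print(f"Suggested evaluation budget: ${low:,.0f}-${high:,.0f}")   # about $26,500-$39,750
print(fits_available_funds(design_cost=32_000, available=high))   # True: no renegotiation needed
```

Both approaches described above reduce to this comparison: either work backwards from the fixed amount (option 1) or cost out a drafted design and negotiate changes if it exceeds the available funds (option 2).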


Appendix 10a: Notes on Sampling and Representativeness

Often, surveys are not administered to every participant in a group. Instead, some members of the group are selected to respond. This selected group is known as a sample. If the participant group is large, sampling may be advisable, but the following questions must be answered: How will the sample of respondents be identified? How many respondents are needed for valid results? How will the sample be defined and how will representativeness be ensured? The following are some necessary steps.

1) Define the target population to be studied. The term population refers to all possible respondents or subjects of the survey. The population definition must be precisely specified.
2) Decide whether you should try to include all members of the population (census) or to sample.
3) Select a small subset of the population that is representative of the whole population. Unless the population is very small (fewer than 200), sampling is almost always used.

Ways to Select a Sample
There are several ways to select a sample. The most common of these include simple random sampling, stratified samples, convenience samples, and purposeful samples.
• Simple random sampling approximates drawing a sample out of a hat. The desired number of sample respondents is identified and selected arbitrarily from a randomly arranged list of the total population. Each individual on the list has the same chance of being selected to participate.
• Stratified samples are used when some important characteristics of the population are known prior to data collection. This is also commonly done when participants represent multiple geographic areas, or when there is disproportionate gender representation.
• Convenience samples involve those respondents who can easily be contacted for participation in a survey. While their responses are often enlightening and can be summarized, they should not be generalized to the entire population.
• Purposeful samples include information-rich cases. These can include extreme or deviant cases, maximum variation sampling, typical cases, critical cases (those that can make a point dramatically), and other variations.
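To illustrate the first two approaches concretely, here is a small sketch using Python's standard library; the participant list, the site stratum, and the sample sizes are hypothetical assumptions for illustration only.

```python
# A minimal sketch of simple random and stratified sampling, using only the
# Python standard library. The participant list and strata are hypothetical.
import random
from collections import defaultdict

participants = [{"id": i, "site": random.choice(["East", "West"])} for i in range(500)]

# Simple random sampling: every participant has the same chance of selection.
simple_sample = random.sample(participants, k=80)

# Stratified sampling: sample proportionally within each known subgroup (stratum).
def stratified_sample(people, key, fraction):
    strata = defaultdict(list)
    for person in people:
        strata[person[key]].append(person)
    sample = []
    for group in strata.values():
        k = max(1, round(len(group) * fraction))
        sample.extend(random.sample(group, k=k))
    return sample

stratified = stratified_sample(participants, key="site", fraction=0.16)
print(len(simple_sample), len(stratified))
```

Convenience and purposeful samples are chosen by judgment rather than by a random draw, so they are not shown here.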


Appendix 10b: How Big Should the Sample Be?

The number of program participants will determine whether to include everyone in the evaluation or select a sample, i.e., a smaller group who can represent everyone else and from whom we can generalize. The sample should be as large as a program can afford in terms of time and money. The larger the sample size (compared to the population size), the less error there is in generalizing responses to the whole population, i.e., to all cases or clients in a program.

1st RULE OF THUMB: If the population is less than 100, include them all (and strive to get an 80% response rate); if the population is bigger than 100, select a probability sample. (See Appendix 10a for sampling strategies.)

Probability samples allow calculation of the likely extent of the deviation of sample characteristics from population characteristics. Sampling error is the term used to refer to the difference between the results obtained from the sample and the results that would have been obtained if data had been collected from the entire population. The objective when drawing samples is to decrease sampling error and to assure confidence that the results are reliable.

2nd RULE OF THUMB: A common standard for program evaluation is 95% confidence with a sampling error of ±5%. In plain English, that means you believe that 95 percent of the time the results from your sample would be off by no more than 5% compared to the results you would have gotten if you had collected data from everyone.

It is the absolute size of the sample, rather than the ratio of sample size to population size, that affects the sampling error (Comer and Welch, 1988, p. 192). Sample sizes for varying population sizes and differing sampling error rates have been calculated (see the table below). If you wish for more precision, use the following calculation (for 95% confidence, 5% error):

n = 385 ÷ (1 + (385 ÷ N))

Example: If your population is known to have 472 members, then a sample of 212 would be necessary to ensure 95% confidence with no more than 5% error: 385 ÷ (1 + (385 ÷ 472)) = 212.

Relationship Between Sample Sizes and Sampling Error
• As shown in the table below, when a sample is comparatively large, adding cases provides little additional precision.
• As population sizes increase, the total size of the sample becomes proportionately smaller without affecting error.
• When the population size is small, relatively large proportions are required to produce reasonable error rates.
• A standard proportion (e.g., 33%) will not work as a sampling strategy for varying population sizes.

3rd RULE OF THUMB: You must always draw a larger sample than what is planned for because of refusal. To do this, estimate the refusal rate and then factor that into your calculation:

Desired sample size ÷ (1 − refusal rate) = TOTAL SAMPLE
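The formula and the third rule of thumb above can be checked with a few lines of code. This is a minimal sketch of the 95% confidence / ±5% error calculation; the function names and the 20% refusal rate are illustrative assumptions.

```python
# A minimal sketch of the sample-size rules above (95% confidence, +/-5% error).
# Function names and the example refusal rate are illustrative assumptions.
import math

def sample_size_95_5(population):
    """n = 385 / (1 + 385/N), per the formula in this appendix."""
    return round(385 / (1 + 385 / population))

def total_to_draw(desired_sample, refusal_rate):
    """Inflate the desired sample to allow for refusals (3rd rule of thumb)."""
    return math.ceil(desired_sample / (1 - refusal_rate))

n = sample_size_95_5(472)            # 212, matching the worked example above
print(n, total_to_draw(n, 0.20))     # with 20% expected refusal, draw 265 names
```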


Sample Sizes (n) at 95% Confidence, with ±3%, ±5%, and ±10% Sampling Error
(The percentage shown in parentheses in the ±5% column is the sample as a share of the population.)

Population Size (N)     ±3%        ±5%          ±10%
100                      92         80 (80%)     49
250                     203        152 (61%)     70
500                     341        217 (43%)     81
750                     441        254 (34%)     85
1,000                   516        278 (28%)     88
2,500                   748        333 (13%)     93
5,000                   880        357 (7%)      94
10,000                  964        370 (4%)      95
25,000                1,023        378 (2%)      96
50,000                1,045        381 (<1%)     96
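For readers who want to recompute or extend the table, the sketch below uses the standard infinite-population sample size n0 = z²·p(1−p)/e² with a finite-population correction. The Guidebook itself states only the 95%/±5% version of the formula, so treating this as the table's generating formula is our assumption; rounding conventions differ, so a few printed values may differ from the computed ones by one case.

```python
# A sketch that approximately reproduces the table above using the standard
# finite-population correction. The general formula is an assumption on our
# part; printed tables may round differently, so expect occasional
# off-by-one differences from the values shown.

def sample_size(population, margin_of_error, z=1.96, p=0.5):
    n0 = (z ** 2) * p * (1 - p) / margin_of_error ** 2    # infinite-population size
    return round(n0 / (1 + n0 / population))               # finite-population correction

for N in (100, 250, 500, 750, 1_000, 2_500, 5_000, 10_000, 25_000, 50_000):
    row = [sample_size(N, e) for e in (0.03, 0.05, 0.10)]
    print(f"{N:>7,}  " + "  ".join(f"{n:>5}" for n in row))
```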


Back Cover
The Bruner Foundation | Effectiveness Initiatives
www.brunerfoundation.org
