An evaluation plan is a comprehensive roadmap that outlines how your nonprofit will systematically assess whether your program is working as intended and achieving its desired outcomes. A well-designed evaluation plan strengthens your grant proposal by demonstrating accountability, learning orientation, and commitment to evidence-based practice.
Core Components of an Evaluation Plan
1. Evaluation Questions
Start with clear, specific questions that align with your program goals and logic model. These typically fall into several categories:
Process Questions examine program implementation:
- Are we reaching our intended target population?
- Are activities being implemented as planned?
- What is the quality of service delivery?
- How satisfied are participants with the program?
Outcome Questions assess program effectiveness:
- To what extent are participants achieving intended short-term outcomes?
- What changes are occurring in participants’ knowledge, skills, attitudes, or behaviors?
- Are we seeing the expected medium- and long-term impacts?
Impact Questions evaluate broader community change:
- How is the program contributing to systems-level change?
- What unintended consequences (positive or negative) are occurring?
2. Evaluation Design
Choose an appropriate evaluation design based on your program type, resources, and funder requirements:
Pre-Post Design: Measure participants before and after program participation. This is common and relatively straightforward, showing change over time.
Comparison Group Design: Compare program participants to a similar group not receiving services. This helps establish whether changes are due to your program versus external factors.
Randomized Controlled Trial: Randomly assign eligible participants to treatment and control groups. This provides the strongest evidence of program effectiveness but requires significant resources and may raise ethical considerations.
Mixed Methods Design: Combine quantitative data (numbers, statistics) with qualitative data (stories, interviews) for a comprehensive picture.
3. Data Collection Methods and Instruments
Select data collection methods that match your evaluation questions and are feasible for your organization:
Surveys and Questionnaires: Useful for gathering standardized information from many participants. Can measure knowledge, attitudes, behaviors, and satisfaction.
Interviews: In-depth conversations that provide rich, detailed information about participant experiences and outcomes.
Focus Groups: Group discussions that capture diverse perspectives and can reveal unexpected insights.
Observations: Direct observation of program activities or participant behavior in natural settings.
Administrative Data: Existing records like attendance, test scores, employment records, or health indicators.
Document Review: Analysis of program materials, participant work products, or organizational records.
4. Outcome Indicators and Metrics
Define specific, measurable indicators for each outcome:
Short-term Outcomes (1-3 months):
- 85% of participants will demonstrate improved job interview skills (measured by pre/post role-play assessments)
- 90% of participants will complete resume writing module
Medium-term Outcomes (3-6 months):
- 70% of participants will complete entire program
- 60% of completers will obtain employment within 3 months
Long-term Outcomes (6-12 months):
- 50% of employed participants will retain jobs for at least 6 months
- Average wage increase of 25% compared to baseline
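If participant records are kept in a spreadsheet or database, indicators like these can be computed directly from the data. Below is a minimal sketch in Python, assuming a hypothetical CSV export with made-up column names for pre/post assessment scores, completion status, employment, and wages:

```python
import pandas as pd

# Hypothetical participant-level export; file and column names are illustrative only.
df = pd.read_csv("participants.csv")

# Short-term: share of participants whose post-assessment score improved over baseline
improved_skills = (df["post_interview_score"] > df["pre_interview_score"]).mean()

# Medium-term: completion rate, and employment rate among completers
completion_rate = df["completed_program"].mean()  # expects 0/1 values
completers = df[df["completed_program"] == 1]
employment_rate = completers["employed_within_3_months"].mean()

# Long-term: average wage change relative to each participant's baseline wage
avg_wage_increase = ((df["current_wage"] - df["baseline_wage"]) / df["baseline_wage"]).mean()

print(f"Improved interview skills: {improved_skills:.0%}")
print(f"Program completion rate:   {completion_rate:.0%}")
print(f"Employed completers:       {employment_rate:.0%}")
print(f"Average wage increase:     {avg_wage_increase:.0%}")
```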
5. Data Management and Analysis Plan
Outline how you’ll handle, store, and analyze data:
Data Management: Describe database systems, confidentiality protocols, and data security measures. Include consent procedures and participant privacy protections.
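One concrete privacy protection worth spelling out is pseudonymization: replacing names and other direct identifiers with coded IDs before data are analyzed or shared. A minimal sketch, assuming a hypothetical raw export with made-up file and column names:

```python
import hashlib

import pandas as pd

# Hypothetical raw export with direct identifiers; file and column names are illustrative only.
raw = pd.read_csv("participant_records_raw.csv")

# Replace each name with a one-way hashed code, then drop the direct identifiers.
raw["participant_id"] = raw["full_name"].apply(
    lambda name: hashlib.sha256(name.strip().lower().encode("utf-8")).hexdigest()[:10]
)
deidentified = raw.drop(columns=["full_name", "date_of_birth"])

# Analyze and share only the de-identified file; keep the raw export under restricted access.
deidentified.to_csv("participant_records_deidentified.csv", index=False)
```

Many organizations go further and assign random study IDs instead, keeping the file that links IDs to names in a separate, access-controlled location.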
Analysis Methods:
- Quantitative analysis: Statistical tests to measure change over time, comparison between groups, correlation analysis
- Qualitative analysis: Thematic coding of interviews, content analysis of open-ended responses
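For a pre-post design, "change over time" often reduces to a paired comparison of scores at program entry and exit. Here is a minimal sketch, assuming hypothetical pre/post score columns and using the scipy library:

```python
import pandas as pd
from scipy import stats

# Hypothetical assessment data; file and column names are illustrative only.
df = pd.read_csv("assessments.csv").dropna(subset=["pre_score", "post_score"])

# Paired t-test: did scores change significantly between program entry and exit?
result = stats.ttest_rel(df["post_score"], df["pre_score"])
mean_change = (df["post_score"] - df["pre_score"]).mean()

print(f"Average change: {mean_change:.1f} points (n = {len(df)})")
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}")
```

A comparison-group design would instead use an independent-samples test (for example, scipy's ttest_ind) or a regression model that adjusts for baseline differences.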
Reporting Schedule: Specify when and how findings will be shared with stakeholders, funders, and participants.
6. Evaluation Team and Responsibilities
Identify who will conduct the evaluation:
Internal Evaluation: Staff members conduct evaluation activities. More affordable but may lack objectivity.
External Evaluation: Outside consultant or academic partner leads evaluation. More credible but typically more expensive.
Participatory Evaluation: Involves program participants in designing and conducting evaluation. Builds ownership and provides unique insights.
7. Budget and Resources
Include evaluation costs in your overall program budget, typically 5-15% of total project costs:
- Staff time for data collection and analysis
- External evaluator fees (if applicable)
- Survey tools or assessment instruments
- Data management software
- Incentives for participants who complete surveys or follow-up interviews
- Report preparation and dissemination
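For example, a program with a $200,000 annual budget would typically set aside roughly $10,000 to $30,000 for these evaluation activities.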
Sample Evaluation Plan Template
Program Example: Youth Job Readiness Program
Program Goal: Improve employment outcomes for at-risk youth aged 16-24
Key Evaluation Questions:
- Are participants developing job readiness skills as measured by pre/post assessments?
- What percentage of participants complete the program?
- How many participants obtain employment within 6 months of completion?
- What barriers prevent participants from completing the program?
Evaluation Design: Pre-post design with 6-month follow-up
Data Collection Timeline:
- Baseline (Program Entry): Demographics survey, job readiness skills assessment, employment history
- Mid-Program (Month 2): Participant satisfaction survey, attendance tracking
- Program Completion: Post-program job readiness assessment, exit interview
- 6-Month Follow-up: Employment status survey, phone interview about job experiences
Data Sources and Methods:
- Quantitative: Job readiness assessment scores, employment data, attendance records
- Qualitative: Exit interviews, focus groups with participants, employer feedback interviews
Sample Size and Target: 150 participants annually, with a goal of an 80% data collection rate
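At that rate, the evaluation would yield complete data for roughly 120 participants per year.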
Evaluation Plan Best Practices
Align with Logic Model: Ensure your evaluation questions and indicators directly connect to your program’s theory of change.
Balance Rigor with Feasibility: Choose methods your organization can realistically implement with available resources.
Plan for Utilization: Design evaluation to provide actionable information for program improvement, not just reporting requirements.
Build in Learning: Include mechanisms for using evaluation findings to make mid-course corrections during program implementation.
Consider Cultural Responsiveness: Ensure evaluation methods are appropriate for your target population and consider potential cultural barriers to participation.
Address Limitations: Acknowledge constraints in your evaluation design and explain how you’ll work within them.
A strong evaluation plan demonstrates to funders that you’re committed to learning, accountability, and continuous improvement while providing the evidence needed to document your program’s impact and inform future programming decisions.
Like this tip? Check out my grant writing books, courses and newsletter.