On this page you will find resources pertaining to various topics in evaluation, such as data collection, evaluation design, and logic models. These tools, developed by the Center to Improve Program and Project Performance (CIPP), are intended to provide technical assistance and support for grantees and evaluators.
These resources were created specifically to assist OSEP grantees with evaluation of their projects, including development of logic models and evaluation plans, designing and reporting on performance measures, collecting and analyzing data, and other topics.
Logic Models for Evaluation
Logic Models provide a starting point for developing an effective evaluation plan. Here you will find resources that pertain to using logic models to inform evaluation planning.
Using a Logic Model to Build a Strong Evaluation Plan: A 4-Part Training Series: The purpose of this training is to help participants: understand the benefits of using a logic model to create an evaluation plan; develop a strong logic model that shows how inputs and activities are expected to lead to meaningful short-, medium-, and long-term outcomes; and align an evaluation plan and analysis to critical components within the logic model.
- Part 1: Why Link the Evaluation Plan to a Logic Model?
- Part 2: Logic Models 101 – an Introduction to Logic Models
- Part 3: What Makes a Good Logic Model?
- Part 4: How to Link the Evaluation Plan to the Logic Model
- Using a Logic Model to Build an Evaluation Plan
Resources to Support Evaluation of OSEP Projects
Compared to What? Identifying Good Comparison Data to Assess Project Results: Measuring outcomes for the population affected by your project is important, but measuring outcomes in isolation does not tell the full story. You need to consider how the outcomes may have been different if your project had not been implemented. That is, you need to know, “compared to what?” Below are the briefs that make up the 5-part series, which helps grantees identify good comparison data to assess project results.
- Compared to What? Identifying Good Comparison Data to Assess Project Results
- Compared to What? Using Extant Data
- Compared to What? Using Nonequivalent Pre-Post Control-Group Designs
- Compared to What? Using One-Group Pre-Post Designs
- Compared to What? Using Single Case Designs
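The core idea behind the series above can be shown in a few lines of arithmetic. The sketch below uses invented scores, purely for illustration, to show why a project's raw pre-post gain can overstate its effect until it is set against a comparison group's gain:

```python
# Hypothetical illustration of "compared to what?" -- all scores are
# invented for this sketch and do not come from any real project.

def mean(xs):
    return sum(xs) / len(xs)

# Pre- and post-test outcome scores for participants served by the project...
project_pre = [50, 55, 60, 52, 58]
project_post = [65, 70, 72, 66, 71]

# ...and for a comparison group that was not served by the project.
comparison_pre = [51, 54, 59, 53, 57]
comparison_post = [58, 60, 63, 59, 62]

project_gain = mean(project_post) - mean(project_pre)        # 13.8
comparison_gain = mean(comparison_post) - mean(comparison_pre)  # 5.6

# The raw project gain includes change that would have happened anyway;
# subtracting the comparison group's gain yields a more credible estimate.
estimated_effect = project_gain - comparison_gain

print(f"Project gain: {project_gain:.1f}")
print(f"Comparison gain: {comparison_gain:.1f}")
print(f"Estimated project effect: {estimated_effect:.1f}")
```

Here the project group improved by 13.8 points, but the comparison group improved by 5.6 points without the project, so a more defensible estimate of the project's effect is 8.2 points, not 13.8. The briefs in the series discuss how different designs (extant data, nonequivalent control groups, one-group pre-post, single case) supply that comparison.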
Linking Expectations to Evaluations: Using Your Logic Model to Create Your Evaluation Plan: This brief provides guidance and examples for creating strong alignment between project evaluation plans and logic models. This will help you to target evaluation resources and focus your energy. This document can be used in conjunction with the other resources listed on this page to ensure your logic model and evaluation plan are aligned, accurately reflect your project, and are of high quality.
Making Project Measures Meaningful: Quality, Relevance, Usefulness, and Beyond: OSEP grantees often focus their evaluations and project performance measures on the outcomes of quality, relevance, and usefulness (QRU), consistent with the Government Performance and Results Act (GPRA). In this brief, we focus on developing project performance measures that move beyond assessments of QRU in two ways, by: (1) Limiting the number of project performance measures focused on QRU and (2) Striving for a greater focus on other types of outcomes in your remaining measures. These shifts can open the door to incisive measures that better capture your important project work and the outcomes associated with it.
Hiring a Third-Party Evaluator: This brief gives a quick overview of important factors to consider when hiring a third-party evaluator to support your project.
Managing Evaluation Data: This brief outlines the importance of managing your evaluation data and provides some tips for successful data management.
Grantee Guide to Project Performance Measurement: This CIPP TA product is designed to help projects funded under OSEP’s discretionary grant programs develop high-quality project performance measures. These project-specific measures supplement the Government Performance and Results Act (GPRA) measures that grantees are required to report to OSEP each year. This guide presents an approach for developing high-quality project performance measures and explains how the GPRA and project measures work together to allow for meaningful tracking of program and project performance. The guide also outlines a simple, but thorough, five-step process for developing performance measures that:
- supports reviews of documentation relevant to project work;
- helps identify critical activities, outputs, and outcomes associated with the project; and
- guides selection of the best measures of project performance.
Templates for each step are provided below.
Grantee Guide to Project Performance Measurement Webinar: In this webinar staff from the Center to Improve Program and Project Performance (CIPP) talk about the CIPP Grantee Guide to Project Performance Measurement, which presents a thorough, yet simple, approach to identifying and creating high-quality project performance measures. The tool guides users through the process with the goal of helping them demonstrate credible evidence of project progress and results. The tool includes a number of templates, including Excel worksheets, that grantees can use to develop their own project performance measures.
Effectively Communicating Evaluation Findings: This CIPP TA product is designed to help grantees effectively communicate information about project progress and results to a variety of audiences. The product includes a discussion of why grantees might want to communicate with a specific audience, and outlines key questions to ask when designing a communications plan for different audiences. It highlights some contextual factors that might influence communications with specific audiences, and presents a brief discussion of challenges and ethical considerations associated with communicating evaluation findings. It also provides an overview of common communication tools and products that have been found useful with specific audiences. Throughout, checklists and worksheets tailored to specific audiences are included to help staff think through how to create a plan to link evaluation data and communication strategies.
Evaluating Special Education Programs: Resource Toolkit: This comprehensive Toolkit provides an array of evaluation resources for OSEP grantees. Starting with a discussion of the basics of evaluation, including working with third-party evaluators and developing an evaluation budget, it walks the reader through the process of planning and conducting an evaluation. It also offers some key methodological considerations, such as how to conduct high-quality data analysis and how to develop a system to measure fidelity of a project. The Toolkit is designed to highlight important issues to consider and to offer a variety of resources, including sample templates and forms, links to other evaluation-related products and resources, and recommended readings on research and evaluation.
Demonstrating Evidence across the Project Cycle: This interactive tool discusses the different types of evidence that can be used throughout the project cycle to, for example, support theories of action and choices of interventions, and to provide formative feedback on how project activities are progressing. It also discusses how to use the evidence collected to monitor progress and fidelity of implementation with the goal of demonstrating ultimate project results, or long-term outcomes. For each phase, the tool identifies what types of evidence might be collected, where they might be found, and how they might be produced and used. The tool includes links to resources that grantees and evaluators might use when evaluating their project.
CIPP Logic Model Outline: The CIPP Logic Model Outline provides information to help project staff and evaluators think through how they will implement and evaluate their project. It provides guidance on typical types of inputs, activities, outputs and expected short-term, medium-term, and long-term outcomes that should be considered when planning, implementing, and evaluating a project. It also illustrates the roles of external factors and evaluation throughout the project cycle.
Budgeting for Evaluation Brief: Knowing how much to budget for an evaluation requires an understanding of the evaluation process and of the various factors that might influence costs. This CIPP brief discusses these factors and the included Evaluation Cost Considerations Worksheet can help you think more deliberately about the different factors affecting costs in an evaluation.
Conceptualizing Capacity Building: Capacity building is a term that is frequently used, yet hard to define and measure. The CIPP brief “Conceptualizing Capacity Building” presents the different dimensions of capacity building described in the literature and outlines strategies to evaluate capacity building efforts. It also includes an annotated bibliography with links to useful tools such as state and district capacity assessments.
Why Evaluate?: This CIPP infographic discusses some of the reasons to conduct an evaluation and outlines concerns that are commonly associated with evaluations.
The following webinars provide additional information and resources related to evaluation.
Are you looking for a third-party project evaluator? Find out what to look for, why you need a third-party evaluator and how to plan for their work with your project.
This Exploration/Application webinar focused on identifying evaluation needs, finding and hiring a third-party evaluator, and monitoring and managing evaluation progress. The presenters offered guidelines and demonstrations of how to create a third-party evaluation scope of work, and explained how to use it to design a Request for Proposals and a contract for the third-party evaluator.
The accompanying “Guidelines for Working with Third-Party Evaluators” document is designed to assist grantees and their OSEP...
These webinars discuss important aspects of planning, designing, and conducting customer surveys as part of an evaluation. The first webinar outlines the varied purposes of customer surveys, walks participants through the process of planning a high-quality customer survey, identifies some of the common problems related to planning customer surveys, and highlights the benefits of a well-designed customer survey. The second webinar walks participants through key aspects of survey design, identifies some of the common problems associated with designing customer surveys, and discusses how to carry out pilot testing to improve the quality of the survey. The third webinar outlines...
These webinars discuss the ins and outs of how to plan and conduct high-quality customer interviews as part of a rigorous evaluation. The first webinar discusses the benefits and limitations of using qualitative interviews in an evaluation and walks participants through the “who, what, when, where, and how” of integrating interviews into an evaluation. The second webinar discusses what it means to collect “high-quality” qualitative data, provides strategies for collecting high-quality data through interviews, and discusses ways to analyze and report interview data in a rigorous manner. It also includes a brief discussion of common software packages for qualitative analysis...