Programs are designed and implemented to create change, often across multiple levels of a program, but at the most basic level, change occurs in individuals. Quoting Albert Wenger, “change creates information.” Information about the changes that occurred during a program is essential to evaluating it.
According to Radhakrishna and Relado (2009), a program might influence change in a participant’s knowledge, attitudes, skills, aspirations (KASA), and behavior. Another common set of individual outcomes, KAP, includes knowledge, attitudes, and practices (for example, see Chaplowe (2008)). At their intersection, changes in:
- what participants understand (knowledge),
- what participants use to base their decisions (attitudes/beliefs), and
- what participants eventually enact (practices/behaviors)
lead to useful information about the potential impact of a program.
Interviewing multiple individuals for a project is time consuming, and scheduling those interviews can be, too. Several online applications can match available interview times between interviewers and interviewees; however, I created an easy, quick, no-cost way to schedule interview times using a Google Doc, as described below.
I created a Google Doc listing the days and times I was available for interviews. I shared this document with all interviewees at once using a shared link that gave them permission to view, but not edit, the document.
I quickly edited the document as days and times became available or unavailable with my changing commitments. In effect, using a Google Doc in this manner was a quick way to create an instant website describing my availability. I used an email similar to the one below to share this process with those I planned to interview.
> I’d like to schedule an hour-long phone conversation to talk with you about [name of program].
>
> This online document suggests several day/time possibilities. Please reply to this email with your preferred choices. If none will work, please suggest a couple of others. I’ll confirm in a return email.
>
> I’m really looking forward to talking with you about [name of program].
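Purely for illustration, the matching step that those scheduling applications automate can be sketched in a few lines of Python. Everything here is hypothetical, not part of any real tool: the `assign_slots` helper, the slot strings, and the participant names are all invented for this sketch.

```python
def assign_slots(open_slots, preferences):
    """Match interviewees to an interviewer's posted slots.

    open_slots: list of slot strings the interviewer posted.
    preferences: dict mapping interviewee -> ordered list of preferred slots.
    Returns a dict mapping interviewee -> assigned slot (None if no match).
    """
    remaining = set(open_slots)
    assignments = {}
    for person, prefs in preferences.items():
        # Take the person's first preference that is still open, if any.
        choice = next((slot for slot in prefs if slot in remaining), None)
        assignments[person] = choice
        if choice is not None:
            remaining.discard(choice)  # that slot is now taken
    return assignments

slots = ["Mon 9:00", "Mon 10:00", "Tue 14:00"]
prefs = {
    "Alice": ["Mon 9:00", "Tue 14:00"],
    "Bob": ["Mon 9:00", "Mon 10:00"],
}
print(assign_slots(slots, prefs))
# Alice gets her first choice; Bob falls back to his second.
```

This first-come, first-served approach mirrors the email workflow above: whoever replies first claims their preferred slot, and later replies fall back to their next choice.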
The Harvard Family Research Project offers several useful evaluation resources. Its guide Afterschool Evaluation 101: How to Evaluate an Expanded Learning Program gives non-evaluators a valuable overview of program evaluation. The guide is organized around nine steps:
- Determining the evaluation’s purpose
- Developing a logic model
- Assessing your program’s capacity for evaluation
- Choosing the focus of your evaluation
- Selecting the evaluation design
- Collecting data
- Analyzing data
- Presenting evaluation results
- Using evaluation data
Evaluations vary as much as programs do (i.e., different activities, durations, outcomes, et cetera), underscoring the importance of choosing an evaluation’s focus wisely. Determining an appropriate focus depends largely on a program’s maturity and developmental stage. Step 4 of the guide, “Choosing the focus of your evaluation,” describes a five-tier approach, summarized below (pages 13-16).
- Tier 1: Conduct a needs assessment to determine how the program can best meet needs
- Tier 2: Document program services to understand how they are being implemented
- Tier 3: Clarify the program to see whether it is being implemented as intended
- Tier 4: Make program modifications to improve the program
- Tier 5: Assess program impact to demonstrate program effectiveness
As you can see, evaluation can and should accompany a program throughout its lifespan. These tiers help in designing an evaluation plan and in determining appropriate methods of data collection and analysis.
Out-of-school programming often seeks to develop children’s outcomes beyond purely academic ones. Working with program leaders to align program activities and goals with specific outcomes can be a challenge. In many cases, program activities are determined prior to determining program outcomes. In these instances, understanding the components of positive youth development can be useful for reflecting on the alignment of program activities and program outcomes.
First, the Oregon State University 4-H Youth Development Program developed the Positive Youth Development Inventory (PYDI), intended to assess PYD changes in students ages 12-18. The collection of 55 Likert-scale items measures the latent constructs of:
Second, The Colorado Trust developed the Toolkit for Evaluating Positive Youth Development, which contains survey-administration guidance and pre-post and post-only instruments examining eight outcome domains:
- Academic success
- Arts and recreation
- Community involvement
- Cultural competency
- Life skills
- Positive life choices
- Positive core values
- Sense of self
Both resources above were initially designed for students in Grade 4 or higher. Please indicate in the comments below if you know of resources suitable for students in Kindergarten through Grade 3 (ages 5-8).
As indicated in the tagline of this blog, I’m interested in the design, implementation, and evaluation of educational programs. This post, to be refined over time, serves as a framework describing relevant topics that will be explored in the future.
About program design:
- Teacher preparation, induction, and development
- Mathematics professional development
- College readiness and persistence
- School-community intersection
- Community health
About program implementation:
- Fidelity of implementation
About program evaluation:
- Evaluation capacity building
- Empowerment evaluation
- Developmental evaluation
- Monitoring and evaluation
- Evaluation questions and rubrics
- Theory of change and logic models
What other topics intersect or are aligned with these topics that would be helpful to add to this list?
Several clients have more than one evaluation project happening at the same time. Evaluation activities, including data collection, analysis, and reporting, differ for each project. As a means to consolidate all of these activities into a monthly snapshot, I developed the evaluation memo.
An evaluation memo is a short (1-3 pages) monthly correspondence between me and a client that initiates a dialogue that:
- Recaps evaluation activities that occurred that month
- Poses questions to clients that need responses for the evaluation to move forward
- Requests additional documents, data, or information, and
- Shares upcoming activities and deliverables related to our program evaluation work
The monthly evaluation memo serves as a running record of recent evaluation work, upcoming evaluation work, and a current client to-do list, keeping the evaluation moving forward and on track.
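As a sketch only (the headings and bracketed placeholders are my own invention, not a fixed format), a memo covering the four points above might be laid out like this:

```markdown
# Evaluation Memo – [Month Year]
**Project:** [program name] · **From:** [evaluator] · **To:** [client]

## 1. This month's evaluation activities
- [e.g., completed 6 of 10 participant interviews]

## 2. Questions needing your response
- [e.g., which cohort should the spring survey target?]

## 3. Requests for documents, data, or information
- [e.g., fall attendance records]

## 4. Upcoming activities and deliverables
- [e.g., interim report draft delivered mid-month]
```

Keeping the same headings from month to month makes the memos easy to scan as a running record.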