PMI Assessment Glossary

Research Base

Observational Evaluation: Evaluations of a program that by design or by necessity do not include a comparison group for the individuals treated. Researchers may adjust for the lack of a control group by statistically controlling for factors that could be associated with the outcome examined.

Example: Studies of the impact of lead poisoning do not include control groups, since children cannot be randomly assigned to lead exposure. The topic also offers few opportunities for natural experiments and, because lead exposure is so closely correlated with socioeconomic circumstances, few good opportunities for matched control groups. However, most studies of the impact of lead poisoning do control for factors such as family income and education.

 

Quasi-Experimental Evaluation: Evaluations of a program that take advantage of a natural experiment or use a matched control group design, identifying individuals who have not received the treatment but are sufficiently similar to those who did. There is a wide variety of quasi-experimental designs that can accommodate a diversity of program designs and organizational realities and needs.

Example: A study of a new educational intervention may compare children in one classroom receiving the treatment to children in the classroom next door who did not. Because there is little reason to expect large differences between students in adjacent classrooms in the same school (assuming students are assigned to classrooms more or less randomly), the neighboring classroom offers a strong comparison group. However, some schools practice selective classroom assignment, in which case this natural control group design would be flawed.

 

Randomized Controlled Trial (RCT) Evaluation: These are considered the gold standard of evaluation among researchers because they randomly assign potential participants to treatment and control groups, correcting for unobservable differences between those who receive the treatment and those who do not. RCTs, however, may raise ethical issues for those implementing social programs because they require denying treatment to individuals in need.

Example: An afterschool program that always receives more applications than it has slots selects participants randomly rather than by the strength of their applications. As a result, the applicants selected are, on average, very similar to the applicants not selected in age, grades, demographics, community of residence, family income, etc.
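
A minimal sketch of how such a lottery might be run and checked for balance; the applicant fields, values, and slot count below are hypothetical illustrations, not drawn from any real program.

```python
import random

random.seed(2024)  # fixed seed so the lottery can be re-run and audited

# Hypothetical applicant pool; fields and values are illustrative only.
applicants = [
    {"id": i,
     "age": random.randint(10, 14),
     "family_income": random.gauss(40_000, 12_000)}
    for i in range(200)
]

SLOTS = 80
random.shuffle(applicants)
selected = applicants[:SLOTS]       # offered a slot (treatment group)
waitlisted = applicants[SLOTS:]     # not selected (control group)

# Because assignment was random, the two groups should be similar on average.
def mean(group, field):
    return sum(a[field] for a in group) / len(group)

for field in ("age", "family_income"):
    print(f"{field}: selected {mean(selected, field):.1f} "
          f"vs waitlisted {mean(waitlisted, field):.1f}")
```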

 

Sufficiently Similar Program Model: A program model is sufficiently similar to a program studied when it can be demonstrated that it shares the key program model elements identified in the study and that the populations served share similar characteristics (ages, income levels, racial and ethnic distribution, etc.).

Example: While the line between a sufficiently similar program model and one that is not is fairly subjective, the table below helps to delineate some markers to look for:

Factor                      Program Model   Sufficiently Similar   Dissimilar
Avg. Daily Participants     18.5            12.5                   8.0
Staff:Participant Ratio     1:14            1:12                   1:6
Participant Age Range       9-13            10-12                  14-16
Tutoring Included?          No              No                     Yes
Hours of Program per Week   9               8                      12

 

Natural Experiment: A natural experiment makes use of an event outside of the control of the researchers, program providers, or program participants that has the same effect as if participants were randomly assigned to treatment or control groups.

Example: In 2015, the State of Illinois increases the family income limit for child care vouchers from 180% of the federal poverty level (FPL) to 250% FPL. Families with incomes between 181% and 250% FPL in 2014 form a natural control group that can be compared to families at the same income level in 2015 to study what effects child care vouchers have on parental employment. This is a stronger comparison than comparing eligible families who used child care vouchers in 2015 to eligible families who did not use them in the same year, because there may be some unobservable factor correlated with workforce outcomes that is also correlated with taking up child care vouchers. Comparing to the 2014 group that was not eligible has a similar effect to randomly assigning individuals to a treatment or control group.
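
As a rough illustration, the comparison described above could be computed from survey or administrative data along these lines; the file name, column names, and thresholds below are assumptions for the sketch, not a prescribed dataset.

```python
import pandas as pd

# Hypothetical administrative file; columns assumed:
#   year, income_pct_fpl, parent_employed (0/1)
df = pd.read_csv("family_survey.csv")

# Families between 181% and 250% FPL: ineligible in 2014, newly eligible in 2015.
band = df[(df["income_pct_fpl"] >= 181) & (df["income_pct_fpl"] <= 250)]

rate_2014 = band.loc[band["year"] == 2014, "parent_employed"].mean()  # natural control group
rate_2015 = band.loc[band["year"] == 2015, "parent_employed"].mean()  # newly eligible group

print(f"Parental employment, 2014 (ineligible): {rate_2014:.1%}")
print(f"Parental employment, 2015 (eligible):   {rate_2015:.1%}")
print(f"Estimated effect of voucher eligibility: {rate_2015 - rate_2014:+.1%}")
```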

 

Matched Control Group: In a matched control group design, the researcher identifies a suitable comparison group (say, students at a nearby school) and statistically evaluates the similarity of individual students to one another on observable measures, such as parent educational attainment, family income, school attendance, grades, etc. Each individual in the treatment and control groups receives a “propensity score” (essentially an index value of their collective characteristics) and is compared only to those with a similar score. While these designs are stronger than classic statistical control designs (which only adjust for differences, whereas matched designs seek to eliminate all observable differences), they are not as strong as natural experiments or randomized controlled trials because they cannot account for unobservable differences between the treatment and control groups.
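
A simplified sketch of the propensity-score step described above, using scikit-learn; the data file, column names, and the one-to-one nearest-neighbor matching rule are illustrative assumptions, not a prescribed method.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical student file; "treated" is 1 for program participants, 0 for the
# comparison school, and the covariates are assumed to be numerically coded.
df = pd.read_csv("students.csv")
covariates = ["parent_education", "family_income", "attendance_rate", "prior_gpa"]

# 1. Estimate each student's propensity score: the predicted probability of being
#    in the treatment group given their observable characteristics.
model = LogisticRegression(max_iter=1000).fit(df[covariates], df["treated"])
df["propensity"] = model.predict_proba(df[covariates])[:, 1]

# 2. For each treated student, find the untreated student with the closest score.
treated = df[df["treated"] == 1]
control = df[df["treated"] == 0]

matched_rows = []
for _, student in treated.iterrows():
    gaps = (control["propensity"] - student["propensity"]).abs()
    matched_rows.append(control.loc[gaps.idxmin()])

matched_control = pd.DataFrame(matched_rows)
# Outcomes are then compared between `treated` and `matched_control` only.
```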

 

Theory of Change

Aligned to Research Base: A program model that is aligned to the evidence base implements program components that have been identified through research as key factors in achieving the outcomes desired and removes components that have been demonstrated to be ineffective or even harmful.

Example: A program provider working with youth on probation adds a Multisystemic Therapy (MST) component to its program and removes a “scared straight” component (e.g., prison visits), based on the research base, which finds reductions in future arrests for programs that use MST and increases in future arrests for programs that use scared-straight methods.

 

Mid- to Long-Term Outcomes: Mid- to Long-term outcomes are those that will persist beyond the time that the participant/client is directly engaged in your services.

Example: In a workforce training program, a long-term outcome may be that five years later, 95% of clients are employed in the industry for which their certification is relevant.

 

Program Model Description: A program model description for a theory of change includes the key services and design elements for your intervention. It should give an idea of dosage, the characteristics of staff members who work with the participant/client, and the services and strategies that will directly help your clients achieve success.

 

Short-Term Outcomes: Short-term outcomes are those that participants/clients realize by the end of the program term or very soon after the program ends.

Example: In a workforce training program, a short-term outcome may be that a certain percentage of participants have achieved an industry-recognized certification.

 

Vision Statement: A vision statement describes how the community will be different as a result of your efforts as an organization. The community may be a specific geographic area or a specific type of individual.

Example: A workforce training program may seek to ensure all prisoners re-entering Chicago have access to living wage jobs.

 

Management Information System

Management Information System (MIS): An MIS is software that enables an organization to collect, manage, and analyze data on its organization, programs, program participants, and program outcomes. MIS options range from MS Excel spreadsheets and MS Access databases to low-cost web-based solutions to high-cost, highly customized solutions. Organizational needs and realities determine which level of solution is the best fit.

Example: For an organization that provides a single program model to 25 individuals a year, an MS Excel spreadsheet is probably the best fit for that organization, and anything more would be an investment out of line with their needs. For an organization that operates across multiple sites, offers several program models, and reaches more than 1,000 participants a year, an MS Access database may not be sufficient and this organization may need to identify a more robust, possibly customized solution.

 

Data Quality

Input Variables: These are variables that describe the characteristics of your program model and your program participants/clients.

Example: For an afterschool tutoring program, these might include instructor-participant ratios, instructor educational attainment, planned service dosage, curriculum used, whether mentors are provided, and what real-world experiences are available (e.g., field trips, college visits).

 

Output Variables: These are variables that describe the experience of participants in your program. They contrast with outcome variables in that they may have no innate value to a funder but are important achievements on the way to the outcomes.

Example: A preschool program might track attendance rates, program completion rates, and the results of a parent satisfaction survey as program outputs, tracking these variables on a regular basis to make adjustments to program performance. However, their funders may be primarily interested in the children’s performance on reading and math tests upon entering Kindergarten (data points not available for performance management during the school year).

 

Metadata: In our usage here, metadata are data points about your organization’s data and its use of data. These data points can serve as leading indicators of data quality issues, enabling staff responsible for performance management to intervene more quickly.

Example: Metadata may include variables on the completeness of attendance records (participants are consistently marked absent or present), staff time logged into the MIS, frequency of usage for various reports, or variables on inaccurate data entry such as percent of birthdates entered incorrectly.
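
One way a couple of the metadata indicators mentioned above might be computed, assuming hypothetical MIS export files and column names:

```python
import pandas as pd

# Attendance completeness: a blank status means the record was never finished.
attendance = pd.read_csv("attendance_export.csv")   # columns: session_id, participant_id, status
completeness = attendance["status"].notna().mean()
print(f"Attendance records completed: {completeness:.1%}")

# Report usage: a falling view count can be a leading indicator of disengagement.
report_views = pd.read_csv("report_log.csv")         # columns: report_name, week, views
weekly_views = report_views.groupby("week")["views"].sum()
print(weekly_views.tail(8))                           # last eight weeks of report loads
```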

 

Outcome Metrics: These are variables or combinations of variables that describe what changed for participants as a result of participating in your program. Outcome metrics tracked should be achievable through a quality program implementation and also of value to potential funders.

Example: A physical fitness program might track change in body-mass index (BMI), reported change in exercise habits, and reported change in eating habits. This program would not be well positioned to achieve an increase in grades even with the highest quality program implementation and would not be advised to track that. Funders would likely not be interested in whether participants could complete 10 push-ups at the end of the program; however, the program would be advised to track this metric if it were mission-aligned, as long as it also tracked other metrics, such as decreased BMI, that are of interest to funders.

 

Performance Management Staff: Performance management staff are those staff in your organization who have direct responsibility over designing data collection efforts, assigning data entry responsibilities, designing and maintaining the MIS, analyzing data within the MIS, and troubleshooting challenges staff have using the MIS and/or accessing reports, in addition to other performance management responsibilities such as selecting survey and observational instruments for your organization.

Example: These staff may include directors of evaluation, performance management, quality, or research; they may include analyst staff; they may include development department staff responsible for grants monitoring and reporting; or they may include frontline staff with specific data entry responsibilities. There are a wide variety of roles within an organization that may have specific performance management responsibilities.

 

Quality Assurance Process: A data quality assurance process typically employs two tactics. The first is an approval process for the creation of new client or program records: one staff member drafts the record and a second staff member reviews it before approving it for creation in the data system. The second is a regularly scheduled review of a random sample of records in your MIS. As a general rule of sampling practice, regularly reviewing a random selection of 5-10% of records for each staff member responsible for data entry should be sufficient to identify data entry issues and, by tracking the number of records with identified inaccuracies, give a sense of the overall quality of the data. Staff review each variable in the selected records to ensure they are complete, conform to standard conventions (e.g., all street abbreviations have periods or no street abbreviations have periods), and are accurately entered to the best of the reviewer’s knowledge.
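
A small sketch of drawing the 5-10% review sample per data-entry staff member; the export file and column names are assumptions made for illustration.

```python
import pandas as pd

records = pd.read_csv("client_records_export.csv")   # columns: record_id, entered_by, ...
SAMPLE_RATE = 0.08                                    # within the 5-10% rule of thumb

batches = []
for staff, group in records.groupby("entered_by"):
    n = max(1, round(len(group) * SAMPLE_RATE))       # always pull at least one record
    batches.append(group.sample(n=n))

review_sample = pd.concat(batches)
review_sample.to_csv("qa_review_sample.csv", index=False)
# Reviewers check each variable in these records for completeness, conformance
# to conventions, and accuracy, and log any inaccuracies found.
```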

 

Validated Instrument or Process: Instruments designed to quantify qualitative information are often subject to human error, most likely when inconsistent understanding of questions, definitions, and responses results in inaccurate quantification of the results. To avoid these errors, instruments and tools may be assessed by neutral parties for reliability (consistent understanding of questions, definitions, and responses across respondents and across time) and validity (the qualitative survey results are correlated with quantitative outcome measures). Continuous Quality Improvement processes can also be validated through research showing that they achieve what they aim to achieve: higher-quality programs.

Example: The Developmental Assets Profile is an example of a survey instrument that has been independently validated to show that youths’ self-reported results are reported accurately (youth understand what they are being asked) and are correlated with measurable outcomes (when a youth says they are more engaged with school, their grades improve and their attendance increases).

 

Vision Benchmarks: Benchmarks for your vision are reasonable figures your organization can track to assess progress toward the vision. Depending on the scope of your vision, these are more likely to be community-level indicators than individual-level indicators. There is a variety of public data sources you can tap if your vision does relate to community-level indicators.

Example: An organization with the goal that all 3rd graders in Uptown are reading at grade level by 3rd grade tracks the actual percentage of 3rd graders enrolled in schools located in Uptown who are reading at grade level. It should not track only the 3rd graders at one elementary school in Uptown and report those results as vision progress or achievement.

 

Leading Indicator: Leading Indicators are variables that you can observe that precede other positive or negative changes in programming, organizational management, or performance management.

Example: Tracking the number of report views in your MIS could be a leading indicator of staff engagement in your organization’s data-informed culture. If you see the number of report loads decreasing, you might expect to see lower turnout at monthly data review meetings or less timely compliance with data entry responsibilities.
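
For example, a simple week-over-week check on report views might look like the following, assuming a hypothetical usage log exported from the MIS and an arbitrary alert threshold.

```python
import pandas as pd

views = pd.read_csv("report_usage_log.csv")     # columns: week, report_name, views
weekly = views.groupby("week")["views"].sum().sort_index()

recent = weekly.tail(4).mean()                   # average of the last four weeks
baseline = weekly.iloc[:-4].mean()               # average of all earlier weeks

if recent < 0.75 * baseline:                     # 25% drop chosen as an illustrative threshold
    print("Report views are trending down; check in with staff before data "
          "entry timeliness or meeting turnout slips.")
```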

 

Data-Informed Culture

Continuous Quality Improvement (CQI) Process: A CQI process is a standardized approach to reviewing information about the quality of your program implementation, identifying strengths and weaknesses, setting goals for improvement, outlining actions that will be taken to meet those goals, and regularly assessing progress and revising the path forward as needed.

Example: Six Sigma is an example of a private sector quality control mechanism that has translated quite well to the social sector. It is a five-step process: (1) Define (what does quality look like for your organization?); (2) Measure (what do the numbers say about your current state of quality?); (3) Analyze (what can you learn from the numbers you are looking at?); (4) Improve (what actionable goals can you set to change the numbers?); and (5) Control (how can you anticipate a potential deficiency and formulate a response before it occurs?).

 

PMI Assessment Results

Emerging Result: An emerging rating, overall or for a specific component of your PMI Assessment, suggests that the component or your overall PMI is in a formative stage and, as a result, likely contains significant gaps. Some of these may significantly limit your performance and require immediate corrective action, while others may be important aspects of a thriving performance management system that can be put on a longer-term track for implementation.

Example: Lacking a management information system is a gap that may require immediate action, whereas lacking a process for tracking metadata is not as pressing and can be addressed later in the developmental process of your organization’s PMI.

 

Performing Result: A performing rating, overall or for a specific component of your PMI Assessment, suggests that the component or your overall PMI has a solid foundation but has several areas that can be further optimized.

Example: You may currently have only one full-time staff member dedicated to a performance management role. In this instance, your PMI is likely in good hands; however, adding an additional full-time role in the mid-term can add capacity to your performance management function, enabling more opportunities for data quality checks, customized report development, and facilitated use of data to inform decisions.

 

Thriving Result: A thriving rating overall or for a particular component of your PMI Assessment suggests that your component or overall PMI is operating at a very high level of rigor. There may not be many specific actions for improvement needed, but rather you may want to focus on continuously elevating the use of your PMI, for example, by participating in a community of practice or preparing for an in-depth program evaluation.

Example: You may have a highly rated Research Base and Theory of Change. However, you may wish to do a more in-depth evaluation of how well your theory of change maps to your research base, illuminating areas for revision in your theory of change and areas of focus for future research-gathering or evaluations.