Why You Should Be Measuring Intermediate Outcomes
Do you work to prevent violence using community-based programs or approaches? Have you promised to show measurable reductions in violence in your community by the end of a two-year grant period? Really? Why?
In the world of prevention, we haven't asked this question enough. I'm here to tell you that the chances of successfully moving the needle on the prevalence of violence in your community within one grant period, using one prevention program, are slim to nil. I know this isn't a popular thing to say, as most of us are in this work to feel good about making a difference in the world.
However, today I'd like to talk about feasibility and the ways in which your program outcomes could be setting you up to fail as well as offer some suggestions for how we can be doing better. Also, while this blog post centers on domestic violence prevention as an example, the points I'm making apply to the prevention of any complex socio-behavioral health problem.
First, Some Context
For more than 20 years, I've had the honor of working with communities to end intimate partner violence and sexual violence. I'm proud to work alongside some of the most innovative and brilliant preventionists around. One thing you should know about me is that I went to graduate school to become a prevention researcher and apply my lived experience of tobacco control in California to preventing issues such as intentional interpersonal violence (including domestic violence and sexual violence). I learned quickly that interpersonal violence is much more complex and harder to address, not to mention severely under-resourced.
Let’s look at domestic violence prevention as an example. The resources allocated by federal and state governments to prevent domestic violence pale in comparison to the resources allocated to prevent most other socio-behavioral issues. For example, California, which has seen marked reductions in tobacco use over the past 40 years (and is looked to as an example of how public health works), allocated just 1.9% of its tobacco prevention funding to domestic violence prevention in fiscal 2019. Nationally, only 10 states currently receive a small amount of federal money to prevent domestic violence, and it is funded as a pilot program.
[Funders] often exploit the scarcity economy of behavioral health by asking for unrealistic outcomes to be achieved within a short grant period.
So, why are we asking ourselves to measure changes in prevalence for something as vast and complicated as domestic violence when we simply don't have the funding for it? I mostly hold funders responsible for this, because they often exploit the scarcity economy of behavioral health by asking for unrealistic outcomes to be achieved within a short grant period. (End sexual violence in your community in three years? Sure, why not?!?!) Funders also are often pressured by elected officials wanting to see results for the untenable promises they made to get elected. I'll be posting an additional blog post to address this conundrum and offer some ways out, so stay tuned for that.
For now, all of this is to say: Don’t be too hard on yourself if you realize you're overpromising what your program can achieve. You exist within a system that was built without prevention in mind, and it’s time for evaluators and preventionists to do our part locally to change this system.
The Case for Measuring Intermediate Outcomes, or… Measure What You Can Actually Change
It's the public health nurses’ conundrum that goes back centuries: How will people ever know how important a vaccine is if, thanks to the vaccine, no one is getting sick? In other words, success in prevention means a lot of things are NOT happening – and how do you measure what did NOT happen?
Measuring prevention is hard.
We have to be smart with the resources we have and measure the things that we actually can change within a given grant period. For example, you may be working to (ultimately) prevent violence, but will you have the time and resources to actually see reductions in violence? Probably not. If your programming isn't comprehensive (working across settings and populations with high dosage) and isn't funded long term, you probably won't see reductions in the incidence of violence and perpetration. However, you CAN measure changes in the factors that contribute to violence.
We have to be smart with the resources we have and measure the things that we actually can change within a given grant period.
For example, building skills for solving problems nonviolently, increasing family support and connectedness, and ensuring a connection to a caring adult are all factors that have been shown to protect someone from perpetrating teen dating violence. If the research shows that when these factors increase, the prevalence of violence is likely to decrease, then you do NOT need to demonstrate those connections in your small, community-based prevention program evaluation. What you need to show is that you're building skills for solving problems nonviolently, increasing family support and connectedness, and ensuring a connection to a caring adult. In other words, the role of local program evaluation is to draw the connections between your specific program and the factors that have already been shown to reduce prevalence.
These factors are called "intermediate factors" or "intermediate outcomes" in the world of prevention science. Prevention science specializes in the study of effective prevention interventions, and one of its major contributions has been an understanding of the pathways by which a behavior can be changed – and the importance of measuring the intermediate factors that predict longer-term behavior or outcomes. Some researchers refer to this as “opening the black box" – focusing on the factors that predict the desired change, rather than expecting long-term change with short-term funding.
In fact, understanding WHY you think a program works to impact the prevalence of a problem is essential to successful evaluation. "Evaluation using program theory [identifies] how we understand how a program works and what intermediate outcomes need to be achieved for the program to work. This allows us to distinguish between implementation failure (not done right) and theory failure (done right but still did not work). Without program theory, it is impossible to know if we have measured the right aspects of implementation quality and quantity." (The Essence of Program Theory, by Funnell and Rogers).
Just Draw It
Do you have a conceptual model, theory of change, logic model or drawing that shows how your specific prevention approach contributes to the larger, more comprehensive work of preventing incidences of violence? What part of the pie are you biting off and addressing, while your community partners work on the other pieces? Or, how are your efforts contributing to the larger long-term prevention efforts in your community?
In complex, comprehensive initiatives, we cannot always attribute a change in an outcome solely to our efforts. Just as preventing the sale of alcohol to people under the age of 21 won't prevent alcoholism in a community, implementing one healthy relationships class at the local high school won't prevent teen dating violence in that community. However, that class can increase the protective factors of those youth, and you just need to show how those factors are connected to the myriad other factors addressed in other community settings, all of which ultimately shape whether a young person chooses to perpetrate violence.
Intermediate outcomes also give us easy access to modeling – you guessed it – the overlapping risk and protective factors that interpersonal violence shares with other social and health issues. In a resource-constrained field, we have to be better at identifying and leveraging the factors that contribute to our issue as well as to other issues. Using our previous example, one could include additional factors that overlap with multiple issues:
This kind of modeling helps clear up what you're actually going to change, and what falls under the programming or responsibility of additional community partners. What would you add to the model above? Which of the outcomes would you measure in the evaluation of “your awesome prevention program?”
About the author
Wendi Siebold, M.A., M.P.H., is president and senior research associate at Strategic Prevention Solutions. She specializes in interpersonal violence prevention research, program planning and evaluation technical assistance with communities, facilitating community coalitions, and the assessment of organizational and community capacity and readiness for prevention. She holds degrees in Health Behavior & Health Education and Community Psychology, and takes a balanced approach that emphasizes scientific rigor within a realistic community context.
Special thanks to Pat Reyes, M.P.H., for her super smart contributions to this post!