
5 Q’s for Jason Saul, Chief Executive Officer of Mission Measurement

by Joshua New

The Center for Data Innovation spoke with Jason Saul, chief executive officer and founder of Mission Measurement, a social-change focused analytics firm based in Chicago. Saul discussed how most social programs don’t use meaningful data to measure their progress and how Pandora Internet Radio inspired him to take a predictive analytics approach to social change.

This interview has been edited.

Joshua New: Mission Measurement focuses on helping organizations make better decisions about how to affect social change. Why is there a need for this?

Jason Saul: My background is in analyzing government programs and policy, and while I was doing bonds for public projects it struck me that there wasn’t any really good way to measure these programs. I started something called the Center for What Works to develop these methods, publishing books and doing consulting on measuring social impact, and then a couple years ago the governor of Illinois asked me to be on a bipartisan commission called the “budgeting for results commission,” which was set up by the Illinois legislature to figure out what government programs actually work so we could fix the state’s budget problem.

We ran into a major quagmire on the commission because we couldn’t figure out how to get the right data for the legislators and the budget analysts to actually make recommendations for the budget. That left us with two unacceptable alternatives. On the one hand, we had a bunch of administrative performance metric data, which had really low value in terms of actually determining the impact or value of a program: it was just data about things like “number of miles of road laid” and “number of carnival rides inspected.” Most governments call this “performance data,” but I just call it a low-grade sludge of useless administrative data. On the other hand, we had university researchers who could crawl through these programs for years and perform randomized controlled trials, giving you an answer about what works five years after the fact. Given that we had to issue a budget in three months, neither of these was an acceptable option.

We had to invent a third option, and I realized that we needed to get government to start generating predictive data about what programs work. Pretty much every field in the world has tools to make decisions about what to do based on predictions about what will work, except for the field of policy. For example, no lawyer would ever go into court without comparing their case against historically similar cases in a legal database like LexisNexis. No investor would ever take a stake in a public company without analyzing the company’s performance and making a prediction about its future. In the policy space, without these tools we’re essentially just guessing, which means most of the decisions the public sector makes are not supported by good data, benchmarks, and analytics.

New: Could you explain how the Impact Genome Project, the technological backbone of Mission Measurement, works?

Saul: So this third option I came up with was to create predictive data. We can pull from thousands and thousands of studies and social research about what actually works, structure all of that unstructured data, and turn it into quantitative metrics that we can use to predict outcomes and make better programs. To do this, I needed people who were good at measuring “squishy things,” like social outcomes.

I was listening to Pandora Internet Radio when a Taylor Swift song came on and I thought, “wait, I don’t like Taylor Swift!” But then I listened to the song and realized I liked a lot of its musical attributes, something that Pandora’s Music Genome Project had figured out about me, based on my other tastes, that I never would have. I found the guy who invented that system and worked with him to develop a similar approach for social impact. So basically, the Impact Genome Project is the largest meta-analysis of social science ever done.

New: How do you quantify aspects of social change?

Saul: The Impact Genome Project is a public-private venture, and we partner with a lot of major foundations to underwrite this research. Our goal is to look at the entire evidence base of social science, meaning all published and unpublished studies ever done on every issue of social impact. There are 11 “genomes” that we’ve established, related to art, culture and identity, youth development, economic development, criminal justice, health, education, and so on. For each genome, we collect all of the studies we can find, grade them on the quality of their evidence, and then decode all of these studies. That means we have analysts read through these studies to figure out the efficacy factors of all the programmatic variables unique to a program: the “genes” that drove a particular outcome. If you look across thousands and thousands of these programs, you start to see a lot of the same components popping up in different ways or to different degrees, which allows us to standardize these “genes,” find which are the most effective, and determine which “genes” a program should prioritize over others.

The Impact Genome Project has two parts. The first is building this universal evidence base, and the second is building tools to use this evidence. This allows us to take a program, run it through a genomic analysis based on our evidence base, and learn a lot about it. This lets us say things like, “every other program like mine that was successful used these certain factors,” which can help us improve existing programs, and even develop new programs entirely. This also helps us benchmark programs against others to estimate the cost per outcome of a particular program versus a different but similar one. We’re finishing building our education and health genomes now and will finish the rest over the next three years.
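The benchmarking idea described above, decoding a program into standardized factors and comparing cost per outcome, can be sketched in a few lines of Python. Everything here (the “gene” names, the efficacy weights, and the simple linear scoring model) is a hypothetical illustration for the reader, not Mission Measurement’s actual methodology or data:

```python
# Toy evidence base: an efficacy weight for each standardized "gene"
# (programmatic factor). These names and numbers are made up.
GENE_EFFICACY = {
    "parental_involvement": 0.30,
    "frequency_of_instruction": 0.25,
    "small_group_tutoring": 0.20,
}

def predicted_outcomes(genes, reach):
    """Estimate outcomes produced by a program, given its genes and
    the number of people it reaches (a deliberately simple linear model)."""
    score = sum(GENE_EFFICACY.get(g, 0.0) for g in genes)
    return score * reach

def cost_per_outcome(budget, genes, reach):
    """Benchmark metric: dollars spent per predicted outcome."""
    outcomes = predicted_outcomes(genes, reach)
    return budget / outcomes if outcomes else float("inf")

# Two hypothetical programs with the same budget and reach but
# different "genes"; the one with stronger factors benchmarks better.
program_a = cost_per_outcome(
    100_000, ["parental_involvement", "small_group_tutoring"], 1_000)
program_b = cost_per_outcome(
    100_000, ["frequency_of_instruction"], 1_000)
```

In this sketch, program A costs $200 per predicted outcome versus $400 for program B, so a funder comparing the two “similar but different” programs would favor A, which is the kind of cost-per-outcome comparison the interview describes.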

New: What are some early success stories from this kind of data-driven approach to social change?

Saul: Right now we’re still in the development phase, so there aren’t any success stories just yet. But with the help of our clients we’re measuring specific programs with some really promising results. For example, we worked with 3M to analyze all of their grants for science, technology, engineering, and math (STEM) education. Then Adobe came to us and asked us to measure all of their STEM grants, and then Microsoft, Intel, and United Way asked us to do the same. Because we keep measuring the same outcomes, we were able to develop a standardized method for this process and build out our genomes.

So other than STEM, we’ve completed a number of genomes for issues like microfinance, food security, college readiness, and career readiness, and we’re going to be making these available this fall.

New: On your website, you note that the United States spends $6.3 trillion on programs devoted to social change every year. If you could speculate a bit, what do you think would happen if all of these programs were based on your data-driven approach?

Saul: Three things. First, the way we’re going about evidence-based programs today is wrong. We have a binary approach of “you are evidence-based, and you are not.” The only reason a program might be considered evidence-based is that someone could afford to evaluate it and write up a study. But a lot of programs couldn’t afford evaluators, so we don’t really know anything about them. Our approach, we think, will democratize evaluation and evidence so that we can look at any government or private program and determine the extent to which it relies on evidence. This will give policymakers and practitioners a wealth of knowledge to navigate these kinds of programs, which will dramatically lower the costs of evaluation.

Second, this approach can promote a lot of innovation. Some people think it will stifle innovation, and they say to me, “what if there’s a program that’s never been studied before? Your genome will write it off or think it fails!” But actually the opposite is true. It doesn’t matter whether or not your program was tested before, because we’re looking at the underlying factors that drive outcomes. So if we’re looking at reading proficiency, for example, we can definitively say what degree of parental involvement, frequency of instruction, and so on are likely to be effective. No matter what the new program is, we can see the roles these variables play and estimate its likelihood of success. We’re not talking about inventing some new strain of social impact, but about evaluating the delivery mechanisms for factors we know change behavior. Now that we can standardize these factors, the potential for innovation in these delivery mechanisms skyrockets.

Third, we can dramatically reshape the value equation for government. Now that we can benchmark the cost-per-outcome of creating a job, or reducing obesity risk, or getting a kid to pursue a STEM career, we don’t have to guess anymore—we can know. We don’t have a resource problem in government, we have a resource allocation problem. We don’t really know what works, so we’re spraying money all over the place, hoping it works, and then measuring it after the fact. If we can restructure this approach and fund things that we know will work, we can allocate resources so much more efficiently. I actually predict that we could cut the entire government budget in half and help twice as many people once we know how to get the most bang for our buck. Using data instead of guessing is the only way we can get closer to that goal.
