How agencies are actually using data and data science to evaluate programs

Program evaluation dates almost as far back as government programs. But the art and science of program evaluation is always changing.


Program evaluation dates almost as far back as government programs. But the art and science of program evaluation is always changing. The Foundations for Evidence-Based Policymaking Act, passed back in 2018, aims to bring data and data science more deeply into evaluation. There’s even a foundation to support this work. Recently it surveyed agencies to see how they’re doing. Data Foundation president Nick Hart joined the Federal Drive with Tom Temin to discuss the findings.

Interview transcript:

Tom Temin: Nick, good to have you back.

Nick Hart: Hey, great to be here.

Tom Temin: Now, you did a survey of federal agencies that come under the, let’s call it the Evidence-Based Act, or the Foundations Act. And I should say that six months ago, the Office of Management and Budget — under the Biden administration, as opposed to the earlier administration, the Trump administration — issued its own guidance on how to comply. So did the survey look at how they are doing in light of that particular set of guidance? Or did you look deeper than that?

Nick Hart: Well, definitely deeper. And I guess, let me set the stage here. So the Evidence Act really set requirements in place for much of government to evaluate programs in a way that has never existed before. So for the largest government agencies, it’s now an expectation that you’re evaluating your programs. Some agencies had been doing this for a long time, but many had not. They didn’t have evaluation officials or, in many cases, policies to execute evaluation. The Evidence Act requires it for the large departments like Defense, Health and Human Services and Agriculture. OMB six months ago issued some guidance to take it one step further. And for the agencies that the Evidence Act did not require it of, OMB said this is probably a good idea; we should be evaluating programs across government. So OMB says evaluation is a core function of government, just like managing funds and the role of a chief financial officer or a CIO. So there are things that we expect agencies to be doing as they’re thinking about implementing program evaluation. That means you have to have a person who is tapped to execute the task, probably a team of people, or a staff, to think about what programs to evaluate, how to evaluate them and what data you might need. There are requirements around planning for evaluation. So we require agencies now, under the Evidence Act, to have annual plans that tell us what programs they will be evaluating and how. There are these things called multi-year learning agendas required by the Evidence Act. Those are intended to align with the agencies’ strategic plans that we’ll see next year.

Tom Temin: Well, the Evidence Act also has, well, the act has the word evidence in its title. So not only are you evaluating, but you’re also using evidence. And so that kind of ties in the data idea. There’s a separate DATA Act, though, that somehow bridges these two. Fair to say?

Nick Hart: That’s right. So, back in 2014, a major open data law here in the United States, the DATA Act, was about the financial information that we have in our government. So the intent was to make government spending information more transparent. And this is actually really important for evaluation and the Evidence Act. Spending, and how we’re spending government dollars and taxpayer resources, is really critical for thinking about whether programs are achieving their goals and whether they’re achieving them in a cost-effective manner. So that’s something that we’ll be looking for as these programs are going forward and how they’re being evaluated.

Tom Temin: Alright, so you surveyed agencies to see how they are doing with the Evidence Act. And how are they doing?

Nick Hart: I think we were quite surprised. There’s a lot of really great effort happening across the agencies. And I think the evaluation officers and officials that we surveyed are showing a lot of promise. They’re making great strides in implementing the Evidence Act, in many cases at a faster rate than we would have expected. And that’s not to say that this is perfect. There’s a lot of room for improvement, of course. But there are a lot of requirements that are a part of this Evidence Act. And the key piece that I think surprised us is just how collaboratively this engagement is happening. So there are a lot of leaders that are chief “X” officers in government. And really, when we’re talking about evaluation and the different aspects of data, or working with different program managers, all of those folks have to collaborate when we’re talking about evaluation. And what we’re seeing is that the evaluation officers are really taking that in stride. They’re collaborating across the agencies to ensure that they’re taking the highest priorities of the agency and really implementing their evaluation directives accordingly. So most of the departments, and the officials that responded, have clear evaluation policies already in place. They have clear expectations about what they’re supposed to do. But there are some gaps that we also observe. For example, a lot of people in the departments don’t actually know what evaluation is yet. So while the evaluation officials know, a lot of other people are still learning. So there’s a clear space for a lot of education that we can definitely see.

Tom Temin: We’re speaking with Nick Hart, president of the Data Foundation. So things sound good from the agency self-reporting standpoint, but the gaps seem to be in the evidence itself that they have available to do program evaluation.

Nick Hart: Yeah, that’s right. So obviously, to do evaluation, you need data. And one of the key themes that came out of the Evidence Act and the DATA Act in the government spending space was to make data more accessible, higher quality and ultimately more open, so you can use it for different forms of analytics. We know there are still tons of problems that exist across government with that accessibility question. There are other things beyond the evaluation questions that we know OMB still hasn’t issued guidance on. For example, around the Open Government Data Act, which also passed as part of the Evidence Act, OMB hasn’t issued implementation guidance. So if you’re an evaluator out there, hoping to use some open data or link to open data as part of an evaluation project, that’s going to be a major limitation still. So there are things like that around this whole system and how the pieces connect where we really have a great deal of progress to make. And hopefully we’ll see more leadership out of OMB and the director of OMB, the administrator of the Office of Information and Regulatory Affairs, and others, really leaning in here to champion these causes in the year ahead.

Tom Temin: Isn’t one of the difficulties, though, understanding what it is you want to evaluate about a program? I mean, it’s simple to say, well, we got this much money, did we spend it all? That’s an often-used evaluation criterion. Fairly simple. But did it serve all the people it’s supposed to serve? Or did we have improper payments because of people who should not have been served by the program but were, or did we have underpayments because there are people who should have been served but were not? That kind of question is a lot tougher. And I imagine setting the criteria for what it is you want to evaluate is one of the top jobs, the initial job.

Nick Hart: Yeah. Tom, you’ve identified one of the hardest things in evaluation, which is: What is the question that you’re trying to answer? And it’s also the most important thing to start with. So we have — and this is not surprising to those who are federal employees — many government programs that, in law, have conflicting goals; that’s part of the compromise when we do legislative negotiations. You have different perspectives about what a program should do, and instead of picking one, you pick them both, write them down in the law, and then the programs are charged with administratively sorting all of that out. Well, we then have to evaluate that and assess, you know, when we’re looking at the outcomes, how these items stack up and what’s being achieved. The reality is we can evaluate programs across those goals. We have to collect the right kinds of data, we have to be able to analyze that information, and we have to protect the privacy of the beneficiaries, the respondents to surveys and other individuals in the American public. But we can answer questions as long as we have the data and the right kinds of methods and the people to do the analysis. And that’s really what we’re talking about here. That’s the core element of having these evaluation functions across government. And the core goal of the Evidence Act is to say the system must be in place; we want to be able to use this data that, in many cases, we’re already collecting across government. So setting those goals, identifying those goals, that’s an activity that really is supposed to be happening during strategic planning and the design of programs, at the outset. But in many cases, we’re coming in years later, after programs have been created, to do just that. But we can do that. And I think that’s something that we’ll be doing more and more in the years ahead as evaluation takes a stronger role across government.

Tom Temin: I guess program evaluation, at first blush, seems like actuarial evaluation. Sounds really dull. But when you get into it, it’s pretty interesting stuff, isn’t it?

Nick Hart: Well, I think it’s fascinating. I’m a PhD program evaluator, so I’m a little biased in answering this. There’s certainly an actuarial component, right? So understanding the cost effectiveness of programs is a very important component. But even more important is understanding the outcomes of the programs. So if we say the food stamp program is going to improve a certain outcome for an individual in the American public or a child, do we see that outcome manifest in reality? That matters to somebody’s life. Determining whether that actually happens, and whether we can continue to improve it yet more — that is what government is about, and how we can continue to use government services and programs to really make a difference in the lives of the American people. Evaluation has a huge role to play in that. So if we’re implementing evaluation programs and activities correctly, yes, it’s making government work better. It’s making government work more efficiently. It’s ultimately helping all these programs accomplish their missions.

Tom Temin: And you have a long list of recommendations for agencies based on what you learned in the survey. Maybe just go over a couple of the most important ones.

Nick Hart: Yeah, so you know, not surprisingly, and I feel like a lot of times I beat the drum on the need for resources. But the reality is, you can’t do this without resources. So this is a combination of having the right expertise and people who can do evaluation, collect the data and manage the information. Sometimes this requires appropriations; in some cases, it requires allocation of funding within an agency. So this is an obligation for the agencies, for OMB and even for Congress itself to ensure the right levels of resources are really present.

Tom Temin: And by the way, let me just stop you right there for a second. Is there any kind of metric or standard for, say, the percentage of a program’s dollars that should be devoted to evaluation? Suppose you launch a program, and Congress, as it’s wont to do, throws 3, 4, 5, $700 billion at something. 1% should be evaluation? 0.5%? Is there any metric there?

Nick Hart: There are a number of different metrics that are out there and suggestions that have been made. Back in 2017, the Commission on Evidence-Based Policymaking suggested a minimum of 1% for evaluation and data-related activities. But in many cases, that’s not enough. Particularly when we’re talking about data collected from state and local partners, you know, think about the vital records system, for example, that we’ve relied on so heavily during the global pandemic in the last couple of years. That’s a system that, you know, probably needs more resources than it’s currently being allocated, for example. So, you know, it’s definitely an area where, probably across programs, there’s a great variety of funding needs.

Tom Temin: Because 1% of a trillion is $10 billion. That should get you some pretty good evaluation.

Nick Hart: Well, that’s right. So the Social Security Disability Insurance program is $140 billion a year. 1% of that program is a lot of money.

Tom Temin: Anyhow, so some of the other recommendations.

Nick Hart: Well, there’s definitely a clear recommendation here about training and education needs for evaluation across government. And this is definitely relevant for senior political appointees and senior executives on the career side, but also really for the evaluation officers themselves, to do more education about the role they can serve and play in improving the mission of the agency. So really helping folks just understand what evaluation is and how it can improve a variety of activities. Then there’s one really important topic, which is the use of evaluation results. So it’s one thing to collect data, one thing to conduct an analysis, and an entirely different thing to use that analysis to drive a decision. And that’s really what evidence-based policymaking — evidence-informed policymaking — is about. We’re generating evaluations so that they inform decisions to improve policies and programs. How we use that information is really important. Sometimes this can be really hard. You know, there are a lot of political decisions made out there using a variety of factors, which sometimes are value-based, sometimes science-based, a whole number of things. Evaluation should always be one of those inputs that has an important seat at the table, so that we can say this is the impact that this program has, and if we increase funding or decrease funding, this is how it will be affected. So we know we have a lot of work to do on the use side. And this is a really important one for going forward.

Tom Temin: And I can’t stand the word “governance,” but it does raise the question of program ownership. The program manager is kind of the essential function in government. And that’s a great job. But then, as you mentioned earlier, you have all these CXOs, this, that and the other people surrounding the program people on financial, on procurement, on technology, and so on. And then you have evaluation officers. Do they generally report, or should they report, to the program manager? Or should they be some kind of an independent function, as if we don’t have enough auditors like IGs and GAO already?

Nick Hart: Well, there is an important angle to having some independence for the evaluation function. And that’s because when you’re getting the results from an evaluation, you don’t want them to be inherently biased by those who are implementing a program. And this is not to say that the program managers have that bias, but you need the function to have some independence so that it can be viewed objectively. So generally speaking, there’s a role for evaluators to be part of a program but also independent of a program. The way the Evidence Act is written, it suggests that the evaluation officials and officers should be separate and independent. Generally, that means reporting to someone like a deputy secretary, a fairly senior-level official, and generally we expect it to be a full-time position in large departments. There’s a lot of work to be done here. So if you’re dual-hatting your evaluation official, you’re probably doing this wrong, because there’s just way too much to be done for them to also be the CIO and the CDO and, you know, many other things at the same time. And importantly, they have to be senior enough in the agency, in the department, to be able to engage with other senior officials in a meaningful way. And sometimes that involves asking hard questions, and really challenging: Is this the right goal? Are you asking the right questions about how to implement your program? Are you thinking about measuring the right things? When we ask those difficult questions, that’s really how we can improve the programs over time. We’re going to learn and improve by doing these evaluations over time.

Tom Temin: And a final question — is there any difference between program analysis, which is a really old word, and program evaluation?

Nick Hart: Well, in fact, program analysis and program evaluation are very closely related. We’ve been doing program analysis literally since the beginning of government. Evaluation is a slightly newer concept. It’s a field that has grown over the last 30 to 40 years. It has very specific methods and approaches. And one of the most important distinctions is that evaluation often has participatory processes. So instead of just a researcher coming in and saying I have data, I can answer questions, the evaluator comes in and meets with the program managers to understand the programs, the goals, the missions, what is needed, how the participants interact with the program. And really tries to delve into providing meaningful analysis, but also — and importantly — recommendations that are useful for the decision maker. So the whole point of the evaluation is to build in the recommendations that can improve a program, and that’s the important distinction.

Tom Temin: So it is two separate functions though, and will remain so?

Nick Hart: Very much related, but in my mind, separate functions.

Tom Temin: Nick Hart is president of the Data Foundation. Thanks so much for joining me.

Nick Hart: Thanks for having me, Tom.

