Subscribe to Federal Drive’s daily audio interviews on Apple Podcasts or PodcastOne.
Agencies don’t lack for dashboards, benchmarks, scorecards and a whole host of metrics for their programs. But do those numbers show the whole picture? And what are the questions that agencies can’t answer with the data they already have?
The Foundations for Evidence-Based Policymaking Act, which President Donald Trump signed in January, gives agencies until the end of July to name chief data and chief evaluation officers. Their jobs will focus on linking up data across agencies to make connections they haven’t been able to make before.
“I think it’s important to emphasize the fact that we serve as brokers in lots of different spaces to support evidence-building, and one of those is to really narrow and determine the challenges with data access,” Yancey said. “It’s important to really make sure that you focus on figuring out what specific barriers you have to accessing data linkages, because those require different conversations with different parties, all of which probably roll up to talking with your lawyers.”
Officials at the Agriculture Department’s Economic Research Service wanted to know why more people who are eligible for the Supplemental Nutrition Assistance Program (SNAP) don’t sign up for benefits. So ERS pulled statistical data from the Census Bureau to compare the number of people eligible for SNAP benefits with the number who actually sign up.
Part of the data came from the bureau’s American Community Survey, which goes out to more than 3.5 million people each year. Mark Denbaly, the deputy director for food economics data at ERS, said cross-referencing these two data sets might help shed some light on why more eligible Americans don’t enroll in SNAP.
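The kind of cross-referencing Denbaly describes boils down to linking a survey-based estimate of who is eligible with an administrative count of who is enrolled, then computing a take-up rate. A minimal sketch of that idea follows — the state names, figures, and the `participation_rates` helper are all hypothetical illustrations, not ERS’s actual data or methodology:

```python
# Hypothetical ACS-derived estimates of SNAP-eligible households, by state.
# All numbers are made up for illustration only.
eligible = {"VA": 410_000, "MD": 350_000, "WV": 190_000}

# Hypothetical administrative counts of households actually enrolled.
enrolled = {"VA": 300_000, "MD": 290_000, "WV": 170_000}

def participation_rates(eligible, enrolled):
    """Link the two data sets on state and compute enrollment take-up rates."""
    rates = {}
    # Only states present in both data sets can be linked.
    for state in eligible.keys() & enrolled.keys():
        rates[state] = enrolled[state] / eligible[state]
    return rates

for state, rate in sorted(participation_rates(eligible, enrolled).items()):
    print(f"{state}: {rate:.0%} of eligible households enrolled")
```

The gap between the two columns — eligible but not enrolled — is the population whose behavior the agency is trying to explain.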
But there’s more to this than just tracking and measuring data. Denbaly said agencies won’t get good results if they’re not asking the right questions.
“It’s easy to say, let’s have data analytics — and do what? You have to have the right questions and then do the right analysis to infer the right results, and that shouldn’t be underestimated,” Denbaly said.
That’s where the Treasury Department, and components like the Bureau of the Fiscal Service, find themselves right now – asking the right questions and then looking for an answer through the data.
Amy Edwards, Treasury’s deputy assistant secretary for accounting policy and financial transparency, said those questions can go after big challenges, like how to reduce improper payments through a payment integrity center the agency recently launched.
“We’ve been developing a data strategy at Fiscal that’s really focused on what use cases are we trying to solve. What questions are we trying to solve with all the data that we have? And let’s start there, and develop prototypes based on use cases that we think are most valuable,” Edwards said.
But Treasury and its components have also reached out to the public to find out what they want to know about government spending, polling tourists on the National Mall about the questions they had. That led the Fiscal Service to launch a website, called Your Guide to America’s Finances, which collects real-time data on federal revenue and spending.
But to really make a difference, chief data and evaluation officers need to find willing partners, both within their own agencies and at others. Under the Evidence Act, part of their roadmap will come in the form of agency learning agendas.
“Those are documents that highlight what are the priority research questions that we have as an agency. What don’t we know that would be useful for us to find the answers to improve performance,” said Andrew Feldman, a director in public service at Grant Thornton.
But agency program staff might not always share the evaluation staff’s enthusiasm, since those evaluations could ultimately lead to Congress slashing budgets for underperforming programs.
Thomas Kelly, the acting vice president of the Millennium Challenge Corporation, said his team has won skeptics over through its new evaluation briefs, which give an overview of what the agency intended with its programs, then outline what actually happened and the lessons learned.
“The difference between the evaluation staff and the program staff, there can often be a tension between them. Most people don’t like to be evaluated. But these evaluation briefs are kind of an easy way to start the conversation, and our evaluators have actually said it helps to draw people in,” Kelly said.
And Yancey, over at Labor, said a good way to get buy-in is to find the people who are already asking questions about their program’s effectiveness.
“Don’t try to convert those that are resistant, because you’ll waste way too much time. What works more effectively is to start to work with the people that want to work with you, and then you create champions for you. I cannot tell you how awesome it is to be in a room when … your program counterpart is saying how awesome it was to work on an evaluation.”