How well are agencies actually complying with the DATA Act?

Federal inspectors general are finishing up the testing of their agencies’ compliance with the Data Act. It’s one of three audits required under the 2014 law. But the Council of Inspectors General changed the audit methodology between the current exercise and the one they did two years ago. Some say that change could lead to skewed results. One of them is Sean Moulton, senior policy analyst at the Project on Government Oversight. He joined the Federal Drive with Tom Temin to explain why. Read Sean’s article here.

Interview Transcript: 

Tom Temin: You looked at the instructions that IGs were given on how to test whether their agencies are doing what they should under the Data Act to report federal spending, and you found something that seems incredibly arcane in those instructions. But actually, it could have big effects on the results of the audits. Tell us what you found.

Sean Moulton: We looked at these same audits. We’ve been concerned about federal spending for a while, so we looked at the audits when they first came out in 2017 and were excited, maybe some of the few people excited, that new ones were coming out in 2019. And when I looked at them, I was rather shocked at the initial results I was seeing, because the error rates they were finding were so dramatically different and so improved. So I went and looked at the instructions and found that they had changed how they were measuring the error rate, so that even if you had the exact same number of errors as in 2017, your number would be much lower.

Tom Temin: Let’s just back up for a moment. What IGs are looking at is whether agencies are putting their spending data in the proper formats with the proper tags, such that it is correct as required by the Data Act.

Sean Moulton: Correct. They measure three different characteristics of quality: timeliness, accuracy and completeness. They’re rather self-explanatory. Timeliness is whether you got all of your data in on time. For completeness, if you have blanks in a field, that’s considered incomplete. And if you have the wrong information in there, then that’s considered inaccurate.

Tom Temin: So the change in methodology then would make agencies with a given error rate look like they are doing better under the second round of audits than they were under the first round.

Sean Moulton: Correct. In the first round of audits, what they did is they looked at all the information in an individual transaction as a whole, and they said, if any of the pieces of this record that we’re checking is late, or incomplete, or inaccurate, then the record is late, incomplete or inaccurate. If you had one error in your record, then the record was inaccurate; out of 100 records, that was, let’s say, a 1% error rate, because you have one record inaccurate. If you had two errors in there, it was still just one inaccurate record. So in some ways it wasn’t precise, because it basically discounted multiple errors in a single record. To try and solve that imprecision, they decided to start looking at each piece of information inside the record, each field in the spreadsheet. So you could have two errors, three errors or a dozen errors counted inside an individual record. But what that resulted in is that instead of looking at, let’s say, 100 records, now you were looking at more than 5,000 fields. An error rate is the number of errors over the number of items you look at, so if you grow that bottom number really big, your error rate is going to come out looking rather small.
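
To make the denominator effect concrete, here is a minimal sketch of the arithmetic in Python. The numbers are hypothetical: the audits checked roughly 57 standardized fields per record, but the sample size and error counts below are illustrative only, not taken from any audit.

    # Hypothetical sample: 100 records, ~57 checked fields each.
    RECORDS = 100
    FIELDS_PER_RECORD = 57

    # Suppose 25 records each contain exactly 2 field errors.
    records_with_errors = 25
    total_field_errors = records_with_errors * 2  # 50 errors

    # 2017-style methodology: any error makes the whole record count as bad.
    record_level_rate = records_with_errors / RECORDS

    # 2019-style methodology: count erroneous fields, but the denominator
    # grows from 100 records to 5,700 fields.
    field_level_rate = total_field_errors / (RECORDS * FIELDS_PER_RECORD)

    print(f"Record-level rate (2017 method): {record_level_rate:.1%}")  # 25.0%
    print(f"Field-level rate (2019 method):  {field_level_rate:.1%}")   # 0.9%

The same 50 errors read as a 25% problem under the old method and as less than 1% under the new one, which is exactly the shrinking effect Moulton describes.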

Tom Temin: It would look exponentially smaller, in other words.

Sean Moulton: Exactly. And we found some agencies that were very upfront about it, and they said, look, there was a change of methodology, and you cannot compare our error rate this year with our error rate from two years ago. But other agencies weren’t so upfront about it. They really claimed a big reduction. And there was one agency I appreciated, because they actually reported both types of errors, and it really underscored what we’re talking about. This was the Department of Energy. Looking at the 2019 errors, they said, we have a 3% data element error rate. That’s the new way of measuring. Only 3%. That sounds pretty good. But they also said that 49% of the records they looked at had an inaccuracy. The exact same set of errors: measured one way, your number is 3%; measured the other way, it’s 49%.
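
The Energy Department figures can be sanity-checked with the same arithmetic. Only the 3% and 49% rates come from the audit; the sample size and error distribution below are assumptions, chosen to show how both numbers can describe the same underlying data.

    # Assumed sample: 100 records, 57 checked fields each = 5,700 fields.
    RECORDS = 100
    TOTAL_FIELDS = RECORDS * 57

    # A 3% data-element error rate implies about 171 field errors.
    field_errors = round(0.03 * TOTAL_FIELDS)  # 171

    # If those errors cluster in 49 of the 100 records (about 3.5
    # errors per flagged record), the record-level rate is 49%.
    flagged_records = 49

    print(f"Field-level rate:  {field_errors / TOTAL_FIELDS:.0%}")              # 3%
    print(f"Record-level rate: {flagged_records / RECORDS:.0%}")                # 49%
    print(f"Errors per flagged record: {field_errors / flagged_records:.1f}")   # 3.5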

Tom Temin: So maybe one of the sins here, then, is that the reported error rates refer to a different thing in the latest audits than they did in the 2017 audits.

Sean Moulton: Yes. And one of the problems, as I said, is that with the new way they’re doing it, you certainly could look at it and say statistically it’s more precise. But I think it ignores the fact that most people looking at this data, who go on to look up contracts or grants and are trying to find out information, are concerned about the whole transaction. If there is an error or two errors in a record, then they start to mistrust that record. One of the things I compare this to is a news story in the paper. If the paper was full of stories and every story had an error, you probably wouldn’t think that paper was the one you should rely on. Now they might say, oh, but most of the words are correct, so we have a 1% error rate. But you would still look at that and say, this is not a very good paper. And I think the same is true here for this spending data. I don’t care if they get the amount wrong, who got it wrong, or where it came from wrong. If they’re getting something wrong in a record, then the record itself really can’t be used.

Tom Temin: Sure. So to put it another way, an agency could have 1,000 errors in one record, but that’s the only record that has an error. Or they could have one error in each of 1,000 records. That would be a much more widespread problem, because even one error in a total spending record renders it not usable.

Sean Moulton: Correct. They’ve got another audit due under the Data Act at the end of 2021. What I’m hoping is that they’ll do both, because I think that will allow us to really compare the progress that agencies have made all the way from 2017, but also keep the more precise measure from the 2019 audits they put out at the end of last year.
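
Temin’s thought experiment can be made concrete with the same kind of sketch. The figures are hypothetical, and note one adjustment: a record can hold at most as many errors as it has checked fields, so in the concentrated case the 1,000 errors are packed into as few records as possible.

    import math

    # Hypothetical: 1,000 records, 57 checked fields each = 57,000 fields,
    # and 1,000 field errors distributed two different ways.
    RECORDS = 1_000
    FIELDS_PER_RECORD = 57
    TOTAL_FIELDS = RECORDS * FIELDS_PER_RECORD
    TOTAL_ERRORS = 1_000

    # Field-level (2019-style) rate is identical in both cases.
    field_rate = TOTAL_ERRORS / TOTAL_FIELDS  # ~1.8%

    # Concentrated: errors packed into as few records as possible.
    concentrated_bad = math.ceil(TOTAL_ERRORS / FIELDS_PER_RECORD)  # 18 records
    # Dispersed: one error in each of 1,000 records.
    dispersed_bad = TOTAL_ERRORS

    print(f"Field-level rate (both cases): {field_rate:.1%}")               # 1.8%
    print(f"Record-level, concentrated: {concentrated_bad / RECORDS:.1%}")  # 1.8%
    print(f"Record-level, dispersed:    {dispersed_bad / RECORDS:.1%}")     # 100.0%

Under field-level counting the two cases look identical; under record-level counting, one agency has a contained problem and the other has unusable data everywhere.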

Tom Temin: Let me just ask you this, using the analogy of the newspaper that might have something wrong in every story. Is there a qualitative difference between errors that can happen among the different data elements within a record? In other words, if a newspaper has a typographical error, they spell the word “the” wrong, because everyone types t-e-h half the time instead of t-h-e. I know I do, and somebody misses that. That’s different than saying Hillary Clinton won the election. That would be a bigger error.

Sean Moulton: These transactions actually have more than 100 fields, and the audit doesn’t even look at all 100. They look at about 57 in total that have been normalized under the Data Act. Those have standard definitions and are considered the more important fields, so right away the audit itself is trying to focus in on the more important fields. And then, even more precisely, the 2019 audits looked at errors that affected the amounts. They recognized that if we’re getting the amounts wrong, that’s even worse. That’s the Hillary Clinton example you just gave. So they were reporting in their audits how many of these errors really affected the amount, and how far off they were. They were actually giving numerical amounts of how many dollars were misreported, which I thought was a good improvement from 2017 to 2019, to give that level of context for these errors.
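
As a sketch of the dollar-impact reporting Moulton describes, the snippet below tallies amount-affecting errors across a handful of made-up transactions. The field names and figures are hypothetical; the audits’ actual data layouts differ.

    # Hypothetical sampled transactions: reported vs. verified amounts.
    sampled = [
        {"reported": 100_000, "verified": 100_000},  # accurate amount
        {"reported": 250_000, "verified": 205_000},  # amount error
        {"reported": 80_000,  "verified": 80_000},   # some other field wrong
        {"reported": 40_000,  "verified": 64_000},   # amount error
    ]

    amount_errors = [r for r in sampled if r["reported"] != r["verified"]]
    misreported = sum(abs(r["reported"] - r["verified"]) for r in amount_errors)

    print(f"{len(amount_errors)} of {len(sampled)} records had amount errors")
    print(f"Dollars misreported: ${misreported:,}")  # $69,000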

Tom Temin: So, basically, then if you’re going to look at spending transparency, you have to look at the transparency of the transparency.

Sean Moulton: Yes, it starts to get a little meta, but it is an important factor: how are we measuring our improvements in these areas, and our accuracy and our transparency? That’s why you really have to step back and ask, what are we measuring? How are we measuring it? Are we being consistent year to year?

Tom Temin: Thanks so much.

Sean Moulton: Thank you.
