Contractor associations are okay with the new OMB memo on AI

Trade associations connected to federal technology generally back the latest White House directives on artificial intelligence. A recent Office of Management and Budget memo gives agencies detailed guidance on making trustworthy and secure AI systems. For one view, the Federal Drive with Tom Temin talked with the Executive Vice President of Policy at the Information Technology Industry Council, Gordon Bitko.

Interview Transcript:  

Tom Temin And when you were a federal CIO, this whole artificial intelligence question really didn’t come up. I mean, people knew it was out there, but it wasn’t really something close to the toolbox for CIOs, was it?

Gordon Bitko It was not. I think what we’ve had is a real tipping point in computational ability that’s much more available and accessible, much more widely and readily today than it was. But as you know, Tom, the reality is a lot of what we’re seeing now are things that we did talk about 4 or 5 years ago when I was in the government. A lot of the advanced machine learning capabilities, a lot of those tools were there already.

Tom Temin And the White House memos in the Biden administration have been lengthy and detailed, but there’s a lot of detail that needs to be in there. Is there anything in ITI’s view or in your experience, that’s fundamentally different about AI and how to build AI systems than any other kind of logic-based application using software?

Gordon Bitko That’s a great question, Tom. To your initial point, there is an awful lot of specificity in the executive order and in the OMB guidance memo, beyond what I would say is the normal sort of guidance on a technology issue coming from the White House to agencies. The reality is that I think a lot of this is about the perceptions of risk associated with the use cases of AI that people have talked about, and the fear about where the systems are going to go and how reliable and how accurate they are. But the reality is, if you look at a lot of what the memo, the guidance to agencies, expects out of this new role, the chief AI officer, it’s no different than what chief information officers are supposed to be doing for agencies already. What’s the inventory of all the technology you have? How is it being used? How are you managing the risks? How are you ensuring you have the right resources in place and prioritizing things? Those are all things that the CIO is supposed to be doing today.

Tom Temin And it sounds like in the case of AI, though, it’s not strictly a CIO concern, because of the rise of the data officer, which is somewhat separate from the CIO — in some cases, it’s not even in the CIO channel. That varies from agency to agency. But the data issue is a big part of AI, because the data goes into the AI, as opposed to traditional applications, which produce the data.

Gordon Bitko That’s a great point, Tom. A couple of observations on that. One is I think it’s going to be confusing for agencies. They’ve got a chief data officer now, a chief AI officer, a chief privacy officer who’s got responsibilities around a lot of these data concerns, a chief information officer, a chief information security officer, maybe a chief technology officer. And as you noted, sometimes they’re co-located organizationally, sometimes they’re not. Sometimes their priorities are well aligned with the overall mission. Sometimes they have different priorities because they report to different parts of the organization. That’s going to be a really big challenge for agencies to figure out. Number one. Number two, the point about the importance of data is absolutely critical here. Agencies have been slowly coming around to the importance of that for quite a while and making investments. But that’s lagged. That’s one of the reasons why I would hope that this will give the impetus to agencies to make the investments that they need. There is a federal data strategy. They’ve been slowly working towards it, but it hasn’t gotten the attention that it really should. And maybe this is a way to move that forward.

Tom Temin And maybe the memo didn’t state this explicitly, but in some ways it kind of reflects the age-old idea that ultimately it’s the program and the program manager, program owner, business line owner that’s ultimately responsible for what happens in that program, including the IT and whatever effects AI would have. Fair to say?

Gordon Bitko Absolutely. I think for the AI officers to be effective, they’re going to have to be in this in-between space between the technologists — the CIO, the CTO, the security people — and the mission, the businesspeople, and understand that big picture for the AI use case. What is the mission? How is this AI tool or data system going to help solve a problem for the agency in ways that we couldn’t previously? They’re going to have to serve as that bridge. And if they don’t develop that understanding and expertise in the mission, they can’t be successful.

Tom Temin We’re speaking with Gordon Bitko, executive vice president of policy at the Information Technology Industry Council. And then, of course, whatever agencies must do means contractors and companies are going to have to do it. And that’s where it kind of runs downhill to the companies. What’s your best advice for all of the integrators, the developers, the application providers, the cloud people that are going to be impacted because AI is so much in demand?

Gordon Bitko Well, the first thing I would say is that there is a recognition of that on the side of government. There is an RFI specifically looking at how to do responsible procurement of AI for government needs. My hope is that the government realizes that the most responsible procurement it can do is to work closely with industry, with the whole ecosystem you just mentioned, to say: we want to do this in a secure, reliable way, but using commercial best practices. Far too often, as we’ve talked about in the past, the government comes up with a long list of requirements for, I think, good reasons. But what you end up with then is building some custom one-off solution that is hard to maintain, hard to support, hard to modernize, and eventually you end up with a 20-year-old system that nobody really knows what to do with anymore. I would really hate to see that happen in the case of AI, at the pace the technology is moving. The best thing that government and industry can do is work together to get solutions into the hands of the people who need them.

Tom Temin It seems like a contractor that’s going to be dealing with algorithms and delivering them almost has to have built into its basic operating plan, as a company, the controls needed for AI, rather than try to make a bespoke AI safety solution, let’s put it, for each and every contractor task order.

Gordon Bitko That’s right. We would really like to see, for example, the NIST AI risk management framework, and the guidance built around it, be adopted broadly. Then government agencies can have confidence that the contractors they’re working with understand risk, understand the use cases for their technology, and are putting in place the things necessary to ensure that it’s being done safely and securely — that the data being used to train the system is representative, that it’s not biased. All of those are valid concerns, and there are ways to address them. One of the areas where I think the OMB memo goes a little bit astray is that instead of saying follow that approach, they come out with a laundry list of all the things that they think are high risk. Sure, undoubtedly many of them are high risk, but the specifics of every use case are going to vary. And we’d really rather say, let’s use the risk management framework, rather than have this other sort of prescriptive approach.

Tom Temin And can you envision a time when companies won’t really be selling AI, just as agencies won’t be buying AI — they’ll simply be buying and selling contemporary applications, a component of which happens to be artificial intelligence?

Gordon Bitko That’s already happening today, Tom. It’s a great point. And one of the questions that we’ve had, and one of the challenges — to give you an example, we have in the ITI membership a number of companies who provide security solutions in one way or another. A lot of the way that they build their models to understand security threats is using big data, machine learning, AI models that collect millions or billions of pieces of endpoint data about what the threats and activity are. And they need sophisticated tools to help them understand: is this an indicator of a threat or not? And then they provide the results of that to the government, to the customers who are using those products and services. That’s going to happen more and more across the board, where people are going to want those capabilities. They’re going to want that because the amount of data is otherwise too large to deal with. It’s not the sort of thing that you can solve in other ways.

Tom Temin Sure. And just a final question on the talent base in industry and government, is it sufficient to cover these needs so that everyone can act in a competent manner and still fulfill what OMB wants? And really, what OMB wants is basic good practice and good hygiene for anyone using AI.

Gordon Bitko I think that’s what they would call a leading question. Right, Tom? We know, when it comes to technology, that there is never enough trained, skilled workforce across the board. There are individual agencies that are more technically inclined, are operating from a higher starting point, and are probably in pretty good shape. But across the board, we need to invest in joint solutions between government and industry, in public-private partnerships, to raise the overall skill level. And then I think a related point to that, Tom, is that lots of job descriptions over time, in the government and outside, are going to evolve, because people are going to want and need to take advantage of new tools. And so, what does that mean? And what does that mean for retraining the existing workforce so that they know how to use those tools, become more effective, and be more focused on the actual, real things that we need them to do that can’t be solved by AI — that really do need a person looking at something and making a decision.

Copyright © 2024 Federal News Network. All rights reserved.