The White House gave agencies a push in the right direction earlier this year, when President Donald Trump signed an executive order to refocus efforts on artificial intelligence research.
Now, the Trump administration is looking to roll out the next wave of federal AI policy. New guidance expected later this month will look at ways to get more than a dozen different research agencies working together more closely on AI developments.
Later this month, the Office of Management and Budget is expected to release the final draft of the year-one action plan for the Federal Data Strategy. Federal Chief Information Officer Suzette Kent, speaking Tuesday at NVIDIA's GPU Technology Conference, said the final version will reflect more than 600 public comments OMB received on the first draft it released in June.
She said the final draft will focus on use cases for geospatial data and financial data. It will also look at ways federal data can support private-sector research in AI. Industry, Kent said, has specifically asked for health care data, as well as data for autonomous vehicles.
For the next phase of AI in government, in which agencies have begun to scale up pilot projects for wider use, Kent said C-suite leaders across multiple agencies will have to collaborate. That collaboration will also include input from the Chief Data Officers Council, which Kent said would stand up soon.
“It’s unlike other technologies, where maybe the acquisition and the IT community worked with it, or the human capital community and the financial management community. But that is not the same,” Kent said.
In the global competition to develop the most sophisticated AI, U.S. Chief Technology Officer Michael Kratsios said the country has an advantage because of what he called its “innovation ecosystem.”
That ecosystem includes research and development from government, the private sector and academia. But Kratsios said it can be hard to coordinate those AI research efforts governmentwide.
“What’s unique about the federal system of driving basic stage research and development is that we don’t have a Ministry of Science. There isn’t just one agency that doles out R&D dollars,” he said. “We have this incredible network of agencies that do a wide variety of extraordinarily different things.”
That network includes agencies as diverse as the National Science Foundation, the Energy Department, the Agriculture Department’s labs and the Pentagon’s Defense Advanced Research Projects Agency.
“All these different pockets of research have their own sort of strengths and weaknesses and places that they can focus best on,” Kratsios said.
Interagency communication on AI has improved. The Trump administration’s executive order on AI created the Select Committee on Artificial Intelligence, a panel that brings together the heads of federal research agencies and keeps them in the loop on each other’s work.
The White House Office of Science and Technology Policy also released data in September showing about $2 billion invested in federal AI research, nearly double what the government spent on AI research just three years ago.
Neil Jacobs, the assistant secretary of Commerce for environmental observation and prediction, who is also performing the duties of the under secretary of Commerce for oceans and atmosphere, said NOAA generates hundreds of terabytes of data every day.
But to get a better handle on all that data, the agency is moving to the cloud and running AI algorithms to produce better weather predictions in less time.
“The data that we’re using in the models has a shelf life. If we can’t get that data into the forecast model fast enough, it’s useless, because the forecast models kick off at a certain time,” Jacobs said.
But there are significant challenges to rolling out AI tools as well. Former Intelligence Advanced Research Projects Activity director Jason Matheny, now the director of Georgetown University’s Center for Security and Emerging Technology, said the intelligence community has spent a lot of time looking at ways to use an adversary’s AI tools against them.
“You can get a tank that is covered with a sort of form of digital camouflage, that isn’t visible or detectable by a human observer, but causes a machine learning classifier to think that it’s a school bus,” Matheny said.
Lynne Parker, the assistant director for artificial intelligence at OSTP, said the administration has to strike a careful balance between moving the technology forward and making sure it’s secure.
“You want to get it right, you don’t want to go too far in one extreme. If you’re so afraid to use the technology because something might go wrong, then you’re squashing all of those good benefits,” Parker said. “At the same time, if you only look at the benefits and you’re naïve about ways the technology can lead to some unintended consequences, then you can cause harm as well.”