
As AI goes mainstream, NSF programs trying to ‘respond to the moment’

A new program at the National Science Foundation is asking the question: what happens when an algorithm graduates from training data to real-world data that continues to expand and grow?

Federal Monthly Insights - Artificial Intelligence - September 12, 2023

Federal agencies are increasingly excited about the results they’re getting from training artificial intelligence systems with their data. But that’s mostly what it is so far: training, with a carefully curated and limited data set. Now a new program at the National Science Foundation is asking the question: what happens when that same algorithm starts getting fed real-world data that continues to expand and grow? Will the results remain as accurate?

NSF’s new Safe Learning Enabled Systems program wants to apply principles of safety learned from building physical things, like bridges and airplanes, to software.

“How do we build safe software that isn’t just safe when we first deploy it, but remains safe even as it continues to adapt to the data that it’s given?” Michael Littman, director of the Division of Information and Intelligent Systems at NSF, said on Federal Monthly Insights – Artificial Intelligence. “This is an outstanding problem. This is not a small problem. But we’ve put that challenge out there to the research community and we’ve gotten in what appear to be some really fantastic proposals.”
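To make the question concrete, here is a minimal sketch of one way a deployed system could watch for that kind of change. It is a toy illustration, not an NSF method: a two-sample Kolmogorov-Smirnov test compares the curated distribution a model was trained on against the real-world data now flowing in, and flags when the two have drifted apart.

```python
# Toy illustration (not an NSF method): flag when live data drifts away
# from the curated distribution a model was trained on, using a
# two-sample Kolmogorov-Smirnov test on a single feature.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Curated training data: one feature, well-behaved distribution.
train_feature = rng.normal(loc=0.0, scale=1.0, size=5_000)

def drifted(live_feature, reference, alpha=0.01):
    """Return True if live data looks statistically different from training data."""
    stat, p_value = ks_2samp(reference, live_feature)
    return p_value < alpha

# Early deployment: real-world data still resembles the training set.
live_early = rng.normal(loc=0.0, scale=1.0, size=1_000)
print(drifted(live_early, train_feature))   # False: distributions match

# Later: the world shifts, and the model's training-time assumptions no longer hold.
live_later = rng.normal(loc=1.5, scale=2.0, size=1_000)
print(drifted(live_later, train_feature))   # True: time to re-evaluate safety
```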

In a way, the program is looking to go even further than the standard principles of safety. In physical constructions like bridges and airplanes, engineers have to consider things like metal fatigue, where the materials degrade over time and eventually give out. The goal in that case is to maintain and preserve the materials for as long as possible.

Littman said this AI program is looking beyond maintaining to actually improving results over time through the introduction of more data. The idea is that the algorithm will learn to be better, rather than degrade. NSF wants to see more data yielding better results, rather than drift in biases and outcomes.
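One way to picture that hoped-for behavior is an online learner whose held-out accuracy is tracked as each new batch of data arrives. The sketch below is a toy setup built on scikit-learn’s SGDClassifier, not anything NSF-specific: fresh data should nudge accuracy upward, and in a safety-critical deployment a drop would trigger human review rather than silent continued operation.

```python
# Toy sketch (not an NSF system): feed an online learner growing batches
# of data and track accuracy on a fixed held-out set, checking that more
# data improves results instead of letting quality quietly drift down.
import numpy as np
from sklearn.linear_model import SGDClassifier
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=6_000, n_features=20, random_state=0)
X_test, y_test = X[:1_000], y[:1_000]        # fixed held-out evaluation set
X_stream, y_stream = X[1_000:], y[1_000:]    # data that "continues to expand and grow"

model = SGDClassifier(random_state=0)
classes = np.unique(y)

for start in range(0, len(X_stream), 1_000): # batches arriving over time
    X_batch = X_stream[start:start + 1_000]
    y_batch = y_stream[start:start + 1_000]
    model.partial_fit(X_batch, y_batch, classes=classes)
    acc = model.score(X_test, y_test)
    # In a safety-critical setting, a drop here would trigger review,
    # not silent continued deployment.
    print(f"after {start + 1_000:5d} samples: held-out accuracy = {acc:.3f}")
```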

And that has a lot to do with the data itself. For example, a facial recognition algorithm trained on 120 faces with varied skin tones evenly distributed will produce much better results than one trained on 10,000 mostly white faces.

“This is something that we understand both from the mathematical standpoint but also from the social science standpoint,” Littman told the Federal Drive with Tom Temin. “They all kind of agree that this is the phenomenon that you get: If you train with biased data that doesn’t really cover the space particularly well, then these systems are going to fall into traps. They’re going to take shortcuts because they’re lazy. Fundamentally, these are lazy systems. They’re just trying to do what the data tells them they need to do. So if you give them limited data, they’re going to be limited systems.”
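Littman’s “shortcut” point lends itself to a small simulation. The sketch below is a toy stand-in for the facial-recognition example, using made-up two-group data rather than faces, with an invented make_group helper: a spurious feature happens to track the label in the overrepresented group, so a model trained on 10,000 one-sided examples leans on it and falls apart on the group where the shortcut breaks, while 120 balanced examples force it back to the genuinely predictive signal.

```python
# Toy simulation of the "shortcut" trap (synthetic data, not faces):
# a spurious feature tracks the label in the overrepresented group but
# reverses in the underrepresented one, so a model trained on 10,000
# one-sided examples leans on the shortcut and fails where it breaks.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shortcut_sign):
    """One synthetic group: x1 genuinely predicts the label; x2 is a
    shortcut that mirrors the label in one group and flips in the other."""
    x1 = rng.normal(size=n)
    y = (x1 + 0.3 * rng.normal(size=n) > 0).astype(int)
    x2 = shortcut_sign * (2 * y - 1) + 0.5 * rng.normal(size=n)
    return np.column_stack([x1, x2]), y

X_a, y_a = make_group(10_000, shortcut_sign=+1)      # overrepresented group
X_b, y_b = make_group(60, shortcut_sign=-1)          # underrepresented group

# "120 balanced examples": 60 from each group, versus 10,000 one-sided ones.
X_bal = np.vstack([X_a[:60], X_b])
y_bal = np.concatenate([y_a[:60], y_b])

X_test, y_test = make_group(2_000, shortcut_sign=-1) # evaluate on the minority group

for name, X, y in [("10,000 one-sided", X_a, y_a),
                   ("120 balanced", X_bal, y_bal)]:
    acc = LogisticRegression().fit(X, y).score(X_test, y_test)
    print(f"trained on {name}: minority-group accuracy = {acc:.2f}")
```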

That’s not the only new program centered on AI currently in development at NSF; the agency has been involved in AI research for decades, and Littman said it’s striving to “respond to the moment,” now that AI has hit the mainstream and sparked imaginations, particularly recently with examples like ChatGPT.

“There’s a tremendous amount of attention, there’s a tremendous amount of opportunity, there’s some new vistas to explore,” he said. “And so we want to make sure that the academic community has the resources that they need to pursue these questions and really answer what society needs them to answer.”

Littman said in his own division, there’s the Information Integration and Informatics program, which has focused for years on the problem of bringing real-world data to bear on solving problems. That’s foundational: without a clear path to integrating real-world data into AI, the question of how to use that data safely, to improve rather than degrade the algorithms, would be merely academic.

Then there’s the Human-Centered Computing program, focused on building systems that interact well with people. Eventually, these systems need to be able to explain themselves and how they arrived at their conclusions to the people using them, who likely won’t have the technical expertise to figure that out on their own. Right now, even the most technically proficient people struggle to trace the processes used by things like neural networks or deep networks.

Finally, there’s the Robust Intelligence program, which revolves around understanding the limitations and power in the core algorithms that actually make AI and machine learning possible.

“And so these are just the core programs,” Littman said. “Then we have a lot of programs that we’ve spun up to address particular problems that have come up in AI and computer security.”

