AI is one of the most potent tools supplementing the cybersecurity efforts of federal agencies, but DHS is ensuring it's implemented responsibly and securely.
Gina Scinta, the deputy CTO and security evangelist at Thales Trusted Cyber Technologies, said agencies need to recognize the benefits and risks of quantum and AI.
In today’s budget environment, agency leaders are often asked to do more with less.
Department officials are building on these fraud-prevention efforts and expect to prevent $12 billion in improper payments annually by 2029.
"We're working with our development teams and we're working with testing. And then we have a policy that covers all of that," said Young Bang.
Newer technologies like automation and AI may offer new solutions to this age-old cybersecurity problem, but they can also be double-edged swords.
A new skills benchmark model at HUD will assess where employees are in their understanding of AI and recommend a personalized path for training and development.
Government agencies have an obligation to provide accurate information to citizens, and a bad bot can have both legal and moral implications.
The Army is exploring the idea of creating a separate path or a sub-path within the software acquisition pathway specifically for AI.
Michael Berkholtz, a senior manager for technology lifecycle services at GSA, said the agency is analyzing emerging technologies based on four criteria.
"General intellectual property law in the United States is still trying to figure out AI and a number of features of AI," procurement attorney Dan Ramish said.
"There are only 75 PEO offices inside the four services. And I decided to go figure out who they were," said Steve Blank.
There are ten key focus areas for current and prospective agency CAIOs to consider in order to maximize the benefits and minimize the risks of AI.
The Office of Management and Budget told agencies in a new memo to proactively manage AI risks, promote a competitive market and manage business processes.
DOJ says its policy will encourage independent security and safety research, but will large AI companies follow suit in encouraging vulnerability disclosure?