Exclusive

Draft OMB memo details 10 new requirements to manage AI

The Office of Management and Budget will release a draft memo on how agencies should manage and use artificial intelligence for public comment in the coming weeks.

The Office of Management and Budget is in the process of putting some much-needed direction and guidance in place for how agencies should use and manage artificial intelligence.

OMB is laying out about 10 requirements in a new draft guidance, parts of which Federal News Network obtained, ranging from naming a new chief AI officer, to developing a publicly released AI strategy, to convening an AI governance board, to putting guardrails around the use of generative AI.

Multiple government sources familiar with the draft memo say this second draft, which runs about 25 pages, gives agencies a stronger foundation for much of what is already being done.

“OMB convened the Responsible AI Council and in talking across the government, I think many agencies have implemented much of what’s in this draft memo,” said one agency technology executive, who requested anonymity to talk about the draft memo. “What this memo is doing is pulling together content of existing executive orders as they relate to AI in an executable way. This memo tells us what to do and crosswalks to the EOs.”

OMB sent out the draft memo to agency technology and other leaders at the end of August or early September and requested comments by Sept. 8.

Sources say this second version of the memo streamlined some of the requirements, but also removed some key pieces, specifically about workforce training and education about how best to use AI.

This draft memo, which is expected to go out for public comment in the Federal Register in the next month, is a key piece of this larger effort to “step up” to AI.

“There needs to be a policy around AI. It will be interesting to see if the CIO Council is given extra responsibility to release further guidance around some of these requirements in the memo,” said another agency technology executive, who also requested anonymity. “We are getting more and more tools like Office 365 adding more capability. It’s a technology wave that we all have to get on board because there is no going back, so having guidance and policies are important. It will be interesting to see how AI and chief privacy officers work together on implementing AI. That battle will be interesting.”

White House AI EO coming too

OMB began working on the memo in May as part of a broader set of efforts focused on AI. A senior administration official said at the time that the upcoming draft guidance reflects the reality that “AI is coming into virtually every part of the public mission.”

President Joe Biden also is expected to issue a new executive order in the coming months related to AI to continue to build on these efforts and will work with Congress on new legislation to better manage responsible AI.

“Realizing the promise of AI by managing the risk is going to require some new laws, regulations, and oversight,” Biden said in July announcing voluntary commitments from seven companies to develop secure, responsible and trustworthy AI. “In the weeks ahead, I’m going to continue to take executive action to help America lead the way toward responsible innovation. And we’re going to work with both parties to develop appropriate legislation and regulation. I’m pleased that Leader [Chuck] Schumer and Leader [Hakeem] Jeffries and others in the Congress are making this a top bipartisan priority.”

The White House announced yesterday that eight more companies have joined this effort.

“These commitments, which the companies have chosen to undertake immediately, underscore three principles that must be fundamental to the future of AI — safety, security, and trust — and mark a critical step toward developing responsible AI,” the White House wrote in a fact sheet. “As the pace of innovation continues to accelerate, the Biden-Harris administration will continue to take decisive action to keep Americans safe and protect their rights.”

Focus on safety- or rights-impacting AI

The draft memo carries the safety and trustworthiness themes the administration outlined for companies down to the agencies.

For example, the draft memo would require agencies by Aug. 1 to stop using any “safety-impacting or rights-impacting” AI without an extension or waiver.

The first agency source said the memo outlines minimum practices for AI use in these areas, and offers examples like voting systems or regulatory areas like food and drug safety or nuclear power.

The source said the definitions and examples are helpful for now, but questioned whether there is enough information for agencies to sustain that understanding over time.

In the section on managing risk from using AI, OMB outlined seven areas for agencies to consider, such as running an agency program devoted to identifying and managing risks from using AI, and overseeing the development of a framework to measure the ongoing performance of AI applications and whether they are achieving their objectives.

Additionally, agencies would have to report to OMB annually with specific lists of AI purposes that are presumed to be rights-impacting or safety-impacting. Departments also would have to conduct at least annual reviews of any of these types of AI in use.

Mitigate risks of ChatGPT

At the same time, OMB is encouraging agencies to promote innovation and guarding against discouraging the use of generative AI tools such as ChatGPT, though the memo doesn’t call out any tool specifically.

The draft memo includes a section on generative AI that would require agencies to make sure they have “adequate risk mitigation” procedures in place and to ensure access to the tools.

OMB also specifically discouraged agencies from banning or blocking generative AI.

Several agencies like the General Services Administration and the Environmental Protection Agency put the brakes on how employees could use ChatGPT and other large language models earlier this year.

The second government source said it was essential that OMB called out the use of generative AI in the draft memo, as it’s something a lot of agencies probably are already dipping their toes into and including in their strategies.

The first government source added that since many agencies have issued guardrails for using generative AI, the draft memo would help ensure the mission areas are meeting minimum security, technology and data protections. The draft memo didn’t mention ChatGPT or any other specific tool, but is simply trying to create a policy foundation for agencies to use these tools.

The draft memo says agencies should promote AI innovation by identifying and prioritizing use cases that improve the agency’s mission and by removing barriers to the responsible use of AI, including making sure the enterprise infrastructure can support AI tools and developing workforce training and development plans and policies.

The draft memo gives agencies a year to develop a public strategy for removing barriers to using AI and advancing AI maturity.

A third government technology executive said the lack of details around AI maturity is a little concerning. The executive said understanding what the maturity levels are or what model OMB is referring to is an important starting point.

AI governance, leadership central focus

All of these efforts would need to be managed by an AI governance board, and agencies would have to name a chief AI officer.

The second government source said OMB asked agencies earlier this summer to name a senior official in charge of AI by July 1, and the memo builds on that request.

Several agencies already have CAIOs, including the departments of Defense and Health and Human Services, while others, like the departments of State and Veterans Affairs, have named acting officials.

OMB said for CFO Act agencies, the chief AI officer should be a member of the Senior Executive Service or at an equivalent level, while in non-CFO Act agencies the person must be at the GS-15 level or equivalent.

Agencies would have 60 days to name a CAIO and let OMB know who it is, and that person would be part of a new CAIO Council.

The CAIO’s responsibilities laid out in the draft memo include supporting interagency coordination of AI activities, including standards setting; coordinating with CFOs and chief human capital officers on funding and employee training to apply AI to mission needs; and maintaining the annual AI use case inventory.

The first government source said OMB doesn’t offer enough details about what makes someone qualified for such a role. Some agencies have put the AI responsibility on the chief data officer or chief technology officer.

“The draft memo says the CAIO must have the necessary skills, training and expertise to perform the work, but there isn’t anything about what those are,” the source said. “How is someone qualified for this role? It’s unclear what training or qualifications they really need.”

The source added that adding another person to the “C-suite” is an important step, but, at the same time, it needs to be managed so roles and responsibilities are clear, and OMB must emphasize the need for the CDO, CAIO, CIO and others to coordinate, collaborate and share.

What’s missing from OMB’s memo?

A former government official, who worked on AI during their time in government and requested anonymity because their current company didn’t give them permission to talk to the media, said CAIOs or similar leaders often spend a lot of time educating CIOs and other leaders about AI’s benefits, opportunities and risks.

Additionally, OMB would require agencies to convene an AI governance board within 60 days that would be led by the deputy secretary.

Of course, as with any new guidance like this, the big concern among sources is funding. The former government official said how agencies should fund these AI efforts is one of two big areas that will need to be addressed soon.

The other is around data. The draft memo doesn’t specifically address data quality and data sharing, which are two key pieces to making AI successful.

“Asking agencies to carve out money means this isn’t going to happen easily or consistently across the government,” the former official said. “Some agencies do a better job at carving out money because their leadership is supportive, but that’s not usual. Funding has got to come from Congress and it has to be mandated.”

The data issue has been an ongoing topic of discussion within the CIO Council, according to the second source.

“I think the council is concerned about bad data sources that are out there and how to limit what data AI looks at. I think we all realize that may be impossible to label good or bad data,” the source said. “We want people to use AI, but we are concerned about bad actors deliberately putting out bad data sources.”
