Insight by Maximus

Want to operationalize AI responsibly? Take a holistic and scalable approach

Successful and responsible use of AI requires a roadmap based on desired outcomes and four other critical elements. We talk with AI guru Kathleen Featheringham ...

Part of the confusion over artificial intelligence right now is that, for many people, there’s no single definition of what is and isn’t AI.

“Some people may think that more basic types of robotic process automation, RPA, is not AI. There are others who do. There are others who don’t think machine learning is AI. They think that AI is reserved for things like very high-end neural nets and things. With that, there just becomes a lot of complications because there’s not one set definition,” pointed out Kathleen Featheringham, vice president of artificial intelligence and machine learning at Maximus.

But if an organization focuses instead on how to operationalize AI responsibly across an enterprise, that provides context for implementing it, she said. Then, an agency can develop a roadmap for its AI journey.

Featheringham offered five elements necessary for developing that roadmap — one that takes a holistic approach and is scalable:

  • Goals and outcomes
  • People and culture
  • Data
  • Technology
  • Processes, procedures and governance

“With those big areas, you can then think about your roadmap,” she said, adding that an AI roadmap must integrate all five because they have dependencies on one another that will evolve over time.

AI roadmap: Goals and outcomes

Noting that this is not a new idea, Featheringham said agencies need to avoid launching AI projects just because the technology might be cool. The goal instead must be “not just doing AI for the sake of AI, but rather focus on AI as part of helping to achieve a certain mission outcome.”

To start, she recommended clearly defining those desired outcomes and goals, which can then serve as the baseline against which the agency measures the value of its AI use.

AI roadmap: People and culture

Organizations often want to kickstart an AI project with the technology itself, but Featheringham advises that, after detailing the desired goals and outcomes, an agency focus next on people and culture.

An agency’s use of AI should represent the core values of the organization — “the values of an organization and the use of AI responsibly must be connected. It shouldn’t be a separate thing,” she said. “When I say responsible, it’s thinking about, with the knowledge that you have now, how would you do it?” That brings in questions of ethics too.

These aspects need to be part of the roadmap and documented, because they change over time, Featheringham pointed out.

“It’s a lot about respectful education within organizations: what AI is, what it isn’t,” she said. She further encouraged organizations to involve a wide range of people in development, planning and execution. “Sometimes that gets missed, where it’s just going to be like, ‘Oh, we’ll be able to replace all these people.’ That’s not really the case. It’s more about unlocking the ability for them to do more of what they’re inherently good at.”

That requires a commitment to organizational change management as processes and roles evolve based on AI’s impact, Featheringham said.

Bringing in many types of people with different roles, knowledge and backgrounds helps validate outcomes and ensure that they align with the organization’s cultural, legal and ethical perspectives, while building on a deep understanding of how the mission currently works, she said.

“Those parts are all really valid and really needed,” Featheringham said. “The more successful projects include all those types of different diversities of thought and background.”

AI roadmap: Data

Early on, many organizations in both government and industry decided that they couldn’t pursue AI because they first needed to gather and label data — often with no emphasis on why or how those datasets would be used. The result? Organizations spent a lot of time and money working toward data analytics and potential AI/ML projects that led nowhere or took far longer than anticipated, Featheringham said.

“It’s really thinking about your data as an asset. Again, that term is not new either,” she said. “But for AI purposes, it’s understanding the full lifecycle of where that data came from. So how is it collected? For what purpose was it collected? And then understanding, how is it appropriate to use for a particular AI model?”

When framed this way, there’s no bad data, she said. Plus, an organization can then use its data selectively for minimum viable products that incorporate AI/ML, Featheringham added.
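
To make that lifecycle thinking concrete, here is a minimal sketch in Python — with entirely hypothetical names and fields, not any Maximus tooling — of the kind of provenance record an organization might attach to a dataset, gating each proposed AI use on how the data was collected and what it was approved for:

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical provenance record illustrating "data as an asset":
# each dataset carries where it came from, why it was collected,
# and which AI uses have been approved for it.
@dataclass
class DatasetProvenance:
    name: str
    source: str                      # system or office that collected it
    collection_purpose: str          # why it was originally gathered
    collected_on: date
    approved_uses: set[str] = field(default_factory=set)

    def appropriate_for(self, proposed_use: str) -> bool:
        """Gate a model's use of this data on its documented lineage."""
        return proposed_use in self.approved_uses

# Usage: an illustrative benefits-claims dataset approved for two uses.
claims = DatasetProvenance(
    name="benefits_claims_2023",
    source="claims intake portal",
    collection_purpose="process benefit applications",
    collected_on=date(2023, 6, 1),
    approved_uses={"claims triage", "workload forecasting"},
)

assert claims.appropriate_for("claims triage")
assert not claims.appropriate_for("fraud scoring")  # would need separate review
```

A record like this is what lets a team answer Featheringham’s three questions — how was it collected, for what purpose, and is it appropriate for this model — before a dataset ever reaches training.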

(See the sidebar “Generative AI and solving previously data-starved problems” about the potential of generative AI and predictive analytics.)

AI roadmap: Processes, procedures and governance

Throughout developing its roadmap, an agency will also need to integrate AI policies and best practices into its processes, procedures and governance.

And none of this can or should happen in an AI silo, Featheringham advised. That’s also been a lesson learned.

“Try to reuse as much of your software development, governance and things that you have, knowing that there’s exceptions for things that you have to look at for particular AI parts. Add it into your business functions,” she said. “It’s considered one of the big general-purpose technologies of our day. And that means it really has the capacity to touch every part of our lives.”

And the governance and process aspects — the rules of the road and the lanes of activity for data and for AI in use — inherently tie back to people, culture and mission, she said. “They really should represent the core values of the organization,” Featheringham said. “They should be embedded into existing processes, only creating a new one when specifically needed.”

AI roadmap: Technology

Now, the use of AI models and algorithms, and technologies that incorporate them, can begin in earnest, Featheringham said. An organization can start with its minimum viable product to prove out its theories and see whether AI helps it achieve its goals and outcomes.

It’s incredibly important to ensure “that your minimum viable product doesn’t just test the technological capability but the actual validity of use, and that you put the operational processes in place to support scaling to the enterprise,” she said.

Featheringham also advises weaving data operations and model operations into the development, security and operations process — DataOps and ModelOps integrated with DevSecOps.

“It’s quite honestly more affordable. It’s scalable, because it’s consistent, and you’re not having to do custom integration every time,” she said. Plus, an organization can build in the transparency and the responsibility aspects “as controls and measures and repeatable processes for monitoring within larger systems.”

“If you’re making decisions based off a certain set of parameters, and those parameters shift, you want to know how those decisions were made to be able to trace it and have auditability, that transparency back end.”
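
As a rough illustration of that traceability — all identifiers here are hypothetical, not a description of any specific agency system — the Python sketch below ties each automated decision to a fingerprint of the parameter set in force, so a later parameter shift remains detectable in the audit trail:

```python
import hashlib
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("model_audit")

def fingerprint(params: dict) -> str:
    """Stable hash of the parameter set a decision was made under."""
    blob = json.dumps(params, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()[:12]

def record_decision(model_id: str, params: dict, inputs: dict, output) -> dict:
    """Audit record tying each decision to the exact parameters in force,
    so decisions made before a parameter shift can be traced afterward."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "param_fingerprint": fingerprint(params),
        "inputs": inputs,
        "output": output,
    }
    log.info(json.dumps(entry))
    return entry

# Usage: if the threshold later shifts, the fingerprint changes with it,
# flagging which recorded decisions predate the change.
baseline = {"threshold": 0.7, "model_version": "1.2"}
record_decision("eligibility-check", baseline, {"score": 0.81}, "approved")
```

The design choice is simply that the fingerprint, not the raw parameters, travels with every decision record — compact to store, and any drift in the parameter set shows up as a new fingerprint in the log.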

Generative AI and solving previously data-starved problems

While the public discourse over the perils of generative AI and technologies like ChatGPT dominates headlines, Maximus’ Kathleen Featheringham also sees potential — particularly for solving complex challenges at the edge.

Typically, she acknowledged, people use “edge” in technology to mean distribution of data and services relative to the location of where that data or IT capability is needed, often far beyond the confines of a perimeter network. But that’s not the edge she means.

“In this particular instance, I mean edge as in unique scenarios where we don’t have a lot of data to train on,” Featheringham said.

She pointed to healthcare and medical challenges for which researchers have access to small datasets, which hampers their ability to train AI models to help identify the causes, detect predictors or develop treatments.

“With tools like generative AI, there’s the possibility of being able to create the necessary data to train models effectively and get more of those edge types of cases in,” she said. “In colloquial terms, that means those really rare diseases that our family members have, we have better odds at being able to detect them.”
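
As a toy illustration of the principle — fitting a simple Gaussian to a small sample and drawing synthetic records from it, not the clinical-grade generative models she describes — the Python sketch below augments a hypothetical data-starved class so a downstream model has enough examples to learn from:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a rare-disease cohort: only 12 labeled cases,
# each with two measured features.
rare_cases = rng.normal(loc=[1.0, 3.0], scale=[0.2, 0.5], size=(12, 2))

# A deliberately simple "generative" model: estimate the scarce class's
# distribution and sample synthetic records from it.
mu = rare_cases.mean(axis=0)
cov = np.cov(rare_cases, rowvar=False)
synthetic = rng.multivariate_normal(mu, cov, size=200)

# The augmented set gives a classifier enough positive examples to train on;
# synthetic records would still need expert validation before real use.
augmented = np.vstack([rare_cases, synthetic])
print(augmented.shape)  # (212, 2)
```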

