Successful and responsible use of AI requires a roadmap based on desired outcomes and four other critical elements. We talk with AI guru Kathleen Featheringham to find out how.
Part of the confusion over artificial intelligence right now is that, for many people, there’s no single definition of what is and isn’t AI.
“Some people may think that more basic types of robotic process automation, RPA, is not AI. There are others who do. There are others who don’t think machine learning is AI. They think that AI is reserved for things like very high-end neural nets and things. With that, there just becomes a lot of complications because there’s not one set definition,” pointed out Kathleen Featheringham, vice president of Artificial Intelligence and Machine Learning at Maximus.
But if an organization focuses instead on how to operationalize AI responsibly across an enterprise, that provides context for implementing it, she said. Then, an agency can develop a roadmap for its AI journey.
Featheringham offered five elements necessary for developing that roadmap — one that takes a holistic approach and is scalable: mission outcomes and goals, people and culture, data, processes and governance, and technology.
“With those big areas, you can then think about your roadmap,” she said, adding that an AI roadmap must integrate all five because they have dependencies on one another that will evolve over time.
Noting that this is not a new idea, Featheringham said agencies need to avoid launching AI projects just because the technology might be cool. The goal instead must be “not just doing AI for the sake of AI, but rather focus on AI as part of helping to achieve a certain mission outcome.”
To start, she recommended clearly defining those desired outcomes and goals, which can then serve as the baseline against which the agency measures the value of its AI use.
Often organizations want to kickstart an AI project with the technology, but Featheringham advises instead that, after detailing the desired project goals and outcomes, an agency focus on people and culture.
An agency’s use of AI should represent the core values of the organization — “the values of an organization and the use of AI, responsibility must be connected. It shouldn’t be a separate thing,” she said. “When I say responsible, it’s thinking about, with the knowledge that you have now, how would you do it?” That brings in questions of ethics too.
These aspects need to be part of the roadmap and documented, because they change over time, Featheringham pointed out.
“It’s a lot about respectful education within organizations: what AI is, what it isn’t,” she said. She further encouraged organizations to involve a wide range of people in development, planning and execution. “Sometimes that gets missed, where it’s just going to be like, ‘Oh, we’ll be able to replace all these people.’ That’s not really the case. It’s more about unlocking the ability for them to do more of what they’re inherently good at.”
That requires a commitment to organizational change management as processes and roles evolve based on AI’s impact, Featheringham said.
Bringing in people with a wide mix of roles, knowledge and backgrounds helps validate outcomes and ensure that they align with the organization’s cultural, legal and ethical perspectives, as well as build on a deep understanding of how the mission currently works, she said.
“Those parts are all really valid and really needed,” Featheringham said. “The more successful projects include all those types of different diversities of thought and background.”
Many organizations, in both government and industry, decided early on that they couldn’t pursue AI until they had gathered and labeled data — often with no emphasis on why or how those datasets would be used. The result? Organizations spent a lot of time and money working toward data analytics and potential AI/ML projects that led nowhere or took far longer than anticipated, Featheringham said.
“It’s really thinking about your data as an asset. Again, that term is not new either,” she said. “But for AI purposes, it’s understanding the full lifecycle of where that data came from. So how is it collected? For what purpose was it collected? And then understanding, how is it appropriate to use for a particular AI model?”
When framed this way, there’s no bad data, she said. Plus, an organization can then use its data selectively for minimum viable products that incorporate AI/ML, Featheringham added.
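To make those lifecycle questions concrete, consider a minimal sketch, in Python, of the kind of provenance record an organization might attach to each dataset before clearing it for a particular model. The field and function names here are illustrative assumptions, not a standard schema or anything Maximus prescribes.

```python
# A minimal sketch of treating data as an asset: a provenance record that
# answers the lifecycle questions -- how was the data collected, for what
# purpose, and is it appropriate to use for a particular AI model?
# All field and function names are illustrative, not a standard schema.
from dataclasses import dataclass, field

@dataclass
class DatasetProvenance:
    name: str
    source: str                     # where the data came from
    collection_method: str          # how it was collected
    collection_purpose: str         # why it was collected
    approved_uses: set[str] = field(default_factory=set)  # uses vetted by governance

def appropriate_for(record: DatasetProvenance, intended_use: str) -> bool:
    """Gate a proposed model use against the uses governance has approved."""
    return intended_use in record.approved_uses

# Hypothetical example: case notes collected for benefits adjudication may be
# approved for workload forecasting but not for training a public chatbot.
case_notes = DatasetProvenance(
    name="case_notes_2023",
    source="benefits case management system",
    collection_method="caseworker entry",
    collection_purpose="benefits adjudication",
    approved_uses={"workload_forecasting"},
)

print(appropriate_for(case_notes, "workload_forecasting"))  # True
print(appropriate_for(case_notes, "chatbot_training"))      # False
```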
(See the sidebar “Generative AI and solving previously data-starved problems” about the potential of generative AI and predictive analytics.)
Throughout developing its roadmap, an agency will also need to integrate AI policies and best practices into its processes, procedures and governance.
And none of this can or should happen in an AI silo, Featheringham advised. That’s also been a lesson learned.
“Try to reuse as much of your software development, governance and things that you have, knowing that there’s exceptions for things that you have to look at for particular AI parts. Add it into your business functions,” she said. “It’s considered one of the big general-purpose technologies of our day. And that means it really has the capacity to touch every part of our lives.”
And the governance and process aspects — the rules of the road and the lanes of activity for data and for AI in use — inherently tie back to people, culture and mission, she said. “They really should represent the core values of the organization,” Featheringham said. “They should be embedded into existing processes, only creating a new one when specifically needed.”
Now, the use of AI models and algorithms, and technologies that incorporate them, can begin in earnest, Featheringham said. An organization can start with its minimum viable product to prove out its theories and see whether AI helps it achieve its goals and outcomes.
It’s incredibly important to make “sure that your minimum viable product doesn’t just test the technological capability but the actual validity of use, and that you put the operational processes in place to support scaling to the enterprise,” she said.
Featheringham also advised agencies to weave data operations and model operations into their development, security and operations process — DataOps integrated with ModelOps integrated with DevSecOps.
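As a rough illustration of what that integration can mean in practice, the Python sketch below treats data validation (DataOps), model training and evaluation (ModelOps) and security review (DevSecOps) as stages of a single repeatable pipeline. The stage names and checks are hypothetical placeholders for an organization’s real tooling.

```python
# Illustrative sketch of DataOps, ModelOps and DevSecOps as stages of one
# pipeline rather than separate efforts. The stage functions are placeholders
# standing in for an organization's actual tools and gates.
from typing import Callable

def validate_data(ctx: dict) -> dict:        # DataOps: schema and quality gates
    assert ctx.get("dataset_approved"), "dataset not cleared for this use"
    return ctx

def train_and_evaluate(ctx: dict) -> dict:   # ModelOps: training plus acceptance tests
    ctx["model_metrics"] = {"accuracy": 0.91}  # placeholder metric
    assert ctx["model_metrics"]["accuracy"] >= ctx["accuracy_floor"]
    return ctx

def security_review(ctx: dict) -> dict:      # DevSecOps: scans and policy checks
    assert not ctx.get("unresolved_findings"), "security findings block release"
    return ctx

PIPELINE: list[Callable[[dict], dict]] = [validate_data, train_and_evaluate, security_review]

def run(ctx: dict) -> dict:
    # Each stage must pass before the next runs -- the consistency is what
    # makes the process repeatable and scalable instead of custom every time.
    for stage in PIPELINE:
        ctx = stage(ctx)
    return ctx

run({"dataset_approved": True, "accuracy_floor": 0.85, "unresolved_findings": []})
```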
“It’s quite honestly more affordable. It’s scalable, because it’s consistent, and you’re not having to do custom integration every time,” she said. Plus, an organization can build in the transparency and the responsibility aspects “as controls and measures and repeatable processes for monitoring within larger systems.”
“If you’re making decisions based off a certain set of parameters, and those parameters shift, you want to know how those decisions were made to be able to trace it and have auditability, that transparency back end.”
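One hedged sketch of that back-end traceability: log every automated decision with the model version, the exact parameters it saw and the outcome, so past decisions remain auditable even after the parameters shift. The record structure below is illustrative, not a prescribed format.

```python
# Illustrative sketch of a decision audit trail: record the inputs, model
# version and outcome for every automated decision so it can be traced later.
import json
from datetime import datetime, timezone

def log_decision(model_version: str, parameters: dict, outcome: str,
                 audit_log: list) -> None:
    """Append an auditable record of one model decision."""
    audit_log.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "parameters": parameters,   # the exact inputs the decision was based on
        "outcome": outcome,
    })

audit_log: list = []
log_decision("eligibility-model-v3", {"income": 41000, "household_size": 4},
             "approved", audit_log)

# If the parameters shift later, the log still shows exactly what each past
# decision was based on.
print(json.dumps(audit_log, indent=2))
```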
Generative AI and solving previously data-starved problems
While public discourse over the perils of generative AI and technologies like ChatGPT dominates headlines, Maximus’ Kathleen Featheringham also sees potential — particularly for solving complex challenges at the edge.
Typically, she acknowledged, people use “edge” in technology to mean the distribution of data and services close to where that data or IT capability is needed, often far beyond the confines of a perimeter network. But that’s not the edge she means.
“In this particular instance, I mean edge as in unique scenarios where we don’t have a lot of data to train on,” Featheringham said.
She pointed to healthcare and medical challenges for which researchers have access to small datasets, which hampers their ability to train AI models to help identify the causes, detect predictors or develop treatments.
“With tools like generative, there’s the possibility of being able to create the necessary data to train models effectively and get more of those edge types of cases in,” she said. “In colloquial terms, that means those really rare diseases that our family members have, we have better odds at being able to detect them.”
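As a simplified illustration of that idea, and not Featheringham’s specific method, a generative model fit to a handful of rare-case examples can sample additional synthetic records to enlarge a training set. In the Python sketch below, a Gaussian mixture stands in for a more capable generative model, and the data is invented.

```python
# Illustrative sketch of generating synthetic training data for a rare class.
# A Gaussian mixture stands in for a more capable generative model; a real
# rare-disease dataset would need domain and privacy review first.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Pretend these 20 rows are the only labeled examples of a rare condition.
rare_cases = rng.normal(loc=[5.0, 2.0], scale=0.5, size=(20, 2))

# Fit a simple generative model to the scarce examples...
gen = GaussianMixture(n_components=1, random_state=0).fit(rare_cases)

# ...then sample synthetic records to augment the training set.
synthetic, _ = gen.sample(200)
augmented = np.vstack([rare_cases, synthetic])
print(augmented.shape)  # (220, 2)
```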
Vice President of Artificial Intelligence and Machine Learning, Maximus
Kathleen is a leader focused on large-scale responsible technology transformation for our clients’ most critical and sensitive mission areas. She is leading the development and expansion of Maximus’ AI and Advanced Analytics portfolio within Federal Technology Consulting Services. An expert in AI, data science, strategy, and change management, Kathleen has more than 20 years of experience working with clients across the Federal Government.
She focuses on the convergence of mission and AI with a special emphasis on the human elements of adoption. This includes working with organizations on responsibly thinking about data as an asset, the evolution of the technical architecture to eliminate continual redesign, the evolution of human roles to allow more time for critical thinking and building ethical AI controls and measures into the fabric of an organization.
Prior to joining Maximus, she served as a Director at Booz Allen Hamilton, leading highly technical cross-functional teams through transformation. She established and scaled Booz Allen’s AI strategy and training capabilities, which center on empowering organizations to grow their ability to harness analytics for data-driven decision making. She was a founding member of the team that developed Booz Allen’s award-winning Data Science 5K Challenge program, which focused on upskilling the firm’s data science workforce to help clients use data in new ways. Within Booz Allen’s artificial intelligence (AI) practice, Kathleen led the development of the firm’s AI governance and risk management procedures and operating practices as part of its Responsible AI efforts.
Kathleen has an M.S. in intelligence analysis from Mercyhurst College and a B.S. in business administration from Georgetown University. She holds a Change Management Advanced Practitioner certification from Georgetown and a Psychology of Leadership graduate certificate from Cornell University.
Editor, Custom Content, Federal News Network
Vanessa Roberts crafts content for custom programs at Federal News Network and WTOP. A tech and government junkie, she’s been finding and telling B2B, government and technology stories in the nation’s capital since the era of the “sneakernet” — including for numerous brands. Vanessa has a master’s from the Columbia Graduate School of Journalism.