Insight by Ingram Micro

Considering generative AI? Good, because the ROI makes it worth it

By being selective about where, why and how to invest in genAI, agencies can build on existing AI efforts, improve mission delivery and ensure responsible use.

This is the 12th article in our series, The Power of Technology.

No one denies the potential of artificial intelligence. But how actively is your agency investing in and testing use of generative AI?

Early returns on investment suggest that being a wise early adopter makes sense, said Chris “CT” Thomas, technical director of AI and data systems at Dell Technologies.

“For every dollar you invest, you can almost yield 2x to 3x returns,” Thomas said.

What’s more, achieving that ROI doesn’t require going all in from the get-go, he said during our The Power of Technology series. It’s more about being strategic in the use cases that the organization chooses.

Here’s why, Thomas explained: Generative AI is applicable across multiple business areas in government, from recruiting new talent and human resources management to engineering and optimizing software development cycles. That means “most organizations are able to start small and then slowly grow and start adding additional applications over time,” he said.

The operational and organizational efficiencies make AI appealing, and “organizations are eager to get using it,” added Tony Celeste, executive director and general manager for Ingram Micro Public Sector.

Thomas agreed and cited work at AI testbeds in the Energy Department’s national labs, the Army’s Project Linchpin and the Air Force’s Chief Data and AI Office as prime examples of agencies leaning into generative AI.

When talking to agencies, a common refrain is, “‘We want our organization to keep pushing forward, fail fast, and take our lessons learned and keep pushing,’” Thomas said. “I would encourage any organization to do the same.”

We asked Celeste and Thomas to share tips on how to navigate genAI efforts successfully — from pilots to small programs and then scaling across the enterprise.

Want to use generative AI? Build on existing AI uses

Look to early narrow AI uses, both to identify potential ways to apply large language models and to broaden the understanding that the technology in and of itself is not new, Celeste said.

“Some of these are more common and have been in use for a very long time,” he said, pointing to multiple agencies’ monitoring and analysis of climate change data, and the Centers for Disease Control and Prevention’s work tracking disease outbreaks and performing health care research. These types of early uses already are helping agencies make faster, data-driven decisions, Celeste noted.

“We’ve been using AI technologies for automation, in streamlining processes that were highly repetitive, and so those are going to be the use cases that early adopters are going to take a much more aggressive approach to,” he said, adding, “The technology has advanced to the point that we have the compute power, the storage resources, the datasets that are large enough now and the large language models.”

Thomas advised that organizations look at their mission sets to identify which applications yield the highest return and then which have the broadest impact across the organization. It’s also critical to ensure that introducing AI doesn’t create extra work on the back end, he added.

“It does take a little more work upfront, but those returns make it worth it,” Thomas said, adding that value comes from taking a systemwide approach versus an application-by-application one. The goal? Implementing AI in such a way that it can be easily upgraded and sustained over time, with repeatable results.

Protect your data as you scale generative AI

Data ontology and security remain critical as AI use expands, Thomas said. The good news for government is that there’s a natural alignment between generative AI adoption and federal zero trust implementation, he said.

“Both zero trust as well as AI are data-centric approaches and architectures, so the synergy between the two is pretty substantial,” Thomas said.

They both depend on assessing and maintaining risk profiles. “At the end of the day, if you’re looking at the importance of the data, whether or not that’s a machine or human interaction that’s taking place to make a decision or generate a new artifact, that still has a security profile,” he said. “Access to those files and systems, or that content, still has to be put in place where your security controls align with all your existing policies and procedures.”

Celeste pointed out that’s why it’s important to develop detailed guardrails and enterprise policies for all AI use. That approach will ensure data protections evolve as needed, privacy can be maintained as appropriate and responsible use of the technology takes hold, he said.

“The National Institute of Standards and Technology has put together a very good framework as a starting point, as you start to look at your secure development lifecycle from a software perspective,” Thomas added.

Governance also ensures that agencies define how they vet their AI models, check for biases and track whether anyone outside the organization may be attempting to alter or “poison” results, he said.

Embracing genAI to help with security

Both Celeste and Thomas said they expect genAI to play an increasing role in helping agencies respond more effectively to potential threats and restore services after attacks.

Given the extensiveness of legacy systems and data stores, agencies will need to lean into AI to cull and review historic code and datasets to unlock predictive analytics effectively, Celeste said.

“The people that wrote that code, many of them are no longer with us. Being able to understand the code, where the code actually came from, its original provenance and certifying it, this is going to be one of the advantages of AI,” he said. “We won’t have to redo that work over and over again. Once we identify original provenance for a piece of code, AI can catalog that and then reference against it and detect where that same code is used in other applications as well.”
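What Celeste describes amounts to a fingerprint catalog: once a fragment’s origin is certified, normalize it, hash it, and look that hash up wherever the fragment might recur. Here is a minimal Python sketch of that idea; the normalization scheme, function names and provenance records are illustrative assumptions, not a description of any actual agency tool.

```python
# Hypothetical sketch of cataloging code provenance and detecting reuse.
# The normalization and hashing choices here are assumptions for illustration.
import hashlib

def fingerprint(source: str) -> str:
    """Hash a code fragment after stripping whitespace and blank lines,
    so trivial formatting differences don't hide a match."""
    normalized = "\n".join(
        line.strip() for line in source.splitlines() if line.strip()
    )
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

# Catalog maps a fingerprint to its certified provenance record.
catalog: dict[str, str] = {}

def register(source: str, provenance: str) -> None:
    """Record a fragment whose origin has been identified and certified."""
    catalog[fingerprint(source)] = provenance

def find_provenance(source: str) -> str | None:
    """Check whether a fragment found elsewhere matches the catalog."""
    return catalog.get(fingerprint(source))

# Example: the same routine reappears, reformatted, in a second application.
register("def add(a, b):\n    return a + b\n", "payroll-system v1.2, vendor X")
print(find_provenance("def add(a, b):\n        return a + b"))
# -> payroll-system v1.2, vendor X
```

The point of the hash-based lookup is the second half of Celeste’s remark: the certification work happens once, and every later match is a cheap dictionary check rather than a fresh review.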

Ontology matters for both older code and new, Thomas said.

“Sometimes you’re working with multiple agencies. They make an update … they didn’t tell you, or a new vendor or supplier or the manufacturer didn’t tell you. And the ripple effect of that domino tipping over has a major impact on the overall system output,” he said.

“Other things start to break. We start looking at the actual data hygiene: Where’s the data coming from and held? Is it ready to be consumed by AI? Is it in the right format? Where am I going to store it? How am I going to allow people to access it? So that ties back into the cybersecurity question. All these things have to be taken into account from a systems engineering perspective as you’re building your data system.”
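Thomas’s questions translate naturally into checks that can run before any dataset reaches a model. A minimal sketch of such a hygiene gate follows; the required columns and the missing-value threshold are purely hypothetical, standing in for whatever an agency’s own data standards specify.

```python
# Hypothetical pre-ingestion hygiene check reflecting the questions above:
# is the data in the right format, and is it ready for AI consumption?
# The schema and tolerance below are assumptions for illustration.
import csv
from pathlib import Path

REQUIRED_COLUMNS = {"record_id", "source_system", "timestamp"}  # assumed schema
MAX_EMPTY_RATIO = 0.05  # assumed tolerance for missing values

def check_readiness(path: Path) -> list[str]:
    """Return a list of hygiene problems; an empty list means ready to ingest."""
    problems = []
    with path.open(newline="", encoding="utf-8") as f:
        reader = csv.DictReader(f)
        columns = set(reader.fieldnames or [])
        missing = REQUIRED_COLUMNS - columns
        if missing:
            problems.append(f"missing columns: {sorted(missing)}")
        rows, empty_cells = 0, 0
        for row in reader:
            rows += 1
            empty_cells += sum(1 for v in row.values() if not v)
        if rows == 0:
            problems.append("dataset is empty")
        elif columns and empty_cells / (rows * len(columns)) > MAX_EMPTY_RATIO:
            problems.append("too many missing values")
    return problems
```

Wiring a gate like this into the pipeline, rather than running it ad hoc, is what Thomas means by treating data hygiene as a systems engineering concern: the same checks apply every time data moves, not just the first time.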

To read more articles in The Power of Technology series, click here.

Copyright © 2024 Federal News Network. All rights reserved.