Insight by Dell Federal

How a factory approach can accelerate agency use of AI

JP Marcelino, the AI/ML Alliances manager for Dell Federal, said that when agencies use a framework to identify AI use cases and technologies, they can move faster.

There are more than 700 use cases in the federal inventory for artificial intelligence. Of those use cases, as of Sept. 1, the Energy Department had the most with 177, followed by the Department of Health and Human Services with 156, the Commerce Department with 47 and the Department of Homeland Security with 41.

The thirst for using AI isn’t just about use cases. The amount of money agencies are spending on AI tools and capabilities is growing. From 2020 to 2022, for example, agencies spent $7.7 billion, according to market research firm Deltek. That’s a 36% increase over three years. And this doesn’t include all the funding that goes into systems embedded with AI, such as the DHS insider threat infrastructure or the Department of Veterans Affairs’ health and data analytics platform.

The data for 2023 and 2024 will show even more investments, particularly with the relatively new excitement over generative AI.

Over the last few months, agencies have started to follow the Office of Management and Budget’s direction to offer controlled uses of GenAI tools like the Air Force’s new platform called the Non-classified Internet Protocol Generative Pre-training Transformer (NIPRGPT), which the service hopes can help with tasks such as coding, correspondence and content summarization.

The Energy Department also released a new reference guide for using GenAI. The guide provides an understanding of the key benefits, considerations, risks and best practices associated with GenAI.

JP Marcelino, the AI/ML Alliances manager for Dell Federal, said most initial forays into AI by agencies fall into two types: traditional or discriminative AI, used to detect patterns and for simpler analytics; and generative AI, where agencies are starting to generate new content based on their data.

Right people, right tech in the AI factory

While agencies are more comfortable with traditional or discriminative AI use cases, they are slowly starting to figure out how to use GenAI, particularly in a more secure manner.

“When it comes to GenAI, there’s still a lot more carefulness that needs to be done to make sure that nothing’s being exposed from a security standpoint, and making sure all of your data is managed and secured in a way that doesn’t get exposed,” Marcelino said on the discussion Innovation in Government sponsored by Carahsoft. “I still think there’s a challenge around the AI workforce that’s capable of developing these solutions. In order to alleviate and offset some of those deficiencies, part of it is just looking for the right kinds of partners that can help develop these solutions. No one’s ever going to find a single partner or a single software provider that can solve everything there is to develop an AI solution. It really takes a village to develop these solutions. So whether it’s a partner that can help you out early on in the process of figuring out use cases to tackle and focus on, or partners that are more in the line of helping you develop your solutions and put together proof of concepts and move them into production-ready environments for you, I think it’ll take quite a bit of effort from numerous partnerships to be able to solve every challenge along the way.”

To that end, Marcelino said Dell Technologies is leaning into the concept of an AI factory. He said this approach provides a framework to accelerate the implementation of AI capabilities.

“We are really helping customers understand the potential use cases that they want to tackle from an AI standpoint. We are helping them understand what kind of data they have to tackle those potential use cases, whether it’s good or bad data; do you have enough of that data to not only train a solution, but also make sure that you can validate that solution as well?” he said. “Then, there are the three pieces in the middle that help enable the AI capability from a solution standpoint. One is the infrastructure and hardware piece and the ability for us to provide the right kind of AI infrastructure and hardware for the given use case. If you’re looking at a really complex AI solution that requires very large language models to develop a solution, you may be looking at some really high-end compute to be able to support that kind of capability. But at the same time, if you’re a single user, just looking at some kind of AI sandbox, or want to start developing or testing smaller AI models locally, you may not need such high-end compute for that. You may need some kind of faster workstation that can support a single GPU, for example, or some really lower end compute that can handle a handful of users simultaneously.”

Data remains key to success

The AI factory can help agencies close existing workforce gaps, ease the challenge of moving tools into production and address data quality and management challenges.

“You can just easily have an AI solution that can be garbage-in and garbage-out, so you want to make sure not only you have good quality data, but also have the ability to have a good data management strategy around it so that you can pull that data from the right places and be able to have good quality data to feed into an AI solution in order to achieve the right kind of accuracy and outcomes you want out of an AI solution,” Marcelino said. “When it comes to moving AI solutions from pilot to production, there’s a pretty low success rate of AI solutions that make it to production. There’s a lot of challenges that are involved with that, whether it’s not getting enough accuracy out of your AI solution, it’s not meeting the right types of outputs or outcomes that you’re looking to achieve from that solution or it can be something as simple as it’s taking too much time to achieve the accuracy that you’re looking to develop.”
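The "garbage-in, garbage-out" concern Marcelino raises boils down to a few concrete checks an agency can run before any data reaches a model. A minimal sketch in Python, assuming a pandas DataFrame; the column names, sample records and thresholds here are hypothetical stand-ins:

```python
# Minimal pre-training data quality checks, assuming a pandas DataFrame.
# The column names, sample records and MIN_ROWS threshold are hypothetical.
import pandas as pd

MIN_ROWS = 100  # hypothetical minimum needed to both train and validate

df = pd.DataFrame({
    "case_id": [1, 2, 2, 3],
    "label": ["open", "closed", "closed", None],
})

issues = []
if len(df) < MIN_ROWS:
    issues.append(f"only {len(df)} rows; need {MIN_ROWS}")
if df["label"].isna().any():
    issues.append("missing labels")
if df.duplicated().any():
    issues.append("duplicate records")

print("data quality issues:", issues or "none")
```

A real data management strategy would add source lineage, schema validation and ongoing monitoring, but even simple gates like these catch the problems most likely to sink accuracy later.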

One way to help address this challenge, he said, is through a machine learning operations (MLOps) strategy, which helps organizations more easily automate the continuous training and deployment of ML models at scale. It adapts the principles of DevOps to the ML workflow.
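As a rough illustration of that idea, an MLOps pipeline typically gates each newly trained model behind an evaluation check before promoting it to production. The sketch below shows such a gate using scikit-learn; the accuracy threshold and synthetic dataset are hypothetical stand-ins, and real pipelines would add model versioning, monitoring and automated retraining triggers:

```python
# Sketch of an MLOps-style promotion gate: train, validate, then either
# promote or flag for retraining. Threshold and data are hypothetical.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

ACCURACY_GATE = 0.80  # hypothetical minimum accuracy for production

X, y = make_classification(n_samples=1000, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
val_accuracy = accuracy_score(y_val, model.predict(X_val))

# Promote only if the validation gate passes, mirroring the
# pilot-to-production check described above.
status = "promote" if val_accuracy >= ACCURACY_GATE else "retrain"
print(f"validation accuracy={val_accuracy:.2f} -> {status}")
```

Automating this check on every retraining run is what gives teams the visibility into model performance that Marcelino describes.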

“I think there’s ways to help alleviate some of those challenges. Implementing things like an MLOps strategy, so you have better visibility into the models that you’re developing and looking to deploy,” he said. “Being able to leverage solutions that can do things like help augment the development process, whether they’re things like auto ML tools, for example, to essentially use AI to develop AI solutions. Or leveraging solutions like AI factories, where we’ve taken a lot of the guesswork out of being able to deploy an AI solution into production, where we can essentially provide an end-to-end capability that encompasses partner solutions with our infrastructure and hardware, with the ability to fold in other types of solutions to really package it up in an environment that’s been pre-validated and makes it lower time-to-value to deploy these solutions.”


Copyright © 2024 Federal News Network. All rights reserved. This website is not intended for users located within the European Economic Area.
