Insight by ThunderCat Technology and Dell Technologies

How to best leverage your hybrid computing environment for efficiency and effectiveness in AI


Managing artificial intelligence and machine learning application projects is in large measure a matter of fine-tuning the locations where data and applications reside. It’s less a matter of data center versus cloud than of portability among data center, cloud and edge computing, consistent with optimal availability and cost control. It’s therefore important for agencies to spend some effort planning the infrastructure for the systems that host AI development and training data, as well as for deployable AI applications.

In the cloud era, this management requirement extends to the commercial clouds agencies employ as components of their hybrid computing environments. With contemporary approaches to storage tiers, application hosting decisions, and the siting and updating of the agency’s own data centers, the IT staff can find efficiencies that enable AI development in a cost-effective way.

For some of the latest thinking, Federal News Network spoke to Nic Perez, the chief technology officer for cloud practice at ThunderCat Technology; and Al Ford, the artificial intelligence alliances manager at Dell Technologies.

Perez said that AI application development that once required agency-operated high-performance computing environments, along with the associated software tooling, is finding its way into commercial clouds.

“One of the benefits of the cloud is that agencies can leverage the compute and the power that is available inside these cloud providers,” Perez said. “Move your data, and then absolutely maximize the compute power there.”
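As a concrete illustration, the staging step Perez describes might look like the following minimal Python sketch, assuming an AWS environment with the boto3 SDK and credentials already configured; the bucket and file names are hypothetical:

```python
# Minimal sketch of staging local data into cloud object storage so
# cloud-hosted training can use the provider's compute directly.
# Assumes boto3 with AWS credentials configured; names are hypothetical.
import boto3

s3 = boto3.client("s3")

# Upload a local corpus once; training jobs then read it in-cloud
# rather than pulling it repeatedly from the agency data center.
s3.upload_file(
    Filename="corpus/transcripts.parquet",   # hypothetical local file
    Bucket="agency-ai-training-data",        # hypothetical bucket
    Key="speech/transcripts.parquet",
)
```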

Different clouds offer differing toolsets, he added, giving agency practitioners flexibility in the degree of automation they want in staging and training AI applications. Perez said that over the last year or so, he’s seen a “land rush” of agencies moving text analytics, speech, and video data to the cloud, and performing AI on the data there.

In other instances, Ford said, it may make sense to train and deploy artificial intelligence applications neither in the cloud nor in a data center, but rather in an edge computing environment.

For example, “it could be that you’re part of the geological survey, and you’ve got a vehicle carrying a camera, and you need to access that vehicle. So the edge literally could be out, field-deployed,” Ford said. Trucks, aircraft and Humvees can all serve as edge computing locations. He said hyper-converged and containerized workloads are easily movable among computing resources located where the data is gathered. In such cases, Ford said, the agency may find it advantageous to add software stacks to the cloud, from which it can communicate to the edge, where the artificial intelligence work occurs.
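One hedged sketch of moving such a containerized workload onto an edge host, using the Docker SDK for Python (docker-py), might look like this; the host address and image name are hypothetical:

```python
# Hedged sketch: start a packaged analytic on a field-deployed edge
# host via the Docker SDK for Python. Host and image are hypothetical.
import docker

# Connect to the Docker daemon on the edge device (e.g., a vehicle).
edge = docker.DockerClient(base_url="tcp://edge-vehicle-01:2375")

# Pull and run the same container image that runs in the cloud or the
# data center; restart it automatically if the device power-cycles.
edge.images.pull("registry.example.gov/vision-inference:1.4")
edge.containers.run(
    "registry.example.gov/vision-inference:1.4",
    detach=True,
    restart_policy={"Name": "always"},
)
```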

In other cases, Ford said, applications running large data sets on edge resources benefit from adding local GPU accelerators. These improve performance while helping the agency avoid some of the costs of moving large data volumes and workloads in and out of commercial clouds.
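For instance, local scoring against a GPU accelerator might look like this minimal PyTorch sketch; the model file name is hypothetical:

```python
# Minimal sketch of GPU-accelerated inference at the edge, assuming
# PyTorch and a TorchScript model file; the file name is hypothetical.
import torch

# Use the local accelerator when present; fall back to CPU otherwise.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = torch.jit.load("vision_model.pt", map_location=device).eval()

def score(batch: torch.Tensor) -> torch.Tensor:
    """Score one batch in place; the raw data never leaves the device."""
    with torch.no_grad():
        return model(batch.to(device)).cpu()
```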

With this approach, agencies may find they only need to transfer the output of an application, the decision-making result of the AI, across their networks. The data, the application and the compute all stay local.

Still another option is having a vendor own and operate a replica of an agency’s data center in a secure, “caged” facility the vendor maintains. The advantages include geographically strategic compute power without a data center-sized capital investment.

“You’re using the same equipment, the same technology, the same education and investment you’ve had for a number of years,” Perez said. “You’re just now moving into the next stage and being able to do it faster and quicker.” And on a predictable, consumption-based cost model.

Perez and Ford said it’s important to distinguish between the training period of AI and the deployment, in terms of the best location and architecture. Each may require a different computing set-up for maximum efficiency. Training is generally more efficient in the cloud, whereas deployment often requires a federated approach.

“Effectively, federated is, rather than bringing the data from the edge back to you, why not send those analytics in that virtualized container to the edge so that you’re not moving the data,” Ford said. “And then, once you have the results computed at the edge, you only send back the results. You’re lowering the bandwidth, decreasing the amount of data that has to traverse all of those networking hops.”
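To make the pattern concrete, here is a minimal sketch of that results-only reporting step in plain Python; the endpoint URL, node name and result fields are hypothetical, not drawn from the interview:

```python
# Hedged sketch of the federated pattern: inference runs where the data
# lives, and only a compact result crosses the network. The endpoint,
# node name and result schema below are hypothetical placeholders.
import json
import urllib.request

def report_results(detections: list) -> None:
    # Kilobytes of results travel instead of gigabytes of raw data,
    # lowering the bandwidth across every networking hop.
    payload = json.dumps({"edge_node": "vehicle-01", "detections": detections}).encode()
    req = urllib.request.Request(
        "https://agency.example.gov/api/ai-results",  # hypothetical endpoint
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

if __name__ == "__main__":
    # Only the decision, never the source imagery, leaves the edge device.
    report_results([{"frame": 1042, "label": "erosion", "confidence": 0.91}])
```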





Featured speakers

  • Nic Perez

    Chief Technology Officer, Cloud Practice, ThunderCat Technology

  • Al Ford

    Artificial Intelligence Alliances Manager, Dell Technologies

  • Tom Temin

    Host, The Federal Drive, Federal News Network