Securing the AI data pipeline with confidential AI
There is a need to create secure computing capabilities that maintain the privacy and confidentiality of AI models and data sets.
Faced with growing artificial intelligence oversight, from the Executive Order on the Safe, Secure, and Trustworthy Development and Use of AI to the National Institute of Standards and Technology's AI Safety Institute Consortium, federal agencies are under more pressure than ever to strengthen the security and safety of their AI technologies. In particular, there is a need to create secure computing capabilities that maintain the privacy and confidentiality of AI models and data sets. Fortunately, this need is increasingly being met by innovations in confidential AI that help agencies reap the benefits of AI while avoiding the security pitfalls in some of their most demanding use cases.
AI brings added data risk to federal agencies
There is no denying the value and power of generative AI (GenAI) tools and large language models (LLMs) in helping agencies automate processes and optimize workflows for tremendous efficiency gains. But with these benefits comes the risk management burden of ensuring sensitive, proprietary or confidential data are not exposed in the process. Any such lapses come with heavy operational, security and compliance implications.
It is not just the data that needs protecting; the AI models themselves represent intellectual property born of significant public investment in research, algorithm development and training, model testing and more. This is especially critical for federal agencies whose data may be extremely sensitive, where operations are highly classified or where third-party cloud integrations could expose government AI data and models to a vendor-run ecosystem.
For federal CIOs and their teams, creating the right protections poses challenges across the IT estate and at multiple, granular levels of data architecture and infrastructure. This includes nuanced configuration at both the hardware level, where increased monitoring and authentication are needed, and the software level, where enhanced software development lifecycle practices must ensure the provenance and reliability of the data that feed AI processes. The good news is that once these nuances are well understood, federal IT teams can create a more protected, confidential AI environment in which to run LLMs and other advanced AI applications.
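To make the software-level provenance point concrete, the short Python sketch below shows one simple way a team might fingerprint every file feeding a training run, so later tampering is detectable. It is illustrative only; the `training_data` directory and the manifest format are invented for this example, not any particular agency's tooling.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 so large training files fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def build_manifest(data_dir: str) -> dict:
    """Record a content hash for every file that feeds the AI pipeline."""
    files = sorted(Path(data_dir).rglob("*"))
    return {
        "created": datetime.now(timezone.utc).isoformat(),
        "files": {str(p): sha256_of(p) for p in files if p.is_file()},
    }

if __name__ == "__main__":
    manifest = build_manifest("training_data")  # hypothetical data directory
    Path("manifest.json").write_text(json.dumps(manifest, indent=2))
```

Regenerating the manifest before each training run and comparing it with the stored copy gives a cheap, auditable check that the data feeding a model is exactly the data that was approved.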
Protecting data across the AI pipeline
Confidential AI deployments tailor configurations at both the software and hardware levels to accommodate and protect how AI operates on data within the organization. This entails authentication protocols that secure data in use, in motion and at rest; cryptographic attestation processes that verify data and workload authenticity; and related steps that collectively establish a trusted execution environment in which AI processing can take place.
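As a simplified illustration of the attestation pattern, the Python sketch below models only the measurement-comparison step: a secret is released to a workload only if its reported measurement matches a known-good value. In a real trusted execution environment the "quote" is produced and signed by the hardware itself; the workload name and key value here are invented.

```python
import hashlib
import hmac

# Hypothetical expected measurement: in a real TEE the hardware signs a
# "quote" containing a hash of the loaded workload, and the relying party
# compares it against a known-good value before releasing any secrets.
APPROVED_WORKLOAD = b"llm-inference-server-v1"
EXPECTED_MEASUREMENT = hashlib.sha256(APPROVED_WORKLOAD).hexdigest()

def attest_and_release(reported_measurement: str, data_key: bytes) -> bytes:
    """Hand over the dataset decryption key only if attestation succeeds."""
    # Constant-time comparison avoids leaking information through timing.
    if not hmac.compare_digest(reported_measurement, EXPECTED_MEASUREMENT):
        raise PermissionError("attestation failed: key withheld")
    return data_key

# Usage: a matching measurement receives the key; anything else is refused.
key = attest_and_release(EXPECTED_MEASUREMENT, b"example-wrapped-data-key")
```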
A confidential AI deployment will apply these computing principles to data and processes used to train LLMs, along with their output and how models perform in actual production. Taken together, these steps help address the concerns of federal IT leaders who want to prioritize AI integration but struggle with how to proceed without exposing AI models and algorithms to malicious attacks.
Confidential AI is helping public and private sector organizations of all kinds enhance the security and privacy of their AI deployments. The best approaches are comprehensive across the entire AI journey, from data preparation and consolidation to training, inference and results delivery. And when used alongside storage and network encryption, confidential AI can keep all government data protected at rest, in transit and in use, even in demanding multi-model deployments.
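The sketch below shows how those layers fit together. It uses the open-source cryptography package and simulates the enclave boundary with an ordinary function: the dataset stays encrypted at rest and in transit, and plaintext exists only inside the trusted environment. In a real deployment the key would arrive via the attestation handshake shown earlier and enclave memory would be hardware-encrypted.

```python
from cryptography.fernet import Fernet  # pip install cryptography

# The data owner encrypts the dataset before it leaves their control,
# so it remains protected at rest and in transit.
data_key = Fernet.generate_key()
ciphertext = Fernet(data_key).encrypt(b"sensitive training records")

def run_inside_enclave(token: bytes, key: bytes) -> bytes:
    """Simulated enclave: the only place plaintext ever exists."""
    return Fernet(key).decrypt(token)

plaintext = run_inside_enclave(ciphertext, data_key)
assert plaintext == b"sensitive training records"
```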
Confidential AI transforms federal agency operations
The advantages of confidential AI are especially critical in the federal agency environment, where mission characteristics dictate AI processes that involve extremely sensitive or regulated data. A Defense Department project using AI for computer vision, for instance, may run on data that is classified and central to national security; or a Department of Health and Human Services disease-modeling effort leveraging AI might draw on sensitive, HIPAA-protected patient records.
Confidential AI protects agencies through systematic isolation, encryption and attestation of such data across the AI pipeline. And federal IT teams are finding innovative ways along that pipeline to further tailor confidential AI to their most demanding use cases. Examples include medical research that uses synthetic data, algorithmically generated to replicate the statistical character of real health information, so studies can proceed without compromising patient confidentiality.
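As a toy illustration of the synthetic-data idea, the NumPy sketch below fits summary statistics from stand-in patient records, then samples new records that mirror those statistics without corresponding to any actual patient. The fields and numbers are invented, and real synthetic-data generators are far more sophisticated, but the privacy logic is the same.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Stand-in for real patient measurements (e.g., age, blood pressure, BMI);
# in practice these would never leave the protected environment.
real = rng.normal(loc=[54.0, 128.0, 27.5],
                  scale=[12.0, 15.0, 4.0],
                  size=(1000, 3))

# Fit the real data's first- and second-order statistics...
mean, cov = real.mean(axis=0), np.cov(real, rowvar=False)

# ...then sample synthetic records that preserve those statistics
# while matching no individual in the source data.
synthetic = rng.multivariate_normal(mean, cov, size=1000)

print("real means:     ", np.round(mean, 1))
print("synthetic means:", np.round(synthetic.mean(axis=0), 1))
```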
As another example, emergency response professionals can use confidential AI to maintain the encryption levels required for AI-driven digital twins that combine diverse data streams into more accurate and realistic models. These are just some of the many cases where a well-implemented confidential AI deployment can strengthen an agency's AI risk framework and enhance mission performance and constituent service quality overall.
Conclusion
Confidential AI is an effective way for agencies to harness the power of AI while ensuring that the underlying data fueling AI insights and models remain secure, private and compliant. With the right implementation and configuration choices, federal agencies can apply confidential AI to even their most demanding citizen service use cases.
Gretchen Stewart is chief data scientist at Intel.