Translating the AI executive order into security practices
The AI EO provides agencies with specific guardrails that can reduce the potential risks surrounding AI, and sets timely goals.
The efficiencies and benefits of AI hold too much potential to ignore, as evidenced by President Biden’s Executive Order (EO) on Safe, Secure, and Trustworthy Artificial Intelligence. However, AI in the public sector must be developed and deployed responsibly and securely. Transparency and accountability should be the central focus for any public sector agency looking to implement AI.
The AI EO provides agencies with specific guardrails that can reduce the potential risks surrounding AI, and sets timely goals to help agencies proactively mitigate AI vulnerabilities, minimize potential misuse, and steward strong AI development.
To get ahead of potential risks, agency leaders should align with the cybersecurity practices already established by governing bodies like the Cybersecurity and Infrastructure Security Agency (CISA) and the National Institute of Standards and Technology (NIST) as they evolve guidance for AI governance. Taking a secure-by-design approach and embracing collaboration, transparency and privacy will set the adoption of safe AI up for success.
Incorporating secure-by-design principles across the software development lifecycle
AI will become central to the software development process by boosting efficiency, expediting and securing DevSecOps, and assisting nearly every other function in the software development lifecycle. AI embedded within software development workflows can help DevSecOps teams write code, suggest code reviewers, understand security vulnerabilities, predict productivity metrics, explain source code, and improve collaboration.
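As a concrete, deliberately simplified illustration, the Python sketch below shows one way a pipeline step might pre-filter a merge request diff for risky changes before handing the findings to an AI assistant for explanation and reviewer suggestions. The patterns, names and sample diff are hypothetical assumptions for illustration, not any specific product’s implementation.

```python
import re

# Patterns a pre-review filter might flag; purely illustrative examples.
RISKY_PATTERNS = {
    r"\beval\(": "dynamic code execution",
    r"shell=True": "possible shell injection",
    r"verify=False": "disabled TLS verification",
}

def triage_diff(diff_text: str) -> list[dict]:
    """Flag risky added lines so an AI assistant (and a human) can review them."""
    findings = []
    for lineno, line in enumerate(diff_text.splitlines(), start=1):
        if not line.startswith("+"):  # inspect only lines added by the change
            continue
        for pattern, risk in RISKY_PATTERNS.items():
            if re.search(pattern, line):
                findings.append({"line": lineno, "risk": risk, "code": line[1:].strip()})
    return findings

if __name__ == "__main__":
    sample_diff = (
        "+resp = requests.get(url, verify=False)\n"
        "-resp = requests.get(url)"
    )
    for finding in triage_diff(sample_diff):
        print(f"line {finding['line']}: {finding['risk']} -> {finding['code']}")
```

In a real workflow, a step like this would run alongside, not instead of, the security scanning and human review an agency already requires.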
A crucial factor for safe AI adoption is ensuring that security is integrated from the start of the software development process. CISA recently shared new guidance on secure-by-design practices to help software providers improve their proactive approach to the ever-changing threat landscape. Secure-by-design principles should be implemented during the design phase of the development lifecycle to reduce exploitable flaws before programs are introduced for broad use.
An AI bill of materials (AI BOM) is a helpful tool that outlines an AI system’s sources, sub-components, maturity and other specifics to help mitigate security risks. It allows buyers, operators and developers to evaluate the origins, vulnerabilities and risks of integrating AI into an existing technology stack.
By keeping a record of the components that go into the technology across the supply chain, agencies will be in the best position to understand challenges and vulnerabilities as they emerge while safely leveraging AI’s many possibilities.
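As a rough illustration, the following Python sketch models the kind of record an AI BOM might carry across the supply chain. The field names and example values are assumptions made for illustration, not a published AI BOM standard.

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class AIBOMComponent:
    name: str                 # model, dataset, or library making up the system
    supplier: str             # who produced or trained the component
    version: str
    training_data_sources: list[str] = field(default_factory=list)
    maturity: str = "experimental"   # e.g., experimental / production
    known_vulnerabilities: list[str] = field(default_factory=list)

@dataclass
class AIBOM:
    system_name: str
    components: list[AIBOMComponent] = field(default_factory=list)

    def to_json(self) -> str:
        """Serialize for exchange with buyers, operators and evaluators."""
        return json.dumps(asdict(self), indent=2)

# Hypothetical record for a hypothetical vendor's AI-assisted tool.
bom = AIBOM(
    system_name="code-review-assistant",
    components=[
        AIBOMComponent(
            name="base-language-model",
            supplier="ExampleVendor",
            version="1.2.0",
            training_data_sources=["public code repositories"],
            maturity="production",
        )
    ],
)
print(bom.to_json())
```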
Federal agencies are already exploring AI BOMs as evaluation criteria for understanding how prospective vendors use AI algorithms. AI vendors that operate transparently and share critical information to help create standards, tools and tests for secure and trustworthy AI are best positioned to help agencies understand the potential risks and cyber vulnerabilities of AI adoption.
Collaboration, transparency and privacy
Collaboration is critical to ensuring the safe and trustworthy use of AI; the federal government can learn from industry partners that are already tackling AI development and implementation challenges. Those working in a vacuum risk slower speed-to-market and difficulty scaling.
The EO also proposes that developers of AI systems share safety test results with the federal government, and it directs NIST and the departments of Energy and Homeland Security to develop standards and tools for AI implementation. This will require collaboration across agencies and between the public and private sectors. The federal government must pursue strong partnerships with organizations eager to work toward the common goal of safe and ethical AI, creating a solid base for the future of AI adoption.
Critical to this joint effort is transparency, especially around AI models and how teams use data. Agencies should prioritize working with vendors that are transparent about their development processes and the data used to train their AI models. For agencies and the citizens they serve, privacy is crucial; such protections are especially important in the public sector, where the misuse of AI could harm national security and public safety.
To continue driving innovation across the public sector and reduce the risks of this emerging technology, agency leaders can look to best practices in areas like cybersecurity and software development. It can be challenging for the federal government to develop and implement regulations at the pace of emerging technologies like AI. The EO on AI is a step in the right direction, and it’s incumbent on every leader in the federal technology space to do their part in driving it forward.
Joel Krooswyk is federal chief technology officer for GitLab Inc.