Three key steps for Homeland Security’s new AI Board
Of the many hypothesized AI risks, cyber risk is one of the few that we’re all but guaranteed to experience.
Last week, the Department of Homeland Security announced the creation of an Artificial Intelligence Safety and Security Board tasked with advising its secretary on the “secure development and deployment of AI technology in our nation’s critical infrastructure.” Staffed with top AI leaders and industry CEOs, this new board will no doubt command both attention and authority. That raises an important question: Where should it begin, and which priorities should guide its efforts?
This board and its work could have wide-reaching AI policy impact, so getting things right is essential. As the board digs in, it must focus its limited bandwidth where it matters most: realistic efforts to analyze AI challenges and to ensure resilience and improvement in our most critical systems.
Thus, the first step should be obvious: Identify the true scope of its responsibilities. Today, DHS defines “critical infrastructure” as 16 sectors that together encompass well over 50% of the U.S. economy. How is one board meant to advise across all of these areas effectively, let alone in an emerging, dynamic field like AI?
This massive breadth persists under current policy despite the recommendations of the U.S. Cyberspace Solarium Commission, which Congress convened from 2019 to 2021 and which urged prioritizing the most systemically important infrastructure. While most people assume that “critical” means the grid, water, pipelines and hospitals, existing policy reaches rideshare, factories, office buildings, retail, cosmetics and, most absurdly, casinos.
For the 22 individuals tapped to join the AI Safety and Security Board, this scope could strain priorities, waste time and pull focus away from the core systems that matter most. The grid, water, energy, planes, trains and hospitals are truly essential. Start there. Even this limited subset will be a big undertaking, demanding deep consideration and effort.
Not all priorities are equal. Of the many hypothesized AI risks, cyber risk is one of the few we’re all but guaranteed to experience. That points to a second core focus area: digital systems have bugs, and bugs will be exploited.
Thankfully, novel AI-related threats have yet to emerge in force. A recent OpenAI/Microsoft report found that malicious “threat actors” are indeed exploring AI-enabled cyber tactics, techniques and procedures, though success has been limited to low-level uses including research, troubleshooting and spear phishing email generation. With the runway it currently has, the board should focus on preparing for the unknown, the unexpected, and the possibility that offensive AI cyber use cases do emerge.
Beyond developing basic security recommendations, an important step will be analyzing existing cyber regulations and identifying gaps, overlaps and opportunities for harmonization to enable nimble action. We must also consider whether government agencies are adequately resourced to assist.
The staff managing the National Vulnerability Database (NVD), for instance, are spread thin. The resulting backlog has left this essential piece of our national cyber infrastructure out of step with known vulnerabilities. More resources are needed, especially if AI expands the cyber threat landscape. Across the board, deep cyber capacity will help government and industry adapt to new demands in an uncertain cyber future.
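That backlog is visible in public data. The sketch below is an illustration, not an official tool: it queries the NVD’s public 2.0 REST API and counts recently published CVE records by analysis status, where records marked “Awaiting Analysis” approximate the backlog. The endpoint, parameters and vulnStatus field follow NIST’s published API documentation, though field values and rate limits may change.

```python
# Minimal sketch: gauge the NVD analysis backlog by counting recently
# published CVEs per vulnStatus ("Awaiting Analysis", "Analyzed", etc.)
# via the public NVD 2.0 REST API. Unauthenticated clients are rate
# limited, hence the pause between pages.
import datetime as dt
import time

import requests

NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def backlog_snapshot(days: int = 30) -> dict:
    """Count CVE records published in the last `days` days, by status."""
    end = dt.datetime.now(dt.timezone.utc)
    start = end - dt.timedelta(days=days)
    params = {
        "pubStartDate": start.strftime("%Y-%m-%dT%H:%M:%S.000"),
        "pubEndDate": end.strftime("%Y-%m-%dT%H:%M:%S.000"),
        "resultsPerPage": 2000,
        "startIndex": 0,
    }
    counts: dict = {}
    while True:
        resp = requests.get(NVD_API, params=params, timeout=30)
        resp.raise_for_status()
        data = resp.json()
        page = data.get("vulnerabilities", [])
        for item in page:
            status = item["cve"].get("vulnStatus", "Unknown")
            counts[status] = counts.get(status, 0) + 1
        params["startIndex"] += len(page)
        if not page or params["startIndex"] >= data.get("totalResults", 0):
            break
        time.sleep(6)  # stay under the unauthenticated rate limit
    return counts

if __name__ == "__main__":
    for status, n in sorted(backlog_snapshot().items()):
        print(f"{status}: {n}")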
As a third pillar, the board must focus on AI’s promise and diffusion. The appropriate application of AI in critical infrastructure could improve reliability and boost efficiency across both government and the economy.
For example, a 2019 Google study found that machine learning analysis of weather forecasts and wind turbine output enabled prediction of wind power supply 36 hours in advance. The result: Google boosted the value of its wind energy by roughly 20%, a win for green energy. Other emerging applications include downed powerline detection, predictive maintenance, water quality analysis and, of course, AI-augmented cyber defenses. The board should study carefully which bottlenecks hold back success. Which regulations stand in the way? Is further investment and research needed?
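Google has not published that forecasting system. As a rough illustration of the general technique, the sketch below trains a gradient-boosted regressor to predict farm output from weather features available 36 hours ahead. The features, data and model choice here are all invented for illustration; a real system would use historical weather forecasts and turbine telemetry.

```python
# Illustrative only: forecast wind-farm output 36 hours ahead from
# weather features, in the spirit of the 2019 Google study. All data
# below is synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5_000

# Hypothetical features known 36 hours before delivery time:
# forecast wind speed (m/s), wind direction (deg), air pressure (hPa).
X = np.column_stack([
    rng.uniform(0, 25, n),
    rng.uniform(0, 360, n),
    rng.normal(1013, 8, n),
])

# Toy "ground truth": output rises roughly with the cube of wind speed,
# capped at rated capacity, plus noise.
power_kw = np.clip(0.5 * X[:, 0] ** 3, 0, 2000) + rng.normal(0, 50, n)

X_train, X_test, y_train, y_test = train_test_split(
    X, power_kw, test_size=0.2, random_state=0
)

model = GradientBoostingRegressor().fit(X_train, y_train)
print(f"MAE: {mean_absolute_error(y_test, model.predict(X_test)):.1f} kW")
```

The point of such a forecast is economic as much as technical: power committed to the grid a day ahead is worth more than power delivered unannounced, which is where the reported value gain came from.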
As the board embarks on advising DHS, it’s imperative that it approach the work with a clear strategy and a clear sense of priorities. By limiting the scope of its efforts, prioritizing cyber risks and recognizing the immense potential of AI to improve and fortify our critical systems, the board can make significant strides toward minimizing AI disruption and maximizing AI success.
Matthew Mittelsteadt is a technologist and research fellow with the Mercatus Center at George Mason University.