Insight by Check Point Software Technologies

CISOs are playing a larger role in getting agencies ready for AI

Cindi Carter, the global CISO for the Americas at Check Point Software Technologies, said AI can help augment the cyber defenses already in place at many agencies.

For much of the last decade, cybersecurity experts have warned that the government’s struggle to keep up with the volume, velocity and veracity of cybersecurity data could result in grave consequences.

But with advances in artificial intelligence over the last few years, organizations have more hope of getting in front of cyber threats.

IDC Government Insights said where AI is making an impact today is through intelligent automation. The research firm said there are still risks associated with relying on AI for cybersecurity, but its value is being demonstrated in its ability to improve threat detection, automate security tasks and strengthen security risk analysis.

The Cybersecurity and Infrastructure Security Agency, for example, outlined 12 AI use cases, many of which would apply these capabilities to areas like vulnerability reporting, threat intelligence feed correlation and advanced network anomaly alerting.

The ultimate goal of applying AI capabilities is to enhance cybersecurity efficiency and accuracy for the network defenders.

Cindi Carter, the global chief information security officer for the Americas at Check Point Software Technologies, said that, as with any new or emerging technology, agency leaders have to ask a very simple question before jumping into the deep end: “What problem are we trying to solve?”

Carter said that answer can lead agencies and organizations to see where AI can help augment the defenses they already have in place and where it can provide preventive measures.

“How do we help our cybersecurity teams do more with less? What I mean by that is, they have less time; they have less talent on the teams; they have less resources available to them, so we are talking about automating certain processes or certain tasks that could otherwise be considered as perhaps mind numbing, or maybe mundane,” Carter said on Innovation in Government. “When you think about a security operation center and security analysts, they’re looking at the monitors, looking at the blinking lights, as we all like to say, that can get a little bit mind numbing, a little bit dry. So automating with AI some of the low-level responses that maybe security analysts would have to do is an absolute benefit, so that that security analyst can go do that other enriching or critical mind thinking type work.”
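To make that concrete, here is a minimal sketch, in Python, of the kind of low-level triage an agency might automate so analysts are freed up for higher-value work. The alert fields, severity scale and triage rules are illustrative assumptions, not a description of any agency's playbook or any Check Point product behavior.

# A minimal, hypothetical triage routine for low-level SOC alerts.
# Field names, thresholds and dispositions are assumptions for illustration only.
from dataclasses import dataclass

@dataclass
class Alert:
    source_ip: str
    rule_name: str
    severity: int               # assumed scale: 1 (informational) to 10 (critical)
    matches_known_benign: bool  # e.g., a documented internal vulnerability scanner

def triage(alert: Alert) -> str:
    """Route routine noise automatically; reserve analyst time for judgment calls."""
    if alert.matches_known_benign and alert.severity <= 3:
        return "auto-close"            # the mundane, well-understood traffic
    if alert.severity <= 5:
        return "enrich-and-queue"      # add context, hold for batch review
    return "escalate-to-analyst"       # critical thinking stays with a person

if __name__ == "__main__":
    print(triage(Alert("10.0.0.12", "internal-port-scan", 2, True)))  # auto-close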

Using AI to enrich decisions

Of course, with any type of automation, cybersecurity or otherwise, agencies need to guard against using the same old processes with new technology. Carter said if organizations simply automate bad processes, the benefits of the technology are likely to be severely limited.

“When you think about terms of responses, incident responses to perhaps an alert that comes through, and it might be what we would consider a low level alert, we have to make sure that those processes that we’re utilizing manually are working the way that we intended, that they are delivering the results that we want, or the desired outcomes, that they are well documented before you can go and put an AI engine or put artificial intelligence around that to respond,” she said. “I do think that ways in which the cybersecurity teams can leverage AI is to help enrich the decisions that we have to make around cybersecurity, what types of vulnerabilities are or what types of threats are present in our environment, what kinds of cyber criminals are attacking our vertical, what types of criminals are interested in the kind of data that we have? Those are the types of enrichment, I call it, that artificial intelligence can help the cybersecurity teams with and it can help us make better decisions, it can help improve those outcomes for the institutions that we serve.”
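One way to picture that enrichment step is a simple join between a raw alert and threat intelligence context before a human decides what to do. The sketch below, in Python, uses a made-up feed and field names purely to illustrate the idea; it is not a real intelligence source or vendor API.

# Hypothetical enrichment: attach "who is attacking us and what do they want"
# context to a raw alert. The feed contents and keys are invented for illustration.
THREAT_INTEL = {
    "203.0.113.7": {
        "actor": "example-crime-group",
        "targets": "public sector",
        "interested_in": "citizen PII",
    },
}

def enrich(alert: dict) -> dict:
    """Return the alert plus any known context about its source."""
    intel = THREAT_INTEL.get(alert.get("source_ip"), {})
    return {**alert, "enrichment": intel or {"note": "no known actor for this source"}}

if __name__ == "__main__":
    print(enrich({"source_ip": "203.0.113.7", "rule": "credential-stuffing"}))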

For many agencies, applying AI to mission and back-office areas remains in the nascent stage. Agencies are still figuring out the policy, regulatory, governance and workforce guardrails.

Carter said agencies seem to have learned some of the lessons from the early days of cloud computing, when some organizations that jumped right in later realized that approach doesn’t work for everything.

“There’s some questions that systems need to ask, and these are answers that may not already be clear to us yet because this space is really still evolving. Some of those questions are how can I look within my own institution and develop some clear guidelines and some oversight for the mechanisms to ensure that our artificial intelligence is used in a secure and compliant manner? We are just now seeing that the regulatory space is starting to evolve around this. We have AI risk management frameworks. We have some of the regulation from the European Union, which actually goes into effect in June,” she said. “So we can take a cue from some of that. We need to look internally at our own policies and procedures, things like our data loss prevention practices, do we have the right data classification? Are we able to do it? Are we adhering to what those existing policies and procedures are? Are we able to look at our acceptable use policy and augment that for the usage of AI in our environment now?”
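A small illustration of one of those internal guardrails is a policy check that screens what users send to an AI tool against data classification and acceptable-use rules. The markings and the pattern below are assumptions made for the sketch, not an agency's actual classification scheme or DLP ruleset.

# Hypothetical pre-flight check before a prompt leaves the agency for an AI service.
# Classification markings and the PII pattern are illustrative assumptions.
import re

RESTRICTED_MARKINGS = ["CUI", "FOR OFFICIAL USE ONLY"]
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")   # crude stand-in for a DLP rule

def allowed_to_send(prompt: str) -> bool:
    """Block prompts that carry restricted markings or obvious PII."""
    upper = prompt.upper()
    for marking in RESTRICTED_MARKINGS:
        if re.search(rf"\b{re.escape(marking)}\b", upper):
            return False
    if SSN_PATTERN.search(prompt):
        return False
    return True

if __name__ == "__main__":
    print(allowed_to_send("Summarize this CUI incident report"))            # False
    print(allowed_to_send("Draft an outline of our acceptable use policy")) # True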

Putting the right guardrails around AI

She added that the data piece of AI is as important as, or more important than, almost anything else. Agencies need high-quality data to train large language models, and they need to understand what data they have and how to protect it.

“It does cause CISOs and other security leaders to take a step back and make sure we put the right guardrails around the use of this because we already know that the business is chomping at the bit to leverage it because there are significant benefits that can be obtained,” she said. “But we do have to make sure that we’ve got the right guardrails and oversight in place, but we also need to make sure that we have the right foundation for this to be woven into and built upon.”

Carter said it’s encouraging to see agencies bring together CISOs, chief information officers, chief data officers, business leaders, auditors and lawyers to address many of these complex issues.

She said AI and other emerging technologies also are changing the roles of CISOs and other technology leaders.

“Our roles are extremely dynamic as security leaders, and one of the reasons is because of how quickly technological advances take place and how quickly organizations look to implement them. So you have to be comfortable with that amount of rapid pace change,” Carter said. “As a CISO and a security leader, you have to be continuously hungry to learn something new and evolve with that. Just as technology is evolving and just as the cyber threats are evolving, we have to be willing to embrace that change as cybersecurity leaders and help our organizations understand that too.”


