Insight by Oracle

How AI has raised the stakes in the cyber battle

Cyber attackers have started using artificial intelligence in a big way. That means you need to use it to stay ahead of them.

Artificial intelligence has accelerated the cybersecurity arms race.

Agencies have started to use AI to identify attack patterns more quickly and devise countermeasures to thwart them. AI speeds up detection of anomalous files and behaviors, and it identifies patterns indicating zero-day attacks so that security operations have more time to respond.
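To make that concrete, here is a minimal sketch of the kind of anomaly detection described above, using a simple outlier model over file and behavior telemetry. The feature names, values and library choice (Python's scikit-learn) are illustrative assumptions, not a description of any agency's or Oracle's actual tooling.

    # Minimal sketch of ML-based anomaly detection on file/behavior telemetry.
    # Illustrative only: feature names and values are assumptions, not any
    # agency's or vendor's actual pipeline.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Each row: [bytes_written, files_touched, privileged_calls, off_hours_flag]
    baseline = np.array([
        [1_200, 14, 0, 0],
        [  900, 10, 0, 0],
        [1_500, 20, 1, 0],
        [1_100, 12, 0, 1],
    ])

    model = IsolationForest(contamination=0.05, random_state=0).fit(baseline)

    # A burst of writes across many files, with privileged calls at odd hours,
    # scores as anomalous relative to the baseline of normal activity.
    suspect = np.array([[250_000, 4_000, 12, 1]])
    print(model.predict(suspect))  # -1 flags an outlier for analyst review

In practice the baseline would be learned from weeks of real telemetry, but the principle is the same: the model surfaces behavior that departs from normal patterns faster than manual review would.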

But, as David Knox, the chief technology officer for industrials, energy and government at Oracle, noted, “The same technology is available to the bad actors.”

For instance, cyber attackers use AI to vastly increase the effectiveness of spear phishing. They’ve sharpened their “spears” with improved language and syntax, and by incorporating details that make emails more convincing, even to sophisticated users. They’ve created so-called deepfakes, video messages that look like they come from the real McCoy. Perhaps more worrisome, Knox said, is attackers’ use of AI first to identify the best attack vectors and then to produce the code for zero-day exploits.

“Think about how scary it is that you could ask a system to find a vulnerability in an architecture, [then] write the code for the exploit,” Knox said.

One way to stay ahead of this possibility: Create your own runbooks with procedures and contingency plans. Knox said generative AI can be highly useful in doing so. Organizations can use it to capture experienced people’s knowledge and combine it with a base of written content on policy, regulations and their underlying statutes.

“Generative AI can be really good at fine tuning and summarizing those things,” Knox said.
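As an illustration of that point, the sketch below asks a generative model to summarize written policy and practitioners’ notes into draft runbook steps. It assumes an OpenAI-compatible chat API in Python; the model name, file names and prompt wording are placeholders, since Knox does not name any particular tool or workflow.

    # Minimal sketch of using a generative model to draft runbook steps from
    # existing written policy. Assumes an OpenAI-compatible chat API; model
    # name, file names and prompt wording are placeholders.
    from openai import OpenAI

    client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

    policy_excerpts = open("incident_response_policy.txt").read()
    interview_notes = open("senior_analyst_interview_notes.txt").read()

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "You draft incident-response runbook steps that cite "
                        "the policy language they are based on."},
            {"role": "user",
             "content": f"Policy:\n{policy_excerpts}\n\n"
                        f"Practitioner knowledge:\n{interview_notes}\n\n"
                        "Summarize these into numbered runbook steps for a "
                        "suspected zero-day exploit."},
        ],
    )
    print(response.choices[0].message.content)

The output is a draft for human review, not a finished runbook; the value is in turning scattered policy text and institutional knowledge into a consistent starting point.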

It can also help cyber practitioners communicate clearly with everyone in the IT chain, while enabling faster and more consistent training. He added that agencies should keep in mind the federal guidelines for safe use of AI coming from the National Institute of Standards and Technology and the Cybersecurity and Infrastructure Security Agency.

Ransomware has also become a top-of-mind cybersecurity concern. The fact that so many organizations feel they have no choice but to pay ransom to regain access to their data has motivated legions of attackers, Knox noted. If early denial-of-service or data attacks were motivated by corporate embarrassment or bragging rights, today’s attackers are steely-eyed seekers of money.

“It now becomes like a profession,” Knox said.

Mitigating this threat, he said, requires combining data about systems architecture, continuous operations, disaster recovery plans and early detection of threats, all while agencies strive for high availability of their systems. When thinking about parameters like recovery point and recovery time objectives and how to recover from a disaster, “it gets very complex very quickly,” Knox said.
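To show how even the basic parameters interact, here is a minimal sketch in Python that checks a backup and restore plan against recovery point and recovery time objectives. The figures are illustrative assumptions, not recommendations from Knox or Oracle.

    # Minimal sketch of checking a backup/restore plan against recovery point
    # and recovery time objectives (RPO/RTO). Figures are illustrative only.
    from datetime import timedelta

    def meets_objectives(backup_interval, restore_duration, rpo, rto):
        """RPO bounds acceptable data loss (time since the last good backup);
        RTO bounds acceptable downtime (time needed to restore service)."""
        return backup_interval <= rpo and restore_duration <= rto

    # Example: hourly backups with a 6-hour restore, against a 1-hour RPO
    # and a 4-hour RTO. The restore window blows the RTO, so the plan fails.
    print(meets_objectives(
        backup_interval=timedelta(hours=1),
        restore_duration=timedelta(hours=6),
        rpo=timedelta(hours=1),
        rto=timedelta(hours=4),
    ))  # False

Multiply that simple check across dozens of applications, storage tiers and attack scenarios and the complexity Knox describes becomes clear.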

This is where generative AI can augment people, by dealing with that complexity and abstracting it. By combining operational objectives with things like email system data, system architecture blueprints, the layout of storage and compute resources, and application configurations, AI can help develop specific mitigation strategies for a variety of situations. Knox noted that email phishing is the most common vector for ransomware attacks, but the AI-driven mitigation principles apply to vectors in other systems as well.

Again, the same technology is available to attackers, so cybersecurity has become something of a spy-versus-spy activity. Knox said the same need for vigilance, speed and resilience remains even in the AI era.

