Why human-machine teaming is the future of cybersecurity

Eric Trexler, the executive director of national security and civilian programs for McAfee, explains why machine learning is the fastest way to identify attacks and push that information to endpoint security platforms.

In light of the federal cybersecurity workforce shortage, turning to machines and automation to help secure federal systems and networks is no longer a suggestion; it’s a necessity.

This shortage can be attributed to several factors, one of which is that a lot of person-power is spent on mundane tasks that don’t need to be done by a human. This leads to higher levels of turnover in more junior roles — namely tier-one security operations center (SOC) operators and researchers.

Fortunately, these junior roles are the easiest and most logical to automate with human-machine teaming. A recent Pathfinder report from 451 Research, commissioned by McAfee, explores this topic and describes how human-machine teaming makes for sustainable endpoint security in all enterprises, including government. Artificial intelligence and machine learning can take on the more mundane tasks, reserving higher-level human thinking for more sophisticated attacks and changing the way cyber professionals do their jobs for the better.

There is an important caveat here: machines are only as good as the humans creating and using them. Federal cybersecurity workers shouldn’t worry about job security with artificial intelligence looming. In fact, it’s quite the opposite: They should be excited, as their jobs should become more interesting and challenging with automation taking over lower-level tasks. The optimum state of federal cybersecurity is not simply automation, artificial intelligence or machine learning; it’s human-machine teaming.

Despite the 2017 National Defense Authorization Act directing a more limited use of lowest-price technically acceptable (LPTA) contracts, the government continues to leverage these contracts heavily for cybersecurity efforts. Going forward, agencies will need to leverage machine learning and automation for low-cost, lower-skilled activities, reserving human intellect for higher-order efforts.

This concept is not without precedent. Machines helped us win World War II through cryptanalysis and codebreaking; in the same way, machines can help us defend our systems from modern-day adversaries. The Allies still required Alan Turing and his team. They still needed Joseph Rochefort and his cryptanalysts. Imagine the state of the world if the government had continued to work on the enemy’s ciphers and codes manually, without involving machines. The Battle of the Atlantic and the Battle of Midway would likely have ended very differently. Just as those cryptanalysts thought differently about codebreaking, we need to think differently about cybersecurity today.

Attackers now focus on vulnerable endpoints as the preferred point of entry for malware, because endpoints are not confined to the data center, with its layers of security under the watchful eye of security teams. With the increased use of public and hybrid clouds, the network becomes even more diverse and complex, not to mention the coming mass propagation of Internet of Things (IoT) sensors and control devices. Even the best humans simply can’t keep up today, and tomorrow will be even more challenging. This is where machine learning will be key.

Machine learning provides the fastest way to identify new attacks and push that information to endpoint security platforms. Machines are excellent at repetitive tasks, such as making calculations across broad swaths of data, crunching big data sets and drawing statistical inferences from them, all at rapid speed. With the help of machine learning, security teams may gain greater insight into who the attackers are (basic attribution), what methods they’re using, and how successful those methods are. Still, it’s imperative to remember that machines cannot put data into context the way humans can, nor understand the implications of events. Context is critically important in cyber operations, and it is not something machines handle well.
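To make that idea concrete, here is a minimal sketch of the kind of statistical inference described above, written in Python with scikit-learn’s IsolationForest. Neither the tooling nor the features come from the report; the telemetry fields and thresholds are invented for illustration, and a real endpoint platform would feed far richer signals into its models.

```python
# Minimal sketch: an unsupervised model flags unusual endpoint telemetry
# for human review. Feature names and data are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical telemetry per endpoint, per minute:
# [processes spawned, outbound connections, megabytes written]
normal_activity = rng.normal(loc=[5, 10, 2], scale=[2, 4, 1], size=(1000, 3))
suspicious_activity = rng.normal(loc=[60, 150, 40], scale=[10, 30, 10], size=(5, 3))

# Train on routine behavior, then score a mixed batch.
model = IsolationForest(contamination=0.01, random_state=42)
model.fit(normal_activity)

batch = np.vstack([normal_activity[:10], suspicious_activity])
labels = model.predict(batch)  # 1 = looks normal, -1 = flag for an analyst

for row, label in zip(batch, labels):
    if label == -1:
        print(f"Flag for analyst review: {np.round(row, 1)}")
```

The machine does the tireless part, scoring every endpoint continuously; the human decides whether a flagged pattern actually matters.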

Machine learning is a long way from perfect, but it’s making significant gains and is worth the effort. Of course, the results are only as good as the variables humans feed into the models, and they remain subject to any unknowns left out of the equation; machines don’t think for themselves. A hybrid of human and machine will be the answer, and as the technology evolves, the workload will shift.

Government organizations need to understand that today’s attacks are not as simple as finding the next event; they require correlating events that might come from multiple sources, targeting multiple systems within multiple agencies. One or two events on their own might be benign, but taken together and viewed from a broader perspective, those events might be indicators of compromise. The job of looking across that broader perspective, correlating events, and telling the story falls to humans.
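The sketch below illustrates that correlation problem under simplified assumptions: the event schema, source names and thresholds are hypothetical, and real SOC tooling would work over streaming data rather than a small in-memory list. The point is only that several individually benign events, grouped by host within a short window, can add up to something a human should investigate.

```python
# Minimal sketch: correlate events from several hypothetical sources by host
# within a time window; escalate when multiple sources agree.
from collections import defaultdict
from datetime import datetime, timedelta

events = [
    {"time": datetime(2018, 5, 1, 9, 0),  "host": "ws-17", "source": "auth",     "type": "failed_login"},
    {"time": datetime(2018, 5, 1, 9, 2),  "host": "ws-17", "source": "endpoint", "type": "new_service_installed"},
    {"time": datetime(2018, 5, 1, 9, 5),  "host": "ws-17", "source": "netflow",  "type": "outbound_to_rare_domain"},
    {"time": datetime(2018, 5, 1, 14, 0), "host": "ws-22", "source": "auth",     "type": "failed_login"},
]

WINDOW = timedelta(minutes=15)
MIN_DISTINCT_SOURCES = 3  # each event is benign alone; several sources agreeing is not

by_host = defaultdict(list)
for event in sorted(events, key=lambda e: e["time"]):
    by_host[event["host"]].append(event)

for host, host_events in by_host.items():
    for i, anchor in enumerate(host_events):
        window_events = [e for e in host_events[i:] if e["time"] - anchor["time"] <= WINDOW]
        sources = {e["source"] for e in window_events}
        if len(sources) >= MIN_DISTINCT_SOURCES:
            print(f"Possible indicator of compromise on {host}: "
                  f"{[e['type'] for e in window_events]} across {sorted(sources)}")
            break
```

Even here, the machine only surfaces the pattern; deciding what the combination means, and telling that story to leadership, remains a human job.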

The key to human-machine teaming is using machines to do what they do best and humans to do what machines can’t do — like making sophisticated judgments and thinking quickly to solve problems. The result will yield not only more interesting federal jobs but also a more effective defensive posture for government networks. Our adversaries are using machine learning and artificial intelligence to attack us; it’s time we match their capabilities.


Eric Trexler is the executive director of national security and civilian programs for McAfee.

