Researchers at the Pacific Northwest National Laboratory have come up with a cybersecurity software tool that builds on the old notion of honeypots, a way of tricking hackers into thinking they’ve gotten into your systems. The new technology is called Shadow Figment. Thomas Edgar, the lab’s senior cybersecurity scientist, joined Federal Drive with Tom Temin to talk about how it all works.
Tom Temin: Mr. Edgar, good to have you on.
Thomas Edgar: Pleasure to talk to you.
Tom Temin: Tell us what you’ve developed here. It’s got a great name Shadow Figment.
Thomas Edgar: So Shadow Figment is a technique to create cyber deceptions that lure away attackers who have gotten into our critical infrastructure, so that they don’t target the actual systems that provide critical services but instead target fictional systems. It also informs our defenders and operators that something is going on, so that they have time to respond and prevent the loss of our critical services.
Tom Temin: So, some element of it must then emulate or look like the system that the hackers think they’re getting into?
Thomas Edgar: Yeah. The key novelty of what we’ve developed is the ability to model or learn the physics of the physical process that the critical infrastructure is controlling. So, if it’s an electrical system, we can model the physics of the electricity being distributed, so that the decoys you deploy appear to be connected to the physical system. The attackers can believe that they’re achieving their objectives, which keeps them going, while in reality they’re not talking to the real system, and we’re protected.
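[Editor’s note: The idea described above — decoys that answer an intruder’s polls with physically plausible values — can be sketched in a few lines. This is a hypothetical illustration, not Shadow Figment’s actual code; the linear voltage model, its coefficients, and the `DecoySensor` class are all invented for the example, standing in for a model learned from real telemetry.]

```python
# Hypothetical sketch: a decoy sensor whose replies follow a simple
# learned model of the physical process, so repeated polls by an
# intruder look physically consistent over time.
import random

class DecoySensor:
    def __init__(self, slope, intercept, noise=0.5):
        # slope/intercept stand in for a model fitted to real telemetry
        self.slope = slope
        self.intercept = intercept
        self.noise = noise

    def read(self, load_mw):
        """Return a plausible bus-voltage reading for a given load."""
        return (self.intercept + self.slope * load_mw
                + random.uniform(-self.noise, self.noise))

decoy = DecoySensor(slope=-0.02, intercept=120.0)
# An attacker polling the decoy sees values that track the (fictional)
# load, just as a real bus voltage would sag under increasing demand.
readings = [decoy.read(load) for load in (10, 50, 90)]
assert readings[0] > readings[-1]  # heavier load -> lower voltage
```

The point of the sketch is the consistency: because every reading is derived from one model of the process, an intruder cross-checking values over time sees nothing out of place.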
Tom Temin: In other words, you need a different version, depending on what the critical infrastructure is. Electrical grid must look different from, I don’t know, a SCADA system, or a pipeline or that kind of thing?
Thomas Edgar: That’s correct. One of the key features is we’re using machine learning to take data from the real system and learn the physics of the system that we’re protecting. So yes, the Shadow Figment would go into a building and look like an HVAC system, or go into an oil and gas pipeline and look like a pipeline distribution system.
Tom Temin: And how, as a practical matter, would a critical systems operator deploy this? Where does it sit in their own systems?
Thomas Edgar: So that’s the powerful piece of this. In IT, we kind of have a common notion of patching as fast as possible to defend ourselves. And that’s not always a realistic thing in OT systems or critical infrastructure. The Shadow Figment just installs in the same environment, but it operates around the system. So the users can deploy decoys on their network, but it doesn’t really have to interact with or be part of the real system. And so it’s a lightweight, easy way to deploy defenses and lure attackers without impacting the operation of the system.
Tom Temin: It sounds like one of the old cartoons where the chased entity would paint a picture of a tunnel opening on the side of a mountain and the Road Runner would slam into it thinking there was a tunnel there. I guess my question is, how do you make sure that the attacker goes to the painted-on tunnel, so to speak, and not to the real one?
Thomas Edgar: So that’s where we’re trying to take our research now: how to recommend the most effective decoys. With the current capabilities, it’s up to the defender to define their deception such that it would lure the different attackers or threats out there. But some of the ideas we have are to recommend, based on a general understanding of these systems, where some tempting targets would be, or to integrate threat intelligence services that know what the threats are trying to do right now, and help defenders deploy currently relevant deceptions to lure the threat actors that are operating today.
Tom Temin: And what is the current state of Shadow Figment? Is it a compiled runtime piece of software that someone can deploy?
Thomas Edgar: So, as a Department of Energy national lab, we don’t commercialize things, we don’t make products. So, for our efforts, we have prototypes that are developed that are actual operational kind of capabilities. But, we drive to get these to industry by licensing them. And so we have worked with Attivo Networks, which is a deception defense commercial entity that sells a platform. And we’ve licensed these technologies, and they’re working to integrate these concepts and capabilities into their commercial offering.
Tom Temin: We’re speaking with Thomas Edgar, he’s senior cybersecurity scientist at the Pacific Northwest National Laboratory. And how important is it for each user to tailor the package to their particular infrastructure if they decided to deploy it? That is to say, would PEPCO, here on the east coast in Maryland, have the same-looking application as Pacific Gas and Electric? I mean, they’re both electric grids. But are they all the same?
Thomas Edgar: To make the deception the most effective, it all depends on how targeted you are. So if the threat is targeting PEPCO specifically, and they know what PEPCO systems look like, to be most effective you want your deception to be very tailored to your environment. But there is value in just deploying things that look like general systems, and we’ve developed some templates to do that type of thing. And there have been research studies showing that even just the belief that there are deceptions in the environment raises the stress of threats and attackers, because they’re worried that they’re going to stumble on something they don’t want to touch. So just the belief, or even lightweight deceptions, improves the defender’s situation in general.
Tom Temin: And if a hacker does stumble into the Shadow Figment, is there information that you as the target can gather about the hacker while they’re at it?
Thomas Edgar: Yeah, so that’s one of the big values of deception as a defense: things should not be talking to it. It’s not a real system. So it’s a very low false-positive detector, in a sense. If something’s talking to it, either something in your environment has changed or is malfunctioning, or you have an attacker. If something interacts with your decoys, that alone is high-value information: you know something’s wrong and you need to pay attention. But beyond that, and this is more research that we’re looking into, can we infer the intent of the threat? Are they trying to steal data? Are they trying to take down a specific subsystem? Based on what their actions are against that decoy, we believe we can infer their intent and provide more information and support to the defenders, so they can take the correct defensive actions to prevent the bad behavior.
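[Editor’s note: The detection logic described above is simple to illustrate: no legitimate traffic should ever reach a decoy, so any interaction is alert-worthy, and the type of interaction hints at intent. This is a hypothetical sketch under assumed names; the decoy addresses, event fields, and intent labels are invented for the example and are not part of Shadow Figment.]

```python
# Hypothetical sketch: because nothing legitimate should ever touch a
# decoy, any interaction is alert-worthy -- a very low false-positive
# signal -- and the action taken gives a crude hint at intent.
DECOY_HOSTS = {"10.0.5.21", "10.0.5.22"}  # assumed decoy addresses

def check_event(event):
    """Flag any traffic that reaches a decoy and guess at intent."""
    if event["dst"] not in DECOY_HOSTS:
        return None  # normal traffic: ignore
    # Crude intent inference from what the intruder tried to do
    if event["action"] == "read":
        intent = "reconnaissance / data theft"
    elif event["action"] == "write":
        intent = "attempting to alter the (fictional) process"
    else:
        intent = "unknown"
    return {"alert": True, "src": event["src"], "suspected_intent": intent}

alert = check_event({"src": "10.0.9.7", "dst": "10.0.5.21",
                     "action": "write"})
assert alert["suspected_intent"] == "attempting to alter the (fictional) process"
```

Because the rule only fires on decoy addresses, ordinary operational traffic never trips it, which is what makes the signal so trustworthy for defenders.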
Tom Temin: It sounds like the more copies of this that are deployed in different industries, and the more you collect from would-be attackers, the more you could build a brand new database, almost like a petri dish of different types of attacks and motivations, that could be quite useful.
Thomas Edgar: Yeah, absolutely. That’s one of the traditional, historical uses of decoys in academia: deploying these things to learn what the threats are doing. And so we have ideas of, when we integrate these with threat intelligence platforms, actually feeding data back to those platforms so that you could almost, at a national level, track threat campaigns based on where they’re interacting with decoys and what types of interactions those are, and just better inform us of what’s going on in the threat landscape.
Tom Temin: Now the recent ransomware attacks against critical infrastructure players have really gone into their business systems and not into the control systems for, say, the Colonial Pipeline itself, or for, I think it was an agricultural company. So the ransomware attackers had a different motivation, perhaps, than altering the course of a piece of critical infrastructure. How can the front office of critical infrastructure operators use this at the boundary between their business systems and the systems that are the critical infrastructure operation controls themselves? If that makes sense.
Thomas Edgar: Yep. So, you mentioned there’s a long history of honeypots. There was a resurgence about five, six years ago of honeypots as deception defense. And a lot of that was based on the ransomware campaigns. And so traditional deception defense in IT, a major use case of that was spun up to kind of provide traps for the ransomware getting in. And so a lot of the commercial entities today are already providing solutions from the IT side. The focus of ours is to translate those concepts and make them relevant into these OT systems. So, our fear is they pivot, they get in, and then we really start to have serious safety concerns.
Tom Temin: And have you had industry interest in this so far from the operators of critical infrastructure?
Thomas Edgar: Yeah, so through our interaction with Attivo Networks, which has customers among different utilities, we’ve had multiple conversations with specific utilities about their interest in these types of capabilities. I can’t talk about details.
Tom Temin: And it sounds like, just to wind up here, that you need to keep working on this to keep a step ahead of the attackers because they can listen to the radio too, and check the news releases at the PNNL and know what you’re up to.
Thomas Edgar: Yeah, cybersecurity in general is always an arms race. We come up with some new stuff, and then the threats come up with ways to get around that. So, we’re always going to be in a cat and mouse game with the attackers. And so yes, continued research is always an important piece of this and making sure we can defend ourselves.
Tom Temin: Thomas Edgar is a senior cybersecurity scientist at the Pacific Northwest National Laboratory. Thanks so much for joining me.
Thomas Edgar: Thank you. It’s a pleasure talking to you.