By all accounts, artificial intelligence is changing how organizations must approach cybersecurity. Not everyone is quite certain how. Now a large working group assembled by the Aspen Institute has come up with specific recommendations for dealing with AI in the cybersecurity context. For some highlights, the Federal Drive with Tom Temin spoke with the Institute’s Senior Director for Cybersecurity Programs, Jeff Greene.
Interview Transcript:
Tom Temin And we should note, you have pretty considerable federal experience working on cybersecurity issues in government, so you know whereof you speak. But my first question is, and I’ve heard it a thousand times, AI is really affecting cyber. But exactly how is it affecting cyber?
Jeff Greene I’d take that question in a couple of different parts: first, what’s happening today, and then what could happen in the future. Our paper tries to look a bit into the future. But today and in the past, artificial intelligence, or machine learning as we used to talk about it (there’s a bit of an overlap there), is helping out in a lot of ways. It’s making it easier for humans to spend their time focusing on the most important alerts, and allowing them to set aside some of the things they don’t need to spend as much time on. The machine learning and AI tools can correlate incidents and sift through volumes of data that a human would never be able to. They can connect a particular incident to another incident that would otherwise appear unconnected, whether through an IP address or other indicators of compromise, and they can detect unusual behaviors. Basically, they try to solve some of the human capital issues we’ve had in cybersecurity. So that’s been going on for quite a while, including the ability to detect what we call living-off-the-land attacks, where there isn’t a specific piece of malware you can detect.

Going forward, that was really what we wanted to get into. There’s a lot of speculation; if I could tell you exactly how it will change cybersecurity, I’d probably be out trying to get some seed funding. But the group we got together really tried to think about how we thought it could work and what additional capabilities it would add. One of the big ones you hear a lot about is writing better code, trying to get vulnerabilities out on the front end. The government has talked about secure by design; this is about making that a reality. Another is helping with your policies and planning. Think about having four vulnerabilities that are all listed as critical: which one should you put your resources toward patching first? You could have an intelligent assistant help you figure out things like that. There’s a whole range of things that hopefully we’ll see coming online in the next few years.
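To make the correlation idea concrete, here is a minimal sketch of the kind of grouping Greene describes, linking otherwise unrelated alerts that share an indicator of compromise. The alert records, field names and indicators are invented for illustration; a real deployment would pull this data from a SIEM.

```python
from collections import defaultdict

# Hypothetical alert records; in practice these would come from a SIEM.
alerts = [
    {"id": "a1", "host": "web-01",  "iocs": {"203.0.113.7"}},
    {"id": "a2", "host": "db-02",   "iocs": {"198.51.100.9"}},
    {"id": "a3", "host": "mail-01", "iocs": {"203.0.113.7", "evil.example"}},
]

def correlate(alerts):
    """Group alerts that share at least one indicator of compromise."""
    by_ioc = defaultdict(list)
    for alert in alerts:
        for ioc in alert["iocs"]:
            by_ioc[ioc].append(alert["id"])
    # Keep only indicators seen in more than one alert.
    return {ioc: ids for ioc, ids in by_ioc.items() if len(ids) > 1}

print(correlate(alerts))  # {'203.0.113.7': ['a1', 'a3']}
```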
Tom Temin And this paper, what were you trying to do here, and how did you go about it? It sounds like you had a really large crowd of people who know something about cyber converging on this.
Jeff Greene What we wanted to do was focus on the end users of these AI tools, as opposed to the developers, and try to tell organizations, government or otherwise, that are deploying tools today: here are some things you should think about as you’re using them. Our cyber group met over the summer and had a very open-ended conversation about your last question, what will AI do for cyber? And it was somewhat hard to get to concrete recommendations and thoughts because it was such an amorphous future. So what we did was take our group, which ended up being about 40 people, split them in two, and say: half of you, write what you think is a feasible future where cybersecurity is really helped by these AI tools; the other half, write a bad future where AI is really enabling the attackers. And then we took those two futures, got in the room both physically and virtually, and said, OK, we want to go toward the good and away from the bad. What are the things we both should do and should not do to help steer us in that direction? So that was our thinking, to try to put some bounds around what we were doing. We stopped short of the Skynet future. We said there’s no sentient computer out there, but this is realistically what we think is going to be coming online.
Tom Temin Yeah. In fact, the reality of most algorithms is that they get dumber over time because of the bad data they get fed.
Jeff Greene Us humans make them dumber over time, is one pejorative way to put it.
Tom Temin All right. Yeah. So these two scenarios then are what people felt were realistically going to happen. And in fact, they both will happen, because AI will enable the cyber defenders as much as it enables the attackers.
Jeff Greene Yeah. I mean, I think what you’re likely to see in the future is something that will land, as I’ve described it, adjacent to the middle of what we described. The middle because, as you said, it’s going to help everyone, but adjacent to it because I’m sure we got some things wrong and didn’t see other things. So hopefully it’ll be somewhere close to it. And one of our distinguished members, Herb Lin, wrote some additional thoughts in the paper, where he said good and bad is in the eye of the beholder. For the United States, it’s good that we have a Cyber Command that’s able to see into our adversaries’ activities. And for the United States, it’s bad if North Korea’s defenses become better. So there is very much a contextual element to this as well. And I encourage folks to take a look at what Herb wrote, because I think it added some great thoughts.
Tom Temin All right. We’re speaking with Jeff Greene. He’s senior director for cybersecurity programs at the Aspen Institute and former director of the National Cybersecurity Center of Excellence at NIST. And you have some recommendations on how people should handle AI. Just give us the highlights.
Jeff Greene For me, one of the biggest is: don’t forget everything you already know about cybersecurity. This is new, and there are new elements to it, but we have this tendency to find something new and try to say, let’s think about it differently. All the basics of cyber hygiene, of cybersecurity practices, still remain. Don’t forget them as you go. But with regard to these new tools as they come online, we really encourage organizations to proactively manage just how much agency they’re giving over to an AI tool. Underlying this is that it is, in fact, OK to hand off decisions, but you want to make a conscious choice as to when you’re doing that. And we tried to put out some factors that organizations can consider as they do it. How much quality control is required of this particular action? If quality is more important, maybe lean away from giving it over completely. What is the impact or risk? Is it irreversible? If it’s a truly irreversible decision that’s up to the company or the agency, you probably shouldn’t have a computer making it without any human input. So we’re not telling organizations exactly what to do there, but we’re telling them: you can have a tendency, when you’re sold something new, to just drop it in and say it’s great, and you need to give it some serious thought.
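One way to picture the factors Greene lists is as a simple rubric an organization might encode before letting a tool act on its own. The factor names, scores and threshold below are illustrative assumptions for this sketch, not anything prescribed in the paper.

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    quality_criticality: int  # 1 (low) to 5 (high): how much QC the action needs
    impact_risk: int          # 1 to 5: blast radius if the tool gets it wrong
    irreversible: bool        # can the action be undone?

def requires_human(action: Action) -> bool:
    """Illustrative policy: keep a human in the loop for irreversible
    or high-risk actions; the threshold is arbitrary for the sketch."""
    if action.irreversible:
        return True
    return action.quality_criticality + action.impact_risk >= 7

print(requires_human(Action("quarantine host", 3, 4, False)))     # True
print(requires_human(Action("tag phishing email", 2, 1, False)))  # False
```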
Tom Temin Sure. And one of the recommendations was log, log and log more.
Jeff Greene Great point. Logging is one of the basics of cybersecurity. We debated whether to include it because it’s kind of a no-duh recommendation. But in the context of these AI tools, it came up in a few different ways, and we ultimately not only included it but made it one of our more prominent ones. Logs allow you to detect intrusions and quantify them as they happen. But logs, which are essentially data, are also key to what the AI can do for you. If it is not getting very up-to-date data, it will be unable to detect things that are going on and to see patterns of potential intrusions you’ve never seen before. And a lot of AI-driven, AI-enabled intrusion activity is going to look like normal activity, so you need that level of data, the most you can get, in order to pick out those proverbial needles from an ever-growing haystack.
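As a toy illustration of why the volume and freshness of log data matter, here is a minimal baseline-deviation check over per-host event counts. The log shape, counts and z-score threshold are invented for the example; as Greene notes, AI-enabled intrusions may mimic normal traffic, so a real detector would need a far richer model and much more data.

```python
from statistics import mean, stdev

# Hypothetical daily login counts per host over a baseline window.
baseline = {"web-01": [12, 15, 11, 14, 13], "db-02": [3, 2, 4, 3, 2]}
today = {"web-01": 14, "db-02": 41}

def flag_anomalies(baseline, today, z_threshold=3.0):
    """Flag hosts whose activity today deviates sharply from their baseline."""
    flagged = []
    for host, history in baseline.items():
        mu, sigma = mean(history), stdev(history)
        if sigma and abs(today[host] - mu) / sigma > z_threshold:
            flagged.append(host)
    return flagged

print(flag_anomalies(baseline, today))  # ['db-02']
```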
Tom Temin Yeah, it’s almost like the monster in that space movie that looked like a German shepherd, and then when it came in, all of a sudden it morphed into this horrible, man-eating, woman-eating creature. All right. You had some specific recommendations for the government also.
Jeff Greene The first thing: our view is that the government needs to be willing and comfortable leaning in if it sees an AI tool that poses a particular risk, because it enables malicious activity or could be used to generate bad things, whether physical or cyber, and to consider acquiring that tool so it can then license it out for good uses or put controls around it. Hopefully that gives some cover to government folks who really see the need sometimes to step in. We shouldn’t just let everything move forward without any government intervention. The second thing is making sure that the proverbial ecosystem, the entire ecosystem, benefits from these tools. If good AI-enabled cyber tools are only available to the wealthiest and biggest organizations, we will ultimately all suffer, because unfortunately the criminals will know where to go, and the criminal activity will flow to the rest of us. So focus on whether it is pushing tools out through open source, incentives or other ways to make sure they’re widely available. And the final thing we talked about is making sure that computer science, data science and technical training are integrated at some level. Coders need to learn cyber, they need to learn AI, and vice versa. We don’t want to have just a few unicorns who know how cyber, AI and engineering interact. You don’t necessarily need an engineer or a true computer developer to know the depths of it, but they need to be able to issue-spot, to know when to say, OK, I want to bring in the expert here to make sure we’re not introducing new risks that I just haven’t thought of.
Tom Temin And what happens with the paper now?
Jeff Greene So we’ve done a few events on it. We’ve gotten a good amount of interest, actually, from governments around the globe, and we are hoping to continue to help educate on, again, these simple things. For me, one of the most interesting pickups, anecdotally, is people using it to drive their organizations both to think before they implement (one of our recommendations is don’t fall for the hype) and to make sure they have all those existing security practices in place. There never will be a silver bullet for cybersecurity. The best we can do is put together a bunch of 10% solutions; stack five or six of them and you’re 50% of the way there.