DARPA tries a simple but profound concept to improve cybersecurity

"This idea of compartmentalization has a realization by breaking systems up into small pieces," said Howard Strobe.

The Defense Advanced Research Projects Agency is pursuing a simple but tricky-to-execute approach to cybersecurity. It would essentially break software into small, isolated pieces that hackers cannot move between. The program manager in DARPA’s Information Innovation Office, Howard Shrobe, joined the Federal Drive with Tom Temin with details.

Interview transcript:

Tom Temin So what are you trying here in cybersecurity that hasn’t been thought of already?

Howard Shrobe Well, actually, this is a very old idea. An analogy would help: think about the way we build ships. The goal, of course, is for them not to fill up with water. And so we try to build them with strong hulls that are hard to penetrate. But we don’t stop there. We also build them with compartments that can isolate any flooding. The analogy to software systems, or to computer systems more generally, is that the attackers may get in, but we don’t want them to be able to advance from one place to the next. And so this idea of compartmentalization has a realization by breaking systems up into small pieces, each of which executes only with the privilege it really needs to do its job. And that principle goes back a long, long time in computer science. But it’s always been impractical to enforce, because the overhead is too high. So the approach we’re taking is to use novel computer architectures, novel extensions to current conventional architectures, to make the enforcement easy.
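To make that least-privilege principle concrete, here is a minimal sketch in Python. The component and privilege names are purely hypothetical, and this is an illustration of the general idea rather than DARPA's actual design: each compartment carries only the privileges it needs, and anything else is refused at the boundary.

```python
"""Toy illustration (not DARPA's design): each compartment carries only the
privileges it needs, and cross-boundary actions are checked against that set."""

class Compartment:
    def __init__(self, name, privileges):
        self.name = name
        self.privileges = frozenset(privileges)  # least privilege: nothing extra

    def check(self, privilege):
        # Refuse any privilege this compartment was never granted.
        if privilege not in self.privileges:
            raise PermissionError(f"{self.name} lacks privilege {privilege!r}")

# The parser can only read its input buffer; it cannot write the log file,
# so an attacker who compromises it cannot advance to that resource.
parser = Compartment("parser", {"read:input_buffer"})
logger = Compartment("logger", {"write:log_file"})

parser.check("read:input_buffer")   # allowed
try:
    parser.check("write:log_file")  # blocked at the compartment boundary
except PermissionError as e:
    print(e)
```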

Tom Temin And when you say overhead, you mean the programming required to encapsulate each piece?

Howard Shrobe It’s actually two senses here. One is the actual execution overhead, and the other is how hard it is for people to program that way. And that gives us the two main tasks the program is addressing. The first is what architectures can enforce the compartmentalization and privilege-management scheme at low cost. And the second is how can we automatically determine what it is they ought to be enforcing: how to break the system up into pieces, and what privileges to extend to those pieces. So the second of those tasks is a software analysis task. And there are two ways of doing that; we’re doing both. The first, which may be the easiest one to understand, is to take a large test suite that covers most of what you think the computer would be doing, run it, trace everything it does, and then analyze what privileges it actually needed based on what it actually did. The trouble with that is it’s no better than your test suite. The other approach is to just analyze the text of the software. This is a technique called static analysis, and it has a long history in formal methods. It can build a model of what parts of the system can reach other parts, but often, for technical reasons, that model is too permissive. So by combining the two, we’re trying to get to the point where we have a very good envelope around what the system should be doing.
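A toy illustration of how those two analyses might be combined follows. All component and privilege names here are hypothetical: the dynamic traces give a lower bound on the privileges a component demonstrably needs, the static model gives an upper bound on what it could reach, and the gap between them is what an analyst would review or deny.

```python
"""Toy sketch of combining the two analyses (all names are hypothetical).
Dynamic traces under-approximate (no better than the test suite); the static
model over-approximates (too permissive); the real envelope lies between."""

# Privileges actually observed while running the test suite.
dynamic_privileges = {
    "parser": {"read:input_buffer"},
    "logger": {"write:log_file"},
}

# Privileges the static reachability model says are possible.
static_privileges = {
    "parser": {"read:input_buffer", "write:log_file"},
    "logger": {"write:log_file", "read:config"},
}

for component in dynamic_privileges:
    lower = dynamic_privileges[component]
    upper = static_privileges[component]
    # Privileges exercised by tests must be granted; privileges only in the
    # static model are candidates to deny or to review by hand.
    print(component,
          "must have:", sorted(lower),
          "review:", sorted(upper - lower))
```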

Tom Temin It sounds like you’re combining elements of zero trust and artificial intelligence; that sounds highly contemporary.

Howard Shrobe There are elements of that. In fact, once you have a model, say from the dynamic analysis by running it, it would be impractical to enforce this privilege-management scheme at an instruction-by-instruction level. So we start grouping together instructions and data objects that behave similarly. The bigger the groups, or compartments (that’s where the name came from), the less overhead you have, because there’s less transition between compartments. But you’ve also extended more privilege than you might have wanted in a completely pure world. And the techniques for doing that grouping are actually old AI techniques. The analysis part is based on what computer scientists call formal methods, so it’s in effect proving what the software can do. And as I said, you need both of these, because one tends to overestimate and the other tends to underestimate.
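As a rough sketch of that grouping step, again with hypothetical names rather than the program's actual tooling, objects with the same inferred privilege needs can be merged into one compartment, trading slightly broader privilege per compartment for fewer costly boundary crossings.

```python
"""Toy grouping pass (hypothetical names): objects with identical privilege
needs are merged into one compartment to reduce cross-compartment transitions."""

from collections import defaultdict

# Privilege sets inferred per code region or data object by the analyses above.
observed = {
    "parse_header": frozenset({"read:input"}),
    "parse_body":   frozenset({"read:input"}),
    "write_audit":  frozenset({"write:log"}),
    "rotate_log":   frozenset({"write:log"}),
}

# Group objects whose privilege sets match exactly; a real tool would also merge
# "similar enough" sets, accepting slightly broader privilege to cut overhead.
compartments = defaultdict(list)
for obj, privs in observed.items():
    compartments[privs].append(obj)

for privs, members in compartments.items():
    print("compartment", sorted(members), "granted", sorted(privs))
```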

Tom Temin It sounds like a lot of this would come back to the activities of coders themselves. They would have to incorporate this in their work.

Howard Shrobe Well, the idea is to not make them do that. Of course, the more they understand this way of programming, the easier it would become to do the analysis. But we’re also concerned with the billions of lines of existing code. If you take current operating systems, those are millions of lines of code, tens of millions of lines of code, and they’re critical. So we have to deal with the existing legacy systems as well as new systems.

Tom Temin We’re speaking with Dr. Howard Shrobe. He’s a program manager in DARPA’s Information Innovation Office, known as I2O. And what are the programmatic aspects of this? What are you seeking from industry? How are you trying to operationalize this notion?

Howard Shrobe Right. So we put out a solicitation, called a Broad Agency Announcement (BAA), a while ago and selected seven groups to be performers in this program. It’s a mix of academia and companies. And it’s in two technical parts, as I outlined before: the analysis part and the enforcement part. There’s one team that’s doing both, and then there are two other teams for each of those technical areas.

Tom Temin And once they figure this out, how does it become something that the industry could glom onto if it chose?

Howard Shrobe Yes. So everything we’re doing is open source and unrestricted. Our goal is to demonstrate that this can work well first on operating systems and then later on big application systems. And we will be trying to work collaboratively with the organizations that manage large open source systems, for example, the Linux Foundation. For another program dealing with compilers, we’re going to try to work with the LLVM Foundation, which manages one of the big compiler systems. So our idea is to put the software out there and then work with the people that actually make these large critical software systems. We can’t force anybody to adopt anything, but we think these are good ideas, and if we can demonstrate that they’re valuable, then we think that over time they will get adopted.

Tom Temin And how do you see this working in the very dynamic world that is software today? People have scrums and two-week cycles, and there are new modules being introduced all the time. Plus, modules are being mixed and matched to create new applications, or separate applications are integrated to create a better customer experience. All of this means software is interacting with new software all the time.

Howard Shrobe Yeah, it’s a good point. And as you said, part of the goal here is to try to establish a framework that programmers working on new stuff can follow, one that would make compartmentalization easier. The other part is to be able to use our tools to compartmentalize that software automatically. The more programmers have this model in mind, the easier that is to do. But we want to be able to do this to anything that’s really critical.

Tom Temin And this is a fairly arcane but actually quite crucial pursuit that you’re on here. What is your background that you are this deeply interested in cybersecurity?

Howard Shrobe Yeah. So I’m a principal research scientist at MIT’s Computer Science and AI Lab, where I’ve been since 1978 as a staff member, and before that as a graduate student. I’ve worked here at DARPA three times now; this is my third tour. There’s a program that allows people who work for not-for-profits to rotate into the government for limited periods of time. It was during my second time at DARPA that I first got interested in this area. Actually, my office director asked me to take on a project in this area, and I tried to beg off on the grounds that this was one of the few areas of computer science that always bored me. And he said, Well, that’s exactly why you’re the right guy, because we need new eyes on this. So at that point, I became very interested in the problem and pretty much adopted two general areas. One is the one this program is in, which is the mixing of hardware and software architectures to make systems that are inherently secure. And the other is more in the area of making systems resilient (this program actually has aspects of that as well), so that even if the bad guys get in, they can’t achieve their goals. And that has a very AI flavor to it also. And my background actually is in both of those areas, which is systems and artificial intelligence.

Tom Temin And you go back far enough that, you know, a megabit of memory was a thousand bucks, you know, in 1978.

Howard Shrobe Yes.

Tom Temin And also, you’ve seen operating systems grow from smallish things to where they do everything.

Howard Shrobe That’s right. The transition’s been amazing when I look back on it. As you say, at one point I was working at a startup company that had some of these aspects to it. And at the time, you needed to buy something like a 300-megabyte disk and a megabyte of memory, and that was just considered totally unaffordable. Now you can’t get things that small.

Tom Temin Well, the small memories they had at least enforced some programming discipline that doesn’t exist at all now, does it?

Howard Shrobe That was true. I mean, the system I worked on enforced memory safety, which is another crucial aspect. Your audience may not know what the term means; it doesn’t matter. It’s one of the most important principles you want to enforce, because if you don’t have that, bad guys can do anything once they’re in.

