You could think of integrated circuits (chips) as the smallest building blocks in the nation’s critical infrastructure. Recently, the National Security Agency (NSA) issued detailed guidance on keeping what it called “adversarial influence” out of microelectronics used in Defense Department systems. For more, the Federal Drive with Tom Temin spoke with Neal Ziring, technical director of NSA’s Cybersecurity Directorate.
Tom Temin And in this new guidance, we’re basically talking about something called [field-programmable gate arrays (FPGAs)], and that’s a type of semiconductor that can perform different functions depending on the software you basically dump into it. Just describe this technology for us so we can get a sense of what it is you’re aiming at here.
Neal Ziring Sure, Tom. FPGAs are very flexible hardware components. They can be used for all sorts of functions in all sorts of systems. They can be used to accelerate networking, communications, cybersecurity. And the main thing that makes an FPGA important for defense systems and other systems is that they bring you the speed of hardware, because they are essentially reconfigurable hardware, but with flexibility that’s closer to software. As you noted, you can put a new firmware load into an FPGA and have it perform a completely different function, but at hardware speeds.
Tom Temin And in the [Department of Defense (DoD)] context, are these mainly used in embedded systems? Where they might want to change the characteristics of it or, say, programming a missile to do this instead of that, and then you hit the gate arrays, or where else?
Neal Ziring Well, they’re certainly used in weapons systems of all kinds, and they’re also used in communications systems. For example, you might have a military radio, a software-defined radio, and you want to update it to support a new waveform or a new type of signal. You can do that with an FPGA.
Tom Temin And these are programmed by sending a signal to them that downloads software to them. Years ago, they used to be erasable with ultraviolet light. There was like a window on top of the chips. Is that still the case?
Neal Ziring No. I remember those types of UV-erasable PROMs, but that’s not common anymore. With most FPGAs today, they would be part of a larger assembly. They would be a microelectronics component on a board, and there would be a memory on the board that at system startup would load the firmware into the FPGA. And that process is very fast.
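That startup load path can be sketched roughly in Python. This is a hypothetical illustration, not any FPGA vendor’s real API: a loader reads the firmware image from the board’s configuration memory and, as one common mitigation against a tampered image, checks it against a known-good digest before handing it to the device.

```python
import hashlib

def load_bitstream(path: str, expected_digest: str) -> bytes:
    """Read an FPGA firmware image from board configuration memory and verify it.

    Hypothetical sketch of the power-on sequence described above; the
    function name and digest-check mitigation are illustrative assumptions,
    not part of the NSA guidance or a vendor toolchain.
    """
    with open(path, "rb") as f:
        bitstream = f.read()
    actual = hashlib.sha256(bitstream).hexdigest()
    if actual != expected_digest:
        raise RuntimeError("bitstream digest mismatch: possible tampering")
    # In a real system, this is where the image would be written into the FPGA.
    return bitstream
```

The point of the check is exactly the threat discussed next: if the image in configuration memory has been altered anywhere along the supply chain, the load should fail rather than silently configure the device.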
Tom Temin And what is the risk, therefore, that the wrong software could somehow find its way into the gate array, and therefore, it would not do what the operator intended it to do?
Neal Ziring That’s correct. That’s a primary risk. And the documents that we’ve published over the last couple of months go into more detail on the threats. But a primary threat is that since these devices are so flexible, an adversary might intervene somewhere in the lifecycle, from design through assembly and so forth, and either degrade the functionality of the device or introduce incorrect functionality. And those are exactly the kinds of problems we’re trying to help folks avoid in this published guidance.
Tom Temin So besides the act of programming on site, or at the moment of use, there’s the danger that they could come with original sin, so to speak, from firmware built in at manufacture that could change one bit every two years or something, I’m making this up. But you wouldn’t be able to tell that until something went wrong.
Neal Ziring Yeah, that’s a good point. I mean, an FPGA has to be programmed. Somebody has to create that program, that hardware design to load into it. And that comes from a set of tooling and so forth back through the lifecycle, and problems could be introduced at any point in that lifecycle. And some of them can be very difficult to find in testing. So that’s why it’s important for program managers and integrators and so forth to pay attention to these threats. And that’s what we’re trying to help them do, and then mitigate those various threats at all the points along the lifecycle of this device.
Tom Temin We’re speaking with Neal Ziring. He’s technical director of the Cybersecurity Directorate at the National Security Agency. And what, in general, is the advice you’re giving? What can people do along the supply chain until the field-programmable gate array is in use? What are some of the steps that operators ought to take here?
Neal Ziring Oh, there’s lots. The first thing they have to do is understand the system that they’re trying to protect, and understand how it’s built and where it is using FPGAs as components. And then understand their criticality to the overall system function that is being delivered, whether it’s a weapon system or a radar or a communication system. And then the documents that our experts have written lay out three assurance levels based on the impact that any kind of degradation or compromise would have on the overall system function. And then you walk back through the lifecycle and say, I have to protect my initial designs, I have to evaluate the intellectual property I incorporate into my design, I have to protect it on its way from the designer to the manufacturer. And those steps are all laid out in detail for the different assurance levels in the documents.
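The impact-driven triage described here can be sketched as a simple mapping. The impact categories, level numbers, and thresholds below are illustrative assumptions only; the actual assurance-level definitions are in the published NSA/JFAC documents.

```python
# Hypothetical sketch: three assurance levels keyed to the impact that a
# degraded or compromised FPGA would have on the overall system function.
# Category names and numbering are illustrative, not NSA's definitions.
IMPACT_TO_ASSURANCE = {
    "low": 1,       # degradation is a nuisance; baseline protections
    "moderate": 2,  # degradation impairs the mission; added lifecycle checks
    "high": 3,      # degradation defeats the mission; strictest controls
}

def assurance_level(impact: str) -> int:
    """Map the assessed impact of FPGA compromise to an assurance level."""
    if impact not in IMPACT_TO_ASSURANCE:
        raise ValueError(f"unknown impact category: {impact!r}")
    return IMPACT_TO_ASSURANCE[impact]
```

The assigned level then drives how much protection each lifecycle step gets: design, IP evaluation, and transfer from designer to manufacturer.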
Tom Temin And let me ask you, kind of, well, two in-the-weeds questions. One has to do with risk management, because in a given system, say an airplane, take it up to that level, or a ship or something, there are gate arrays used in a variety of subsystems, some much more critical than others. And since these types of platforms operate on a bus, is there a danger that a gate array in a low-risk system could somehow work its evil onto the bus, and thereby affect a higher-level or higher-risk subsystem gate array?
Neal Ziring Oh, that’s a really good observation, Tom. And that’s an important part of understanding the risk to the entire system. So you’re quite right, a component that is on the bus of, say, an airplane, or on the network on a ship, could be compromised and then allow an attacker to move laterally to a more important, more critical component. And that’s part of understanding how a given component, let’s say it’s a particular board with an FPGA on it, fits in the overall system architecture, and therefore what assurance investments the manufacturer, the integrator, the designer should make when they’re building it and designing it. So it is not a simple chip-by-chip exercise. It’s really something that requires a holistic risk picture of the system in which the component is embedded.
Tom Temin And my second in-the-weeds question is with respect to the presidential executive order for agencies to obtain software bills of materials (SBOMs) when they’re buying software. Do gate arrays come with SBOMs since they’re software-controlled? Or should they?
Neal Ziring Wow, that’s a great question. So software bills of materials are a great thing and are really going to help software assurance. There is an equivalent for something like a field-programmable gate array, and that would be an inventory or bill of materials of the intellectual property blocks, as they’re called, that are incorporated into that design. For example, if someone’s designing an FPGA to run on a bus, they probably won’t design their own bus controller circuitry. They’ll obtain that from a manufacturer or an intellectual property provider and they’ll just plunk it down into their design. Well, they have to think about, where did that come from? How was it tested? How is it assured? And one of the documents in this series is a guide on how to perform that evaluation. And then that list of intellectual property blocks you incorporated would go into the bill of materials for that entire system.
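The hardware equivalent of an SBOM described here can be sketched as a simple inventory of third-party IP blocks with provenance fields an evaluator would review. The field names and class names below are illustrative assumptions, not a standard schema from the guidance.

```python
from dataclasses import dataclass, field

@dataclass
class IPBlock:
    """One third-party intellectual property block used in an FPGA design.

    Hypothetical record; e.g. a bus-controller core obtained from a vendor
    rather than designed in-house, as in the example above.
    """
    name: str
    supplier: str           # who provided the block
    version: str
    evaluated: bool = False # has provenance/testing review been completed?

@dataclass
class HardwareBOM:
    """An inventory of IP blocks incorporated into one design."""
    design: str
    blocks: list = field(default_factory=list)

    def add(self, block: IPBlock) -> None:
        self.blocks.append(block)

    def unevaluated(self) -> list:
        """Blocks still awaiting the where-did-it-come-from evaluation step."""
        return [b for b in self.blocks if not b.evaluated]
```

Rolling such a list up into the system-level bill of materials is what makes the "where did that come from, how was it tested" questions answerable after the fact.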
Tom Temin Sure. And by the way, is there any evidence that this potential problem has actually been a problem with field-programmable gate arrays?
Neal Ziring So I can’t speak to particular compromises, but I can talk about a general problem that’s affected microelectronics, including programmable parts like FPGAs, and that’s counterfeits. That’s been a problem across the industry: counterfeit parts that don’t have the full functionality or the full reliability of the real part. And that is one of the threats that we think about in these documents, and how programs and integrators can avoid counterfeits as a problem.
Tom Temin Sure. Or stolen parts that end up on the gray market and then therefore back into the supply chain.
Neal Ziring Yeah, those too.
Tom Temin All right. And this is NSA that has issued guidance for the Defense Department. Is there any tie-in with NIST guidance, maybe for non-DoD agencies that also have systems with FPGAs?
Neal Ziring Not directly at this time. And I should also say that this is guidance that is coming out of NSA, but it was really written in collaboration with other members of the Joint Federated Assurance Center (JFAC) across the DoD, and its centers at the Air Force and the Navy and so forth. So we collaborate with them. There is no NIST guidance down at the sort of deep technical level of FPGAs at this time. But we have a close partnership with NIST. And the risk management guidance that NIST has been publishing for a long time would be relevant in all of the risk analysis for systems of this kind.
Tom Temin And this now is in the hands of all the people that you feel should be looking at it?
Neal Ziring Yes. Well, we’re not quite done publishing the entire series, but yeah, we have put the documents out there. They’re out on NSA.gov, as well as Jfac.navy.mil. And our experts who work in this area are connecting with other parts of the DoD, via the JFAC, to make sure that the programs that are working on highly critical hardware know that this guidance exists and can gain assistance in how to apply it in their critical programs.