Day 2 Keynote of Federal News Network's DoD Cloud Exchange: The Air Force's chief software officer, Nicolas Chaillan, talks with Jared Serbu.
The Department of Defense has been on a long, winding road toward the adoption of commercial cloud computing over the past decade. But there is no better example of the ways in which that journey has accelerated than the full sprint the Air Force has been on for the past year-and-a-half.
Not only does the Air Force have fully established contract vehicles for cloud consumption up and running, it’s paired its “Cloud One” environment with a tailorable DevSecOps software development pipeline that’s now been declared a DoD enterprise service. That offering, “Platform One,” also earned program-of-record status in the Air Force’s portion of the upcoming DoD budget request.
By the end of 2020, Cloud One was the home for 60 different systems, hosting nearly 4,000 terabytes of data. Since its launch, it’s been a multi-cloud offering — starting with Amazon Web Services and Microsoft’s Azure. But a key tenet has also been to make that fact completely invisible to its end users.
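Cloud One’s internals aren’t public, but the basic pattern for keeping the provider invisible to end users is a thin abstraction layer that applications talk to instead of AWS or Azure directly. A minimal Python sketch of that idea, with entirely hypothetical class names and in-memory stand-ins for the real SDK calls:

```python
from abc import ABC, abstractmethod


class ObjectStore(ABC):
    """Provider-agnostic interface: mission apps never see AWS or Azure details."""

    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...


class AwsStore(ObjectStore):
    """Would wrap S3 calls (e.g., via boto3); a dict stands in for the real SDK."""

    def __init__(self) -> None:
        self._blobs: dict[str, bytes] = {}

    def put(self, key: str, data: bytes) -> None:
        self._blobs[key] = data

    def get(self, key: str) -> bytes:
        return self._blobs[key]


class AzureStore(ObjectStore):
    """Would wrap Azure Blob Storage calls; same interface, different backend."""

    def __init__(self) -> None:
        self._blobs: dict[str, bytes] = {}

    def put(self, key: str, data: bytes) -> None:
        self._blobs[key] = data

    def get(self, key: str) -> bytes:
        return self._blobs[key]


def make_store(provider: str) -> ObjectStore:
    """The platform picks the backend; callers just put and get objects."""
    return AwsStore() if provider == "aws" else AzureStore()
```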
“We’re destroying silos and trying to move faster,” Nicolas Chaillan, the Air Force’s chief software officer, said during a keynote discussion for Federal News Network’s DoD Cloud Exchange. “At the end of the day, what really matters is there are a few things that are foundational — like a single sign-on experience for the warfighter. We can authenticate once and not have to type PINs and passwords 20 million times a day. Those kinds of basic things are what defines an enterprise, versus a bunch of disparate systems. They make our lives successful and efficient to win a war.”
Along the way, the Air Force has come up with several procedural innovations that appear to help get DoD over many of the biggest hurdles that have kept it from embracing modern cloud computing.
One example is the concept of “Cloud Access Points.” For years, those nodes were seen by Defense IT officials as the only way to maintain watch over data moving between DoD networks and commercial cloud environments. In reality, they turned into traffic bottlenecks that mostly served to inhibit cloud access.
So the Air Force reimagined the concept with a model called Cloud Native Access Points (CNAPs), moving many of the cybersecurity functions that formerly stood between Defense networks and the cloud into the cloud itself. The logic is that if a healthy chunk of the department’s data and compute power is going to reside in the cloud, its security services should live there too, and be just as scalable.
“CNAP is disrupting that bottleneck concept of perimeter, but it’s also moving us to zero trust,” Chaillan said. “It’s going to assess the device state of the user, whether it’s a mobile or desktop or a laptop, and allow the user, based on his or her identity, to connect to DoD systems. We can whitelist access to resources in the cloud or on-premise, effectively replacing that idea of ingress and egress to the cloud. And that’s game changing.”
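The interview doesn’t detail how CNAP is built, but the decision logic Chaillan describes (check device posture, check identity, then allow access only to allowlisted resources) can be illustrated with a short, entirely hypothetical Python sketch:

```python
from dataclasses import dataclass


@dataclass
class Device:
    patched: bool          # OS and security agent up to date
    disk_encrypted: bool   # required posture check
    managed: bool          # enrolled in device management


@dataclass
class User:
    identity: str
    roles: set[str]


# Hypothetical allowlist: which roles may reach which resources,
# whether those resources live in the cloud or on-premises.
ALLOWLIST = {
    "cloudone/logistics-app": {"logistics"},
    "onprem/maintenance-db": {"maintenance", "logistics"},
}


def access_decision(user: User, device: Device, resource: str) -> bool:
    """Per-request decision based on device state and identity, not network location."""
    healthy = device.patched and device.disk_encrypted and device.managed
    allowed_roles = ALLOWLIST.get(resource, set())
    return healthy and bool(user.roles & allowed_roles)


# A managed, patched laptop with the right role gets through; anything else doesn't.
print(access_decision(
    User("jane.doe", {"logistics"}),
    Device(patched=True, disk_encrypted=True, managed=True),
    "cloudone/logistics-app",
))  # True
```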
But the Air Force’s conception of what cloud means for the enterprise has changed a lot since the “cloud first” mantra of a decade ago. It’s no longer a matter of moving to the cloud for its own sake; the question now is whether the cloud actually adds value for a particular workload.
In the case of software development, it turns out it does. Platform One, which is hosted on Cloud One, lets separate programs shave a year or two off their software development time, since they no longer need to stand up their own development platforms for whatever system they’re building.
So far, program offices working on software for the F-16, the F-35 and the Air Force’s ground-based nuclear weapons systems have moved their development work to Platform One. DoD’s Joint Artificial Intelligence Center has done the same, saying it can’t delay its work until the single-vendor Joint Enterprise Defense Infrastructure (JEDI) contract finally makes its way through all of its legal struggles.
One reason that’s all possible is the Air Force’s new continuous ATO process.
In the bad old days, a certifying official would need to go through a lengthy examination process for every new software system before it had an authority to operate (ATO) on DoD networks. Now, Platform One — the underlying development environment — is where most of the security attention is paid. If a piece of code makes it through Platform One’s built-in “gates,” those authorizing officials can feel very confident about the end product.
Chaillan said that’s why the software factories that use Platform One churn out new updates, on average, 21 times a day.
“We define the gates at the factory. They’re thresholds where each authorizing official can decide to set the bar,” he said. “For example, you could say for nuclear systems that you need to have 100% of your code covered by tests. A business system might say 60% or 70% is good enough. And those gates will define if you pass the pipeline. Then, each piece of software that you’re pushing through that pipeline gets automatically certified for staging, and then production.”
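Platform One’s real gate definitions aren’t spelled out in the interview, but the thresholding Chaillan describes can be sketched roughly as follows; the system categories, metrics and numbers are hypothetical:

```python
# Hypothetical gate thresholds, set by each authorizing official.
GATES = {
    "nuclear":  {"min_test_coverage": 1.00, "max_critical_findings": 0},
    "business": {"min_test_coverage": 0.60, "max_critical_findings": 0},
}


def passes_gates(system_type: str, metrics: dict) -> bool:
    """Compare pipeline-reported metrics against the official's thresholds."""
    gate = GATES[system_type]
    return (
        metrics["test_coverage"] >= gate["min_test_coverage"]
        and metrics["critical_findings"] <= gate["max_critical_findings"]
    )


# A build with 72% coverage and no critical findings clears the business-system
# gate but would be blocked by the nuclear-system gate.
build = {"test_coverage": 0.72, "critical_findings": 0}
print(passes_gates("business", build))  # True  -> certified for staging, then production
print(passes_gates("nuclear", build))   # False -> stopped in the pipeline
```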
In many instances, the Air Force will rely heavily on the built-in security features commercial cloud providers bring to the table. Wherever that’s possible, it makes eminent sense to “inherit” those features into decisions about whether to authorize a system on DoD networks.
But Chaillan said the Air Force wants at least some of its systems to be not just cloud agnostic, but not dependent on cloud connectivity at all.
In those instances, developers in the Air Force’s software factories are trying to do development work with the same agile DevSecOps mindset they’d use for cloud-native applications, but apply it to workloads that cannot practically connect to Amazon or Azure.
That sort of thing comes up when you’re flying a U-2 and need to push a software update mid-flight to Kubernetes containers. Some portion of the previous sentence would have been gibberish to the original designers of that particular reconnaissance aircraft, which first flew during the Eisenhower Administration.
But the Air Force managed to do it, in keeping with its goal of systems that are enabled by DevSecOps methodologies without being entirely reliant on network connectivity.
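The interview doesn’t say how that in-flight update was actually delivered. As a rough illustration only, here is what pushing a new container image to a Kubernetes deployment can look like with the official Kubernetes Python client; the deployment, namespace and image names are made up:

```python
# Requires the `kubernetes` package and access to a cluster.
from kubernetes import client, config


def push_update(deployment: str, namespace: str, container: str, image: str) -> None:
    """Patch the deployment's container image; Kubernetes rolls the pods over."""
    config.load_kube_config()  # or load_incluster_config() when running on the node
    apps = client.AppsV1Api()
    patch = {
        "spec": {
            "template": {
                "spec": {"containers": [{"name": container, "image": image}]}
            }
        }
    }
    apps.patch_namespaced_deployment(name=deployment, namespace=namespace, body=patch)


# Hypothetical example: swap in a new sensor-processing image.
push_update("sensor-fusion", "mission", "fusion", "registry.local/fusion:1.4.2")
```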
“We’ve cut DevSecOps into multiple layers. The first one is an infrastructure layer, which could be a cloud provider, but it could also be on-premise, or it could be a jet or an embedded system,” Chaillan said. “If you have the application all the way at the top of that stack, which inherits effectively 90% of the NIST cybersecurity controls, you’re drastically reducing the work per team. But we also wanted to make sure that the controls were met, regardless of whether you’re running on the cloud or without a cloud. We also wanted and needed to make sure that we could be compliant and secure without the cloud layer. We had to abstract a lot of that as well, so a lot of the cybersecurity controls are effectively agnostic to the environment.”
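One rough way to picture that layering, not drawn from Platform One’s actual design: the same control checks run against whatever infrastructure layer sits underneath, whether that’s a cloud region, an on-premises cluster or an embedded system. All of the names below are hypothetical.

```python
from typing import Protocol


class InfrastructureLayer(Protocol):
    """A cloud region, an on-premises cluster, or an embedded system on a jet."""
    name: str

    def encrypts_at_rest(self) -> bool: ...
    def logs_centrally(self) -> bool: ...


def evaluate_controls(layer: InfrastructureLayer) -> dict[str, bool]:
    """The same checks run regardless of which environment sits underneath."""
    return {
        "data-at-rest encryption": layer.encrypts_at_rest(),
        "centralized audit logging": layer.logs_centrally(),
    }


class CloudRegion:
    name = "cloudone-east"
    def encrypts_at_rest(self) -> bool: return True
    def logs_centrally(self) -> bool: return True


class EmbeddedNode:
    name = "airborne-kit"
    def encrypts_at_rest(self) -> bool: return True
    def logs_centrally(self) -> bool: return False  # logs ship after landing


for env in (CloudRegion(), EmbeddedNode()):
    print(env.name, evaluate_controls(env))
```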
Jared Serbu is deputy editor of Federal News Network and reports on the Defense Department’s contracting, legislative, workforce and IT issues.