FEMA’s cloud journey hitting uphill portion of marathon

Charlie Armstrong, the chief information officer at FEMA, said two recent successful migrations of applications to the cloud demonstrate progress.

The Federal Emergency Management Agency has about 60% of all workloads in the cloud.

Now the hard work really begins.

Charlie Armstrong, the chief information officer at FEMA, said he is pushing the agency to shut down more data centers and expand the number of applications in the cloud.

“The engineering and technical pieces get harder and that may slow down our velocity a little bit. But the goal is to get everything migrated out of that data center so that we can start to decommission it and shut it down. That’s the primary work stream that we have going on,” Armstrong said on Ask the CIO. “In addition to that, in the September or October timeframe, we kicked off a small re-hosting effort, that’s actually pulling the covers back on some of these systems that are not planned to be modernized for a while, and maybe doing some database re-platforming, making them ready to be what we call cloud natives so that we can really leverage the value of cloud and be able to scale up and scale down as we need to.”
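
The "scale up and scale down" benefit Armstrong describes is the standard elasticity argument for cloud-native workloads. Purely as an illustration, and assuming an AWS-hosted containerized service (the cluster, service, and policy names below are hypothetical; FEMA's actual providers and platforms are not specified here), a minimal sketch of that behavior is a target-tracking autoscaling policy that adds capacity under load and sheds it when demand falls:

```python
import boto3

# Hypothetical resource names for illustration only.
client = boto3.client("application-autoscaling", region_name="us-east-1")

# Register the ECS service as a scalable target with floor/ceiling capacity.
client.register_scalable_target(
    ServiceNamespace="ecs",
    ResourceId="service/demo-cluster/demo-service",
    ScalableDimension="ecs:service:DesiredCount",
    MinCapacity=2,
    MaxCapacity=50,
)

# Target-tracking policy: add or remove tasks to hold average CPU near 60%.
client.put_scaling_policy(
    PolicyName="cpu-target-tracking",
    ServiceNamespace="ecs",
    ResourceId="service/demo-cluster/demo-service",
    ScalableDimension="ecs:service:DesiredCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 60.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
        },
        "ScaleInCooldown": 300,  # wait before scaling back down
        "ScaleOutCooldown": 60,  # scale out quickly under load
    },
)
```

A lift-and-shifted application running on fixed virtual machines cannot do this; re-platforming to managed, cloud-native services is what makes a policy like the above possible, which is the point of the re-hosting effort Armstrong describes.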

Over the long term, Armstrong, who joined the agency in February 2023, said he’d like to get FEMA out of the data center business as much as possible.

In the meantime, Armstrong said the migration of applications and workloads to the cloud will pick up steam in the coming year.

“We got off to what I would call a slow start, and mainly that was the complexity around the networking between our existing data center and our connections into the cloud service providers that we’re using. It took us some time to make sure that we had the latency issues worked out so that we had the performance required to keep those as viable applications,” he said. “We started with development and test because we didn’t want to put the mission-critical operational applications in a risky situation. We started to build some velocity last spring, and as we continued to work through some of the technology challenges, about 25% of workloads, which we measure by virtual machines, hit that milestone by the end of the fiscal year in September. The velocity has continued to improve since then, and we’ve gotten up to 60%.”
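
For concreteness, the milestone Armstrong cites is a simple ratio: cloud-hosted virtual machines as a share of the total VM inventory. A minimal sketch with made-up counts (FEMA's actual inventory figures are not public in this piece):

```python
# Illustrative numbers only; the metric is VMs migrated / total VMs.
vms_total = 1200      # hypothetical total VM inventory
vms_in_cloud = 720    # hypothetical VMs already migrated

pct_migrated = 100 * vms_in_cloud / vms_total
print(f"{pct_migrated:.0f}% of workloads migrated")  # -> 60% of workloads migrated
```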

FEMA to re-platform systems

To reach that other 40% of applications, Armstrong said his team is taking a few different approaches. One is to re-platform workloads that aren’t ready to move to the cloud.

“Actually, our first application that we’re planning on re-platforming is our training system, which is low risk, but an important system because training is a key part of our mission, and making sure that people are trained up, not just to do their day-to-day job, but to be able to respond to disasters and incident management, is really important to the agency,” he said. “It’s taken us a little bit of time to do some analysis around what it’s going to take to migrate that to a newer platform, and we’re working through that schedule of milestones now. Then, we’ve got some second- and third-order systems that we’re looking at. I can’t go into the names of them for security reasons, but they would help us with things like vetting people.”

Armstrong said FEMA has a goal of re-platforming or re-factoring five or six systems by the end of the calendar year.

One big question FEMA still needs to answer is what to do with systems that would cost a lot of time or money to re-platform but that the agency will eventually shut down when it modernizes the workload.

Armstrong said these are the trade-off decisions that his team is making based on the current state of the system, where it fits in the mission area and the return on investment from modernizing.

“What happens with the things that don’t get moved to cloud? Well, the goal is to get everything out of the existing data center. At a minimum, we would take that remaining amount and move them to some kind of a co-location facility so that we can actually decommission the data center that we have,” he said. “It’s more cost efficient to move to a co-location facility so that we can shut down an aging facility and not have to recapitalize things like the power and heating and cooling, and things like that.”

Cloud brokerage maturing

As FEMA continues in its cloud journey, the CIO office’s cloud brokerage will play a larger role in shepherding applications to the right cloud.

Armstrong said the brokerage remains in its early stages, but is helping more and more mission areas with their cloud decisions.

“Obviously, we’re just like everybody else, we’re still learning our way into cloud. We’re still working on upskilling our workforce on being more cloud centric and savvy,” he said. “We’re working it as we mature our cloud processes.”

Armstrong said FEMA had two recent cloud migration successes. One is moving 19 grant applications to the new FEMA Grants Outcomes, or FEMA GO, platform. The agency plans to shift approximately 20 more grant programs to the platform this month. A second is FEMA’s National Flood Insurance Program (NFIP).

Armstrong said both reached full operational capability in the cloud at the end of March and now are finishing the data transition work.

“We struck out on a goal about a year and a half ago to get those applications migrated to cloud primarily through a lift and shift approach,” he said.

A third system that started its cloud journey is FEMA’s financial systems modernization effort. This has long been a challenge for the entire Homeland Security Department.

“I think getting some of our critical customer-facing applications in the cloud would really hit the mark. There are some things that, whether we get them moved to cloud or not in the next year, probably won’t make a big shift in customer satisfaction or our ability to hit the mission. But if you look at areas like individual assistance, which is very much a customer-facing application because it’s providing assistance to that survivor at the point in time of a disaster, the ability to scale up and meet a surge capacity is something we always try to plan for, or I should say that the agency has tried to plan for the worst day,” he said. “So having a cloud service provider really provides a lot more room for surge capacity. When we talk about resilience, cloud is a key part of that, because being able to surge up, being able to leverage different types of software-as-a-service through the cloud that we may need to opt in to on the fly in order to meet a special demand, all those are really reasons as to why we’re so adamant on getting to cloud. I don’t anticipate that there’s going to be some huge cost savings. At the end of the day, we do get to avoid some recapitalization of the equipment that’s in the facility today. The way I see it, we’re going to get to more resilient data centers and be able to do things like geographic diversity, and have multiple points of entry through the departmental new cap programs.”
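
Armstrong's surge-capacity point is concrete enough to sketch. Assuming, purely for illustration, an AWS EC2 Auto Scaling group behind a customer-facing application (the group name and surge multiplier below are hypothetical, not FEMA's actual setup), "planning for the worst day" can be as simple as raising the group's desired capacity ahead of a forecast spike:

```python
import boto3

asg = boto3.client("autoscaling", region_name="us-east-1")

def surge(group: str, factor: int = 4) -> None:
    """Temporarily raise capacity ahead of an anticipated demand spike,
    e.g., a declared disaster driving traffic to an assistance application."""
    current = asg.describe_auto_scaling_groups(
        AutoScalingGroupNames=[group]
    )["AutoScalingGroups"][0]
    # Multiply current capacity, but never exceed the group's ceiling.
    target = min(current["DesiredCapacity"] * factor, current["MaxSize"])
    asg.set_desired_capacity(
        AutoScalingGroupName=group,
        DesiredCapacity=target,
        HonorCooldown=False,  # surge immediately; don't wait out cooldowns
    )

# Hypothetical group name for illustration.
surge("individual-assistance-web", factor=4)
```

In an on-premises data center, the same surge would require buying and racking hardware months in advance, which is the recapitalization cost Armstrong says the agency gets to avoid.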
