Immigration and Customs Enforcement is more than halfway done with its migration to the cloud, more than a year after kicking off the effort.
ICE started the process after the summer of 2017, according to David Larrimore, the agency’s chief technology officer. That’s when the Department of Homeland Security looked at consolidating the infrastructure it had at its Data Center 1 in Stennis, Mississippi.
“What ICE said was, ‘Well, instead of moving 125 environments from one zone of the data center and putting our entire customer base through that terrible process of outages and downtime and troubleshooting …’ we said, ‘We’re just going to go to the cloud,’” Larrimore said Thursday at the Digital Government Institute’s cloud computing conference. “It’s working and we couldn’t have done it if we didn’t spend two years figuring out what does the cloud mean to us.”
Since then, ICE and contractor Knight Point Systems have gotten about 45 of 75 production systems running out of the cloud, and gone from 300 servers running virtually in the cloud to 1,200, in the span of about four months. All told, Larrimore estimated that the agency has completed more than 60 percent of the cloud migration.
“We’re all in,” he said. “We’ve instituted moratoriums on development. We’ve got all the timelines, we’ve got daily stand-up [meetings],” as well as metrics and dashboards.
Due to the law enforcement nature of its mission, ICE is moving ahead with a hybrid cloud model, in which it keeps some legacy applications in its own data centers.
“As long as your agency [or] your component is still dependent upon a private network, you will always be hybrid,” Larrimore said. “There is no chance that you can be all in the cloud.”
ICE has made swift progress in moving to the cloud, but Larrimore said the agency has encountered some network latency challenges when users pull data from, for example, a data center in Stennis, Mississippi, a cloud environment in Oregon and a data center in Vermont.
“It really becomes a physics problem, where when you start to separate and leverage these cloud services, you’re now separating the data and now you’re adding latency to your system and your application,” he said. “If you’re Customs and Border Protection or TSA Secure Flight, that makes a huge difference. As long as that is a known problem, you’re always going to have trouble adopting a hybrid cloud environment. Knowing that latency is out there, you’re always going to have that problem.”
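As a back-of-the-envelope sketch of the “physics problem” Larrimore describes, the round-trip time between two distant sites has a hard lower bound set by the speed of light in fiber. The coordinates and fiber speed below are illustrative assumptions, not ICE figures:

```python
import math

def great_circle_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers (haversine)."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Approximate coordinates (illustrative assumptions, not exact site locations):
stennis = (30.4, -89.6)   # Stennis Space Center, Mississippi
oregon = (45.9, -119.3)   # a typical Oregon cloud region

distance_km = great_circle_km(*stennis, *oregon)

# Light in fiber travels at roughly 200,000 km/s (about two-thirds of c),
# so the theoretical minimum round trip is 2 * distance / 200,000 seconds.
min_rtt_ms = 2 * distance_km / 200_000 * 1000

print(f"distance: {distance_km:.0f} km, minimum RTT: {min_rtt_ms:.1f} ms")
```

Real-world round trips are typically several times this floor once routing, queuing and processing delays are added, which is why splitting data across distant sites is felt immediately by latency-sensitive applications.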
ICE has employees working all over the world at roughly 500 sites that need access to the agency’s data. But according to Larrimore, its parent agency has, until recently, been focused on access to its data centers.
“DHS has spent 15 years making sure that everybody can connect to, very quickly, our Stennis data center,” he said. “Nobody spent a lot of time talking about the cloud.”
And in the context of moving to the cloud, Larrimore said DHS’s approach has been to make sure its data center can talk to the cloud.
“When you’re at your desk trying to access the cloud, the first thing that little packet does is it goes to the data center in Stennis, Mississippi,” he said, and only then does it connect to the cloud. “We have to fundamentally change that. And that’s a trust problem in the federal government.”
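The cost of that detour can be sketched with simple arithmetic: a packet that hairpins through the data center pays for both legs of the trip, rather than taking a direct path to the cloud. The one-way latencies below are illustrative assumptions, not measured values:

```python
# Assumed illustrative one-way latencies in milliseconds (not measured values):
desk_to_stennis = 25    # field office -> Stennis data center
stennis_to_cloud = 30   # Stennis -> a West Coast cloud region
desk_to_cloud = 35      # field office -> cloud region directly

# Hairpinned path: every packet detours through the data center first.
hairpin_rtt = 2 * (desk_to_stennis + stennis_to_cloud)

# Direct path: traffic breaks out to the cloud locally.
direct_rtt = 2 * desk_to_cloud

print(f"hairpinned RTT: {hairpin_rtt} ms, direct RTT: {direct_rtt} ms")
```

Under these assumed numbers, the hairpinned round trip is 110 ms against 70 ms for the direct path, and the gap grows with every request an application makes.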
Earlier this month, the Office of Management and Budget released the fourth draft of its data center consolidation and optimization memo.
Dominic Sale, the assistant commissioner and director of IT for the General Services Administration’s Technology Transformation Service (TTS), said OMB’s new policy was much more refined and “focused on the realities of the federal government.”
Under the previous guidance, agencies complained that OMB and GSA didn’t negotiate or discuss data center consolidation prior to rolling out the policy.
“That policy set a grand target: ‘We’re going to close this number of data centers,’ and we put that out there, and we pushed agencies to shut the data centers,” Sale said. “The problem is, we didn’t do enough homework upfront.”
That homework, he added, should’ve included an accounting of closet-sized server rooms and a better picture of how many data centers agencies had prior to any effort to consolidate them.
“We got ahead of ourselves, we got ahead of the agencies, and I think we paid the price for that,” Sale said.
However, the past year has been a good one for compliance with the Federal Risk and Authorization Management Program (FedRAMP). Sale noted a 60 percent increase in FedRAMP-authorized cloud usage by all CFO Act agencies between FY 2017 and 2018.
“That’s a huge leap,” he said.
Looking more closely at the numbers, Sale said 40 new agencies began using FedRAMP-authorized providers in the past year, and 40 new products received FedRAMP authorization in FY 2018.
Another 65 vendors are currently in the pipeline to receive FedRAMP authorization in 2019.
“We’re seeing and hearing more cloud offerings that are incorporating emerging technologies – machine learning, AI, robotic process automation, containerization and other cutting-edge technologies,” Sale said. “So not only are we seeing more, but we’re seeing a broader array of services that are available.”
Tom Sasala, the Army’s chief data officer and the director of the service’s Architecture Integration Center, shed more light on the hybrid cloud strategy that began in FY 2019.
The Army estimates it will migrate 70-80 percent of its 8,000 enterprise systems to commercial cloud hosting – whether that’s on-premises, off-premises, enterprise or tactical.
Mission-critical systems, like those used for nuclear command and control, make up about 10 percent of the overall picture, and will remain on-premises.
The remaining 10 percent of applications, labeled “antique,” will also remain on-premises.
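The rough arithmetic behind that breakdown, using only the figures above (the cloud share is given as a 70-80 percent range, so the counts are approximate):

```python
total_systems = 8000  # enterprise systems, per the Army's estimate

# Shares described above:
cloud_share_low, cloud_share_high = 0.70, 0.80
mission_critical_share = 0.10  # stays on-premises (e.g., nuclear command and control)
antique_share = 0.10           # legacy "antique" apps, also on-premises

cloud_low = int(total_systems * cloud_share_low)
cloud_high = int(total_systems * cloud_share_high)
on_prem = int(total_systems * (mission_critical_share + antique_share))

print(f"cloud-hosted: {cloud_low}-{cloud_high} systems, on-premises: about {on_prem}")
```

That works out to roughly 5,600 to 6,400 systems headed to commercial cloud hosting, with around 1,600 staying on-premises.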
“They’re legitimately antique. They’re the silver that needs to be polished to stay shiny, and we have failed to polish them,” Sasala said. “These [are] applications we need to run our business. We haven’t touched them [in] some period of time. In some cases, 15 years. They are not cyber-ready for any modern compute environment. They are not architected for a modern compute [environment]. And frankly, because they’re baked in the core of our infrastructure, we can’t get rid of them, but don’t know what to do with them.”
In keeping these systems walled-off from the rest of its applications, Sasala said the Army will be “managing the risk as opposed to avoiding the risk.”
“We are going to put them in a location and we are going to protect them, knowing they have massive cyber vulnerabilities. We’re going to [use] our defense-in-depth and a variety of other things and say, ‘We still need them until we can figure out what to do with them … but we’re going to have them in our environment.’”