Federal data center consolidation 2.0
Faisal Iqbal, public sector chief technology officer for Citrix, offers four steps to improving data center performance and obtaining real savings.
So your agency has reduced its overall number of data centers; mission accomplished, right? While the Federal Data Center Consolidation Initiative has produced savings, primarily from reductions in real estate and power use, few agencies have taken the time to evaluate the performance, operational and security impact on applications and services after the move.
Server virtualization is powerful for increasing physical resource utilization, and it also provides an easy way to “lift and shift” workloads without any business rationalization. While this reduces the number of physical servers and data centers to manage, it hasn’t necessarily reduced the number of applications, improved performance or eased operational maintenance. These new challenges are becoming rapidly apparent as users, administrators and chief information officers adjust to fewer federal data centers that are farther away.
Shiny new data center. Same old workloads
Server virtualization was a foregone conclusion when consolidation projects originally started, and it helped accelerate workload migrations to new data centers. The underlying hypervisor technology has evolved substantially over the past five years, accelerating the growth of open source alternatives and pushing this core data center technology to the point of commoditization. Agencies seeking greater agility, security and cost benefits are beginning to adopt open source hypervisors and orchestration stacks for a portion of their workloads as a way to drive down virtualization licensing costs.
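As a concrete picture of what working against an open source hypervisor looks like, the sketch below uses the libvirt Python bindings to connect to a KVM host and inventory its virtual machines. This is a minimal sketch, assuming libvirt-python is installed and a local QEMU/KVM hypervisor is reachable; the connection URI and printed fields are illustrative.

```python
# Minimal sketch: inventory VMs on an open source KVM hypervisor via libvirt.
# Assumes the libvirt-python bindings are installed and qemu:///system is reachable.
import libvirt

def list_vms(uri: str = "qemu:///system") -> None:
    conn = libvirt.openReadOnly(uri)   # read-only connection to the hypervisor
    try:
        for dom in conn.listAllDomains():
            state = "running" if dom.isActive() else "stopped"
            # maxMemory() is reported in KiB
            print(f"{dom.name():30s} {state:8s} {dom.maxMemory() // 1024} MiB")
    finally:
        conn.close()

if __name__ == "__main__":
    list_vms()
```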
Federal agencies with advanced DevOps shops are also looking to containers, an alternative to classic server virtualization that’s gaining popularity, to consolidate application workloads even further. The crux of the approach is packaging an application and its dependencies into an isolated “container” instead of a dedicated virtual machine, reducing the overhead of running a separate virtual machine for each application. This technology will undoubtedly unlock even greater data center efficiencies, as compute power goes more toward applications and less toward platform overhead.
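To make the container model concrete, here is a minimal sketch that uses the Docker Python SDK to run a single application in its own isolated container with a memory cap, rather than giving it a dedicated virtual machine. The image name, command and resource limit are illustrative assumptions, not a prescribed configuration.

```python
# Minimal sketch: run one application in an isolated container instead of a full VM.
# Assumes Docker Engine is running locally and the `docker` Python SDK is installed.
import docker

client = docker.from_env()

container = client.containers.run(
    image="python:3.12-slim",        # illustrative base image carrying the app's runtime
    command=["python", "-c", "print('reporting app started')"],
    name="reporting-app",            # hypothetical application name
    mem_limit="256m",                # cap memory instead of reserving a whole VM
    detach=True,
)

container.wait()                     # wait for the short-lived demo process to exit
print(container.logs().decode())
container.remove(force=True)         # clean up the demo container
```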
Additionally, Federal Risk and Authorization Management Program (FedRAMP) certified clouds provide an excellent way to drive down the size of new centralized federal data centers. Although some workloads will undoubtedly need to remain in-house, regularly evaluating workloads to determine whether they are good candidates for transition to the public cloud should be a best practice for all IT managers. By shrinking federal data centers and leveraging cost-effective infrastructure-as-a-service (IaaS) clouds, the overall total cost of ownership of compute has nowhere to go but down.
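That regular evaluation can start as a simple scoring exercise against a handful of criteria. The sketch below is a hypothetical rubric, not an official methodology; the criteria names and weights are assumptions chosen only to illustrate the habit.

```python
# Hypothetical rubric for flagging public-cloud (IaaS) candidates during workload reviews.
# Criteria and weights are illustrative assumptions, not an official methodology.
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    data_sensitivity: int     # 1 (public data) .. 5 (highly sensitive)
    latency_sensitivity: int  # 1 (batch) .. 5 (real-time, chatty)
    demand_variability: int   # 1 (flat) .. 5 (highly elastic or bursty)
    hardware_age_years: int

def cloud_candidate_score(w: Workload) -> float:
    # Bursty demand and aging hardware favor IaaS; sensitive or latency-bound
    # workloads favor staying in-house.
    return (2.0 * w.demand_variability
            + 1.5 * min(w.hardware_age_years, 5)
            - 2.0 * w.data_sensitivity
            - 1.0 * w.latency_sensitivity)

workloads = [
    Workload("public web portal", data_sensitivity=1, latency_sensitivity=2,
             demand_variability=5, hardware_age_years=4),
    Workload("case management db", data_sensitivity=5, latency_sensitivity=4,
             demand_variability=2, hardware_age_years=2),
]

for w in sorted(workloads, key=cloud_candidate_score, reverse=True):
    print(f"{w.name:20s} score={cloud_candidate_score(w):5.1f}")
```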
Physical appliance sprawl
Your new data center undoubtedly has multiple instances of the same type of networking appliance, each one dedicated to a specific department or sub-agency. The “lift and shift” of network appliances, including firewalls, virtual private network (VPN) concentrators, WAN optimization controllers, intrusion detection/prevention systems (IDS/IPS) and application delivery controllers (ADCs), has created massive appliance sprawl that eats up expensive real estate in the data center. Because several of these network services affect security, it is critical that they are patched and configured appropriately at all times. Several agencies are looking at software-defined networking and network functions virtualization (NFV) as a way to consolidate these functions in much the same way they virtualized servers. When virtualizing these networking services, it is critical to choose an open delivery platform that provides simplified centralized management, near line-rate performance and true multi-tenancy for isolation.
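One way to picture that consolidation is as per-tenant service chains running on a shared NFV platform instead of per-department hardware. The sketch below is only a toy model under that assumption; the tenants and function names are hypothetical.

```python
# Toy sketch: per-tenant virtual network function (VNF) chains on one shared NFV platform,
# replacing dedicated hardware appliances per department. Names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class ServiceChain:
    tenant: str                                     # isolated tenant (department or sub-agency)
    functions: list = field(default_factory=list)   # ordered VNFs the tenant's traffic traverses

    def process(self, packet: str) -> str:
        for vnf in self.functions:
            packet = f"{vnf}({packet})"
        return packet

# Each tenant keeps its own chain and policy, but all chains share one virtualized platform.
chains = [
    ServiceChain("benefits-bureau", ["firewall", "ids", "adc"]),
    ServiceChain("field-offices",   ["firewall", "wan-opt"]),
]

for chain in chains:
    print(chain.tenant, "->", chain.process("flow"))
```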
Hello, latency
Fewer data centers mean a greater physical distance between users and what they need every day for their mission: applications, services and data. The result can be degraded performance, because most classic client-server applications weren’t built to handle cross-country latency. Most agencies take a brute-force approach to this new latency challenge, buying bigger, more expensive, more reliable network pipes to ensure performance. But that approach is extremely costly and doesn’t always solve the root performance problem.
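A quick back-of-the-envelope calculation shows why bigger pipes alone don’t fix this: a chatty client-server application pays the round-trip time on every request, regardless of bandwidth. The round-trip times and request count below are illustrative assumptions.

```python
# Back-of-the-envelope sketch: added delay from latency, independent of bandwidth.
# RTTs and the number of application round trips are illustrative assumptions.
ROUND_TRIPS = 200          # e.g., a chatty form-based client-server app opening a record

for label, rtt_ms in [("same-campus LAN", 1), ("regional WAN", 20), ("cross-country WAN", 70)]:
    total_s = ROUND_TRIPS * rtt_ms / 1000
    print(f"{label:18s} ~{rtt_ms:3d} ms RTT -> ~{total_s:5.1f} s of pure wait time")
```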
Several agencies have leaned on application virtualization to address these performance challenges, virtualizing and centralizing the client components of applications in the data center. Because the client interface runs alongside the server, client-server traffic stays inside the data center, with only screen updates, keystrokes and mouse clicks traversing the WAN. Additionally, this model simplifies patching and management by centralizing each application into a single instance.
Another approach that is helping agencies improve performance over WAN connections is WAN virtualization. As the price of Internet-based network links continues to drop, agencies have been looking at augmenting existing multiprotocol label switching (MPLS) links with Internet-based circuits. WAN virtualization is emerging as a solution that can provide the reliability of a classic MPLS WAN link by aggregating multiple tiered links. This maximizes utilization of available networks, adds resiliency and can provide immense cost savings.
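Conceptually, WAN virtualization treats the MPLS circuit and the cheaper Internet circuits as one pool and steers each flow to whichever link currently meets its needs. The sketch below is a simplified illustration of that per-flow selection logic; the link measurements and policy thresholds are assumptions.

```python
# Simplified sketch of WAN virtualization path selection across an aggregated link pool.
# Link measurements and the policy threshold are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Link:
    name: str
    latency_ms: float
    loss_pct: float
    cost_per_gb: float

links = [
    Link("mpls-primary",    latency_ms=35, loss_pct=0.0, cost_per_gb=0.50),
    Link("broadband-isp-a", latency_ms=48, loss_pct=0.2, cost_per_gb=0.05),
    Link("broadband-isp-b", latency_ms=95, loss_pct=1.5, cost_per_gb=0.05),
]

def pick_link(realtime: bool) -> Link:
    # Real-time flows (voice, virtual desktops) need low latency and loss;
    # bulk flows take the cheapest link that is still usable.
    usable = [l for l in links if l.loss_pct < 1.0]
    if realtime:
        return min(usable, key=lambda l: (l.latency_ms, l.loss_pct))
    return min(usable, key=lambda l: l.cost_per_gb)

print("voice flow  ->", pick_link(realtime=True).name)
print("backup flow ->", pick_link(realtime=False).name)
```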
Keep it open
Although agencies have made strides with their consolidation efforts, these new challenges indicate that the work is far from over. Agencies should develop strategies to address them based on the principle of “open,” ensuring that the next-generation solutions they choose do not lock them into a single platform, cloud or vendor. This is a critical step in achieving the next round of cost savings and the next level of agility for federal data centers.
Faisal Iqbal is the public sector CTO for Citrix. He brings more than 10 years of engineering, consulting and project management experience to his current role, focused on providing mobility, virtualization and cloud solutions for several agencies throughout the federal government.