Hyper-converged platform, infrastructure – A game changer for government?
Edris Amiryar, a senior systems engineer for NetCentrics, provides six reasons why agencies should consider a new approach to technology integration.
Hyper-converged infrastructure (HCI) and hyper-converged platforms (HCP) have been gaining attention in the commercial space lately, but does hyper-convergence hold promise for federal agencies? As someone who works hand-in-hand with IT practitioners at federal agencies and has helped deploy hyper-converged platforms for them, I believe it does. Here are some reasons the government should consider shifting from traditional storage, networking and compute solutions to a hyper-converged technology.
What is hyper-convergence?
Hyper-convergence is a type of infrastructure with a software-focused architecture that tightly integrates compute, storage, networking and virtualization resources, along with other technologies, in a commodity hardware box supported by a single vendor. In other words, all three core technologies (compute, network and storage) live together in one system, a union commonly referred to as a hyper-converged platform.
You’ll hear the terms “hyper-converged infrastructure” and “hyper-converged platform,” and while the two differ, neither can really exist apart from the other. I’ll be referring to a hyper-converged platform (HCP).
Advantages of hyper-convergence
Less complicated, more efficient
In a traditional network, storage area networks (SANs) typically run over host bus adapters (HBAs), which provide a separate backbone with a dedicated and redundant path between the host and storage. This traditional setup requires a duplicate set of switches, cables and connectors. If a server loses access to storage, it will likely experience a severe failure or crash at the operating system level, so it is best to have at least two independent paths between host and storage. Contrast that with a hyper-converged platform, where the network, storage and host are converged. Because the storage is local, there is no need for redundant dedicated storage paths, and there is far less opportunity for a crash caused by a loss of access to storage.
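To make the redundancy argument concrete, here is a minimal back-of-the-envelope sketch. The failure probability is an assumed, illustrative number, not a figure from any deployment:

```python
# Illustrative reliability arithmetic, assuming each SAN fabric path fails
# independently with probability p. Losing every path means losing storage
# access, which typically crashes the host operating system.
p = 0.01  # assumed per-path failure probability (hypothetical)

for paths in (1, 2):
    print(f"{paths} path(s): P(all paths down) = {p ** paths:.4%}")
# 1 path(s): P(all paths down) = 1.0000%
# 2 path(s): P(all paths down) = 0.0100%

# On a hyper-converged node the disks are local, so there is no storage
# fabric to lose in the first place.
```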
Lower cost
HCP is generally cheaper in terms of hardware because it enables an organization to use commodity storage, requires less maintenance, and automates much of the storage configuration, such as tiering, deduplication and replication over Ethernet, without sacrificing performance or reliability. Additionally, by combining storage and compute into one platform, you are essentially maintaining one infrastructure rather than two. Eliminating the maintenance requirements of a dedicated storage infrastructure translates to significant cost savings in both hardware and labor.
Easier to deploy new services
Using an HCP gives an organization the ability to grow and expand quickly, easily and efficiently. Prior to HCP, organizations had to rely on SAN storage, typically accessed at the block level, which requires specific storage to be carved out, maintained, classified and set up with redundancies. If a user requests 10 gigabytes of tier II storage, IT needs to find the available storage, reserve it and then map it to a host. If the organization later finds it needs an additional 5GB to accommodate new services, provisioning that extra amount is often arduous: the storage engineer must reclaim the 10GB, carve out a new 15GB partition and re-provision it. Because storage is virtualized in HCP, however, new needs can be accommodated very quickly.
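A minimal sketch of the two workflows described above makes the difference plain. The classes below are illustrative stand-ins, not any vendor's actual API:

```python
# Simplified simulation of growing a 10GB allocation to 15GB under each model.

class SanArray:
    """Block storage: capacity must be carved into fixed LUNs per request."""
    def __init__(self, capacity_gb):
        self.capacity_gb = capacity_gb
        self.luns = {}  # host -> provisioned GB

    def carve_lun(self, host, size_gb):
        free = self.capacity_gb - sum(self.luns.values())
        if size_gb > free:
            raise RuntimeError("not enough free capacity on this array")
        self.luns[host] = size_gb

    def grow(self, host, new_size_gb):
        # Growth means reclaiming the old LUN and carving a bigger one.
        del self.luns[host]
        self.carve_lun(host, new_size_gb)


class HcpPool:
    """Virtualized pool: a resize just updates the logical allocation."""
    def __init__(self, capacity_gb):
        self.capacity_gb = capacity_gb
        self.vdisks = {}  # vm -> logical GB

    def resize(self, vm, size_gb):
        self.vdisks[vm] = size_gb  # one call, no reclaim/re-carve cycle


san = SanArray(100)
san.carve_lun("host1", 10)   # the original 10GB tier II request
san.grow("host1", 15)        # reclaim 10GB, carve 15GB, re-map to the host

hcp = HcpPool(100)
hcp.resize("vm1", 10)
hcp.resize("vm1", 15)        # accommodated immediately
```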
Easier to commoditize storage and compute
Using the same example, an organization might also find that while it provisioned 10GB of storage, it ends up needing only 7GB. Under the SAN model, the remaining 3GB would be wasted. Not so under HCP, because all storage is automatically combined. Regardless of user storage needs, whether 10 disks or 100, the entire storage pool is virtualized, eliminating the complications of tiering, partitioning and expansion. If IT needs to add another block or node, the additional resources are presented within seconds, and replication occurs automatically in the background.
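The same point in code: under a thin-provisioned, pooled model, declaring a 10GB virtual disk does not strand 10GB of physical capacity. This is a simplified sketch of the accounting, not a real storage stack:

```python
# Thin-provisioning sketch: physical blocks are consumed only when data
# is actually written, regardless of the declared logical disk size.

class ThinPool:
    def __init__(self, physical_gb):
        self.physical_gb = physical_gb
        self.written_gb = 0.0

    def write(self, gb):
        self.written_gb += gb  # only real writes consume the pool

    @property
    def free_gb(self):
        return self.physical_gb - self.written_gb


pool = ThinPool(physical_gb=100)
pool.write(7)        # a VM with a 10GB logical disk has written only 7GB
print(pool.free_gb)  # 93.0 -- the "unused" 3GB never left the shared pool
```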
Enables virtual desktop infrastructure (VDI)
HCP lends itself to a virtual desktop infrastructure (VDI), an up-and-coming trend in the Department of Defense (DoD) and other federal agencies. The Pentagon, for instance, has a VDI environment as part of the Joint Service Provider (JSP) initiative, an effort to consolidate and reduce redundancies in how DoD delivers IT services in the Washington, D.C. area. The multitenant VDI infrastructure allows for a traditional desktop experience while being more secure than a traditional desktop. It also eliminates the need to “touch” every desktop for hardware refreshes, as all upgrades are performed on the back end and are seamless to the end customer. This saves thousands of person-hours and avoids the disruption to customers that a traditional hardware refresh entails.
Easier to move to the cloud
Because HCP lends itself to redundancy and clustering at the application level, it greatly simplifies the labor and effort required to move to the cloud. HCP also allows simpler growth and elasticity of your applications and storage, letting customers take advantage of cloud-based burst offerings so applications are never constrained by compute or storage resources. Expanding and upgrading are as simple as adding a new node to the infrastructure, then joining it to the existing cluster. There is no more concern about mapping storage paths or reconfiguring raw-disk-mapping clusters.
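Here is a minimal sketch of that scale-out model, with hypothetical node sizes. Joining a node grows compute and storage together, and any rebalancing of existing data is assumed to happen in the background:

```python
# Scale-out expansion sketch: cluster capacity grows by joining a node,
# with no storage-path or raw-disk-mapping work. Illustrative only.

class Node:
    def __init__(self, cpu_cores, ram_gb, disk_gb):
        self.cpu_cores, self.ram_gb, self.disk_gb = cpu_cores, ram_gb, disk_gb

class Cluster:
    def __init__(self):
        self.nodes = []

    def add_node(self, node):
        # In a real HCP, replication/rebalancing onto the new node
        # happens automatically in the background after the join.
        self.nodes.append(node)

    def totals(self):
        return {
            "cpu_cores": sum(n.cpu_cores for n in self.nodes),
            "ram_gb": sum(n.ram_gb for n in self.nodes),
            "disk_gb": sum(n.disk_gb for n in self.nodes),
        }

cluster = Cluster()
cluster.add_node(Node(32, 512, 8_000))
cluster.add_node(Node(32, 512, 8_000))  # expansion = one more join
print(cluster.totals())  # compute AND storage grow together
```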
Should federal agencies and organizations move to HCP?
All of JSP is already on an HCP. Roughly two years ago, all new machines went onto an HCP, and the rest have been migrated in phases since then. As a result of this transition, server build time dropped from two business days to approximately 30 seconds, and disk, compute and memory utilization metrics improved by 300 percent.
As noted above, HCP is cheaper in terms of hardware: it runs on commodity storage, requires less maintenance, and automates storage configuration such as tiering and replication without sacrificing performance or reliability, and combining storage and compute means maintaining one infrastructure rather than two. Because the storage is virtualized, valuable space is not wasted through over-partitioning. And with fewer “moving parts,” HCP is more resilient to failure, since its dependencies are local rather than spread across two or more disparate systems.
While SAN isn’t going away anytime soon, more and more organizations — in both the federal and commercial space — will start shifting from SAN to HCP. Certainly federal agencies have to carefully balance budget restrictions and the increasing demand for IT services. But hyper-convergence is a solution that can empower government leadership to accomplish their organizational objectives while improving the security, scalability, reliability and efficiency of their networks.
Edris Amiryar is a senior systems engineer for NetCentrics, a provider of enterprise IT services and cybersecurity for the federal government.