Three critical steps to fast-track agency digital transformation and data management

Michael Coene, the Chief Cloud Architect at Hitachi Vantara Federal, explains why course correcting for the longer term requires a detailed approach to strategic digital transformation.

Suppose you’re a scientist with the Food and Drug Administration reviewing data for a potential COVID-19 vaccine, a NASA analyst processing data from the International Space Station, or a National Oceanic and Atmospheric Administration scientist running weather models for developing tropical storms and hurricanes. Over the last several months, employees like these from across the government have been performing their work from home while the data at the heart of that work still lives in the onsite data center.

Clogged virtual private networks and home bandwidth issues can slow productivity and frustrate users. Practically speaking, most agencies’ network infrastructure was designed to support short-term remote work spikes (like occasional snow days), not months of high-volume demand. With increased telework now a permanent reality, course correcting for the longer term requires a detailed approach to strategic digital transformation.

That involves keeping datasets and related processing resources close together regardless of the place of performance. While agencies are migrating to the cloud as quickly as possible, the cloud isn’t the answer for every situation. An agency may well have existing resource-intensive processes and analytics in the on-premises environment that require proximity to the same data that distributed remote workers use. A successful solution has to accommodate both.

That means tomorrow’s architecture needs to support the full range of locations where work happens and data flows: the edge, the data center core and the cloud. It involves a three-pronged approach.

Enable today’s operations, but plan for disruptions and changes in demand

Strategic architectural design should optimize data management practices for the known and, to the extent possible, the unknown. Start by defining who the users are and what needs to be done to support them in a way that makes performing their work seamless. Wherever data will be required, assess its value to the mission, evaluate data collection capability requirements and understand what types of decisions will be made at that location.

For instance, how will NIH scientists using an electron microscope in the lab conduct image analysis on the resulting large datasets while they are working at home? How does a deployed DoD Special Ops platoon push the most important data back to the on-base data center when bandwidth is low? These types of operational decisions require prioritizing what’s most mission critical.

Stretch your thinking to scenarios that may seem unlikely. No one anticipated the bulk of the federal workforce going remote for months on end; COVID has taught us otherwise. Brainstorm on what other kinds of unusual situations could potentially arise, and factor them into your planning.

Bring the data and computing power to where they’re needed

Performance will determine whether the user experience, and the productivity that depends on it, is acceptable, so the compute must be close to the data. One approach is to maintain copies of the data in both the on-premises environment and the cloud. Another is to replicate the data on a just-in-time basis in each environment where and when it’s needed.

Latency is the main concern. Unlike streamed video content, which is buffered ahead of playback for smooth viewing, a large data set must be moved completely before it is useful. That may mean partial or full replication, depending on the user’s request. Obviously, the size of the data set determines how long that transfer will take.
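
To make that arithmetic concrete, here is a minimal Python sketch of the transfer-time estimate; the data set sizes and link speeds are hypothetical examples, not figures from any agency.

# Rough transfer-time estimate: a data set must arrive in full before it is usable.
# All sizes and link speeds below are hypothetical examples.

def transfer_time_hours(dataset_gb: float, link_mbps: float) -> float:
    """Hours to move a data set of dataset_gb gigabytes over a link_mbps link."""
    bits_to_move = dataset_gb * 8 * 1000**3          # gigabytes -> bits
    seconds = bits_to_move / (link_mbps * 1000**2)   # megabits per second -> bits per second
    return seconds / 3600

# A 500 GB imaging data set over a 100 Mbps office link vs. a 25 Mbps home VPN.
print(f"Office link: {transfer_time_hours(500, 100):.1f} h")   # ~11.1 hours
print(f"Home VPN:    {transfer_time_hours(500, 25):.1f} h")    # ~44.4 hours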

In some cases, users will need to anticipate and schedule delivery of the data they want, so a derivative copy of the master data can be available when needed. That can be particularly complicated in a multi-cloud environment, where some users prefer the capabilities of Google Cloud, others Microsoft Azure, and still others AWS or another platform. The key is to provide a simple, seamless experience, whether for a remote researcher or an on-premises high performance computer.
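
One way to picture that kind of scheduled delivery is a simple replication request that names the data set, the target platform and the deadline for the derivative copy. The Python sketch below is purely illustrative; the dataset IDs, platform names and fields are assumptions, not any vendor’s API.

# Hypothetical sketch of a scheduled, per-cloud replication request.
# Platform names and dataset IDs are illustrative placeholders only.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class ReplicationRequest:
    dataset_id: str            # master data set held in the core data center
    target: str                # "aws", "azure", "gcp" or another platform the user prefers
    ready_by: datetime         # when the derivative copy must be in place
    partial_query: str = ""    # optional filter for partial replication; empty means full copy

def schedule(requests: list[ReplicationRequest]) -> None:
    # Move the most time-sensitive copies first.
    for req in sorted(requests, key=lambda r: r.ready_by):
        scope = req.partial_query or "full copy"
        print(f"Stage {req.dataset_id} ({scope}) to {req.target} by {req.ready_by:%Y-%m-%d %H:%M}")

schedule([
    ReplicationRequest("storm-model-2020-09", "azure", datetime(2020, 9, 14, 6, 0)),
    ReplicationRequest("em-images-batch-07", "gcp", datetime(2020, 9, 13, 18, 0), "magnification >= 50000"),
])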

Set clear governance policy for data availability, storage and retention

Timeframe is an important parameter: for instance, making a particular data set available in a specific cloud location for a specified amount of time. If the data goes unused during that time, it will be removed, and a new request will be required if anyone still wants to work with it.
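
A minimal sketch of such an availability window, again in illustrative Python with hypothetical names and durations, might look like this:

# Hypothetical sketch of a time-boxed availability window for a staged copy.
# Dataset names, locations and durations are illustrative assumptions, not agency policy.
from datetime import datetime, timedelta, timezone

class StagedCopy:
    def __init__(self, dataset_id: str, location: str, available_for: timedelta):
        self.dataset_id = dataset_id
        self.location = location                                  # e.g. a specific cloud region
        self.expires_at = datetime.now(timezone.utc) + available_for
        self.accessed = False

    def touch(self) -> None:
        """Record that a user actually worked with the copy during its window."""
        self.accessed = True

    def should_remove(self) -> bool:
        """Expire unused copies; a fresh request is required to stage the data again."""
        return datetime.now(timezone.utc) > self.expires_at and not self.accessed

# Example: stage a data set in a cloud region for two weeks.
copy = StagedCopy("storm-model-2020-09", "azure-gov-east", timedelta(days=14))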

Clearly define retention and archiving requirements. That includes deciding what to do with buffered data that users have stopped accessing. If more on-premises storage capacity is required, it may need to be purchased. But purging unused data sets is a valid option because, under a well-defined retention policy, the location of the official copy is clear.

All of this requires thoroughly understanding and balancing edge, core and cloud requirements based on data-driven insights. The goal is defining what data should be pushed back to the core data center while doing as much as possible autonomously with compute that is local to the edge.
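
As a rough illustration of that edge-versus-core balance, the following Python sketch fills a constrained uplink with the highest-priority items and keeps the rest local for edge processing; the priorities, sizes and bandwidth budget are assumptions made up for the example.

# Hypothetical sketch: decide at the edge what to push back to the core data center.
# Priorities, sizes and the uplink budget are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class EdgeItem:
    name: str
    size_mb: float
    mission_priority: int   # 1 = highest priority

def plan_uplink(items: list[EdgeItem], available_mb: float):
    """Send the highest-priority items that fit the uplink; process the rest locally."""
    send, keep_local = [], []
    budget = available_mb
    for item in sorted(items, key=lambda i: (i.mission_priority, i.size_mb)):
        if item.size_mb <= budget:
            send.append(item)
            budget -= item.size_mb
        else:
            keep_local.append(item)
    return send, keep_local

send, local = plan_uplink(
    [EdgeItem("sensor-summary", 40, 1), EdgeItem("raw-video", 9000, 3), EdgeItem("situation-report", 5, 1)],
    available_mb=500,
)
print("Push to core:", [i.name for i in send], "| Keep at edge:", [i.name for i in local])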

Ultimately, this is all about simplification for the user whose contribution depends on reliable, fast data access and the compute power to work with it. Whether the needed outcome is preventing the spread of disease, keeping soldiers alive or another important mission, IT should not and must not inhibit those on the front lines from doing their work.

Michael Coene is the Chief Cloud Architect at Hitachi Vantara Federal, with over 30 years of experience in information technology, including 25 years at the Food and Drug Administration, where he retired as the Chief Technology Officer.

