Test automation as a force multiplier in government

Most projects conducted at federal agencies are traditionally complex, lengthy and resource-intensive. In the past, software releases occurred every six months, but now, in the digital transformation era, they are happening monthly. This puts additional pressure on federal agencies already facing persistent cyberattacks and compliance requirements. Even a minor software update can bring operations to a standstill, putting a heavy burden on teams to minimize risk as they release new software. As the public sector continues to migrate to the cloud, these organizations face new challenges in handling test automation effectively.

Today, developers spend an enormous amount of time manually testing application software, a task that is critical to avoiding bugs and security flaws but that should also be quick and painless. Manual testing can be very costly: organizations spend, on average, more than $9 million per year on it, and those numbers are even higher in the federal government. The cost cascades into significant delays in product releases and, ultimately, a less-than-optimal user experience. Commercially, these issues are receiving the attention they require. The federal government, unfortunately, can be several cycles behind the commercial space when it comes to automating software testing.

There are several reasons why the public sector continues to struggle with test automation, including the high frequency of application changes, the availability of test data and environments, ensuring traceability, optimizing efficiency, and covering specialized legacy systems. The ultimate goal of test automation is to save your organization time and money and to allow government employees to accelerate the DevSecOps process when it comes to software testing.

Reducing costs, accelerating the mission

First and foremost, automating software testing just for the sake of automation will not produce the results you are looking for. What is the business value of each software element you are testing, and what is the right strategy for testing these software-implemented functions? Automation brings speed, but the ultimate goal is not incremental improvement to testing; it is tapping into the full power and benefits of automated software testing.

On average, organizations spend upwards of 31,000 days manually testing software with scripts. This underscores the need for scriptless, model-based test automation, yet some organizations are slow to get on board. Typically, test automation is limited and expensive because of the number of specialized resources required to run, maintain and review the tests and their data. According to the Capgemini World Quality Report 2020-21, only 15% of testing is automated, and quality assurance efforts consume, on average, 22% of the IT budget. Why is this the case? It mostly comes down to organizations doing one of two things: testing everything or testing in production.

Instead, especially at federal agencies, streamlining the process is all about testing the right objects. Ask which objects would be most at risk from an update and test only those. Testing causes business disruption; testing the right objects therefore allows your organization to deliver higher-quality releases faster while avoiding the complications that cause issues with your software.

This strategy is called “smart impact analysis,” and it uses AI to identify which objects in your system would be most affected by the changes in an update. The AI factors in how each object is used in the system, which focuses testing on the right data: the potential sources of production defects. By identifying the right data to test, smart impact analysis cuts the average test scope for a release by 85%. In other words, it gives you up to 100% risk reduction for only 15% of the effort.
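
To make the idea concrete, below is a minimal sketch of how this kind of risk-based test selection might work, assuming each system object carries a change-impact score and a usage weight. The object names, scores and the select_test_scope helper are illustrative assumptions, not the actual smart impact analysis implementation.

```python
# Minimal sketch of risk-based test selection. All names and numbers are
# illustrative assumptions, not a vendor implementation.
from dataclasses import dataclass


@dataclass
class SystemObject:
    name: str
    change_impact: float  # 0.0 (untouched by the update) .. 1.0 (heavily changed)
    usage_weight: float   # 0.0 (rarely used) .. 1.0 (core to daily operations)

    @property
    def risk(self) -> float:
        # Objects that are both changed and heavily used are the likeliest
        # sources of production defects.
        return self.change_impact * self.usage_weight


def select_test_scope(objects: list[SystemObject], risk_coverage: float = 0.95) -> list[SystemObject]:
    """Return the highest-risk objects whose combined risk meets the coverage target."""
    ranked = sorted(objects, key=lambda o: o.risk, reverse=True)
    total_risk = sum(o.risk for o in ranked) or 1.0
    selected: list[SystemObject] = []
    covered = 0.0
    for obj in ranked:
        if covered / total_risk >= risk_coverage:
            break
        selected.append(obj)
        covered += obj.risk
    return selected


if __name__ == "__main__":
    inventory = [
        SystemObject("invoice_posting", 0.9, 0.8),
        SystemObject("user_profile_page", 0.1, 0.3),
        SystemObject("payroll_batch_job", 0.7, 0.9),
        SystemObject("legacy_report_export", 0.05, 0.2),
    ]
    scope = select_test_scope(inventory)
    # Only the two high-risk objects are selected, shrinking the test scope.
    print("Objects to test:", [o.name for o in scope])
```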

Continuous testing to mitigate employee churn

The only way an organization can accomplish this is by evolving testing from a manual step tacked onto the end of release deployment into an integrated part of the delivery pipeline. The way to make this part of your organization’s automation journey is through continuous testing. The public sector often experiences high turnover for multiple reasons: contract employees rotate off projects, and federal employees are assigned to an office or duty for a limited period before being transferred. Neither allows legacy system knowledge to remain within your organization. With manual testing, releases are delayed because teams spend more time ensuring the software meets regulatory and compliance standards. This puts added pressure on the DevSecOps department to ensure that new employees, as well as existing ones, are trained and understand the goals, mission and direction of their department.
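
As a companion sketch, the script below shows one way a pipeline could gate a release on an automated run of only the impacted tests. It assumes pytest is available in the pipeline environment, and the test file names are illustrative stand-ins for whatever impact analysis would produce.

```python
# Minimal sketch of a continuous-testing gate run after every build.
# The test paths below are hypothetical placeholders.
import subprocess
import sys

# In practice this list would come from impact analysis; here it is hard-coded.
IMPACTED_TESTS = [
    "tests/test_invoice_posting.py",
    "tests/test_payroll_batch_job.py",
]


def main() -> int:
    # Run only the impacted tests; a nonzero exit code blocks the release.
    result = subprocess.run(["pytest", *IMPACTED_TESTS], check=False)
    if result.returncode != 0:
        print("Impacted tests failed; blocking the release.")
    return result.returncode


if __name__ == "__main__":
    sys.exit(main())
```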

Automation for a better future for all

By focusing on automating software testing, organizations can drastically reduce the number of manual hours employees spend on testing. This, in turn, increases the speed of release deployments, saves money and enables organizations to achieve cloud migration, digital transformation and innovation initiatives more quickly. To achieve automation, an organization must utilize continuous testing, and nowhere is its power more apparent than in SAP-driven digital initiatives. Automation will provide a better digital experience for customers by accelerating time to market for software updates. Organizations must embrace the adoption of continuous testing and shift their delivery pipeline from waterfall to agile. By doing so, your organization can achieve 10x faster testing speeds, reduce risk by 90% and save upwards of 50% of its DevSecOps budget (millions of dollars) per year. This ultimately gives the government the ability to better deliver on its mission to safely advance the populations it serves.

John Phillips is VP of public sector at Tricentis.
