Six years ago, the federal government began to significantly shift the way it thinks about and tracks its software assets. With the passage of the MEGABYTE Act, the chief information officer of every federal agency is required to develop a comprehensive software licensing policy, keep an inventory of the software licenses the agency holds, and track basic data on software usage, with an eye toward eliminating duplicative licenses and ushering in basic efficiencies. This legislation has had a positive impact on how agencies manage their software assets, helping them save nearly half a billion dollars since enactment.
Fast forward to today, and simply knowing how many licenses we have is no longer enough. In the wake of the pandemic, the digital transformation of work has continued to accelerate, and the public sector is no exception to this trend. We need to understand not only how many licenses we have and which ones are being used, but how they are being used and by whom in order to optimize the efficiency and effectiveness of our software spend. In short, we need to ensure government employees are effectively using the software provided to them.
Earlier this year, the House Armed Services Committee published a report with a proposal that would require the Defense Department to study its software usage with the intent of identifying “underperforming software.” This study, the scope of which is still to be determined, will likely show significant opportunities for DoD to improve the performance and utilization of a host of software products deployed across the enterprise on which the federal government spends billions of dollars every year.
What’s more, the committee’s recommendation is hardly the only proposal of its kind. Recently, Chairman Gary Peters (D-Mich.) of the Senate Homeland Security and Governmental Affairs Committee introduced legislation, known as the Strengthening Agency Management and Oversight of Software Assets Act (SAMOSA), that would update the MEGABYTE Act, moving federal agencies toward a deeper understanding of the software assets they own while requiring that agencies better plan their software acquisitions to reduce duplication. Peters’ bill would also foster a real understanding of how that software is being utilized – what’s working and what’s not – by incentivizing the use of software analytics to inform future acquisition decisions. Finally, the bill would encourage greater use of enterprise license agreements, particularly with larger software vendors.
So the shift toward more efficient software management is clearly happening, but why does it matter? In recent years, technologies have been developed that can provide agencies with the very type of software utilization analytics that Peters’ bill and the NDAA provision propose. These technologies, when leveraged effectively, will allow agencies to maximize the impact of their software spend. They will also help agencies eliminate unused software and improve the user experience with underutilized and underperforming software, and thus help provide the government with the biggest bang for the taxpayers’ buck.
Many times, while planning major systems upgrades, enhancements or replacements, agencies struggle to understand the basic facts on the ground: What existing software is critical to maintain? What disruptions will occur when certain software programs are sunsetted? Is the existing software even being used, and if so, how? This is a key reason why so many of our major systems projects fail, or at least fail to meet user expectations.
Documenting an inventory of the software assets an agency owns and understanding the licensing agreements associated with them is a great start. However, for the government to truly realize the intended value of its digital transformation efforts, we need to take it a step further. As CIOs prepare for this next stage (law or no law), there are three major points to consider:
Are you able to not only inventory, but easily assess the effectiveness of an application based on both quantitative (what users are doing) and qualitative (how they feel about the experience) data with measurable scores and key performance indicators (KPIs)?
Can you quickly identify and resolve areas of friction within your stack of software applications (internal or external) leveraging no-code tools that don’t require extensive development time and effort?
How do you ensure users navigate these solutions compliantly, without skipping important steps or deviating from the intended path and risking regulatory violations, among other bad outcomes?
In today’s fast-moving world of technology, every organization is a software organization. The software we run is what enables us to achieve our mission, whether that is a back-office function or a program on the frontlines of serving the warfighter on the battlefield. Recognizing this, the move to better understand not just what software we have, but how and to what end we use it, is beyond critical.
Ben Straub is the head of public sector at Pendo, where he leads a team focused on bringing Pendo’s product experience and digital adoption platform to the government, education, and nonprofit sectors to enable better digital experiences for all.