Compliance and capability: Building an AI-ready government workforce

The agencies that will continue to deliver on their mission are not necessarily the ones that move first.

Artificial intelligence is moving fast across the federal government. Today, nearly 90% of agencies have adopted it or plan to. But adoption alone does not equal readiness. The same research shows that skill gaps remain among the biggest challenges agencies face when implementing AI.

This isn’t a criticism. It’s actually one of the biggest opportunities federal agencies have today.

Compliance was never meant to be the finish line

Mandatory annual trainings, cybersecurity awareness, ethics briefings and acceptable use policies are necessary and important. They establish the foundation every agency needs. But that’s exactly what compliance was meant to be: the foundation, not the ceiling.

I’ve led everything from tactical training alongside government teams to federal contracting across private and public sector organizations. What I saw consistently was not a lack of commitment or talent, but a chasm between knowing policy and being able to execute under pressure. Those are two different skills, and only one shows up on a training completion report.

In high-pressure situations, the best-performing leaders were those who had prepared so thoroughly that nothing felt new when it mattered most. Similarly, the best teams did not panic when things went wrong. They smiled.

These leaders made themselves so uncomfortable in training by always testing the limits that the real thing felt like second nature. Those teams wanted to be together. They maximized their time, trusted each other, and were confident in their ability to execute when it counted.

That same discipline is exactly what government agencies need to build capability right now.

Building capability requires discipline

Agencies that perform well under pressure are disciplined about how work gets done, not just what policy says. When teams are lean and demands are high, foundational practices can slip. Reactive work becomes the default. And when that happens, inconsistent execution follows. That was true before AI arrived, and it is even more true now.

A 2023 Government Accountability Office review found that 15 of 23 inspected agencies had incomplete or inaccurate AI use-case inventories. What might look like a technology problem is actually a discipline problem. You cannot scale what you cannot track, and you cannot track what you have not standardized.

Over years of working inside and alongside government organizations, I developed a standardized framework that separates the agencies that execute from those that stall:

Command the decision. Every high-pressure environment has one common breakdown point: nobody knows who owns the call. AI accelerates decision-making, but if authority is unclear, speed becomes a liability. Commanding the decision means defining upfront who decides, what they decide and when they escalate. When that is established, teams stop waiting for permission and start moving with purpose.

Enforce the standard. A standard that is not enforced is just a suggestion. Agencies can roll out the best AI tools available, but if there is no consistent expectation for how they are used and how outcomes are measured, results will vary and progress will stall. The Defense Department’s Responsible AI framework gets this right. It defines who owns outcomes across the entire AI lifecycle, from development through deployment, so accountability stays clear even as AI scales across the organization. That is enforcing the standard at scale.

Multiply capability. Government leaders are being asked to do more with less. The answer is not working harder in isolation, but developing the people around you so the whole team can carry the load. When capability is multiplied and distributed, the organization becomes resilient. When it sits with one or two people, an unexpected problem can set everything back. AI will not change that equation. Only intentional development will.

Rehearse under pressure. Most training happens in calm, controlled conditions. Most missions do not. The GAO has moved in the right direction with AI training tied to specific use cases, giving employees the skills to act efficiently and responsibly at the same time. But access to training is only the start. Scenario-based practice, real points of decision and simulated pressure build the kind of muscle memory that holds when things get hard. Rehearsal is where capability becomes reliable.

Move the mission. All of the above means nothing if it does not translate into forward progress. The goal of capability training is a workforce that executes consistently, adapts quickly and delivers on the mission no matter what the environment throws at them. Most generative AI pilots fail not because of the technology, but because they were never fully operationalized.

Measure what matters. Completion rates tell you who sat through the training, not who can perform when it counts. Meaningful indicators are decision speed under pressure, how quickly teams escalate issues when they should, whether behavior changes after training or reverts within 30 days, and if leaders are reinforcing standards daily or letting them slip. These tell you if your workforce is ready or just compliant.

AI will not be the last shift that demands something new from the government workforce. The agencies that will continue to deliver on their mission are not necessarily the ones that move first. They are the ones that invest in how their people think, decide and execute, and build the discipline to sustain it over time.

Compliance got us here. Capability is what takes us forward.

Ray Resendez is senior vice president of federal solutions at ELB Learning.

Copyright © 2026 Federal News Network. All rights reserved.
