Let’s start out with this basic truism: No one likes the current approach to rating contractor performance.
Not the agency contracting officers or program managers, and not the vendors, who sometimes wait three to six months after a contract is complete to get a mostly meaningless "satisfactory" rating.
The data itself lacks value and transparency.
And, to be honest, it seems to have become another checklist activity for many agencies.
A new survey by GovConRx and the Office of Federal Procurement Policy shows, once again, just how little value there is in the current approach to contractor performance assessment ratings (CPARs).
"One of the facts that we heard back was how many agencies still aren't doing CPARs, or certainly not on time," said Ken Susskind, founder and CEO of GovConRx, in an interview. "It was interesting to hear back from industry about not getting CPARs ratings, because in the end they are needed for future procurements, notwithstanding the fact that OFPP is taking measures, having policy and tracking with metrics on the CPARs website how far behind or ahead they may be."
GovConRx worked with industry groups, the Professional Services Council (PSC), Armed Forces Communications and Electronics Association (AFCEA), American Council for Technology and Industry Advisory Council (ACT-IAC), and the Government Technology and Services Council (GTSC), to determine the current state of CPARs.
Among the results of the survey that stood out are:
Mike Smith, a former director of strategic sourcing at the Department of Homeland Security and now executive vice president at GovConRx, said the use of self-assessments is not a new concept, and one that is commonly viewed as a performance management best practice.
“We first saw this in the human resources world, asking employees to provide some kind of summary of their accomplishments and adding any associated metrics to help with annual performance reviews. This is the same kind of thing,” he said. “The idea is not for them to provide their rating, but their key accomplishments and associated metrics. Industry has a vested interest in making sure they provide the right kind of data to help increase the accuracy and usefulness of the data in CPARs.”
Greg Giddens, a former head of acquisition at the Department of Veterans Affairs and now an adviser to GovConRx and a partner with Potomac Ridge Consulting, said the self-assessment can improve communication between the contractor and government.
"In the end, what we don't want to happen is after the period of performance, the government then gives a report card to industry. We want them engaging during the period of performance so the government actually gets mission-enabled by the performance that industry is doing," he said. "We'd rather identify something early in the process, through this dialogue and self-assessment discussion, and get it corrected, versus letting it go on and in the end the government says, 'it didn't turn out good.' That doesn't help get the mission done. The self-assessment provides a catalyst for those discussions to make some of those mid-course corrections that may be needed."
The challenge with self-assessments, however, is clear. It’s like asking a student to grade themselves on a test. No vendor will rate themselves poorly or say they didn’t meet the goals or objectives, and it could lead to more delays over definitions and disagreements.
Giddens said he used this self-assessment approach during his time in government, using a “trust but verify” methodology where the contractor and agency both brought metrics to the discussion.
“There is an assumption that the government is tracking and monitoring the key elements of the contract performance anyway. This is just an additional assistance they may or may not be tracking, or give them some idea of other factors that might lead to an evaluation,” Smith said.
Susskind added the emphasis shouldn’t be on the rating or grade, but the narrative based on the right type of data.
“It has to start with some substantiated documentation and narrative. That’s the problem and what’s missing so the suggestion is industry should participate with the government to provide that narrative,” he said.
Susskind added that a theme emerged during the survey process: Industry believed the contracting officers and other acquisition officials didn't provide the kind of feedback from the program side needed to judge the vendor's performance.
The data seems to support that too. GovConRx continues to track ratings across the government, and the use of "satisfactory" continues to rise, while determinations of "very good" or "excellent" have slid downhill over the last five years.
OFPP and DHS recognized last year there are challenges with CPARs and launched a pilot program using artificial intelligence to collect data to help fill out the ratings.
But Smith said one of the issues the pilot is facing is the quality of the data.
“Our hope is to help increase the quality of the data in the system so when you apply the AI tools to the data, you are able to pull out important, relevant information that is valid in the source selection,” he said.
Susskind said other agencies are looking at running some self-assessment pilots to see how the approach could work. He hopes that the pilots may help show how new tools or methodologies could reduce the burden on the acquisition workforce as they clearly believe CPARs is “just another thing” they have to do.
Smith said when he was in government if a contractor offered a self-assessment, he would’ve welcomed that input, and industry could do that now without any change to acquisition regulations or any new policy from OFPP.
He said the Transportation Security Administration wrote its own policy to obtain contractor self-assessments. Smith said TSA has found them valuable.
Giddens added that at VA they would do program reviews examining key performance indicators.
"If there isn't an openness on the government side to accept these reviews, then they will not be of any use," he said. "We have to start changing the culture that says 'well, because my process doesn't have a self-assessment, I can't take a look at it.' It has to be a culture of reducing that friction and improving communication so in the end we maximize the opportunity for industry to deliver on the mission."
The need for culture change isn't just a talking point. Susskind said almost 75% of survey respondents said they experienced resistance from agency customers to doing self-assessments.
The survey continues to show what we already know: Agencies will continue to make past performance an evaluation factor, thus making CPARs still a relevant and important database.
The respondents to the survey believe better CPARs data with a focus on the narrative would help inform agency source selection committees to make better decisions.
Giddens said a robust CPARs system could give agencies and companies important data to understand the health and performance of their business and mission efforts.
“There is no doubt changes to CPARs is coming,” Susskind said. “We see that both from what we’ve heard from government and industry.”
The question remains: What will those changes look like, and how long will they take to come about?