If there was ever a case to be made for why agencies and organizations invest in cybersecurity protections, look no further than the recent WannaCry ransomware attack.
The federal government came away unscathed by the malware, which hit more than 150 countries and infected more than 300,000 computers worldwide.
Why did this nasty virus not infect federal computers?
The reason can be traced, in large part, back to 2014. When the Heartbleed bug, a vulnerability in the OpenSSL cryptographic software library, hit the internet, the Homeland Security Department had to scramble to make sure agencies fixed the code.
Former DHS Deputy Undersecretary for Cybersecurity Phyllis Schneck told the Senate Appropriations Committee in May 2014 that the process took several days longer than it needed to because DHS didn’t have clear legal authority to scan other agencies’ networks, even though it had the technical ability to do so.
“So as fast as we could, we went door-to-door and got a letter of authorization from each agency, working with each lawyer, to make sure that we could scan their systems. That cost us five-to-six precious days in some cases,” Schneck said three years ago. “The whole world knew about this vulnerability and all the information they could capture, while we were lawyering. If we had the clarification in law that this was our role, we would have gotten started a lot faster.”
Fast forward to 2017: DHS began receiving reports about the WannaCry attack from Europe on the afternoon of May 12.
Jeanette Manfra, the acting DHS undersecretary for cybersecurity, said May 16 during a press briefing that because of the investments and efforts over the past three years, agencies knew immediately how WannaCry could impact them.
The Office of Management and Budget and DHS initiated a call on Friday afternoon and told agencies to review their hardware and software asset inventories and apply any missing patches immediately.
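The "review your inventories, apply any missing patches" step above amounts to reconciling an asset inventory against a list of required fixes. A minimal sketch of that reconciliation follows; the host names, data layout and KB identifier are hypothetical illustrations, not DHS or OMB tooling (KB4012212 is one of the Microsoft updates associated with the MS17-010 fix WannaCry exploited, but agencies' actual patch lists would differ).

```python
# Hypothetical sketch: reconcile a hardware/software inventory against
# a set of required patches. Hosts, layout and KB numbers are illustrative.

REQUIRED_PATCHES = {"KB4012212"}  # e.g., an MS17-010-related fix

inventory = {
    "host-a": {"KB4012212", "KB3210720"},
    "host-b": {"KB3210720"},  # missing the critical patch
    "host-c": set(),          # no patch data reported at all
}

def find_unpatched(inventory, required):
    """Return, per host, the sorted list of required patches it lacks."""
    return {host: sorted(required - installed)
            for host, installed in inventory.items()
            if required - installed}

missing = find_unpatched(inventory, REQUIRED_PATCHES)
for host, kbs in sorted(missing.items()):
    print(f"{host}: missing {', '.join(kbs)}")
```

The useful property here is that hosts with no reported data ("host-c") surface as unpatched rather than silently passing, which mirrors why a complete inventory mattered so much in the response described below.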
A long-time federal cybersecurity executive, who spoke on the condition of anonymity because they didn’t obtain permission to talk about the cyber attack, said there was an immediate buzz across their agency when the WannaCry attack started. The official said the way agencies reacted made clear that executives across government, not just the IT leaders, understood the danger of the attack.
Manfra said the dozen or so agencies that implemented the continuous diagnostics and mitigation (CDM) dashboard saw exactly where they needed to patch.
The other agencies, because of the CDM program, had a more complete inventory of their hardware devices and software tools than they did in 2014, and could begin the patching process much more quickly.
“Over the last couple of years in particular, the federal government had been very aggressive on some of the basics, such as patching vulnerabilities, which this was one of them. We have built-in a lot of blocking mechanisms,” Manfra said. “As we have really significantly upped our game in terms of doing some of these basics like patching critical vulnerabilities, one of our key efforts has been reducing the amount of time it takes to patch a critical vulnerability to under 30 days.”
From a governmentwide perspective, DHS deconstructed the code of the WannaCry attack and added its signature to the EINSTEIN 3A tool on Friday. Because every large agency had implemented E3A, the tool began blocking the attack vector within hours instead of days or longer.
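The signature-based blocking E3A performs can be illustrated, very loosely, as matching traffic against known-bad byte patterns. The sketch below is an invented toy: the signature name, indicator bytes and payloads have nothing to do with the real WannaCry indicators or EINSTEIN internals.

```python
# Toy illustration of signature-based blocking: scan a payload for
# known-bad byte patterns. All signatures and traffic are invented.

SIGNATURES = {
    "demo-worm": b"\xde\xad\xbe\xef",  # hypothetical indicator bytes
}

def match_signatures(payload: bytes, signatures=SIGNATURES):
    """Return the names of signatures whose pattern appears in the payload."""
    return [name for name, pattern in signatures.items() if pattern in payload]

def should_block(payload: bytes) -> bool:
    """Block if any signature matches."""
    return bool(match_signatures(payload))

print(should_block(b"hello \xde\xad\xbe\xef world"))  # matches: block
print(should_block(b"benign traffic"))                # no match: allow
```

The point the article makes is operational, not algorithmic: once a new signature is pushed to a sensor every agency already runs, blocking starts in hours, whereas in 2014 each agency first had to be reached and authorized individually.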
DHS finished implementing the EINSTEIN 3A tools across more than 90 percent of all agencies, and in 2014 it received the authority from OMB to regularly conduct proactive scans of certain civilian agency networks.
“Over the years, in dealing with a vulnerability, an attack or a threat, we have learned that, much like emergency management and other missions, we need protocols operators can follow, particularly when we identify something that is escalating,” she said. “Even if something is not technically significant, the scale, or knowing where it is coming from, shapes how we assess it and how everybody shares information and takes action. We have a protocol for getting all the CIOs in the government on the phone, sharing information and providing guidance; that was activated. We have something called enhanced coordination procedures, the next step of escalation; that was also activated. We did a lot of information sharing in the government, and we engaged with all of the Information Sharing and Analysis Centers (ISACs), all of our information sharing partners and all of the internet watch and warning networks.”
Trevor Rudolph, the chief operating officer for Whitehawk and former chief of OMB’s Cyber and National Security Division, said the incident response infrastructure in 2014 wasn’t very useful.
“The conversations in 2014 were immature, sometimes chaotic and bordered on being just non-productive at times, but at the end of the day, they were necessary,” he said. “It has been going through an evolution over the past three years.”
At the same time, the Cyber Threat Intelligence Integration Center (CTIIC) ramped up its efforts in the classified arena.
The federal cyber official said the reliance on the CTIIC by White House senior officials and other agencies was a key part of the effort to keep federal networks unscathed.
“The CTIIC, DHS and the FBI continue to be very active in keeping agencies and the White House up-to-date,” the official said. “The big difference today versus other attacks is we have a more mature DHS, and we have CTIIC. Those are voices that can be helpful, and they give the government a more objective and informed perspective.”
Margie Graves, the acting federal CIO, said at the May 17 VMware Innovation Summit in Arlington, Virginia, that the WannaCry attack showed how the work done under the 2015 cyber sprint paid off.
She said the cyber sprint’s requirement for agencies to be able to scan their technology environment and report back within 24 hours if there were any vulnerable systems was based on a warning from DHS.
“How would that have helped you last weekend to know that vulnerability existed in advance? I tell you it did help the federal government. To date, I have not heard of a federal victim of this particular incident, which is very, very heartening,” she said. “We looked at our assets. We got vulnerabilities out of our network. It’s not that something else can’t happen because there are always zero-day attacks, but we started to march down this pathway and it’s starting to show results. Some things are starting to come to fruition.”
Graves said agencies are never done with cybersecurity, but the fact is the government is more informed by intelligence about what threats are coming over the horizon.
“We picked the things in the cyber sprint for a reason, because they were primary threat vectors and we needed to fix them,” she said.
Rudolph said the cyber sprint and the lessons from the Heartbleed bug were important, but what may have been pivotal in the government’s success against WannaCry was a decision made in 2012 by former federal CIO Steve VanRoekel.
Rudolph said VanRoekel, through the PortfolioStat process, pushed agencies to get off of Windows XP, which was one of the operating systems that the WannaCry malware exploited.
“All the data we were getting from DHS showed that if something happened with XP, it could take down an entire agency,” Rudolph said. “Steve was forward leaning to use PortfolioStat to solve the XP problems before the ‘big one’ happened.”
Rudolph added another big factor over the last three years is the maturation of the tools and processes to collect and analyze network data.
“It’s good to know the cyber sprint and other work we did over the past few years is paying off. We now have to get to the next level by rolling out CDM as quickly as possible: getting the dashboards turned on in more than just a few pockets, and sending data to the federal-wide dashboard so we can act on threats and vulnerabilities in real time. I’m glad to see the Trump administration’s new cyber executive order is building on a lot of the themes from the previous two administrations. We have set up a foundation to make the government more secure and more resilient, and that is paying dividends today and will down the road,” he said.