Securing applications following a cyberattack

Amid the recent spate of high-profile cyberattacks, government agencies and private-sector organizations alike are scrambling to fend off unauthorized intrusions that can compromise digital assets, disrupt operations and derail attainment of critical missions. Yet prevention by itself is not enough because the concept of unbreachable security exists only in theory. Despite organizations’ best efforts to stop cyberattacks, breaches will occur. We can, however, limit the frequency and severity of attacks.

Preparing for a cyberattack must include, in addition to prevention strategies, measures to limit damage in the event of a breach, and to quickly restore capabilities of (and users’ confidence in) systems compromised in an attack.

Preparation for responding to a cyberattack must happen before the attack occurs. Traditional forensic incident responders are typically not familiar with application development systems and processes. They will look for and remove malware installed by intruders, yet they are less equipped to find and remediate other kinds of attacker impact, such as tampering with the machinery of application development itself: tooling configurations, binary and source code repositories, the code signing process and the deployed or shipped binaries themselves. The President’s Executive Order on Improving the Nation’s Cybersecurity acknowledges that government organizations must be prepared to deal with such threats.

Software security should be a critical focus of cybersecurity. Indeed, the zero trust (ZT) security architecture mandated by the federal government recognizes application security within the workload pillar security requirements. Effective plans for protecting organizations’ applications from attackers must encompass the entire software development lifecycle (SDLC), from application planning, building and distribution to version upgrades and decommissioning.

It’s easier to secure software during the development process than it is to remediate unknown flaws and vulnerabilities introduced later, whether the threat arrives via opaque open-source code or via compromise of application development tools during an attack. Being cyber secure is possible only when the entire software development cycle is secure, including post-build distribution.

Responsible stewardship of IT assets requires taking action to limit the blast radius of an attack before it happens by:

  • Storing the signatures of binaries in order to know if attackers have tampered with them.
  • Conducting anomaly testing to find irregularities in changes to code.
  • Limiting access to the tool chain used by developers.
  • Providing each development team with a separate access control for all GitHub repos and removing access for team members who leave.
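The first bullet can be sketched with a plain digest manifest: record a hash of every shipped binary, then recompute and compare later. This is a minimal illustration, not a substitute for cryptographic code signing; the function names are my own, and in practice the manifest itself must be kept out of the attacker’s reach (offline or signed).

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file, streaming so large binaries fit in memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def build_manifest(binary_dir: Path) -> dict:
    """Record a digest for every shipped binary so later tampering is detectable."""
    return {p.name: sha256_of(p) for p in sorted(binary_dir.iterdir()) if p.is_file()}

def find_tampered(binary_dir: Path, manifest: dict) -> list:
    """Return the names of binaries whose current digest no longer matches the manifest."""
    return [name for name, digest in manifest.items()
            if sha256_of(binary_dir / name) != digest]
```

Run after a suspected breach, `find_tampered` flags exactly the binaries an intruder has altered since the manifest was taken.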

The recent attack on a continuous integration and continuous delivery (CI/CD) platform illustrates the importance of containment strategies. In what’s known as a developer tool attack, attackers infected an employee’s laptop over the employee's home network with a keylogger that the company’s antimalware controls failed to detect, then used the stolen credentials to reach the assets of the company’s customers. You never want to be in a position where the breach of a single control results in a “game over” scenario. That is sometimes easier said than done: in this case the attackers first had to exploit a vulnerability in a media server on the employee’s home network. An employee’s home network is beyond the enterprise’s control and must be treated as a hostile network.

This threat may be closer than you think. If an organization develops critical software that’s widely used, there is tremendous value to an attacker who can install a backdoor into the software distribution — a classic supply chain attack. In the case of the 2020 SolarWinds data breach that affected users of the company’s business software — including U.S. agencies — Russian hackers put a backdoor into the application, inserting a binary during post-development distribution. More recently, in March 2023, the VoIP company 3CX had its binary distribution compromised, sending malware to its customers through automated and manual updates. The suspected perpetrator is a state-sponsored cybercrime group. Sophisticated insertions are difficult to detect, and there is no shortage of sophisticated actors across the globe.

During the build phase of applications, it’s important to use robust, trusted tools; to carefully manage access to software under development; and to build security into software (and the development process itself) from the outset, a practice known as “shifting left.” In a bygone era, developers would frequently bolt security onto applications at the end of the development process. In the current threat environment, it’s advisable for development and security experts to lock arms and develop software in partnership.

Having taken pains to thwart cyberattacks seeking to penetrate networks, IT leaders must be realistic and prepare for failure. They must take action to limit damage in the aftermath of a breach:

  • Scan everything to ensure all endpoints are free of malware.
  • Roll passwords and keys so that they’re all new.
  • Validate all code coming out of repos and into applications.
  • Validate all containers coming out of binary repos.
  • Secure the tool chain.
  • Limit developer access to the minimum required and remove accounts when employees leave.
  • Resist the temptation to cut corners. Err on the side of caution.
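One concrete check behind “validate all containers coming out of binary repos” is to insist that image references are pinned by immutable content digest rather than by a mutable tag such as `:latest`, which an attacker who controls the registry can silently repoint. A minimal sketch, with illustrative function names:

```python
import re

# A digest-pinned reference looks like registry/repo@sha256:<64 hex chars>.
DIGEST_RE = re.compile(r"^[\w.\-/:]+@sha256:[0-9a-f]{64}$")

def is_digest_pinned(image_ref: str) -> bool:
    """True when the image reference is pinned by content digest, not a mutable tag."""
    return bool(DIGEST_RE.match(image_ref))

def audit_images(refs):
    """Return references that rely on mutable tags and so can't be trusted after a breach."""
    return [r for r in refs if not is_digest_pinned(r)]
```

Running `audit_images` over a deployment manifest surfaces every image an attacker could swap out without changing a single line of your configuration.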

The insidiousness of compromised development tools is a well-known problem. Forty years ago, when accepting the Turing Award for contributions to computer science, Ken Thompson gave a speech, “Reflections on Trusting Trust,” in which he showed how to embed an invisible backdoor into a compiler. The compromised compiler could then install backdoors in every application that passed through the development pipeline. Once a hidden backdoor is installed via the Thompson hack or similar methods, no software built by an organization using the compromised compiler can be trusted.

Once a compiler has been “backdoored,” inspecting the compiler source code and even compiling the compiler from scratch won’t restore trust if the binary you’re compiling it with can’t be trusted. In that scenario, you’ll need binary analysis. This actually happened in the wild in 2009 to the Delphi compiler.
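A practical counter to the trusting-trust attack is David A. Wheeler’s diverse double-compiling: build the suspect compiler’s source with two independent trusted compilers, then use each resulting binary to compile the same source once more. With a reproducible build, the two stage-2 binaries should be bit-identical. A minimal sketch of the final comparison step (the paths and function names are illustrative):

```python
import hashlib
from pathlib import Path

def digest(artifact: Path) -> str:
    """SHA-256 digest of a build artifact."""
    return hashlib.sha256(artifact.read_bytes()).hexdigest()

def stage2_binaries_match(build_a: Path, build_b: Path) -> bool:
    # Each trusted bootstrap compiler compiled the suspect compiler's source,
    # and the result was used to compile that same source again. Given a
    # reproducible build, the two stage-2 outputs must be bit-identical; any
    # mismatch is evidence of a hidden difference such as a Thompson-style
    # backdoor in one of the toolchains.
    return digest(build_a) == digest(build_b)
```

The comparison itself is trivial; the security comes from the two build chains sharing nothing an attacker could have compromised in common.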

Network breaches can be catastrophic. They have the potential to undermine confidence in application security, the security of application development and overall network security. Preemptively securing applications helps organizations to defend against cyberattacks. Yet factors outside the control of an organization, such as unknown vulnerabilities in third-party software, can nonetheless result in a breach that taints the fidelity of an entire network and its applications.

Remain vigilant. Savvy organizations work hard to stop attacks, maintain contingency plans for limiting the fallout from successful ones, and develop plans for quickly restoring trust in their software and in the pipeline for developing new applications.

Chris Wysopal is the co-founder and CTO of Veracode.

Copyright © 2024 Federal News Network. All rights reserved. This website is not intended for users located within the European Economic Area.
