Achieving authority to operate and risk management

Within the federal government, one of the biggest challenges in application development is the requirement for authority to operate (ATO). In this context, systems must be approved against a set of risk controls by an authorizing official (AO) or an information assurance (IA) official before they can be used in an operational context.

In a DevOps flow, the goal is a continuous ATO; in essence, DevOps becomes an enabler for continuous ATO. Achieving this means providing continuous assurance against a risk framework.

Systems undergo rigorous testing and hardening against a control standard (such as NIST SP 800-53) to minimize exposure to vulnerabilities. Manually verifying the controls does not permit an easy transformation to DevOps because of their sheer number; NIST SP 800-53, for example, contains over 1,000 controls. While this approach is useful from a security and controls perspective, it disrupts the lightweight flows of DevOps, where parts of a system may change regularly.

What we need is an approach that is both lightweight and able to address the risk concerns of the business.

From a business perspective, there is a clear need to manage risk. Looking at the NIST Risk Management Framework (RMF; NIST SP 800-37 and the SP 800-53 control catalog), several aspects relate to controls for risk management (a brief code sketch follows the list):

  • Selection of an initial security control baseline and additional tailoring based on an appropriate risk assessment
  • Implementation of the security controls within the system and its environment
  • Assessment of the controls to verify they are implemented correctly and operating as intended
  • Monitoring of the security controls on an ongoing basis
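
To make these aspects concrete, here is a minimal Python sketch of the first two, baseline selection and tailoring. It is not official NIST tooling: the baseline contents and tailoring rules are illustrative assumptions, though the control identifiers follow the NIST SP 800-53 naming style.

```python
# Illustrative sketch: select an initial control baseline, then tailor it
# based on a simple risk assessment. Baseline and rules are assumptions.

MODERATE_BASELINE = {"AC-2", "AU-6", "CM-3", "RA-5", "SC-7", "SI-2"}

# Hypothetical tailoring rules keyed by assessed risk factors.
TAILORING_RULES = {
    "public_facing": {"add": {"SC-5"}, "remove": set()},        # e.g., add DoS protection
    "isolated_enclave": {"add": set(), "remove": {"SC-7"}},     # example only
}

def tailor_baseline(baseline, risk_factors):
    """Select an initial baseline, then tailor it per the assessed risk factors."""
    controls = set(baseline)
    for factor in risk_factors:
        rule = TAILORING_RULES.get(factor, {"add": set(), "remove": set()})
        controls |= rule["add"]
        controls -= rule["remove"]
    return controls

print(sorted(tailor_baseline(MODERATE_BASELINE, ["public_facing"])))
# ['AC-2', 'AU-6', 'CM-3', 'RA-5', 'SC-5', 'SC-7', 'SI-2']
```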

Using ATO-compliant modules to manage risk

We need to manage business risk for systems while adopting DevOps. The challenge is that the traditional definition of a “system” is too high-level when it comes to ATO. We need to break the system into smaller modules and then assess each module against selected risk controls. Modules can include any combination of application, infrastructure and configuration artifacts.

In the end, we can construct a larger ATO-compliant system from smaller ATO-compliant modules. We can then automate the risk management process for ATO.
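
As an illustration of what this might look like in code, the following sketch models modules and systems as simple data structures. The module names, artifact paths and control identifiers are hypothetical; the point is only that system-level compliance can be derived from module-level compliance.

```python
from dataclasses import dataclass, field

@dataclass
class Module:
    name: str
    artifacts: list               # application code, infrastructure templates, config files
    controls: set                 # risk controls this module is assessed against
    ato_compliant: bool = False   # flipped to True once assessment passes

@dataclass
class System:
    name: str
    modules: list = field(default_factory=list)

    def is_ato_compliant(self):
        # A system inherits compliance from the modules it is composed of.
        return all(m.ato_compliant for m in self.modules)

# Hypothetical modules combining application, infrastructure and configuration artifacts.
api = Module("payments-api", ["src/api/"], {"AC-2", "SI-2"}, ato_compliant=True)
vpc = Module("network-config", ["terraform/vpc.tf"], {"SC-7"}, ato_compliant=True)

print(System("payments", [api, vpc]).is_ato_compliant())   # True
```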

Right now, every system must repeatedly undergo the same battery of tests to demonstrate risk management. By focusing on individual modules, we reduce the amount of testing and can monitor those modules for compliance continuously.

We have identified a few key concepts around this approach:

  1. Break into application modules

What we need to do is break down the system into modules. Each module can then be tested for risk. Over time, only those modules that have changed will need to be re-tested. This greatly reduces the amount of time it takes to achieve risk mitigation.
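
One lightweight way to determine which modules have changed is to fingerprint each module's artifacts and compare against the fingerprint recorded at its last approval. The sketch below assumes each module lives in its own directory; the paths and names are hypothetical.

```python
import hashlib
from pathlib import Path

def module_fingerprint(artifact_dir):
    """Hash every file in a module's artifact directory, in a stable order."""
    digest = hashlib.sha256()
    for path in sorted(Path(artifact_dir).rglob("*")):
        if path.is_file():
            digest.update(path.relative_to(artifact_dir).as_posix().encode())
            digest.update(path.read_bytes())
    return digest.hexdigest()

def modules_needing_retest(module_dirs, approved_fingerprints):
    """Return only the modules whose artifacts no longer match their approved state."""
    return [
        name for name, artifact_dir in module_dirs.items()
        if module_fingerprint(artifact_dir) != approved_fingerprints.get(name)
    ]

# Example usage (hypothetical layout): only drifted modules get queued for re-testing.
# changed = modules_needing_retest(
#     {"payments-api": "src/api", "network-config": "terraform"},
#     {"payments-api": "ab12...", "network-config": "cd34..."},
# )
```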

  2. Achieve greater agility

If the controls for ATO ever change, there needs to be an easy way to invalidate system modules that were previously approved — including those in production. This method should enable a prioritized, risk-based development and testing approach to remediation.
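
A sketch of what that invalidation might look like: each module records the control baseline it was approved against, and a baseline change marks it stale and queues it for re-testing in business-risk order. The baseline labels and risk ranks are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class ModuleApproval:
    module: str
    approved_baseline: str   # e.g., the 800-53 revision used at approval time
    risk_rank: int           # 1 = highest business risk

CURRENT_BASELINE = "800-53r5"   # assumed: the authorizing official adopted a new revision

def retest_queue(approvals):
    """Invalidate stale approvals and order re-testing by business risk."""
    stale = [a for a in approvals if a.approved_baseline != CURRENT_BASELINE]
    return [a.module for a in sorted(stale, key=lambda a: a.risk_rank)]

approvals = [
    ModuleApproval("auth-service", "800-53r4", risk_rank=1),
    ModuleApproval("static-site", "800-53r4", risk_rank=3),
    ModuleApproval("audit-logger", "800-53r5", risk_rank=2),
]
print(retest_queue(approvals))   # ['auth-service', 'static-site']
```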

  3. Build toward a system portfolio of modules

Based on ATO controls, it should be easy to determine module rationalization activities such as modernization, retirement or acceptance. The decision for each of these dispositions is based on a risk assessment weighed against business priorities.
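
As a rough illustration, a disposition decision could weigh each module's assessed risk against its business value. The thresholds, scores and categories below are illustrative assumptions, not policy.

```python
def disposition(risk_score, business_value):
    """Both inputs are normalized to 0..1; thresholds are illustrative."""
    if risk_score >= 0.7:
        return "modernize" if business_value >= 0.5 else "retire"
    return "accept"

# Hypothetical portfolio: module -> (assessed risk, business value)
portfolio = {
    "legacy-reporting": (0.80, 0.30),
    "payments-api": (0.75, 0.90),
    "docs-site": (0.20, 0.40),
}

for module, (risk, value) in portfolio.items():
    print(f"{module}: {disposition(risk, value)}")
# legacy-reporting: retire, payments-api: modernize, docs-site: accept
```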

  4. Achieve continuous monitoring

By testing each module separately, you can leverage continuous integration to generate a report of the level of risk for each module and confirm it satisfies a particular risk threshold. If any module changes, you can run risk-based security testing against that single module, greatly reducing the regression test suite compared with re-testing the entire system.
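
A continuous integration job could run a per-module risk gate along these lines: score each module from its automated control test results and fail the build when any module exceeds the agreed threshold. The scoring formula, threshold and test results here are illustrative assumptions.

```python
import sys

RISK_THRESHOLD = 0.2   # assumed: maximum acceptable fraction of failing controls

def module_risk(results):
    """Fraction of a module's controls whose automated tests failed."""
    return sum(not passed for passed in results.values()) / max(len(results), 1)

def gate(report):
    """Print per-module risk and return a nonzero exit code if any module breaches the threshold."""
    for module, results in report.items():
        risk = module_risk(results)
        status = "OK" if risk <= RISK_THRESHOLD else "FAIL"
        print(f"{module}: risk={risk:.2f} {status}")
    return 0 if all(module_risk(r) <= RISK_THRESHOLD for r in report.values()) else 1

if __name__ == "__main__":
    # Hypothetical control test results fed in by the CI pipeline.
    report = {
        "auth-service": {"AC-2": True, "SI-2": True, "AU-6": False},
        "network-config": {"SC-7": True},
    }
    sys.exit(gate(report))
```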

  5. Report against ATO metrics

Once you achieve continuous risk management, roll up the metrics for each system and compare them against historic metrics. This integrates your lightweight approach with higher-level risk management.
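
Rolling module results up to system-level metrics can be as simple as aggregating per-module risk scores and comparing them with the previous reporting period, as in this sketch (all module names and numbers are made up).

```python
def system_metrics(module_risks):
    """Aggregate per-module risk scores into system-level ATO metrics."""
    scores = list(module_risks.values())
    return {
        "max_module_risk": max(scores),
        "mean_module_risk": sum(scores) / len(scores),
    }

current = system_metrics({"auth-service": 0.33, "network-config": 0.0})
previous = {"max_module_risk": 0.50, "mean_module_risk": 0.25}   # prior reporting period

for metric, value in current.items():
    delta = value - previous[metric]
    print(f"{metric}: {value:.2f} ({delta:+.2f} vs last period)")
```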

In the end, it is important to recognize the value of a risk-mitigating approach like ATO. To integrate this type of approach into DevOps, we need to de-layer the system abstraction by volatility (typically at the module level). This helps drive sound decision making about which modules should be used to create systems and about the impact of any change in business risk requirements.

The goal of DevOps is to provide risk assurance back to the business, not to re-engineer how the business conducts risk management. Many times, a technical team acts on the urge to “do it better” without realizing the size of what it is asking. Instead, consider how you might work within the constraints of a business risk framework and still achieve the benefits of DevOps. This is a foundational element of moving from DevOps to DevSecOps. Security controls can then be mapped to risk, and that mapping will differ for each organization based on its risk tolerance.

 Altaz Valani is the research director at Security Compass.

 Hasan Yasar is the technical manager of the secure lifecycle solutions group in the CERT Division of the Software Engineering Institute at Carnegie Mellon University.
