Insider threats: Taking a holistic approach to protecting agency data

In part 1 of his commentary, Tom McMurtrie, a research fellow with the Army’s Training with Industry Program, offers details on the current state of agency efforts to counter insider threats.

Insider threats are not new; they have plagued the country throughout its history. Since Benedict Arnold’s betrayal in 1780, insider threats have endured as a challenge for government. The seriousness of those threats motivates ongoing efforts to implement systems and processes that inhibit their effects.

Defining an insider threat

The National Insider Threat Policy defines an “insider threat” as a person or persons who threaten U.S. national security by misusing or betraying, wittingly or unwittingly, their authorized access to any U.S. government resource. The policy states that insider threats include those seeking to do “damage through espionage, terrorism, unauthorized disclosure of national security information, or through the loss or degradation of department resources or capabilities.” The definition covers violent actors, exemplified by the 2013 Navy Yard shooter, as well as non-violent actors such as Chelsea Manning. The policy also defines the general responsibilities of departments and agencies and “leverages existing federal laws, statutes, authorities, policies, programs, systems, architectures and resources” to counter the insider threat.

Though the term is commonly used in association with government organizations, commercial entities are in no way exempt from the threat. In its 2015 report entitled Grand Theft Data, the computer security company McAfee reported that 43 percent of all serious data breaches at the 1,155 companies interviewed resulted from internal actors. For this reason, the commercial sector also has a strong interest in developing systems and technologies to protect its proprietary information and facilities from insider threats.

The sections that follow examine the current IT-based approach to addressing insider threats and the recommended whole-person risk-rating approach to detecting and preventing them.

Agencies should start developing their insider threat risk reduction plan by considering four questions adapted from a cyber intelligence perspective:

  • What is an employee’s role within the organization?
  • What information does that employee have access to?
  • How can that employee’s access negatively affect the organization?
  • What actions should be taken in the event the employee is a potential or suspected insider threat?

By working through these questions, organizations can identify their security priorities and the employees (or employee roles) whose access could most endanger their security. Importantly, the assessment enables the organization to develop strategies for preventing or mitigating an insider threat based on these hypothetical scenarios.
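
As a rough illustration, the answers to these four questions can be captured in a simple per-role record that the plan keeps for each employee role. The sketch below is hypothetical Python; the field names, example values and the priority ranking are assumptions made for illustration, not part of any prescribed plan format.

# Hypothetical sketch: capture the four planning questions as one record per role
# so priorities and response actions can be reviewed consistently.
from dataclasses import dataclass, field

@dataclass
class RoleRiskProfile:
    role: str                                                  # What is the employee's role?
    accessible_resources: list = field(default_factory=list)  # What information can they access?
    potential_impact: str = ""                                 # How could that access harm the organization?
    response_actions: list = field(default_factory=list)      # What to do if this role is a suspected threat?

plan = [
    RoleRiskProfile(
        role="database administrator",
        accessible_resources=["personnel records", "production databases"],
        potential_impact="bulk exfiltration of personally identifiable information",
        response_actions=["suspend privileged credentials", "notify the security office"],
    ),
]

# One simple way to surface security priorities: rank roles by how many
# sensitive resources their access reaches.
priorities = sorted(plan, key=lambda p: len(p.accessible_resources), reverse=True)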

Five IT-related strategies currently used to mitigate insider threats

The effects of recent insider threat attacks, such as the leaks by Chelsea Manning in 2010 and Edward Snowden in 2013, illustrated the serious damage that insiders can inflict using their access to privileged information. Because of these leaks, many current insider threat programs employ the following five IT-related capabilities:

User activity monitoring: Observe and record the actions and activities of individuals accessing U.S. government information in order to detect insider threats and support authorized investigations. Such monitoring typically covers electronic communications from a government employee’s computer or other electronic devices.

Data loss prevention: Control how users interact with data and what they can do with it. Approaches include techniques to prohibit data from being printed, emailed or copied to removable media, limiting the type and quantity of data an insider threat could distribute.
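
As a rough sketch of the underlying idea, a data loss prevention control can be viewed as a policy table that maps data sensitivity labels to blocked distribution channels. The labels, channels and policy below are assumptions made for illustration, not any product’s actual rule syntax.

# Illustrative data-loss-prevention style check: decide whether a requested
# action on labeled data should be blocked. Labels and channels are assumed.
BLOCKED_CHANNELS_BY_LABEL = {
    "sensitive": {"removable_media"},
    "classified": {"removable_media", "email", "print"},
}

def allow_action(label, channel):
    """Return True if sending data with this label over this channel is permitted."""
    return channel not in BLOCKED_CHANNELS_BY_LABEL.get(label, set())

print(allow_action("classified", "email"))   # False: blocked by policy
print(allow_action("sensitive", "print"))    # True: printing sensitive data is allowed here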

Security information and event management (SIEM) and security operations center: SIEM tools enable the gathering, analysis and presentation of information from network and security devices. An insider threat program can leverage SIEM tools to correlate network logs and provide real-time, incident-based alerts to analysts in the security operations center when a suspicious pattern is discovered. The center then conducts an assessment and determines what actions to take, based on the insider threat risk reduction plan.
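
The correlation logic behind such alerting can be sketched simply: scan time-ordered, normalized log events per user and raise an alert when a suspicious pattern appears. The event fields, the pattern and the thresholds below are assumptions chosen for illustration and do not reflect any particular SIEM product’s rule language.

# Minimal sketch of SIEM-style correlation: flag a user who accumulates repeated
# denied file-share accesses and then makes a large outbound transfer.
from collections import defaultdict

FAILED_ACCESS_THRESHOLD = 5
LARGE_TRANSFER_BYTES = 500_000_000

def correlate(events):
    """events: time-ordered dicts such as {'user': ..., 'type': ..., 'bytes': ...}."""
    failed = defaultdict(int)
    alerts = []
    for e in events:
        if e["type"] == "file_access_denied":
            failed[e["user"]] += 1
        elif e["type"] == "outbound_transfer" and e.get("bytes", 0) >= LARGE_TRANSFER_BYTES:
            if failed[e["user"]] >= FAILED_ACCESS_THRESHOLD:
                alerts.append(f"ALERT: {e['user']} had {failed[e['user']]} denied accesses, "
                              f"then transferred {e['bytes']} bytes")
    return alerts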

Analytic techniques: Leverage advanced data mining, machine learning and statistical capabilities to identify anomalous network activity, including correlating identities and authentication/authorization levels with risk levels.
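
A minimal statistical example of this idea is flagging a user whose daily data volume deviates sharply from that user’s own historical baseline; real programs use far richer features and models, and the threshold below is an assumption made for illustration.

# Simple anomaly check: compare today's data volume to the user's own history
# using a z-score; anything far above the baseline is flagged.
from statistics import mean, stdev

def is_anomalous(history, today, threshold=3.0):
    """history: past daily byte counts for one user; today: today's count."""
    if len(history) < 2:
        return False                              # not enough baseline to judge
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return today != mu
    return (today - mu) / sigma > threshold       # only unusually high volume is flagged

print(is_anomalous([120, 95, 130, 110, 105], 2_000))   # True: sharp spike above baseline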

Digital forensics tools: Use IT tools and techniques to investigate digital artifacts on a system or device following suspected insider threat activity.

While these capabilities are a necessary part of any insider threat program, they are insufficient for a comprehensive program because they are reactive in their response to potential threats. Their focus on internal information (network logs and files accessed, for example) limits the organization’s ability to proactively prevent potential insider threats. Moreover, these capabilities fall almost entirely within the organization’s IT administration, meaning they are only useful in detecting IT-related insider threats. These techniques do not identify employees on the verge of committing physical violence. And because these capabilities are reactive, they do not provide management with the knowledge required to intervene before an insider threat acts. In short, these capabilities do not consider all the aspects relevant to an employee’s risk as an insider threat.

The whole-person strategy to mitigate insider threats

In a 2015 report entitled Analytic Approaches to Detect Insider Threats, Carnegie Mellon University’s Software Engineering Institute (SEI) presented an analytic framework for the continual, “whole-person concept” evaluation of employees to detect and prevent insider threats. The framework captures all the capabilities described above, but adds significantly to the list and requires the integration of internal and external information to proactively prevent insider threats. The SEI report decomposes the analytic requirements necessary to detect insider threats into three categories: activity-based analytics, content-based analytics and inferential analytics.

Activity-based analytics: Activity-based analytics use content and event-based information derived from network sources to understand user activity. They are decomposed into three sub-categories:

  • System: Examining the changes or trends in IT asset behavior, data or access patterns.
  • Facility: Analyzing changes in the time or locality of an employee’s physical access patterns.
  • Business capabilities: Analyzing business or mission capabilities, either internally for changes and failures, or externally for leaks or capability duplication.

Content-based analytics: The SEI report describes content-based analytics as using content captured from network components and applications to examine user characteristics. Content-based analytics are decomposed into three sub-categories:

  • Social: Analyze social interactions and communications on social networks.
  • Health: Analyze network activity and content to derive potential indicators of mental health issues.
  • Human resources: Analyze network activity for indicators of external life events or complaints against the organization.

Inferential analytics: Finally, the SEI report describes inferential analytics as using network content to refine the understanding of user behavior in light of other information sources. In short, inferential analytics compare the employee’s current status to his or her historical patterns. Inferential analytics are also decomposed into three sub-categories:

  • Financial: Analyze network activity to identify indicators of unexpected changes in wealth or affluence.
  • Security: Analyze network activity for indicators of security violations.
  • Criminal: Analyze network sources for court or criminal activity.

By conducting analysis along these three categories, an organization can develop a regularly updated, whole-person risk-rating score for its employees, akin to a credit score. By setting alerts for high-risk employees, or for employees with large changes in their scores, organizations can take proactive steps to deter or prevent insider threats from hurting the organization. A Defense Personnel and Security Research Center initiative is piloting this type of system to determine the utility of continually monitoring personnel with security clearances “through use of additional public records and appropriate social media data.”
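
A hypothetical sketch of how such a score might be rolled up: weight the results from the three analytic categories into a single rating and flag employees whose rating is high or has jumped since the last review. The weights, the 0-100 scale and the thresholds below are assumptions made for illustration; the SEI report does not prescribe a specific formula.

# Hypothetical whole-person risk score: a weighted blend of per-category scores,
# with an alert on high or sharply rising ratings.
CATEGORY_WEIGHTS = {"activity": 0.4, "content": 0.3, "inferential": 0.3}

def risk_score(category_scores):
    """category_scores: dict of 0-100 scores keyed by analytic category."""
    return sum(CATEGORY_WEIGHTS[c] * category_scores.get(c, 0) for c in CATEGORY_WEIGHTS)

def needs_review(previous_score, current_score, high=75, jump=20):
    """Flag an employee whose score is high or has risen sharply since the last review."""
    return current_score >= high or (current_score - previous_score) >= jump

current = risk_score({"activity": 80, "content": 55, "inferential": 60})  # 66.5
print(current, needs_review(previous_score=40, current_score=current))   # 66.5 True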

A similar methodology of combining automated records checks and public data into a scoring system has been identified as a solution for reducing the backlog of more than 700,000 applicants awaiting security clearance adjudications.

Disclaimer: The ideas and opinions presented in this paper are those of the author and do not represent an official statement by the U.S. Department of Defense, U.S. Army, or other government entity.
Major Tom McMurtrie is an operations research/systems analyst in the U.S. Army currently serving as a research fellow in the Army’s Training with Industry (TWI) Program.

