Insider threats: How advanced analytics can further secure agencies
In part 2 of his commentary, Tom McMurtrie, a research fellow with the Army’s Training with Industry Program, details the challenges with implementing new...
Advanced analytic techniques are key to conducting a whole-person continuous evaluation of an employee, but there are a host of challenges to implementing this type of insider threat system.
The continuous evaluation process for predicting insider threats suggested in Carnegie Mellon’s 2015 Software Engineering Institute report and in other sources—and required by the National Insider Threat Task Force, National Background Investigations Bureau and Department of Defense Insider Threat Management and Analysis Center—demands the application of advanced techniques to achieve the desired whole-person risk rating.
Aside from the network analysis tools described previously, insider threat continuous evaluation programs will require some or all of the following techniques to effectively predict insider threats (as described by an April 2017 Intelligence and National Security Alliance report):
Linguistic analytics: Uses the data a person generates through blogs, tweets, forum posts and email to score that person, relative to a sample population, along a spectrum of cognitive and social characteristics. For insider threat analysis, these scores can be compared against a baseline of the employee’s previous behavior.
Natural Language Processing (NLP): A branch of computational science that allows a computer to interpret human language as it is spoken or written. In the context of insider threat analysis, NLP is necessary to correctly infer what an employee means in a text or email.
Data mining: Tools and techniques that extract information from the myriad social media sites (Twitter, Facebook, Instagram, Tumblr, etc.).
Sentiment analysis: An application of machine learning in which the words a person uses in the months preceding or following a life event are captured and used to build an individual profile, so that future life events and the corresponding stress can be detected from word choice alone, even if the event itself is never mentioned. (A minimal sketch of this kind of text-based baseline comparison follows this list.)
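To make the baseline-comparison idea behind these techniques concrete, here is a minimal Python sketch that scores an employee’s recent messages against their own historical writing using a tiny negative-affect word list. The lexicon, scoring formula and sample messages are hypothetical placeholders for illustration only; an operational system would rely on validated psycholinguistic dictionaries and the trained models the INSA report describes.

```python
import math
import re
from collections import Counter

# Hypothetical lexicon of "negative-affect" words; a real system would use a
# validated psycholinguistic dictionary, not this illustrative list.
NEGATIVE_WORDS = {"angry", "unfair", "ignored", "quit", "hate", "worthless", "betrayed"}

def tokenize(text: str) -> list[str]:
    """Lowercase the text and split it into word tokens."""
    return re.findall(r"[a-z']+", text.lower())

def negative_affect_rate(messages: list[str]) -> float:
    """Fraction of tokens across the messages that come from the negative lexicon."""
    tokens = [t for m in messages for t in tokenize(m)]
    if not tokens:
        return 0.0
    hits = sum(1 for t in tokens if t in NEGATIVE_WORDS)
    return hits / len(tokens)

def drift_score(baseline_msgs: list[str], recent_msgs: list[str]) -> float:
    """
    Compare an employee's recent writing against their own historical baseline.
    Returns a simple ratio: values well above 1.0 mean more negative affect than usual.
    """
    base = negative_affect_rate(baseline_msgs)
    recent = negative_affect_rate(recent_msgs)
    # Small constant avoids division by zero for very clean baselines.
    return (recent + 1e-6) / (base + 1e-6)

# Illustrative usage with made-up messages.
baseline = [
    "Finished the quarterly report on time.",
    "Happy to help with the review.",
    "I was angry the deadline slipped, but we recovered.",
]
recent = [
    "I feel ignored and this process is unfair.",
    "I hate how this team treats me.",
]
print(f"drift score: {drift_score(baseline, recent):.2f}")  # roughly 4x the usual rate
```

The point of the sketch is only the comparison against the individual’s own history; real linguistic and sentiment analytics would use far richer features than a single word list.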
In addition, recent advances in the broader artificial intelligence and cognitive computing fields will likely have implications for insider threat analysis.
Five challenges to automating insider threat evaluations
Though big data analytics and the whole-person concept of continuous evaluation offer the opportunity to identify potential insider threats more quickly and automatically, such evaluations come with several related challenges.
Information access: To conduct whole-person insider threat analysis, the automated system must have regular access to personal information. For example, using the tools described above to analyze an employee’s Facebook postings implies unimpeded access to that employee’s profile. If the organization cannot obtain the required access, it cannot perform this type of analysis.
IT overhead: A related challenge is the information technology overhead required to store the data and apply these advanced analytic techniques. Continually capturing, storing and analyzing the trove of data collected for a single employee is no small task, let alone for tens or hundreds of thousands of employees.
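To give a rough sense of that scale, the back-of-envelope calculation below estimates raw storage for continuous monitoring data. Every figure is a hypothetical assumption chosen only for illustration, not a number from the source.

```python
# Back-of-envelope storage estimate for continuous monitoring data.
# Every figure below is a hypothetical assumption for illustration only.
EMPLOYEES = 100_000
MB_PER_EMPLOYEE_PER_DAY = 5          # emails, chat, logon/logoff and web-proxy logs
RETENTION_DAYS = 3 * 365             # three years of history for baselining

total_tb = EMPLOYEES * MB_PER_EMPLOYEE_PER_DAY * RETENTION_DAYS / 1_000_000
print(f"~{total_tb:,.0f} TB of raw data before indexing or model features")
```

Even under these modest assumptions the raw data runs to hundreds of terabytes, before any analytic processing.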
Baseline analysis: The previous challenge relates directly to the third: the need for a historical baseline of an employee’s behaviors and actions against which to compare current indicators. Without the ability to look for long-term behavioral outliers, such an automated system will be unable to distinguish true risks from the noise generated by an employee’s everyday activities. Training the system will require vast data stores and significant model tuning, tasks that demand substantial IT resources and human expertise.
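The sketch below illustrates the baseline idea in its simplest form: flag days whose activity deviates sharply from an employee’s own history. The activity counts, statistic and threshold are assumptions for illustration only; a production system would use the far richer behavioral models described above.

```python
import statistics

def flag_outlier_days(baseline_counts: list[int],
                      recent_counts: list[int],
                      z_threshold: float = 3.0) -> list[int]:
    """
    Flag indices of recent days whose activity count deviates from the
    employee's own historical baseline by more than z_threshold standard deviations.
    """
    mean = statistics.mean(baseline_counts)
    stdev = statistics.pstdev(baseline_counts) or 1.0  # guard against zero variance
    return [i for i, count in enumerate(recent_counts)
            if abs(count - mean) / stdev > z_threshold]

# Illustrative usage: file-download counts per day (made-up numbers).
history = [12, 9, 14, 11, 10, 13, 12, 8, 11, 10]   # typical behavior over time
last_week = [11, 10, 250, 12, 9]                    # day 2 is a large spike
print(flag_outlier_days(history, last_week))        # -> [2]
```

Without the historical counts, the spike on day 2 would be indistinguishable from ordinary variation, which is why the baseline has to exist before the detection can work.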
Acting on data: The next challenge speaks to the administration of the organization’s insider threat program: How does the organization react to potential threat information? Without a comprehensive plan for acting upon identified risks, the organization will be ill-prepared to prevent a threat from endangering the organization or to develop mitigation strategies that help an at-risk employee get off a dangerous course. Perhaps most importantly, organizations must acknowledge that the presence of an analytic indicator or an increased risk rating is not, by itself, a definitive sign of a pending insider threat attack. Because automated systems of this kind can produce high false-positive rates, an indicator of anomalous behavior, or a collection of indicators, should be considered carefully and acted upon within the parameters of the organization’s insider threat program.
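One way to keep a single noisy indicator from triggering action on its own is to combine several normalized indicators into a weighted risk score and escalate to a human analyst only above a threshold. The indicator names, weights and threshold below are hypothetical assumptions, not elements of any program described in this series.

```python
# Hypothetical weights for combining normalized (0.0-1.0) indicators into one
# risk rating. Neither the indicator names nor the weights come from the source;
# they only show why a single anomaly should not drive action by itself.
WEIGHTS = {
    "network_anomaly": 0.3,
    "sentiment_drift": 0.3,
    "after_hours_access": 0.2,
    "policy_violations": 0.2,
}
REVIEW_THRESHOLD = 0.6  # route to a human analyst only above this combined score

def combined_risk(indicators: dict[str, float]) -> float:
    """Weighted sum of the available indicator scores; missing indicators count as 0."""
    return sum(WEIGHTS[name] * indicators.get(name, 0.0) for name in WEIGHTS)

# A single strong indicator stays below the review threshold...
print(f"{combined_risk({'network_anomaly': 1.0}):.2f}")        # 0.30
# ...while several moderate indicators together exceed it.
print(f"{combined_risk({'network_anomaly': 0.8,
                        'sentiment_drift': 0.9,
                        'after_hours_access': 0.7}):.2f}")      # 0.65 -> escalate
```

Escalation is driven by the combined picture, not by any single anomaly, which is how the program keeps false positives from overwhelming analysts or unfairly singling out employees.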
Statutory limitations: These challenges assume that any automated insider threat detection system adheres to the statutory requirements and limitations associated with data collection and use—policy limitations that are still undergoing review and modification. Because such data collection can be perceived as intrusive and invasive, government agencies must balance their insider threat actions against their employees’ privacy.
Insider threats pose a significant danger to government—and commercial—organizations. The access and trust afforded to employees, while necessary for mission accomplishment, expose organizational vulnerabilities to malicious insiders. Three government organizations—the National Insider Threat Task Force, the National Background Investigations Bureau, and the Department of Defense Insider Threat Management and Analysis Center—are key to achieving the goal of detecting and preventing insider threat attacks. These organizations are guiding efforts to shift insider threat programs from current, IT-based efforts to automated, whole-person risk-rating systems. The automated systems will enable organizational insider threat programs to quickly identify and react to the indicators of potential insider threats, thus mitigating their effects or preventing them altogether. These automated systems will require the implementation of advanced analytic techniques to discern a potential insider threat from benign employee behavior. The challenges of implementing such systems are many—not the least of which are the significant technological requirements and statutory limitations.
Though this blog series focused on the automated detection of potential insider threats, a key component of any insider threat program is the employee. A common theme in the research behind this series has been the ever-increasing importance of engaging every employee in the mission of preventing insider threats from harming the organization. Just as ongoing cyberattacks require all employees to be vigilant in their network activity, so does the potential for insider threats. As a recent IBM Center blog noted, cybersecurity must be “a positive part of the culture—an integral element of an organizational standard way of operating, not a separate silo.” This premise is equally true of insider threats.
Disclaimer: The ideas and opinions presented in this paper are those of the author and do not represent an official statement by the U.S. Department of Defense, U.S. Army or other government entity.
Major Tom McMurtrie is an operations research/systems analyst in the U.S. Army currently serving as a research fellow in the Army’s Training with Industry (TWI) Program.