Insider threat solutions start with a focus on the insider
William Van Vleet III, CEO of Haystax Technology, makes the case for analyzing employee behaviors at the keyboard and away from the office.
Security breaches caused by employees resulted in 43 percent of the data loss in North America, with half of those breaches intentional, according to a new study on data exfiltration from Intel Security. The study goes on to offer advice about what organizations can do to combat the insider threat problem, including measures such as employee training, increased network monitoring, risk assessments, incident response plans and data loss prevention (DLP) technologies. These are all good suggestions that can help ameliorate the growing problem of insider threats.
But one aspect seems noticeably missing. What about focusing on the insiders themselves, especially those who might be tempted to engage in security breaches?
What if you could predict which employees or members of a group might be likely to commit a breach — either intentionally or unintentionally?
With the right technology, you can go a long way toward this aim.
Cybersecurity tools such as DLP and intrusion detection and prevention technologies are valuable for focusing on data and its movement through a system. However, you also need to know whether the people who have access to that data and those systems can be trusted with them. Enter a different kind of technology — big data analytics — that focuses on people and their behavior, and can help predict threats before they occur.
Using sophisticated identity analytics and carefully constructed algorithms, technology can place people — whether employees or members of a group — into risk pools ranging from very high risk to very low. The recipients of this information — most likely supervisors or human resources professionals — can then decide how to approach employees about their level of risk. Ideally, the highest-risk employees can receive some sort of support so they never become an actual threat to their organization.
This approach, which you might call human behavioral risk assessment, is relatively new and quite complex, but it’s an area we think is essential to mitigating the insider threat. I know of insider threat approaches that consider people’s behavior in front of a keyboard — how many times they access certain types of data, when, in what quantities and from where — but I know of hardly any that consider behavior away from the keyboard, in a work, home, social or public environment. The model I’m talking about combines and analyzes both kinds of data. The external information is drawn only from publicly available sources such as criminal records, driving records, credit records and social media sites — never from protected information such as medical or psychological records. This public data can then be combined with internally available information such as performance reviews, personnel interviews, attendance records and possibly interviews with co-workers.
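To make the scoring idea concrete, here is a minimal sketch of how weighted behavioral indicators might be rolled up into a composite score and mapped to a named risk pool. The indicator names, weights and thresholds are purely hypothetical illustrations, not a description of Haystax’s actual model.

# Illustrative sketch only: the indicator names, weights and thresholds are
# hypothetical and not drawn from any vendor's actual model. It shows the
# general idea of rolling up "at the keyboard" and "away from the keyboard"
# indicators into one composite score and assigning a named risk pool.

from dataclasses import dataclass, field

# Hypothetical weights; each indicator is assumed to be normalized to [0, 1].
WEIGHTS = {
    "after_hours_data_access": 0.30,     # at-the-keyboard behavior
    "bulk_download_volume": 0.25,        # at-the-keyboard behavior
    "financial_stress_indicator": 0.20,  # public records (e.g., credit data)
    "policy_violation_history": 0.15,    # internal records (HR, performance reviews)
    "concerning_social_activity": 0.10,  # public social media signals
}

# Upper score bound for each named risk pool.
RISK_POOLS = [
    (0.20, "very low"),
    (0.40, "low"),
    (0.60, "moderate"),
    (0.80, "high"),
    (1.00, "very high"),
]


@dataclass
class PersonIndicators:
    name: str
    scores: dict = field(default_factory=dict)  # indicator -> value in [0, 1]


def composite_score(person: PersonIndicators) -> float:
    """Weighted sum of normalized indicators; missing indicators count as zero."""
    return sum(weight * person.scores.get(key, 0.0) for key, weight in WEIGHTS.items())


def risk_pool(score: float) -> str:
    """Map a composite score to the first pool whose upper bound is at least the score."""
    for upper_bound, label in RISK_POOLS:
        if score <= upper_bound:
            return label
    return "very high"


if __name__ == "__main__":
    employee = PersonIndicators(
        name="example-employee",
        scores={
            "after_hours_data_access": 0.7,
            "bulk_download_volume": 0.4,
            "financial_stress_indicator": 0.8,
            "policy_violation_history": 0.1,
            "concerning_social_activity": 0.2,
        },
    )
    score = composite_score(employee)
    print(f"{employee.name}: score={score:.2f}, risk pool: {risk_pool(score)}")

The roll-up itself is straightforward; the hard part in practice is deriving trustworthy, normalized indicator values from the underlying public and internal records, which is where the identity analytics come in.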
The model is excellent at identifying people with extremely low and extremely high risk. For an organization, this kind of risk rating is valuable because it can largely set aside the first group and concentrate on the second — those who exhibit behaviors or activities of concern. As mentioned previously, the best scenario is to be able to assist someone with risky behavior — someone with financial difficulties, for example — so that they’re never tempted to commit a breach, steal valuable information or otherwise become an insider threat.
And just as the model evaluates people’s behavior away from the keyboard, an “insider threat” can refer to an act that is not just cyber-related. A high-risk insider can crash a plane into a mountain, help criminals escape from prison, shoot fellow students or co-workers, or commit other assaults against an organization or the public at large. The same kind of big data analytics applies equally well to this kind of insider.
The problem of insider threat is not easily solved, and no one technology or approach can fully address it. Network monitoring and other cybersecurity technologies are essential for alerting you to what’s going in and out of your network. Raising awareness among employees and educating them about cyber hygiene and best practices are also important tactics. IT departments can also place tighter controls on what employees can access and from where. But none of these methods touches the human being and his or her motivations. Only by focusing directly on people and their behaviors can you begin to address those motivations, and they’re too important to ignore. Targeting big data analytics at the insider threat problem can help organizations know which members of a group might pose the most risk. With this type of risk information, organizations will have the means to potentially avert insider threats before they materialize.
William Van Vleet, chief executive officer of Haystax Technology (www.haystax.com), has more than 30 years of experience in defense and commercial technology markets. He founded Haystax Technology, headquartered in McLean, Virginia, in 2012.