From the Office of the CTO: Building Effective Insider Threat Programs
To properly frame the Insider Threat conversation, rather than use obscure statistics in an effort to convince you to be suspicious of all your employees and coworkers, I think it is far more effective to tell you a few stories. None of these stories come from my clients, and none of this information is privileged; you can Google any of them and see the details of what happened for yourself. (If you did, it would be a bit ironic, since you would be using Google to find a story about Google, but that's neither here nor there.) Following these three brief stories, I will dig into best practices for building effective Insider Threat Programs.
Wells Fargo
Under pressure to meet aggressive sales quotas and management expectations for the average number of financial products each customer should hold, Wells Fargo employees allegedly opened 1.5 million fraudulent deposit accounts, submitted over 560,000 fake credit card applications, and racked up over $400,000 in fees for customers who never opened the accounts themselves. After the investigation into the fraudulent activity, CEO John Stumpf resigned and 5,300 employees were fired for their roles in it. The damage to Wells Fargo's brand reputation has been severe.
Citibank
On December 23, 2013, Citibank network engineer Lennon Ray Brown received a performance review he was not happy about. In response, Mr. Brown intentionally transmitted code that erased the running configuration on 9 of the 10 core Citibank Global Control Center routers, resulting in a loss of connectivity to approximately 90% of all Citibank networks across North America.
Mr. Brown felt justified in his actions and even texted his former co-workers about them. For those actions he was sentenced to 21 months in prison, but the damage to Citibank's operations and the resulting outage were not easily recovered from.
Google vs. Uber
Google’s self-driving car division, Waymo, is taking Uber to court based on allegations that Anthony Levandowski stole trade secrets from Google before he quit his job to found a self-driving truck company called Otto. Otto was subsequently acquired by Uber for $680 million. Two other engineers are also accused of stealing secret documents related to self-driving cars shortly before being hired by Uber. The documents are considered to be so sensitive that there is a legal fight over which attorneys should even be allowed to see them as part of the lawsuit.
What do these three short stories have in common? They are all examples of the massive damage that can occur when trusted insiders become a threat. Few traditional security tools could have detected any of these actions in time to mitigate the harm. Luckily, technologies now exist that enable organizations to build programs to identify, investigate, and ideally mitigate the threat posed by trusted insiders. It is never a comfortable conversation to have. Most people do not like to view their friends, co-workers, and neighbors as likely threats to their organizations, and most of the time they are right. But as demonstrated above, a single insider threat can have massive negative consequences for an organization.
Technology Approaches to Insider Threat Protection
User and Entity Behavioral Analytics (UEBA) offers organizations the opportunity to identify risky insider behavior and interdict before that behavior results in irreparable harm. What is important to understand, in my opinion, is that UEBA is a bit of a misnomer: it leads consumers to believe the technology will qualitatively analyze a user or entity's behavior. It will not, at least not effectively.
What it can do very effectively, however, is establish baselines and report deviations from that baseline behavior, or identify specific activities that are risky in and of themselves. This type of technology forms the foundation on which an Insider Threat Program can be built.
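As a rough illustration of what "establish baselines and report deviations" means in practice, here is a minimal sketch in Python. The event counts, the z-score approach, and the threshold are all hypothetical simplifications for illustration, not how any particular UEBA product works internally:

```python
from statistics import mean, stdev

def flag_deviation(daily_counts, today, z_threshold=3.0):
    """Flag today's activity if it deviates sharply from the user's baseline.

    daily_counts: historical per-day event counts for one user (the baseline)
    today: today's event count
    """
    mu = mean(daily_counts)
    sigma = stdev(daily_counts)
    if sigma == 0:
        return today != mu  # any change from a perfectly flat baseline is notable
    z = (today - mu) / sigma
    return z > z_threshold  # only unusually high activity is flagged here

# Hypothetical example: a user who normally downloads about 20 files per day
baseline = [18, 22, 19, 21, 20, 23, 17]
print(flag_deviation(baseline, 25))   # within normal variation -> False
print(flag_deviation(baseline, 400))  # far above baseline -> True
```

Real products build far richer baselines (per peer group, per time of day, per asset), but the core idea is the same: the tool reports statistical anomalies, and humans supply the qualitative judgment.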
There are two types of products that include User and Entity Behavioral Analytics: standalone UEBA products such as Exabeam, Securonix, and FortScale; and products that position UEBA as an important feature of a broader capability set, such as Forcepoint's Insider Threat, LogRhythm's Security Intelligence Platform, and IBM's QRadar.
Building Effective Programs
At this point, most people understand that an effective program requires people and process as well as technology. Raytheon, the organization that developed SureView Insider Threat and has been building programs around it for the last 15 years, has identified two phases of any successful program: Enterprise Monitoring and Incident Investigations. InteliSecure has condensed the Raytheon best practices into five key programmatic elements for each phase. I will now touch on the components of both phases and how they form the basis for successful Insider Threat Programs.
Enterprise Monitoring refers to the activities in your Insider Threat Program that are directly related to gathering information from the environment. This is generally information from an endpoint such as a laptop or desktop, network devices and listening points, or both. The five elements in Enterprise Monitoring are:
Identify Assets at Risk
This portion of the program is a best practice for virtually any security program. InteliSecure calls it a Critical Asset Protection Program, and we have been building them since 2002 for organizations that want to deploy Data Loss Prevention technologies. The truth of the matter is that every organization has more data than it can apply advanced protection to; to protect the information that matters most, it must focus its efforts on that information. Insider Threat Programs are no different. You must first identify the assets that most need protection, the ones for which effective protection would yield a favorable cost/benefit analysis. Everything done in the creation of the program after identifying those assets should be directly related to protecting them from harm, misuse, or inappropriate disclosure.
Unlike in Data Loss Prevention Programs, these assets are not limited to information assets. They may be information assets, but they may also be privileged accounts, systems, or networks.
Profile Those Most Likely to Put Assets at Risk
It can be difficult to profile the users most likely to put assets at risk. Hearing that phrase, many people assume they need to identify users who are poorly trained, or users who may harbor resentment toward the organization. When I build programs, I consider none of those things. Instead, I take the view that the people most likely to put an asset at risk are the people who interact with that asset the most. Building programs on that basis removes subjective assessments of people and focuses monitoring on the assets themselves and on who uses them in the normal course of their daily duties.
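One simple way to operationalize this asset-centric profiling is to count who touches a critical asset most often. The sketch below uses hypothetical access-log records and asset names, not any specific product's schema:

```python
from collections import Counter

def top_users_for_asset(access_log, asset, n=3):
    """Return the n users who interact with a given asset most frequently.

    access_log: iterable of (user, asset) records, e.g. parsed from audit logs
    """
    counts = Counter(user for user, a in access_log if a == asset)
    return counts.most_common(n)

# Hypothetical audit records
log = [
    ("alice", "hr-db"), ("bob", "hr-db"), ("alice", "hr-db"),
    ("carol", "source-repo"), ("alice", "source-repo"), ("bob", "hr-db"),
    ("alice", "hr-db"),
]
print(top_users_for_asset(log, "hr-db"))  # [('alice', 3), ('bob', 2)]
```

The output is a monitoring focus list derived from observed interaction, not from anyone's opinion of a coworker's attitude, which is exactly the point of the asset-first approach.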
Investigation and Remediation Best Practices
Both investigation and remediation activities can be viewed as a pyramid. On the investigation side, the most common findings, such as mistakes and broken business practices, are the least harmful and by far the most frequent. Intentional malice, on the other hand, is far less frequent but potentially far more damaging. The corresponding response is an inverted pyramid: the most frequent and least heavy-handed action is training, which corresponds to mistakes, while the least frequent and most heavy-handed actions are termination or prosecution, which correspond to intentionally malicious behavior.
Decide Where and Whom to Monitor
This decision will largely be driven by the capabilities of your technology and by the output of the previous question: who is most likely to put your assets at risk? Decision points include whether to monitor endpoint activity and which points of the network you may want to tap to facilitate monitoring.
Analyze and Measure Areas of Concern and Leading Indicators
This step is where you use your imagination to model the behavior leading up to a risky or malicious act. It is more art than science at first, but any real instance of insider threat will yield far more information and context that you can draw on next time. Penetration testers may also be able to show you what an attack from an insider looks like. Like any other program, remember that it is a journey, not a destination. You will never wake up one day magically protected from insider threats, but ideally you are improving your posture every day.
A large portion of any Insider Threat Program is investigating behavior that appears abnormal. The vast majority of those investigations will result in nothing more than training for the end user, if any action at all. This is a good thing when you think about it: if you are consistently finding instances of true insider abuse and misuse, your organization has a cultural problem. Performing investigations well, however, is a cornerstone of an effective program. The five elements for Incident Investigations are:
Identify Incidents to Investigate
It may seem obvious, but you will see far too much information to efficiently investigate everything. Therefore, it is important to set guidelines for which actions meet the threshold for investigation and what the internal Service Level Objective (SLO) or external Service Level Agreement (SLA) should be for completing that investigation. Timeliness is important, but setting overly aggressive SLOs requires more staff, and setting overly aggressive SLAs generally results in higher costs from a Managed Security Services Provider (MSSP). The key to doing this well is to identify the organization's risk tolerance and complete a cost/benefit analysis of the program's Total Cost of Ownership (TCO).
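As a hedged sketch of what such an investigation threshold might look like once codified, consider the triage logic below. The severity tiers and SLO hours are invented for illustration; your own thresholds should come out of the risk-tolerance and cost/benefit exercise described above:

```python
from dataclasses import dataclass

# Hypothetical tiers: only incidents at or above "medium" severity meet the
# investigation threshold, and each tier carries its own completion SLO.
SLO_HOURS = {"critical": 4, "high": 24, "medium": 72}

@dataclass
class Incident:
    id: str
    severity: str  # "critical", "high", "medium", or "low"

def triage(incident):
    """Decide whether an incident meets the investigation threshold,
    and if so, which completion SLO applies."""
    slo = SLO_HOURS.get(incident.severity)
    if slo is None:
        return "log-only"  # below threshold: record it, but do not investigate
    return f"investigate within {slo}h"

print(triage(Incident("INC-1", "critical")))  # investigate within 4h
print(triage(Incident("INC-2", "low")))       # log-only
```

Making the thresholds explicit like this also makes the staffing math tractable: each SLO tier maps directly to the analyst capacity needed to meet it.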
Mine Incident Logs for Historical User Activity
When you find an insider behaving in a way consistent with an insider threat, the historical information gathered before you were alerted likely contains a series of leading indicators, or "triggers," that were clues the person was beginning to behave in a suspicious or potentially malicious manner. The idea behind this exercise is not to bemoan all the signs that were missed, but to build more effective behavioral monitoring so that next time you can catch the behavior earlier in its cycle. This activity matures a program more quickly than any other.
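A minimal sketch of this retrospective mining, with hypothetical log entries and an arbitrary 30-day lookback window, might look like this:

```python
from datetime import datetime, timedelta

def leading_indicators(events, user, incident_time, lookback_days=30):
    """Pull a user's activity in the window before a confirmed incident.

    events: list of (timestamp, user, action) tuples from historical logs
    Returns earlier events that may have been leading indicators ("triggers").
    """
    window_start = incident_time - timedelta(days=lookback_days)
    return [
        (ts, action) for ts, u, action in events
        if u == user and window_start <= ts < incident_time
    ]

# Hypothetical log entries
events = [
    (datetime(2017, 3, 1), "dave", "badge-in after hours"),
    (datetime(2017, 3, 10), "dave", "bulk file download"),
    (datetime(2017, 1, 1), "dave", "normal login"),
    (datetime(2017, 3, 12), "erin", "normal login"),
]
hits = leading_indicators(events, "dave", datetime(2017, 3, 15))
print(hits)  # the two March events for dave; January falls outside the window
```

The triggers surfaced this way then feed back into the enterprise monitoring phase as new detection rules, which is how one incident makes the whole program smarter.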
Evaluate Incidents in their Full Context
The more information you have about a series of events, the better decisions you can make in your qualitative analysis of the behavior. You are ultimately trying to reach a decision point: should you rehabilitate, terminate, or prosecute?
This decision should be handled on a case-by-case basis. It is also important to mention that doing this qualitative behavioral analysis well requires an intelligence mindset rather than a law-enforcement mindset. Law enforcement generally seeks to make an arrest as soon as evidence points to the likelihood that someone did something wrong. An intelligence mindset instead considers alternative plausible theories and collects information with the intention of eliminating them. Only when a single plausible explanation for the behavior remains is it time to take action against a person, especially a trusted insider.
Isolate Trigger Events
As discussed before, it is important to find events that either triggered behavior or events that could trigger the analyst to begin looking at behavior differently. These triggers will then form the basis for monitoring enhancement.
Utilize Post-Incident Activity
It is important to remember that almost all Information Security best practices operate on a Continuous Process Improvement (CPI) model or something similar. That means it is vitally important to use the lessons learned from one investigation or incident to improve the program, so you can identify behavior earlier or respond to it better. Every incident is an extremely important opportunity to improve, and you never know when the next one will come. Ensure you take full advantage of every opportunity that arises.
If you’d like to learn more about building effective Insider Threat Programs, or you have questions you’d like to ask live, please join my April 13th webinar, Building an Effective Insider Threat Program. We are embarking on an exciting new world in security where the monitoring capabilities have become sophisticated enough to gather and analyze very large data sets. It is important that we build our programs to leverage human analysis where necessary in order to make qualitative assessments of human behavior. That is the key to mitigating the insider threat.