Understanding Behavioral Analytics
According to Investopedia, Behavioral Analytics is “an area of data analytics that focuses on providing insight into the actions of people.” Contrary to many articles I have seen, behavioral analytics is neither new nor overly technical. For millennia, those engaged in espionage and counter-intelligence have studied the behavior of their adversaries to identify patterns and predict future actions. Much of what Sun Tzu discusses in The Art of War is a mix of self-discipline and Behavioral Analytics.
Much of the digital world mirrors the physical world, and the digital security world most closely mirrors the physical world’s intelligence and military communities. It should come as no surprise, then, that as machines evolved enough power to process large volumes of data, organizations began using that data to establish what normal behavior looks like in their environments. The goal is to identify patterns of behavior that indicate an attack against the environment, or at least to flag anomalous behavior that warrants further investigation.
With respect to information security, there are essentially two forms of Behavioral Analytics. First, there is what some call System Behavioral Analytics: Behavioral Analytics focused on the ways in which systems behave. This form of analytics is most commonly linked to Security Information and Event Management (SIEM) technologies. As these technologies are developed to process larger volumes of information faster, they can perform increasingly complex correlation and baselining across a growing number of systems.
Second, Human Behavioral Analytics is becoming more popular. This discipline correlates all of the activities of a user or entity in order to determine whether that user is malicious or whether the identity has been compromised. To be effective, this type of monitoring must capture and correlate all of a user’s activity across the entire corporate environment. As a result, these solutions are still emerging: they require far more comprehensive coverage, and consequently far more processing power, than traditional System Behavioral Analytics platforms. Products in this space are often marketed as User and Entity Behavior Analytics (UEBA) platforms or Insider Threat products. Many of them are very good technologies, but all of them have room to grow with respect to universal coverage.
I often reference Locard’s exchange principle when talking about information security. Locard’s exchange principle comes to us from the criminal forensics world and essentially states that anyone who commits a crime leaves something at the crime scene that was not there before and takes something away that they did not possess before the crime was committed. That is a paraphrase, but it captures the gist. The heart of criminal forensics, therefore, is identifying evidence the suspect introduced to the crime scene and pursuing evidence in the suspect’s possession that originated there. This principle is the main tenet behind System Behavioral Analytics, which is most often implemented with a SIEM system or similar technology.
In our analogy, the systems that make up an organization’s computing environment are the crime scene, and the SIEM system processes that crime scene as a crime scene investigator would, surfacing abnormal patterns for analysts to investigate. Abnormalities should not be conflated with malice, though, as there are often innocuous reasons why systems change: an organizational change in personnel, a change to an approved business process, or changes made to a system during an approved change window, to name a few. As any good intelligence analyst will tell you, an investigation into anomalous behavior cannot be considered complete until only one plausible explanation of the behavior remains. As long as there are multiple plausible explanations for human or system behavior, any conclusions drawn are essentially theories.
System Behavior Analytics is often the easiest and most cost effective Behavioral Analytics platform to deploy, since many organizations must collect and retain logs for compliance purposes. If that is the case, the organization often has the capability to perform some System Behavior Analytics against those logs using the tools they have in place to store them.
I use the term “Human Behavior Analytics” instead of “User Behavior Analytics” because User Behavior Analytics is the name of a product, and there are other products that help organizations analyze human behavior. This type of analytics is more akin to FBI profiling than to the work a detective performs at a crime scene. Essentially, these programs are designed to build profiles of normal behavior in an environment. When done well, those profiles are specific to different roles, departments, and levels inside an organization. For example, the normal access and behavior patterns of an executive should not be the same as those of an entry-level employee.
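To make the profiling idea concrete, here is a minimal Python sketch, with invented users, roles, and daily access counts, of how per-role baselines might be built and outliers flagged; real UEBA products model far more signals than a single count.

```python
from collections import defaultdict
from statistics import mean, stdev

# Hypothetical access-log records: (user, role, files_accessed_today)
events = [
    ("alice", "executive", 40), ("bob", "executive", 35),
    ("carol", "entry_level", 12), ("dave", "entry_level", 15),
    ("carol", "entry_level", 14), ("dave", "entry_level", 13),
    ("alice", "executive", 42), ("bob", "executive", 38),
]

# Build a per-role baseline rather than one global baseline,
# so an executive's normal is not judged against an intern's.
by_role = defaultdict(list)
for _, role, count in events:
    by_role[role].append(count)

baselines = {role: (mean(c), stdev(c)) for role, c in by_role.items()}

def is_anomalous(role, count, threshold=3.0):
    """Flag counts more than `threshold` std devs above the role baseline."""
    mu, sigma = baselines[role]
    return sigma > 0 and (count - mu) / sigma > threshold

print(is_anomalous("entry_level", 90))  # far above the entry-level norm -> True
print(is_anomalous("executive", 41))    # within the executive norm -> False
```

The key design point is the per-role grouping: a single global baseline would either drown out the intern’s anomaly or constantly flag the executive’s normal workload.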
Human Behavioral Analytics is interesting because it is often far clearer how an organization should respond to an individual once their behavior has been categorized than how it should respond to a system behaving abnormally. In the end, a user or entity is behind every breach or attempted breach of critical information. Human Behavior Analytics is therefore more effective for organizations that have the resources, both capital and human, to deploy such programs effectively.
It should be said that tying SIEM to System Behavioral Analytics oversimplifies the discipline. Leading SIEM providers such as LogRhythm are developing their systems to take human behavior into account, in the same way that leading UEBA vendors like Exabeam are developing their solutions to take system behavior into account. What we are beginning to see is the convergence of System Behavioral Analysis and Human Behavioral Analysis platforms into broad Security Intelligence Platforms that allow organizations to analyze behavior from both perspectives.
Hopefully this blog demonstrates the principles behind both Human Behavioral Analytics and System Behavioral Analytics and how each is applied. Neither is new in principle, but both have long been difficult to accomplish in the digital world due to an overload of information and a comparative lack of computing power. Now that technology has advanced to the point where organizations can harness these disciplines, they are rapidly becoming an invaluable part of any Information Security program. Organizations can leverage them not only to determine what is happening in an environment but, more importantly, to qualitatively analyze why it is happening, from innocent mistake to intentionally malicious behavior, in order to apply the appropriate response quickly enough to protect the organization from harm. When paired with a comprehensive Critical Data Protection Program, these more advanced intelligence and response programs can be far more effective at protecting environments than the signature-based approaches common in antivirus platforms and Intrusion Detection and Prevention Systems.
The Future of Information Security
As we approach the end of the year, many of us are enjoying the holidays with our families and friends. As we prepare for the year to come, many of us are building lists of things we would like to see change, as well as a mental picture of what we expect the new year to bring. Those of us in Information Security are no different, and the things we should resolve to change align closely with what I expect 2018 to bring.
Information Security as we know it is changing, and the change has been underway for more than a decade. There were gradual shifts, and some bring-your-own-device (BYOD) creep into the business world in the early 2000s, but the sea change really started as the average consumer began to adopt smartphone technology around 2007. Since then, two major phenomena have changed the way we do business. First, organizations own a shrinking percentage of the devices employees use to connect to corporate assets. Second, organizations control a diminishing percentage of the platforms that house their most critical information assets.
The result of these two trends is that corporate IT security policies must shift from controlling how people utilize hard physical assets like computers and networks, to how employees interact with soft organizational assets like data. These trends will result in Information Security shifting from traditional perimeter-focused programs to Data Security, and Insider Threat/User and Entity Behavior Analytics programs.
That shift will also bring about some changes in the look and feel of security programs. Those changes are:
Tomorrow’s security programs will have little need for servers and system administrators. The technical resources running your firewalls and switches won’t be the people who staff your next-generation Security Operations Center (SOC). You will still need those people, but not for your security program.
Instead, you will need analysts, armies of analysts. Tomorrow’s security programs are going to be 100% focused on behavior analytics in three categories, each with its own skill set.
First, you will need data analysts. These are people with some technical acumen, but their primary skill set is business-focused. They will be charged with qualitatively understanding the behavior of data in an environment to determine whether that activity presents a high, medium, or low business risk. These people are generally not found in today’s security programs, but next-generation programs will need them, and they will need to interface with business unit leadership frequently in order to do their work. The lack of this particular skill set is one major reason why so many organizations struggle to implement technologies like Data Loss Prevention effectively.
Second, you will need system analysts. Of the future analyst skill sets, this is the one you are most likely to have in your security program today. These are analysts with a technical application and networking background and a keen understanding of how systems should behave under normal circumstances, which allows them to quickly and accurately identify and respond to system anomalies.
Finally, you will need behavior analysts: people who can distinguish normal from abnormal human behavior. Much of that analysis can be done with machine learning algorithms. Where behavior analysts are especially valuable is in the qualitative analysis of human behavior, attempting to establish motive for what they are seeing. Responses vary greatly between malicious actors and well-meaning people making mistakes. Correctly identifying which group a person falls into before activating a response will be critical to protecting the organization from harm in a way that supports a positive culture and work environment. Balancing these sometimes competing priorities will be critical for organizations competing in tomorrow’s marketplace.
In the past, security systems were primarily appliance and server based. Newer security systems are often delivered from the cloud or from virtual appliances. This trend is likely to continue, which means server management and maintenance is going to become a waning skill set in the security space. You will still need technical skill sets in your security programs, but they will no longer be focused on server management and patching.
At the same time, we are seeing a trend toward the proliferation of more systems. Most of those systems tout integration with a host of other technologies, but what vendors really mean by “integration” is that both systems have an API with a Software Development Kit (SDK) and some documentation. Making those systems work seamlessly together is incumbent upon the client. As such, you will need technical people with a skill set that is a cross between a developer and a system administrator: people who understand both systems technically and can build the API integrations that make them work together. I would recommend cross-training your current teams on these APIs rather than turning over the technical people on your security staff, assuming the current team has the desire and aptitude to learn this new skill set.
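As a small illustration of that integration skill set, here is a hedged Python sketch that translates an alert from one entirely hypothetical vendor schema into another; every field name here is invented, since each real product defines its own.

```python
import json

# Hypothetical payload from a UEBA vendor's alert API (the names are
# illustrative, not any real vendor's schema).
ueba_alert = json.dumps({
    "alertId": "A-1001",
    "subject": {"user": "jdoe", "dept": "finance"},
    "riskScore": 87,
})

def to_siem_event(raw: str) -> dict:
    """Translate a UEBA alert into a (hypothetical) SIEM ingest schema."""
    alert = json.loads(raw)
    return {
        "source": "ueba",
        "event_id": alert["alertId"],
        "username": alert["subject"]["user"],
        # Map a 0-100 risk score onto the SIEM's low/medium/high buckets.
        "severity": ("high" if alert["riskScore"] >= 80
                     else "medium" if alert["riskScore"] >= 50
                     else "low"),
    }

print(to_siem_event(ueba_alert))
```

Most of the real work in such glue code is exactly this kind of schema and semantics mapping, plus the authentication, paging, and error handling around the HTTP calls, which is why the role leans as much toward developer as toward administrator.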
Managed Security Services Providers are not new; many have been around for some time. However, the increased complexity of security solutions, along with the global shortage of qualified security professionals, has more organizations than ever turning to service providers to operate their security programs. I expect this trend to continue as the stakes and complexity of security keep rising.
Many people seem to want to apply the “If it ain’t broke, don’t fix it” philosophy to Information Security; there is often organizational resistance to change in security even as the wider organization embraces rapid, transformative change that is quickly making security programs obsolete. The truth is, Information Security in many organizations is broken. That can be validated simply by looking at the increasing size and pace of mega-breaches, which coincide with increased spending on security. Most organizations do not suffer from a lack of spending on security; they suffer from a lack of spending intelligently, in a way that evolves with the organization at the speed with which it does business.
Another saying that I think is more important to remember going into 2018 is, “What got you here won’t get you there.” Our organizations have changed how technologies are used and deployed, shifting from a primarily on-premises strategy housed in data centers our organizations own to a cloud services model that is more akin to leasing or renting space in a facility owned by a third party. The associated challenges are multiplied by the fact that the vast majority of organizations consume services from several third parties, not just one or two. These changes are not necessarily good or bad; they just are, and as protectors we must change with them in order to get where we are trying to go. To support the three main changes I previously mentioned, there are five core strategies we must embrace to stay ahead of the curve and build a future-proofed Information Security program:
“The world’s most valuable resource is no longer oil, but data.” – The Economist
This quote affects the way modern organizations do business in two ways.
First, the data that you hold is valuable, not only to you but to potential adversaries, and many marketplaces exist to buy and sell stolen data. This data exists in a few forms. There is data belonging to your client base that you have a moral, and often legal, obligation to protect; if you aggregate that data in any way, you are a target. There is also the information that forms your competitive advantage. No matter who your company is or what you do, some form of intellectual property, some of it legally protected and some of it not, affords your organization the ability to compete in the marketplace. If you do business in the western world, some form of intellectual property is allowing you to do so; the world has become too small and competition too fierce for proximity to customers alone to keep you in business. It is your responsibility to find that intellectual property and figure out how to protect it if you want to survive in tomorrow’s marketplace.
Second, if data is the new oil and the world’s largest and most profitable companies (Alphabet/Google, Amazon, Apple, Facebook, Microsoft, etc.) gain and maintain their competitive advantage by gathering and storing that data, then the ability to gain, retain, refine, and analyze that data is paramount to success. Additionally, global regulations like the European Union’s General Data Protection Regulation (GDPR) are beginning to mandate that companies advise data subjects of their rights related to data and give them the ability to opt out of collection at any time. What this means is that competing in the new marketplace requires the ability to gain data on as many subjects as possible which in turn means that you must build as much trust as possible with the general marketplace. How do you build that trust? Through countless hours of effort and dollars spent to cultivate your brand. How do you destroy it in an instant? Have a data breach that shows you did not value that data properly and protect it as well as you possibly could.
“41% of all enterprise workloads are currently running in some type of public or private cloud. By mid-2018, that number is expected to rise to 60%” – 451 Research
We must stop acting as if the Cloud is one thing and everything else is something else entirely. In 2018, very few organizations will be 100% cloud or 100% on-premises; most will operate in some hybrid mixture that will continue to ebb and flow over time. Any security strategy must account for the fact that some systems will run on platforms and services the organization owns while others will not. For a security program to be effective, the same security principles and enforcement rules must apply universally, regardless of whether the organization owns the underlying infrastructure. This means the strategy must be defined and applied in such a way that there is no difference between on-premises and cloud security postures.
Any protection methods should be data-centric, not infrastructure-centric, and designed in such a way that there will be no changes to the protection profile if a system moves from on-premises to the cloud or vice versa. I understand this is much easier said than done, but we must challenge ourselves to make this a reality. Cloud security products have evolved to a point where this goal has become achievable.
“Private ownership of property is vital to both our freedom and our prosperity” – Cathy McMorris Rodgers (U.S Representative for Washington’s 5th Congressional District)
When did we come to accept that we can no longer exert control over information we own once it has left our perimeter? If we truly own data, and it is the world’s most valuable commodity, shouldn’t we continue to protect it after it has been shared, preserving the right to revoke access at any subsequent time, for any reason? I would argue that we should. If we do, we can fundamentally redefine what a breach means to an organization: if a breach occurs but the organization can revoke access to all sensitive information after the fact, the breach becomes significantly less harmful. Employing such strategies would also help identify the breach in the first place, significantly shortening time to detection, because organizations would be alerted to anomalous activity related to the data as soon as the attacker started to look through it. This stands in sharp contrast to what often happens today, where organizations find out about a breach months or years after the fact, and only because authorities happen upon the data being bought and sold on an illicit marketplace, often during an unrelated investigation.
Well-known names such as Symantec, with its Information Centric Encryption product, and Microsoft, with Azure RMS, are making great strides toward making this vision a reality, along with smaller, more focused players like Ionic and Vera. These are exciting capabilities, but they also require organizations to think about protecting information in new and far more comprehensive ways: deciding not only whether to block or allow information to traverse a network segment at a specific moment in time, but also defining the parameters of acceptable use of information for both internal and external users. In most organizations this has never been done before. As the old adage goes, “with great power comes great responsibility.”
“Data is not information, information is not knowledge, knowledge is not understanding, understanding is not wisdom.” – Cliff Stoll and Gary Schubert
The point of the quote above is that data must be refined and correlated in order to increase in value, ultimately culminating in understanding and wisdom. This concept applies especially to Information Security, where data is abundant to the point of being overwhelming: information is available, understanding is scarce, and wisdom is fleeting. The only realistic way for an organization to gain understanding of the threats it faces, and of how it should address them, is to correlate all of its data points and run analytics against that data.
The most traditional way of doing so is with Security Information and Event Management (SIEM) solutions, whose early generations made gathering actionable intelligence challenging and time-consuming at best. Modern SIEM technology has improved greatly, and if you ask enough people you will find both ardent supporters and strong detractors of it. My purpose is not to argue for or against SIEM, but to be a strong advocate for the capability of aggregating disparate data sources. This becomes increasingly important as we shift our programs from trying to guess attack patterns and identify them when they occur to a much better, though more resource-intensive, approach: establishing baselines and alerting on deviations from those baselines.
In forensic science, Locard’s Exchange Principle states that in committing a crime, the criminal brings something to the crime scene and leaves with something from it. Forensic science is based on identifying the items a perpetrator introduced to a crime scene, or the items from the crime scene found in the perpetrator’s possession, to prove that he or she committed the crime. The digital world is similar: no crime can be committed against a piece of data or a system without that system or data deviating from its normal behavior. The attacker may or may not use a known method, but regardless of the attack, from the most commoditized to the most sophisticated zero-day, something must always change. As a result, establishing a baseline for normal behavior is important, and gathering as much information from as many systems as possible into a central repository where analytics can be performed is a key capability, regardless of the technology you use to do so.
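A toy illustration of that baseline idea, using invented hosts and process lists: flag anything observed that the baseline has never seen, the digital equivalent of what the intruder left behind.

```python
# Baseline = the set of processes normally seen on each host (invented data).
baseline = {
    "web01": {"nginx", "sshd", "cron"},
    "db01":  {"postgres", "sshd", "cron"},
}

def new_artifacts(host: str, observed: set) -> set:
    """Return anything observed on the host that the baseline has never seen.
    In Locard's terms: what the visitor left behind."""
    return observed - baseline.get(host, set())

# A cryptominer appearing on the web server deviates from baseline.
print(new_artifacts("web01", {"nginx", "sshd", "cron", "xmrig"}))  # {'xmrig'}
print(new_artifacts("db01", {"postgres", "sshd", "cron"}))         # set()
```

Real platforms baseline far richer features (logon times, destinations, volumes) and tolerate drift statistically, but the set-difference intuition is the same: collect widely, model normal, alert on what was not there before.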
“The threat of insiders is real and what can happen is you have amazing defenses to protect your intellectual property and other secrets from those who are trying to obtain them from outside your company’s walls, but you forget sometimes to have a program where you are watching those who you trust.” – Assistant Attorney General for National Security John Carlin

Despite the grave risks they pose, Insider Threats remain one of the least talked about and least addressed Information Security challenges in the marketplace. It is very politically unpopular to go to senior leadership and tell them that their trusted employees represent a grave threat, that the very people who helped build the company are among the most likely to do it harm. That is an inconvenient truth, as Al Gore might say. The insider threat is real, and an attack by someone who has access to your systems and knowledge of your network is potentially far more damaging than one by even the most skilled external hacker or nation-state actor.
In 2018, we must do better and accept the fact that some people are inherently good and altruistic and some are not. In order to protect those among us who are working towards the greater good of the organization we must find and quickly neutralize those who are not. I know this will not be popular, but that’s why leadership is difficult. Sometimes doing the right thing is much more difficult than doing the wrong thing, but in order for ourselves and our organizations to reach their full potential, we must choose the hard rights over the easy wrongs every day.
I liken security to healthcare. Many people apply home remedies and over the counter medication to simple, low stakes injuries and ailments. There are not complex tools involved in these procedures and in most cases, the negative consequences of making a mistake are not disastrous. This is similar to where security started, as the tools were generally not too complex and the potential fallout, such as a defaced website or limited service interruption, was something not too serious.
Today, the stakes are much higher: according to the National Cyber Security Alliance, 60% of small businesses that suffer a cyber-attack are out of business within six months. Even at larger companies that can sustain a cyber-attack, turnover in the executive ranks and on the board commonly follows. The threat is existential, or close to it, for most organizations.
Therefore, modern security, with its complex tools and procedures, its scarce and deep required skill sets, and its high stakes, is more akin to brain surgery than first aid, especially when dealing with an organization’s most critical data assets. And while you might splint your own broken finger, few of us would take a scalpel to our own temporal lobe armed with a cosmetic mirror and a bottle of Tylenol Extra Strength. Complex security initiatives are similarly better left to the professionals.
From the Office of the CTO: Rethinking the SSN in light of Equifax
It has been almost two weeks since Equifax announced that a cyber-attack potentially affected 143 million Americans in an unprecedented and massive data breach. According to the US Census Bureau, there were 125.9 million adult men and women in the United States as of 2014, with a population growth rate of approximately 2.9 million per year. It is therefore a safe bet that, if you have ever received credit for anything, you are affected and should take steps to protect yourself.
The first reaction, as usual, has been to blame the victim, in this case Equifax. Many people want to point the finger at them and say their security program was negligent and we may find out that it was. Many others have pointed to their response as being selfish and irresponsible in that they waited so long to notify the public so they could prepare a website and credit monitoring services for affected people. That may also be true. Many people are righteously indignant about the fact they are likely to sell more credit monitoring services and gain revenue from people freezing their credit reports. That probably will happen, but none of it matters.
The usual parade of cybersecurity vendors is also sadly on display, trotting out articles about how this breach should fundamentally change the way we look at cybersecurity, namely by buying more of their products and fewer of their competitors’. Maybe this will be the breach that changes things, but it’s not likely. People still don’t patch their systems. It’s a basic premise of cybersecurity, but for some mind-blowing reason, people just don’t do it.
The truth is, aside from a few executives who sold stock with dubious timing, nothing will be done to hold Equifax accountable for the fact that they have exposed most of us to extremely damaging attacks related to identity theft. It isn’t that the government doesn’t want to hold Equifax accountable, it’s that they can’t. If you thought banks were too big to fail in 2008, the credit bureaus absolutely cannot fail without collapsing the global economy. That is not hyperbole, it’s fact. The entire global economy is built on people spending more money than they have. If people suddenly stopped using credit, the standard of living would decrease dramatically around the globe and the resulting global financial depression would be like nothing we have ever seen.
To understand the debt issue, here is a quick visualization, courtesy of Visual Capitalist. The Federal Reserve in the United States holds a balance sheet of $4.5 trillion, while US debt is approaching $20 trillion. In other words, the US government carries roughly four to five times as much debt as the money it has on hand, and the pattern is similar in many American households. By that rough ratio, up to 80% of US purchasing power is tied to credit (four dollars borrowed for every five dollars spent); if the credit bureaus collapsed, you could expect the US economy to shrink by as much as 80 percent. No one is going to let that happen, and no one can. Any meaningful financial penalty against Equifax would therefore result in a taxpayer bailout, meaning we would really be suing our friends and neighbors, not Equifax, and we would be using credit to pay the bill, since the government borrows about 80% of what it spends. Oh, the irony!
If anything happens, the executives who purportedly sold their stock, should they be found guilty, will be made examples of to quell the public outrage and very publicly sentenced to the maximum extent of the law. As for Equifax as an entity, quietly, don’t expect anything substantially bad to happen.
That doesn’t mean nothing valuable can come of this breach, however. It certainly can. This could be the tipping point at which we, as a global society, finally accept that a serial number given to an individual at birth and kept secret throughout a lifetime is a ridiculously antiquated means of identifying that person. The moment we began connecting computers and storing information on them, the idea of a single, permanent identification number became antiquated.
In my 2016 book, Building a Comprehensive IT Security Program, I wrote: ‘The credit card industry provides a good example of how non-persistent identities can be used to effectively reduce the effect of cyber-crime. When a cyber-criminal steals a credit card number, the number is not persistent and can be deactivated within moments of it being reported stolen or observed to be displaying an anomalous behavior pattern. A similar methodology could be applied to identification numbers to make them much safer.’
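As a minimal sketch of how such non-persistent identifiers might work, consider an invented registry class in Python where the public number is only a revocable alias for a stable internal record, much like a replaceable credit card number.

```python
import secrets

class IdentityRegistry:
    """Toy model of non-persistent identifiers: the public number is a
    revocable alias for a stable internal record (all names invented)."""

    def __init__(self):
        self._alias_to_person = {}   # active alias -> internal person id
        self._person_to_alias = {}

    def issue(self, person_id: str) -> str:
        alias = secrets.token_hex(8)
        self._alias_to_person[alias] = person_id
        self._person_to_alias[person_id] = alias
        return alias

    def resolve(self, alias: str):
        return self._alias_to_person.get(alias)  # None if revoked or unknown

    def rotate(self, person_id: str) -> str:
        """Revoke the exposed alias and issue a fresh one after a breach."""
        old = self._person_to_alias.pop(person_id, None)
        if old:
            del self._alias_to_person[old]
        return self.issue(person_id)

reg = IdentityRegistry()
ssn = reg.issue("person-42")
leaked = ssn                       # attacker steals the current number
new_ssn = reg.rotate("person-42")  # victim is issued a new one
print(reg.resolve(leaked))         # None: the stolen number is now worthless
print(reg.resolve(new_ssn))        # person-42
```

The indirection table is the whole trick: the stable record never changes, so downstream systems keyed to it are unaffected, while the exposed alias dies the moment it is rotated.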
A system like this would serve us much better in the current scenario. How great would it be if the 143 million exposed people were simply sent new identification numbers and this whole thing were behind us? This kind of solution could actually solve the problem, rather than continuing to kick the can down the road and accepting stolen identities as a part of life. It would require administrative overhead, but perhaps we could fund it through the fines levied when breaches like this happen. That way, those responsible pay the bill, and the fines fund the very apparatus that limits the damage they caused.
The real tragedy and irony of my position in the information security industry is that breaches like this happen time and time again, and rarely result in any meaningful change. Perhaps this go-around we can begin to shift our paradigm: accept, but battle, the ‘breach state’ and innovate around meaningful solutions that actually protect our identities!
Top 10 Data Loss Prevention Pitfalls
In this post, we will discuss the top ten reasons many Data Loss Prevention (DLP) programs fail, and how organizations can address those issues so that DLP systems can be leveraged to build a solid foundation for an Information Security program. Doing so positions an organization to build more advanced information protection capabilities, such as data protection in the cloud, and rights management and encryption strategies that protect information throughout its life cycle.
What is Information Security? The term is often conflated with cybersecurity. In my view, cybersecurity refers to the overall program an organization develops to protect itself from the broad spectrum of cyber threats it may face. Information Security is a discipline within cybersecurity focused on protecting the specific information whose inappropriate access or disclosure could cause irreparable harm to the organization. Those informational assets, which I refer to as Critical Data Assets, must be afforded levels of protection beyond those given to commodity data, so that the organization can prioritize security initiatives to protect what matters most.
In that context, the foundational element of any effective Information Security program is the ability to distinguish critical information from commodity information. That foundational element is facilitated by content analytics, which is most often accomplished through a technology known as Data Loss Prevention.
Data Loss Prevention is not a new technology; it has been around for roughly 15 years. Despite being available for quite some time, however, it remains one of the more difficult technologies to deploy and leverage effectively, because it is fundamentally different from other cybersecurity tools. DLP requires business alignment in a way that tools like firewalls, endpoint security, and Intrusion Prevention Systems do not.
Data Loss Prevention programs are different and must be established differently than other cybersecurity products. Data Loss Prevention is not a traditional security tool. It is a business tool facilitated by technology. Consequently, it is important to involve business stakeholders early in the process.
Not all information is of equal value, nor should it all be protected in the same way. Information derives its value from the business impact its loss may have. For example, if Intellectual Property is lost or improperly disclosed, there will be an impact on the profit and loss statement of at least one business unit. Therefore, to quantify the risk to a Critical Data Asset, the business unit that would be most affected must help define the impact of that loss.
At its core, security is the reduction of either the likelihood that something bad will happen or the impact a negative event would have on an organization. Defining the risks to Critical Data Assets is therefore critical to determining the appropriate level of security spending for each asset. As the saying goes, you shouldn’t spend a dollar to protect a nickel, but spending a nickel to protect a dollar has a favorable Return on Investment. Many Information Security programs are ineffective because equal spending is applied to all assets. Involving business stakeholders is critical to ensuring security priorities are aligned with business priorities.
Further, in order for Critical Data Assets to be leveraged to their full business potential, they must be stored, used, and often shared. Ensuring they are stored properly, used in the proper manner by the proper people, and only shared with appropriate internal and external parties, requires business stakeholders to define the authorized business processes with respect to those assets.
Many Information Security programs are doomed from their inception because the program does not involve business stakeholders in defining the program. If you do not define what is authorized and what should be done in the event of unauthorized activity, how can you possibly protect those assets?
Business unit involvement in Data Loss Prevention programs must not end with the definition of the program. Business units are best positioned to define authorized and unauthorized behavior. Those behaviors do not remain static and continued business unit involvement is imperative to building and maintaining an effective program. Business unit involvement is divided into two separate functions which often involve two distinct groups of people: Governance and Working Groups.
Governance Groups are responsible for the strategic direction of the program and generally made up of business unit leaders. They generally meet quarterly and define the business objectives of the program and milestones for specific compliance or risk reduction initiatives.
Working Groups are responsible for the daily activities necessary for the ongoing support and maintenance of the program. These groups generally consist of the security professionals responsible for operating the program, along with select delegates from the Governance Group. The group is responsible for day-to-day Incident Response with respect to events that have potential business impact.
These groups generally work together on a day-to-day basis and hold a standing meeting on a weekly or bi-weekly cadence to discuss the operations of the program, including activities such as tuning the system.
Many organizations that do not involve business stakeholders fail to define what critical information is. As a result, their programs protect only information that is regulated, which may or may not be appropriate given the nature of the organization’s operations. Failure to define what is critical results in programs that spend too much protecting commodity information while failing to appropriately protect what is most critical to the organization.
No Information Security program can be effective if it is not focused on protecting the most important information. This seems like a relatively simple statement, but many programs fail to put forth the appropriate level of effort to define their assets.
Identifying Critical Data Assets is not enough. To transition from Data Loss Monitoring to Data Loss Prevention, an organization must define the actions necessary to protect information once it is identified. These protection initiatives fall into two categories: Systematic Protection and Incident Response.
Systematic Protection leverages technical responses such as blocking, user notification and confirmation, and quarantining, among other capabilities. Systematic protections have a low tolerance for False Positives, since inappropriate automated actions can have a detrimental business impact that should be avoided at all reasonable cost.
Incident Response Protection is the ability to discover potential issues and respond manually, yet quickly enough to prevent harm to the business. Incident Response protections have a higher tolerance for False Positives and are often appropriate for Critical Data Assets that have a lower tolerance for False Negatives or are likely to be intentionally compromised. Systematic responses to information being intentionally compromised often tip off the attacker that their activities are being monitored.
It is important to determine the long-term objectives for each Critical Data Asset because these decisions will shape the strategic and tactical operations of the program. Tuning activities and accuracy goals with respect to both False Positives and False Negatives are dictated by the intended response to an incident involving the asset.
Many programs fail because they do not define response actions, and policy tuning efforts are paralyzed by a desire to have very few False Positives and absolutely zero False Negatives. These are competing priorities, and it is impossible to meet both objectives with a single policy. Successful programs balance these priorities by setting thresholds for systematic actions or by creating two policies: one tuned to have very few False Positives, for the purpose of systematic actions, and another tuned to allow very few False Negatives, to catch events that do not meet the stricter criteria for automated action and address them through Incident Response.
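A minimal sketch of that two-tier approach, assuming a DLP engine exposes a per-event match confidence score. The threshold names and values here are illustrative, not drawn from any particular product:

```python
# Hypothetical thresholds for routing DLP events.
BLOCK_THRESHOLD = 0.90    # strict: very few False Positives -> automated action
REVIEW_THRESHOLD = 0.40   # loose: very few False Negatives -> manual triage


def route_event(match_confidence: float) -> str:
    """Route a DLP match to an automated or manual response tier."""
    if match_confidence >= BLOCK_THRESHOLD:
        # High-confidence match: safe to take systematic action.
        return "block"
    if match_confidence >= REVIEW_THRESHOLD:
        # Lower-confidence match: queue for an analyst rather than
        # risk a disruptive false block.
        return "incident_response"
    return "ignore"
```

The same effect can be achieved with two separate policies instead of two thresholds on one score; the point is that automated and manual responses are tuned to different accuracy goals.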
Many organizations struggle to define Return on Investment for Information Security programs. Such a return can be quantified, but to do so you must first quantify the risk to your Critical Data Assets. The benefit side of the cost/benefit equation is determined by breaking down the risk treatment options for an asset into acceptance, avoidance, and transference, then comparing the cost of executing those strategies against the cost of mitigating incidents. The organization can then analyze the capital being spent on the mitigation strategy to define the Return on Investment.
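As a simplified illustration of this kind of calculation, risk to an asset can be quantified as an annualized loss expectancy (impact times annual likelihood) and compared against the cost of mitigation. The figures, and the simplifying assumption that mitigation eliminates the expected loss, are purely illustrative:

```python
def roi_of_mitigation(incident_impact: float,
                      annual_likelihood: float,
                      annual_mitigation_cost: float) -> float:
    """ROI of mitigating risk to a Critical Data Asset.

    Risk is expressed as annualized loss expectancy: the business
    impact of an incident times its expected annual frequency.
    Assumes (simplistically) that mitigation removes the expected loss.
    """
    annualized_loss = incident_impact * annual_likelihood
    return (annualized_loss - annual_mitigation_cost) / annual_mitigation_cost


# Spending $50k per year to address a $1M incident expected once
# every four years:
# roi_of_mitigation(1_000_000, 0.25, 50_000) -> 4.0
```

In practice mitigation only reduces likelihood or impact, so the formula would compare residual risk against untreated risk, but the nickel-to-protect-a-dollar logic is the same.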
Information Security talent is difficult to find. It is even more difficult to find security talent familiar with the specific discipline of Data Protection. As a result, organizations often understaff their programs and end up with poorly tuned policies. This becomes a self-reinforcing problem: poorly tuned policies yield many more events than properly tuned ones, requiring more effort to sift through the False Positives to reach the incidents that matter. Consequently, many failing programs are characterized by hundreds of thousands of untriaged events and missed business issues that should have been addressed.
Many successful programs are turning toward Managed Security Services Providers (MSSPs), like InteliSecure, who focus on Information Security programs to assist in the operation of a program, ensuring all events are triaged in a timely manner so that incidents can be responded to properly.
Improper policy tuning in Data Loss Prevention programs is often a result of the failure to set long-term objectives for Critical Data Assets. However, even when those goals are properly set, organizations can struggle to tune DLP policies because the skill sets required to do so are relatively unique. Data Loss Prevention policy tuning is equal parts art and science, and the people performing the tuning must have the business acumen to interface with business stakeholders to ensure accurate tuning. The people I have just described are roughly as rare as unicorns, and in today’s market for cybersecurity talent, a single person with all of these attributes would likely be prohibitively expensive to hire. More often, several people are involved in policy tuning efforts, which also significantly drives up the cost of internal staffing. This is another reason why many successful Data Loss Prevention programs leverage Managed Security Services Providers.
Information Security programs are as much about business as they are about security or technology. Since businesses evolve and change at an ever-increasing rate, the program must be built and operated in order to evolve with the business. In order to ensure the Information Security program remains aligned with the business, the program should be reviewed regularly. I generally recommend the governance group review the program and make changes as necessary on a quarterly cadence, and that the Critical Data Assets be re-evaluated on an annual basis.
Many solutions are deployed as “Integrated DLP,” meaning DLP as a feature of a different product. Common examples are integrated DLP as part of Microsoft Office 365, as part of a Cloud Access Security Broker (CASB) technology, or as part of a web gateway or firewall. The Integrated DLP approach generally means separate solutions with inconsistent rule sets and responses across different channels such as email, network, web, CASB, endpoint, and Data at Rest (storage). This inconsistency, and the resultant programmatic gaps, often result in poor data protection and an exponentially higher cost of operating a program.
The irony is that many organizations embark on the Integrated DLP journey to save money up front, but the increased cost of supporting several DLP consoles and the time spent manually correlating between systems quickly outpaces any up-front savings. Since the savings are one-time and the increased costs continue in perpetuity, it quickly becomes a far more expensive solution that is far less effective. It is very similar to my offering you $400 today in exchange for your paying me $100 every month until you die. If you plan on living more than a few months, it really isn’t a good deal.
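The arithmetic behind that analogy is a simple break-even calculation, sketched here with the illustrative figures from the analogy:

```python
def months_to_break_even(upfront_savings: float,
                         monthly_extra_cost: float) -> float:
    """Months until a recurring cost wipes out a one-time savings."""
    return upfront_savings / monthly_extra_cost


# The $400-today-for-$100-a-month trade from the analogy:
# months_to_break_even(400, 100) -> 4.0
```

After the break-even point, every additional month of operating the fragmented tooling is pure loss relative to a consolidated platform.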
The reason these systems are not effective is that the information needed to perform an investigation lives in several different databases, with no credible way to correlate it and establish patterns and risk profiles. As a result, many high-impact incidents, which tend to occur across multiple channels over a period of time, are missed, and the overall value of the program is minimal.
Many organizations are blissfully unaware of just how exposed they are before they put a Data Loss Prevention program in place. As a result, many don’t expect to find anything egregious and therefore do not invest the proper amount of time building a clear and consistent Incident Response process. I always hope that nothing egregious is going on, but in security we all must hope for the best while preparing for the worst. Failure to prepare for the potential of a major incident leads to an ineffective response.
Further, if the proper process isn’t well defined, incidents are often handled based more on relationships, politics, and power dynamics than they are based on the actual facts of the incident. Inconsistent response can lead to organizational risk exposure in a few ways. First, if incidents are improperly swept under the rug, the obvious risk is the organization may fail to respond to an impactful incident. Second, if incidents are not responded to consistently and action is taken on one user and not another, the organization may be exposed to litigation. It is far better to maintain and execute a clear and objective process from the outset of a program.
Many of the points mentioned above would also doom security programs built around firewalls, endpoints, and intrusion prevention. For DLP, the failures highlight what I mentioned in my introduction: DLP requires business alignment. Without that alignment, organizations will fail to protect the data that really matters, the critical data that significantly impacts their bottom line.