Automating Cross-Platform Intelligence – The Next Evolution in Security Technology
Join InteliSecure, Forcepoint CISO Allan Alford, and Forcepoint VP of User & Data Security, Guy Filippelli, for a breakfast briefing highlighting cross-platform intelligence at RSA 2018. Learn more here.
RSA 2018 will undoubtedly include a raft of announcements related to new point products or new capabilities from existing products. In fact, for the past decade or so, the major disruptions in the technology space have come from start-up companies introducing new functionality in the form of point products. Most of those start-ups either diversify their portfolios to the point they can reach an IPO or they are acquired by large general security technology providers like Symantec or McAfee. This has led to the proliferation of a wide array of security products in many enterprises that have complementary capabilities but do not integrate well with each other, even when they are owned by the same company.
As a result, we have seen vendor fatigue and the challenges posed by swivel-chair analysis: analysis done by personnel who must move from one security component to another to try to identify patterns and truly understand what is going on within their IT environment. This vendor fatigue, combined with a global shortage of qualified security personnel, has led to demand in the marketplace for security platforms rather than a collection of point products. We are starting to see the marketplace demand platforms that integrate well not only with other security products inside the vendor's portfolio, but also with products provided by innovative startups.
However, integration of point products with each other is often still limited to simply seeing cross-platform information through a "single pane of glass". While this is a positive development, gathering intelligence from one product and applying lessons learned to another technology has largely been a human-driven effort. This can be seen especially around integrating User and Entity Behavior Analytics (UEBA) with other security technologies such as Data Loss Prevention (DLP). Given the talent shortage, companies either struggle with the ability to truly correlate this information in an intelligent way, or they are forced to turn to a Managed Security Services Provider (MSSP) like InteliSecure to fill the gaps and become the connective tissue these security technologies lack. Although many of these MSSPs possess the expertise to execute programs in this fashion, relying on human correlation is expensive and time-consuming. Decisions cannot be made in real time, and there is always a lag between information being gleaned from one technology and applied to another.
The future of security technologies is automation. Not necessarily automation of response, although orchestration and automation technologies are compelling, but rather automating the intelligence gained from one platform and applied to another. Today's tools are able to provide insights into risky and anomalous behavior when it comes to data protection, but the forensic work of correlating internal and external activities to identify threats can often take hours, weeks, or months. New platform-based integration and intelligence will be able to identify threats quickly and effectively, without the delays seen today.
At the beginning of this blog was an invitation to a breakfast briefing being held at RSA 2018. Forcepoint is one organization building a comprehensive security platform that automates intelligence from different sources, and a leading innovator in Data Security and User Behavior Monitoring. They will be announcing significant innovations on the Tuesday of the conference. The briefing during breakfast on Wednesday will let you see the innovations in action as well as allow you to ask questions of their CISO and members of their development team.
Properly Framing the Cost of a Data Breach with Executives and Boards
The origin of this blog was actually my research into building a rock-solid, indisputable return on investment (ROI) model for security programs and initiatives. However, the focus changed as I began poring over statistical models and global research to stitch together all of the elements I would need in order to weave together an amazing ROI tapestry. What I initially found, and what prompted this post, was a stagnant view of the costs associated with data breaches: a view that treats data breach costs as linear in nature and ignores the inflection points that trigger a worsening of the overall situation faced by an organization; a view that does not properly frame the conversation about how data breaches actually affect an organization monetarily.
There is a lot of research, including Ponemon's annual Cost of a Data Breach study, that does a good job of quantifying the average cost of each record lost across a large sample of records, and provides some really interesting information across multiple countries related to the difference between direct and indirect costs of a breach. It is a must-read for me every year as soon as it is released. However, the challenge with applying current cost-of-a-data-breach reports to the organizations I work with is that this type of research would yield a graph of breach cost by size that is linear in nature.
Chart 1: Sample chart of data breach costs as intimated by today’s cost of breach studies; numbers are arbitrary for illustration purposes.
My experience has shown that such a graph does not reflect reality. It's far too simple. There are at least two major inflection points that linear charts fail to capture, and these inflection points represent the escalation of awareness surrounding an organization's breach.
All breaches will incur a minimum cost related to identification and remediation, essentially a minimum cost of entry. This entry point is followed by a flattening curve until the size of the breach hits its first inflection point. There are two additional thresholds that may cause a second and even a third inflection point. These thresholds relate to general public awareness and press coverage. The trigger for a second inflection point is where security nerds like myself pay attention, start talking about it, start writing about it, and begin using it as examples in presentations, podcasts and blogs. A third inflection point is triggered when a breach becomes big enough news that it hits the mainstream and everyone becomes aware of it. You can use different logical tests to determine whether a breach has hit mainstream, but I like the non-technical family member test. This is when my least security-minded or technically inclined family member or relative starts asking me about a breach. At that point, I know it is a mainstream event.
The existence of these inflection points became apparent as I was reading an entertaining report in USA Today about the top 20 most hated companies in the United States. As I scrolled up the list from the bottom, I passed Harvey Weinstein's company, airlines that beat and bloodied their passengers, and companies that have had various public relations disasters. In the number one spot I found Equifax. It should be noted that Experian and TransUnion were not on the list, so one can assume the respondents did not have some irrational vendetta against credit reporting agencies that may have contributed to their being declined for credit cards, car loans, home loans, etc. Equifax is the most hated company in America because of a data breach. Another article noted that Equifax, a publicly traded company, had lost 31 percent of its market capitalization since the breach disclosure, erasing over $5 billion in company value. That is a ridiculous cost. (https://www.marketwatch.com/story/equifaxs-stock-has-fallen-31-since-breach-disclosure-erasing-5-billion-in-market-cap-2017-09-14)
Another fun research project that illustrates these inflection points relates to Target. If you review Target's top-line sales in Q3 of the year of the breach and Q3 of the following year, you will see a decline in sales of more than $1 billion, or 20%, in an industry sector that actually grew during the same period. While the breach itself occurred over a limited period of time, the organization felt the effects much further out.
Both of these examples and subsequent inflection points indicate general awareness, from the initial discovery by the organizations, to industry insider knowledge, to general public awareness and eventual broad media coverage. One can also assume that if an organization does not properly disclose, does not know the extent of a breach, or isn’t forthcoming with information to the public, the additional negative publicity will increase the indirect costs related to a breach.
The real chart for the cost of a data breach in my experience (numbers again used for illustrative purposes) looks more like the following.
Chart 2: Sample chart of data breach costs as they actually happen.
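To make the shape of that curve concrete, here is a minimal Python sketch of a piecewise cost model; the base cost, thresholds, and per-record rates are entirely hypothetical, chosen only to illustrate how the slope steepens at each awareness inflection point:

```python
def breach_cost(records: int) -> float:
    """Illustrative piecewise model of breach cost vs. records lost.

    All numbers are arbitrary, for illustration only:
    - up to 10k records: below the industry-insider awareness threshold
    - 10k to 1M records: the security community takes notice (steeper slope)
    - above 1M records: mainstream press coverage (steepest slope)
    """
    base_cost = 250_000  # minimum "cost of entry" any breach incurs
    if records <= 10_000:
        return base_cost + records * 50
    if records <= 1_000_000:
        return breach_cost(10_000) + (records - 10_000) * 150
    return breach_cost(1_000_000) + (records - 1_000_000) * 400

# The per-record cost rises at each awareness inflection point,
# so the curve bends upward rather than staying linear.
for n in (1_000, 10_000, 100_000, 1_000_000, 10_000_000):
    print(f"{n:>12,} records -> ${breach_cost(n):,.0f}")
```

A linear model would keep the same $50-per-record slope throughout; the point of the sketch is that the marginal cost jumps once insiders, and then the mainstream, become aware of the breach.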
If a CIO, CISO, or other person responsible for maintaining data security only provides the rest of the executive team with damages expressed as a cost per record, the executive team or board may not be thinking about, or able to visualize, how different types of incidents would monetarily affect the organization. To do so, you must account for different categories of incidents and what the inflection points represent. The first category is a minimal event, which won't gather any attention outside the organization, is often accidental, and can largely be prevented by utilizing commonly available security tools. Minimal events, depending on the industry, may not be required to be reported externally.
The second type of event is one that contains more records and will gather the attention of people like myself, but not necessarily the mainstream press. This category is where organizations start to evaluate brand impact and public relations activities, and where the cost per record starts to increase. An example of this is Deloitte. Most security professionals are familiar with the Deloitte breach, but most non-security people, unfortunately, couldn't give you much, if any, detail about it. The final category is a breach that would make the nightly news and have a major impact on enterprise value. The majority of companies in the world do not even have enough data for a breach to rise to this level; however, for those that do, there are few security expenses that are not justified if they can materially reduce the likelihood of such an event occurring.
I am not laying this out to say that companies should hide incidents from their clients, but to illustrate that costs associated with events are not equal, nor do they follow a linear path. The type of incident, its size, overall impact, and the mitigation process all affect the actual cost of a breach. While this is not the fully built ROI model I had hoped to present, I hope this post helps frame the conversation properly with executive teams and boards you may interact with. It is frustrating when all of the conversations revolve around cost per record when it is really not that simple. It's equally frustrating for security vendors to come into client environments waving the banner of Equifax, Target, or GDPR to try to scare executives into action.
As a security professional, I strongly believe the work we do is vital to individual property rights, which in turn preserve the way of life associated with capitalism and individual freedom. I also believe that well-crafted security programs intelligently designed to mitigate the right types of risk are a good investment. If you accept both of those statements as true, we must spend more time trying to build and perfect realistic investment models and less time cheapening our mission by sowing seeds of fear, uncertainty, and doubt. All of that starts with calculating the true cost of a data breach. Now, back to building that ROI model I alluded to.
GDPR: Approaches for Protecting Personally Identifiable Information (PII) and Sensitive Personal Information (SPI)
Many companies are currently in different stages of projects to comply with the European Union's General Data Protection Regulation (GDPR) ahead of the May 2018 enforcement deadline. Many vendors and service providers speak generally about GDPR and often, in my view, oversimplify solutions to the issues that are raised. Rather than try to address the whole of the regulation, I want to speak specifically about a practical issue that most companies will, at some point, need to address.
GDPR covers two categories of personal information, Personally Identifiable Information (PII) and Sensitive Personal Information (SPI). The two types of information are very different from each other and require separate approaches in order to accurately identify and protect them as they flow through an organization’s data environment.
The first category of information that GDPR protects is commonly referred to around the world as Personally Identifiable Information. This category covers information that is generally accepted as personally identifiable, such as names and national identifiers: Social Security Numbers (SSN) in the U.S., driver's license numbers in the U.K., and Italy's Codice Fiscale. It is important to note that GDPR expands the definition of PII to things like email and IP addresses.
While the definition of PII has been expanded to include new types of identifiable information, the identifiers have commonality in the fact that they generally follow defined formats and are relatively easy to program into a content analytics system through the use of regular expressions. Because of these commonalities, Data Loss Prevention (DLP) technologies are ideal for identifying and protecting this type of information. DLP technologies can be enterprise class or integrated into other products like firewalls, cloud access security brokers (CASBs), or web gateways.
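As a sketch of why format-driven identifiers lend themselves to regular expressions, here is a minimal Python example; the patterns are deliberately simplified illustrations (no checksum, context, or proximity validation) and do not represent the detection logic of any particular DLP product:

```python
import re

# Simplified, hypothetical patterns for identifier formats discussed above.
# Production DLP engines add validation digits, keyword context, and
# proximity rules on top of raw pattern matching.
PII_PATTERNS = {
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    # Italian Codice Fiscale: 6 letters, 2 digits, 1 letter, 2 digits,
    # 1 letter, 3 digits, 1 letter (no checksum validation here)
    "it_codice_fiscale": re.compile(r"\b[A-Z]{6}\d{2}[A-Z]\d{2}[A-Z]\d{3}[A-Z]\b"),
    # GDPR expands PII to identifiers like email addresses
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def find_pii(text: str) -> dict:
    """Return all matches per identifier type found in the text."""
    return {name: pattern.findall(text)
            for name, pattern in PII_PATTERNS.items()
            if pattern.findall(text)}

sample = "Contact jane.doe@example.com, SSN 123-45-6789, CF RSSMRA85T10A562S."
print(find_pii(sample))
```

The contrast with SPI should already be visible: these patterns work precisely because the identifiers have fixed, machine-recognizable formats, which is what SPI lacks.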
There are two key areas within GDPR that identify DLP as the optimal solution for PII protection. First, the sections related to data security stipulate that the organization have reasonable controls to monitor the flow of data throughout the environment. In my interpretation, this means that an organization must have the ability to monitor the use of personal information at the endpoint, in transit via web and email channels, and where it is stored throughout an environment. It should also include visibility into how information is stored in cloud applications and how it is transferred between cloud environments. Second, as a practical matter, I cannot imagine a scenario in which an organization could comply with Right to be Forgotten or guarantee a Right to Erasure without the capability to find that PII throughout all of their systems, including cloud applications, and remove it. Therefore, a DLP capability, while not making an organization compliant in and of itself, is a required element in order to achieve compliance.
It should be said that building a proper DLP program for the purposes of complying with the relevant GDPR articles requires planning, coordination between business units, and a good deal of care and feeding. However, protecting PII has been a best practice for more than a decade and many people have experience building such programs.
Protecting Sensitive Personal Information is a far greater operational challenge.
Sensitive Personal Information refers to information that does not identify an individual, but is related to an individual and communicates information that is private or could potentially harm an individual should it be made public. SPI includes things like biometric data, genetic information, sex life, trade union membership, sexual orientation, etc. The challenge with traditional data security tools like DLP in protecting SPI is that many of these terms exist in common usage without being related to an individual, and it is very difficult to program a content analytics engine to find information that is in scope for GDPR without also finding large volumes of information that is not. The most elegant solution for protecting SPI, in my experience, is to add a Data Classification program to the overall security program and integrate it with the DLP program.
Data Classification allows a user to select a classification from a list to tag data. Many people are familiar with classification schemas used by governments and militaries, which classify information by levels of secrecy. For example, classifications may include public, sensitive, secret, top secret, etc. The most effective Data Classification tools are very flexible, allowing for multiple levels of classification and customizable fields. For unstructured SPI data, an organization could develop a classification schema with simple drop-down menus that ask the user whether a document contains PII or SPI, with yes or no choices. The Data Classification solution would then apply metadata tags to those documents, which would be leveraged by security tools like DLP to apply rules to the information based on those tags. This is a far more efficient and effective method of protecting SPI than trying to distinguish all instances of sensitive personal information categories referencing an individual from the same terms in common usage.
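One way to picture that tag-then-enforce flow is the following Python sketch; the `Document` structure, tag names, and rules are hypothetical, since real classification tools write these tags into document metadata that the DLP engine then reads:

```python
from dataclasses import dataclass, field

@dataclass
class Document:
    """Hypothetical stand-in for a file; real tools store tags in metadata."""
    name: str
    tags: dict = field(default_factory=dict)

def classify(doc: Document, contains_pii: bool, contains_spi: bool) -> None:
    """Record the user's drop-down answers as metadata tags."""
    doc.tags["contains_pii"] = contains_pii
    doc.tags["contains_spi"] = contains_spi

def dlp_action(doc: Document, channel: str) -> str:
    """Illustrative tag-based rules: block SPI on any channel,
    encrypt PII leaving via email, otherwise allow."""
    if doc.tags.get("contains_spi"):
        return "block"
    if doc.tags.get("contains_pii") and channel == "email":
        return "encrypt"
    return "allow"

report = Document("hr_medical_report.docx")
classify(report, contains_pii=True, contains_spi=True)
print(dlp_action(report, "email"))  # SPI tag present, so the transfer is blocked
```

The key design point is that the DLP rule never has to recognize SPI content itself; it only has to trust the tag a human applied at classification time.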
Data Classification programs can be used to communicate effectively in a human-readable fashion as well. Many people may interact with PII and SPI on a frequent basis and not really think about the potential sensitivity of the information they handle. A large part of the spirit of GDPR is to cause people to think about the information they are handling and to handle it with due care. Complying with the spirit of the regulation will require a culture change in some organizations, which can be aided considerably by building a Data Classification program. This way, users can easily identify when they are handling sensitive information and perhaps handle such information with more care as they go about their daily routine. Many Data Classification solutions also have the ability to communicate with the end user through tips or pop-up messaging to reinforce the behavioral change.
Breaches of personal data can happen in a variety of ways. Those that garner the most attention are large scale breaches often caused by incorrect technical configurations or a lack of due care on an industrial scale, but far more frequently, information is compromised on a small scale due to carelessness or a general lack of awareness. In these cases, Data Classification can help significantly.
Many organizations have what I call GDPR fatigue, meaning that there have been so many technology and service providers using fear to sell products and services without addressing specific solutions to the challenges posed by GDPR that many organizations have stopped listening. I do not look at GDPR as a reason for fear, but rather a positive way for organizations to enhance their security programs to protect critical client data and personal information.
GDPR compliance is relatively straightforward. However, the basis of compliance is understanding how to identify and protect Personally Identifiable Information (PII) and Sensitive Personal Information (SPI). Therefore, programs to enable PII and SPI identification and protection are the foundational elements of compliance from a tools and capabilities perspective. Data Loss Prevention and Data Classification form a powerful combination for protecting both PII and SPI. The challenge then becomes one of leveraging those capabilities properly to fulfill controller and processor obligations and protect data subject rights.
Are You Protecting What Matters Most?
This week has been special as InteliSecure launched our first ever Critical Data Protection Benchmark Survey. As the organization that has been deploying and managing DLP technologies longer than anyone else in the world, we are uniquely positioned to share our insights in an effort to help guide organizations down the path of protecting their most critical information. The survey is designed to address key areas across people, process, and technology for data protection programs in order to help an organization assess its current posture and to offer ideas for solutions to identified gaps. It will also allow you to benchmark your organization against your peers. We're very proud to release this survey and plan to repeat it annually in order to help advance the protection of critical data assets globally.
In the spirit of the Critical Data Protection Benchmark Survey, I wanted to write an article that addresses the key question: are you really protecting the information that matters most? In 2017, Gartner indicated that renewed interest in the EU General Data Protection Regulation (GDPR) would drive 65% of data loss prevention (DLP) buying decisions through 2018. While DLP is certainly not the only technology that makes up data protection strategies, it is an important one, and the one most specifically focused on data protection, so it acts as a good bellwether for the rest of the data protection marketplace.
GDPR and other global regulations related to the protection of personal information, of which there are many, are driving the adoption and utilization of data security technologies as intended. Unfortunately, security programs are still largely driven reactively by legislation. There are a few problems with that paradigm. First, the legislative process is rarely described as fast or agile. Generally, today's legislation addresses yesterday's problem. That does not mean the problems the legislation was intended to address no longer exist, but it does mean that the situation has likely changed significantly.
Second, legislation is public. Therefore, anything prescriptive inside a piece of legislation is well known to any moderately sophisticated adversary, and those adversaries will have countermeasures developed for any measure you are mandated to implement. That's why I often tell people security begins where compliance ends. Compliance is necessary and generally not a bad thing, but true security is about making yourself a hard target, and it is graded on a curve. Attackers have finite resources just like defenders do, and they are generally trying to achieve maximum benefit for minimal cost. As I mentioned in my book about building comprehensive security programs, if you are camping with a friend and a bear attacks, you don't have to outrun the bear, you just need to outrun your friend. This concept doesn't necessarily apply to organizations protecting critical infrastructure or secrets that affect the security of nations, but it certainly applies to those protecting financial instruments or Personally Identifiable Information.
Finally, regulations aren't enacted by governments to protect companies; they are enacted to protect citizens and national security. Unless your competitive advantage is a significant contributor to your country's Gross Domestic Product (GDP) and all of your competitors are overseas, the regulation isn't designed to protect your business. The information that they mandate you protect probably isn't the most important information to your business; it's likely the most important information to your government and your customers. That doesn't mean you shouldn't comply, you should. You should do your absolute best to comply with the spirit and the letter of every regulation passed to protect information, but it isn't enough.
Compliance generally has easily quantifiable penalties and risks, so achieving and maintaining compliance often gets funded as a cost of doing business. That doesn't mean the security program cannot be expanded to include information that is not part of a regulation, though. If you were to use compliance for initial funding and budget, but then build a governance group or engage business leadership to identify the information that is most important to the business, wouldn't you be making better use of the funds you had allocated to your security program?
Allow me to share an example with you. If you are an insurance company, you likely have Personally Identifiable Information (PII) and Protected Health Information (PHI) that you are required to protect. Most insurance companies may never go beyond that initial scope. However, there is very sensitive information that deals with how the business operates that is not regulated. What about the actuary models that allow the insurer to calculate their risk pool and the impact of adding an individual to that pool in order to ensure they can cover that individual while maintaining a healthy profit margin? What if a competitor had access to those models? Could they not price out services in markets and consciously decide to either undercut your pricing at a lower margin or pull out of markets they don’t want to compete in?
What about a health insurance company? Beyond compliance with the Health Insurance Portability and Accountability Act (HIPAA), what else could be important? What about rates they have negotiated with their networks of doctors? What about pricing and discounting structures they use to sell their plans through employers? What about plans for future products and plans and rates they intend to offer? The possibilities are endless, but the true tragedy in much of it is that organizations often own the tools they need to protect themselves at a much higher level than they are.
I encourage you to take the benchmark and I hope you find it valuable and insightful. Regardless, please ask yourself this question when you reflect on your organization: are you really protecting what matters most? Is your security program built to defend your business or simply to pass an audit? As the world becomes more connected and grows ever smaller, the answer to that question may have a significant impact on your enterprise value.