Blog


Theft of Intellectual Property Costs More than you Think

Corporate espionage is one of the least understood and most downplayed elements of cyber security. Most people focus on massive breaches involving personal or financial data, the majority of which are discovered by someone other than the victim organization. Those breaches come to light because the personal data is often sold on illicit marketplaces. Would we ever know the breaches occurred if the information was not sold on the open market? That is exactly the situation with corporate espionage.

According to a recent study commissioned by Bromium and unveiled at RSA Conference 2018, cyber crime generates $1.5 trillion per year. If cyber crime were a country, it would have the 13th highest GDP in the world. Based on media coverage and the regulations being passed around the world, you would think that regulated data would make up the majority of that revenue, but you would be wrong. Theft of trade secrets and intellectual property accounted for $500 billion, a full third of overall cyber crime revenue, while regulated information accounted for $160 billion, or just over 10%.

Back in 2004, a study unveiled at the London Infosecurity Summit indicated that, “more than 70% of people would reveal their computer password in exchange for a bar of chocolate”. The world has changed since then, and most employees are more aware of security in general. Assuming your organization has an access control and entitlement review process, and knows where every copy of critical data is stored, you would not grant access to someone who would trade their password for a candy bar. What if the stakes were much higher? Would your employees trade your secrets for $1 million or $10 million? What if they weren’t asked to actually deliver information or give away their password? What if they were told to simply click on a phishing email and plead ignorance?

A Lack of Awareness

Most organizations use regulatory compliance to fund their data security initiatives. Therefore, most programs have an outsized focus on compliance initiatives rather than objectively valuing the data the organization holds, performing a risk assessment against that data, and prioritizing security based on that assessment. If theft of trade secrets and intellectual property is three times more economically impactful each year than theft of regulated data, why are organizations so much more concerned with protecting regulated information than intellectual property?

There are likely many reasons for this, beginning with a simple lack of awareness. Most intellectual property thefts are conducted in secret, with a buyer or state sponsor identified before the theft occurs. Stolen regulated data is often sold on a marketplace, and there are far fewer requirements for an organization to publicly disclose the theft of intellectual property. Regardless of the reasoning, this lack of awareness helps to drive increased intellectual property theft, as this information is simply not as well protected as regulated data. In the last seven years I’ve noticed that most organizations fund their initiatives through a compliance need, and many programs begin with protecting regulated data even when the organization owns significantly valuable intellectual property. Many organizations never shift from regulated data protection to intellectual property protection, resulting in more theft of intellectual property.

In actuality, if compliance is the driver for security spending in an organization, that organization’s security team has ceded its cyber security strategy to lawmakers. That is a truly scary proposition. Lawmakers, in most countries, are not cyber security experts. This is not an indictment of lawmakers; most of them did something else before they were in government. If you were a doctor or a lawyer who won an election and were suddenly expected to set public policy on technology and cyber security, you would likely struggle to become an expert overnight as well. Therefore, developing a security strategy driven by compliance means an organization will always be behind the curve and likely unprotected.

Sophisticated Actors and Insiders

Regulated data is generally stolen using commoditized tools, by criminal organizations that range from unsophisticated to reasonably sophisticated. Intellectual property, however, is often stolen by well-funded and sophisticated actors who leverage insiders to bypass externally facing corporate defenses. Firewalls and deception systems, while very good at making it difficult for a true external actor to find what they are looking for, do not help address the insider threat. Insiders generally know exactly where the data they want is located and, by definition, must be able to access it in order to do their jobs. To address the threat posed by these actors, it is imperative that organizations monitor the movement of the data itself as well as the behavior of users. Advanced endpoint protection platforms that employ machine learning to detect advanced malware are similarly useless against an insider, since that person is not likely to deploy malicious code to steal data. The truth is that many of the products CISOs are spending their budgets on and their time pursuing are useless against one of the most common tactics used to generate a third of cyber crime’s overall revenue.
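To make that monitoring idea concrete, here is a minimal sketch of baseline-driven detection, assuming a hypothetical event log that summarizes how much data each user moves per day; the field names, volumes, and three-sigma threshold are illustrative assumptions, not a production design.

```python
from statistics import mean, stdev

# Hypothetical daily event summaries: (user, day, bytes of sensitive data moved
# via email, web upload, or removable media). Values are purely illustrative.
events = [
    ("alice", "2018-03-01", 4_200_000),
    ("alice", "2018-03-02", 3_900_000),
    ("alice", "2018-03-03", 4_500_000),
    ("alice", "2018-03-04", 95_000_000),   # the day worth investigating
    ("bob",   "2018-03-01", 1_100_000),
    ("bob",   "2018-03-02", 1_300_000),
]

def flag_anomalies(events, sigma=3.0):
    """Flag days where a user's data movement far exceeds their own history.

    Each day is compared against a baseline built from that user's *other*
    days, so one huge exfiltration day cannot hide inside its own average.
    """
    per_user = {}
    for user, day, volume in events:
        per_user.setdefault(user, []).append((day, volume))

    alerts = []
    for user, rows in per_user.items():
        if len(rows) < 4:                      # not enough history for a baseline
            continue
        for day, volume in rows:
            others = [v for d, v in rows if d != day]
            baseline, spread = mean(others), stdev(others)
            if volume > baseline + sigma * max(spread, 1.0):
                alerts.append((user, day, volume))
    return alerts

print(flag_anomalies(events))   # [('alice', '2018-03-04', 95000000)]
```

The point is not the arithmetic; it is that the signal comes from the data and the user's own behavior, which is exactly what perimeter tools and malware detection never see.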

The insider threat is an inconvenient truth: we’ve all heard of it and know it exists, but no one wants to believe their friend or colleague is going to act maliciously. The truth is, people do. There have been recent high-profile cases that illustrate this concept, such as Waymo vs. Uber, but it stands to reason that there are many others that are never discovered or never reported. This is certainly not the first time Uber has been accused of stealing intellectual property from its competitors. In fact, a recent article from MarketWatch reports that, according to a lawsuit, “Uber Technologies Inc. operated a clandestine unit dedicated to stealing trade secrets.”

Another article, in CNN Money, details the story of American Superconductor, a company based in Massachusetts that recently won a lawsuit against its former Chinese partner Sinovel, which was convicted in US federal court of stealing American trade secrets. The short version of the story is that American Superconductor began doing business in China in 2007, supplying Sinovel with the components that run its wind turbines. In 2011, Sinovel did not pay American Superconductor’s outstanding invoices and canceled orders that were ready to be shipped. Upon investigation, it was revealed that an employee at an American Superconductor subsidiary had been offered $2 million to turn over American Superconductor’s source code for its wind turbine control software. Investigators even uncovered Skype conversations in which the bribed employee told Sinovel that once they had the source code, they could separate from American Superconductor. American Superconductor had a stronger-than-average security program to protect against attacks from the outside; however, the failure to monitor user behavior and data movement with respect to trusted insiders nearly cost them the company.

Think of all of the people inside your organization who have access to critical intellectual property and trade secrets. Not just the few in the middle of the inner circle, but every person involved in storing, processing, or transmitting that information. Even assuming the concepts of least privilege and need to know are enforced, this is still likely to be at least ten people. Most organizations would probably admit that they do not implement least privilege well, and very few CISOs would stake their reputation on a bet that there are no overly permissive systems and file shares in their environment. As a result, in most organizations there are more users who can access information than there are users who absolutely must in order to perform their job functions. It is no surprise, then, that privilege misuse, or users accessing data they have no legitimate need to access and then leaking that data, is the second most common incident type in Verizon’s 2018 Data Breach Investigations Report. It accounted for over 20% of total incidents in 2017 and is one of the most common methods through which data is breached, well ahead of more publicized incident types like crimeware (#3) or payment card skimmers, which are often talked about but are responsible for by far the fewest incidents. Given that most organizations do not implement and maintain stringent access control policies, and that most employ relatively flat, non-segmented networks, think of the number of people in your organization who could potentially access such critical information.
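One simple way to see this gap is an entitlement review that compares who can access a sensitive repository against who actually needs to. The sketch below is a minimal illustration of that comparison, using hypothetical share names, users, and need-to-know lists; a real review would pull from directory groups and file-system ACLs rather than hard-coded sets.

```python
# Hypothetical entitlement review: compare who has been granted access to a
# sensitive share against the people with a documented need to know.
granted = {
    "engineering-designs": {"alice", "bob", "carol", "dave", "erin", "frank"},
    "hr-records":          {"alice", "grace", "heidi"},
}
need_to_know = {
    "engineering-designs": {"alice", "bob", "carol"},
    "hr-records":          {"grace", "heidi"},
}

for share, users in granted.items():
    excess = users - need_to_know.get(share, set())
    if excess:
        print(f"{share}: {len(excess)} users with access but no documented need: "
              f"{sorted(excess)}")
```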

Fredrik Lindstrom, Manager CIO Advisory at KPMG, was quoted in a CIO magazine article saying, “Network segmentation, or splitting a network into sub networks, is the best way to phase out outdated security approaches. Unfortunately, it is also one of the most neglected parts of a cyber security program, because most organizations believe network segmentation is too complex and cumbersome.”

Conclusion

In the face of these odds, how can an organization possibly protect itself? It could start by monitoring the data that matters most and analyzing the behavior of users and credentials against baseline behavior. No data theft can happen without a change in the behavior of the data and the user. This is true whether the threat is an external actor, a zero-day exploit, or a trusted insider. The problem is that this requires an organization to admit that the insider threat is real and could affect them, and to commit to the hard work required to protect their critical data assets. People naturally gravitate to easy solutions such as technologies that can be deployed with little thought or attention. Fewer people want to do the hard work of building a program to protect critical data and address insider threats.

While it is unpopular to admit that your friends and neighbors may be the people most likely to put critical data at risk, and far more difficult and time consuming to build a security program that takes critical data assets and user behavior monitoring and analytics into account, the consequences of not doing so can be catastrophic. American Superconductor is not alone; there are many other cases like it that never end up in court and are never publicly disclosed. This silence is part of the awareness problem around these types of challenges. There is hope, however. I have many stories, which I cannot share due to confidentiality agreements, of properly built and maintained security programs thwarting similar attempts to steal and capitalize on intellectual property. These things are happening. Don’t ignore the facts. It’s time to protect your organization. You don’t need to wait for the government to tell you that you must.


Automating Cross-Platform Intelligence – The Next Evolution in Security Technology

Join InteliSecure, Forcepoint CISO Allan Alford, and Forcepoint VP of User & Data Security, Guy Filippelli, for a breakfast briefing highlighting cross-platform intelligence at RSA 2018. Learn more here.


RSA 2018 will undoubtedly include a raft of announcements related to new point products or new capabilities in existing products. In fact, for the past decade or so, the major disruptions in the technology space have come from start-up companies introducing new functionality in the form of point products. Most of those start-ups either diversify their portfolios to the point that they can reach an IPO, or they are acquired by large general security technology providers like Symantec or McAfee. This has led to the proliferation in many enterprises of a wide array of security products that have complementary capabilities but do not integrate well with each other, even when they are owned by the same company.

As a result, we have seen vendor fatigue and the challenges posed by swivel-chair analysis: analysis done by personnel moving from one security component to another to try to identify patterns and truly understand what is going on within their IT environment. This vendor fatigue, combined with a global shortage of qualified security personnel, has led to demand in the marketplace for security platforms rather than collections of point products. We are starting to see the marketplace demand platforms that not only integrate well with other security products inside the vendor’s portfolio, but also with products provided by innovative startups.

However, integration of point products with each other is often still limited to simply seeing cross-platform information through a “single pane of glass”. While this is a positive development, gathering intelligence from one product and applying lessons learned to another technology has largely been a human-driven effort. This can be seen especially around integrating User and Entity Behavior Analytics (UEBA) with other security technologies such as Data Loss Prevention (DLP). Given the talent shortage, companies either struggle with the ability to truly correlate this information in an intelligent way, or they are forced to turn to a Managed Security Services Provider (MSSP) like InteliSecure to fill the gaps and become the connective tissue these security technologies lack. Although many of these MSSPs possess the expertise to execute programs in this fashion, relying on human correlation is expensive and time consuming. Decisions cannot be made in real time and there is always a lag time between information being gleaned from one technology and applied to another.
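As a purely illustrative sketch of what automating that correlation could look like, the example below feeds a user risk score from a UEBA system into a DLP policy decision without a human in the loop; the function names, scores, and actions are hypothetical and do not represent any vendor’s actual API.

```python
# Illustrative only: a hypothetical integration in which a user's UEBA risk
# score automatically tightens the DLP action applied to their activity.
def ueba_risk_score(user: str) -> float:
    """Stand-in for a query to a UEBA platform; returns a 0-100 risk score."""
    scores = {"alice": 12.0, "mallory": 87.5}   # hypothetical values
    return scores.get(user, 25.0)

def dlp_action(user: str, classification: str) -> str:
    """Choose a DLP response based on data classification and current user risk."""
    risk = ueba_risk_score(user)
    if classification == "public":
        return "allow"
    if risk >= 75:
        return "block_and_alert"    # high-risk user touching sensitive data
    if risk >= 40:
        return "encrypt_and_log"
    return "log_only"

print(dlp_action("mallory", "trade_secret"))   # block_and_alert
print(dlp_action("alice", "trade_secret"))     # log_only
```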

The future of security technologies is automation. Not necessarily automation of response, although orchestration and automation technologies are compelling, but rather automating the process of taking intelligence gained from one platform and applying it to another. Today’s tools are able to provide insights into risky and anomalous behavior when it comes to data protection, but the forensic work of correlating internal and external activities to identify threats can often take hours, weeks, or months. New platform-based integration and intelligence will be able to identify threats quickly and effectively, without the delays seen today.

At the beginning of this blog was an invitation to a breakfast briefing being held at RSA 2018. Forcepoint, a leading innovator in data security and user behavior monitoring, is one organization building a comprehensive security platform that automates intelligence from different sources. They will be announcing significant innovations on Tuesday at the conference. The briefing during breakfast on Wednesday will let you see the innovations in action as well as ask questions of their CISO and members of their development team.


Properly Framing the Cost of a Data Breach with Executives and Boards

The origin of this blog was actually my research into building a rock-solid, indisputable return on investment (ROI) model for security programs and initiatives. However, the focus changed as I began poring over statistical models and global research to stitch together all of the elements I would need in order to weave together an amazing ROI tapestry. What I initially found, and what prompted this post, was a stagnant view of the costs associated with data breaches. A view that treats data breach costs as linear and ignores the inflection points that trigger a worsening of the overall situation an organization faces. A view that does not properly frame the conversation about how data breaches actually affect an organization monetarily.

There is a lot of research, including Ponemon’s annual Cost of a Data Breach study, that does a good job of quantifying the average cost of each record lost across a large sample of breaches, and provides some really interesting information across multiple countries about the difference between direct and indirect costs of a breach. It is a must-read for me every year as soon as it is released. However, the challenge with applying current cost-of-a-data-breach reports to the organizations I work with is that this type of research yields a graph of breach cost by breach size that is linear.

Chart 1: Sample chart of data breach costs as intimated by today’s cost of breach studies; numbers are arbitrary for illustration purposes.

My experience has shown that such a graph does not reflect reality. It’s far too simple. There are at least two major inflection points that aren’t accurately identified. The inflection points not identified in linear charts represent the escalation of awareness surrounding an organization’s breach.

All breaches will incur a minimum cost related to identification and remediation, essentially a minimum cost of entry. This entry point is followed by a flattening curve until the size of the breach hits its first inflection point. There are two additional thresholds that may cause a second and even a third inflection point. These thresholds relate to general public awareness and press coverage. The trigger for a second inflection point is where security nerds like myself pay attention, start talking about it, start writing about it, and begin using it as examples in presentations, podcasts and blogs. A third inflection point is triggered when a breach becomes big enough news that it hits the mainstream and everyone becomes aware of it. You can use different logical tests to determine whether a breach has hit mainstream, but I like the non-technical family member test. This is when my least security-minded or technically inclined family member or relative starts asking me about a breach. At that point, I know it is a mainstream event.

The existence of these inflection points became apparent as I was reading an entertaining report in USA Today about the top 20 most hated companies in the United States. As I scrolled up the list from the bottom, I passed Harvey Weinstein’s company, airlines that beat and bloodied their passengers, and companies that have had various public relations disasters. In the number one spot I found Equifax. It should be noted that Experian and TransUnion were not on the list, so one can assume that the respondents did not have some irrational vendetta against credit reporting agencies that may have contributed to them being declined for credit cards, car loans, home loans, etc. Equifax is the most hated company in America because of a data breach. Another article described how Equifax, a publicly traded company, had lost 31 percent of its market capitalization, more than $5 billion of company value, since the breach. That is a ridiculous cost. (https://www.marketwatch.com/story/equifaxs-stock-has-fallen-31-since-breach-disclosure-erasing-5-billion-in-market-cap-2017-09-14)

Another fun research project that illustrates these cost-raising inflection points relates to Target. If you compare Target’s top-line sales in Q3 of the year of the breach with Q3 of the following year, you will see a decline in sales of more than $1 billion, or 20%, in an industry sector that actually grew during the same period. While the initial breach occurred over a limited period of time, the organization was still feeling the effects much further out.

Both of these examples illustrate the escalation of awareness behind the inflection points: from the initial discovery by the organization, to industry-insider knowledge, to general public awareness and eventual broad media coverage. One can also assume that if an organization does not properly disclose a breach, does not know its extent, or isn’t forthcoming with information to the public, the additional negative publicity will increase the indirect costs related to the breach.

The real chart for the cost of a data breach in my experience (numbers again used for illustrative purposes) looks more like the following.

Chart 2: Sample chart of data breach costs as they actually happen.

  • Inflection Point 1: Security incident becomes more widely known
  • Inflection Point 2: Security incident hits the mainstream
  • Inflection Point 3: Ongoing media coverage and remediation
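To make the shape of Chart 2 concrete, here is a minimal sketch of a piecewise cost model in which the per-record cost jumps each time breach size crosses an awareness threshold. Every number in it is arbitrary, chosen only to reproduce the shape of the curve, not drawn from any study.

```python
# Illustrative piecewise model of breach cost: the per-record cost jumps at
# each awareness threshold. All numbers are arbitrary, chosen only to
# reproduce the shape of Chart 2.
BASE_COST = 250_000            # minimum cost of identification and remediation
TIERS = [
    # (records threshold, per-record cost once the breach exceeds it)
    (0,           5.0),        # handled quietly, little outside attention
    (100_000,    20.0),        # inflection 1: industry insiders take notice
    (1_000_000,  60.0),        # inflection 2: mainstream press coverage
    (10_000_000, 90.0),        # inflection 3: ongoing coverage and remediation
]

def breach_cost(records: int) -> float:
    """Total cost: fixed entry cost plus tiered per-record costs."""
    total = BASE_COST
    for (low, rate), (next_low, _) in zip(TIERS, TIERS[1:] + [(records, 0.0)]):
        high = min(records, next_low)
        if high > low:
            total += (high - low) * rate
    return total

for n in (50_000, 500_000, 5_000_000, 50_000_000):
    print(f"{n:>11,} records -> ${breach_cost(n):,.0f}")
```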

If a CIO, CISO, or other person responsible for maintaining data security presents the rest of the executive team only with damages calculated as a cost per record, the executive team or board may not be thinking about, or able to visualize, how different types of incidents would monetarily affect the organization. To do so, you must account for different categories of incidents and what the inflection points represent. The first category is the minimal event, which won’t gather any attention outside the organization, is often accidental, and can largely be prevented with commonly available security tools. Minimal events, depending on the industry, may not be required to be reported externally.

The second type of event is one that involves more records and will gather the attention of people like me, but not necessarily the mainstream press. This category is where organizations start to evaluate brand impact and public relations activities, and where the cost per record starts to increase. An example of this is Deloitte: most security professionals are familiar with the Deloitte breach, but most non-security people, unfortunately, couldn’t give you much, if any, detail about it. The final category is a breach that makes the nightly news and has a major impact on enterprise value. The majority of companies in the world do not even hold enough data for a breach to rise to this level; however, for those that do, there are few security expenses that are not justified if they can materially reduce the likelihood of such an event occurring.

I am not laying this out to say that companies should hide incidents from their clients, but to illustrate that the costs associated with events are not equal and do not follow a linear path. The type of incident, its size, its overall impact, and the mitigation process all affect the actual cost of a breach. While this is not the fully built ROI model I had hoped to present, I hope this post helps frame the conversation properly with the executive teams and boards you may interact with. It is frustrating when all of the conversations revolve around cost per record when it is really not that simple. It’s equally frustrating when security vendors come into client environments waving the banner of Equifax, Target, or GDPR to try to scare executives into action.

As a security professional, I strongly believe the work we do is vital to individual property rights, which in turn preserve the way of life associated with capitalism and individual freedom. I also believe that well-crafted security programs, intelligently designed to mitigate the right types of risk, are a good investment. If you accept both of those statements as true, we must spend more time trying to build and perfect realistic investment models and less time cheapening our mission by sowing seeds of fear, uncertainty, and doubt. All of that starts with calculating the true cost of a data breach. Now, back to building that ROI model I alluded to.


GDPR: Approaches for Protecting Personally Identifiable Information (PII) and Sensitive Personal Information (SPI)

Many companies are currently in different stages of projects to comply with the European Union’s General Data Protection Regulation (GDPR) ahead of the May 2018 enforcement deadline. Many vendors and service providers speak generally about GDPR and often, in my view, oversimplify solutions to the issues that are raised. Rather than try to address the whole of the regulation, I want to speak specifically about a practical issue that most companies will, at some point, need to address.

GDPR covers two categories of personal information: Personally Identifiable Information (PII) and Sensitive Personal Information (SPI). The two types of information are very different from each other and require separate approaches in order to accurately identify and protect them as they flow through an organization’s data environment.

Protecting Personally Identifiable Information (PII)

The first category of information that GDPR protects is commonly referred to around the world as Personally Identifiable Information. It covers information that is generally accepted as personally identifiable, such as names and national identifiers like Social Security Numbers (SSNs) in the U.S. and European identifiers such as driver’s license numbers in the U.K. and Italy’s Codice Fiscale. It is important to note that GDPR expands the definition of PII to things like email and IP addresses.

While the definition of PII has been expanded to include new types of identifiable information, these identifiers have something in common: they generally follow defined formats and are relatively easy to program into a content analytics system through the use of regular expressions. Because of these commonalities, Data Loss Prevention (DLP) technologies are ideal for identifying and protecting this type of information. DLP technologies can be enterprise class or integrated into other products like firewalls, cloud access security brokers (CASBs), or web gateways.
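As a minimal sketch of the kind of pattern matching a DLP content analytics engine performs, the example below scans text for a few common identifier formats; the patterns are deliberately simplified illustrations, not the validation logic of any real product.

```python
import re

# Simplified illustrations of identifier patterns; real DLP engines use far
# more robust validation, checksums, and contextual rules.
PII_PATTERNS = {
    "us_ssn":        re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "ipv4_address":  re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
}

def scan_for_pii(text: str) -> dict:
    """Return any matches for each registered identifier pattern."""
    return {name: pattern.findall(text)
            for name, pattern in PII_PATTERNS.items()
            if pattern.search(text)}

sample = "Contact jane.doe@example.com, SSN 123-45-6789, last login from 10.0.0.12"
print(scan_for_pii(sample))
# {'us_ssn': ['123-45-6789'], 'email_address': ['jane.doe@example.com'],
#  'ipv4_address': ['10.0.0.12']}
```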

There are two key areas within GDPR that point to DLP as the optimal solution for PII protection. First, the sections related to data security stipulate that the organization have reasonable controls to monitor the flow of data throughout the environment. In my interpretation, this means that an organization must have the ability to monitor the use of personal information at the endpoint, in transit via web and email channels, and where it is stored throughout the environment. It should also include visibility into how information is stored in cloud applications and how it is transferred between cloud environments. Second, as a practical matter, I cannot imagine a scenario in which an organization could comply with the Right to Erasure, often called the Right to be Forgotten, without the capability to find that PII throughout all of its systems, including cloud applications, and remove it. Therefore, a DLP capability, while not making an organization compliant in and of itself, is a required element in achieving compliance.

It should be said that building a proper DLP program for the purposes of complying with the relevant GDPR articles requires planning, coordination between business units, and a good deal of care and feeding. However, protecting PII has been a best practice for more than a decade and many people have experience building such programs.

Protecting Sensitive Personal Information is a far greater operational challenge.

Protecting Sensitive Personal Information (SPI)

Sensitive Personal Information refers to information that does not identify an individual on its own, but is related to an individual and communicates something private or potentially harmful to that individual should it be made public. SPI includes things like biometric data, genetic information, sex life, trade union membership, sexual orientation, and so on. The challenge with traditional data security tools like DLP in protecting SPI is that many of these terms appear in common usage without being related to an individual, and it is very difficult to program a content analytics engine to find information that is in scope for GDPR without also finding large volumes of information that is not. The most elegant solution for protecting SPI, in my experience, is to add a Data Classification program to the overall security program and integrate it with the DLP program.

Data Classification allows a user to select a classification from a list to tag data. Many people are familiar with the classification schemas used by governments and militaries, which classify information by levels of secrecy; for example, classifications may include public, sensitive, secret, and top secret. The most effective Data Classification tools are very flexible, allowing for multiple levels of classification and customizable fields. For unstructured SPI data, an organization could develop a classification schema with simple drop-down menus that ask the user, with yes or no choices, whether a document contains PII or SPI. The Data Classification solution would then apply metadata tags to those documents, which security tools like DLP would leverage to apply rules to the information based on those tags. This is a far more efficient and effective method of protecting SPI than trying to distinguish every instance of a sensitive personal information category that references an individual from the same terms in common usage.
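Here is a minimal sketch of that hand-off between classification and DLP, assuming hypothetical tag names, destinations, and actions rather than any specific product’s schema.

```python
from dataclasses import dataclass, field

# Illustrative sketch: a classification choice stored as document metadata,
# and a DLP-style egress rule that keys off the tags instead of raw content.
# Tag names, destinations, and actions are hypothetical.
@dataclass
class Document:
    name: str
    body: str
    tags: dict = field(default_factory=dict)

def classify(doc: Document, contains_pii: bool, contains_spi: bool) -> None:
    """Record the user's drop-down answers as metadata tags on the document."""
    doc.tags["contains_pii"] = contains_pii
    doc.tags["contains_spi"] = contains_spi

def dlp_decision(doc: Document, destination: str) -> str:
    """Apply a simple egress rule based on the classification tags."""
    if doc.tags.get("contains_spi") and destination == "external_email":
        return "block"
    if doc.tags.get("contains_pii") and destination == "external_email":
        return "encrypt"
    return "allow"

report = Document("benefits_report.docx", "...")
classify(report, contains_pii=True, contains_spi=True)
print(dlp_decision(report, "external_email"))   # block
```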

Data Classification programs can also be used to communicate effectively in a human-readable fashion. Many people interact with PII and SPI on a frequent basis without really thinking about the potential sensitivity of the information they handle. A large part of the spirit of GDPR is to cause people to think about the information they are handling and to handle it with due care. Complying with the spirit of the regulation will require a culture change in some organizations, which can be aided considerably by building a Data Classification program. This way, users can easily identify when they are handling sensitive information and perhaps handle it with more care as they go about their daily routine. Many Data Classification solutions also have the ability to communicate with the end user through tips or pop-up messaging to reinforce the behavioral change.

Breaches of personal data can happen in a variety of ways. Those that garner the most attention are large scale breaches often caused by incorrect technical configurations or a lack of due care on an industrial scale, but far more frequently, information is compromised on a small scale due to carelessness or a general lack of awareness. In these cases, Data Classification can help significantly.

Conclusion

Many organizations have what I call GDPR fatigue: so many technology and service providers have used fear to sell products and services, without addressing specific solutions to the challenges posed by GDPR, that many organizations have stopped listening. I do not look at GDPR as a reason for fear, but rather as a positive opportunity for organizations to enhance their security programs to protect critical client data and personal information.

GDPR compliance is relatively straightforward. However, the basis of compliance is understanding how to identify and protect Personally Identifiable Information (PII) and Sensitive Personal Information (SPI). Therefore, programs to enable PII and SPI identification and protection are the foundational elements of compliance from a tools and capabilities perspective. Data Loss Prevention and Data Classification form a powerful combination for protecting both PII and SPI. The challenge then becomes one of leveraging those capabilities properly to fulfill controller and processor obligations and protect data subject rights.
