“Creating a future requires a profound and yes, unrealistic, vision of what is possible. But it is fantasy and wonder that drive technology and innovation.”
It’s a new decade, and as we look back on the past one, it’s gratifying to realize that we’ve made significant progress from a security perspective. It’s easy to get lost in the headlines and think Information Security as a discipline is failing miserably.
But consider where we’ve come from. The fact that cybersecurity is in the headlines is progress. Today, the general public has some understanding of things like ransomware, illicit markets on the dark web, and issues associated with the breach of personal information—a significant win for security awareness. The fact that there is awareness of security at the board level of most organizations inspires hope for the future. In 2010, most companies’ security posture consisted of rudimentary patching procedures, antivirus software, and firewalls. In 2020, organizations have a plethora of good technologies defending them.
So, what’s next?
Across the Information Security industry, many leaders focus on those strong security technologies and the promise they hold for improving data protection and overall security posture. In particular, there has been much buzz about artificial intelligence (AI) and machine learning applications. However, I believe we are skipping a crucial step. I would argue that rather than developing more technical capabilities, we should work to develop the connective tissue between capabilities.
If we move to adopt AI too rapidly, without first developing the interconnectivity that mature security programs demand, we’re skipping a significant step on the innovation ladder.
The Limits of Tomorrow’s Technologies Today
Think of a security program like a human body. Firewalls, antivirus software, Cloud Access Security Brokers (CASBs), data classification, data loss prevention (DLP), and similar technologies are like organs or appendages on the human body. Like eyes, hands, and feet, they perform specialized functions and generate massive amounts of information, but they are little good to the overall organism if the information they generate is not processed effectively and their efforts are not coordinated. What good are hands and feet without a central nervous system and a brain?
In security, AI and machine learning attempt to take on the function of the brain. There are two fundamental problems with this approach. First, we still know very little about the nuanced functions of the human brain, and it’s difficult to replicate the function of something we don’t understand. Although many AI programs claim to have the capability to replicate somewhat advanced cognitive functions and might be able to create some fascinating demonstrations, they are not yet capable of replicating the detailed, cross-functional, nuanced brain functions that we take for granted.
Second, even if we could replicate brain function, we are still missing a critical component. The central nervous system is responsible for gathering all of the information from all of the body’s “connected devices” and transmitting the information to the brain for processing. Once the information is processed, the nervous system passes the analysis to all of the appropriate devices to enable them to take action.
Think about it: If your body decides it must run, how many signals are sent? How many different actions must be taken by muscles throughout the body? What physiological changes happen in the background?
Likewise, in security, before we leap ahead to rely on the capabilities of artificial intelligence and machine learning, we ought to be sure we get integration, normalization, and orchestration right.
Applying Artificial Intelligence Out of Context: Multiplying Our Problems
“The real question is not whether machines think but whether men do. The mystery which surrounds a thinking machine already surrounds a thinking man.”
-B. F. Skinner
As the demands of data protection grow more complex, organizations constantly look for ways to simplify and streamline their security tactics. That's understandable: a truly effective data protection program can prove overwhelmingly time-consuming for many organizations, a burden increasingly exacerbated by the extreme shortage of information security professionals able to play the roles of correlator, decipherer, and orchestrator.
The ultimate goal of applying AI, machine learning, or robotic process automation in security is to cut analysis and response time from human time, generally measured in hours and days, to machine time, measured in seconds or milliseconds. Ideally, the security apparatus would act like a living organism with specialized parts rather than a collection of disparate technologies whose outputs must be correlated and deciphered by human beings.
However, Information Security professionals who try to apply AI in a vacuum miss an essential element that I have observed working with organizations around the world: We do not yet have the technology to bring that integrated, coordinated functionality to life in our security systems.
An issue with artificial intelligence and machine learning as they exist today is that AI functions are being developed in isolation and applied piecemeal to individual security products. This is problematic. Imagine if each of the muscles in your body had its own brain, and those brains did not natively communicate. A simple task like walking would be an absolute mess. Forget about a coordinated effort like competing in a sport at a world-class level. It simply doesn't work.
There must be one central decision-making center, independent from each of the input and output mechanisms. The full system can’t work properly unless the central nervous system connects all the disparate parts and the brain orchestrates their efforts.
This means that for the foreseeable future, humans will need to be in the decision loop. The question then is, how do we make human beings more efficient and effective today? As InteliSecure CEO Steven Drew often says, “Apply the gray matter to things that matter.” Security professionals need to be able to devote attention to protecting critical data assets; we can automate the rest.
What does that mean for AI in security? It isn’t useless. However, today’s best applications of AI focus on process automation that sifts and sorts large volumes of information to provide humans with the information that really matters to make a decision or take an action. This use of AI is far more attainable and helpful than trying to develop a security apparatus that thinks for itself, at least at this point.
We are years away from AI technologies that can “think” for themselves and make actual decisions for YOUR business. So how can we make humans more efficient today?
In my opinion, more technologists should be working on this problem first, leaving complex artificial intelligence and machine learning applications aside and instead looking for the ability to normalize information and orchestrate responses across all security technologies.
That is a big innovation ask and one that is likely to take the better part of the coming decade to get right.
Normalizing Information Security Data
“We have largely traded wisdom for information, depth for breadth. We want to microwave maturity.”
-John Ortberg Jr.
We have plenty of information. Gathering data is not our problem. The problem is using all that information to paint a clear, complete picture of what’s going on in your information security program.
For years, Security Information and Event Management (SIEM) platforms have been able to gather information via Syslog, but that information is very basic. The tools essentially log events, which is fine for correlation against known patterns but leaves much to be desired for complex analysis; the log stream lacks much of the context and intelligence built into the disparate tool sets.
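To make that contrast concrete, here is a minimal Python sketch of how little structure a classic Syslog line yields compared with what the originating tool could expose through an API. The message format, field names, and event schema below are illustrative assumptions, not any vendor's actual format.

```python
# Illustrative only: this Syslog message and the API schema below are
# hypothetical, not taken from any specific product.
syslog_line = "<134>Feb 03 10:14:02 dlp-01 DLP: policy=PCI action=block user=alice"

def parse_syslog_kv(line):
    """Pull the key=value pairs out of a Syslog message body."""
    return dict(token.split("=", 1) for token in line.split() if "=" in token)

flat = parse_syslog_kv(syslog_line)
# Only three flat fields survive the trip to the SIEM.

# The same event as the DLP tool itself sees it: structured, and
# carrying the context an analyst actually needs to make a decision.
api_event = {
    "user": "alice",
    "policy": "PCI",
    "action": "block",
    "channel": "webmail",
    "file": {"name": "cards.csv", "classification": "restricted"},
    "risk_score": 87,
}
```

The gap between `flat` and `api_event` is exactly the context a log-centric pipeline throws away.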
Since each of the tools in any given security program is specialized, the information coming out of them is also specialized. A holistic normalization engine would take into account the comprehensive picture it is trying to build. When the human brain analyzes a scenario, it doesn't try to make the information it is getting from the hands, eyes, and ears all the same; it understands the specialty of the entity providing the information and organizes it into a complete picture.
In order to perform normalization well, a theoretical technology would need to gather rich information from a disparate toolset, likely through an API connection, then—
- define which information can come from which sources, understanding that there may be multiple sources for some information,
- prioritize the source of the information based on efficacy and richness of data, and
- present a clear picture based on all the information that is known about an event.
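As a rough sketch of those three steps, consider the Python fragment below. The tool names, priority rankings, and event fields are hypothetical; a real engine would pull far richer objects over each product's API.

```python
from collections import defaultdict

# Step two of the list above: rank sources by efficacy and richness of
# data. These tools and rankings are hypothetical examples.
SOURCE_PRIORITY = {"firewall": 1, "casb": 2, "dlp": 3}

def normalize(raw_events):
    """Merge per-tool events about the same incident into one picture.

    raw_events: dicts such as
        {"source": "dlp", "incident_id": "42", "user": "alice", ...}
    When tools disagree on a field, the higher-priority source wins.
    """
    picture = defaultdict(dict)
    # Apply low-priority sources first so richer sources overwrite them.
    for event in sorted(raw_events,
                        key=lambda e: SOURCE_PRIORITY.get(e["source"], 0)):
        incident = picture[event["incident_id"]]
        for key, value in event.items():
            if key not in ("source", "incident_id"):
                incident[key] = value
    return dict(picture)
```

The merged record per incident is the "clear picture" a human analyst would see in a single console.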
If we could then present that holistic picture to a human analyst, they could make good decisions based on complete information. At some point in the future, you could build a machine learning algorithm that monitors the analysts' decision-making process and starts to automate some of those decisions.
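One way such an algorithm could start is by simply watching for consistency. The sketch below, with hypothetical thresholds and event signatures, records an analyst's choices and only offers an automated suggestion once a pattern is both frequent and nearly unanimous.

```python
from collections import Counter, defaultdict

class DecisionRecorder:
    """Watch analyst decisions; suggest one only when a clear pattern holds."""

    def __init__(self, min_samples=5, min_agreement=0.9):
        self.history = defaultdict(Counter)  # event signature -> action counts
        self.min_samples = min_samples
        self.min_agreement = min_agreement

    def record(self, signature, action):
        """Log the action an analyst took for a given event signature."""
        self.history[signature][action] += 1

    def suggest(self, signature):
        """Return an action safe to automate, or None if the pattern is unclear."""
        counts = self.history[signature]
        total = sum(counts.values())
        if total < self.min_samples:
            return None
        action, n = counts.most_common(1)[0]
        return action if n / total >= self.min_agreement else None
```

Anything the recorder declines to suggest stays with the human, which keeps people in the decision loop by design.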
Orchestrating Responses to Security Events
“You can take a team of absolute all-stars in terms of their native abilities, but if they are not working together, they are much less effective than a team where there is less native ability but a higher degree of teamwork and cohesion.”
Once you have your information centralized, how do you respond to true threats?
Let’s switch analogies. Imagine a CISO is the head coach of a sports team; the technologies they select are the players. As the New York Yankees have demonstrated countless times, you cannot buy a championship. Many times, the Yankees have signed the best free agents from all of baseball and still haven’t made it to the World Series. They assemble the most talented players but fail to turn them into a team.
Many organizations in the business world continue to make the same mistakes. They seek out the best security technology in each area and pay little attention to how those technologies do or don’t work together. Likewise, most technology vendors do little to ensure their response capabilities can be orchestrated by an external system. As a result, many security programs are drowning in information, most of it useless, and failing to demonstrate value.
Before we even begin to discuss truly applying AI in a security program, this issue must be addressed. If you are reading this and you have any purchasing authority in your organization, ask your current and prospective vendors what they are doing to enable orchestration every time you meet with them. You have the power to shift the industry.
Once our downstream technologies can accept orchestration, it becomes possible to develop a platform that coordinates response efforts. A single system would normalize the data and present it for a decision, then orchestrate the response by sending commands to the downstream devices.
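In code, the fan-out half of that platform might look like the following sketch, where each downstream product sits behind one common interface. The responder classes and action names are hypothetical stand-ins for real vendor integrations.

```python
class Responder:
    """Common interface every downstream tool integration must implement."""

    def respond(self, action, target):
        raise NotImplementedError

class FirewallResponder(Responder):
    def respond(self, action, target):
        if action == "block":
            return f"firewall blocked {target}"
        return None  # action not applicable to this tool

class DLPResponder(Responder):
    def respond(self, action, target):
        if action == "quarantine":
            return f"dlp quarantined files for {target}"
        return None

class Orchestrator:
    """Central brain: fan one decision out to every registered tool."""

    def __init__(self):
        self.tools = []

    def register(self, tool):
        self.tools.append(tool)

    def execute(self, action, target):
        # Each tool decides for itself whether the action applies.
        results = [tool.respond(action, target) for tool in self.tools]
        return [r for r in results if r is not None]
```

The decision-making center stays independent of every input and output mechanism, mirroring the brain-and-nervous-system analogy earlier in the chapter.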
This technology vision is ambitious and will take many years to achieve. Therefore, any company making progress towards this vision, regardless of how limited their initial scope, deserves praise and attention. As a marketplace, we choose which innovations we want through buzz and wallet share. If you say you want truly innovative technologies but continue to spend your budget on the same old junk, you will get more of the same old junk and less of the innovative technology. Remember that when you make technology decisions.
A Vision for Practical AI Applications in Information Security
Artificial intelligence in Information Security has promise if it can be done well. I am advocating that security leaders look at an intermediate step before exploring AI as a cure-all. If we can build a normalization and decision assistance engine, we can eliminate countless hours of frustration, a significant amount of alert fatigue, and a great deal of human error. We can also finally give analysts a true single console from which to work.
If such an approach becomes the industry norm, perhaps we can even stop spending so many development cycles building consoles for every independent security tool and instead focus effort on building better tools. This vision is achievable in the next decade. True AI that replaces the need for security professionals at the center of a program is not.
In the spirit of hope, I know of two companies that are starting to build towards this vision. Forcepoint has integrated their User and Entity Behavior Analytics (UEBA) system with DLP, allowing organizations to take different actions related to data transfers based on the risk score of a user. This solution, called Dynamic Data Protection, is a great first step and an incredible capability. I would like to see them extend that capability into the rest of their product suite and ultimately start to extend it into other products that are not under the Forcepoint umbrella.
In addition, Dave Cole, former Chief Product Officer for CrowdStrike and Tenable, is launching a new venture known as Open Raven, which has the potential to help solve the normalization and orchestration problems in the data protection space using a cloud-native tool set. An amazing team is behind this effort, and it shows a lot of promise.
These companies give me hope for the future and I look forward to working with any other organizations that are moving towards the vision I have articulated. In the interim, we will continue to rely on humans to make disparate technologies work well together.
Connect with the Experts
We all know qualified humans are hard to find in Information Security. If you’re struggling to find humans who are skilled in Data Protection or Cloud Security, contact the data protection experts at InteliSecure. We’re happy to help.