
Security Product vs Practitioner – Something’s Gotta Give

There’s one thing about conventional approaches to data security: their success — or failure — is entirely in the hands of the security practitioner.

Think about it. Any policy-based security technology relies on the security practitioner’s foresight. In other words, for policy-based security tech to work, the security practitioner must teach the tech. But teaching the tech is not easy. In fact, it is at the core of why we believe security has not only a complexity problem, but also a productivity problem. According to Forrester Business Technographics 2019, security professionals rate the complexity of the technology stack as problematic as the growing complexity of threats. The top three drivers of technology complexity are:

  1. Too many alerts
  2. Too many policies
  3. Too many tools

Let’s pick on DLP. For a DLP product to work, you — the security practitioner — have to tell it what data to look for and what data to act on. It follows a simple three-step process, but as we all know, it’s not that simple. I know I’m preaching to the choir a bit here, but I always say, it’s not real until you write it down.

  • Step 1: Identify all of the sensitive data in the organization.
  • Step 2: Now, create a classification scheme and teach users to tag or label any sensitive data. 
  • Step 3: Finally, create a policy or rule so when data labeled sensitive is compromised, you block it. 
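
The three steps above boil down to a label-and-block rule. Here is a minimal sketch in Python; the label names, the rule, and the actions are all hypothetical, not any particular product’s policy language:

```python
# Hypothetical sketch of the three-step, policy-based DLP model.
# Step 2's classification scheme: labels users are expected to apply.
SENSITIVE_LABELS = {"confidential", "restricted"}

def evaluate_transfer(document_label: str) -> str:
    """Step 3: the policy rule -- block anything tagged sensitive."""
    if document_label.lower() in SENSITIVE_LABELS:
        return "block"
    return "allow"

print(evaluate_transfer("Confidential"))  # -> block
print(evaluate_transfer("public"))        # -> allow
```

Note that everything hinges on step 1 and step 2 being done correctly by humans — the rule itself only sees the label.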

What a policy-based security tool doesn’t tell you is that there are steps 4, 5 and 6. Here is where the pitchforks come out and employees start flooding security screaming, “You’re blocking me from getting my work done!” This forces you to rethink your foresight, so naturally, you course correct:

  • Step 4: You realize users aren’t classifying data correctly, so you create policy exceptions. 
  • Step 5: More complaints, more policy exceptions and you’re knee deep in policy management.
  • Step 6: Eventually, you give up, turn blocking off and put your DLP in monitor only mode.

Now you’ve just unleashed the beast that is “monitor mode,” creating a vicious, never-ending cycle of guesswork and doubt. Admit it: you doubt you have all of the company’s sensitive data identified. You doubt end users even follow the data classification scheme, much less classify data at all. And you doubt your policies are effective in the first place, because the foresight you had was wrong. What you end up with is more noise. This time the tech is trying to teach you, and it’s doing a miserable job of it.

  • Step 7: Monitor mode starts firing hundreds, if not thousands, of alerts.
  • Step 8: Without any real alert triage capabilities in the tech, you start guessing at what’s a real threat.
  • Step 9: You spend hours upon hours guessing, investigating and finding nothing but false positives. 

I hate to say it (no I don’t), but conventional security approaches have you suckered. All of the accountability for the effectiveness of the technology is on the practitioner.

So what’s the natural next step? When DLP fails to detect high-risk data activity and insider threats grow, practitioners typically add a UBA (User Behavior Analytics), UEBA (User & Entity Behavior Analytics) or UAM (User Activity Monitoring) tool. The goal is to surveil users in hopes that the collected behavior data will surface insider threats to sensitive data so you can once again block them. But user-centric approaches don’t always work either. And your vicious cycle starts over again, this time focused on the user instead of the data:

  • Step 1: Identify all of the users in the organization with access to sensitive data.
  • Step 2: Now, create a baseline for “normal” versus “abnormal” user behavior.
  • Step 3: Finally, create a policy or rule so you can be alerted when users behave abnormally.
  • Step 4: You get an alert, check it out only to find out it’s normal behavior.
  • Step 5: So you start creating exceptions, or redefining your baseline for normal. 
  • Step 6: Now everyone is WFH and what once was abnormal is now normal.
  • Step 7: And you keep refining because the “abnormal user behavior” alerts don’t stop.
  • Step 8: Without any real alert triage capabilities in UEBA, UBA and UAM, you’re guessing again.
  • Step 9: You spend hours upon hours guessing, investigating and finding most alerts are false positives.
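
For what it’s worth, the baseline logic in steps 1–3 can be sketched with simple statistics. The activity metric, the threshold, and the numbers below are hypothetical, not any vendor’s actual model:

```python
# Minimal sketch of the UEBA baseline idea: model "normal" from historical
# activity counts, then alert when behavior deviates from it.
import statistics

def build_baseline(daily_file_transfers: list) -> tuple:
    """Step 2: baseline 'normal' as a mean and standard deviation."""
    return (statistics.mean(daily_file_transfers),
            statistics.pstdev(daily_file_transfers))

def is_abnormal(count: int, baseline: tuple, z: float = 3.0) -> bool:
    """Step 3: alert when activity sits more than z std devs from the mean."""
    mean, stdev = baseline
    return abs(count - mean) > z * stdev

office_days = [10, 12, 11, 9, 10, 13, 11]  # in-office transfer counts
baseline = build_baseline(office_days)

print(is_abnormal(12, baseline))  # -> False: a normal office day
print(is_abnormal(60, baseline))  # -> True: a heavy WFH day that trips
                                  #    the now-stale in-office baseline
```

The catch is the same as with DLP: the baseline only encodes yesterday’s foresight, so when working patterns shift (step 6), every “abnormal” alert is really just a stale definition of normal.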

Can we just come to grips with the fact that conventional policy-based security approaches are designed for one thing and one thing only: compliance? And the fact of the matter is, regulatory compliance can’t keep your trade secrets safe. The total focus on compliance from a data risk perspective is core to why we have a growing insider threat problem, despite what Verizon says in their report. DLP was designed for PII, PCI and PHI. It was not designed for IP. And here is a secret: insiders are more likely to leak or steal IP than the regulated data your policy taught the tech to monitor and block.

And here’s the rub about UEBA, UBA and UAM: they’re built on the same policy-based principles. Instead of creating policies that distinguish sensitive from non-sensitive data, you are creating policies for what is normal versus abnormal user behavior. Once again, how could you possibly have that foresight? Good luck, especially while everyone is working from home. What was “abnormal” yesterday is “normal” today. What is “normal” today may be “abnormal” tomorrow. Just try to keep up with that.

Many say the answer is smarter tech in the form of artificial intelligence and machine learning. These have become the unproven holy grails meant to ease the pain of the security practitioner: “let the machine do the work.” Sorry to be the bearer of bad news, but you still have to teach the machine what to teach itself to do. Once again, it’s all on you and your foresight.

Arguably, none of us had the foresight that COVID-19 would happen. We’ve been thrust into the anyone, anywhere, anytime, by-any-means workforce, and the brutal truth is we are never going back. Given this, we all need to come to grips with a few realities:

  1. All data is valuable. 
  2. Every user has access.
  3. Collaboration is constant. 

Let’s start with these basic assumptions, admit that perhaps the old way no longer works and actually set the security practitioner up for success. At this point, with insider data risks on the rise, something’s gotta give.
