Words matter, especially in the buzzword utopia that is information security marketing. Let's add another term to an ever-growing list: insider risk. While insider risk and insider threat are often treated as synonymous, there is in fact a difference, and it lies in the very problem you are trying to solve. Here's my take.
Insider Threat is a “User Problem”
Probably the most respected definition was written (and updated in 2017) by Carnegie Mellon’s CERT Insider Threat Center:
“Insider Threat – the potential for an individual who has or had authorized access to an organization’s assets to use their access, either maliciously or unintentionally, to act in a way that could negatively affect the organization.”
According to CERT, insider threat is all about the individual, the person, the employee, the user. Every possible user action that may cause harm to an organization is covered. That includes fraud, IP theft, sabotage, espionage, workplace violence, social engineering, accidental disclosure and accidental loss or disposal of equipment or documents.
Given this widely accepted user-centric definition, security buyers often look to user-centric tools — like user behavior analytics (UBA), user and entity behavior analytics (UEBA) or user activity monitoring (UAM). Tools like these collect and analyze mountains of user activity metadata that gets pumped into a SIEM, correlated with other data and automated through a SOAR. Voila — your insider threat problem is solved.
If only it were that simple. The truth is that user-behavior and monitoring tools are just one piece of the puzzle. Relying solely on UBA, UEBA or UAM tools can keep you guessing at what, I mean who, is a real threat.
Insider Risk is a “Data Problem”
Insider risk is a different ball game. When it comes to managing or mitigating insider risk, the focus shifts from centering solely on the user, to taking a broader, holistic approach to understanding data risk. No standards body, to my knowledge (unless you consider Microsoft a “standards body”), has defined insider risk. So, we created a (short and sweet) definition:
“Insider risk occurs when data exposure jeopardizes the well-being of a company and its employees, customers or partners.”
The keywords are “data exposure.” Insider threat is a user problem. Insider risk is a data problem. At Code42, we solve for both, but our approach centers on the risks of data exposure. Heck, our product’s console is called the “Risk Exposure Dashboard” and our annual research report is titled the “Data Exposure Report.” The fundamental difference between user-centric insider threat tools (UBA, UEBA, UAM) and an insider risk solution like ours is that they take a policy-based approach, whereas we take a math-based approach. Our approach takes into account all sides of the equation:
File + Vector + User = Risk
- We look at all data (not just classified data)
- We factor in vector detail (endpoint, cloud, email, trusted vs. untrusted domains, corporate vs. personal)
- We consider every user (not just users with current or past privileged access)
When all three variables of the equation are taken into account, you end up with an insider risk signal that is — dare I say — real. Here is an example:
- Sales strategy presentation not labeled or tagged as sensitive
- Uploaded to Dropbox, an unsanctioned cloud service
- The user changed the file type, zipped it and encrypted it
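The File + Vector + User equation can be sketched as a simple weighted-scoring function. This is a minimal illustration of the idea, not Code42's actual product logic; the indicator names and weights are hypothetical:

```python
from dataclasses import dataclass

# Hypothetical sketch of "File + Vector + User = Risk".
# Indicator names and weights are illustrative assumptions.

@dataclass
class FileEvent:
    labeled_sensitive: bool   # file classification (all files count, not just tagged ones)
    untrusted_vector: bool    # e.g. upload to an unsanctioned cloud service
    obfuscated: bool          # e.g. file type changed, zipped, encrypted
    privileged_user: bool     # user's access level

def risk_score(event: FileEvent) -> int:
    """Sum weighted indicators across file, vector and user context."""
    score = 0
    score += 2 if event.labeled_sensitive else 1   # unlabeled files still score
    score += 3 if event.untrusted_vector else 0
    score += 3 if event.obfuscated else 0
    score += 1 if event.privileged_user else 0
    return score

# The sales-presentation example above: unlabeled file, unsanctioned
# Dropbox upload, renamed/zipped/encrypted by a non-privileged user.
event = FileEvent(labeled_sensitive=False, untrusted_vector=True,
                  obfuscated=True, privileged_user=False)
print(risk_score(event))  # 7
```

Note that a purely policy-based tool watching only tagged files would score this event zero, while combining all three variables surfaces a strong signal.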
The indicators of insider risk resulting from data exposure are stronger when you factor in the data, the vector and the user's file activity (threat context). There are dozens of insider risk use cases like the one above that fly completely under the radar of most security tools, which is why insider risk must be approached holistically:
- Rule-based tools watch only labeled or tagged data (e.g. DLP)
- Rule-based tools watch only specified vectors (e.g. CASB)
- Rule-based tools watch only on-network employee application usage (e.g. UBA, UAM)
Now, you could take your DLP solutions for endpoint and email, add a CASB, layer on UBA for users, pull network logs, identity and access management logs and more into your SIEM, run all kinds of policy-based correlations and queries, and call yourself covered. This rules-based approach is designed for large, sophisticated and mature security teams, and even the most sophisticated teams are strapped for time and frustrated by the noise and complexity of maintaining such systems. And after all is said and done, are these systems even working? There are countless examples showing they are not.
Insider threat or insider risk? It comes down to deciding to take a policy-based approach centered on human foresight or a math-based approach centered on data exposure. When it comes to solving for insider risk, follow a simple formula and do the math. Because at the end of the day, math — as opposed to guesswork — always wins.
Still not sold on the growing insider risk problem? Check out this survey conducted by Pulse Q&A, where 75% of security professionals admit that insider risk is a major problem.