
Automating Insider Risk Response

Automating Insider Risk response has many benefits, including workload reduction for analysts (praise the sun) and decreased response time, but chief among them is increased consistency in our interactions with end users. This consistency reduces the implicit bias that every human being possesses by applying the same responses to associates and executives alike. But where to start? Let’s break it down into more manageable steps.

Insider Risk Investigation Steps

Step 1: Identify Use Case

Start with something that is time-consuming to handle by hand and occurs frequently. For our purposes here, I’ll use an example from my own program: responding to publicly shared files out of our corporate clouds. Prior to automating, my team and I were reaching out to users 15-20 times a month to verify whether files being shared publicly were supposed to be in that state.

What we found was that the vast majority of the responses we got from users, when asked whether a file should be shared publicly, were some flavor of “Shoot! No! I didn’t mean to do that!” By automating this use case, we were able to reduce our response time to publicly shared files, which, in our experience, had high rates of accidental sharing.

Step 2: Define Triggers and Outputs

Next, we need to select what our automation will take in as input and what it will do with that data. 

Looking back at our example, my team would receive an alert that a file had recently been shared publicly, and we would reach out to the user who made the file public to determine whether they truly meant to take that action. The answers we got back largely fell into two categories: “Oh jeez, no, that was a mistake.” or “Totally, that data should be public.” So we’ve got our input (the alerts) and our output (asking the users whether that data should be public or not).
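To make that input/output contract concrete, here’s a minimal sketch in Python. The field names (user_email, file_name, share_url) are assumptions for illustration, not the actual schema our security tooling emits.

```python
from dataclasses import dataclass

# Hypothetical shape of a public-sharing alert; the real alert schema from
# our security tooling differs, these fields are just for illustration.
@dataclass
class PublicShareAlert:
    user_email: str  # who changed the file's sharing settings
    file_name: str   # the file that was made public
    share_url: str   # link to the file in question

def handle_alert(alert: PublicShareAlert) -> None:
    """Input: one alert. Output: a question to the user who shared the file."""
    ask_user_if_intentional(alert.user_email, alert.file_name)

def ask_user_if_intentional(user_email: str, file_name: str) -> None:
    # Placeholder for the Slack outreach built out in Step 3.
    ...
```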

Step 3: Implement

Implementation is almost certainly the most time-consuming part of the process. Continuing with our above example, we chose Workato as our automation platform, as it has existing integrations with our security tools (the source of our data), Slack (how we’ll interact with our end users), and Google Drive (how we’ll log our interactions). We developed a series of Workato “recipes” (their term for a script or playbook) that take in the alert data and then create a Slack channel with the user who shared the file publicly.

Once the channel is created, the automation presents the user with the file(s) in question and a pair of buttons, asking them to select either “Yes, okay to be public” or “No, should not be public.” If Yes, the user is thanked for their time and sent on their way. If No, the user is delivered a piece of “just-in-time” training about proper sharing practices from corporate-owned clouds and asked to review and acknowledge it.
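We built this in Workato, but to illustrate roughly what the Slack side of the interaction looks like, here’s a sketch using the slack_sdk Python client. The user ID, token handling, and action IDs are hypothetical; Workato’s recipe steps accomplish the same thing through its built-in Slack connector.

```python
import os
from slack_sdk import WebClient

client = WebClient(token=os.environ["SLACK_BOT_TOKEN"])

def ask_about_public_file(user_id: str, file_name: str, share_url: str) -> None:
    # Open (or reuse) a direct-message channel with the user who shared the file.
    channel_id = client.conversations_open(users=[user_id])["channel"]["id"]

    # Present the file in question plus the two response buttons described above.
    client.chat_postMessage(
        channel=channel_id,
        text=f"We noticed {file_name} was shared publicly. Was that intentional?",
        blocks=[
            {
                "type": "section",
                "text": {
                    "type": "mrkdwn",
                    "text": f"We noticed <{share_url}|{file_name}> was shared publicly. Was that intentional?",
                },
            },
            {
                "type": "actions",
                "elements": [
                    {
                        "type": "button",
                        "text": {"type": "plain_text", "text": "Yes, okay to be public"},
                        "action_id": "public_share_ok",
                        "value": "yes",
                    },
                    {
                        "type": "button",
                        "text": {"type": "plain_text", "text": "No, should not be public"},
                        "action_id": "public_share_not_ok",
                        "value": "no",
                    },
                ],
            },
        ],
    )
```

The button clicks would then be handled by a separate interaction endpoint that delivers either the thank-you or the just-in-time training.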

The user’s responses are logged in a spreadsheet in Google Drive and audited regularly. The user is also provided with steps on how to adjust the sharing permissions to an appropriate level.
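For the logging step, a sketch of appending each interaction to a Google Sheet with the gspread library might look like the following; the spreadsheet name and column layout are assumptions.

```python
from datetime import datetime, timezone
import gspread

# Authenticate with a service account that has access to the log spreadsheet.
gc = gspread.service_account(filename="service_account.json")
worksheet = gc.open("Insider Risk Outreach Log").sheet1  # hypothetical sheet name

def log_interaction(user_email: str, file_name: str, response: str) -> None:
    """Append one row per interaction: timestamp, user, file, Yes/No answer."""
    worksheet.append_row([
        datetime.now(timezone.utc).isoformat(),
        user_email,
        file_name,
        response,
    ])
```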

In the first 6 months post-implementation, this automation has had a measurable effect on the rates of unintended public sharing from our corporate clouds. In Q1 of 2022 the automation ran only 16 times against publicly shared files, a dramatic decrease from the prior rate of 15-20 per month.

Step 4: Report, Review and Refine

Last, but not least, is the post-implementation refinement. We’re mere mortals, so our automation can rarely be counted upon to work flawlessly or account for all edge cases right out of the gate. Once the significant issues were addressed, we turned our efforts to metrics.

When it comes to metrics, it is best to consider how you’d like to measure the outcomes of the automation as early as possible; that way, data points can be built in along the way rather than bolted on after the fact. In our specific case we collect:

  • The raw number of alerts the automation operates on (including usernames and filenames)
  • How many alerts were evaluated and dismissed without action (the automation confirms that the user in question hasn’t already been contacted about the same file in the past 30 days; see the sketch after this list)
  • How many just-in-time trainings were sent, and how many users acknowledged having consumed the training
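That 30-day suppression check is straightforward to sketch: given the interaction log, skip any alert where the same user has already been asked about the same file within the window. The column layout here matches the hypothetical log sketch from Step 3.

```python
from datetime import datetime, timedelta, timezone

def recently_contacted(log_rows, user_email, file_name, window_days=30):
    """Return True if this user was already asked about this file in the window."""
    cutoff = datetime.now(timezone.utc) - timedelta(days=window_days)
    for timestamp, logged_user, logged_file, _response in log_rows:
        if (
            logged_user == user_email
            and logged_file == file_name
            and datetime.fromisoformat(timestamp) >= cutoff
        ):
            return True
    return False
```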

Having run the automation for the past three quarters, we’ve logged more than 185 user interactions without analyst intervention. If we assume that each manual, analyst-driven interaction would require 10 minutes (between alert review, interacting with the user, and documentation), the automation has saved ~31 analyst hours. If we extrapolate and say we’ve got 3 such pieces of automation, all yielding similar results, a team could be looking at a workload reduction of more than 123 hours over a 12-month period, or ~3 weeks’ worth of time you could be using to chase more interesting risks.
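For the curious, the back-of-the-envelope math works out like this (the 10-minutes-per-interaction figure is our assumption):

```python
interactions = 185            # automated interactions logged over three quarters
minutes_per_interaction = 10  # assumed analyst time per manual interaction

hours_saved = interactions * minutes_per_interaction / 60  # ~31 analyst hours
annualized = hours_saved * 4 / 3                           # three quarters -> 12 months
three_automations = 3 * annualized                         # ~123 hours

print(f"~{hours_saved:.0f} hours saved so far; ~{three_automations:.0f} hours/year with three automations")
```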

Speaking of future state, next on the automation docket is dealing with third-party direct sharing. As end-user behavior has shifted away from sharing public links toward more secure direct sharing, we’re now facing the new challenge of following up on sharing with dozens of specific non-company email addresses. To address this, we’re proposing to take in a non-company email address and search it against approved vendor lists prior to conducting similar automatic user outreach.
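A minimal sketch of that proposed check, assuming the approved vendor list can be reduced to a set of email domains (the domains shown are placeholders):

```python
# Placeholder vendor domains; in practice these would be loaded from the
# approved vendor list rather than hard-coded.
APPROVED_VENDOR_DOMAINS = {"trusted-vendor.example", "approved-partner.example"}

def needs_outreach(recipient_email: str) -> bool:
    """Only reach out when the recipient's domain isn't an approved vendor."""
    domain = recipient_email.rsplit("@", 1)[-1].lower()
    return domain not in APPROVED_VENDOR_DOMAINS
```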

Cultural Benefits of Automation

As we reviewed the impact of the above automation after the first 6 months, it became clear that a message lands more softly when it comes from a bot vs. a security analyst, and that there is a tremendous amount of potential and benefit in this form of fast, friendly bot interaction:

  1. User is more comfortable at the start of the interaction
  2. User interacts with automated response more frequently
  3. User is more willing to follow the interaction through
  4. Rates of reoffense are lower

In fact, users who receive the just-in-time micro training have avoided the public sharing pitfall going forward.

Conclusion

When it comes right down to it, Insider Risk is a very human business. Unlike other domains of security, nearly every road in Insider Risk Management (IRM) leads to talking to a human. And while these conversations can never be wholly replaced with automation, we cannot do our best work when we have too many priorities on our plate. By automating some of the more routine, well-understood risks, we can free up cycles that can be better spent chasing more pressing risks.
