
Live Recap: Developing a Holistic Insider Risk Program

Earlier this week, we had a great discussion on Developing a Holistic Insider Risk Program with Borna Emami, Senior Manager at Deloitte Consulting LLP, and Abhik Mitra, Industry Relations Lead at Code42.

You can read the full transcript or watch the recording below, but here are the key takeaways.

Complacency is just as important as mal-intent

Not caring about security can be just as damaging as actively attacking it. Whether it’s intentional or just due to complacency, going around security policies creates the same risk for the organization. According to Borna’s analysis, upwards of 30 percent of insiders investigated by the FBI and internal security teams fall into this bucket. It doesn’t take a hardened “cyber-criminal” to upload a file to their personal cloud storage or accidentally leave an object storage bucket open to the internet. Don’t sleep on complacency when designing your Insider Risk program.

Things aren’t happening in a vacuum

Insider Risk isn’t (usually) something that happens overnight. Very few people start at a company with the sole intention of taking or breaking data. There are indicators visible in users’ behavior and data practices that can allow security teams to prioritize and ascertain where likely risks exist. Maybe the user has been printing a lot recently, or sharing and using an inordinate number of USB devices. Conversely, perhaps the user has stopped sending as many emails as they normally do. Sometimes these are entirely mundane behaviors that are the result of the user doing their job; other times they signal an imminent risk to organizational data.

Regardless of intent, it’s important to be able to collect, correlate, and visualize these indicators before the risk turns into a threat, so that the security team can intervene with a right-sized response (either a quick check-in to see if the user is OK, or another corrective action). We’ve talked about the importance of context on Code42 Live before, but in this case it’s particularly important to remember that context is what allows the organization to tell the difference between users collaborating to do their jobs and users leaking data.
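To make “right-sized response” concrete, here’s a minimal sketch in Python of how a correlated risk score might map to a graduated intervention. The tiers, scores, and responses are hypothetical illustrations, not any vendor’s actual logic:

```python
# A minimal sketch of a "right-sized response": the same indicator data can
# route to very different interventions depending on correlated context.
# Tiers, scores, and responses here are hypothetical illustrations.

RESPONSE_TIERS = [
    (80, "open an investigation"),
    (40, "send a policy reminder / enable enhanced monitoring"),
    (10, "quick check-in to see if the user is OK"),
    (0,  "no action -- looks like normal collaboration"),
]

def right_sized_response(risk_score: int) -> str:
    """Pick the least intrusive response that matches the correlated risk."""
    for floor, response in RESPONSE_TIERS:
        if risk_score >= floor:
            return response
    return "no action"

print(right_sized_response(15))  # quick check-in
print(right_sized_response(85))  # open an investigation
```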

Insider Risk is a team sport

Everyone needs to be part of a successful Insider Risk program. Security teams, IT teams, executive leadership, the board of directors, the interns; everyone has a stake in solving the problem. That means that the team designing the program needs to be cross-functional and have a high level of executive sponsorship. It also means that awareness and education are going to be your best long-term solutions to the problem, not reprimands and barriers to collaboration. Turn your users into expanded members of your security team; don’t create an insurgency.

Above all, it’s important to include everyone in solving the problem because, as Borna so succinctly put it in his mic-drop to the session, “it’s a people-centric challenge, not a technology one. And you’ve gotta be holistic in terms of how you approach it.”

For the full readout of what we discussed in the session, watch the video below or find the full transcript at the end of the blog:

Now streaming: Code42 Live

This spring, Code42 launched Code42 Live – a series of live community discussion events to help solve the problem of Insider Risk. Recent guests have included Samantha Humphries and Chris Tillett from Exabeam, Elsine Van Os from Signpost Six, an Insider Risk Summit 2021 speaker, and Edward Amoroso from TAG Cyber.

Join us on August 24 for a conversation on “How to start an Insider Risk Program from scratch.” To learn more and join the discussion, visit code42.com/live.

Video Transcript

Riley Bruce – See, this is why I always just love rocking out to the music. Hello everyone, and welcome to Code42 Live. Today, I’ve got Borna Emami and Abhik Mitra to talk about developing a holistic insider risk program. And today we’ve got a great conversation. Thank you for joining us, to anybody who is joining at code42.com/live, as well as on LinkedIn or YouTube, thank you. Please throw any questions you have for us or comments throughout the session in the chat, and we’ll be sure to answer them live. I’m gonna keep saying the word live because we are. And to kick things off here, I just wanna take a couple of minutes and have both Borna and Abhik explain who you are, what you’re focused on, kind of in your day job, and then maybe something interesting about what you focus on when you’re not at your day job. So a fun fact or something like that. So Borna, we’ll go to you first, since you are the guest. What’s up?

Borna Emami – Awesome. Thank you, Riley. Are we by the way… Are we live? Just wanted to double check that.

Riley Bruce – We might be live, yes. I think so.

Borna Emami – Okay. All right, we’re live. Thank you for that. So good afternoon everyone. Thank you for joining, and Code42, thank you for hosting and putting this together. I look forward to the discussion. A little bit of background on myself: I work at Deloitte Consulting, where I’m a senior manager, and I’ve been there for the last 15 years. I’ve spent about half of that time really focused on building holistic, risk-based insider threat programs. We’ll talk a little bit about what those qualifiers mean. And I’ve done that in both the private space as well as for the federal government. And we’ve probably built over 60 programs, started to lose track after 50, but excited to talk a little bit about lessons learned and what we’re seeing across the industry.

Riley Bruce – Sweet. And then, is there anything that you wanna share? Maybe, something fun or not so fun?

Borna Emami – Yeah, well, let’s do fun. Let’s keep it lighthearted. And I don’t know that it’s fun. I think it’s more just a window into myself. I would say, if I had to do it all over again and not go into management consulting, I’d probably be a chef, do something in the culinary field, ’cause I really enjoy cooking.

Riley Bruce – Okay. That is a first, and also, I would 100% support that; food is very tasty and necessary, I don’t know where I’m going with this. But thank you for sharing that, Borna. And also thank you for joining us, Thomas from Minneapolis, hello to you. And Abhik, same question for you: who are you? What do you do in your day job? And a fun fact, or maybe we can use the same prompt, if you had it to do all over again what would you do? Oh, Abhik, you may be on mute. This is the beauty of that live word. There we go.

Abhik Mitra – Better? Can you guys hear me now?

Riley Bruce – Yes, I can.

Abhik Mitra – Fantastic. Yes. What a way to start, right? I just wanna make sure one last time, you guys can hear me fine?

Riley Bruce – Yes.

Borna Emami – I can hear you.

Abhik Mitra – Awesome, fantastic. All right. Well, I was just gonna say, that’s a very deep philosophical question as to what I would have done instead. So I might need a little more time to think about that, but regardless, hi everybody, it’s great to be back here again, and Borna, it’s great to be here with you also, am looking forward to this session. I’ve been with Code42’s portfolio strategy and marketing team for about five years now. In my role, I work very closely with our analyst community, and I work very closely with customers, primarily from a research perspective. So a lot of what I will bring to the table today is what we’re hearing from the market, what we’re hearing from customers, what we’re hearing from the analyst community. And engaging in some lighthearted banter with Borna is the other appeal of today as well. Fun fact about me, you know, Borna mentioned cooking, it’s kind of interesting because during the pandemic, I think we’ve all kind of channeled these skills we never knew we had. My parents are fantastic cooks, I grew up eating a lot of Indian food, so I always felt like I had it in my genes. I just never tried it. And that changed during the pandemic. So I think that’s turned into something fun that I like to do now and then as well.

Riley Bruce – All right. So then we do have a question from LinkedIn that is entirely unrelated to the topic of the session. However, it is gonna be lobbed at both of you right now. And that is, “What is your favorite dish to cook, since you have both mentioned cooking now?”

Borna Emami – All right, Abhik, you want me to go first?

Riley Bruce – Yeah, please.

Borna Emami – Okay. So I recently got my hands on some Wagyu beef from a butcher shop in New York called DeBragga, I don’t know if I’m saying that right, but I cook that using a Gordon Ramsay recipe, where you throw some thyme in there, some compound butter, which is really just a fancy way of saying butter with herbs in it. And you sear it on both sides and it’s phenomenal. Yeah. So that’s probably… Right now, that’s my favorite. But if you asked me in a couple of months, I’d be on to something new.

Riley Bruce – All right. Well, Abhik, I don’t know if you’ll be able to top that, but please try.

Abhik Mitra – Well, I’ve been cooking a dish called chicken vindaloo, which kind of hails from a place called Goa in India. It’s a very spicy dish. Goa has some Portuguese influence, historically speaking. So I don’t know if it’s a dish that was influenced that way, but you know, it can be made mild or extremely spicy. I tend to go the very spicy route, but the nice thing with vindaloo is, in terms of spice and curry, you could make like an egg vindaloo or a fish vindaloo, but, you know, we tend to go the chicken route here. And it’s great with like a naan bread or rice, it is just fantastic. It was one of my dad’s favorite dishes. So again, I have the benefit of YouTube when I’m making it, so I don’t quite get the same end result, but, hey, you know, it works out just fine in the end.

Riley Bruce – So that is, yeah, I guess the crew is now hungry, officially, but let’s try and get on track here and talk about what we’re intended to be talking about before I get in trouble for not talking about it. That being, how to develop a holistic insider risk program. And the first step to really being able to define what your program is, is to define what we’re even looking at. So my first question for the two of you is going to be: who or what is an insider, to your way of thinking? And Borna, I will start with you for this, and Abhik, feel free to chime in after Borna. And also, before I throw it over, I do wanna say hello to Kyle and Bob who are also in the chat. So thank you for joining us. But thank you, and Borna, what or who is an insider?

Borna Emami – Thanks Riley. So when we build these programs, I think there’s a couple of threads that run across the definition that we typically see. And so, I would start off by saying, an insider is an individual with trusted and verified access, and that access could be physical or logical, who may commit a malicious, ignorant, or complacent act. And that act could range from theft of information, could be fraud, could be workplace violence, it could also be espionage, right? And part of what that definition also has to do is start to hone in on the critical assets that an organization wants to protect. And so you could double click into each of those, when you talk about trusted and verified access, that could be an employee, it could be contractors, it could be vendors, it could be secondees, right? So, I think specificity matters. The other point I’ll make, and then I’ll pass it on to Abhik, I think a big part of getting what we’ll call the right definition is a function of making sure that human resources and IT, and physical security and all the other stakeholders that should be defining this together are coming to the table. If IT just defines it, it might be a little bit too technical, right? If physical security defines it, it may be too much in that sphere. And so a big part of what we’ve seen around success is this acknowledgement that even down to the first step of how you define a program, it’s really a team sport. And you’ll probably hear me repeat that on and on, but it’s really one of the success factors we’ve seen in building and standing up insider programs.

Riley Bruce – Abhik?

Abhik Mitra – Yeah. I would agree with that.

Riley Bruce – Who or what is an insider?

Abhik Mitra – Yeah. Well, an insider is really anybody, right? And that, I think, is the point. But to kind of build on what Borna mentioned, it’s all intent-agnostic, and that’s a key shift in terms of how we’ve often thought about and perceived the insider. Because typically, when we hear the word insider, we’re automatically associating that individual with bad intent, malicious intent. And the interesting thing about how the conversation has shifted is, now we’re looking at the insider as essentially us, right? I mean, we are insiders. And, so the way I see it, as long as anybody has access to intellectual property in some form or shape, regardless of intent, that person is a potential insider. And it really goes back to the point Borna made: whether my intent is malicious or accidental, I as an insider can put data at risk, I can put corporate data at risk. So, you know, think of the insider as your employees, right? Your everyday employees, the folks that we trust, that have been with the company for years, have access to the crown jewels of the organization, like we like to say. So that to me has been a really interesting shift in how the definition of insider has evolved.

Riley Bruce – I think that that’s actually a great sort of jumping-off point to the next question here, unless, Borna, is there anything else that you wanted to follow up with?

Borna Emami – No. Well, just one point about the malicious intent: that tends to be where people focus most of their time and their energy. And that could be somewhat problematic, because if you actually look at insider cases that the FBI investigates, and I don’t know the number, but it’s pretty significant, I mean, it’s upwards of 30% that are what we would define as complacent, people that think they are above the rules or the rules don’t apply to them, or individuals that are ignorant in the sense that, you know, they haven’t read the policy or they weren’t paying attention when there was training or there wasn’t training. So I just think it’s important to broaden the aperture of the insider, to include those other two dimensions as well.

Riley Bruce – And I think that that is actually a perfect layup, whether this was intentional or not, to the question I was intending to ask next, and that is: is there a difference between an insider threat and an insider risk? I know that there are some consulting firms, as well as Code42, trying to focus on what’s called insider risk versus insider threat. And Abhik, I will go to you first on this one, since Borna, I went to you first on the last one. So feel free to dunk on us after that, but Abhik, you first, sir.

Abhik Mitra – Yeah. This is where I think Borna and I get to have some fun, healthy debates because… And I think the verdict is mixed on this one, because based on what the analyst community tells us, based on what we hear from customers, there is a conversation, right? Is it an insider threat problem? Is it an insider risk problem? But when I think of insider risk, or insider threat rather, I go back to the point Borna was making around a lot of our focus tends to be on that malicious employee, that malicious intent. So again, you know, when you think of the word threat, you’ve already kind of pre-assumed that someone is doing something bad. The restriction with thinking of it that way, again, in my humble opinion, is you’re kind of dedicating all your focus on the bad. You’re kind of focusing on the user rather than, you know, the more realistic problem, which could be the user, it could be the data. And that’s where I feel insider risk is starting to, you know, almost form its own identity, because in the world of insider risk, you’re not just focusing your time on the malicious employee, but now you’re starting to look at the accidental, you know, situation. You’re starting to look at, you know, careless employees. You’re starting to look at employees that are aware of the policies, but are yet choosing to go past those policies, just because again, they wanna get their jobs done. So I think the insider risk conversation is probably relevant now more than ever. You know, we’re in the midst of, or you know, trying to crawl our way out of this pandemic. Well, what does security look like? Right? You know, people are either 100% remote or they’re completely hybrid. So you have to have that insider risk conversation. Now, here’s the big debate. Here’s the question, right? At the end of it all, does insider risk emerge as the umbrella under which insider threat is a sub-entity? But again, I’d love Borna’s thoughts on that.

Borna Emami – Yeah. And I don’t know that my opinion is that much different than yours, Abhik. And I’d love to disagree with you. I was kinda trying to find a way to do that, but I think we’re fairly aligned. So when I think of insider threat, I think of: what are the things that an individual could do, whether it’s intentional or unintentional, to put people, data, facilities, lives in danger, right? So that’s to me the threat piece. The risk piece comes in when you start looking at, “What is the likelihood of that threat actually unfolding and taking place? How vulnerable are we as the organization? What is the likelihood? What would the consequence be?” So to me, risk in many ways is a lens to prioritize the threat. So let’s make it real. I would say, if you look at incidents of insider theft, data exfiltration, as a matter of occurrence, much higher than sabotage. But then if you start looking at incidents of sabotage from a consequence perspective, right? Pretty significant, so is theft. So part of what you need to do, and what we encourage organizations to do, is use the risk lens to start to prioritize where you spend time versus where you don’t spend time and what you protect versus what you don’t protect. And so I would say, just to put a bow on it, it’s an “and” clause. It’s both threat and risk, not an “or” clause.

Riley Bruce – Yeah. I think that’s a fantastic way to kind of sum that up. And I do wanna open it up to folks in the chat. So those of you who are thinking about the insider problem, let us know: is there a way that you conceptualize the difference? Are you thinking about it as threat versus risk? And then we’ll be able to address some of those comments or any questions that you have here, again, live. You know, every mistake that we make here is definitely intended to illustrate the fact that we are live. And speaking of illustration, I wanna go back to something that Abhik said just a minute ago, talking about how we’re all remote versus hybrid or something like that. And obviously all three of us are not sitting in the same room together right now having this conversation in an office. So obviously the question becomes: with all of these sort of disparate, whether it’s physical locations, virtual locations, collaboration, what have you, how does a program begin to successfully detect insider risk or insiders? And Borna, since I went to Abhik first last time, I’m gonna go to you first this time on this one. So how do we successfully detect insiders?

Borna Emami – So the way that we think about detection is across prevention. First and foremost, we like to say big P, because that’s where you wanna spend a lot of your time and your energy, then detect, and then response. When you look at the detection piece of it, the saving grace we have with insiders is that, typically, insider acts are not impulsive. Meaning the individual is beginning to display patterns of behaviors as they move from the idea, let’s just use theft of information, as they move from the idea of taking information to the actual act of taking information. And these precursors, we typically call them potential risk indicators, PRIs. I think potential’s really critical there, because they’re not absolute; we have seen insiders that display these behaviors, they walk up to the line and they don’t steal information, or they don’t commit sabotage. So I think that’s a very important distinction. But the idea then becomes, Riley, the way that you detect is, if you’ve defined the threats and the insiders, and you’ve put that risk paradigm onto it, you can start to look at behaviors that reside both on and off the network and start to collect, correlate and visualize those. So, for example, let’s say I put in my two week notice and I’m leaving my employer, and I start downloading information that I don’t need in the performance of my duties. And I start sending emails with attachments off the network, and I have declining performance. Now, each of these probably reside in a separate and distinct system. And so the idea becomes, how do you take these precursors and correlate them so that you, in a risk-based fashion, can say, “Maybe you should look at Borna before you look at Abhik.” And the key is, you know, it could be enhanced monitoring for me, it could be a reminder of a policy, it could be a full blown investigation. And that’s part of why it’s so important to have an escalation and triage process. It’s just a fancy word for how the different disciplines come together to actually put a human eyeball on concerning behavior, to be able to say, “Hey, we should take the following action.” But I would just wrap it up by saying, it’s really all about identifying those indicators that, the hypothesis is, are correlated with the threats that you would like to mitigate, right? Prevent, detect and respond to, and start to bring that together. And we’ve seen the efficacy of doing that, both in the public and in the private space.
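To make Borna’s example concrete, here’s a minimal sketch in Python of the collect-and-correlate idea: events from separate systems are joined per user and weighted into one score, so a human reviews the riskiest users first. All of the feed names, weights, and thresholds are hypothetical illustrations, not Deloitte’s or Code42’s actual scoring:

```python
# Events from separate systems (HR, endpoint, mail gateway) are joined per
# user and weighted into a single score so analysts review the riskiest
# users first. Indicator names and weights are hypothetical illustrations.
from collections import defaultdict

PRI_WEIGHTS = {
    "resignation_notice": 40,    # from the HR system
    "bulk_download": 25,         # from endpoint telemetry
    "email_attachment_out": 15,  # from the mail gateway
    "declining_performance": 10, # from performance reviews
}

def correlate(events):
    """Aggregate (user, indicator) PRI events into one score per user."""
    scores = defaultdict(int)
    for user, indicator in events:
        scores[user] += PRI_WEIGHTS.get(indicator, 0)
    return scores

def triage_queue(scores, review_threshold=50):
    """Return users over threshold, riskiest first, for a human to review."""
    flagged = [(score, user) for user, score in scores.items()
               if score >= review_threshold]
    return [user for score, user in sorted(flagged, reverse=True)]

events = [
    ("borna", "resignation_notice"),
    ("borna", "bulk_download"),
    ("borna", "email_attachment_out"),
    ("abhik", "email_attachment_out"),
]
print(triage_queue(correlate(events)))  # ['borna'] -- look at Borna before Abhik
```

The specific numbers don’t matter; the point is that no single feed flags anyone on its own, but the correlated picture does.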

Riley Bruce – Abhik, it’s all you. And then we actually had a question come in from LinkedIn. Thank you for that answer, Borna. I think that that has spurred some discussions. So that’s always good. And then Abhik, how does a program successfully start to detect an insider?

Abhik Mitra – Yeah, so, you know, detection, a big part of it is obviously having the right technology in place, but detection should also really be about, “Well, what are you looking for?” You know, from an organizational perspective, what is important in terms of risk, what would be deemed a risk for you in the first place? So when we talk about detection, going back to Borna’s point, it really is about having the right feeds in place. It’s about having, you know, solutions in place that are feeding you the right level of information whereby you can make intelligent decisions. And it isn’t just looking at one piece of the puzzle, it’s the ability to look at multiple pieces of the puzzle, because ultimately risk is a combination of a variety of different things, right? You’re not just looking at the user’s, you know, behavioral patterns. You might look at the data as well, you know? You might look at, you know, specific vectors that are in play: is it a USB key? Or is it, you know, a corporate to personal drive type of transfer situation? These are all things that feed into that risk, that ultimately allow you to detect what is the risk for your organization. But the other key that I wanna talk about very quickly that Borna mentioned is around this idea of departing employees. And very correctly, from an organizational perspective, somebody puts in their notice and then we tend to look at it as, “Okay, two weeks. Oh my gosh, I need to focus on my detection efforts and what’s gonna happen over the next two weeks.” But the reality is, if somebody has intent to exfiltrate data or to take it to, you know, their next company, we better believe that that exfiltration probably started as much as two to three months earlier, right? Keep in mind, these individuals know that they are already thinking about a change. They’re already thinking about their next job. So it’s very important, as we think about detection, that we have these steady streams of information constantly feeding, you know, whatever our intelligent solution repository might be. And then it’s just a question of, “Okay, I have the right feeds in place, and then I’m actually being proactive. I can get ahead of the problem. Maybe even before this individual puts in their two week notice.” So I think that’s an important point that was made: organizations need to continuously refine, they need to continuously adapt what their risk posture is, because ultimately, that’s what should feed detection in the realm of risk.

Riley Bruce – So fantastic. And we had a question, I mentioned, come in from LinkedIn, and we’ve got a couple more coming in here now, which is great. The first one is that, Borna, you mentioned that there’s this concept of PRIs, and Abhik, I know that Code42 has this concept of IRIs, which are insider risk indicators. But the question was, “Could you share what some of those are, that might be important to pay attention to, to be able to then see when there might be the potential for an insider risk?” And Borna, since you’re the one who invoked it, I’ll start with you. And then we’ll go to Abhik on that one.

Borna Emami – So typically we see organizations, you know, start out with 10 to 15 indicators, where they’re starting to correlate those and actually see if there’s a there there. And I think the question is specifically getting at what are some of those examples of indicators. So it’s everything from, you know, non-compliance, individuals that have violated policy, as an example. We have seen declining performance correlated. We’ve had organizations that have looked at external job searches, so web activity, and what folks are doing, to try to get in front of individuals that may be separating from the organization. We’ve seen organizations look at things such as email activity. And the hypothesis there is that, if an individual is leaving, they start to disengage from the organization and they actually send fewer emails. And so that’s a more sophisticated one. And the theory there is, if they’re disengaging and sending fewer emails, they’re likely to leave; and if they leave, the percentages are pretty significant in terms of individuals that actually take proprietary information with them. I would also say, you know, looking across what I call kind of the big five: for data exfiltration, it’s looking at removable media and email anomalies, those are the two most common ways that insiders will exfiltrate data. And then we obviously would look at things like cloud. We would look at file transfer protocol, which requires more technical ability, somebody that’s a little more technically savvy, and then also printers. Printers are big too. See-something-say-something policies are also big. I think that’s been… The dial has been turned down on that a little bit, Abhik talked about that, because we’re all working from home. But the whole idea is that any one of those indicators by themselves isn’t gonna say, “Hey, Borna might be an insider.” I think the real power is correlating and bringing all of those indicators together, so you can start to get a more holistic picture.

Abhik Mitra – Yeah. I wanna build on that as well, because it’s so true, this idea that in order to really surface risk, you need to look at a variety of different things. So going back to the question about IRIs, here at Code42, we call these, you know, risk indicators, and these risk indicators are built off of essentially three pillars, right? We’re looking at, you know, the file itself, we’re looking at the user, and then we’re looking at specific vectors as well. But just a quick example, like on the file side of the house, right? What is somebody doing to tamper with the file? Are they masking its identity? Are they attempting to change the extension by design to make sure that it doesn’t get detected in some form or shape? Again, that’s just one part of the puzzle. When you start to now think about the user, well, what are the user’s behavioral patterns? Are they working at unusual times? That’s just an example, and again, that as a siloed data point is very limited, but when you correlate, you know, somebody’s working at say 11:30 PM, not their usual hours, you know, with the fact that they’re also masking a file. And then, by the way, moving on to the vector situation, are they then emailing these files to themselves, right? To a personal email account, transferring it to a USB. So now that you’ve been able to correlate all that information together, the security analyst has a very compelling and viable case that they can then take to legal or, you know, to other stakeholders, or perform the actions that they need to. But the key here is context. And that I think has been kind of the proverbial struggle in the insider space: “How do I know what action to take? What is my response protocol going to be, if I don’t have that type of context?” And that’s really where the importance of risk indicators comes into play. But again, to re-emphasize what Borna said, we have to figure out ways to correlate and contextualize all of those indicators, because they all add up to, you know, the decision that we need to be making ultimately.
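As a rough illustration of the three pillars Abhik describes, here’s a minimal Python sketch that checks a single file event against a file, a user, and a vector indicator. The indicator names, working hours, and destination list are hypothetical, not Code42’s actual IRIs:

```python
# Combining the three pillars -- file, user, vector -- so that no single
# data point decides alone. All names and values here are hypothetical.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class FileEvent:
    user: str
    timestamp: datetime
    extension_mismatch: bool  # file pillar: extension changed to mask identity
    destination: str          # vector pillar: where the file went

USUAL_HOURS = range(8, 19)             # user pillar: 8am-6pm assumed typical
UNTRUSTED = {"personal_email", "usb"}  # hypothetical untrusted destinations

def risk_indicators(ev: FileEvent):
    """Return the list of indicators this event trips, one per pillar."""
    hits = []
    if ev.extension_mismatch:
        hits.append("file: masked identity")
    if ev.timestamp.hour not in USUAL_HOURS:
        hits.append("user: unusual working hours")
    if ev.destination in UNTRUSTED:
        hits.append("vector: untrusted destination")
    return hits

ev = FileEvent("sam", datetime(2021, 8, 10, 23, 30), True, "personal_email")
hits = risk_indicators(ev)
# One hit is a weak signal; all three pillars together make a compelling case.
print(hits, "-> escalate" if len(hits) >= 3 else "-> monitor")
```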

Borna Emami – And Riley, if I can just piggyback on one of the points that was made. It has been my experience that when you build out these indicators, really what you have is a series of hypotheses around the behaviors and the attributes that you think are correlated to an insider. The reality is, once you turn that on, and that data starts to come together and you start to score it and weight it, you will quickly realize the value and the utility of those indicators. And I like to say that it’s a very fluid process, in the sense that it’s constantly being updated. We’ve built programs that have had, you know, monthly indicators drop off and come back on. And the idea is that you’re building in processes to evaluate those indicators and see the fidelity of them, and really test out those hypotheses, because we’ve had ones where we were certain this was going to help us identify malicious or ignorant or complacent behavior, and it just did not over time. And then we had other ones that just weren’t on our radar, that we ended up bringing into the program. So I do think, if you’re gonna go down the detection route, building in a mechanism to evaluate is really important.

Abhik Mitra – And indicators are ever-growing, I’ll just throw that on there.
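A minimal sketch, in Python, of the evaluation loop Borna describes and the ever-growing indicator list Abhik mentions: each indicator is treated as a hypothesis whose place in the program is periodically reviewed against how often it actually appeared in confirmed cases versus benign alerts. The counts and precision cutoff below are hypothetical illustrations:

```python
# Periodically re-test each indicator hypothesis against outcomes: keep the
# ones that show up in confirmed cases, drop the ones that only make noise.
# Counts and the cutoff are hypothetical illustrations.

def review_indicators(alert_counts, confirmed_counts, min_precision=0.05):
    """Keep indicators whose confirmed-case rate justifies their noise."""
    keep, drop = [], []
    for indicator, alerts in alert_counts.items():
        confirmed = confirmed_counts.get(indicator, 0)
        precision = confirmed / alerts if alerts else 0.0
        (keep if precision >= min_precision else drop).append(indicator)
    return keep, drop

alerts = {"printer_spike": 120, "email_decline": 40, "vpn_odd_hours": 300}
confirmed = {"printer_spike": 9, "email_decline": 4, "vpn_odd_hours": 2}
keep, drop = review_indicators(alerts, confirmed)
print("keep:", keep)  # hypotheses that tested out
print("drop:", drop)  # hypotheses that did not hold up over time
```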

Riley Bruce – Yeah. I do wanna double click on that, just for the sake of those who are watching: if you wanna throw out an indicator, or something where you have seen an insider incident happen or seen something that maybe should have been a red flag in hindsight, throw that in the chat or in the comments on LinkedIn. And that’ll be good, useful information for everybody else to know. There’ve been a couple more questions that came in from LinkedIn. This first one was addressed directly to Borna, and that is, “In your experience running and building out, you said, something like 50 or 60 programs with companies, how often do you see an investigating analyst have a bias?” And, this is me adding on to that: how do you minimize that when trying to go through this process?

Borna Emami – Yeah. I would say that I don’t know that I can answer the how often, because of the steps that we’ve taken to minimize it. So the first thing is that it’s a legitimate concern, and what you don’t wanna have happen is, you know, Riley has an issue with Borna, and I was his former supervisor, and he’s gonna drill down into me and really spend a lot of time and attention on what I’m doing because there’s some animosity or ill will there. And the way that we mitigate against that is, we mask all the individuals. So there is a process whereby an investigating analyst, to use the term in the question, would double click once a particular threshold was surpassed. And that double click on the individual’s identity would be part of a tiered process, where there’s segregation of duties, also another really important insider concept, that allows it not to be a single point of failure. And what we have found is, when legal is involved, as they typically are, this becomes a way to get both the lawyers, as well as other key stakeholders, comfortable with the fact that we are looking at individuals’ behaviors, both online and offline. And so, you know, it’s a leading practice that we encourage everyone to do, if they’re bringing these indicators together in a technology: mask, and then create a process where you can de-mask.
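Here’s a minimal Python sketch of the mask/de-mask pattern Borna describes: analysts work with pseudonyms, and the real identity is revealed only after a risk threshold is crossed and two approvers from different functions sign off (segregation of duties). The key, threshold, and roles are hypothetical illustrations:

```python
# Analysts see only pseudonyms; de-masking requires a score over threshold
# AND approvals from two different functions. Key, threshold, and roles are
# hypothetical illustrations of the pattern, not any product's mechanism.
import hashlib
import hmac

SECRET_KEY = b"rotate-me"  # held by the program office, not the analysts

def mask(user_id: str) -> str:
    """Deterministic pseudonym so events still correlate per user."""
    return hmac.new(SECRET_KEY, user_id.encode(), hashlib.sha256).hexdigest()[:12]

def demask(pseudonym, score, approvals, identity_map, threshold=50):
    """Reveal identity only above threshold, with two distinct-role approvals."""
    if score < threshold:
        raise PermissionError("score below de-mask threshold")
    roles = {role for _, role in approvals}
    if len(roles) < 2:  # e.g. one approver from legal, one from security
        raise PermissionError("needs approvers from two different functions")
    return identity_map[pseudonym]

pseud = mask("borna@example.com")
identity_map = {pseud: "borna@example.com"}
print(demask(pseud, score=62,
             approvals=[("kim", "legal"), ("lee", "security")],
             identity_map=identity_map))
```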

Riley Bruce – I think that that is fantastic. Abhik, I don’t know if you have anything that you wanted to add. I know that question was addressed specifically to Borna. So I wanna give you the option.

Abhik Mitra – Yeah. I’ll just very quickly add that, I think, part of this whole conversation around bias is taking an interesting angle, in that a lot of it, in some ways, comes back to transparency. How transparent are organizations being with the level of monitoring that’s occurring? So in some realms, we hear that, you know, if users are told, you know, exactly what the level of monitoring is, who’s watching potentially, all of these things make a big difference, right? So, I think that doesn’t neutralize the bias issue, but I think it makes… It kind of distributes the power both ways. I think, when we talk about bias, it’s very easy to get trapped into, “Well, this person may have done something on such and such date, so thus they must be guilty.” And that needs to be kind of a two-way conversation, right? Somebody just can’t have the magic keys to be able to conduct, you know, whatever action they might go off and take, and users have to be part of this as well. So, you know, I think that conversation will continue, but I think the more end users are kind of brought into the mix, to be able to influence how security architectures are built and programs are built, I think that’s just gonna be a key part to everything moving forward. Riley, you’re on mute.

Borna Emami – Yeah. Cannot hear you.

Riley Bruce – What about now? We’re live, folks.

Borna Emami – Oh, there we go.

Riley Bruce – So, what I was just gonna say is, I wanna get to another couple of questions from LinkedIn, and then there’s one more question that I wanna use to close out here, because I am trying to keep an eye on the time, being conscious of Borna and Abhik’s time. But the thing that we wanna kind of focus on in wrapping this up is that bias is real, both the implicit kind and also because, yes, Borna is correct, he was my supervisor in a past life, and I really wanna figure out what’s going on, so that’s a second sort of bias, and being able to focus on how to reduce that, thank you both for those ideas. And the next question here is from Thomas, and this is from LinkedIn. And the question is, “What are the most common vulnerabilities that come from, let’s just say…” They use the term here, “a junior dev.” So I’m assuming this is some sort of web developer or somebody who’s on the front end side. What should we look for in somebody from a development perspective? And Abhik, I’ll throw to you first, since we went to Borna first the last time.

Abhik Mitra – Yeah, I think the first place to start is: what type of information does that individual have access to, right? So in the case of a junior dev, you might be looking at source code, you know, where does that source code reside? And then also looking at, you know, are there specific examples or, again, indicators where that individual might be moving the source code from, you know, a trusted to an untrusted location? Again, you know, I think understanding that, at the highest level at least, gives you some keys into understanding: is this person putting data at risk? Again, you know, at this person’s former employment, it might’ve been totally acceptable for them to put stuff onto, like, a GitHub, just as an example. And that might’ve been common practice; however, for your organization, that may not be the common practice. So looking out for those types of situations. And, you know, in many ways, this could be a very simple conversation with that junior dev: “Hey, we noticed you were doing this. This is not a common practice here. Here’s a best practice way of doing that.” And there are various ways of doing that, right? Instead of going to the extreme of, “You’re doing this, you have malicious intent. Now it’s time to take an action, boom.” So yes, there are vulnerabilities, we’re all human, we all make mistakes, and I think that’s part of it as well, but I’d love to hear Borna’s thoughts as well.

Riley Bruce – Yeah. I’d just like to say that I never make mistakes, except for when I don’t unmute myself, but Borna, go for it.

Borna Emami – Well, I was just gonna say, I think, you know, when I think about a developer, I think about sabotage, and I think about, you know, malicious code and theft of source code. And I would say for a lot of organizations, that lens around what a particular individual could do if they were motivated, around bringing down servers, bringing down networks, taking really sensitive information with them, I don’t think that that’s a lens that is always put onto the junior developers. So I think going through the risk exercise that we talked about, around what could happen, what are the threats, and how are we positioning ourselves to be able to mitigate that, I think is key. There’s two things when I think about more technical folks. One is around, and we talked about it a little bit, segregation of duties, right? So what are those really critical functions that should require two individuals, so there’s not a single point of failure? So that’s one vulnerability, to answer the question. And then I think the other one is erosion of access control. So we see this a lot of times: once you get access, you just have access indefinitely. And how does an organization start to tweak that? So that if there is an individual who is motivated to, let’s say, bring down a server, and they don’t need access to that particular code, they don’t have it. And that’s a lens that requires an ongoing analysis of making sure that people only have access to what they need in the performance of their duties. The last thing I will say about sabotage, ’cause I equate it to developers: it is really the low-frequency, high-impact insider threat that oftentimes gets ignored. So I think, you know, when you’re going through it with your organization, you really gotta spend some time thinking about, “Well, how are we mitigating sabotage?”

Riley Bruce – Fantastic. The last question, and thank you all for throwing questions in the comments, both Nidhi and Thomas have said thank you for getting to their questions. So thank you both for that. The last question here…

Borna Emami – By the way, Riley, I was just gonna say, these are great questions. Sorry to cut you off, I just wanted to let the group know, these are really insightful.

Riley Bruce – Yeah. Agreed, so no worries on cutting me off. The last thing though is, we’ve kind of talked about how we can start to build and think about a holistic program here. And obviously 45 minutes is not nearly enough time to really get that completely off the ground, but how do we measure the success of an insider threat or insider risk program? You know, Borna, you just mentioned that we’ve got those high-impact, low-frequency events that maybe you want to completely prevent from happening. How do we measure that success? How do we prove, potentially, a negative here?

Borna Emami – Yeah. And we’ve seen everything from cases that have been escalated and investigated as a measure, as a metric; we’ve seen organizations highlight things like reminders around policies, or training given to individuals that may be doing things that they shouldn’t be doing, so that’s captured. We’ve also seen organizations where individuals exfiltrated data, their legal or their OGC got involved, had the individuals sign an affidavit and bring the documents back, and that was a measure of sort of success of the program. You know, I look at it slightly differently. I look at it beyond just, how do you identify? How do you catch the insider? I think if you’re building a holistic program, and there’s a prevention and a detection and a response arm to your program, it has been my experience that you’re gonna get a lot of insight into opportunities to better communicate with the workforce, opportunities to better train the workforce, opportunities to set behavioral expectations around how to use IT, how to post, what not to post on social media. And so I very much view insider threat programs as recommendation engines. Once you start bringing this data together, and you’re having a working group that’s meeting on a regular basis to talk about how the organization is preventing, detecting and responding, it has been my experience that you will find a lot of fruitful opportunities to improve process and policy and even technical controls, and that could be a measure of the program’s success.

Riley Bruce – Abhik, let’s have you close it out. How do we measure success?

Abhik Mitra – Well, that’s an interesting question, because I think, numerically speaking, if you wanna strictly look at it in terms of numbers, the effectiveness of a program would be, year over year, how many breaches have gone down, right? If I had 50 last year, am I down to 40 this year? What measures were taken to make that happen? The other part of this is the risk posture conversation, right? Has the program helped me influence my level of risk over time? And I think that’s more of an emerging conversation, where organizations are looking to better understand risk posture. But again, going back to something Borna said, and this becomes maybe the non-numerical measurement, but I think it’s equally as effective and impactful, which is: as a result of your program, have you created a better security culture? Because ultimately, at the end of the day, the outcome of any program should be, “How have I influenced my employees to make better decisions? Right? Day-to-day decisions, are they making better data handling decisions?” Because in essence, if you’re training and educating and empowering your employees to make those decisions, it isn’t just security and IT that are tasked with protecting your data, right? The organization is protecting your data holistically. So that becomes maybe an outcome of risk, but any effective program will get all of those feeds and channel them into, you know, really empowering employees from an educational perspective. ’Cause that’s such a big part of the puzzle, right? If we’re, let’s just say, making smarter day-to-day decisions, there will be an impact on risk overall. So that was…

Borna Emami – That was a good one, Abhik. I should have… I’m jealous, I should have said something about culture. It was a good point.

Abhik Mitra – I had time to think about that while you were answering. So don’t feel too bad.

Borna Emami – Yeah, yeah.

Riley Bruce – Borna, I have a hunch you have one more thing you wanna say to kinda wrap this up with a bow.

Borna Emami – Well, yeah, no, I appreciate that, Riley. I would say, if there’s one takeaway, at least from me, it would be that insider threat mitigation is a people-centric challenge, right? This is about people, it’s not about technology. We have seen many programs go in and buy a piece of technology, or user behavioral analytics, that brings together all these feeds. And if you haven’t set up the working group, and you haven’t defined insider, and you haven’t sort of defined the culture around security and your risk tolerance and what you wanna protect, we see those programs fall flat a lot. So just remember, it’s a people-centric challenge, not a technology one. And you’ve gotta be holistic in terms of how you approach it.

Riley Bruce – Boom. That is our mic drop moment for the week, folks. Thank you, Borna Emami and Abhik Mitra, for joining us on this session of Code42 Live. If 45 minutes of talking about insider risk was not enough for you, I encourage you to, number one, join us in two weeks, where we’ll be talking about how to build a program from the ground up, and also join us on September 14th and 15th for the Insider Risk Summit. The summit is something that you can register for at insiderrisksummit.com. We had some interesting announcements on that front here over the past week or so; we actually have Chris Krebs, the ex-director of CISA, who is going to be the keynote speaker for that. So if you would like to join us for that, please do. And thank you very much again, we will see you all the next time. This has been Code42 Live.
