
Live Recap: Understanding Your Insider Risk & the Value of Your IP

Earlier this week, we had a discussion about understanding your Insider Risk and the value of your intellectual property with Derek Brink, VP and Research Fellow at Aberdeen Research and Strategy, and Abhik Mitra, Industry Relations Lead at Code42. You can read the full transcript or watch the recording below, but here are the key takeaways.

This isn’t a new problem

Derek brought up an excellent point in response to the first question of the session. This problem isn’t new. Insider Risk has been around for millennia; just ask the citizens of Troy. The difference now is that, for a variety of reasons, it is no longer viable for organizations to bury their heads in the sand and pretend that the problem is “preventable”. People need to work. They need to collaborate, and share, and yes, occasionally upload family photos to Facebook from work devices. With over 4 million resignations in April of this year, and numbers showing that 40 percent of workers would rather leave their job than go back to the office full time, Insider Risk due to the way we work isn’t going anywhere. Businesses need to adapt to a problem that has existed forever but is now growing rapidly. The problem isn’t new, but now is the time to do something about it.

Balance (as with anything) is key

This one is very easy to understand conceptually but very difficult to pull off in practice. You’re not going to have any luck trying to completely eliminate all Insider Risk through restrictive policies and/or blocking. That’s a recipe for disgruntled employees, which, counterproductively, will only lead to greater Insider Risk. In 2021, it is necessary to balance the needs of the business (keeping IP and other sensitive data safe) with the way that human beings expect to work. The way to do this is to define your risk tolerance and then tailor policies to meet that tolerance. Tolerance will vary across organizations, departments, and time. Every security team will need to work cross-functionally to find out which orgs need to accomplish which tasks to be able to do their jobs. After finding that out, the security team can work with those organizations and leadership to define the proper tolerance of risk for a given workflow. Then, it’s time to move on to the next point.

Ready, aim, THEN fire

Too often, security and compliance organizations start with policy recommendations (or mandates) and technical controls at the outset. This will inevitably lead to a cultural understanding of security as being the department of “no” or “know-it-all.” The recommendation from Derek during our session was to start by readying the organization for policy by getting a contextual understanding of what the needs of individual departments are. Then, aim by gaining visibility into normative data behavior patterns through process and technology. Ideally, only then should an organization “fire” policies, procedures, and technical controls at employees. Define risk tolerance, educate employees and gain visibility, then deploy new policy, process and technology; not the other way around.

Resources for quantifying your Insider Risk:

During the stream, the following resources were mentioned to assist in quantifying and managing Insider Risk: Jack Jones’ Factor Analysis of Information Risk (FAIR) model and the FAIR Institute, Douglas Hubbard’s book “How to Measure Anything,” and the quantification research that Aberdeen and Code42 recently completed together.

For the full readout of what we discussed in the session, watch the video below or find the full transcript at the end of the blog:

Now streaming: Code42 Live

This spring, Code42 launched Code42 Live – a series of live community discussion events to help solve the problem of Insider Risk. Recent guests have included Samantha Humphries and Chris Tillett from Exabeam, Elsine Van Os from Signpost Six, an Insider Risk Summit 2021 speaker, and Edward Amoroso from TAG Cyber.

Join us on August 10 for a conversation with Deloitte on “Developing a Holistic Insider Threat Program.” To learn more and join the discussion visit code42.com/live.

July 27th Code42 Live Full Transcript

Riley Bruce – Hello everyone, and welcome back to another session of Code42 Live coming at you, well live, I guess, from a variety of sources, including code42.com/live, Twitter, LinkedIn, and YouTube. So today I’m gonna be having a conversation with Derek Brink and Abhik Mitra. And we’ll be talking about understanding your insider risk and your intellectual property’s value. And rather than spend a whole lot of time just babbling on right now, I wanna go ahead and have our two guests introduce themselves, and Derek, since you are the guestiest of the guests, I will start with you. So if you could tell us who you are, what you’re up to, and maybe just an interesting or fun fact about you as a human being.

Derek Brink – Okay Riley, thanks. Hi everybody, it’s great to be here, I am live to say that. I’m a researcher and an analyst with Aberdeen. Since November, Aberdeen’s been part of Spiceworks Ziff Davis, and I’ve been at Aberdeen since about 2007, so quite some time. Before that I was in industry for about 20 years, and for eight years prior to coming here, I was at RSA Security. Here’s the fun fact: for part of that time, I was the product line director for RSA SecurID. That’s the well-known one-time password tokens. And it was during that time, when I was the product line director, that my team redesigned the token that’s still shipping today. Kinda looks like a keyhole, or if you turn it on its side, a thermometer, with the ball at the bottom and then the part where the six digits come. But here’s the fun fact, because of me… I was reading a book, literally, this is the fun fact. I am such a nerd. I was reading a book about shepherds in the field and the tally sticks they keep to count their sheep, and they make the marks 1, 2, 3, 4, but then they put the slash, like the Roman numerals, when you count. That’s because as humans, we can’t mentally see more than three or four things at a time, we have to group them. But if it gets bigger than three or four, we can’t group them anymore. It’s the same reason why phone numbers are the way they are, in groups of two or four or three. So anyway, I asked our engineer in this redesign of the token, could you put a little space between the first three digits and the second three digits of the token on the display? And he said, “Yeah, of course I can”. Because that’s how I did it. Maybe you have the same experience, you guys: you read the first three digits, you type them in, you look again for the next three, you type those in. And so they put that space in, and to this day I have now noticed that this little space is showing up in my Duo and my Google Authenticator; it’s kind of standard now, this little space. So as far as I’m concerned, it’s the Derek Brink Memorial space. Nobody else knows that, except whoever I’ve told, so there you go, that’s my success.

Riley Bruce – Hey, everybody who’s watching now knows, including Diana from New Jersey, hello. So we’re kinda all over the country today. Derek, I think you’re in Florida, right? Abhik and I are in Minnesota.

Derek Brink – Yeah, North Florida, yep.

Riley Bruce – So hello from those locations to you, Diana, and I should say thank you to everyone who’s joining us live. If you have questions or comments for Derek or Abhik as we go through this, throw them in the chat and we will be sure to address those as they come up. And Abhik, sorry for hopping in there, I’m gonna throw to you now for you to explain who you are and something fun about you.

Abhik Mitra – Yeah, no worries. Thank you everybody, it’s great to be here with everybody that’s joining us, and of course hello to you Riley, as well as Derek. So I’ve been with Code42 for about five years now. I am part of the portfolio strategy and marketing team. And part of that is I get to work with our industry relations. So I get to converse with Derek and others in the analyst community, learn from their wisdom, get guidance on our positioning, our product as well. In terms of a fun fact about me, well, I don’t think it’s as interesting as Derek’s, but I happen to be a movie buff. So if you ever wanna have a serious conversation in the Marvel world of movies or DC movies, I am that adult. Probably a little too grown up, trying to have a very serious conversation about how that movie should have been made and kind of discussing some of the finer points. So I have two young kids now, so sneaking out for movies doesn’t exactly happen as often as I’d like it to, but my secret goal is that they grow up and they’re as interested in the movies, and then I get to go with them.

Riley Bruce – You know, nothing like subterfuge with your own children. I will say that with regard to movies, art definitely imitates life. And so I think that taking that seriously is 100% a great thing to do. And without further ado, thank you both for being here. We have some prepared questions, and like I said, anyone joining us live on any of the platforms, feel free to throw your questions in the chat and we will get to those. But like we’ve mentioned before, and like we’ve talked about on previous Code42 Live sessions, understanding your risk posture and your insider risk is really important. But I’m gonna ask this to Abhik and Derek: why is now a particularly important time for understanding (it says your organization’s, but really any organization’s) risk posture? And you know what, Derek, I’m gonna start with you because I feel like you probably have a particularly strong opening salvo here for this question.

Derek Brink – Well, I have a strong point of view, I will say. You used the term risk posture, and that’s exactly the right thing to ask. The reason that it’s a strong point of view is because I feel that not everyone has arrived yet at this point of view. What I mean by that is that, if you go back 10 years or more, fewer people were even talking about risk and risk posture, and yet that’s really the only reason we all exist as security professionals, as vendors: because we’re trying to help the organization understand its risks and then make business decisions about what to do about them, to accept them or take steps to reduce them to a level that is acceptable. The question people used to ask is different. They used to ask the question, how secure are you? And to me, that’s the silliest question in the world. How secure are you? I mean, what answer can you give to that? Pretty insecure, 42, there’s no answer to that, how can you give an answer to that? But you can say, what’s our current risk, and is it acceptable or not? And then if it’s deemed to be non-acceptable, you can have a conversation about, okay, well, what could we do to mitigate that? So anyway, it’s always been important, but it’s important now. To answer your question, I think it’s becoming more and more mainstream, which is really gratifying to see. Still, we have a long way to go; we’re maturing as a profession and as an organization. Not as an organization, as a discipline, we’re maturing, but we still have a ways to go. So even the fact that we’re talking about risk posture, as opposed to how secure are you, I think is a big positive thing.

Riley Bruce – Yeah, I wanna come back to this in just a minute, but Abhik, I wanna let you go ahead and chime in here. Is there anything about right now that is making this particularly important to be focused on your insider risk posture?

Abhik Mitra – Yeah, I mean, I think it really starts with the fact that the insider problem hasn’t been solved yet, right? We’ve been talking about this for decades. There have been solutions put in place that have promised much, delivered very little. But I think an important point here is what’s happening now, right? As we kind of look around us, we’re in the midst of this pandemic, and a lot of organizations are trying to figure their way out of that. And it’s raised a lot of good questions, right? How are people working? The way people work has fundamentally shifted. So you can’t just have a makeshift insider solution, you now need to start thinking about it more seriously. I mean, we’re all remote, VPN is kind of a thing of the past. We’re all at liberty to use our own networks, our own devices. So as you kind of look at what now means, the world has fundamentally changed. So I think there’s a lot that’s driving the urgency now, but I think another key point, and this goes back to the research from Aberdeen, is that 80% of the insider breaches occurring tend to be non-malicious. Now, the reason I bring that up is, when we talk about this idea of risk, it’s a really good opportunity for organizations to start thinking about things more holistically. Back in the day, and I say back in the day but not that long ago, when people thought about insider threat, it was a very niche, very narrow focus, right? The assumption was you were doing something bad. The assumption was you were a malicious user. But what the risk conversation is now doing is bringing into the fold this idea that somebody might be doing something for non-malicious purposes; it’s still risk. But you have to kind of account for the careless users, the ones that are bypassing the policies knowing they exist, and of course the malicious users as well. So the good opportunity now for organizations is to take a step back and think of this less as threat and more as risk, thinking of it holistically across the board.

Riley Bruce – I think that that’s a particularly-

Derek Brink – Can I disagree, Riley? I just want to disagree.

Riley Bruce – No disagreement at all.

Derek Brink – He’s spot on.

Derek Brink – I can’t agree. He’s spot on he’s right. And we did for the longest time talk about insider threat. So an insider, of course, we all know by definition, it’s just someone who has authorized access to systems and data and resources. And threat is, again that classic language of how secure are you, whether if there’s a threat it’s gotta be dealt with. Threat is someone who’s got malicious intent, and Abhik is just spot on. If you change the conversation to risk, so instead of how secure are we, say what’s our risk? Risk is just how likely something that we don’t want to happen, in some period of time, and if it does happen, what’s the impact? That’s what risk always means. There’s no confusion about that, it’s what is the proper definition of risk. So if you talk about insider risk, it does change it, it says it made you think differently. You said, well, how likely is it that we’re moving files around, for example, in our conversation today. How likely is it that there’s some chance that that information which could represent, for example, our valuable intellectual property, how likely is that, could it be exposed? And if it does get exposable, how valuable is it? How much impact would it have? That’s a risk conversation and it doesn’t have to be just a malicious rogue employee, it could be all the things that Abhik mentioned. So he’s a hundred percent correct on thinking about it in those terms. And all of this stuff about work from home and stuff that’s currently, of course, that’s what we’re going through, but these conversations are what we should be having always no matter what’s going on, there’s always gonna be something going on.

Riley Bruce – Yeah, I wanna first off chastise you, there was no disagreement in that and I was promised disagreement. So that needs to-

Derek Brink – It’s coming be patient it’s coming.

Riley Bruce – Okay, got it, I’m looking forward to it. There actually is a question that came in from the chat here, so that’s gonna supersede the next question, so both of you get to think on your toes. So, this came in from Bob, and the question is, how do businesses today manage the different focuses between growth mandates and understanding risk posture while also growing? And so I think that we can kind of amend this to, how do, or how would you recommend? So kind of both ways, ’cause I think we probably wanna be giving practical advice here, but also knowing what people are doing today is good in and of itself. So Abhik, you got to go last the first time, so you’re going first this time. You get to think more on your toes. How do you recommend balancing growth?

Derek Brink – Riley, I disagree with him going first, if you want disagreement. No, just kidding, go ahead.

Abhik Mitra – Yeah, yeah, like I said, we’re off to a great start already. You know, they’re both related. And I understand the context of the question, right? Because when you think about growth and when you’re trying to think about risk, I think the ultimate security conundrum is how do you balance the two? Because as an organization, we’re always told you’ve got to move fast, and what does move fast mean? You’re expected to make quick decisions, you’re expected to move so fast that you could be missing the intricacies of what you need to start thinking about in terms of risk. But that’s kind of the importance, from a security practitioner’s perspective, of weighing: how do I figure out this idea of risk and also understand that the business keeps moving? So a great example right now is, we all need to collaborate, right? We all rely on these tools, whether it’s Slack, whether it’s OneDrive, Google Drive, the tools that make us move and function faster. And all of that is growth oriented, right? Because if we’re all collaborating, centrally working, even though we’re all remote, that’s a benefit to the company. But of course with that comes the risk of, well, I’m also remote. And while these are fantastic tools, they also represent classic exfiltration vectors as well. Because one of the most common pain points out there is visibility into what might be moving from a corporate Google Drive account to a personal Google Drive account. It seems very simplistic, but it’s actually a harder problem than that. So I think for organizations, it’s really important to start thinking about how fast do we wanna move, and what solutions do we have in place to help us with that fast movement as well. But Derek, I’d love to hear your thoughts on that.
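
To make the corporate-versus-personal Google Drive example concrete, here is a minimal sketch of the kind of visibility check Abhik is describing. The event shape, field names, and domain are illustrative assumptions made up for this post, not a real product API or Code42’s actual detection logic.

```python
# A hypothetical sketch: given file-movement events (assumed fields: user,
# file, destination account), flag moves whose destination account sits
# outside the corporate domain, e.g. a corporate Google Drive file synced
# to a personal account. Domain and event format are illustrative only.
CORPORATE_DOMAIN = "example.com"

events = [
    {"user": "ana@example.com", "file": "roadmap.pdf", "destination": "ana@example.com"},
    {"user": "ana@example.com", "file": "roadmap.pdf", "destination": "ana.backup@gmail.com"},
]

def is_personal_destination(event: dict) -> bool:
    """True if the destination account is outside the corporate domain."""
    return not event["destination"].endswith("@" + CORPORATE_DOMAIN)

for event in events:
    if is_personal_destination(event):
        print(f"Review: {event['user']} moved {event['file']} to {event['destination']}")
```

As Abhik notes, the real problem is harder than a domain check (shared links, shadow accounts, sync clients), which is exactly the visibility gap he calls out.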

Derek Brink – Well, the question is exactly the right question, because security has traditionally been viewed, back in the day, and sometimes still today, as the organization that creates friction or road bumps or says no to things, and we don’t want that. We’re trying to enable these positive things. Those, by the way, just as a quick aside, those are risks too. If we’re trying some strategic initiatives like digital transformation or some kind of collaboration or productivity or automation, whatever the case may be, those aren’t certain by any means; there’s some likelihood that we’re gonna get the positive results that we want, and if we do get them there’s also some impact. So it’s really, by definition, risk; we often talk about it in terms of opportunity instead of sort of downside. I steal blatantly from places like Deloitte, who call the security risks we normally talk about unrewarded types of risk, versus rewarded types, but call it opportunity and risk if you want, I don’t care. It’s still likelihood and impact. So yeah, we wanna enable all those good things, and yet we have to say, yeah, we wanna pursue those positive opportunities, but at the same time we need to advise the organization about it in business terms: here’s the likelihood that something bad could happen when we pursue those, and here’s the corresponding impact. And then we have to make a decision, okay, is that acceptable or not? So how can we achieve the positive things we want, and at the same time be aware of and make business decisions about those often very technical issues, in our case today insider risk and the value of our IP and things like that, and they go hand in glove. But you don’t do one in the absence of the other; they’re part and parcel of the same conversation. That’s where we’re going.

Riley Bruce – Yeah, so I want to kinda go back here quickly because both of you had great answers to that. And what I’m hearing is that it’s balance, is the answer to everything, right? It’s figuring out both what your… Going back to the risk posture conversation just a minute ago, and then talking about how do we balance the desire for growth, with that obvious business imperative to then also keep the organization safe and secure. So to put a beat on that, hopefully that synthesizes something for the folks in real life.

Derek Brink – It does, it’s like real life too, Riley, think about it. I hate to get involved in things that make people upset, but do you wanna do extreme sports? Well, yeah, you’d probably wear a helmet. So you can achieve the things, but you also take steps to address the risk. We could get into all kinds of controversial things if that’s what you really want. But there are some responses to pandemics where we have a very low tolerance for risk. Some people say everything must be locked down and take extreme measures for masks, et cetera, et cetera. And other people have a different risk tolerance and they’re more open. And there’s different… same situation, same exact context, same facts, people make different decisions and have different points of view about how much risk is acceptable. So it’s no different in real life and other scenarios than it is in security, it’s the same thing. So if we get it in one arena, we should be able to relate to it in another.

Riley Bruce – I think that’s a fantastic analogy, Derek. Abhik, did you have anything you wanted to follow up with that on? Otherwise I’ll move on to the next question.

Abhik Mitra – I’ll just very quickly say a hundred percent agree with Derek. I mean, this idea of risk is relative, right? Nobody can tell you, this is what your risk will be. So for organizations it’s important to understand, well, what level of risk are we comfortable with, right? Everybody is going to inherit some level of risk, it’s just a question of what you’re willing to be comfortable with. And that could vary by the industry you’re in, the type of business you’re in as well, but I agree. And that’s where the importance of risk comes into play and just understanding what that means.

Derek Brink – Riley, can I say, if you wanna get a little controversial? When someone says… and some vendors in the security space might not like to hear this from practitioners… if they say to you, your risk is red, your risk is 72 out of a hundred, that’s not very helpful really, if you think about it. It’s not helpful to give a qualitative number or a pseudo-quantitative number like that. If you wanna say it’s this, like… I’m in Florida now, but my home has been in the Northeast, in New Hampshire, for 30 years. And so if I get a snowfall forecast on AccuWeather, it’ll say, here’s our forecast for snow. It doesn’t say you’re gonna get exactly 7.2 inches. It’s gonna say you’re gonna get between 6 and 20 inches. And here’s the percentage likelihood for different parts of that range. And then I can make a judgment from my appetite for risk. If it’s more than six, I better make sure I have gas for my snowblower. If it’s more than 15, I really better get to the store and buy bread and milk and eggs, like everybody else does when there’s snow. Blizzard forecasts, you’re in Minnesota, you get this. Because my snowblower is only six feet and I’m not gonna be able to get out right away. So our risk profiles are different and they’re contextual, not a single number. You can always look backwards after the fact and say, oh, there was 7.2 inches, but looking forward, it’s a range of possible things. Risk involves uncertainty. So how we embrace that uncertainty and how we deal with it: we first have to make our estimate of that risk. What is it, and is it unacceptable? And then we can have a conversation about, okay, is it worth making investments in solutions to help reduce that to a more acceptable level? Is the juice worth the squeeze in that sense? We gotta have that process, that’s part of the conversation. We can’t just be security people saying, hey, it’s bad, you must spend with us, and if you don’t, you’re a bad manager; that’s not the way it is.

Riley Bruce – I think that you definitely have gotten to the crux of the controversy. It’s the old “life is pain, Highness; anybody who says differently is selling something.” But I wanna move on to the next question here. And then we have another question from Bob, but I wanna get to our next question here, ’cause I think it does tie into what you were just saying, Derek. And that is: organizations have historically always tried to quantify risk, but have fallen short. What is so challenging about trying to quantify it? And we’ll keep it specific here to insider risk. And Derek, I will throw to you first on this, and then we’ll sort of cascade over to Abhik on my right-hand side here.

Derek Brink – Well, I appreciate the question, it’s an awesome question. I’m very motivated personally by addressing this question. The light bulbs went off for me on this when I was still at RSA, so this is 15-plus years ago. But I since have come to… I teach a course in this very thing, a graduate course at Harvard University on how to quantify risks. There are two guys I admire a lot in this space, and I recommend you all read their writings. Jack Jones is one; he’s a guy who at one time was the CSO of Nationwide Insurance. I hate to do this, but Jack always tells a story of his board asking him, well, Jack, how much risk do we have? Instead of how secure are we? How much risk do we have? And the only answer he could give was, well, quite a lot. And then they’d say, well Jack, if we give you this budget you’re asking for, how much would we have then? And the only answer he could give at the time was, less. And that’s not satisfactory to boards or to security leaders. So Jack went on to develop what’s today known as FAIR, F-A-I-R, Factor Analysis of Information Risk. So I recommend reading about that and the FAIR Institute as a way to help quantify it. And another guy I admire quite a lot is Douglas Hubbard. Douglas Hubbard wrote a book that I use in the courses I teach, it’s called “How to Measure Anything”. We think that things can’t be quantified; that’s the answer to your question, Riley, about why we fall down. We say, because it’s hard. How do we quantify the value of intellectual property? How do we quantify the value of reputation? And Hubbard would say, well, if something like that really matters, we should be able to observe something about it. Let’s say our reputation is damaged, and it’s hard to argue that reputation isn’t important, for example. If you can observe something about it, then you can measure that thing in quantifiable amounts. And now you’re off to the races in terms of at least translating that “how much risk do we have” into business terms in a quantifiable way. And not in a single fixed number like, oh, the average cost of a data breach is $148 a record; that’s just silly, it’s bad analysis. I could go on and on, you don’t have time for me to rail on that. But if you could give that snowfall forecast analogy, it’s between this and this, and here’s the likelihood of each point on that curve, that’s actually useful to a business conversation; that can be quantified and mere mortals can do it, right? I teach people how to do that, and that’s the nature of the project that we just did together, myself and Code42.
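
To show what a “snowfall forecast” answer to “how much risk do we have?” can look like, here is a minimal sketch of a FAIR-style loss simulation. The event frequencies and impact parameters below are illustrative assumptions invented for this post, not figures from the session or the Aberdeen research; the point is only that a range with likelihoods, rather than a single score, is something mere mortals can produce.

```python
# A minimal sketch of FAIR-style risk quantification: instead of a single
# "risk score", simulate a range of annual loss outcomes and report
# percentiles, the way a snowfall forecast reports a range with likelihoods.
# All parameters are illustrative assumptions, not real figures.
import random

def simulate_annual_loss(trials=100_000):
    losses = []
    for _ in range(trials):
        # How many loss events occur this year? (assumed frequency weights)
        events = random.choices([0, 1, 2, 3], weights=[50, 30, 15, 5])[0]
        total = 0.0
        for _ in range(events):
            # Impact per event: lognormal spread around an assumed ~$50k median
            total += random.lognormvariate(mu=10.8, sigma=1.0)
        losses.append(total)
    losses.sort()
    return losses

losses = simulate_annual_loss()
for pct in (50, 90, 95):
    print(f"{pct}th percentile annual loss: ${losses[int(len(losses) * pct / 100)]:,.0f}")
```

Reporting the 50th, 90th, and 95th percentiles mirrors the forecast analogy: a decision-maker can compare each point on the range against their own appetite for risk.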

Riley Bruce – Yeah, and that’s where it’s a great opportunity to plug both of those books that you just mentioned. And for those who are following along live or watching the review, I will make sure the links to all three of those pieces of material are in the description of this, as well as the blog post that will be created kind of summing up what we talked about today. So we’ll make sure that you have those resources available to you, if you wanna check those out. And before moving on to, I do wanna get to Bob’s question here in the chat and everybody please make sure and throw your questions there as well. Abhik, why has it been so difficult or what makes quantifying risk difficult for organizations?

Abhik Mitra – Well it’s interesting to hear Derek talk about RSA, because for those of you, and I’d love to hear your commentary in the chat as well. But for those of us that have been to RSA, Black Hat in person, when those in-person things were happening. It’s always entertaining to me how cringe-worthy it is when vendors are showing their solution demos, and you’re presented with these traffic light colors, red, yellow, green. And you’re almost expected to understand what those colors mean. And, I’m thinking to myself, well, great, somebody came up with an idea to surface risk on your behalf. And the common feedback I would get was, those numbers mean nothing to me because there’s no context behind them. How do I know that’s a risk? How is that any different from like a false positive? So it’s interesting that, the struggle is real, right? It’s existed for as long as we can think of. But I think there’s something else at play here as well. And this again, kind of takes me back to the research we did with Aberdeen. 75% of organizations said that they have no centralized visibility into what’s happening across their endpoints, across anything for that matter. And that’s a staggering number if you think about it, because look, ultimately when we wanna make decisions and any type of decisions, we need proper visibility. I mean, how else are we going to make those decisions if we’re getting inaccurate data, right? So kind of at the crux of it, if you don’t have solutions in place that are feeding you the right information, even beginning to assess risk, even beginning to try to quantify it, you’re already off to a bad start. So it’s almost like you have to start there, start with the right centralizations. And for those of you that know me, you’ll know that the IT promise of Single Pane of Glass makes me cringe every time. Because I just don’t think it exists, at least not yet. But again, I’d love to hear from our audience if they agree to that generally, but to me I think that’s a major pain point yet to be addressed.

Riley Bruce – I think that you will hear some chatter in social and in the chat about that one. So let us know in the chat if you have seen a good example of a true Single Pane of Glass. I challenge you to prove Abhik wrong, that one does exist. So I think that the next question that we’ve got is coming in from the chat, and I’m gonna read it and then I’ll try and break it down into a little bit more consumable chunks here. Do you manage risk by processes and controls, or is it better to focus on understanding human behavior and the growing bypass culture? Which is something that we’ve kind of talked about with regard to how people will just go around policies, and actually this’ll tie in nicely to a planned question later as well: how do you scale addressing insider risk issues as people more openly collaborate? So I guess the first part of this question is, is it even possible to manage risk through process and control alone? And Derek, or actually Abhik.

Derek Brink – Can I actually start?

Riley Bruce – Go for it, yeah.

Derek Brink – You let me start, okay, thank you, I appreciate you.

Abhik Mitra – This way I can contradict you.

Derek Brink – Yeah, there you go. I would first of all say, this is the professor in me, I guess. Policy is a form of control; policy is an administrative control. So we can say, as new hires we all get to click through the acceptable use policies and so on, and sometimes we never see those things again. But we have policies that say, thou shalt not send data to these places unless it’s with an approved secure solution. And we may forget about that. So we can have a policy but not have a way to monitor it, so there’s a bit of a lack of visibility, and we may not have ways to enforce it with technical controls either. So we have to ask ourselves sometimes, I think, is a policy by itself actually useful? There’s a couple of truisms I’ve written down as I’ve heard them over the years. One is that a policy that’s not enforced is weak and ineffective. And a bad policy that’s enforced is the worst thing that can possibly exist. So we have to have the balance right between the right policies, the controls that can help give us visibility into monitoring what their state is, and also technical controls that can help us to enforce them. I think that gets a little ways towards answering the question, but probably not all the way, so I’ll stop there for you, Abhik, to disagree with me.

Abhik Mitra – No, I think my opinion on this is, of course you need policies, of course you need controls, but don’t lead with those. A helpful start is understanding how your organization works, right? How your end users want to work. It’s a little bit of a role reversal, right? Because in the world of security, you generally lead with those policies, with those controls, and then you basically tell end users how they need to fall in line. I think that conversation has shifted a little bit today, and rightfully it should, because I think, again, with remote work, with this idea of you wanna work on a device of your choice, whether it’s Mac, Windows, or what have you, that conversation needs to occur first. And then of course you have the policies and controls, but the risk of leading with policies and controls is, well, what if you get it wrong? What if you haven’t properly assessed how the organization works, or the culture even, for that matter? So start there, work with your end users, see how they wanna work. And then I believe you can build policies and controls that are still gonna protect your data. I think it goes back to that idea of balance, right? And I think you achieve better balance if you account for what end users need in the first place. So hopefully that answers the question, but I think it’s a really good question.

Derek Brink – Can I say again, can I add onto that? Riley, do you mind?

Riley Bruce – Yeah go for it.

Derek Brink – This idea of policy, that’s how it used to be. Mike the IT guy, in the absence of any real governance, used to say, well, this is what I heard would be the best policy. So he makes it, and then we all are stuck with it. Or maybe some industry group, a bunch of smart people, got together and said, these are the top things that you should do. Or heaven forbid an analyst says, here’s what you should do. But what Abhik is saying is, yeah, let’s actually see the way the business operates and how it wants to operate, and then implement the policies and controls that support that. It’s the smart way to think about it. To do it any other way is where you get these problems where you have people say, yeah, I know the policy, but there’s also a deadline Friday, it’s the end of the month, and we have to get this done. And so I need to move ahead and go around it. And that’s not malicious, that’s conflicting goals. So when management has policies that conflict with each other, you’re gonna pick the one that gets you rewarded and helps the company make its revenue and gets your bonus paid and all those kinds of things, as opposed to remembering what you signed off on as a new hire some years ago. That’s my view on it.

Riley Bruce – So I think that that is actually a fantastic lead into a question that we had planned sort of for the end of the conversation. And I wanna make sure we’ll come back to Robert’s or Bob’s question about bypass culture here in a minute. But sometimes policy is just bad. Sometimes the policies are not good, like you were just talking about, Derek. So this is kind of a fun and also a cautionary tale for everybody. But both of you, what is the worst security policy that you’ve ever seen put into place, or heard about being put into place, at an organization? And Abhik, I’m gonna start with you, and then Derek, we’ll have you close it down; I’m sure you have some good examples.

Abhik Mitra – Yeah, I’m gonna defer to Derek on this one. But I will just say very quickly, any policy that completely bypasses end user needs to me is a disaster waiting to happen. ‘Cause you’re gonna either deal again with the bypass culture, you’re gonna deal with people, figuring out ways to circumvent that policy. So it goes back to the last question, right? Like make sure that you’re understanding what your end users need. ‘Cause again, if you’re building policies and walls that are going to impede their ability to work, well, that’s just not productive at the end. But Derek might have a more fun story to share.

Riley Bruce – Well, real quick before that Derek. Abhik, if you could, we’ve used this word a couple times and I just wanna make sure that everybody understands it with some context. What is bypass culture? If you could define that for the folks who are watching.

Abhik Mitra – Thank you for pointing that out, Riley. Yeah, so bypass culture, the best way to explain it is, you have a group of users that are aware of the policies in place. They are aware of the security restrictions, they get it, they understand what those restrictions are. But they view those restrictions as getting in the way of their ability to get proper work done. So those users, and again, there’s a general understanding of what the policies are, those users are going to figure out ways to get around security. So let’s just look at a quick example. Let’s say your policies are strictly looking for social security numbers. The user knows, okay, logically it’s looking for a nine-digit number. So how am I gonna get around that? Well, I’m just gonna add another digit or two, and now I know I can potentially bypass whatever security protocols might be in place. And we’ve seen this happen. And here’s the thing that I think everybody should take away, right? Users aren’t stupid, they’re getting smarter. And as you become more transparent with your security policies and the solutions you have in place, guess what, somebody is at their desk thinking, huh, I know a way to beat that, and it happens. So hopefully that helps with bypass culture.
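
Abhik’s nine-digit example is easy to see in code. Here is a hypothetical sketch; the pattern and sample strings are invented for illustration, not any vendor’s real detection rule, but they show how one extra digit or a simple reformat slips past a naive match.

```python
# A hypothetical sketch of the "bypass culture" example: a rule that matches
# bare 9-digit SSN patterns misses the same data once a user pads it with an
# extra digit or reformats it.
import re

# Naive rule: exactly nine digits, optionally grouped 3-2-4 with dashes.
SSN_RULE = re.compile(r"\b\d{3}-?\d{2}-?\d{4}\b")

samples = [
    "123-45-6789",   # caught by the rule
    "123456789",     # caught by the rule
    "1234567890",    # padded with a tenth digit: slips past the word-boundary match
    "123.45.6789",   # reformatted with dots: also slips past
]

for s in samples:
    flagged = bool(SSN_RULE.search(s))
    print(f"{s!r:>16} -> {'flagged' if flagged else 'bypassed'}")
```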

Riley Bruce – So you’ve met me is what I’m hearing.

Abhik Mitra – Right, I’m really talking about Riley.

Riley Bruce – Yeah, exclusively. So Derek, now that we’ve kind of level set on what bypass culture is and potentially created an accidental poster child. Could you explain to us, or tell us about some bad policies that you have seen put into place at organizations?

Derek Brink – I’ll mention one, that’s frustrated me personally, company I have worked for in the past. Sorry, can you hear doorbells and things going on? I apologize, this is the real world.

Derek Brink – In the past policy-

Riley Bruce – Hey, you know what, it’s just perfect, we’re live, right?

Derek Brink – Yeah, live. The policy would be, hey, we’re defining for you in your training here’s what personally identifiable information is. And our policy is you may not transfer that to a partner unless you encrypt it; that’s the policy. But the frustrating part is, they didn’t tell you what tool you should use. They didn’t say, here’s our officially supported, sanctioned tool. They didn’t say any of that. And so users are left either saying, how am I gonna achieve this, what am I gonna do, and then spending time trying to figure that out on their own. Or you say, you know what, I’ve got the deadline, the boss told me this is really important, I’m just gonna bypass this policy this one time and do it anyway. So that’s a real example, and maybe not the most earth-shattering one, but we see it all the time. Actually, another fun fact about me is, since I started thinking about policies and ineffective controls, I have started to take photos as I encounter them in real life. So I have a photograph, for example, of a big giant gate. And then right next to the big giant gate, a little tiny fence. And so you could just step over the fence, just walk around it. I have nice photos of that. I have a photo at Home Depot, where if you take a piece of lumber they’ll cut it for you, at Lowe’s or other Home Depot-type stores. Right on the saw there’s a big sign, it says, don’t leave your measuring tape on the saw. And of course, in the photo I took, right next to that sign was a measuring tape. So again, we can have policies that seem to make sense at the time, and yet we don’t have the visibility to monitor what’s really happening and how people need to work. And so we have policies that are out of sync with how people do their jobs, and then we don’t have the visibility or the controls to actually enforce our policy. So that’s where we can get into this sort of battle between the users and security that’s existed in the past. I think it’s getting better; we want it to get even better, where security is seen as an enabler. So we’re getting there. I wanna be positive, we’re moving in the right direction, but we’re impatient, I think we’d like to go faster.

Riley Bruce – Yeah, I think that actually feeds in nicely. Now we kind of diverted away from the question that came through in the chat. Now we’ll come back to the end of it. You know, we’ve talked about how some policies don’t work. We’ve talked about how users will go around them when they do, when they feel like they need to, to get their jobs done. How does an organization, or how would you recommend an organization address insider risk issues at scale, as people are more openly collaborating? And Abhik, I am gonna go to you first on this one, and then we’ll throw to Derek. Where should the folks who are watching this start?

Abhik Mitra – Well it’s interesting because the answer might be sitting right in front of you, right? And that is the data. And, I think we tend to overlook the story that the data tends to tell us, even at scale. Again, going back to this idea of insider risk versus insider threat. You know, with threat you’re looking at users, a handful of users even, and then you’re kind of zeroing in on what you wanna understand about that. But let’s not forget what they’re after here, which is the data, it’s the IP. And only when you start to ask questions around, where is that data moving? Who has access to it? What the contents of that data might be that make it so interesting? Do you start really asking and probing the real questions? And this idea of data centricity has been around for quite some time, but I look at everything that’s happening, right? The solution approach has been to kind of zero in on the user, which to me feels very unfair. Focused on the data because ultimately the user is after the data, I’m not saying overlook the user completely, but only by starting with the data and scaling from there, do you start to understand other things. Because then again, you kind of go back to this idea of viewing the problem more holistically. And it’s interesting what the data itself will tell you, right? The metadata, the data, understanding or having visibility into things like, is somebody trying to change the extension of a file? Are they trying to falsify what the file might appear to be? Essentially masking its identity. So I think focusing in on the data and working your way to understanding, well, then who’s the user and what’s the interest in that data, is a good place to start. Derek would love your thoughts on this too.

Derek Brink – Yeah, I grew up in the country in the Midwest, and my grandfather, for my 10th birthday… It’s just who I am, people, you don’t have to agree with it. But for my 10th birthday, I got a .22 rifle from my grandfather and he taught me how to care for guns. And the idea behind shooting is first ready, then aim, and then fire. And when it comes to data, the ready part has to do with a lot of the things we were just talking about. What data do we have? Where is it? How does it move? Who accesses it and for what business purposes? That’s the ready part. And the aim, I think, would correspond to our discussion about risk. How likely is it that something bad could happen? And if it does happen, what’s the impact? And is that acceptable? What should we do about it? And then, based on all that, we should come to the fire part, which is, okay, we should have policies and controls to implement management decisions about risk and so on. But that’s not what happens. In all the research that I’ve done over these many years now, the preponderance of the pattern is that it’s really fire first and then the ready and aim. Fire first means, well, we’re gonna put policies and controls in place, and then maybe eventually, someday, we get to the contextual understanding, the visibility and the risk-based view of things, so it’s backwards. The right way to start is first ready, then aim, and then fire. I think that’s what you were saying, Abhik, but it’s not how we tend to do it as a group. Some do, more mature organizations definitely do, but it hasn’t been the mainstream way of doing business in the past. I do think it’s changed, and I’m trying to be Mr. Positive here, Mr. Brightside. I see things moving in that direction, which I do think they are. But again, I’m impatient, I’d like to move faster.

Riley Bruce – Yeah, speaking of Mr. Brightside, no, we are not sponsored by the Killers, in case anyone is wondering. But I think that that is a fantastic way to wrap up our session today. Thank you very much, Abhik and Derek, for joining us, as well as Bob and Diana for joining us live in the comments. I just wanna say, if you would like to continue this conversation, number one, we’ll be back with another one in a couple of weeks. And Derek, I think we still had a couple of things to talk about, so maybe we’ll have to schedule another one in the future. But also, if you would like two full days of conversations like this, join us for the Insider Risk Summit, September 14th and 15th. You can register for that now at insiderrisksummit.com. And we’ll be announcing more about speakers and sessions here in the coming days and weeks. So thank you again, Derek and Abhik and everyone, for joining us. This has been Code42 Live, see y’all soon.
