
Context is Key to Effective Insider Risk Management

3 learnings from our time with Greg Martin and Mark Wojtasiak

Earlier this week, we spoke with Greg Martin, VP and General Manager of Security at Sumo Logic and Code42’s own Mark Wojtasiak – officially kicking off our Code42 Live Community series. Following last week’s primer on defining Insider Risk, we thought where better to start than arguably the most critical part of any effective security program: context.

You can view the whole conversation linked below, but if you’re just looking for the highlights, read on for our top three.

1) Context comes in many forms

The most obvious form of context during an Insider Risk investigation is metadata. Knowing what files went where, when, and moved by whom is table stakes (and a bar too many solutions can’t clear). Context goes deeper, however, extending to organizational knowledge about things like “when are users in each department expected to work?” and “what types of files do line-of-business managers consider important?” Having answers to these questions falls under the umbrella of “context” that security teams need in order to solve their organizations’ Insider Risk problems.

It’s necessary to treat individual users and departments differently based on their job roles. For example, if a salesperson were penalized for uploading a slide deck to a prospect’s Dropbox, that would restrict their ability to do their job. If that same salesperson suddenly started uploading scads of files to GitHub, however, there could be a problem.
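To make this concrete, here is a minimal sketch of role-aware risk scoring. Everything here, the role names, destination categories and scores, is a hypothetical illustration, not any product’s actual API: the point is simply that the same action maps to different risk depending on who performed it and where the data went.

```python
# Hypothetical context table: the same file movement scores differently
# depending on the actor's role and the destination.
RISK_BY_ROLE = {
    # (role, destination_category) -> baseline risk score in [0, 1]
    ("sales", "customer_file_share"): 0.1,   # expected: sharing decks with prospects
    ("sales", "source_repo"): 0.9,           # unusual: salesperson pushing to a code repo
    ("engineer", "source_repo"): 0.1,        # expected: engineers use repos daily
    ("engineer", "personal_cloud"): 0.8,     # unusual: code headed to a personal account
}

def score_event(role: str, destination: str, default: float = 0.5) -> float:
    """Return a context-adjusted risk score for a file movement event.

    Unknown (role, destination) pairs fall back to a neutral default,
    signalling that the organization hasn't defined a tolerance yet.
    """
    return RISK_BY_ROLE.get((role, destination), default)

# Same action, different context, different risk:
print(score_event("sales", "customer_file_share"))  # 0.1 - doing their job
print(score_event("sales", "source_repo"))          # 0.9 - worth a look
```

In practice the table would be populated from exactly the line-of-business conversations described above, rather than hard-coded.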

2) With great context comes great responsibility

This one is true on two different levels. First, no one on a security team wants to know more than they need to. Information is power, and knowing more about your co-workers than is strictly necessary to get the job done is creepy. So it’s important to collect the proper amount of information while keeping it restricted to things that are pertinent to the business (knowing where corporate data is going) and not needlessly intrusive (monitoring keystrokes or constant screen sharing).

Second, from the perspective of the technology provider, it’s incumbent on us to collect and correlate context in a way that minimizes the need for human intervention. The context about line-of-business needs gleaned from the conversations described in the previous section can be used to machine your analysts’ intuition into the technology you use.
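As one illustration of “machining intuition,” here is a toy sketch, purely an assumption of how such a rule might look, that encodes an analyst’s gut feel of “this user is uploading way more than usual” as a per-user baseline with a standard-deviation threshold:

```python
# Illustrative sketch (not any vendor's actual implementation): flag an
# upload count that sits far above a user's own historical baseline.
from statistics import mean, stdev

def is_anomalous(history: list, today: int, threshold: float = 3.0) -> bool:
    """Return True if today's count is more than `threshold` standard
    deviations above the user's historical mean upload volume."""
    if len(history) < 2:
        return False  # not enough context to judge yet
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return today > mu
    return (today - mu) / sigma > threshold

uploads_last_ten_days = [4, 6, 5, 7, 5, 6, 4, 5, 6, 5]
print(is_anomalous(uploads_last_ten_days, 6))    # False - within normal range
print(is_anomalous(uploads_last_ten_days, 60))   # True - sudden spike
```

Real products would correlate many such signals (file type, vector, working hours) rather than a single count, but the shape of the idea, baseline plus deviation, is the same.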

3) Focus on MTTD and MTTR

There’s a common belief that “there are two things you can’t out-engineer: carelessness and mal-intent.” You are never going to stop data movement or the risk it creates; it is inherent to doing business. You can, however, focus on remediating harm. Organizations should prioritize decreasing Mean Time to Detect (MTTD) and Mean Time to Respond (MTTR) over trying to prevent everything. Humans are, and will continue to be, chaos agents. Focusing on anomalies in that chaos will allow you to identify when there’s a divergence that needs to be addressed.
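For teams that want to track these metrics, the arithmetic is simple. Here’s a minimal sketch (the incident records and field names are illustrative assumptions) of computing MTTD and MTTR from incident timestamps:

```python
# Minimal MTTD/MTTR calculation from hypothetical incident records.
from datetime import datetime

incidents = [
    {"occurred": datetime(2021, 5, 3, 9, 0),
     "detected": datetime(2021, 5, 3, 11, 0),
     "resolved": datetime(2021, 5, 3, 15, 0)},
    {"occurred": datetime(2021, 5, 7, 14, 0),
     "detected": datetime(2021, 5, 7, 14, 30),
     "resolved": datetime(2021, 5, 7, 18, 30)},
]

def mean_hours(deltas: list) -> float:
    """Average a list of timedeltas, expressed in hours."""
    return sum(d.total_seconds() for d in deltas) / len(deltas) / 3600

mttd = mean_hours([i["detected"] - i["occurred"] for i in incidents])
mttr = mean_hours([i["resolved"] - i["detected"] for i in incidents])
print(f"MTTD: {mttd:.2f}h, MTTR: {mttr:.2f}h")  # MTTD: 1.25h, MTTR: 4.00h
```

The hard part in practice is not the math but getting an honest "occurred" timestamp, which is exactly where the context discussed above comes in.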

For the full readout of what we discussed in the session, check out the transcript below or watch the video here:

Tune into the next session of Code42 Live on May 25th at 12PM CT to hear a discussion about how to start an Insider Risk program. For anything else, feel free to reach out to Greg, Mark or me on LinkedIn and we’d be happy to chat.


Video Transcript:

Riley Bruce – Hi everyone, and welcome to Code42 Live. My name is Riley Bruce and I’m the Security Community Evangelist here at Code42. Joining me today to talk about How to Improve your Insider Risk Posture is Greg Martin, the Security General Manager at Sumo Logic, and Mark Wojtasiak, the Head of Security Product Research at Code42. For those of you who are joining us at code, please use that chat box, I’m gonna say, liberally, so that we can get lots of conversation going and answer the questions that you would like answered about this thing that’s called Insider Risk. So who am I? My name is Riley Bruce. I said I’m the Security Community Evangelist here at Code42. It’s my job to engage with y’all and make sure your questions are answered. So the first question that needs to be answered today is, who are these other two human beings that I have off to my, well, it’s my right, but it’s gonna be to the left side for all of you or something like that. It’s nice to deal with backwards video. So first things first, Greg, if you could tell everybody who you are and what you’re up to, what you’re interested in, and maybe a fun fact about yourself.

Greg Martin – Thank you. First of all, I appreciate you guys having me on. I’m Greg Martin, I run our security business here at Sumo. I’ve been on board for about a year and a half, since my company JASK, which built security analytics, was acquired by Sumo, and now it is the basis of our cloud SIEM product. So we have one of the fastest growing next-generation SIEMs in the market. It’s delivered as a cloud-based SaaS product. And that’s our business here at Sumo, and we’re really excited to be on with you guys. Fun fact, I started my career as a hacker for the US government. So a lot of fun stories that I love to share over beers.

Riley Bruce – Does this count as over beers? Because I can go grab, I’ve got some nice NA beer to use for that purpose.

Greg Martin – You know, COVID’s made it difficult, but obviously if you work for the government, they don’t love you talking about that experience on camera, so.

Riley Bruce – Thank you for that, Greg. If you have questions for Greg or for any of us, enter them into the chat box at For those of you who are joining us on YouTube or LinkedIn, welcome as well, we’re glad to have you, but I’m not gonna be able to keep up as adequately with those questions and comments. So if you would like something answered on the stream, go to I’m gonna keep saying that, keep plugging it. So Mark, we heard what Greg’s up to. If you could do the same and just tell us who you are.

Mark Wojtasiak – Hey everybody. First of all, thanks Greg for joining us. I think I’ve done a few of these with you, Riley, and I feel like your sidekick. My name is Mark Wojtasiak; people affectionately refer to me as Woj, because of that last name spelling. I lead Product Research and Strategy at Code42, largely market research in the security space. I’ve been at Code for a little over five years now. And I think for the past four years, we’ve been centering a lot of our research on the problem we’re talking about today, insider risk, and I’ll let you, Riley, steer the ship on what you’re gonna hammer Greg and me with on that front. But fun fact. Well, I’m a huge Metallica fan, if you see my background, that probably dates me, and a Chicago Cubs fan. But also I spent my COVID summer with our CEO, Joe Payne, and our CSO, Jadee Hanson, writing a book that’s over my left shoulder. I don’t know if it’s my right or left on screen, but Inside Jobs. So, a lot of what we’re probably gonna be talking about today is in that book. It’s kind of a summary of a lot of the research we’ve done over the past four years and some general practices that businesses can take to help manage the problem. So, yeah, that’s me, this’ll be fun.

Riley Bruce – Awesome. Yeah, I’m really excited to get to the questions. Obviously we have some that are prepared, but also, if anybody wants to chime in in the chat box, I know that we did have a burning question in the chat from, well, I’m gonna say from myself here, and that’s for those who are logged in: what is your favorite Ninja Turtle? So that’s the burning question, but the first question for both of you right now is: what is Insider Risk? And how has it historically been dealt with? Mark, I’ll start with you, because obviously this is gonna be more inside the bailiwick of Code42, but then Greg, I think you’ll definitely have some important insights on how historically this problem, and the problem of data security, has been dealt with by organizations in general. So Mark, we’ll start with you there.

Mark Wojtasiak – Yeah, insider risk, as we see it, is any event or activity or behavior that exposes corporate data, exposes it to leak, theft, breach, whatever it might be. One thing in our view of the insider risk problem and the research that we’ve done is, intent doesn’t factor into the equation as much as one would think. We look at data risk as data risk, and how it is manifesting inside the organization. We see insider risk largely as a problem that’s been growing over the years, due in large part to the way organizations are operating, and they’re operating at very high speeds: collaboration technology, productivity technology, cloud, email, you name it. When you think about what security is up against from a data risk perspective, there are literally thousands of endpoints in an enterprise, right? Connected to thousands of users with access to millions of corporate files, right? And literally hundreds of destinations those files could go to, whether those destinations are sanctioned by the company or largely unsanctioned, and that’s risk, and that’s corporate data leaking every day. So, how do you begin to wrap your arms around the magnitude of that problem and begin to manage it in a way that takes into consideration security resource constraints and budget constraints and all these sorts of things, without ever slowing down the business? So, how it has been dealt with-

Riley Bruce – And I think that’s actually a great time to bring Greg in, because Mark did just ask a question: how do we deal with that problem? Or how has this historically been dealt with from a technological perspective? Obviously, Greg, with some of your history, you were almost on the opposite side of that equation, trying to help data leave rather than help data stay. But that’s really important; you need to be in the headspace of the protectors to be able to outwit them.

Greg Martin – If you’re gonna start a good security program, you have to have at least the context of how the bad guys operate. So, that’s why red teaming and purple teaming are so effective. But to get to the original question of what is insider risk? I concur with Mark. In this new world of moving digitally, transforming to the cloud and all our IT assets becoming digitized and moving to the cloud, especially accelerated now due to COVID, with everyone working from home, or working from anywhere really. When you look at what’s changing, doing cybersecurity and reducing risk in this new environment focuses down on really just the people, the users, the data and the applications, right? Because everything’s changed: the security controls have changed now that we’re in the cloud, the way that IT is being run, the way that we log in and connect to these systems has all changed. So those are the three focus areas. And when you talk about insider risk, you’re talking about the importance of those users and that data, so that’s such a huge portion of our security footprint today. And, how are we doing it today effectively versus in the past? In the past we didn’t do such a good job, in my opinion; we didn’t have the tooling to provide the right context. If you go to any really large SOC, whether it’s a government or federal SOC, like they have at NASA or NSA, or you go to the top security operations centers in some of the Fortune 500, and you ask them what matters most, they’ll consistently tell you context is what matters. Context on what’s happening with the data: who’s accessing it, why is this data important, where does it live, where should it live, where is it going? All of this requires the right instrumentation to provide that context, so ultimately we can understand what’s going on and where to look around the corner.

Riley Bruce – That’s a fantastic lead-in, actually, to our next question, which is: you mentioned that context is so important and that it really does help, and I’m gonna put some words in both of your mouths here a little bit, but it helps tell the story of what’s actually happening versus having to basically guess at things that historically security teams wouldn’t have had any visibility into. Whether that’s things that are happening off the network; obviously I know that hybrid work is going to become more and more of a thing. Obviously all of us, as the viewers can see, are not working from corporate offices right now; we are working from places that we got to decorate, which is nice, but that also comes with a whole host of security challenges that we didn’t have to deal with en masse really until March of 2020, when it was like the switch flipped overnight. And now, as we’re trying to take the hemostat and roll it back slightly. What is that low-hanging fruit that security teams should be focused on, that maybe they’re not right now, from the perspective of trying to pay attention to insider risk, insider threat? Greg, since you just finished up, I’ll throw to Mark.

Mark Wojtasiak – Oh, sure. Low-hanging fruit is, I think that the tech exists, like with what Greg was describing. And I think a lot of what Sumo does, in combination with what Code42 does, is provide that context. I think in the past we relied really heavily on this notion of identify where all your sensitive data is, then classify it, right? Tag it and then write a policy to prevent its movement, right? Or prevent it from going somewhere that it shouldn’t. As Greg eloquently described, with this movement to cloud, and remote workforce and all these sorts of things, that’s just not feasible anymore. But what is feasible is harnessing the context and signal that already exists within files, vectors and users. I think Greg said data, users and applications. We think exactly the same way. So when we think about the notion of monitoring everything, right? All files, all users, all vectors, all destinations that file may go to or that user may interact with. Within that signal, there is a ton of context, everything from file metadata context, to user behavior context, to whether that destination is trusted or untrusted, whatever it may be. And when you begin to correlate that, what you end up with is a very strong, high-fidelity signal of what is risk and what isn’t, right? And what is material risk to the business and what isn’t. And when you have that contextual detection or contextual prioritization, the workflow of security becomes much more streamlined. Now you’re talking about what’s the right way to respond to that, what’s the right way to remediate it and contain it, and things like that, and just getting smarter and smarter about how we manage the risk, versus relying so heavily on an old, outdated control which is just, when in doubt, block, ’cause we know users are gonna work around that. They already are.
The last year-long experiment of remote work has proven that there are all kinds of workarounds for them to get their jobs done. And that’s, at the end of the day, all they’re trying to do. I can’t hear Riley.

Greg Martin – Yeah, Riley, I think we lost your audio, buddy.

Riley Bruce – I’m back. Yeah, yeah. I was just gonna say that, that only adds to the problem, obviously-

Greg Martin – I thought you were using profanity with me.

Riley Bruce – Oh, no, I can if you would like, although that would have to be the “after stream” for those who are joining us. Mark, I wanna touch on a point that you just made about material versus immaterial risk in a follow-up question here in a minute. However, Greg, I definitely wanna hear what you, and by extension obviously Sumo, are thinking about when it comes to what is the low-hanging fruit to better help organizations in this hybrid work type of scenario protect their data.

Greg Martin – I think you have to start with the mindset that really, the world’s changed very fast, and to protect your cloud security workloads in this new SaaS-ified IT environment that we have, we really have to think differently, start from the ground up and revisit how we do security at the holistic level. The joke that I always tell is that you can’t just put your FireEye appliance in a box and ship it to AWS and say, “Hey, can you please install this for me?” That’s not gonna work anymore in this new world. So we have to take a look at, how do we protect these new workloads, which are very different than the workloads of the on-premise data centers of the past? What are the security controls to protect against data leakage, to protect against insider threat, to monitor these workloads? Because the problem is just as evident here as it was before. It’s not just your employees sending files, whether on purpose or by accident, out of the network, or leaving a laptop in a car and losing records. Now we’re talking about unsecured S3 buckets and Azure blobs. Really, there’s a vast number of ways that data security and data privacy are being compromised now. And often it just comes down to somebody making a mistake. Really, often the insider threat is not malicious; it is just mistakes being made by humans that have a lot to do. And let’s be honest, we make mistakes, right?

Riley Bruce – I sure do. But those are all stories for another time. I am only ever perfect when we’re streaming. But with that said, I think that really does highlight the reason that, at Code42 at least, we conceptualize insider risk as being distinct from insider threat, because the threat implies not only intent but mal-intent explicitly. And so what’s really important is paying attention to the fact that most people do want to do the right thing. And I’m gonna try and synthesize everything that the two of you were saying here into one thought. The low-hanging fruit is often just making sure that you have enough information to be able to discern what happened, either through communication with the user after the fact or very close to the point in time that it happens. You have that evidence of what happened, but then you can talk to them (talking to people, who knew that was a thing you could still do in 2021) and figure out whether something happened on purpose or was an accident, and why it happened, to then be able to tune your processes and tools to make sure it doesn’t happen in the future. So going back to something that Mark kind of touched on and that we had queued up in our questions, and thank you all for joining us, those who are joined live at as well as LinkedIn Live and YouTube Live. Thank you. If you’d like to join the conversation in the chat, go to and pepper us with questions. There are a few comments about Ninja Turtles so far; however, we would love to see any questions that people have about this conversation as well. The thing that Mark touched on just a minute ago was the material risk to the business versus something that maybe isn’t.
I know that there’s a lot of difference between me going to my personal Yahoo account, as an example, and uploading a picture of my dog (who you can’t really see because she’s in a frame behind me here) versus uploading maybe, like, a Salesforce export. How does an organization define a risk that matters versus just someone doing their job, or someone doing something that’s personal with their own data? And I will go to, who wants to go first? Mark or Greg?

Mark Wojtasiak – I can take a crack at that one. The problem you’re describing is something that we have been working on and building over the past year, or couple of years; we call it insider risk management. This framework is for answering that very question, and it starts with, and I think organizations struggle with this, just getting a good grasp on where you have risk, right? In this new world order, do you have an organization-wide view of where data exposure exists, right? What does your shadow IT footprint look like in this new world order? What does your use of personal applications or thumb drives look like? What does your posture look like when employees leave the organization? Third-party risk, any number of things to measure your starting point. That’s a great way to have conversations inside the organization to really help security define the company’s risk tolerances. Because we contend, and we’ve talked to a number of customers about this, that if you talk to the chief technology officer, that person’s risk tolerance is probably gonna be different than the chief marketing officer’s, right? Or a leader in marketing, in terms of what they deem as sensitive files or what’s important to them. And this is where the notion of security partnering with lines of business comes in: talking to lines of business and really understanding what they will and will not tolerate. Maybe marketing tolerates Box usage, even though we’re a Google house, but the R&D team, the developer team, will not tolerate Box or any vector or destination other than corporate-sanctioned ones. So, understanding those tolerances will help define what we call insider risk indicators. So, the first question I was talking about: what’s the signal that’s trapped inside of these files, vectors and users?
What context can you pull from it? That context, when put in sequence or in certain orders, gives you insider risk indicators: “Okay, if source code moves to a thumb drive, the risk tolerance is very low, ’cause source code is very valuable to a software company, et cetera, et cetera. I want to deem that a material risk to the business, therefore my response to that will be X, right?” Versus, “Oh, Mark in marketing has an open file share of a public presentation he’s working on for RSA on Google Drive.” They’re not gonna tee off an investigation on me and start building a case. They’re gonna Slack me and say, “Hey Mark, this is an open file share, is it intended to be? If not, please restrict it to Code42.” So that’s the whole, “I’m just doing my job. I’m not a threat.” It’s the context behind the data and the risk tolerance that the business has that will help discern between, “Hey, what is a material risk?” and, “You know what, these are just employees trying to do their job, that we just need to nudge back in the right direction.”

Riley Bruce – And that really does bring us back to everything you were just talking about: the conversations that you’re having with those line-of-business leaders and the file metadata that you’re getting. All of that information is just a greater example of context, more and more context to be able to understand the nuance in each situation. Having good data inputs is part of being able to respond and protect the organization from risk. So, Greg, I wonder if you have any additional perspectives to add here about how you define something that is a risk that matters versus someone just doing their job.

Greg Martin – Yeah, I think a lot of organizations, depending on their level of maturity and their security policy and processes, have either not done this work yet or have done it to some degree, where they’re actually labeling systems, files, data shares and users with risk levels. The more mature organizations will have done this work; some of the smaller shops just don’t have the talent capability to be able to do this. So in those cases, leaning on some tools for automation can be very helpful. But I agree, context is the most important. You have to have visibility; otherwise, how do you know that a threat or something potentially dangerous is happening in your network? Then once you have all that context, how do you sort it out and make sense of it? And that’s where Sumo comes in with our types of products, where you’re bringing that data in, streaming it, providing it to an analyst and then filtering the data that makes the most sense, taking advantage of things like SOAR technology to run playbooks and automation, bit of a shameless plug here. But with context comes a responsibility to take action on that data, and you have to complete that loop of the security investigation. That’s why at Sumo we love working with guys like Code42, because you bring these two solutions together: context, automation, analytics. It’s a really powerful solution for a lot of organizations. But before you get there and invest in pure technology, you have to do the basic hygiene. You have to start at least collecting a footprint of what your assets are. Like Mark mentioned, what is my dark IT? You have to have a method or a process for scanning and identifying what’s out there running in the first place. And then you can get to the classification, where you start to put criticalities on the data and the files themselves. And I think security hygiene is where most organizations fall down and typically have trouble when you’re going backwards and asking, why did this breach happen?

Riley Bruce – Yeah, that’s very true. And then on top of that, there’s the fact that a lot of policies are written such that unless there’s a violation that raises a flag, it never happened, right? So that’s obviously something that’s sub-optimal and that you never want, but that kind of leads us into our next question here, which is: as we’re collecting more and more data, as we’re trying to have more and more context and that becomes the focus, how do you balance the ingestion of all of that data against the limits of human beings, and things like fatigue when it comes to alerts, or too many flags getting raised? And, Mark, I’ll go to you first. Where should we start with that balance?

Mark Wojtasiak – Well, I think that’s what both Greg and I have been alluding to since this Code42 Live started. When you think about context, what you get from file, vector and user, the more correlation you do, the more you’re going to surface the risks that matter, right? Prioritization, right? Think about what an analyst is up against and the sheer number of alerts and logs coming in from a data loss protection perspective; how do you help them sift through that? They shouldn’t have to sift through that, right? If the Code42s and the Sumos of the world, if our tech actually works, we should be surfacing the risks that matter, so long as the technology understands what the organization will tolerate and not tolerate, right? So giving the contextual detection to the analyst so that they know exactly what they need to follow up on and what they need to investigate. Let the tech, if it’s really designed to do what it’s supposed to be doing, it should be triaging, right? It should be doing the triage and then saying, “Hey, based on your policies and governance that you’ve established, and what you tolerate and don’t tolerate, here are the things that are outside those bounds, go dig into these,” right? Kick off an investigation or, in Greg’s case, kick off a SOAR playbook. If you have enough context in the alert itself, that could automatically trigger some sort of automated response like a Slack message, or a nudge, or a security awareness training, or whatever it might be. For the more severe instances or events, it may trigger the analyst to do a deeper investigation, start building a case, start looking back in time, start looking at what this employee was doing and what the intentions are here. They may find out they’re a departing employee or they just put in their notice.
So they may find different attributes or behaviors of that employee that take an entirely different type of remediation, right? Or escalation. So, we always say monitor everything, right? We don’t necessarily rely on classification tags or privileged access management; that’s just additional context to us, but it’s not necessary if you monitor everything and the technology, whether it’s Code42’s or Sumo’s, is designed to be able to do the correlation, right? And, based on what that organization deems risk, tell the analyst what they need to focus on and where they need to do their work. Joking aside, we’ve talked to a number of security analysts and we say, okay, you’ve got a sea of alerts coming in, thousands of them. How do you do it today? And they say, “It’s just intuition. We can just look and see what is risk and what isn’t.” Well, that’s burnout when you have thousands of them. So how does tech like Code42 and Sumo begin to machine their intuition? What is intuitive to you, and how do we begin to correlate that and give you the alerts that you know you need to follow up on?

Riley Bruce – Yeah, that’s, the short answer there is that hopefully you’ve got good technology and good people that together can kind of be synergistic. But Greg, I am assuming, or I expect you will have some good insight here as well.

Greg Martin – Now this is a topic I’m very passionate about. Look, we don’t have enough skilled cybersecurity workers. And, like Mark referenced, a lot of them, when given this type of work, will burn out because it’s so tedious. So, I’ve been saying this for a long time now: security operations is no longer a human-scale problem. There’s too much data, too many alerts, too much to determine. Is this a false positive? False negative? We have to start leaning on technology and feeding this to the machines to process, so we can really scale this and allow our best defense, our human cybersecurity professionals, to have the time, the context, to be able to focus on the threats that matter and not do this type of level-one triage work. So, I think you hit the nail on the head. I think we have to lean on technology. The great thing is that correlation today is much different than correlation in the past, where it was just a basic standard rule. Now we have behavioral detection technology. We have all different types of anomaly detection, machine learning based or otherwise, that can really help move the needle on this correlation, bringing crowdsourcing of intelligence, either from internal or external sources, to help level up the success of that correlation. And that’s one of the great things about some of the newer, next-generation technologies like Code42’s, like Sumo’s: they take advantage of all of those innovations in the space, so you’re not just stuck sifting through a mountain of false positives where you have one or two things that, “Okay, this is really interesting.” So, the technology has to work and it has to be deployed and have the right data feeds, but beyond that, picking the right, more modern tech, hopefully it comes as SaaS, so you don’t have to keep it up and running and patched, it just patches itself with the latest updates. All of these things matter today to these organizations.

Mark Wojtasiak – Riley, can I pile onto that just a little bit?

Riley Bruce – Go for it.

Mark Wojtasiak – I think Greg alluded to this earlier. The organization has moved a thousand miles per hour forward, right? With digitization, right? Whether it’s a digital business or digital transformation, you’ve got the entire organization rooted in speed, right? Time to market, time to value, time to innovation, time to everything, right? But we would argue that security has been left in the dust. And I don’t know if it’s budget constraints, or lack of investment, or that they’re reactive in nature versus proactive. There’s any number of reasons why, but there needs to be investment, re-investment, in security. And Greg used the words “from the ground up.” You cannot solve this problem with technology that’s 10 to 15 years old. You can’t force-fit on-prem tech to deal with a cloud problem, right? You have to rethink how you’re doing it. And is it a zero-trust approach? It depends. But we gotta acknowledge the fact that when it comes to protecting corporate data, you have to completely rethink the entire approach. You’re burning security analysts out. You’re wasting money on taking a compliance-first approach that says, “Well, the compliance standards say we must block. Therefore we gotta put in a DLP to block, and I’m not getting value, but…” So, we have to rethink that as an industry to help security teams move at the pace of the business.

Riley Bruce – Well, yeah, but part of that is just focusing on the proper problem, right? Rather than trying to make the compliance officer or the auditors just kinda sign off, “Yes, this is okay, this is good to go,” it’s actually focusing on how you protect the organization in a way that actually can start to solve the problem. Because obviously the problem of data leaving your organization is never going to be 100% solved. Sorry to anybody watching who thinks that that’s true. However, you can manage it, and you can try and focus more on that problem as a problem rather than just compliance. Greg, sorry, I may have cut you off, if you had anything else you wanted to add there.

Greg Martin – We could just put our heads in the sand and pretend like it’s not gonna happen, but it’s the same thing on the threat side, right? You cannot keep a determined adversary out of your network, just like you cannot keep somebody from downloading your Salesforce data and sending it out. But how fast can you detect that? How fast can you track that down, deal with that problem, and hopefully minimize the damage of that data leakage? I think, really, measuring your success in that mean time to detect, mean time to respond, that’s the way the most successful, modern security operations organizations are now thinking. And they’re measuring themselves, they’re testing themselves. And I think it’s really cool to see. Yes, there are a lot of organizations that are behind the curve, behind the eight ball. They have a long way to go to get there, but there are great organizations like Code42 and Sumo that have a lot of experience and will help you get there. And it’s a journey, not a race.
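As a back-of-the-napkin sketch of what “measuring yourself” on those two metrics can look like, here’s a minimal Python example. The incident records, timestamps, and field names are made up for illustration; a real program would pull these from a SIEM or case-management system.

```python
from datetime import datetime

# Hypothetical incident records: when the data exfiltration occurred,
# when the security team detected it, and when it was contained.
incidents = [
    {"occurred": "2021-03-01 09:00", "detected": "2021-03-01 11:30", "resolved": "2021-03-01 14:00"},
    {"occurred": "2021-03-05 13:00", "detected": "2021-03-05 13:20", "resolved": "2021-03-05 15:20"},
]

def hours_between(start: str, end: str) -> float:
    """Elapsed hours between two 'YYYY-MM-DD HH:MM' timestamps."""
    fmt = "%Y-%m-%d %H:%M"
    return (datetime.strptime(end, fmt) - datetime.strptime(start, fmt)).total_seconds() / 3600

# Mean time to detect: average gap between occurrence and detection.
mttd = sum(hours_between(i["occurred"], i["detected"]) for i in incidents) / len(incidents)
# Mean time to respond: average gap between detection and containment.
mttr = sum(hours_between(i["detected"], i["resolved"]) for i in incidents) / len(incidents)

print(f"MTTD: {mttd:.2f} hours")  # -> MTTD: 1.42 hours
print(f"MTTR: {mttr:.2f} hours")  # -> MTTR: 2.25 hours
```

Tracking these numbers over time is what lets a team test whether a new tool or process is actually shrinking the window of damage, rather than relying on gut feel.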

Riley Bruce – Yes, it’s a marathon, not a sprint. So, that was the end of sort of the canned questions that we had. If anybody has a lightning round question that they wanna throw into the chat, go ahead and do that now. But while we’re waiting for that, I had one that’s one of my favorite questions to ask people. And that is: what is your favorite story of data exfiltration or insider risk that you’ve ever come across, either experienced or read about, whether it’s humorous or just a particularly elegant way of doing things? Greg, since I’ve been going to Mark first every time, if you’d like to jump on this first, go for it. Otherwise I will make Mark my victim again.

Greg Martin – Yeah, look, everybody talks about Snowden and Snowden and Snowden, ’cause it was probably one of the most interesting in terms of what was leaked. Everybody always wanted to see what the US government was truly capable of from a cyber perspective. But I think I’ll talk about something a little different. I always remember being in Silicon Valley when Uber hired the top engineer from Google’s Waymo self-driving division, and he took all the secrets and brought them with him to accelerate the development, and that played out in court. To me, that was a very classic, caught-red-handed insider threat story, where you’re dealing with some very sophisticated IP that had millions upon millions of dollars invested in it. So anyway, that was what I thought was very interesting.

Riley Bruce – Yeah, that is a particularly intriguing story, especially if you look at the way that that shook out toward the end of last year, the culmination of that whole process. So Mark, I guess, what is your favorite story of data exfiltration or insider risk?

Mark Wojtasiak – We have a number of them in the book “Inside Jobs,” and we were doing a ton of research around these. Greg mentioned probably two of the most infamous, famous, notorious insider threat cases. The ones I find humorous and scary at the same time are the everyday people, right? So you’re always gonna hear in the news the stories of the engineer that stole the source code. And I think people immediately jump to, “Well, that engineer is tech savvy. They can work around the DLP policies, and they figured out loopholes and gaps, and they were able to sneak through 14,000 files, et cetera.” But the funny-scary stories are the ones where an employee that works in human resources or marketing or sales or whatever it might be sends a file to their spouse because they need help setting up a pivot table in Excel, right? Or a filter or whatever. Little do they know that file contains every employee’s social security number, and what have you. Now, is that a breach? Technically, no, it didn’t get out, but that’s what scares me the most, ’cause that stuff happens all the time. Every day it’s flying under the radar and the tech isn’t catching it. And we often hear about these cases after they happen, months after they happen. So it’s not always the engineer that knows how to work around IT and security tech; it’s the everyday employee that’s, again, just trying to get their job done. And that’s why I keep saying risk is risk, regardless of the intent. Because if you have a super sensitive file that is subject to a regulation, or it contains PII, PCI, whatever, the fact that it was leaked accidentally versus stolen, does it really matter? It’s a data leak, and it needs to be identified and contained as quickly as possible.
And I just don’t think a lot of organizations, whether they’re burying their heads in the sand or have blinders on, know the magnitude of how much it’s happening.

Riley Bruce – Yeah, because the tools that are in place aren’t built to even tell them, right?

Greg Martin – And so that’s part of the problem.

Mark Wojtasiak – Yeah, and on top of it, Greg said this just in his last comment about it being a journey. With tech like Sumo’s and Code42’s, it is a journey. We talk about this: what is your insider risk posture today? What does it look like? As a security organization, you owe that to the C-level, the board, the business: “This is what our security posture looks like today from an insider risk perspective, with all of the tech we’ve put in and what have you. Now we have a plan to get it to X within the next three, six, nine, 12 months. In order to do that, we need to implement this technology, this process, these playbooks, et cetera, et cetera.” Now, imagine if six months later they measure risk posture again and they’ve improved it 33%, right? Or they’ve reduced risk data exposure by X percent. They’re probably gonna say, “Yeah, let’s keep doing that. Let’s invest more, let’s set the target. By the end of 2021, we’d love to be at 50% lower risk data exposure levels than we were at the beginning of 2021. There’s tech that can do that.” And I don’t know if enough organizations realize that’s what they need to do, and that they need to lean on the technology to do it for them.
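The percentages Mark mentions are easy to sanity-check. Here’s a tiny Python sketch of that posture comparison; the event counts and the choice of “risk-exposure events” as the measured unit are made up for illustration:

```python
# Hypothetical counts of risk-exposure events (e.g., corporate files moved
# to untrusted destinations) observed over two measurement periods.
baseline_events = 1200   # start of the measurement period
current_events = 804     # six months later, after new tech and playbooks

# Percent reduction relative to the baseline.
reduction_pct = (baseline_events - current_events) / baseline_events * 100
print(f"Risk exposure reduced by {reduction_pct:.0f}%")  # -> Risk exposure reduced by 33%
```

The specific metric matters less than picking one, baselining it, and re-measuring on a schedule, which is what turns “insider risk posture” from a gut feeling into something you can report to the board.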

Riley Bruce – Yeah, obviously we believe that that is the case, and that there aren’t enough organizations who are doing that. Thank you, both Mark and Greg, for joining us. There was one question that came in via the chat, and I’m gonna add a little bit of a caveat here for both of you: if you don’t wanna answer it, that is okay. But what’s the riskiest thing you’ve ever done, or that you’ve heard of a colleague doing, with corporate data? And it can be “heard of a colleague doing,” if you wanna go about it that way.

Mark Wojtasiak – Riskiest thing I’ve ever done.

Greg Martin – I’ll go first.

Mark Wojtasiak – Well, yeah, go ahead Greg.

Greg Martin – Yes, I once read my boss’s email, but I had basically just been asked to do a penetration assessment. And I thought it’d be pretty funny if I opened his inbox and showed it to him. He did not think it was very funny. So, I was just doing my job, I can’t be stopped.

Riley Bruce – Hey, I appreciate the honest answer on that. Thank you.

Greg Martin – I had a get-out-of-jail card, but it barely saved my bacon.

Riley Bruce – Yeah, I mean, as long as it’s signed, you’re good.

Mark Wojtasiak – Oh gosh, I probably don’t have something nearly as good, but I’ll be transparent. I wouldn’t do this today, knowing what we do with our tech at Code42, but yeah, I took corporate files from my previous employer. I had company-issued backup drives that I could load my entire laptop onto and take with me, right? Now, did I do anything with it? No. I mean, it was obviously PowerPoint presentations and public-facing stuff; it wasn’t like roadmaps and things like that. Back then, probably not as risky, right? But today, super high risk, especially working at Code42 and knowing that they know, if you’re a departing employee, what information you’re putting on a thumb drive, or emailing yourself, or uploading to the cloud, or whatever. And, I mean, let’s all admit we’ve all taken data from one employer to the next, no matter the intent. But yeah, look at Riley, he’s…

Riley Bruce – I have, yes.

Mark Wojtasiak – Riley’s been at Code42 for like 10 years, so.

Riley Bruce – Yes, I agree that the riskiest thing might be doing this livestream, as the graphic is telling us. But thank you both very much for joining us: Greg Martin, General Manager of Security at Sumo Logic, and Mark Wojtasiak, Head of Security Product Research at Code42. And thank you, everyone who joined us live, as well. We’ll be back in two weeks to start talking about how to start building your insider risk program. And you’ll be able to watch this archived on our YouTube and LinkedIn pages. But thank you all very much, and have a great rest of your day. Bye.
