Episode Seven - Paul Proctor

FBI Assistant Director Bryan Vorndran: Hello and welcome back to “Ahead of the Threat.” I'm Bryan Vorndran, the assistant director of the FBI Cyber Division. And as always, joining me is Jamil Farshchi, the chief technology officer at Equifax. Welcome, Jamil.

FBI Strategic Engagement Advisor Jamil Farshchi: Thank you. It's great to be here.

Vorndran: All right, well, we are back, and we have our Top Three before we get into our previously recorded episode. So today we're going to talk about UnitedHealth Group and their increased victim count. We're also going to talk briefly about DeepSeek. And then we're going to talk about the cyber incident reporting statute that's coming into effect later this year. It's been in the works for multiple years, but it's time to dust it off and understand what's coming.

So in terms of UHG: UHG announced earlier this week that they've increased the number of victims of the PII theft and other stolen information from the previous 100 million to 190 million. That's obviously a significant jump for UnitedHealth Group. It takes us back to about a year ago, when this occurred.

You know, why I told Jamil I wanted this in here again is because of the supply chain risk. UHG is integrated in so many different areas, not just domestically but globally. And those supply chain impacts, those third-party risks, continue to be, to me, the most important risk for organizations to really understand and deal with, but also a very complex one for those organizations to understand. Jamil, any thoughts on this?

Farshchi: Yeah, I'll take a different angle on this one and just provide some color commentary on the number increase. Look, when you go through stuff like this, depending on the amount of data, the complexity of the underlying infrastructure, the database schemas, and things like that, it is oftentimes a Herculean effort to do the data analysis on the back end.

And so, you know, I'm sure there's some folks who are like, oh my gosh, it was 100 million. And now, you know, how do you go up by 90 million additional… you know, records or individuals or whatever it might be? It's tough. It's really hard.

And as you go through these kinds of analyses, you've got to do a bunch of deduping, typically. You've got to be able to marry up to the database schemas. It's just a tremendous amount of work. And then you've got to make sure that it's all as accurate as it can possibly be, because you also don't want to go to the other end and overestimate the number, and then scare people that ultimately didn't get affected by it. And so, having gone through this stuff before, I'm empathetic toward UnitedHealth and the teams and investigators and analysts that have had to go through this; it's not a good situation for anyone. To me, it's just not surprising that the number has changed from the initial estimate that they put out there, because it's a very hard process to go through.

Vorndran: I certainly know people involved with UHG, both those that have been brought in as third-party providers and contractors, and people that are employed full time there in their technology space. And I'm sure Jamil does as well. They're really good people trying to get through a monumental problem set. And certainly their goal, I would assume and trust, is to notify people who are impacted, right? And so I applaud their transparency.

Secondly, we're going to talk about CIRCIA, right. This is the Cyber Incident Reporting for Critical Infrastructure Act that came out–it's amazing to me–almost three years ago, which means I've been in my job for quite some time.

But essentially what this does is create a mandatory reporting requirement to CISA, to DHS/CISA, for any material cyber incident suffered by any company or organization that's defined as critical infrastructure. CISA has been instrumental in doing the notice of proposed rulemaking over the past couple of years, as well as really collecting requirements.

But this is going to start coming into our space certainly within four or five months, and I think it needs to be fully implemented by September of this year. I can just say that CISA's been a tremendous partner to us throughout this process, because I think it's important that the FBI has access to the material intrusion information that CISA has access to, so that we can provide victims the services we have available.

But what it does, again, is add to the complexity of the ecosystem of reporting requirements. And I'm going to go to Jamil here in a second, because he's the guy that has to navigate this. But I was having a conversation with a CISO at a major multinational bank here in the United States. They have 102 different reporting requirements for cyber intrusions. Granted, that's for a global bank. But Jamil, how does this actually manifest in your space, and what does it look like trying to navigate it?

Farshchi: It looks similar to what I think anyone's mental picture would be of what you just described. When you've got this many different reporting requirements, it becomes a huge burden, and it slows down your ability to respond in the time of threats, because you're always second-guessing yourself: Do I need to do this? Or who do I need to report that to? Which is counterintuitive, and I don't think it serves the ultimate objective here.

I think we've talked about this before; the industry has talked about this for years now: we need to reconcile this stuff so we have clear paths, so it's easier. I think organizations want to do the right thing. It's just so dang hard to figure out what to do and what sequence to do it in. It's challenging.

The other point I would make here is that it will also be good to prioritize critical infrastructure itself. The scope and scale of what we consider critical is mind-blowing. And just like with any other problem space, big or small, unless you prioritize what really matters, you're probably not going to do a very good job or accomplish the goals that you have. And so I think we need to be more thoughtful, more practical, and more precise around what is critical infrastructure, and prioritize the improvements there.

As we talked about, I think, in our last episode, you know, with all the typhoons and stuff, there's just a ton of stuff going on here. There's a ton of requirements. There's a ton of targets. And we as a country really need to get our arms around how we're going to tackle this and where we're going to tackle it first, so that we can feel good that we've got those things handled and then move on to the next tranche of activities that we want to execute to protect ourselves.

Vorndran: Yeah. I just have two thoughts on that before we go to DeepSeek. You know, I sometimes joke that it's easier to talk about the companies or organizations that aren't critical infrastructure than those that are. When I talk about critical infrastructure with my partners, I'm really focused on what I would consider the core three or four sectors: communications, energy, and finance, because it's my opinion that without one of those three, this country doesn't go. Certainly, throw in there, from a prioritization perspective, the defense industrial base. And then I also look at who's under-resourced, right? I think that healthcare and education generally are very under-resourced and deserve some attention from us.

DeepSeek—DeepSeek has made the news lately. Jamil, your thoughts?

Farshchi: Man, this took the world by storm. And there's a million angles on this story. There's the low cost it took them to theoretically achieve some level of parity with some of the big boys here in the U.S. in the AI space. You've got the claims that they stole some of the data, and things like that. But the angle I'll take on this one is the concern around the utilization of the data, the query data that people are going to put in the prompts. I believe DeepSeek was the number one downloaded app in the Apple App Store last week or the week before, whatever it is. And that's a bit concerning.

You know, we talk about the fear; there's been a lot of talk around TikTok and things like that, and what these foreign governments could potentially do with all of this data. DeepSeek, in my opinion, would be a monumentally greater risk for our country.

But I think at an organizational level, we need to make sure that we have the right controls in place so that we're limiting the utilization of this stuff. I know a bunch of my peers I've talked to over the last couple of weeks have been seeing a flurry of activity, or at least attempts, from their user bases and workforces to try to use DeepSeek, because it's cheaper in some cases, and they want to explore it in others.

Be careful. Be careful, because a lot of the controls, a lot of the requirements, a lot of the restrictions that are in place to help protect you and your organization, the ones that have been applied at organizations like Google for Gemini or OpenAI for their models, do not apply here. So you need to be thoughtful. And while I'm not a huge fan of just saying no across the board, being scared of whatever the next new thing is and slowing down innovation and enablement, there are times when you've got to put your foot down and make sure that you've got the guardrails in place.

And man, when I look at DeepSeek and what's going on over there, this is one of those areas where I would strongly suggest we take caution versus leaning right in and just diving into a new player in this space that potentially generates a lot of risk for any of our organizations.

Vorndran: Yeah. And I mean, just to say, I think everybody knows this out there, right? But we do consider China the pacing threat, right? And they are an existential threat to the way we exist, especially economically. And knowing, and this has been reported consistently in open-source media, that there is hidden code in DeepSeek that allows user data from anywhere in the world to be sent back to the Chinese government should be alarming to all of us.

Well, that's our Top Three for this week. We are now going to get into a previously recorded episode with Paul Proctor at Gartner. Paul will now help us get ahead of the threat.

***

Vorndran: Joining us today is Paul Proctor from Gartner. Paul, welcome to “Ahead of the Threat.” Let's just start by giving our audience some background on yourself.

Paul Proctor of Gartner: Thanks, Bryan. Well, so I am a former chief of research for risk and security at Gartner. I started my career working on Orange Book security. So I've been doing security about 40 years, which makes me very old, but I've seen a lot of things. I spent 10 years with a defense contractor. Started my own company. That didn't work out as well as I would have hoped. Not on the resume is being rich! But anyway, I've spent 20 years as a Gartner analyst now, and I'll say more about that later.

Vorndran: Great. Well, the foundation of today's conversation between yourself, Jamil, and myself is that security is a business decision, and executive communications really matter. So why don't you just open with some background: What have you learned? What do you think are the key takeaways in this space?

Proctor: Well, so, you know, it's funny, I said I was going to say more about being a Gartner analyst. A lot of people have no idea what Gartner analysts do for a living. So let me explain. You don't talk to a Gartner analyst because we're smart or experienced. You talk to us because we talk to everybody. Each Gartner analyst has something like a thousand interactions a year. And from those interactions we learn an awful lot about what's working and what isn't.

Well, so with my 20 years at Gartner, that's 20,000 interactions. And mostly what people are doing is broken. There's a number of things that we see out there, a number of behaviors. Let me sort of summarize the failures, which is that when we spend money on security, we tend to buy stuff. And when we do that, we don't really pay attention to whether we got value from this stuff. So, you know, no CFO would ever say, ‘Let's spend $1 million on something called a SIEM,’ but they actually agree to do it because it bumps up your maturity a bit, it keeps the regulators off your back, and, well, your experts said you needed one.

But what we don't measure is what value we get out of the SIEM. You know, you can spend $1 million on a SIEM and get absolutely no value out of it. But we don't measure that. And as a result, we can't really explain the business value of the security investment. And that leads to the biggest problem of all.

We are completely disconnected from our executives, and that has led to poor levels of readiness. They basically count on us to make sure that we don't get hacked. But that's an impossible goal, right? You are going to get hacked. And if you have poor levels of investment, you have unacceptable levels of risk. So where we turned our attention, and I'll say a little bit more about this later, is to benchmarking protection levels. And the goal of that is to give the executives a connection, to understand what they've achieved, what they receive for the money, and then align that to different parts of the business.

And essentially, looking at security through a business lens gives us the ability to control our investment and create defensibility with our key stakeholders. And what this results in is everybody being elevated to a common level of protection, which is measurable and reportable. It just solves a bunch of problems.

Vorndran: So this product conversation is an interesting one. And, Jamil, I'll go to you in just a second, but I use this analogy that buying a Lamborghini only works if you have really nice roads and really good gasoline. Just buying the Lamborghini by itself, without those foundational pieces, is kind of useless. And it sounds like we're talking in similar language in cyberspace. “Broken” is a strong term. Jamil, you're an industry expert. What's your take?

Farshchi: This hits near and dear. It's near and dear to my heart, this topic. And I feel like a lot of the challenges that we consistently complain about as CISOs in this space stem from this exact problem. We love the shiny new toy. We love to implement stuff. We oftentimes don't take enough care around what the ROI truly is on these things. We are good at articulating the value prop, but in many cases, shoot, we don't even do a great job of operationalizing the tools that we do decide to purchase and roll out.

So, Paul, with all of that, you know all of this; it's one of the reasons Bryan and I really wanted to have you on here, because I thought we had a great discussion a couple of years ago when we went over this in depth. What are those core measurements that we should be looking at? And how should we view the world in a way that's more genuinely business-enabling, versus what we're doing today?

Proctor: Well, so the concept that we developed was something called “outcome-driven metrics.” Now on the one hand, they're metrics. But on the other hand, what they're actually doing is measuring a delivered protection level. That means that when the metric gets better, the organization is measurably better protected. And when the metric gets worse, the organization is measurably less protected.

So an example, a very common example I use in vulnerability management, is the number of days that it takes to patch critical systems. Now think about that for a second. That's a pretty boring measure, right? Except that it's what rules everything. Every single dollar of your investment in vulnerability management is to accomplish one goal, which is to reduce the amount of time that a vulnerability is available for exploitation.

So when you measure that one goal, it not only tells you what you've achieved for current investment, but it also informs future investment. Let's say you spend $1 million, all-in, to achieve 30-day patching, right? That's an outcome-driven metric. Knowing that you're achieving 30-day patching is essentially telling you what you've achieved for current investment.

Now this sets up a different type of conversation: Do we want to spend $2 million and do 15-day patching? Again, I'll take you back to the fact that you're measuring the thing that matters, which is the amount of time vulnerabilities are available for exploitation. But now I can start driving choices, and then I can align that number to different parts of my business.

Like any of you out there that are in manufacturing, you know your operational technology is not getting patched in 30 days while your internal systems in IT are, you know, sometimes being patched better than that. So now we can actually drive the investment to different parts of the business. And that's fundamentally how it works.
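The outcome-driven metric Paul describes, days to patch, tracked per part of the business and checked against an agreed target, can be sketched in a few lines. This is purely an illustration of the idea; the record fields, unit names, and PLA targets are hypothetical, not Gartner's actual methodology or tooling.

```python
from datetime import date
from statistics import mean

# Hypothetical patch records: (business_unit, vuln_disclosed, patch_applied).
records = [
    ("corporate-it",     date(2024, 3, 1), date(2024, 3, 18)),
    ("corporate-it",     date(2024, 4, 2), date(2024, 4, 30)),
    ("manufacturing-ot", date(2024, 3, 1), date(2024, 5, 15)),
    ("manufacturing-ot", date(2024, 4, 2), date(2024, 6, 1)),
]

def days_to_patch(records):
    """Average exposure window per business unit: the days each
    vulnerability was available for exploitation before being patched."""
    by_unit = {}
    for unit, disclosed, patched in records:
        by_unit.setdefault(unit, []).append((patched - disclosed).days)
    return {unit: mean(days) for unit, days in by_unit.items()}

# Protection level agreements: the patching window each unit's
# executives have agreed to fund (illustrative numbers).
pla_days = {"corporate-it": 30, "manufacturing-ot": 90}

for unit, actual in days_to_patch(records).items():
    status = "within PLA" if actual <= pla_days[unit] else "breaching PLA"
    print(f"{unit}: averaging {actual:.1f} days to patch ({status})")
```

When the metric improves, the organization is measurably better protected; when it degrades, that is the signal to revisit either the investment or the agreed target.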

Farshchi: So a lot of us struggle with board communications and executive communications. It would seem to me, based on what you just said, that a precursor to being more effective on that front, coupled with making better investments, would be that we're able to identify up front what those key measures are, what those key outcomes are that we're striving for.

Is that fair to say? And if so, how does what you've done simplify that and make it easier for all of us to apply?

Proctor: Well, I would say that it's more than just like what are the key measures? It's more about the approach. Because once you're measuring outcomes and you explain that to your executives and they understand it, then they start to care and understand what it is you're actually delivering. So let's compare this for a second to something that's commonly used today, just to keep the attention on the approach itself.

So let's talk about risk quantification. Very popular, very broken, right? Like we have to…

Farshchi: Say the least.

Proctor: We actually see this in our client base: risk quantification is very expensive to do right, and most organizations fail at it. Like I said, I have a thousand conversations in a year, and every time risk quantification comes up, it's always somebody that wants to get into it, right? It's not somebody doing it. And when you talk to the people that are doing it, it's usually, ‘We did it, and we abandoned it.’ So it's very expensive to get right. And even when you get it right, think about this: It doesn't actually inform any decision-making over how to spend your money well. And that's the sort of fine-tuned thing that we need.
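For contrast, the risk-quantification approach being critiqued here usually boils down to something like the following Monte Carlo sketch. Every input, the breach probability, the loss range, is an invented guess for illustration, which is exactly the point being made: the output looks precise, but it is a probabilistic estimate of things you don't control, and it doesn't tell you where to spend the next dollar.

```python
import random

random.seed(0)  # deterministic for illustration

def simulate_annual_loss(p_breach, loss_low, loss_high, trials=100_000):
    """Monte Carlo estimate of expected annual loss for one threat
    scenario. The result is only as credible as the guessed inputs."""
    total = 0.0
    for _ in range(trials):
        if random.random() < p_breach:                    # breach this year?
            total += random.uniform(loss_low, loss_high)  # severity draw
    return total / trials

# Invented inputs: 5% annual breach probability, $1M-$20M impact range.
eal = simulate_annual_loss(0.05, 1_000_000, 20_000_000)
print(f"Expected annual loss: ${eal:,.0f}")
```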

So coming back to the outcomes, once you actually have control of them, we also have this concept called a “protection level agreement.” Now PLA, once you're measuring say 30-day patching. Right? Well now you can come to agreement with your executives over the fact that we're going to deliver 30-day patching. And that's going to cost $1 million. And we… once you have agreement with the executives are agreeing to is a couple of very important things. One is they're agreeing that 30-day patching is an acceptable level of protection. Let me come back to that concept in a moment, because, some people will argue like, ‘but my CEO is never going to know if 30-day patching is the right thing or not’ But we'll come back to that.

So one is that they're agreeing that they will defend the fact that 30-day patching is the right number, but they're also agreeing that they won't cut the funding. Think about that for a second. What's one of the biggest problems in maintaining continuous security protection? Along come the funding cuts. They're either dumping money at us, saying, ‘Make sure we don't get hacked,’ or they're taking our money back, right?

Where's the consistency? That's what the outcomes actually fix. Because once they agree that we'll give you 30-day patching for $1 million, they can't come in and cut that in half; at that point the agreement is broken. It's a protection level agreement. So once they break the agreement, it's like, ‘I don't know what I can deliver, but I'll do the best I can.’

Now there is a dimension to this, though, that you need to consider. Once you have this type of control and visibility and it's understood by the executives, it is also fair that your CFO, your chief financial officer, will come in and say, ‘You know that 30-day patching that we're getting for $1 million? Could you do that next year for $800K?’ It's a completely fair conversation, but now it's a business conversation about security investment delivering a desired outcome.

Vorndran: So, Paul, from my position, with the FBI leadership role in the cyber program, this is kind of a new conversation to me, right? There are undoubtedly going to be people listening to this for whom this is a somewhat new conversation as well. When you think about the thousand conversations you have, what are two, three, four primary takeaways for organizations or executives in the space?

Proctor: Well…So the…what you mean? You mean one of the big takeaways we want them to have or one of the…

Vorndran: Yep.

Proctor: You know, okay.

Farshchi: They hate us. They're not happy with how we're performing.

Proctor: What this actually helps address is that executives have no idea what to ask for in security. You know, what I love is when somebody tells me, ‘Oh, I have no trouble getting money. My executives love me. Pretty much whatever I ask for, they give me.’ And I'm like, ‘That's a terrible thing.’

What they have done is handed the responsibility to you. So now it's basically on you to ensure that the organization is fixed. And we have way too many people in this profession who think that's a good thing. Well, the SEC disagrees. The SEC wants the executives to be connected and understand the levels of protection. So that's one of the big takeaways is if the executives aren't engaged right, then you're not going to have productive conversation.

Vorndran: Paul, how prevalent is that from your experience where executives aren't engaged at the right level?

Proctor: You're asking me? Oh, no, this is actually my day job. The executives have way too much of an expectation that, you know, ‘we hired you to take care of this problem; now take care of this problem.’ That's also why I bring up the SEC regulations, because they put the focus on the executives. This is a problem we've been trying to solve for years, but we're not solving it, because we still have the same behaviors, which rely on things like impact and likelihood.

Oh, right, I was picking on risk quantification. Let me go back to that for a second. Most security professionals will say, ‘Yeah, but I can't put it in a business context,’ which to them means dollars and cents. Like, ‘What's the probability of getting hacked now?’ Consider this: Impact and likelihood are probabilistic estimates of things we don't control. Can I produce a number? Yes. Can I spend a lot of money producing that number? Yes. Can I convince you that it's credible and defensible? Meh. But is it credible and defensible? Ultimately, we are doing probabilistic estimates of things we don't control.

By turning to the things that we do control, you're now talking about, again, the idea of measuring a protection level: I can deliver you this protection level for this amount of money. And you have control over that. Now, let me address something that you haven't asked, because this is probably the most common pushback we get on this.

‘But where's risk in this? Where's the connection to the business?’ On the one hand, I would say I just explained that: you measure these outcome-driven metrics and align them to different business outcomes, and that's how we're putting it in a business context. But the part they say we're not addressing is the risk to the business. And what they're referring to is impact and likelihood.

So here's the big transition, and I'll lay this out very simply. We're talking about moving away from impact and likelihood toward control effectiveness. Now, I have a metaphor for you. The threats are like a hurricane. They're coming at us all the time. So let's talk about real hurricanes for a second. How topical. Say you're a homeowner on the Florida coast and it's hurricane season, right? This is most of us in security: we know the threat environment is out there, and it's coming at us.

You have two options for controlling your investments in security. The first option is the one that we all use today, which is to go listen to the weatherman. This is the risk quantification: let's run very expensive, very fancy weather models, and that will tell us whether we're likely to get hit by a hurricane or not. But here's the reality of hurricanes: the real damage happens when you get hit squarely by the eye, and then, of course, there's an entire variation of what might happen to you.

So that's probabilistic estimates of things you do not control. Is it useful? Sure, to a level. Is the hurricane season going to be bad? Okay. But now let's turn to the second scale: What are you actually going to spend money on to protect yourself?

Think about hurricane readiness. Let's oversimplify this to the amount of wood that you buy to board up your windows. Right? Do you board up 50% of your windows, 75% of your windows, 25% of your windows? That is actually a measure of your risk, because if you get hit by a hurricane, you are certainly going to suffer more damage if you only boarded up 50% of your windows. That measure, that control effectiveness, is a metaphor in the physical world for what an outcome-driven metric is: we measure what we've spent and what protection we got. Boarding up 75% of your windows is more expensive, so you can certainly save some money if you board up 50%.

But you can also start making decisions. What about those windows on the back of the house that aren't directly facing the ocean, the ones that are maybe a little bit more protected? All of this is related to control effectiveness, and it's a balance between our protection level and our cost. So now that I have control of all of this, this is when people come in and say, ‘Yeah, but you're only measuring, say, the amount of wood that you're using to board up your windows. That's not about risk and impact. That's just about how much money you spent and how much of your windows you boarded up.’

And in the end, yes, it basically is how much of your windows are boarded up. But I want you to consider this as the kicker: How does the insurance company decide how much to charge you for insurance? You think they come in and look at the weather models? Nope. They look at one thing: How much wood did you buy to board up your windows? Your insurance rates are going to be lower because you have better protection. And the bottom line in all of this is that control effectiveness, as I've been describing it, is essentially just another scale for the amount of risk that you're experiencing.

The difference between this and risk quantification, sort of impact and likelihood, is these are measurable things that we have direct control over that support direct investment. Whereas risk quantification, again, they are probabilistic estimates of things we don't control.

Farshchi: No. Look, here's the difference between what you're talking about, going by control effectiveness versus risk quantification: control effectiveness actually works. I've done this. I am one of the people in the camp you referenced earlier that wanted to go down the risk quantification path. Shoot, the emphasis in my master's degree was decision science. Like, I love this space. I want more than anything else to be able to quantify the risk, use it as a prioritization basis, apply dollars to it, whatever.

But I've done this. I did it at Los Alamos. Some of the smartest physicists in the world, we all worked on this. We came up with something based on attack trees and all this other stuff. We had Monte Carlo analysis; the data looked sexy as hell. Like, it was fantastic. But you know what?

Proctor: Executives love it!

Farshchi: It didn’t. Yeah, but guess what? I couldn't make a decision to save my life from that stuff, because it just didn't marry up to reality. And the other problem is, when you really look into this space and into statistics in general, you have to have a massive amount of data from a broad variety of different organizations and industries to be able to calculate this stuff in a meaningful way, so that it's actually substantive. Almost nobody has that at this point. I mean, maybe some of the cyber insurers with a bunch of actuarial data, but even they'll admit that, at this point in time, they don't either. And so you've got two choices.

And this is why I have completely been sold on this control effectiveness approach: it serves as a really good proxy for what the risk is and what you're willing to do about it. So what we do is we map out what the predominant attack vectors are. We map in what controls we have in place and, this is the important part, how effective those controls are relative to those threats. Then we marry that up with how much we're going to invest, and we use it as a prioritization decision. Is it perfect? No, it's not perfect. But is it a huge leap from the smoke and mirrors of risk quantification as it stands today? Yeah, it is, 100%.
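The prioritization Jamil outlines, mapping the predominant attack vectors, scoring control effectiveness against each, and putting the next dollar at the biggest residual gap, can be sketched roughly like this. The vector names, weights, and effectiveness scores are made-up examples for illustration, not Equifax's actual model.

```python
# Hypothetical prevalence weights for the predominant attack vectors.
vector_weight = {"phishing": 0.40, "vuln-exploit": 0.35, "stolen-creds": 0.25}

# Hypothetical effectiveness (0.0-1.0) of current controls per vector.
control_effectiveness = {
    "phishing": 0.8,      # mail filtering plus user training
    "vuln-exploit": 0.5,  # patching program
    "stolen-creds": 0.6,  # partial MFA coverage
}

def coverage_gaps(weights, effectiveness):
    """Residual exposure per vector: prevalence times the uncovered
    fraction. The largest gap is the top investment priority."""
    gaps = {v: w * (1 - effectiveness[v]) for v, w in weights.items()}
    return sorted(gaps.items(), key=lambda kv: kv[1], reverse=True)

for vector, gap in coverage_gaps(vector_weight, control_effectiveness):
    print(f"{vector}: residual exposure {gap:.3f}")
```

Not perfect, as he says, but every number in it is something the organization directly controls and can re-measure after each investment.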

Proctor: And you know–

Vorndran: Jamil, how hard is it to do it well? I mean “well,” you know, we can only do it as well as we can do it. But how hard is it–

Proctor: But hey, Bryan, let me throw something on top of what Jamil said, and then we'll dig a little bit deeper. You know, CISA would agree. Think back to the Colonial Pipeline hack, right? The big gas panic on the East Coast. What year was that, 2018?

Vorndran: 2021.

Proctor: Or 2021.

Farshchi: Bryan remembers it well.

Vorndran: Oh yeah.

Proctor: Something you might know about. All right. But here's the thing: When the chips were down, what was the guidance out of CISA? It wasn't, you know, ‘You need to do more risk assessments.’ It wasn't ‘We need to see your risk quant models.’ It wasn't ‘You need to build a better program.’ Well, let's talk about maturity later; put a pin in that.

It was basically, ‘you need to patch your systems within a certain number of days.’ I learned a bit when Jen Easterly came to talk at DEF CON in 2023. She was basically telling the story directly about the Colonial hack and saying, look, the problem was people weren't patching their systems fast enough.

So when the chips are down, this is actually the way the experienced executive decision makers, the people in charge – like, you know, Jen Easterly and you, Bryan – this is the way you guys think about it. It's like, ‘are you sufficiently protected?’ Because mostly, organizations are not.

Vorndran: Yeah. So let's talk a little bit about vulnerability and patch management. Right? So we… you know, whenever I speak publicly and people ask me, ‘how do you reduce risk?’ I spend time on this topic. Jamil, I don't know if this is a better question for you or Paul, but why is it so hard? Right? I mean, people hear this and they're like, ‘Okay, I essentially go put a Band-Aid on my cut and I can move forward. I'm good to go.’ But the interdependency conversation – this was really, really prevalent with Log4j and Log4Shell and the patching that was required there. Like in practice, why is this so hard to do well?

Farshchi: Bryan, you want– Or, Paul, do you want to start? I'm happy to go on a massive monologue on this one.

Proctor: No, Jamil. Go ahead.

Farshchi: Here's why. Because we understate the difficulty of this thing. Like, I think just on its face, when you say do the fundamentals well, do the basics well, what does that tell someone? Well, they’re basic. They're fundamental. So they must be super easy to do. That is fundamentally not true at all. And I'll readily admit I have been a proponent of that narrative since day one. I still am to this day – those are the things that we should be focusing on.

But my gosh, it is hard. In a modern organization, certainly in a global one with all these different business units, with years and years and years of technologies that are all different, that are sometimes independent, to be able to get the teams, to be able to have consistent processes, to be able to manage this stuff and make sure that they're in place on an ongoing day-to-day basis? It is a monumental, it is a Herculean feat.

And then you take the next step of it, which is okay, you mentioned patching, but how many other controls, if you were to survey CISOs or security professionals, fall under the foundational controls or the basic controls? It's almost every single one. Everyone will say that, oh, it's this and it's this and it's this. And so then you get to the point where it's like, okay, so effectively everything that I need to do is now under the same bucket and everything has to work 100% of the time, every single day of the year, in every asset, out of the hundreds of thousands or millions that I have globally. It is a monumentally difficult feat to be able to do. We just understate it.

Proctor: From an external guidance perspective, having a laundry list of 800 controls where you give them some flexibility to pick stuff is not helping. See, this comes back to the idea, Jamil – like, you talk about large global organizations, but let's also just say, from an external guidance perspective where we're trying to “help everybody,” everybody needs to do it differently, but the guidance is very prescriptive. You have to do this, this, and this. You have to have one of these and you have to operate it well, etcetera. Right? So here's the thing: if you measure outcomes instead – back to the whole idea of measuring number of days to patch – a smaller organization can make the investments they want to make. A larger organization can make the investments they want to make. But I want both of them to patch within 30 days. Right?
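[Editor's note: the outcome measure Proctor describes – days from disclosure to patch against a single target, regardless of how each organization gets there – can be sketched as below. The organizations, dates, and the `meets_target` helper are hypothetical illustrations, not any vendor's actual benchmark.]

```python
# Sketch of an outcome-based patching measure: compute days-to-patch from
# (disclosed, patched) date pairs and compare the median against one target
# window that applies to organizations of any size.
from datetime import date
from statistics import median

def days_to_patch(records):
    """Days between disclosure and patch for each record."""
    return [(patched - disclosed).days for disclosed, patched in records]

def meets_target(records, target_days=30):
    """True if the median patch latency falls within the target window."""
    return median(days_to_patch(records)) <= target_days

# (disclosed, patched) pairs for two hypothetical organizations.
small_org = [(date(2024, 1, 1), date(2024, 1, 15)),   # 14 days
             (date(2024, 2, 1), date(2024, 2, 21))]   # 20 days
large_org = [(date(2024, 1, 1), date(2024, 5, 23)),   # 143 days
             (date(2024, 2, 1), date(2024, 2, 21))]   # 20 days

print(meets_target(small_org))  # True  -- within the 30-day window
print(meets_target(large_org))  # False -- median latency blows past 30 days
```

The point of measuring this way is exactly Proctor's: the 30-day target says nothing about which tools or processes an organization must buy, only about the outcome it must deliver.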

Farshchi: My strategy– oh, sorry. Go ahead.

Proctor: No. Go ahead.

Farshchi: My strategy on this is that, first off, you're never going to be perfect on any of this stuff. But my strategy is to find as many ways to simplify that ongoing day-to-day process as you possibly can. And so whether it's utilizing golden images so that, hey, if you guys just use the golden images, then you're going to get all the right patch levels, you're going to get all the right configs.

So as long as you have that within an appropriate window, whatever your time frame is – 30 days, 45 days – then it simplifies it for everyone. You know, we just rolled out – actually, it was just last night – we completed our global implementation of passwordless. So now, guess what? I don't have to worry about credential rotations anymore for any of my user space: it's just done. I completely eliminated that entire vector through that one step.

And so I think for all of us, finding ways to simplify doing these fundamental things, or just eliminating them as a risk altogether, is the only realistic way for us to expect that we're going to be successful. Otherwise what's going to happen is that we're going to continue to get hammered by folks saying, ‘Oh my gosh, it's just basic stuff.’ And you'll see people in Congress holding up the “Security for Dummies” book whenever you're testifying to say, ‘Oh, you didn't implement this control. This is such a basic thing,’ effectively keeping us on this hamster wheel, one where we'll never, ever win. And all of the ire will just continue to be pointed at us, just because people don't fully appreciate how difficult it is to do this every single day.

Vorndran: Well, you know, on this, Paul, if I could jump in for a second on this topic, right? Two thoughts, right: Number one, as part of my job, I brief, you know, the director of the FBI fairly routinely, and I can't tell you or the audience how many times, right, I get the question, ‘What was the initial vector in?’ Whether it's an APT or a cybercriminal. And I reference a vulnerability, right? Whether that's in a piece of hardware, firmware, or software. And then the follow-up question is, ‘How long has it been known?’ right? And the number of times that I'm saying 2017, 2018 is just mind-blowing to me. But it does speak to the complexity in the space.

And, I mean, I'm a huge fan of organizational kind of culture and leadership. And Jamil and I have a lot of conversations about this over the past couple of years. But, you know, I think part and parcel of this conversation is the process matters, right? That really matters. But the organizational culture around these things is equally important in terms of what risk we’re able to accept at the user level.

And the example I use is the FBI is a super complex organization, no different than Gartner or Equifax or some of the others. We have 300-plus domestic offices, 70-plus international offices, 37,000 people; driving consistent behavior across those entities and those people is an impossibility, right? We can't get nine out of 10 people to agree on anything, much less 36,000 out of 37,000. But anyway, I'll get off my soapbox here. Paul, over to you.

Proctor: Well, no, but that again is exactly why we developed the outcome stuff. Because the outcome allows Equifax to do what it needs to do to deliver a desired level of protection, and allows the FBI to do what you need to do for your situation, because you're both global organizations of – we could say, through some lens – similar scale. Right? But you have different problems. You need to do these things differently. So we need a consistent scale. And I said I'd put a pin in maturity. Let's go back to that for a second. Do both of you guys talk to your executives about maturity?

Farshchi: We do, or I do.

Proctor: Right?

Vorndran: I talk about maturity of the adversary.

Proctor: Now I have a question for you – well, hang on. Let's put a pin in that for a second. But sticking with the concept of maturity. Now, I hope both of you use maturity. So does 98% of our client base, which is like 17-and-a-half thousand clients. It's like everybody. What's the difference between a 3.4 and a 3.6 in maturity?

Farshchi: I'm going to turn that back to you, because it infuriates me that I don't know the answer to that. And every time I ask, I can never get a super straight answer about it. Bryan?

Vorndran: I have no idea, Paul. I have no idea.

Proctor: The answer is point-two. This is the standard that everybody uses to report to their board of directors how protected they are. And now I'll say a little bit more about this. Below about a 2.5, the maturity model is very useful. Let's stick with vulnerability management. If you have no processes, no technology, no people with the skills, and you just kind of do it all over the place, the maturity model is awesome for setting up your processes, getting good at integrating the technology, and starting to do it in a consistent way. That's the value of the maturity model. Now, once you hit about a 2.5, I go back to the question: what's the difference between a 3.4 and a 3.6? Getting better at maturity – oh, sorry, getting better at patching – nobody cares if you’re good at patching. What I care about is how fast you patch your systems. Now, I'll back this up with an anecdote.

This is a true story. Two insurance companies, in the neighborhood of 100,000 people each. On paper, they look exactly the same. Same number of people, the same or roughly the same technologies, same level of maturity, same level of investment, same level of support from their executives – they’re global insurance companies. On paper: same, same, same, same, same. One of them patches their systems in 143 days. One of them patches their systems in 20 days.

Our current scales don't work. Jamil, when you said, you know, ‘I wish somebody could explain to me the difference between a 3.4 and a 3.6’ – the answer is you need to stop relying on the maturity model at the higher ends of maturity. And I'm not throwing maturity models under the bus; I think they have huge benefit and you should keep using them. But relying on them for saying that we've gotten better? We've actually had clients that say, ‘Could you get to two decimal points of accuracy in there?’ And don't get me started. The answer is we should not be doing that.

Farshchi: No, I’ll just give a little color. I think they're fantastic when… and I always use them when I first start at an organization. It helps baseline everything, draw the line in the sand. Especially given the kind of organizations I join, that are, you know, coming off of a breach, it's useful. And it really is a good communication tool for people that are not technical.

So like boards of directors and things like that. But there is no question that once you get to a certain point, it almost becomes self-defeating, because there's this notion that the higher you go, the more controls are in place. When in fact – and this is part of the problem – the way it's been described to me is that once you get beyond a four, or maybe even the upper threes, it really becomes more about optimization and enablement.

And yet people believe it to be more and more security. And so there's almost this push against going higher, even though I think it's just theoretical to get to a level five or whatever it is. So there's just all this confusion around it. And so I'm with you 100%. And what I typically do is, once you get to a certain point – and I don't have a specific number, like 3.8 or 4.2 – you migrate more over to the specific measure that you're talking about, like–

Proctor: Well, my number’s 2.5, because I think at 2.5 you start polishing the apple as opposed to actually, you know, getting more to eat. That was a terrible metaphor that I just made up on the fly.

Farshchi: We’ll go with it.

Vorndran: Paul, we are going to have to bring today's conversation to a close. I think we could all, you know, spend a ton of time talking about this. I've learned a lot by listening to our conversation, and certainly maturity and control effectiveness are powerful concepts that we need to take away.

Jamil, I'll just go over to you for some closing thoughts, and then, Paul, to you for any final thoughts. But first, Paul: thanks. And I have some closing thoughts here right before we close.

Farshchi: I think this is really powerful. I think we should all – as CISOs, security professionals, technology professionals – really lean in on this front to truly measure and be outcome-based, and use this approach as a mechanism to improve our communication skills and really resonate with executive teams. But I think more than anything, to get away from this notion that it's purely about survival, and really focus on the ROI and how we can genuinely help enable the business.

And so I think this is really powerful. I would encourage everyone out there to take a look at what you're doing there, Paul, and to engage, because I think the more data that you're able to pull in, the more effective and beneficial it will be for all of us that participate. And, you know, we always talk about partnership, and transparency is absolutely essential. And so the more folks that are participating on this front, the more insights we’ll all have. And we'll all, as a result, be able to make much better decisions.

Proctor: Yeah, coming full circle to that keynote that you were at, Jamil – we even interviewed one of your board members. What was his name again?

Farshchi: John McKinley.

Proctor: John. Tell John I said hi. Back then, the benchmark was a concept. Today, it's been open for about a year, we have more than 600 organizations in it, and we're getting more every single month. I actually think this is the future of security. And it's not about, you know, ‘Gartner has this benchmark,’ right? It is not about our benchmark. It is about the fact that we need to move away from things like risk quantification and maturity as our primary tools for making decisions. And I think this is actually something that's going to make a huge difference.

Vorndran: So Paul, I'll just close this out here. I want to begin by thanking you for your time, but then also thanking our listeners and our viewers, because we're on traditional podcast channels but also a YouTube channel, so there's always a video option available. For me, a primary takeaway today – because I don't live in the space that you live in – is that we may not have answered a lot of questions today in terms of being super prescriptive about ‘you should go do A or B.’ But I think what we've done really effectively is raise a lot of questions and ideas that people really need to be rigorous about thinking through. And I think that's super powerful for our audience, and super helpful to me. I just want to thank you for helping us get ahead of the threat, which is obviously the name of our podcast, and appreciate your time.

And with that, we'll close out and we'll see everybody next time.

Proctor: Thank you very much.

Farshchi: Thank you guys.