The Next Big Question


Episode 3
Hosted by: Drew Lazzara and Liz Ramey

Ben Sapiro

Global CISO

Great-West Life

Before taking on his current role at Great-West, Ben was a risk management consultant with KPMG, and he developed his own unique framework for measuring security risk.

What Can Risk Management Do for Cybersecurity Strategy?


AUGUST 10, 2020

How can organizations adopt a more robust risk management approach to cybersecurity? That’s what we explore in this episode with Ben Sapiro, Global CISO for Great-West Life. Ben’s whole career is about evaluating risk, and he shares a framework for applying risk management principles to cybersecurity. He discusses the need for clear risk thresholds, strategies for communicating risk to the business, and what CISOs can do to position themselves as risk leaders.


Drew Lazzara (00:15):

Welcome to The Next Big Question, a weekly podcast with senior business leaders, sharing their vision for tomorrow, brought to you by Evanta, a Gartner company.

Liz Ramey (00:25):

Each episode features a conversation with C-suite executives about the future of their roles, organizations, and industries.

Drew Lazzara (00:34):

My name is Drew Lazzara.

Liz Ramey (00:35):

And I'm Liz Ramey. We're your cohosts. So Drew, what's The Next Big Question?

Drew Lazzara (00:41):

Well, Liz, this week we're asking: how can organizations adopt a more robust risk management approach to cybersecurity? To help us tackle this big question is Ben Sapiro, who is the global chief information security officer at Great-West Life Insurance in Toronto, Ontario, Canada. Ben is the perfect person to help us think more clearly about the evolution of risk management and cybersecurity. Before taking on his current role at Great-West, he was a risk management consultant with KPMG, and he even developed his own unique framework for measuring security risk. As you'll hear in our conversation, Ben believes that organizations intuitively understand risk in everything from investment to product development, but they often fall short when it comes to applying that same view to cybersecurity. In this deep dive, Ben talks about what it could mean for organizations to raise their threshold for cyber risk, what that could mean for business outcomes, and how security leaders can have a greater, positive impact by approaching their work through a risk-based lens.

Before we chat with Ben, we want to take a moment to thank you for listening to this week's episode. To make sure you don't miss out on the next Next Big Question, please be sure to subscribe to the show on Apple podcasts, Spotify, Stitcher, or wherever you listen. Please rate and review the show so we can continue to grow and improve. Thanks so much. Enjoy.

Drew Lazzara (02:16):

Ben Sapiro, thank you so much for being with us on The Next Big Question. Now, you're here to talk with us about the evolution of risk management in cybersecurity, but I know that so often the way leaders think about the future is deeply influenced by the past. So I was hoping we could start today with you sharing some of your journey leading into your current role.

Ben Sapiro (02:36):

Sure. So, go back a few years, many, many years, in fact, given how gray the hair is now. And I found myself thinking about what my career could be, and this is as I was starting to look at universities, and I decided that I wanted to go to business school because I wanted to have this ability to bridge technology with business, to be that middle ground. And so surprisingly, despite growing up on the knee of a software engineer and being of that sort of mind myself, I went to B-school, and I learned how business people spoke and thought. And that paid off well. And one day I was sitting in a class, and as you can imagine being a third-year university student, maybe I thought I knew everything, and I was being a bit of a smart ass, and the guest lecturer turned and looked at me and gave me a bit of a look. And anyways, that turned into a job offer. And so I spent summers working in his consultancy shop, being a security advisor. 

And then that turned into a career in Big Four consulting, where I did consulting and various types of, I'll call it, vendor- and services-related security activities for years. But I kept on always asking myself the question: I go to a client, I give advice or I help them deploy a solution or whatever it might be, and things don't seem to change. Why? The conclusion I came to was that I really wanted to go client side -- to go and understand the world and how it works from the client's perspective, because I think I'm missing something. And so I made the jump out of consulting and services and advisory work and became an independent contractor for a while, and eventually moved into an honest living, being a full-time employee of various organizations.

And that was informative to me, because I found out pretty quickly that all of the advice I'd ever given was wrong. And not wrong in the sense that it wasn't good advice -- it was very good advice -- but what it never accounted for was the whole notion that businesses have a lot going on and technology organizations have a lot going on. And compromise is fundamental to everything that we do, unfortunately; we have to accept that there will be compromise. And so that gave me the perspective to then say, well, let's be balanced, let's be pragmatic, and let's change the things that are most important, but let's also learn to be okay with the things that are not great but, you know, at least aren't going to kill us.

Drew Lazzara (05:07):

Right. Well, that's a really important perspective for business leaders to grow into. And it's actually kind of a good thesis statement for today's conversation around risk. I know that part of your background also includes the development of your own risk tool, the binary risk assessment. Can you tell us a little bit about that experience and how it's informed your thinking on the future of risk?

Ben Sapiro (05:26):

Sure. So this is a story that goes back a bit. So I'll give a hat tip to Jack Jones. He's the inventor of the FAIR methodology and one of the founders of a company called RiskLens, and he's really been pushing the whole narrative around quantitative risk management when it comes to information security or cybersecurity. And I remember when he released the FAIR methodology, and I read the few papers that were describing it, and I was excited because this was going to be revolutionary. This was going to change the conversation about risk. So that finally, when we said we needed money for X resources for Y, we would be able to show them, whoever 'them' was at the time, that this was the right path, and they would just open the checkbooks, the vaults would start pouring out money, and life would be good. And we'll pause on that thought for a second because, you know, stuff happened between there and then.

And so great, here's this methodology, really interested in it. At the time, Open FAIR -- the community group and the opening up of the methodology as a whole -- didn't exist. And I was frustrated by that. And I didn't want to go and try and clone this work, because I didn't think that was appropriate, nor did I have the necessary expertise to do it. But I decided that maybe there was something a bit more accessible that I could do that would really bring a notion of structured thinking to risk analysis. And what really frustrated me was this whole idea of, well, what's our risk: high, medium, low? Okay, well, how did you come to that? Well, we multiplied likelihood of high, medium, or low by impact of high, medium, or low. Okay, but how did you come to high? Well, it felt like it was high. There wasn't a lot of structure to how people were describing the things that drove likelihood or contact frequency. There wasn't a lot of structure to the things that people were using to describe the impact. And there are robust methodologies that do ask you to have consideration around that. And so the ISF, for example, has one, and there were several others that take a very big set-of-workbooks approach.

But my feeling was that it didn't make sense to use these very large methodologies when really what we're trying to do, or I wanted to do, is lift the game up just a bit. To just make risk conversations a little better, but also in a way that could isolate opinion. Not eliminate it, isolate it. And so I went off, and over a few months, came up with this idea of binary risk analysis. And they say that imitation is the highest form of flattery. And so that was my flattery to, and homage to, the FAIR work that Jack Jones had done.

Binary risk analysis was a structured methodology in the same way that FAIR, the Factor Analysis of Information Risk, is structured. It broke things apart. And so with this, I wrote up a paper on it, built a methodology around it, and then went and wrote code for it. The core of this, and where I think it instructs future thinking, is around -- when you're talking about risk, you need to understand what brings you to that risk, and how you're characterizing it to the organization or the business or whatever it is that you're responsible for helping defend.

And whether your methodology is FAIR or binary risk analysis or IRA or whatever the thing that you choose to use is, it's important that you're transparent in your structuring of the explanation: this, then this, then this, then this happens, therefore that, then that, then that. And that makes risk conversations more accessible to the people that you're working with and supporting, versus 'well, security just said the risk was high.' Because now people can look and go, well, I disagree with that particular part of your analysis, not the whole thing. It makes it more defensible. So I think defensible conversations around risk are imperative to the future of good risk management and what comes next for risk conversations overall.
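[Editor's note: the 'isolate opinion' idea can be sketched in code. The miniature below is in the spirit of binary risk analysis, not the published methodology itself; the questions and rating scale are invented for illustration. The point is that every yes/no input is recorded alongside the rating, so a reviewer can dispute one answer rather than the whole conclusion.]

```python
# A hypothetical miniature in the spirit of binary risk analysis:
# likelihood and impact are derived from recorded yes/no answers,
# so each individual input can be challenged, not just the final rating.

LIKELIHOOD_QUESTIONS = [
    "Can the attack be carried out with common skills?",
    "Can the attack be performed without significant resources?",
    "Is the weakness known outside the organization?",
]
IMPACT_QUESTIONS = [
    "Could the incident cause significant financial loss?",
    "Could the incident harm customers or data subjects?",
    "Could the incident attract regulatory attention?",
]

def rating(yes_count: int) -> str:
    """Map a count of 'yes' answers (0-3) to a coarse rating."""
    return ["Low", "Medium", "High", "High"][yes_count]

def assess(likelihood_answers: dict[str, bool],
           impact_answers: dict[str, bool]) -> dict:
    likelihood = rating(sum(likelihood_answers.values()))
    impact = rating(sum(impact_answers.values()))
    risk = "High" if "High" in (likelihood, impact) else (
        "Medium" if "Medium" in (likelihood, impact) else "Low")
    # The full answer trail is returned with the rating, so "the risk
    # is High" is always traceable to specific, disputable inputs.
    return {"likelihood": likelihood, "impact": impact,
            "risk": risk, "inputs": {**likelihood_answers, **impact_answers}}

verdict = assess(
    {q: True for q in LIKELIHOOD_QUESTIONS},
    {q: False for q in IMPACT_QUESTIONS},
)
print(verdict["risk"])  # High: easy to carry out, even though impact rated Low
```

A disagreement now lands on one recorded answer ("I don't think the weakness is known externally") instead of on an unexplained "High."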

Drew Lazzara (09:50):

So, as you're thinking about the future, this kind of framework or approach, it seems like what you're saying is that in cybersecurity, leaders aren't really thinking about risk in the way that other parts of the business already think about it. So where do you think that gap comes from, and what can security professionals do to influence a change in their own approach?

Ben Sapiro (10:10):

So, when you look at any other part of the business, they are very willing to take reasonable risks. Now, that will vary by the definition of what 'very willing,' or at least 'willing,' means. But if you look at the context that I live in, which is a financial institution -- but I think this has parallels elsewhere -- we take a certain amount of money and we invest it for what is hopefully a positive return, which we can then provide to our shareholders and our customers and so on. There is risk in that. We're not saying that there is any such thing as a risk-free investment, and in fact, you don't get any returns on risk-free investments, or you get pretty low returns. And so the business already says they're willing to take risks.

When you think about a product development company, be it a physical product, like a toy or a software development company, they take a risk as well. They invest a certain amount of funding to go and develop the thing to take it to market. They've done their work to say, is this a good idea or not, one hopes. But they don't know with 100% certainty that the millions of dollars they've invested in product development will turn out positively. And history is littered with failed startups and companies that made bad decisions around product development that cost them their organization. 

But the flip side of that is that they were willing to take these risks because there was going to be a reward. And whether it's innovation or running a business, if you try to eliminate all risk, you're going to spend a lot of money on eliminating that risk, rather than using that money to actually generate revenue, which is really the reason that organizations exist. We're in the business of taking considered risks because they generate returns. They generate outcomes.

And so, that's the first thing that I think is very different for us as a profession -- we think that we have to reduce risk, when really what we need to do is, and this comes to the second part of the answer, think about how we reduce risk to acceptable levels. Having that conversation, I think, is difficult for us, and I'll get into why in a bit. But I think that's a conversation that's worth having, because what you really want the business or the organization you support to understand is that we can't eliminate all risk. And doing so would be incredibly prohibitive. And so what we're really after here is understanding what's good enough.

And you hear this time and time again from executives: tell me what's good enough in security. It can't be a 100% ISO 27001 implementation. It can't be all of NIST SP 800-whatever your favorite publication is, right? That's not going to be achievable, and it's going to be too cost prohibitive.

The way I like to frame it is that you really want to start with: what is the bad event that you want to avoid? And for CISOs, the occupational hazard is a breach. You want to avoid a data security breach. Okay, so let's start talking about how much of a breach. And so when you think about a breach, there are breaches that happen all the time where somebody gets infected by a virus, and your incident response team is practiced and has the right tools and the right procedures. And they contain it, and there's no harm. There's no real impact to the business.

Then, you've got the other side of the spectrum, which is these significant events that will make CISOs famous -- or at least the companies they work for famous -- not necessarily in a good way. Those are the things that you want to avoid. But now the quantities you need to figure out are how often and how much. If your company is coming out and saying, well, we won't have breaches, we'll make sure we're doing everything we can so that we never have breaches -- that's just not achievable. You could spend infinitely on doing that.

But if you say, look, we recognize we're going to have a breach. Our job is not to stop every breach. Our job is to stop the ones that kill us -- the ones that do us irreparable harm, either to our customers or to our share price or to us as an operating concern. Everything else that sits under that threshold, we're actually okay with. And so then the exercise becomes figuring out what that threshold is.

And it's important that CISOs have that conversation, versus saying, I am here to fix all the things and stop all the risks and stop all the breaches. It's not sustainable. We're going to take on something. And we understand that there will be harm because we're not trying to stop everything. And once you get that conversation going, it is possible to get people comfortable with: we are okay with the potential of this happening. And I want to say, not okay -- nobody's okay with these things happening -- but we understand they'll happen and recognize they're a necessary hazard of being in business.

Then you can start exploring, well, where is that threshold? You have to start the conversation with: we're going to have a breach. Breaches are going to be with us, roughly with this frequency, and maybe they're going to cost this much. And then somebody says, okay, well, I'm not happy with those numbers, but I would be happy with lower numbers -- lower frequency, lower costs, whatever it might be. Okay, now we're negotiating.
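[Editor's note: the negotiation Ben describes -- an estimated breach frequency and cost measured against a tolerance the business agrees to -- can be sketched as a simple annualized-loss comparison. All figures below are invented for illustration, not numbers from the episode.]

```python
# Illustrative sketch: compare an estimated annualized breach loss
# against a board-agreed tolerance threshold. All figures are made up.

def annualized_loss(frequency_per_year: float, cost_per_event: float) -> float:
    """Expected loss per year = how often it happens x what it costs."""
    return frequency_per_year * cost_per_event

def within_tolerance(frequency: float, cost: float, threshold: float) -> bool:
    return annualized_loss(frequency, cost) <= threshold

# Security's estimate: roughly two contained incidents a year at ~$150k each.
estimate = annualized_loss(2.0, 150_000)
print(estimate)  # 300000.0 per year

# The business can now counter with a different tolerance; the argument
# is over frequency and cost, not over "High" versus "Medium".
print(within_tolerance(2.0, 150_000, threshold=500_000))  # True: under tolerance
print(within_tolerance(2.0, 150_000, threshold=250_000))  # False: above it
```

Once both sides are talking in frequency and cost, "I'd be happy with lower numbers" becomes a concrete, negotiable position.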

And I think all of this ties to the existential angst that CISOs and security people seem to embed within their professional psyche, which is: we have to stop everything, we can't have a breach, and as a CISO or a security person, I have failed somehow if a breach occurs. Well, you failed if you didn't do all that you reasonably could. You failed if you didn't advise your business that there were risks. You failed if you didn't go and put in controls and processes and all of those things, as best as you could. But even if you do all of those things really well, there's still a possibility of a sophisticated threat actor out there that's going to be able to do you harm no matter how much you spend, no matter how much you do. You hear stories all the time of CISOs being let go because of a breach.

And I think we have to get over ourselves. The chief risk officer, the CFO, whoever it might be who is taking on some sort of risk for the organization and overseeing it -- they aren't getting let go any more or less than CISOs. Sure, if they haven't done their jobs properly, that's a conversation. But if they've done their jobs properly, and they're clear to everybody that we've done the best within the boundaries available to us -- funding, change management, and so on -- then if that breach happens, okay, it happened, we did the best we could. Pick ourselves up and let's continue moving.

And we have to, as a profession, let go of that existential angst. We shouldn't stop caring, and we shouldn't stop saying we need to reduce the number of breaches as much as possible, but we shouldn't let it be the thing that drives us toward building ivory towers around security -- the idea that we absolutely have to defend against everything at all costs -- because that puts us at odds with the business's willingness to take reasonable risk.

Drew Lazzara (17:33):

You know, as you were talking through that, I was thinking of even broad security terminology, something like that word 'firewall,' you know, even that old image sets the expectation of this giant wall of literal flames that repels all attackers. And I think that that creates the kind of expectation of unbreachability that you're talking about. So what can you do to overcome that perception throughout the business? What can CISOs do to reposition themselves as risk leaders instead?

Ben Sapiro (18:02):

I think we need to start the conversation with control effectiveness -- efficacy rates, whatever term you'd like to use. And I think there are examples throughout the non-security world of how well something might work. If you look at -- I don't know, given the timeframe and the unparalleled time in history that we're currently in -- masks. Face masks, in high demand. Sure, they don't stop 100% of infectious particles, but they reduce the likelihood. And so we as a profession should be talking, and I know that we do, about how the control is generally effective, but not perfect. And that it is designed to defend against these classes of problem, but not those.

When you start talking about how these controls are good, but not perfect, you're coming back to that acceptable risk conversation. The other part of this, though -- and I don't want to vendor blame, because that's unfair -- is that the market has something in the range of, depending on which investment organization you believe, anywhere between 4,500 and 5,300 different security product vendors. And every single one of them will go out there and say that they stop this, and they stop that, with definitive certainty.

And I think as the technology risk professionals that we are in information security, our job is to defuse that vendor message and say, 'yeah, it's a good piece of technology,' but we should never be presenting it to the business as 'this will stop these things dead.' This is a risk mitigant that will reduce the likelihood of this event, and we need to be clear on that language when we're talking about investments: this does not stop everything, but it's a good thing to do because it reasonably contains the risk to a level that we think is acceptable.
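[Editor's note: the 'risk mitigant, not a dead stop' framing can be put in simple arithmetic. The sketch below assumes, simplistically, that layered controls fail independently -- which real controls often don't -- and the effectiveness figures are invented for illustration.]

```python
# Sketch: residual likelihood after layering imperfect controls,
# under the simplifying assumption that controls fail independently.

def residual_likelihood(base_likelihood: float,
                        control_effectiveness: list[float]) -> float:
    """Each control blocks a fraction of attempts; what slips past one
    control still has to get past the next."""
    remaining = base_likelihood
    for effectiveness in control_effectiveness:
        remaining *= (1.0 - effectiveness)
    return remaining

# An event with a 40% annual likelihood, behind three decent-but-imperfect
# controls (e.g. mail filtering at 90%, endpoint protection at 80%,
# user awareness at 50% -- all made-up numbers).
print(round(residual_likelihood(0.40, [0.9, 0.8, 0.5]), 4))  # 0.004
```

Nothing in the stack "stops it dead," yet the residual likelihood is contained to a level the business can decide is acceptable -- which is exactly the conversation to have.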

Liz Ramey (19:51):

How do you then change the mindsets of the board or of executive leadership around how they view risk? You've been talking about how CISOs think of themselves -- how do you change that paradigm in the way that boards or executive leadership view risk, so that they see it as something that can be acceptable?

Ben Sapiro (20:23):

Start with understanding the relationship between management and the board. It's a pet peeve of mine when I hear, for example, a CISO -- or actually, typically not CISOs, but I'll call it members of the security community who are earlier on in their careers -- say, 'What does the board think about that? We should take that to the board.' That's not how it works, at least in a North American context, be it the US or Canada, and probably large parts of the world that have, I'll call it, similar governance structures around corporations. The board's job is to ensure that the corporation is well managed. But the board's job is not to manage the corporation itself. And so that's an important distinction.

A CISO is part of management, and their job is to tell the board, 'Look, we've got this. These are the risks that we know about. These are the ones that we are treating. These are the ones we've chosen not to treat or treat partially because of resource constraints because of other priorities, whatever it might be. However we're managing all of these risks within some threshold that does not give us any concern.' And it's really the threshold that management is supposed to be talking to the board about and getting agreed upon. 

But once you establish that threshold -- anything below this line is within a reasonable amount of risk, and management can treat it as they see fit, trading risk for reward -- the board should then be able to say, 'Okay, they've got it.' Anything above this threshold, well, that's where the board should intervene and go, we are not managing our risk appropriately, and therefore we now need to give management direction to treat ourselves back down to that appropriate threshold.

There's always going to be that distance between the board and management. The board will convene some number of times per year and consider many dozens of topics within a relatively short timeframe -- topics that dozens of senior executives and management and staff pay attention to on a full-time basis. So, to think that a board conversation is going to cover the entirety of the issue -- well, that's not a thing, given the time constraints being worked with in board settings. And so really, you want to be able to focus on what that threshold is. But cybersecurity is a hot topic. And so perhaps there's an opportunity there to educate board members to say, this is what prudent looks like. This is what good enough looks like. And that will still result in some negative outcomes, but those negative outcomes will be within our tolerances, our loss appetite, whatever it might be.
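[Editor's note: the governance split Ben describes -- management handles everything under the agreed threshold, and the board intervenes above it -- maps onto a simple escalation check. The risk names and dollar figures below are invented for illustration.]

```python
# Sketch: partition reported risks into "management handles it" versus
# "escalate to the board," based on a pre-agreed threshold. Illustrative only.

AGREED_THRESHOLD = 1_000_000  # annualized loss the board has accepted, in dollars

risks = [
    {"name": "phishing-led fraud",     "annualized_loss": 400_000},
    {"name": "legacy platform outage", "annualized_loss": 1_500_000},
    {"name": "third-party data leak",  "annualized_loss": 900_000},
]

managed   = [r for r in risks if r["annualized_loss"] <= AGREED_THRESHOLD]
escalated = [r for r in risks if r["annualized_loss"] > AGREED_THRESHOLD]

# Management reports "we've got this" for the first group; only the
# second group warrants board direction to treat risk back down.
print([r["name"] for r in escalated])  # ['legacy platform outage']
```

The board conversation then stays where it belongs: on the threshold itself and on the exceptions above it, not on every item management is already handling.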

Drew Lazzara (23:23):

Ben, this might go back to your earlier point about the way security tools are talked about in the marketplace generally, but at a high level, it does seem that there's this consensus around the fact that the threat landscape is evolving so rapidly. There's that arms race metaphor that gets discussed when talking about keeping pace with constant change in this area. And you've talked about clearly communicating what's important right now for the business. Does that kind of risk management approach allow you to remain dynamic and adaptable to security thresholds that might change very quickly? Can you stay agile even after you've defined acceptable risk?

Ben Sapiro (23:58):

I would start by first being very careful about 'the ever-evolving threat landscape,' or 'the dynamic environment,' or 'the arms race.' I think that frustrates a lot of CFOs and CEOs and so on, because we're basically, as a profession, turning to everybody and saying, 'This will never be done, and you will have to keep paying and paying and paying.' And it's true that things will continue to evolve. And it's true that we're never quite done. But I think we need to change the framing: this is where we are now. This is the current state of what we have to deal with. And these are the approaches we're using to deal with it.

That's the same approach that I would think the business takes. They deal with the here and now. They set strategy. They set a plan for the next few years, as best as they understand it. But you're not going to hear your head of product innovation say, 'Well, this is what we're investing, but we may have to invest more if our competitors invest more or change their game.' That's a given. It's implicit in the conversation that if the environment around you changes, then you will have to adapt.

So I think as security professionals, we ought to stop saying the dynamic landscape, the arms race, the evolving threat landscape, whatever the term that you like to use is because it signals it won't be done. 

When it comes to the topic of reevaluating, or making sure that our thresholds stay current, we have to be comfortable with the notion that when things change, our knowledge will increase, and we will go back with a careful explanation about why we're moving the pin on this thing, why we're reevaluating and changing. But we should be designing measurements and thresholds and control statements, whatever it might be, that have a longevity to them, but that are also designed to be forward looking in nature.

This is going to have utility that is going to tell you about upcoming things. So forward looking statements you might be able to make are around code security. You're going to ship an application. You want to make sure it's secure. So what are the forward looking things that you might be thinking about that would relate to the risk that could come about if this application wasn't secure? And you can probably think of a few dozen very easily: secure code training, web application firewalls, static analysis, dynamic analysis, runtime analysis, vulnerability scanning and management, penetration testing, identity and access management, et cetera. You think about a whole bunch of drivers and inputs.

And you can then say, some of these are things that, if not attended to right now, will result in risk going up -- which is the potential of a security event that undermines business objectives. And there are some things that, if we do them well enough in advance, will decrease the likelihood of risk ever coming into existence, or of an issue coming into existence that leads to risk. Then you can start designing measures about what truly matters, and design things for longevity, because all of the things that I've just spoken about will apply to software developed in five years, just as they applied to software five years ago.

Now, you're not going to bring all of those to your executive or the board. You're going to find a way to summarize those into 'this is how we're doing around system security, development, acquisition, maintenance, and operation,' or something to that effect. And have that forward looking perspective to say, we've got indicators that we're not doing enough on the front end of this pipeline, so it's going to cause things to fall off later on, as these systems go into production. Then you have a regular check-in on a yearly basis to reevaluate where your thresholds are and see whether or not you need to move the pin.

You don't have to be reevaluating every single time. In fact, you could probably set it out at the beginning to say, look, I'm going to tell you about the following five things every time I see you. How we're performing against these key measures. And once a year, I'm going to reevaluate those, and I'll come back to you. And I'll tell you what I've shifted them to -- to new measurements, adjusted the thresholds, whatever it might be. And that's a practice that a board would generally be comfortable with because they're going to see the same happening in financial performance, in setting their budgets for the year, what have you.

Liz Ramey (28:10):

So Ben, you've talked about the different thresholds over time and how those thresholds remain current. I want to take it in a little bit of a different direction. How do you measure or set the different thresholds for, say, an individual versus the overall enterprise?

Ben Sapiro (28:33):

I think when we're talking about individuals, every event that happens to a person is impactful. And we can't manage at scale in that way. And I don't want to say that what happens to the individual isn't important. It absolutely is. We've all had some unpleasantness happen in our lives, and we should do everything we reasonably can to safeguard the individual. But you can't manage at the individual level.

And so when you're setting thresholds, you're thinking about how you manage the aggregate. How do you manage at a corporate level? Or how do you manage at a city level? Whatever your particular operating domain is. And you recognize that within that mix, because it is impossible to design perfect controls, from time to time there will be an individual impact.

But on the whole -- and this comes back to acceptable risk -- the system, the environment, the business, whatever it might be, remains safe. I think every organization, every society has some tolerance for negative outcomes to a subset of their members. But again, it comes back to: it's impossible to design perfect controls that keep everybody safe.

And so look at a real-world example: driving. Driving is a deadly activity. It may vary by which city you live in and the nature of the drivers there, but it is generally a deadly activity. And we've put in a series of rules and safety measures -- airbags and seat belts and impact zones. We've put in bollards and medians on the highway and a whole bunch of other things to reduce the likelihood of the event. Yet, sadly, every year, people are injured in motor vehicle accidents. And that's heartbreaking. But society as a whole has not turned around and said, that's it, we're done with cars, back to horse and buggy, that'll be safer.

Because there is this reward that we, as a society, have decided we are willing to pursue: the ability to move goods further, for people to work in different places and travel faster. All of those things we think are good for society overall, in exchange for that small potential of impact to a single individual. But overall, across the entire community, the entire population, the risk is manageable within reasonable levels.

So I think businesses and organizations as a whole are generally focused on managing at the aggregate level. But when you step away from that notion of managing risk just at the business level, when you think more broadly about the entire population, that seems to me to really be the work of governments and regulators, who are then going to design legislation and rules and codes, whatever it might be, that say, this is what tolerable looks like. And I think as security professionals, we are very interested in understanding that one, but ultimately, we're focused on managing the interests of the organization and protecting as many stakeholders as we reasonably can -- as many data subjects, whatever the term is for you -- and then turning to our leaders in government to say, set the right thresholds and policies for acceptable risk overall within the industry, within society, and what have you.

Drew Lazzara (32:09):

So, I think there are going to be a good number of security leaders listening along who are nodding their heads with you as you've taken us through this paradigm, Ben. But there might be other business leaders listening who are thinking more about the costs that tend to be associated with security. If you're able to be a leader who shifts your organization toward a more risk management approach to cybersecurity, does that help your organization take bigger risks? Does it allow you to think about your budgeting and the resources that you dedicate to cybersecurity differently, and maybe lower that cost while also increasing risk appetite?

Ben Sapiro (32:49):

Well, it depends on how big the budget is, right? Typically, the proportion of the overall IT budget that security accounts for sits between 5% and 10%. And so I don't think that spending less money on security is necessarily going to allow them to shoot for the moon. I wish I could tell you that the CISOs I've had the pleasure of knowing have budgets so significant that by not spending them, some other significant business activity could occur. Where the risk-taking becomes possible is in speed of delivery. 

So, if we're thinking about building a new product, we would talk about all of the controls we would want to put in, all the defenses, all the things to make it safe. And some of those would be things that are mandated by law that we have to do. So, if you're making a toy, then you've got to make sure there are no small parts so that there can't be a choking hazard. But there are some things that you would do in your product development, your project deployment, your platform implementation, whatever it is that you're building and putting out to market that makes money for your company, that are decisional. 

And that might mean the difference between going live today and starting to generate revenue today, which then enables future conversations about improvements because you've got money now to do that. Or it might be that you're going to defer the launch of this thing and try to fix everything, and then the company has lost revenue there. So adjusting the security budget up or down is not going to be the thing that enables the company to succeed more generally. I'm sure there are some scenarios like that, but in general, no. It's rather the agility of deciding which controls to put in place, of treating risk by deciding what risk is acceptable, that will give the company the ability to take risks and generate better rewards for its shareholders.

Drew Lazzara (34:47):

Well, Ben, thank you so much for joining us today and helping us to think a little bit differently about the impact that a risk management approach can have on your cybersecurity posture. Before we let you go, we always like to ask our guests what would be your Next Big Question for senior business leaders and executives?

Ben Sapiro (35:06):

How do we make technology truly agile in service of the business?

Drew Lazzara (35:13):

Wow, such a great question. It definitely qualifies as a big question, too. I think it may be the central theme facing organizations in the future, and I can't imagine there's a business leader out there who isn't asking themselves that question every single day. We'll be asking our next guest that question, so you've set them a very daunting task, and I'll be interested to hear that answer. Ben Sapiro, thank you again for being on The Next Big Question this week. It's been such a pleasure speaking with you. We really appreciate your insights, your time, and your fascinating look at this challenge. Thanks again for being here.

Ben Sapiro (35:49):

Thank you for having me.

Liz Ramey (35:52):

Thank you again for listening to The Next Big Question. If you enjoyed this episode, please subscribe to the show on Apple Podcasts, Spotify, Stitcher, or wherever you listen. Rate and review the show so that we can continue to grow and improve. You can also visit Evanta.com to learn more about our C-level communities. Network, share, and learn with Evanta, a Gartner company.