The Next Big Question

Episode 28
Hosted by: Liz Ramey

Ellen Nielsen

Chief Data Officer

Chevron

Ellen Nielsen is the Chief Data Officer at Chevron, where she focuses on creating a data-oriented culture partnered with value chain thinking. Ellen oversees Chevron’s data strategy to ensure effective, efficient, and responsible use of data and AI.

What Should Enterprises Be Doing with AI?


SEPTEMBER 21, 2023

This time on The Next Big Question, Ellen Nielsen, CDO of Chevron, joins the podcast to talk about how enterprises can approach artificial intelligence. She discusses the challenges and opportunities for AI at an enterprise level and how change management and culture play a significant role. Ellen also talks about how to pivot your adoption strategy with a volatile, rapidly changing technology like AI.


Liz Ramey [00:00:12] Welcome to The Next Big Question, a podcast with senior business leaders sharing their vision for tomorrow, brought to you by Evanta, a Gartner Company. Each episode explores a big question with C-suite executives about the future of their roles, organizations, and industries. Thanks for listening. I'm your host, Liz Ramey. Now let's hear what today's next big question is. 

In this episode of The Next Big Question, Ellen Nielsen, Chief Data Officer at Chevron, joins me to discuss the challenges and opportunities with AI that enterprises should be preparing for. Ellen and I talk about culture, innovation, risks and the opportunities for women in the data and AI world. Ellen Nielsen, welcome to The Next Big Question. 

Ellen Nielsen [00:01:00] Hey, Liz, thanks for having me. Really excited to be here. 

Liz Ramey [00:01:04] We're thrilled to have you as a guest on this show. And I am really excited to talk to you about the topic of artificial intelligence at the enterprise level. But before I get into all of that, I'd really like to hear from you just about yourself, your career and your organization. 

Ellen Nielsen [00:01:22] Yeah. Thanks for giving me the opportunity to introduce myself. I'm Ellen Nielsen, the current Chief Data Officer of Chevron, and the very first one in this capacity. But of course, I have a pretty long background, over 30 years in total, with different components. One is IT, digital, and data; I also worked in procurement and supply chain. So, when people ask me where I'm coming from, I say I walk on two legs, and I can switch between both legs very, very fast. That's maybe my background. I've worked for different companies and industries and have great experience with different customers, whether in automotive, manufacturing, or consumer goods. And for five years now I've been with Chevron, really excited to be part of this energy transition. 

Liz Ramey [00:02:15] That's great. Yeah, I bet there's a lot of innovation going on right now in that space, so I love that. And you've been in the data and technology world for so long, so I'd love to hear from you about the history of AI, and what recent changes have taken place to give enterprises more opportunities to really progress in this space. 

Ellen Nielsen [00:02:42] So, AI is nothing new. I think some people will laugh because it started back in the 1940s. That's way before I was born. Human beings have always thought about how, with the evolution of computers, we can create something that can take over tasks typically done by humans. That's basically artificial intelligence: substituting for certain tasks typically only humans can do. And that started pretty early on. Then over time, you see a kind of ping-pong between the computers and the people: the computer was ready, and people were ready to take advantage of it, creating computer languages and automations and robots, et cetera.

So, it was always a question of what was faster. The computers got faster; okay, then we could pick up again with the human brain and ask what we could do with it. Then it stopped again, and the computers had to get better. And in this constantly changing environment, the opportunities grew so significantly over time. It's amazing to observe, since the 1980s, when machine learning and those kinds of algorithms started. And maybe at a certain point in time you remember the chess game, where a computer was even better at playing chess than anybody else. That was a big 'aha' moment: okay, wow, it's really ready, and we can take advantage of this. And it continued further into deep learning, when deeper machine learning algorithms with neural networks came into play, really trying to mimic the brain's activity, asking how we can get even better at creating artificial intelligence that takes on more tasks and augments us as human beings. So, just imagine how far we have come over the last, let's say, 80 years. Eighty years is a blip in our evolution on the earth, so I think it's amazing to see. I'm really excited about the evolution in this space. 

Liz Ramey [00:05:05] Yeah, that's great. And so, as we're looking at this from an enterprise perspective, what are some of the obstacles that enterprises are going to be challenged with in adopting this technology? 

Ellen Nielsen [00:05:17] Yeah, I can talk a little bit from my personal opinion and then through the lens of a corporation, and there are definitely challenges. It's no different from the big technology shifts we had in the past. There were challenges that came with those, and the same is happening with artificial intelligence, and specifically now when we look into generative AI, where we create new content. The first thing that comes to mind is, of course, the data space. We always say AI is only as good as the data provided to it. So, it doesn't change the data aspect in the broader scheme. You cannot say, 'We'll just focus on AI, and we don't care about data anymore.' That would lead down too many avenues that are not healthy. So, data plays a big role. 

Then the other thing is the talent. You need people who have an educational background in this technology and who also have the mindset of lifelong learners, because this technology, and what we are learning about it, is moving so fast. So, you need talent who can learn the technology but who are also fast changers and fast learners: very curious, looking out for new things and evaluating them. We are no longer in a static or semi-static world. We are in a high-pace, high-velocity world where people have to be faster at learning and adapting. 

The third is, of course, the policy topics. When you think about AI and generative AI, responsible AI plays a big role. Many companies have already gotten deeply involved in responsible AI, and the same is true for us. We started that journey a while ago and thought about what it means for us to do this in a responsible, ethical way. That's definitely something to look out for as a corporation, but also as an individual, if you're going to get in touch with this technology. 

And then, I think the kicker is always the culture. Everything is adoption. How do you adopt it? How do you let it in? How do you use it? Are you using it in the right way? So, I think the watch points are: how can you adopt this, and how can you get people using it for their own efficiency, for their own productivity, for creating new content, maybe even for making new business with it? I think it will branch out into different spaces. But what I'm maybe least worried about is the technology; we always go further from a systems standpoint. The ingredients we have to watch are the people and the processes.

There was actually an interesting article in the Wall Street Journal about a nurse whose work process was supported by AI. The article described how the nurse, with her long experience, knew the patient didn't need a particular treatment. But the AI proposed it, and the process basically said you should follow the AI unless you have a very good reason, and you have to argue why you want to do it differently. So, in effect, the nurse did as the AI said, because she felt that if she spoke up, she would be punished by the supervisor, by the system, for not following it. And we have to recognize that AI makes mistakes, too. Humans make mistakes, and AI is not superior to a human in many ways. So, we have to watch that with open eyes. This is maybe the challenge: to set up the right processes, and to watch that you are not introducing the wrong measurement, which creates the wrong behavior in the end and leads to a problem. 

Liz Ramey [00:09:33] A lot of ethical dilemmas there, right? And we're writing that. We're writing the code, we're writing the rules. So, we have to deal with questions that maybe we've just debated for years. And, you know, you talked about culture as a challenge. In a former conversation that you and I had, you quoted Peter Drucker's line about 'culture eating strategy,' but you said, 'I want to take it a little bit further,' and you said that 'culture eats everything.' What do you mean by this in terms of AI at the enterprise level? 

Ellen Nielsen [00:10:09] Yeah. That was a comment that came out intuitively, because I'm a big believer that, on the opposite side, culture could also kill everything around it. In terms of process and technology, if you don't have the culture, then nothing is there. That's why I think culture is so important and really enables everything, and that's why I believe it's really eating everything. I just read a survey done by two of the wise, let's say, voices in the industry, Tom Davenport and Randy Bean, very well known for their data thought leadership. And the survey said that 80% of the success rate is related to culture. And that has not changed. 

Sometimes you get tired of hearing it: change management plays a role, people play a role, culture plays a role. But sometimes we shift to the more tangible, obvious things, where the technology is there and the process is written down. In fact, nothing works if you don't have the people and a culture that is open to this and adopting it. And it was clear: 80% is a pretty high rate for success. Sometimes I get tired of it, because I've been doing transformation for maybe 15 to 20 years now. We always say this, but sometimes we don't execute on it. I think what is really important for everybody listening is: don't just say it, really take action on it, walk the talk. We are sometimes very quick to cut out the things that are not so tangible, but that can hurt the overall adoption in this space of adapting. 

Liz Ramey [00:12:00] Yeah, absolutely. And sticking with culture, it's something that really encompasses the entire enterprise. This podcast is really geared towards C-level leaders, so I just want to dive a little deeper. You said you've been doing this for 20 years, so you've probably had to adapt yourself along the way. In terms of change management, what should C-level leaders do differently? How should they walk down this pathway of adopting AI? 

Ellen Nielsen [00:12:36] Yeah. First of all, I see many C-level leaders getting familiar with it, building fluency around it, understanding the topic, and then understanding the impact on the company they're leading. These are the top leaders of a company, and of course they see the very broad spectrum of things. Then they can judge where the impact is, how it will play out, where the opportunities are, but also where the threats are. So, I think adoption means taking it very, very seriously. But don't turn the volume up too high, you know? You have to be careful with the evolution in artificial intelligence, and I'm pointing specifically here at generative AI, because the evolution is so fast. When you talk to people, internally or externally, they've created something, maybe in January. And given the evolution since then, when you ask them whether they would do it the same way today, they say no, because it has already evolved, and they would do it differently. 

So, you see, don't think that you can set something up and it will just keep running as-is. There is so much velocity and volatility in this topic. You have to pivot. You cannot be afraid to shut something down and start something new, because it might be worth starting over; the evolution of the underlying foundation and technology is moving very fast. So, don't be afraid to pivot. That's maybe the challenge, along with doing the change management. I would also say that for the C-level there are some risks associated with it. You will get bombarded. I see the explosion of communication from companies approaching C-level leaders, claiming to have the next big driver for their revenue, their footprint, their market share. It's always good to double-click on it. What is real, what is maybe just at a very early stage, and what do you already have in your own shop? Is this something in addition? It's difficult to separate the genuinely new things from the things you already have, or the things that maybe don't really exist yet. So, I would be cautious about listening to everything that is out there, and you need to do some due diligence with a dedicated team that focuses on this. Because, and this is what we decided at Chevron, this is not a side project. We cannot do generative AI as a side project. We have been doing artificial intelligence for decades now, and we had already been looking at this new level of gen AI for the last three years and had some use cases, but we did that a little bit on the side. We decided to put dedicated people on it. So, you have to invest, even knowing that you will not see a return on investment immediately. But that's a technology you definitely have to watch. 

Liz Ramey [00:15:53] Yeah, absolutely. Those are all really good points. You mentioned the rate of change with this type of technology. The rate of innovation with these capabilities is just stratospheric, probably something we have never quite seen before, but it can also cause a bit of anxiety in C-level leaders. I was just at a town hall with a group of enterprise-level CIOs in Dallas, and they were definitely talking about the anxiety they have felt as leaders and the anxieties that other enterprise-level leaders are feeling. So, how do we handle the anxiety that comes with these changes and innovations, and how do we manage that anxiety within our organizations as breakthroughs are happening around us, or even inside? 

Ellen Nielsen [00:16:48] Yeah, the anxiety is definitely a real thing. When you look into corporations and organizations, people are afraid of new things and new technology they don't understand yet. And if we are thinking, okay, if I push hard enough against it, maybe it stops; it will not stop. I think it will progress as other things have progressed in the past. Think about banking: you still went into your local branch to do all your banking, and then at a certain point in time, you had only online banking. So, it's definitely coming. But how do you use it? And it is human augmentation, not substitution. 

So, make the learning as easy as possible, and invite people to understand that this is the right way to go. Listen to the anxiety, but also listen for the words or communication that say, okay, I don't believe it's coming. We should not think this is not coming; it will come anyway. You either get on the train early and learn it, or you get on later and have to catch up. So, think about it as an opportunity to become a lifelong learner and to embrace that. We will go through this, and we will go through this together, and we have to go through it responsibly. We need all hands on deck, and we have to watch it and be careful with it. But it is definitely not something we can push to the side, hoping it will not hit us. It will definitely hit us, and we have to get prepared. 

Liz Ramey [00:18:31] It is definitely coming whether we like it or not, so let's make these changes for the positive. I'm struggling right now. I'm talking to a lot of executives about AI, especially at the enterprise level, and as a consumer, I'm still waiting for some big breakthrough, some big, profound business outcome from AI. Maybe I just haven't been reading the right things, but I'm still really waiting and anticipating for that to happen. We know that the change is going to come sooner rather than later. So, what's the right approach? Do you, as a C-level leader, adopt early, as you mentioned, but assume some higher risk? Or do you sit back on the sidelines and let those kinks work themselves out before you really adopt it? 

Ellen Nielsen [00:19:30] Yeah, that's always the key question, right? Are you joining, or are you waiting on the side, watching how others evolve, and then stepping up at a certain time? But this is moving so fast. If you try to sit on the bench and watch for one or two years and then start, that has some benefits, of course; you skip the early trials. But you also lose the opportunity to learn with it, to understand its evolution and where the risks are, and it's very healthy to do that learning. 

I believe that you cannot just sit back. I think everybody says that: don't sit back, get involved, at your chosen pace and with your chosen effort. Apart from maybe the very harshest critics, who say, okay, don't touch it, I have not heard anyone argue otherwise. From my perspective, that's not an option. And there are really interesting use cases out there that I've read about, but I'm not sure they touch the big, huge business workflows at the moment, because the compute power has to be there, and the models have to be evaluated. Maybe you have to change your model, and that doesn't happen in a week if you want to adapt something to your business workflow, your business decisions. So, there needs to be some judgment about when you go after the big things, because if you hit the big things with an immature technology, you cannot judge the risk associated with that. That's why you haven't seen these huge, fundamental things yet, because, as we know, if you use a large language model, it might come out with a wrong answer today. And that's where you have to rally around it. I believe there will be companies creating control mechanisms: 'Hey, this software can check whether your language model is producing the right quality and making the right decision.'

So, you will see cascading control mechanisms kick in. But I would say some very meaningful things have already happened in different industries, including ours. Take our engineering standards: as you can imagine, Chevron has a tremendous number of engineering standards, which are very important for doing the job in the field, and doing it in the right way. Having a chat function over them now, you get the information instantly. You don't have to browse through five different documents and figure out a research mechanism for what you need to look for. You have very fast access, and sometimes time matters. Time matters in these kinds of roles, where we have to act fast and in the right way. And that can bring safety benefits, and safety is a really big thing. So, we have these cases already, and I would say it's not so superficial anymore. It comes down to really useful cases, and I'm looking forward to seeing more of that. 
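To make the pattern Ellen describes concrete: a chat function over internal standards combined with a cascading control check is commonly built as retrieval, then generation, then validation. Below is a minimal sketch in Python; the tiny document store, the stubbed model calls, and the grounding rule are all hypothetical placeholders for illustration, not Chevron's actual system or any vendor's API.

```python
# Minimal sketch: "chat over engineering standards" with a cascading
# control check. The document store, the stand-in model calls, and the
# grounding rule are all illustrative placeholders.

STANDARDS = {
    "valve-maintenance": "Isolate the line and verify zero pressure before servicing a valve.",
    "hot-work": "A hot-work permit and a fire watch are required for welding in the field.",
    "lifting": "Inspect slings before each lift and never stand under a suspended load.",
}

def retrieve(question: str, top_k: int = 2) -> list[str]:
    """Naive keyword retrieval: rank standards by word overlap with the
    question. A real deployment would use vector search over embeddings."""
    q_words = set(question.lower().split())
    ranked = sorted(
        STANDARDS.values(),
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return ranked[:top_k]

def draft_answer(question: str, context: list[str]) -> str:
    """Stand-in for the generative model: a real system would send the
    question plus the retrieved context to an LLM."""
    return context[0] if context else ""

def grounded(answer: str, context: list[str]) -> bool:
    """Stand-in for the cascading control: accept the answer only if it
    is supported by the retrieved standards."""
    return bool(answer) and any(answer in doc for doc in context)

def chat(question: str) -> str:
    context = retrieve(question)
    answer = draft_answer(question, context)
    if not grounded(answer, context):  # control mechanism kicks in
        return "No confident answer; please consult the standard directly."
    return answer

print(chat("What do I need before welding in the field?"))
```

The design point is the final check: the drafted answer is returned only if the control step can tie it back to a retrieved standard; otherwise the system defers to the human, which is one way a cascading control can catch a wrong answer from the model.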

Liz Ramey [00:22:47] I was speaking with a CIO just the other day. They said, you know, there's a little anxiety about ‘AI is taking jobs.’ And they said, ‘Oh, no, no, AI isn't going to take your job. But somebody who knows AI is going to take your job.’ 

Ellen Nielsen [00:23:03] Yeah, I heard the same thing: you cannot afford not to use it, because the people who are using it will have an advantage and will be faster. So, you'd better pick it up. 

Liz Ramey [00:23:11] Well, I'd love to shift a little bit and talk about the importance of perspective when building models, when building out technology capabilities. You're very passionate about women leaders in data and AI. So, what does that mean to you and what's the baseline now? And why do you think it's important to change? 

Ellen Nielsen [00:23:33] Yeah, thanks for bringing this up. I'm really passionate about Women Leaders in Data and AI. I've been on the board of this organization for almost three years. It was founded by Asha Saxena, a Columbia professor, who said, okay, we have to change the world, because AI will be in everything. And yet only 20 to 25% of the people contributing to this AI are women. Then we know we are maybe not starting from the right level to do this right for everybody on the planet. That's seeing it from a gender perspective, but you can, of course, also see it from a bigger diversity perspective. And I thought, okay, this really resonates with me; we have to be part of this. We don't have enough women contributing to the algorithms and the very foundational things, and that could lead to unintended consequences. And we want to make a difference. So, what's the hurdle right now? 

That's why I'm very passionate about supporting women in STEM, of course, and then going further in data and AI: helping women become leaders, bring other women leaders along with them, and basically grow the population, because we want equity and parity in the things we are contributing to. I'm very passionate about this, and I know many other women leaders and data and AI fellows feel the same way, and we have a lot of allies, too, who are supporting this cause. But it's not only about the technology, as you said earlier, Liz; it's the people who drive this, the people who get involved in contributing to it. And that's a watch point for us: don't be too single-sided, too siloed. We have to involve a great diversity of people to make this really work in the right way. 

Liz Ramey [00:25:45] That's fascinating, and I love hearing that. I do have -- we're going to kind of wrap up here. And so, I have a question from my last guest to you. And then I do want to hear your question for my next guest. But this is coming from another woman in technology, Angela Williams, the CISO at UL Solutions, and it's kind of around the same conversation that we've had today. AI is huge right now, Angela says. When ChatGPT came out, everyone started racing towards it. There were not a lot of guardrails put around it, and there's a lot of concern as to what type of data is going into it. From a security perspective, how can we use this technology in a very thoughtful way to accelerate or to help improve our cybersecurity strategies in the future? 

Ellen Nielsen [00:26:36] Yeah, this is a great question. I love that question from Angela, and I think it sits at the intersection of cybersecurity, data privacy, and new technology; you have to bring all of this into balance. We thought about it, too, at the end of last year and the beginning of this year, when ChatGPT came out. What is our approach? I think that might differ from company to company, depending on the risk understanding and the risk appetite. But we said, okay, we'll put out some rules, some guardrails, and some guidance on how we use it. We didn't want hidden usage. If you are not steering it in the company, what are people doing? Of course, they do it at home, and then you don't know. So, we said, okay, we want to be proactive and define the rules, and trust that our people at Chevron are good citizens. They know the rules, and they know when certain rules apply because there is risk associated with them. We have a good risk culture; we understand that from a safety perspective and so on. 

So, we felt pretty comfortable that when we put the rules out, people could manage it. That's maybe the company perspective. But in the cyber space you will, of course, see people using this generative AI technology in a bad way, too. There will be bad actors; there will be new threats coming. I saw that someone ran a kind of hackathon: hey, be very creative with your generative AI hacking capabilities and try to hack into something. That tests out the technology, and what it leads to is that we can build protective measures against the new threats coming out of this technology. But I would say I've seen many companies take a very thoughtful approach to it and put rules out. And you are, of course, betting to some degree that the people you're working with are good citizens who will also protect the company they work for. 

Liz Ramey [00:28:55] Well, Ellen, I am excited to ask you what your next big question is, which I can pose to my next guest. I'd like you to think about it from the perspective of a C-level leader in an enterprise; it's C-level leaders who are really responsible for driving progress within organizations. With that in mind, what question should C-level leaders be asking today? 

Ellen Nielsen [00:29:24] Yeah. I saw that you ask this question at the end of the podcast, so I put my thoughts together a little bit. My question would be: what are you really doing to set up the next generation of artificial intelligence within the frame of responsible AI? How do you do this responsibly in your company, in your environment? And what measures are you taking to do this right? Because this is a topic that doesn't go away. It will grow significantly. 

Liz Ramey [00:29:58] And I'm excited to see all of the changes that are going to be happening. So, Ellen, thank you so much for being my guest. This was really a great conversation. 

Ellen Nielsen [00:30:08] Yeah, thanks, Liz. It was really a pleasure to be here. 

Liz Ramey [00:30:12] Thank you, again, for listening to The Next Big Question. If you enjoyed this episode, please subscribe to the show on Apple Podcasts, Spotify, Stitcher, or wherever you listen. Rate and review the show so that we can continue to grow and improve. You can also visit Evanta.com to explore more content and learn about how your peers are tackling questions and challenges every day. Connect, Learn and Grow with Evanta, a Gartner Company.