5 Questions to Ask to Ensure AI Ethics


Executive Blog
Written by Dan Power, Managing Director, Data Governance, State Street Global Markets
Edited by Amanda Baldwin


AUGUST 16, 2022

“Data is the new oil.” As this phrase permeates the data and analytics community, attention is increasingly turning to AI and machine learning as fuel for the digital economy. But before Chief Data Officers can develop an AI strategy, they need to take a step back and consider the ethical responsibilities it entails. AI left unchecked can lead to unintended biases and other negative consequences.

Dan Power, Managing Director, Data Governance, State Street Global Markets, led a session on this topic at Evanta’s Boston CDAO Executive Summit entitled, “Structuring Governance to Keep AI Ethics Front and Center.” Here, he provides some questions CDAOs should ask themselves to ensure they are building AI ethics into their systems from the outset.

- - -

CDAOs often don’t acknowledge AI ethics until after there’s an issue, but they can no longer let ignorance or apathy prevent them from being socially responsible. As a data and analytics leader, you should be deeply involved in creating and executing a strategy for responsible use of AI at your company. In the absence of such a strategy, the potential harm to individuals, other companies and society as a whole is considerable.

Many organizations put so much pressure on Time to Market (TTM) that ethical considerations fall by the wayside or are never examined in the first place. It is critical to consider marginalized groups and other stakeholders when developing an AI strategy, because AI has the potential to encode biases into the company’s algorithms and make them permanent.

Apple and Goldman Sachs learned this the hard way. The initial launch of the Apple Credit Card was marred by persistent news stories that husbands were receiving 10 to 20 times more credit than their wives, which needlessly damaged the reputations of both companies. If they had built a “de-biasing” step into the development process for the new credit card and its underlying systems, they could have avoided that reputational damage.

Here are five questions to ask to ensure ethics are built into your AI strategy:
 

  1. Is your AI strategy fair and honest?

Does your approach to AI include an explicit “de-biasing” step in the development process? If not, are you at risk of treating certain subgroups of the affected population (clients, customers, counterparties, regulators, investors, employees, etc.) unfairly, or of letting an algorithm do so on your behalf? Even without intending any harm, does your training data contain historical bias that will carry over into the resulting algorithm?
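The post doesn’t prescribe specific tooling for a “de-biasing” step, but as a rough sketch of one such check, the snippet below compares favorable-outcome rates across subgroups in training data. The field names and the simple demographic-parity gap are illustrative assumptions, not part of the original post; a large gap is a signal that historical bias may carry over into the trained model.

```python
from collections import defaultdict

def outcome_rate_by_group(records, group_key, outcome_key):
    """Compute the favorable-outcome rate for each subgroup in the training data."""
    totals = defaultdict(int)
    favorable = defaultdict(int)
    for row in records:
        group = row[group_key]
        totals[group] += 1
        favorable[group] += 1 if row[outcome_key] else 0
    return {g: favorable[g] / totals[g] for g in totals}

# Hypothetical historical lending records used to train a credit model.
training_data = [
    {"gender": "F", "approved": False},
    {"gender": "F", "approved": True},
    {"gender": "M", "approved": True},
    {"gender": "M", "approved": True},
]

rates = outcome_rate_by_group(training_data, "gender", "approved")
gap = max(rates.values()) - min(rates.values())
print(rates)                                  # e.g. {'F': 0.5, 'M': 1.0}
print(f"demographic parity gap: {gap:.2f}")   # large gaps flag historical bias to investigate
```

A check like this would sit alongside the model-development workflow, so that a flagged gap triggers review before the algorithm ever reaches production.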
 

  2. If your AI strategy is challenged publicly, would you be comfortable sharing the details?

The best test for this is “would you be comfortable if the strategy were explained fully on the front page of the Wall Street Journal or your hometown newspaper?” If not, why not? If the strategy is not spelled out explicitly, are you comfortable going ahead with no formal strategy, and having that revealed? Are there other aspects of the strategy that might embarrass the organization or give the appearance of unfair treatment to a portion of your stakeholder population? What’s the most embarrassing thing that could happen if your AI strategy were challenged in a public forum?
 

  3. What would happen if everybody followed your AI strategy?

Take your AI strategy and scale it up as if every company – in your industry and across industries – followed that same strategy. Is that a sustainable approach for society as a whole? If that scaling up looks like it would result in a dystopian future, what would you have to do to change your strategy so that it would be safe for everyone to adopt it? What are the unintended outcomes and who would be hurt by them?
 

  4. Does your AI strategy put long-term relationships at risk for short-term gains?

Looking at your stakeholder population (clients, customers, counterparties, regulators, investors, employees, etc.), are you endangering any long-term relationships with members of the community? Are there harmful outcomes that your strategy does not prevent? How do you mitigate that risk?

If you invited the marginalized members of your various stakeholder groups to sit down with your team to review your ethical AI strategy, how would you ensure that they come away feeling that your company has their best interests at heart? Would they feel that you built sufficient protections into your AI development approach that any concerns they might have would prove unfounded?
 

  5. Is the AI strategy lawful and within the letter and spirit of your policies?

A company’s AI strategy should be explicit. It should consider not only the culture and values of the company, but also its policies and procedures and applicable laws and regulations, so that there will not be any awkward conversations with regulators, board members, the press or the public.
 

Planning is Everything

If you fail to plan, you’re planning to fail. An explicit strategy for the responsible use of AI becomes a necessity once you move beyond prototyping and capacity building to building AI and machine learning products and releasing them into the wild. Story after story of companies neglecting these basics, and getting into hot water in the press and with their communities as a result, makes it clear that not having a strategy is not an option.


This is a continuation of a conversation from Evanta’s Boston CDAO Executive Summit in June 2022. Find your CDAO community, and connect with like-minded C-level executives on mission critical topics, such as this, at one of our upcoming summits or programs.

 
