The Center for Data Innovation spoke with Charles Radclyffe, CEO of EthicsGrade, an AI governance-focused environmental, social, and governance (ESG) ratings agency headquartered in Amsterdam. Radclyffe spoke about why the company chose to focus on AI, the challenge of regional and cultural differences in ethics, and how the sector may evolve.
Patrick Grady: Can you tell us a bit about EthicsGrade, and what motivated the focus on AI?
Charles Radclyffe: There are lots of ESG ratings companies out there looking at everything from human rights to social justice to climate change. We’ve chosen digitalization as our focus because every company in every industry, of every size, will develop a strong, robust, and mature digital strategy this decade, if it hasn’t already. That will decide the winners and losers of the 2020s, and it’s a function of two things: one is how good you become at technological progress, how clever you are at building amazing tech; and secondly, how well you manage and mitigate the downsides. As technologists, we all want to make sure it doesn’t cause any harm or unnecessary costs. I used to head up AI at Fidelity (a large pensions firm) and realised that investors were craving data on these sorts of issues.
Grady: Why is it useful to think about AI ethics in terms of ESG? Is there a danger this practice will face criticisms similar to greenwashing?
Radclyffe: When investors look at companies, they do financial analysis (how well is this company going to perform?) and then there’s a second analysis of how this company is going to be impacted by ESG factors and how it’s going to impact those factors. That second piece of analysis, of non-balance-sheet risks and liabilities, is what I would call ESG, because investors have already started to make public commitments to say: when we invest in companies, we encourage those companies to digitalize, for example, and we encourage them to use AI ethically. Every company digitalizes; therefore, every company creates risk. Investors need to know the relative maturity of the mitigation. That’s where we come into play.
Regarding the other part of the question, about the criticism of the ESG space, we call ourselves EthicsGrade for an important reason: we want to hold a mirror up to people and ask, “What do you care about?” Our goal is not to judge that but to help our stakeholders see the world through their own lens because, at the end of the day, we don’t represent the universal standard of good. We help to bring in a bit more subjectivity because ESG issues are really about trade-offs. There’s no right answer. The only things you can objectively measure are how well a company understands its stakeholders, how well it incorporates stakeholders’ objectives in its approach, and how well it executes on that.
Grady: Ethics vary across regions and amongst stakeholders. How does EthicsGrade account for this when grading AI ethics?
Radclyffe: I can answer this first-hand because our team at Fidelity was quite geographically distributed. One of the things that excited me on one of my first visits to China was how slick the whole onboarding process was. So many of the processes used smartphone facial recognition and QR codes. But if I took that to the German team and suggested, “Hey, we’re gonna use facial recognition,” you can imagine a different reaction. So it’s all about cultural reference points. If we were to try to come up with a set of standards for the whole world, we would fail. For me, there’s no inconsistency at all in being able to do exactly the same thing for clients, maybe in the Middle East or in China or other parts of the world, who have very different priorities in terms of what matters from an ethical standpoint. I quip sometimes that we’ll have nailed this the day we’ve got both The New York Times and the New York Post as clients, with the same engine providing each with an ESG index for its target audience.
Grady: Do you think non-tech companies are put off incorporating AI because of public scandals involving its use? How can they be encouraged to deploy AI systems ethically?
Radclyffe: Even though there have been very high-profile data breaches and fines levied against the private sector, I don’t think that’s where the story is with AI. I think companies have to be aware of risks, but is it deterring activity now? I don’t see that it is at all. We see in our data lots of examples of good practice. As a company, you should have an inventory of where these processes happen, how you manage risks, and how you manage quality outcomes, making sure that anything with a high-risk outcome also has high quality attached to it. Until now, development in some cases has been driven by innovation groups and brought into organisations without necessarily the right discipline, procurement, and those sorts of controls. I think all of that is about to change over the next two or three years.
Grady: In which area or metric do you see upcoming regulation having the biggest impact on your ratings?
Radclyffe: We’re looking at future regulation, whereas what we have right now is organisations that have committed to certain levels of governance, but they’ve all got different definitions of what matters to them. Each of these organisations already has its own goals in terms of what it expects to do, and the scope of some of these things will include not just machine learning but also robotics and process automation. It’s going to be a more regulated space, and I think the role of a ratings organisation is to show not just how well somebody conforms to a standard or an objective set of goals, but how well people perform against the things they say are important. There has to be a way of encouraging companies to go further, especially with regards to transparency.