The Center for Data Innovation spoke to Ioanna Papageorgiou, Marie Skłodowska-Curie Fellow at the Institute of Legal Informatics of Hanover University. Ms Papageorgiou is a legal scholar and doctoral researcher in the NoBIAS project, a pan-European research project funded by the EU’s Horizon 2020 programme. The aim of NoBIAS is to research and develop novel methods for AI-based decision-making without bias. Ms Papageorgiou spoke about the development of secure and trustworthy AI, different sources of discrimination, mitigation measures, and issues of dataset bias and discrimination that go beyond AI alone.
Benjamin Mueller: How do you distinguish between discrimination in a technical sense—i.e., the error between an algorithm’s estimate and the true value—and discrimination in a wider sense, i.e., the unjust treatment of different categories of people?
Ioanna Papageorgiou: I think that, at least from the perspective of law and the social sciences, it is difficult to draw a fine line between these two sides of discrimination. Indeed, the discriminatory effect of AI often relates to—without being limited to—diverging accuracy rates across different demographic groups. However, an organization’s definition of the “target variable” or the “class labels,” the data sampling, and the selection of the features an AI system uses for predictions may also fuel discriminatory effects. Bias can creep in at many different stages of the AI pipeline, in ways that involve human and organizational decision-making and thus cannot be understood in a purely technical sense.
Most importantly, AI does not exist in a vacuum. Instead, it is used—or might be used in the foreseeable future—in crucial social contexts such as law enforcement, surveillance, airport passenger screening, employment, and housing decisions. And once AI is applied, it moves beyond the purely technical realm, and legal and social considerations of its impact become relevant.
Mueller: What are the key viable strategies for mitigating algorithmic bias?
Papageorgiou: Appropriate bias mitigation strategies are essentially linked to the main sources of algorithmic bias, along with the priorities and policies we adopt as a society. Their feasibility, in turn, necessarily depends on the state of the art in AI and debiasing research, along with the procedures that different strategies require.
From a technical perspective, I stress the importance of pre-processing debiasing methods to create balanced, diverse, and more representative datasets. In-processing methods—which, for example, explicitly add “fairness regularizers” or constraints to the objective functions of ML models—and post-processing approaches are also among the debiasing tools available. In addition, what is important here is testing and monitoring a system’s performance and fairness on an ongoing basis after it has been deployed.
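To make the in-processing idea concrete, the sketch below shows one way a “fairness regularizer” can be added to a model’s objective. It is a minimal illustration under stated assumptions, not code from NoBIAS or from any particular toolkit: the synthetic data, the demographic-parity-style penalty, and the weight lam are all illustrative choices.

    # Minimal sketch (illustrative only): logistic regression trained with a
    # fairness penalty added to the usual cross-entropy objective.
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 3))                  # features
    g = rng.integers(0, 2, size=500)               # binary sensitive attribute (group label)
    y = (X[:, 0] + 0.5 * g + rng.normal(scale=0.5, size=500) > 0).astype(float)  # outcomes

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def objective(w, lam=1.0):
        p = sigmoid(X @ w)
        # standard cross-entropy loss of a logistic-regression model
        ce = -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
        # "fairness regularizer": squared gap in mean predicted score between the two groups
        gap = p[g == 1].mean() - p[g == 0].mean()
        return ce + lam * gap ** 2

    w_fair = minimize(objective, x0=np.zeros(3)).x
    print("weights learned with the fairness penalty:", w_fair)

Raising lam trades some predictive accuracy for a smaller gap in predicted scores between the groups, which mirrors the trade-offs such in-processing methods must manage.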
Furthermore, going beyond technical approaches, an important and feasible strategy is the creation of a more diverse and representative AI ecosystem. People from different scientific and socioeconomic backgrounds, as well as minorities and oppressed groups, need to be involved, not only in the discussion around AI’s impact but also in its research, development, and use. However, this should not be understood in any sense of tokenism or ethics-washing.
Finally, as a legal scholar, I do think we need an adequate legal framework which ensures that AI actors take appropriate measures—debiasing, testing, and overseeing their models—and that they can be held accountable for harms due to algorithmic discrimination. Of course, lawmaking and law enforcement are long and rigid processes, which could impede the effectiveness of such measures in the short run, especially in the face of rapid technological advances. Thus, the provision of policy guidelines and the adoption of adequate codes of conduct by private companies are also very welcome.
Mueller: What technical solutions is your team working on to reduce bias in algorithmic decision-making?
Papageorgiou: As a team, we are divided into three distinct but interrelated research groups, all of which aim to tackle bias in AI-based decision-making systems. The first group works on understanding bias in data. The second group works on mitigating bias in algorithms. The third group works on accounting for bias in results. More specifically, some colleagues are working on the documentation of bias in data through ontologies, the development of causal methods for understanding bias in data, debiasing ranking methods on top of networks, and building ensemble models for tackling bias in facial classification. My personal research focus is on addressing the legal issues that arise from bias mitigation.
Mueller: To what extent do existing rules and laws provide safeguards against discrimination, regardless of whether it is algorithmic or human?
Papageorgiou: This touches upon a crucial point, namely the effectiveness of EU anti-discrimination law regardless of the use of digital algorithms. Without underplaying the significant advances of EU law towards equality, we can observe weaknesses and shortcomings that originated long before the uptake of algorithmic decision-making. Many of those issues relate to the enforcement of existing non-discrimination legislation and the provision of redress to victims. In many EU member states, the volume of case law on discrimination is still very low.
Introducing algorithmic decision-making entails the risk of reinforcing existing deficiencies, and it creates new challenges for protecting against discrimination, given the opacity and complexity of AI systems as well as the number of actors involved in the AI value chain.
However, the EU’s body of anti-discrimination law is technology-neutral: it still applies and offers protection regardless of whether an AI system is involved in a decision. Thus, the effective implementation of existing laws and their adequate legal interpretation, along with some regulatory amendments such as opportunities for collective redress actions and public monitoring, are required in order to tackle the new challenges and safeguard the protection of EU citizens.
Mueller: Do you have examples of algorithms that have ended up reducing discrimination in society?
Papageorgiou: Multiple fairness tools have been developed by companies and academic research projects, for instance IBM’s AIF360, Microsoft’s Fairlearn, and the Aequitas framework from the University of Chicago. Such tools can certainly have a positive impact on non-discrimination goals, especially as AI becomes commonplace in many areas of our everyday life. After all, that is the reason we work within the NoBIAS project on developing fairness-aware algorithms. Implementing these methods in the real world, monitoring their performance, and overseeing them over time are key to assessing their broader impact on equality in the long run.
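As a rough illustration of how such tools are applied in practice, the sketch below audits a simple classifier with Fairlearn’s fairness metrics. The data and model are synthetic placeholders rather than any real deployment, and the audit shown is only one of the checks these toolkits support.

    # Minimal sketch (illustrative only): auditing a trained classifier with Fairlearn.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from fairlearn.metrics import MetricFrame, selection_rate, demographic_parity_difference

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 4))
    sex = rng.integers(0, 2, size=1000)                # sensitive feature (placeholder)
    y = (X[:, 0] + 0.3 * sex > 0).astype(int)          # synthetic labels

    clf = LogisticRegression().fit(X, y)
    y_pred = clf.predict(X)

    # selection rate per group, and the overall demographic-parity gap
    frame = MetricFrame(metrics=selection_rate, y_true=y, y_pred=y_pred, sensitive_features=sex)
    print(frame.by_group)
    print("demographic parity difference:",
          demographic_parity_difference(y, y_pred, sensitive_features=sex))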