
5 Q’s for Rediet Abebe, Co-founder of Mechanism Design for Social Good

by Hodan Omaar

The Center for Data Innovation spoke with Rediet Abebe, co-founder of Mechanism Design for Social Good (MD4SG), an initiative that uses techniques from algorithms, optimization, and mechanism design (a field in economics that studies the mechanisms through which a particular outcome or result can be achieved), along with insights from other disciplines, to improve access to opportunity for historically underserved and disadvantaged communities.

Hodan Omaar: Can you explain how MD4SG got started and what sorts of problems you work on?

Rediet Abebe: Mechanism Design for Social Good is an initiative I co-founded and have been co-organizing since 2016. The initiative started as a small reading group, co-led with Kira Goldner, made up primarily of graduate students in theoretical computer science, economics, and operations research who were looking to learn from one another and identify opportunities where we might be able to help improve societal welfare. Since then — along with my co-organizers Wanyi Li, Irene Lo, Francisco Marmolejo, and Ana-Andreea Stoica — we have grown MD4SG into a multi-institutional, multi-stakeholder, multi-disciplinary initiative with active participants from over 100 institutions in over 30 countries. We have several domain-specific working groups that pursue research, implementation, and advocacy projects in areas such as poverty and inequality; housing and homelessness; bias, discrimination, and fairness; environment and climate; and challenges in emerging nations and under-resourced settings.

One example from my own research, which came out of an early MD4SG meeting on eviction and housing instability, explores the role of income shocks in starting or deepening poverty cycles. We know from a great deal of empirical work that uni-dimensional measurements, such as income or wealth, are insufficient measures of welfare because they do not capture other factors that matter, such as how frequently an individual experiences shocks or how much access they have to resources that can help them absorb those shocks. In our paper, my research team and I developed a stylized model of welfare that incorporates not just people's income and wealth but also their experience of income shocks. Using this model, we were able to ask how best to allocate subsidies to support families who may face eviction, and we found surprising results with deep policy implications.

For instance, we found that the objective function we choose, the type of subsidy, and the information we take into account can all matter significantly. As one example, the optimal allocation under a model that provides income subsidies, which supplement people's income over time, can target very different groups of people than the optimal allocation under a model that provides wealth subsidies, which give a one-time upfront payment. These results have deep implications for how we understand low-income housing support. They suggest that it is not enough to think about what resources we are providing; we also have to think about which types of income and shock profiles we prioritize and which interventions we are considering.
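A minimal simulation sketch can make that contrast concrete. The setup below is a toy, not the model from the paper: the two families, the dollar amounts, and the "savings go negative" eviction proxy are all invented for illustration. It estimates how much an ongoing monthly supplement versus a one-time lump sum reduces each family's simulated eviction probability.

```python
# Toy illustration only -- not the stylized model from the paper.
# Two invented families: one with a chronic monthly deficit but rare shocks,
# one with a monthly surplus but frequent large shocks and almost no buffer.
# We estimate, by simulation, how much an ongoing income supplement versus a
# one-time wealth transfer reduces the probability that savings ever go
# negative (a crude stand-in for eviction).
import numpy as np

rng = np.random.default_rng(0)
RENT, MONTHS, TRIALS = 1000, 24, 5000

FAMILIES = {
    # chronic shortfall, small and rare shocks
    "chronic-deficit": dict(income=900, savings=300, p_shock=0.03, shock=250),
    # monthly surplus, but large and frequent shocks
    "shock-prone":     dict(income=1200, savings=50, p_shock=0.20, shock=700),
}

def eviction_prob(f, monthly_boost=0, lump_sum=0):
    """Share of simulated households whose savings ever dip below zero."""
    evicted = 0
    for _ in range(TRIALS):
        savings = f["savings"] + lump_sum
        for _ in range(MONTHS):
            savings += f["income"] + monthly_boost - RENT
            if rng.random() < f["p_shock"]:
                savings -= f["shock"]
            if savings < 0:
                evicted += 1
                break
    return evicted / TRIALS

for name, f in FAMILIES.items():
    base = eviction_prob(f)
    income_subsidy = eviction_prob(f, monthly_boost=100)  # $100 every month
    wealth_subsidy = eviction_prob(f, lump_sum=600)       # $600 once, up front
    print(f"{name:15s}  base={base:.2f}  "
          f"income subsidy={income_subsidy:.2f}  wealth subsidy={wealth_subsidy:.2f}")
```

With these made-up numbers, the monthly supplement does far more for the chronic-deficit family because it closes the recurring gap, while the lump sum does far more for the shock-prone family because it provides a buffer; a planner holding one instrument or the other would therefore target different families, which is the kind of reversal the results point to.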

Omaar: In a recent paper you explain that computational research has valuable roles to play in addressing social problems. Can you explain what these roles are?

Abebe: Meaningful advancement toward social change is always the work of many hands, but in that paper we describe four potential roles through which computing work can help us understand and address social problems.

First, computing can play a diagnostic role, which means it can help us measure and characterize social problems. Consider the subsidy example we spoke about earlier. The problem is that the measures of welfare government assistance programs use rely on simple metrics such as income or wealth. Computational approaches can help characterize the problem better by incorporating income shocks into measurements of welfare, which in turn can change how resources are allocated.

Second, computing can play a role as a formalizer, bringing analytic clarity to vague social policy objectives. Consider the statement: “Housing assistance programs should help the greatest number of people.” This could be interpreted in multiple ways. One interpretation is to minimize the total number of families that experience eviction; another is to first give the family most likely to be evicted as much assistance as possible, then move on to the next, until the budget is exhausted. Both of these objective functions seem reasonable and are widely used, but, as we showed, the consequences of choosing one over the other can be drastic. We tend to treat the two as expressing the same goal, and yet they can tell us to do very different things and produce very different outcomes. Optimization problems force us to be concrete about what our goals are.
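Here is a small toy calculation of how those two readings can diverge. The numbers are invented: three families with made-up eviction risks and made-up costs to fully avert eviction, a fixed budget, and the simplifying assumption that partial assistance reduces risk in proportion to the share of the averting cost that is covered.

```python
# Toy example with invented numbers: the same budget allocated under two
# "reasonable" readings of "help the greatest number of people".
# Assumption: spending s on a family with risk r and full-averting cost c
# reduces its eviction risk to r * (1 - s / c).

families = {   # name: (baseline eviction risk, cost to fully avert eviction)
    "A": (0.9, 5000),
    "B": (0.6, 1000),
    "C": (0.5, 1000),
}
BUDGET = 2000

def expected_evictions(spend):
    return sum(r * (1 - spend.get(name, 0) / c) for name, (r, c) in families.items())

# Reading 1: minimize expected evictions -- with linear risk reduction this is
# achieved by funding families with the largest risk reduction per dollar first.
spend1, left = {}, BUDGET
for name, (r, c) in sorted(families.items(), key=lambda kv: -kv[1][0] / kv[1][1]):
    spend1[name] = min(c, left)
    left -= spend1[name]

# Reading 2: help the highest-risk family first, as much as possible,
# then move to the next, until the budget runs out.
spend2, left = {}, BUDGET
for name, (r, c) in sorted(families.items(), key=lambda kv: -kv[1][0]):
    spend2[name] = min(c, left)
    left -= spend2[name]

for label, spend in [("minimize expected evictions", spend1),
                     ("highest-risk first", spend2)]:
    print(f"{label:28s} allocation={spend}  "
          f"expected evictions={expected_evictions(spend):.2f}")
```

With these invented numbers, the first reading spends the whole budget on families B and C and leaves A unassisted, while the second spends everything on A. The point is not which reading is right, but that the English sentence did not decide between them.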

Third, computing can play a rebuttal role, clarifying the limits of technical interventions and of the policies premised on them. For example, a group of computing scholars recently called on Immigration and Customs Enforcement (ICE) to reconsider its plans to use an algorithm to assess whether a visa applicant would become a “positively contributing member of society” as part of its Extreme Vetting program. The computer scientists explained that no computational method can provide reliable or objective assessments of the traits ICE sought to measure, which led to the program being abandoned. Being able to quantify what algorithms can and cannot do, and with how much reliability, allows computational scholars to safeguard against misuses of technology.

Finally, computing can act as a synecdoche, meaning it can attract attention to long-standing social problems in a new way. For instance, activists have long advocated against cash bail, which requires individuals to pay a sum of money in order to be released from jail while awaiting a court hearing. However, the advent of risk assessment tools, which decide whether an accused person should be allowed bail by predicting the likelihood they will miss a future appointment related to their case, has foregrounded the fact that a person's ability to leave jail and return home to fight the charges depends on money. Risk assessment tools did not create this issue, but they offer a lens through which to notice it anew.

Omaar: How can we design and analyze algorithms and computational techniques that seek to improve access to opportunity when the problem set—be it inequality, stigma, or poverty—is so hard to measure? 

Abebe: I think the fact that social problems are hard to measure is inherent to the problems themselves. For example, HIV and AIDS are stigmatized conditions, but without effective ways to measure how much stigma contributes to the spread of the disease, to the mismanagement of the condition, or to information bottlenecks, policymakers may struggle to implement targeted interventions that can support people. Rather than a challenge to designing algorithms, I see this as an opportunity.

We conducted a study to collect and analyze health-related Internet searches from across the African continent to better understand what kinds of stigma and discrimination there are, how they play out, and how pervasive they are across different age groups, genders, and countries. One of the interesting results we found was that in countries with higher rates of questions related to stigma and discrimination, there are also higher rates of HIV. We later found this to be consistent with findings from the public health literature, which suggest the positive correlation arises because stigma discourages people from getting tested and from adhering to treatment, both of which increase transmission rates.

I believe the work we did speaks to the question of how we can destigmatize conditions such as HIV and AIDS. For instance, if a user were to search for “What are the symptoms of HIV?,” we could provide the information they are looking for and additionally direct them to their nearest testing center. Or if a user were to search for “I am nervous to tell my family about my condition,” we could suggest support groups they could attend.

Omaar: A recommendation for a single problem, say “earlier intervention for the homeless can be more cost-effective and reduce trauma,” could yield different insights for, and require different interventions from, different stakeholders, such as policymakers, community groups, or donors. To what extent can current tools provide tailored interpretability?

Abebe: This is something I have been thinking about a lot. We recently published a computational study of a large, longitudinal data set to identify whether income shocks can effectively predict which families will experience poverty. We compared this to the annual income-only markers the government uses and found that the shocks an individual experiences throughout the year—be it unexpected medical bills, loss of public benefits, being a victim of a crime, or the end of a romantic relationship—are as accurate at predicting whether that individual will experience poverty as the annual income-only markers.
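As a schematic of the kind of comparison being described, one could fit the same simple classifier to two feature sets, one using only annual income and one using shock information, and compare their predictive power on held-out data. The sketch below is not the study's data or pipeline; every distribution and parameter is synthetic and invented for illustration.

```python
# Schematic only: synthetic data and invented parameters -- not the study's
# data or methodology.  We generate a toy population in which both low income
# and income shocks raise poverty risk, then compare an income-only predictor
# with a shock-based predictor on held-out data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 20_000
annual_income = rng.lognormal(mean=10.4, sigma=0.6, size=n)   # toy annual incomes ($)
n_shocks = rng.poisson(lam=1.5, size=n)                       # shocks during the year
shock_cost = n_shocks * rng.gamma(2.0, 1500.0, size=n)        # toy dollar impact of shocks

# Toy outcome: poverty risk rises as income falls and as the shock burden rises.
logit = -1.5 - 0.00012 * annual_income + 0.0007 * shock_cost
poor = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

feature_sets = {
    "annual income only": annual_income.reshape(-1, 1),
    "shock information":  np.column_stack([n_shocks, shock_cost]),
}
for name, X in feature_sets.items():
    X_tr, X_te, y_tr, y_te = train_test_split(X, poor, test_size=0.3, random_state=0)
    model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)).fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name:20s} held-out AUC = {auc:.3f}")
```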

This is important because it means there are multiple interventions we can be looking at. For instance, rather than waiting until the end of the year when an individual has already fallen into poverty and, say, undergone the trauma of being evicted, we could use data on the income shocks they are experiencing throughout the year and intervene earlier so they do not fall into poverty in the first place. 

Omaar: Looking to the future, a lot of your ongoing work seems to be in education where there are many allocation problems to think about. Can you talk about the problems in education you are working on now and where you think the future of your work, and work in this area, is going?

Abebe: I never thought of myself as someone who works in education but somehow I keep coming back to it so there’s got to be something there! Currently I’m working with a team in Ethiopia and the Ethiopian Ministry of Education to think about how we assign students to universities. In Ethiopia, the Ministry assigns students to public universities based on students’ preferences and scores, a process that has been in place for a while. I myself went through this process, but back then the Ministry not only assigned your university but also assigned your major. Given the school I went to and the scores I was likely to get, I probably would have been assigned to engineering or medicine, but since I wanted to study mathematics I decided to go abroad for university.

The Ministry no longer assigns majors, but the process is still very difficult. Approximately 300,000 people from all over the country take the university entrance exams every year, and about half of them pass. These students come from many different regions, have many different ethnic backgrounds, speak many different dialects, and all need to be assigned to more than 40 universities in a way that is fair and creates equal access to resources. For example, the University of Addis Ababa is one of the best universities in the country, but it cannot be filled predominantly with students from Addis Ababa schools, as that would create inequitable access. This means we have to think about the assignment at both the individual and the national level.
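To make that tension concrete, here is a deliberately simplified sketch, and explicitly not the Ministry's actual mechanism: students are ordered by exam score and placed at their highest-ranked university that still has both an open seat and room under a toy per-region quota. The students, capacities, and quota rule are all invented.

```python
# Simplified sketch of a score-ordered assignment with a per-region quota.
# NOT the Ministry's actual mechanism; all inputs below are invented.
from collections import defaultdict

# Hypothetical inputs: (student, home region, exam score, ranked universities)
students = [
    ("Alem",   "Addis Ababa", 98, ["AAU", "Bahir Dar", "Hawassa"]),
    ("Biruk",  "Addis Ababa", 95, ["AAU", "Hawassa", "Bahir Dar"]),
    ("Chaltu", "Oromia",      93, ["AAU", "Bahir Dar", "Hawassa"]),
    ("Dawit",  "Amhara",      91, ["Bahir Dar", "AAU", "Hawassa"]),
    ("Eden",   "Tigray",      90, ["AAU", "Hawassa", "Bahir Dar"]),
]
capacity = {"AAU": 2, "Bahir Dar": 2, "Hawassa": 2}
REGION_CAP = 1   # toy rule: at most one admit per region at each university

assigned = {}
seats_used = defaultdict(int)                 # university -> admits
region_used = defaultdict(int)                # (university, region) -> admits

for name, region, score, prefs in sorted(students, key=lambda s: -s[2]):
    for uni in prefs:
        if seats_used[uni] < capacity[uni] and region_used[(uni, region)] < REGION_CAP:
            assigned[name] = uni
            seats_used[uni] += 1
            region_used[(uni, region)] += 1
            break
    else:
        assigned[name] = None                 # unassigned in this toy example

for name, uni in assigned.items():
    print(f"{name:7s} -> {uni}")
```

In this toy run, the quota sends a higher-scoring Addis Ababa student to their second choice while admitting a slightly lower-scoring student from another region to AAU, which is exactly the individual-versus-national tension the real design has to balance.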

This is not only an Ethiopian problem, though. In Cambridge, Massachusetts, where I am currently based, the city assigns students to public elementary schools in a similar way. It is not assigning students at the same scale as Ethiopia, but it still affects thousands of students and raises similar equity concerns. When I came here in 2009, I used to go to Cambridge public school committee meetings to learn about the inequality issues that exist in economically and racially segregated cities like Cambridge, and I began working on how to ensure school assignment does not exacerbate existing inequalities.

In part, I keep coming back to education because it is an important test case for understanding the broader issues we have been talking about, such as deciding who gets to set the objective function and how it is set. The other area I keep coming back to is poverty, because whether or not an individual lives in poverty ultimately determines their access to education, housing, and healthcare, and is therefore a root cause of many issues in these areas.

The last area I keep coming back to is data equality. Misrepresentation or underrepresentation in data sets can lead to invisibility, which can perpetuate or even amplify social and economic disparities. Consider, for example, maternal mortality. The United States has one of the highest maternal mortality rates among OECD countries and is one of the very few countries where these rates are going up instead of down. But U.S. maternal mortality is not the same across demographic groups or locations. In New York, Black women are 12 times more likely to die during childbirth than white women, while nationally Black women are 3 to 4 times more likely to die in childbirth than white women. What about the maternal mortality rates for Native American women? We do not know, because that data is not collected. Data equality is important if we are to effectively identify and address problems.
