
5 Q’s for Michael I. Jordan, Professor at the University of California, Berkeley

by Hodan Omaar

The Center for Data Innovation spoke with Michael I. Jordan, a professor at the University of California, Berkeley whose research spans the computational, statistical, cognitive, and social sciences. Jordan discussed how economic concepts can help advance AI as well as the challenges and opportunities of coordinating decision-making in machine learning. 

Hodan Omaar: The latest part of your career has focused on bringing economic principles into the blend of computer science and statistics that make up AI. Can you reflect on what the current AI landscape is missing and what markets and economic concepts like game theory can teach us about constructing intelligent systems? 

Michael I. Jordan: Much of the recent wave of activity in AI has been very concerned with learning from a data set and a single agent making a decision in a limited environment. Perhaps it’s a decision about what is in a visual scene, or a decision about what someone said, or a recommendation of some kind. Each of these systems and the decisions they make exist within the context of a bigger working system, and often in the context of other decision makers or other agents. This is largely the focus of economic theory, but machine learning models don’t consider this context as much as they should.

For example, if you’re a company serving millions of people on a large-scale site like Netflix or Amazon, it’s not much of an issue to recommend the same movie, or the same book, to hundreds of thousands of people because there is little to no scarcity in the world of virtual books and movies. But in the real world, there’s always scarcity: If a map application is recommending the fastest route to take to the airport and lots of people start using the app, it will likely recommend the same route to a large number of users and create congestion, making that route the slower way to get to the airport. Real recommendation systems are, of course, more complex, but the idea underpinning them is simple to understand, and the issue with their focus on optimizing a single agent’s behavior, rather than the larger system, becomes clearer through simple thought experiments.
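To make the congestion point concrete, here is a minimal Python sketch of the thought experiment; the travel-time model and user counts are purely hypothetical, chosen only to show how a per-user “fastest route” recommendation can undermine itself at scale:

```python
# Two routes to the airport, with made-up travel-time models:
# the highway slows down as more drivers use it, the back road does not.
def highway_minutes(drivers):
    return 10 + 0.1 * drivers   # congestible: 10 min empty, +0.1 min per driver

BACK_ROAD_MINUTES = 25          # fixed travel time, unaffected by load

# With only a few users, recommending the highway to everyone is clearly right.
print(highway_minutes(20))      # 12.0 minutes, much faster than 25

# But if the app sends all 500 of its users down the same "fastest" route,
# the recommendation makes that route the slower way to get to the airport.
print(highway_minutes(500))     # 60.0 minutes, now much slower than 25
```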

Ideally, an effective system would consider human preferences to deal with such scarcity problems. For instance, who is really in a hurry to get to the airport and who can afford to be given a slower route? Who is more risk averse? Or even, who might like to see the more scenic route? One school of thought rather naively believes that these preferences can be known by collecting enough data. But human preferences are so circumstantial, contextual, and in-the-moment that a system cannot simply look at past data and know exactly what an individual wants in a given moment. The mistake is believing that the role of recommendation systems is to find out everything about an individual’s preferences, a goal that one might have in the advertising domain. Instead, I think the role of such systems is to empower individuals to have choice and discover their preferences by creating a two-way market based on informational flows in both directions.

Market mechanisms are a partial way to start approaching these sorts of things. In the early days of information technology writ large, statistics, economics, computer science, and control theory were nearby fields that were developed by a lot of the same people. But over time, as the scope of these fields grew, they have become rather separate branches of study. The last wave of AI in particular has focused narrowly on learning systems that take in vast amounts of data. I think we need to bring more perspective from microeconomics and market design to give meaning and context to data flows as we envision further developments in AI.

Omaar: AI policy in the United States has been chiefly concerned with maintaining American leadership in AI to drive economic growth and keep the country competitive. In a recent op-ed in the Harvard Data Science Review, you suggest that the current public understanding of AI refers to a soup of different ideas—machine learning, data science, human-imitative AI—that do not all contribute to creating economic value equally. What impact, if any, does this have when one measures AI leadership?

Jordan: First, I’d like to speak to the scope of what AI is. Broadly, the field is a blend of computing, statistics, economics, and other social sciences. The goal is often to use this blend to build real-world systems that serve collections of people and may even be national or planetary in scope. Some examples of such systems include the medical system, transportation, or commerce. The challenge we face as a society is to figure out how to build such systems so that they deliver promised positive consequences while avoiding unintended negative consequences.

I like to analogize this challenge to the growth of the field of chemical engineering in the 1930s and 40s. Before this era, there was an understanding of small-scale laboratory chemistry and fluid flows, but there were no general principles for the design of large-scale chemical factories. Factories began to be built and in parallel an academic discipline emerged that provided an understanding of the thermodynamic, control-theoretic, and economic principles needed to build chemical plants at scale. Similarly, today we need to build on the basic concepts of computing and inference that have emerged during the past century and forge a new discipline that allows large-scale social systems to be envisaged and built, ensuring that these systems bring value to humans, and are fair, safe, and economically viable.

From this perspective, it is clear that further development of AI will involve not only industry and academia, but also government and other stakeholders. Moreover, AI will be a highly distributed phenomenon, with different countries specializing in different aspects of the overall problem that are appropriate to the context of their country. This notion of local context is critical. While computers and algorithms are generic and widely available, the challenges that AI aims to address are often local and contextual, and the datasets that are collected to address those challenges are local and contextual. In addition, a key aspect of real-world AI systems is the engineering talent needed to build and maintain these systems, and different countries will have local pools of engineering talent. Finally, such diversification will lead to opportunities for trade. In short, the idea that one particular country or company will dominate in AI is very far off the mark.

Omaar: Your work suggests that ML researchers and those in industry should not strive for autonomy as a goal, like autonomous drones, but rather integrated societal-scale information-technology systems, like air traffic control. Do you think that if this shift in focus were to happen we would be able to better quell current concerns the community is facing, like automation anxiety over economic displacement or privacy concerns of recommendation systems?

Jordan: I should start by saying I’m not against autonomous systems. I think that in a lot of cases autonomy is good, like drones that autonomously inspect nuclear disaster sites. But autonomy as the main goal has become a distracting focus of the current era. For me, the desire for autonomy in many instances comes from a desire to build AI systems that look intelligent all on their own: the classical philosophical ideal of AI research of showing that one can put intelligence into systems and make them indistinguishable from us. It’s an interesting goal, but I don’t think it should be viewed as the major goal for information technology in our era.

The aim should be to create a piece of technology that integrates well with other pieces of technology and is transparent, explainable, understandable, and responsive to everything around it. That is, it is the overall system which should behave well. Existing complex social systems and economies provide examples that will inform our design of AI systems. When they work well it isn’t because the participants are super-intelligent, it’s rather because each participant has just enough knowledge so that in the context of an overall system, desired behavior emerges. Other examples may be more centralized, but still partially distributed, such as the air traffic control system you mentioned in your question: The success of air traffic control is not because every plane is super intelligent, it’s that the overall system can guarantee planes won’t hit each other. 

Additionally, the purpose of AI should not only be to provide technology to people, but it should be able to create new connections and markets. My favorite example in this regard is music. Few of us are directly connected to the people who make the music and sounds we listen to. Technology has enabled the music of many young people to get out into the world, but a lack of efficient market mechanisms between producers and consumers means it’s very difficult for many musicians to extract much of the value they are adding to the creative commons. 

If, for example, data-driven innovation could enable these artists to see who listens to them and where they are, these musicians could reach out to venues in those areas, negotiate a price for a show, and increase the utility of all involved. Instead, an unhealthy model has emerged which streams their music and creates revenue through a separate advertising market. 

From this perspective, building market mechanisms on top of data flows could create new “intelligent markets” that currently do not exist. These sorts of markets would create jobs and unleash creativity. This healthier approach to AI integration in society could help alleviate some of the fears and anxieties that surround the technology. What’s more, this approach is more culturally aware and relevant, which matters to me as a person in technology and also in academia.

Omaar: Why should developers not solely aim for accuracy when developing algorithms that make decisions? How should they think about uncertainty and context?

Jordan: Accuracy is always relative: not only to the situational context, but also to the question you’re answering and to the so-called prevalence (how often the entity you’re looking for occurs in the population). You can have a system that’s very accurate in terms of false positives and false negatives, but if you look at all of the positive decisions the system makes, most of them could actually be wrong. For example, in COVID-19 data analysis there are antibody tests with very high accuracy in terms of false positives and false negatives. The issue is that the disease has low prevalence. If you isolate only the positive test results, there’s a good chance many of them are false. This means that even though an individual test might have high accuracy, the positive results taken as a group can still be mostly wrong.
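Jordan’s prevalence point can be illustrated with a short worked example. The following Python sketch uses made-up sensitivity, specificity, and prevalence figures (not numbers from the interview) to compute what fraction of positive results are actually true positives:

```python
# Illustrative numbers only: a test with 99% sensitivity and 98% specificity
# applied to a population where the condition has 0.5% prevalence.
sensitivity = 0.99   # P(test positive | infected)
specificity = 0.98   # P(test negative | not infected)
prevalence = 0.005   # P(infected)

true_positives = sensitivity * prevalence
false_positives = (1 - specificity) * (1 - prevalence)

# Positive predictive value: of all the positive results, how many are real?
ppv = true_positives / (true_positives + false_positives)
print(f"Share of positives that are true positives: {ppv:.1%}")
# With these assumed numbers, only about 20% of positives are genuine,
# even though the test itself looks highly accurate.
```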

A problem with the current machine learning literature is that there is too great a focus on standard data sets and accuracy measures, assuming that getting these numbers as high as possible will solve many of the problems the community faces. But it won’t. It doesn’t address many of the real-world problems that emerge when systems are deployed and face new situations or novel data. What matters is the context of the overall decisions being made. Talking about accuracy, or any single measure like “fairness,” in isolation is overly limiting. We always have to talk about the overall system.

Omaar: Reconceptualizing AI within the context of evolving societal, ethical, and legal norms will require a multidisciplinary approach. Do you find that economists are appropriately engaged? 

Jordan: Historically, economics, statistics, and computer science were very intertwined. Some of the heroes of statistics, like Abraham Wald, were also heroes of economics. David Blackwell is a hero of statistics as well as economics and computer science, and so on. But today, these fields are less integrated, partly I think because the problems have gotten increasingly challenging. Each area needed to focus its resources to tackle one problem at a time. The consequence is that mainstream economists are not very engaged in tackling the issues in the academic machine learning community, though they really should be.

But there’s a different tale in industry. If you go to a company like Uber, Netflix, or Amazon you will certainly see a multidisciplinary team working together on a challenging problem. It would not be unusual to find a computer scientist, statistician, economist, public policy analyst, and lawyer, all sitting together to tackle a problem. 

At Berkeley, we have designed brand new data science classes at the undergraduate level, even for incoming freshmen, which sit astride disciplines in this way. Even though it is a data science class, students might be solving statistical problems using econometric data related to a legal issue or a justice issue. For instance, one of the problems that we look at in the first class is whether the ethnic composition of juries in Alameda County is the same as the ethnic composition of the region. I find this sort of approach helps students feel empowered by the ability to build systems using algorithms (that’s the computer science part), to have it be rigorous (that’s the statistics part), and to have it be meaningful in the real world (that’s perhaps a blend of many other fields). Ultimately, I think it is really important to rethink education to ensure that problems are being addressed in their appropriate context.
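As a rough illustration of the kind of exercise Jordan describes, here is a minimal Python sketch of a simulation-based comparison; the panel size, population shares, and observed jury composition below are placeholder values, not the actual Alameda County data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical demographic shares for the region (placeholder values).
population_shares = np.array([0.15, 0.18, 0.12, 0.55])
# Hypothetical observed composition of actual jury panels.
observed_shares = np.array([0.08, 0.12, 0.08, 0.72])

panel_size = 100          # jurors per simulated panel
num_simulations = 10_000

def tvd(a, b):
    """Total variation distance between two categorical distributions."""
    return np.abs(a - b).sum() / 2

observed_distance = tvd(observed_shares, population_shares)

# Simulate panels drawn at random from the regional population and record
# how far each simulated panel drifts from the population by chance alone.
simulated_distances = np.empty(num_simulations)
for i in range(num_simulations):
    counts = rng.multinomial(panel_size, population_shares)
    simulated_distances[i] = tvd(counts / panel_size, population_shares)

# If the observed distance is far beyond what random sampling produces,
# the panels do not look like random draws from the region.
p_value = (simulated_distances >= observed_distance).mean()
print(f"Observed distance: {observed_distance:.3f}, p-value: {p_value:.4f}")
```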

 
