Some fear that governments may use AI to create “social scores”—risk profiles of individuals based on surveillance of their behavior, such as criminal or financial activities—and then penalize low-scoring citizens by denying them access to essential public services. In response, the EU’s proposed AI Act would prohibit public authorities from implementing social scoring systems. However, the Council of the EU is now pushing to extend the AI Act’s ban on social scoring to the private sector, a change that would hurt consumers.
Much of the concern about social scoring stems from the myth that China has already implemented such a system. In reality, there is no countrywide, AI-powered Big Brother in China: local experiments did not gain traction, and a U.S.-commissioned report found no cases in which an AI system decided to sanction citizens. Moreover, in specific domains such as traffic policing, AI can monitor citizens more fairly and objectively than humans can. Nevertheless, digital rights activists want to forestall a dystopian future in the EU in which authoritarian regimes could use data about citizens’ school performance or voting records to determine their access to healthcare or welfare benefits. Consequently, the initial version of the AI Act bans public authorities from using AI to rank citizens.
The Council is now pushing to extend the ban to the private sector. Recital 17 of its latest version of the AI Act states that AI systems providing public or private social scoring “may violate the right to dignity and non-discrimination and the values of equality and justice.” Proponents of a ban on private social scoring argue that private companies could use such scores to unfairly discriminate against individuals.
But these critiques, and the proposed ban for the private sector, overlook the fact that many companies already use scores based on a wide range of data to assess creditworthiness, evaluate employees, and remove hateful content, to the benefit of users. Twitch, a streaming platform, bans users who commit offline offenses; Match Group, a conglomerate of dating brands including Tinder, implements cross-brand bans for user behavior unrelated to dating. Gig economy workers, too, rely on a form of social scoring to attract clients. Conversely, “social currency” encourages prosocial behavior online. Games, for example, use reputation systems to incentivize good behavior (something that will become more important as avatars roam the metaverse). Scoring also protects consumers against misleading advertising and better informs choices about ride-hailing, accommodation hosting, and food delivery.
Further, safeguards already exist on how some sectors use social scoring. Private-sector scoring practices such as profiling are addressed in existing Union legislation, notably the Digital Services Act. Banning these already-regulated practices through the AI Act would be redundant.
A ban on social scores for the private sector would ultimately hurt consumers, businesses, and the economy. Companies should be allowed to use AI to profile, incentivize, and evaluate users to improve their services and create more positive experiences; customers should be free to vote with their wallets.
Image credit: Jared Rodriguez