
Europe’s GDPR Regulators’ AI Proposals Reveal Their Privacy Fundamentalism

by Benjamin Mueller

The EU’s two privacy watchdogs—the European Data Protection Board (EDPB) and the European Data Protection Supervisor (EDPS)—recently weighed in on the Artificial Intelligence Act (AIA), a proposal for an EU-wide legal regime for AI. The EU’s data privacy regulators argue that the law does not go far enough and want an even more restrictive approach, driven by a belief that data protection overrides all other civil rights that AI could yet advance. If accepted, the regulators’ proposals would further tighten the EU’s regulatory straitjacket—and impose steep economic costs—instead of allowing European businesses and consumers to take advantage of new opportunities posed by the emerging digital economy.

The two regulators insist that “future AI regulation should prohibit any type of social scoring”—i.e., rating systems—because it “can lead to discrimination and is against the EU fundamental values.” This claim disregards the many beneficial uses of social scoring. In its current form, the AIA permits social scoring by private entities when it serves a clear purpose. There are many such uses. For instance, driver and rider ratings in car-hailing apps protect all users by rooting out unacceptable and potentially dangerous behavior. Why shouldn’t riders know in advance whether their drivers are likely to be polite and respectful, and why shouldn’t unruly passengers be banned on the basis of feedback from drivers? The digital economy often features peer-to-peer interactions mediated by platforms. Scoring tools are vital safeguards against bad actors who undermine the trust that binds such ecosystems together. For European data regulators to demand a ban on all social scoring reflects a mindset that is alien to the realities of the modern digital economy.

The EDPB and EDPS also demand “a general ban on any use of AI for an automated recognition of human features in publicly accessible spaces.” Many uses of biometric identification are innocuous and offer a swathe of benefits with little or no risk to citizens. For example, the Berlin Zoo plans to use facial recognition to offer faster entry to season ticket holders, and airports already use e-passport gates to reduce border waiting times for weary travellers. The privacy regulators’ argument is based on the popular notion that AI for biometric identification spells “the end of anonymity” and paves the way towards a surveillance state. Why this should happen in Europe, of all places—with its deeply rooted culture of democracy and the rule of law—is never spelt out. It is not the technology per se that carries risks, but how, by whom, and for what purpose it is deployed. To declare that there are no legitimate uses at all for biometric recognition tools in the world’s freest and most democratic region is a form of privacy fundamentalism. It ignores that biometric recognition can be a boon to law enforcement and, by extension, enhance public safety, especially in acute crises like child abductions, terrorism, or public violence. Allowing law enforcement to use such tools, within well-defined guardrails and only in specific situations, does not give governments the right to establish a police state. Instead, it can help authorities find the proverbial needle in a haystack when public safety is at stake.

Finally, the EDPB and EDPS want a ban on using AI to categorize individuals “according to ethnicity, gender, as well as political or sexual orientation.” This restriction would prevent a shop, for example, from using AI to count how long it takes its associates to help male versus female customers. The very regulators who are supposed to stimulate the data economy seem unaware of how data relates to society: Categorization is neither necessary nor sufficient for discrimination. On the contrary, categorizing populations is an indispensable way to identify and root out discrimination. Shining the hot, bright light of data-driven empirics on the darker corners of society makes it easier to fight bias. In 1978, France passed a law that bans all collection and computerized storage of race-based data. As a result, the French state collects no census or other statistical data on ethnicity. It is laughable to pretend that this has in any way, shape, or form reduced racism in France. In fact, the country has a deeply entrenched racism problem, and last year its human rights ombudsman highlighted systematic discrimination against foreign-born citizens. Far from preventing discrimination, banning the collection of demographic data can entrench bias in society and make it more difficult to fight.

The positions put forward by the EDPB and EDPS demonstrate just how broken the EU’s approach to data protection is. The global economy is on the cusp of a digital renaissance powered by AI. To provide growth and prosperity for its citizens, Europe needs to participate in this digital transformation—something both the Commission and Parliament recognize. Digitization offers a chance to revive Europe’s struggling economy. But because the continent is straitjacketed by an inflexible and outmoded data protection law, it stands to fall further behind in global technological progress. In the context of AI, for instance, the GDPR’s focus on purpose limitation is antithetical to discovery, experimentation, and innovation.

More fundamentally, the EDPB and EDPS base all their reasoning on a single regulatory premise: data protection rights are absolute. The regulators seem willfully ignorant that data protection exists alongside other fundamental rights, such as liberty, security, and enterprise. To the outside observer, their recommendations seem to stem from a fundamental fear of technology and a belief that data protection should trump all other rights and values. Stopping useful data collection, biometric recognition, and social scoring undermines all these equally valid rights. The privacy watchdogs’ stance makes European citizens poorer and less safe, and unjustifiably shelters them from technological advances that could make their lives better.

At this point, rather than hoping that Europe’s data protection agencies will moderate their position, the best way forward is for the Commission to establish a European Data Innovation Board to advance the legitimate case for data analysis—a case that is, at the moment, completely voiceless in the EU’s executive and administrative arms. Sadly, no statutory body in the EU is supporting the cause of data-driven innovation and technological progress.


Photo by Dan Nelson on Unsplash
