
Opaque Poll That Negatively Frames AI Will Scare Europeans and Harm Progress

by Eline Chivot

BEUC, the self-proclaimed “consumer voice in Europe,” recently released a report arguing that its survey shows consumers do not trust AI and think it will be used to manipulate them. But BEUC has not shared the full survey results, and what it has shared suggests its questions were biased. Incomplete or misleading results not only risk generating over-regulation of AI, but they can also lead Europeans to fear AI, slowing its adoption across Europe.

There are at least four problems with BEUC’s survey. First, the survey does not provide any meaningful benchmarks for its questions. For example, the survey asks respondents whether they have experienced “bad service” from various services using automated decisions, but it does not ask whether respondents have received bad service from non-automated services. Similarly, the survey asks respondents whether users of AI products should be able to refuse automated decision-making, but does not ask about the potential implications of refusal, such as whether users would be willing to pay more for a non-automated service.

Second, the survey asks consumers to opine on technologies they have not used. For example, BEUC reports that 40 percent of consumers say they have experienced “bad service” with automated decisions used for customer support, but also notes that only about 15 percent of consumers have even used this AI-based service. And even though only 9 percent of respondents say they are “well-informed about AI,” the survey asks them to comment on whether existing legislation is adequate to regulate AI.

Third, BEUC refuses to release the full results of its survey to the public, sharing them only with its member organizations. As a result, it is not possible to examine the survey results in detail. The publication was partially funded by the EU’s Consumer Program, money intended to provide transparency to consumers, so it is inappropriate to hide the full results from the public, especially because such a lack of transparency contradicts the spirit of the EU’s open data policies.

Fourth, respondents’ low levels of trust in the protection of their privacy when using AI devices, such as in-home virtual assistants, as well as their lack of trust in authorities’ ability to effectively control AI, suggest that the GDPR is not working. One of the principal objectives of this law was to boost consumer trust (based on the untested assumption that this would lead to more digital use) by regulating how companies collect and use data. But rather than admit that the GDPR has failed to increase trust, BEUC recommends even more regulation.

If consumer organizations are truly interested in improving consumer welfare, they should aim to educate individuals about technology and share knowledge about its opportunities rather than fanning the flames of fear in the hope of generating more regulation. Indeed, with these types of publications, it is no wonder that consumers express some alarm. Distorted surveys risk turning consumers against the use of AI and encouraging alarmist views, at a time when broader support for technological innovation is critically needed for societies to thrive in the digital economy.

Image credits: Wikimedia Commons.
