The European Union’s proposed AI Act would prohibit AI systems that use “subliminal techniques,” based on unfounded fears that AI will unleash technology-enabled mind control. Article 5 of the AI Act prohibits AI systems that use “subliminal techniques beyond a person’s consciousness in order to materially distort a person’s behavior in a manner that causes or is likely to cause that person or another person physical or psychological harm.” Such a ban is not only unnecessary, given other legal safeguards, but it would negatively impact the development and adoption of legitimate applications of AI in Europe.
Fears of subliminal stimuli, including audio frequencies, shades of light, or smells that humans cannot perceive, are not new. The idea that subliminal stimuli could influence people took hold in 1957, when market researcher James Vicary hid a “Hungry? Eat Popcorn” message (Figure 1) in a film and claimed it led to a 57 percent increase in popcorn sales. Although the whole experiment turned out to be a hoax, it created an enduring belief in subliminal advertising. Paranoia about hidden psychological influences also emerged during the Cold War’s brainwashing scare, and fears that Satanic organizations were “backmasking” subversive messages into pop music fueled the Satanic panic of the 1980s.
Figure 1: The message that flashed for 1/3,000 of a second during the movie Picnic
There is no consensus that subliminal techniques work. A 2004 meta-analysis found that the effect of subliminal techniques was not statistically significant, and more recent research shows that subliminal stimuli can, at best, bring forth already-intended actions. There is also no consensus on what “subliminal” means or whether all forms of consumer manipulation are in principle subliminal.
Despite this lack of consensus, the EU has already passed other legislation protecting consumers from subliminal manipulation, so it is unclear why the EU needs to regulate AI in this regard. For example, the EU updated the Audiovisual Media Services Directive in 2018 to include in Article 9 a requirement that “audiovisual commercial communications shall not use subliminal techniques.” And, in response to concerns that AI is powering new forms of manipulation online, the Digital Services Act, passed this year, regulates the use of “dark patterns,” described in the legislation as “practices that materially distort or impair…the ability of recipients of the service to make autonomous and informed choices or decisions.” Likewise, the Commission’s Margrethe Vestager warned that an AI-powered toy might use subliminal techniques “to manipulate a child into doing something dangerous” (although it is not clear why a toy company would want a child to do something dangerous), but exploiting the vulnerabilities of specific groups, including children, is already outlawed elsewhere in the AI Act (Article 5(1)(b)).
Some critics argue that AI will use techniques that weaken people’s “deliberative autonomy” in novel ways and prime people to make certain choices. But this is no different from humans who use persuasion or suggestion (e.g., nudging) to shape the behavior of others. For example, salespeople use body language to communicate imperceptible messages, and mentalists prime their audiences and narrow their choice architecture. Other subliminal techniques, like hidden messages in logos (Figure 2), are imperceptible only until pointed out.
Figure 2: The smile in Amazon’s logo from a to z, the hidden arrow in FedEx’s
The prohibition on subliminal techniques is also highly subjective because perceptual thresholds vary greatly between individuals and can change over time or through experience. An individual’s perception of potentially subliminal techniques may vary based on age, health, or other factors, so it is impossible to establish universal thresholds. Attributing “physical or psychological harm” to such techniques is similarly fraught.
In his book on the subliminal effects of AI, Rostam Neuwirth admits that currently it is difficult “to point to or even conceive of a concrete AI system that deploys subliminal techniques.” Still, he argues, the regulation raises awareness about the potential dangers of subliminal techniques and urges caution in the development of future technologies.
The AI Act’s highly precautionary approach, an outright ban on these techniques, will hurt EU consumers by deterring AI developers from the EU market. Consider possible uses of subliminal techniques:
- A humanoid robot that uses body language to convey friendliness
- A browser add-on that uses certain light waves to track and improve a user’s attention
- A spa treatment that uses AI-generated scents to calm clients
- A smartwatch that encourages users to exercise longer with motivational visuals
- A dieting wearable that uses imperceivable vibrations to reduce the sensation of hunger
- A meditation earpiece that subtly stabilizes the user’s sense of movement
- AI-assisted therapy that probes subliminal stimuli to resurface memories in a patient
In other settings, or if misused, these techniques could cause discomfort and injury or encourage unhealthy choices and habits. Given this risk, companies should disclose the use of subliminal techniques to their customers, so that customers have the freedom to choose whether to use the product or service.
In some cases, disclosure is inappropriate (e.g., for immersive experiences) or impractical (e.g., where companies cannot easily restrict consumption). The EU should instead assign systems that use “undisclosed subliminal techniques” (those that “materially distort a person’s behaviour without disclosing its use to that person”) to the so-called “high-risk” category. Rather than being prohibited, these systems would have to undergo conformity assessments, comply with transparency requirements, and be subject to post-market monitoring. Regulators would gain a better understanding of potential use cases, and innovators could continue supplying the EU market.
It makes little sense to prohibit something that is so poorly understood. Fears about AI puppetmasters should not cloud EU policymaking.
Image credit: Noah Buscher