The European Parliament has disabled built-in AI features on corporate tablets and phones issued to MEPs and staff, citing concerns that the data these features send to cloud services poses a security risk. This decision is misguided: it does not address security risks, drives AI use into the shadows, disrupts everyday productivity tools, and imposes disproportionate costs on the Parliament’s smaller delegations.
There are at least four reasons why the European Parliament should not have blocked these AI features.
First, the decision bypasses the EU’s existing security assessment framework rather than applying it to the services at issue. The European Parliament should review cloud services through a risk-based system that evaluates safeguards, compliance controls, and technical standards. Data processing agreements, security certifications, and vendor audits all provide structured mechanisms to test and verify security.
For example, the EU Cloud Code of Conduct, endorsed by the European Data Protection Board, provides a framework for cloud providers to demonstrate GDPR compliance, and the EU Agency for Cybersecurity’s (ENISA) certification scheme for cloud services standardises cloud security assessments across the bloc. Major technology vendors process sensitive government, financial, and healthcare data under these frameworks. The relevant question is whether a specific service meets established security benchmarks, not whether it incorporates AI.
Second, restricting built-in AI tools on corporate devices does not stop MEPs and staff from using them. It pushes that usage onto personal devices and third-party apps outside any institutional oversight, trading a manageable, auditable risk for an invisible one. The Center for Data Innovation’s recently released Public Sector AI Adoption Index found that when government agencies do not provide AI tools, enthusiastic public sector workers will use the technology anyway, without their employer’s knowledge.
Third, mandates to avoid built-in AI features that scan or analyse data will likely sweep in a variety of common features, such as proofing tools, predictive text, and accessibility aids. Developers build modern business applications specifically to run within cloud environments, and turning off these features fundamentally limits what those applications can offer.
Fourth, these restrictions have real costs. The European Parliament operates across 24 official languages and 552 possible language combinations. AI writing and translation assistants are force multipliers. Well-resourced party groups and delegations may more easily absorb the loss of AI productivity tools, but smaller delegations with fewer staff cannot. A blanket ban on built-in AI features does not just affect internal productivity; it degrades the speed and quality of service to the public. Citizens contacting their MEP expect a timely, substantive response; they do not care how it was drafted.
If history is a guide, the European Parliament’s restrictions on built-in AI tools might not be temporary. In 2023, the institution banned TikTok from staff devices over similar security concerns, and that ban remains in effect. The risk is that this restriction becomes a permanent exclusion rather than a catalyst for developing and implementing risk mitigation measures.
The European Parliament should immediately reverse this policy. If the European Parliament disables built-in AI features, it signals to every public sector body in Europe, as well as consumers and companies, that the best option is to switch off AI rather than manage it responsibly. It is hard to credibly champion AI innovation while switching it off inside the institution that helped write the rules.
Image credit: European Parliament/Flickr
