Meta faces legal battle as noyb sends cease and desist over AI training
Privacy group demands opt-in consent as Meta plans to use EU user data from May 27.

Meta's plan to train its artificial intelligence systems using personal data from European Facebook and Instagram users without explicit consent has triggered a significant legal challenge that could potentially lead to billions in damages. Privacy advocacy group noyb (None Of Your Business) sent a formal cease and desist letter to Meta on May 14, 2025, just 13 days before the company plans to begin using EU personal data for AI training.
Meta announced it will process personal data from Instagram and Facebook users across the European Union beginning May 27 to train its new AI systems. Rather than requesting explicit opt-in consent, the company claims a "legitimate interest" under Article 6(1)(f) of the General Data Protection Regulation (GDPR) to process this information, offering users only the option to opt out.
The privacy dispute centers on a fundamental question: should users be asked for permission before their personal data is used for AI training, or can companies simply take that data by default?
Max Schrems, Chairperson of noyb, argues that Meta's approach is legally unsound: "The European Court of Justice has already held that Meta cannot claim a 'legitimate interest' in targeting users with advertising. How should it have a 'legitimate interest' to suck up all data for AI training?"
This approach is particularly problematic because it places the burden on users to take action to protect their privacy. While Meta provides an opt-out mechanism, privacy advocates argue this system is deliberately complicated and many users remain unaware of how their data will be used.
"Shifting the responsibility to the user is completely absurd," Schrems stated. "The law requires Meta to get opt-in consent, not to provide a hidden and misleading opt-out form. If Meta wants to use your data, they have to ask for your permission."
Legal challenges mount across Europe
The cease and desist letter from noyb represents just one front in a growing legal battle. The German consumer organization Verbraucherzentrale North Rhine-Westphalia (VZ NRW) has already announced its intention to seek a preliminary injunction against Meta in Germany. On May 13, the organization filed for an emergency injunction at the Cologne Higher Regional Court to stop Meta's data collection before it begins.
"With the application for an injunction, we want to prevent Meta from creating facts before the legal situation is clarified," explained Christine Steffen, lawyer and data protection expert at VZ NRW. "Once the data has been used for AI, a recall is hardly possible anymore - that's why quick action is needed now."
If noyb's cease and desist demand is rejected, the organization could pursue EU-wide injunctions as a Qualified Entity under the EU Collective Redress Directive. This legal status enables noyb to bring representative actions in courts across EU member states, not limited to Meta's EU headquarters in Ireland.
Legal experts note that Meta faces a significant risk because any injunction granted would not only stop the processing but could also require deletion of any AI systems trained with unlawfully obtained data. If EU data is mixed with non-EU data, the entire AI model might need to be deleted.
Potential class action and massive financial consequences
Beyond stopping the data processing, noyb has indicated it may pursue class action lawsuits for damages if Meta proceeds with its plans. Under the GDPR, affected individuals can claim non-material damages typically ranging from hundreds to thousands of euros per person.
With approximately 400 million monthly active Meta users in Europe, the financial risk is substantial. "If you think about the more than 400 million European Meta users who could all demand damages of just €500 or so, you can do the math," Schrems noted. "We are very surprised that Meta would take this risk just to avoid asking users for their consent."
Such damages could theoretically reach €200 billion ($224 billion) if successful, making this potentially one of the largest privacy cases in European history.
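The headline figure follows from simple multiplication. As a rough illustration only, the sketch below treats the €500 per-user amount and the euro-to-dollar conversion as assumptions taken from the article's framing, not as established damages or an official exchange rate:

```python
# Back-of-the-envelope estimate of the potential GDPR damages cited above.
# Per-user amount and exchange rate are illustrative assumptions only.
monthly_active_users_eu = 400_000_000   # approx. Meta users in Europe
damages_per_user_eur = 500              # hypothetical non-material damages per person
eur_to_usd = 1.12                       # assumed conversion rate

total_eur = monthly_active_users_eu * damages_per_user_eur
total_usd = total_eur * eur_to_usd

print(f"€{total_eur / 1e9:.0f} billion (≈ ${total_usd / 1e9:.0f} billion)")
# -> €200 billion (≈ $224 billion)
```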
Privacy experts identify several technical problems with Meta's approach. The company has previously argued that it cannot technically distinguish between EU and non-EU users in its social network, as many data points are interconnected. This raises questions about Meta's ability to properly implement user objections and separate special category data (like religious beliefs or political opinions) from regular personal information.
The processing of special category data, which requires explicit consent under Article 9 GDPR, is particularly problematic. Meta's social networks contain vast amounts of such sensitive information, and privacy advocates doubt the company can effectively filter this content from its AI training datasets.
Another significant issue concerns users' ability to exercise their "right to be forgotten" once their data has been used for AI training. Meta appears to be limiting users' right to object to an ex-ante right (before processing begins), whereas the GDPR provides for objection at any time, including after processing has started.
"Meta is basically saying that it can use 'any data from any source for any purpose and make it available to anyone in the world,' as long as it's done via 'AI technology,'" Schrems argues. "This is clearly the opposite of GDPR compliance."
Regulatory position remains unclear
The role of data protection authorities (DPAs) in this dispute appears ambiguous. While Meta claims to have "engaged" with EU regulators, noyb suggests that DPAs have largely stayed silent on the legality of AI training without consent.
Instead of taking enforcement action, many DPAs have simply informed users that they should opt out of Meta's AI training, effectively placing responsibility on individuals rather than the company. This approach has frustrated privacy advocates who believe regulatory authorities should be more proactive.
"As far as we have heard, Meta has 'engaged' with the authorities, but this hasn't led to any 'green light,'" Schrems stated. "It seems that Meta is simply moving ahead and ignores EU Data Protection Authorities."
Meta's response and justification
Meta defends its approach by comparing it to practices of other AI companies and highlighting the potential benefits of AI systems trained on European data. In a statement published on May 6, Semjon Rens, Meta's Public Policy Director for Germany, Austria, and Switzerland, argued that the training is essential for AI models to better understand German culture, language, and history.
Rens characterized the potential injunction as "a major setback for German consumers who want locally relevant AI technology, for German companies that want to build on AI models that understand local nuances, and ultimately for Germany's goal of maintaining a competitive position in the global AI race."
The company also claims that its approach follows guidance from the European Data Protection Board from December 2024 and that it has worked extensively with the Irish Data Protection Commission to ensure compliance.
Wider implications for AI development in Europe
This dispute highlights broader tensions between privacy protection and technological development in Europe. Meta argues that fragmented regulatory interpretations across Europe create barriers for AI innovation and deployment.
"Every company that wants to develop AI in Europe eventually encounters these hurdles," Rens noted, referencing the recent Draghi report on European competitiveness which warned that significant differences in the interpretation of EU regulations across the continent create enormous obstacles for companies operating throughout Europe.
The case also raises questions about the proper legal basis for AI training in the European market. While Meta argues that "legitimate interest" is sufficient, privacy advocates maintain that explicit consent is necessary, especially when processing personal data from social networks that may include sensitive information.
Why this matters
The outcome of this legal battle will have significant implications for digital marketing, especially as AI development increasingly relies on large datasets that may contain personal information.
Marketers should closely monitor how courts interpret the legal requirements for AI training data, as this will establish precedents for how companies can legally collect and process data for developing marketing AI tools. The case could potentially redefine the boundaries of permissible data usage in AI-powered marketing technologies.
If courts rule in favor of noyb and require explicit opt-in consent for AI training, this could significantly limit the data available to train AI systems and potentially create competitive disadvantages for companies operating in Europe compared to those in other markets with less stringent data protection requirements.
However, requiring explicit consent could also build consumer trust and potentially lead to higher-quality datasets from users who actively agree to share their data for specific purposes.
Timeline
- June 6, 2024: noyb files complaints with 11 European DPAs requesting an urgent procedure to stop Meta from using personal data for AI training
- June 14, 2024: Meta temporarily halts AI plans in the EU following legal challenges
- December 2024: European Data Protection Board issues guidance on AI training
- April 2025: Meta announces plans to restart AI training in the EU using public posts
- May 6, 2025: VZ NRW demands that Meta cease and desist from AI training in Germany
- May 13, 2025: VZ NRW files for preliminary injunction at Cologne Higher Regional Court
- May 14, 2025: noyb sends formal cease and desist letter to Meta
- May 21, 2025: Deadline for Meta to respond to noyb's cease and desist letter
- May 27, 2025: Date Meta plans to begin using EU personal data for AI training
Related stories
- Irish DPC launches Grok LLM training inquiry (April 13, 2025): Ireland's data watchdog investigates X's use of EU user data for AI training, examining GDPR compliance in the emerging field of generative AI data sourcing.
- Meta to use public posts to train AI models, users can opt out (April 2025): Meta revealed changes to its privacy policy allowing use of public posts and comments from users over 18 for AI training, with an opt-out deadline of May 27, 2025.
- Rethinking Meta Ads AI: Best practices for better results (May 13, 2025): Article discusses legal requirements for user consent in Meta's AI-powered advertising, noting that server-side tracking does not solve compliance issues with privacy regulations like GDPR.