Commission releases AI Act guidelines and Meta won't sign code of practice

European Commission publishes comprehensive AI model guidelines and Meta announces it won't sign the AI code of practice, citing legal uncertainties.

European Commission building with EU flag and AI Act digital overlay graphics representing new AI regulation guidelines for tech companies.

The European Commission released comprehensive guidelines on July 18, 2025, clarifying obligations for providers of general-purpose artificial intelligence models under the EU AI Act. The document, numbered C(2025) 5045 final, defines the scope of those obligations as compliance requirements enter into application on August 2, 2025.

The 36-page framework targets four key areas: model classification criteria, provider identification, open-source exemptions, and enforcement procedures. The guidelines establish specific technical thresholds for determining when an AI model qualifies as a general-purpose system subject to EU regulation.

According to the Commission's indicative criterion, a model qualifies as a general-purpose AI model when its training compute exceeds 10²³ floating-point operations and it can generate language, text-to-image, or text-to-video content. This threshold corresponds to the computational resources typically required to train a model with approximately one billion parameters on large datasets.

The guidelines specify that "training compute has the advantage of combining number of parameters and number of training examples into a single number that is reasonably straightforward for providers to estimate." This approach builds on recital 98 of the AI Act, which identifies models with at least one billion parameters as displaying significant generality across tasks.
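
To see how parameters and training data collapse into that single number, consider the widely used approximation of roughly 6 × N × D FLOP for dense transformer training, where N is the parameter count and D the number of training tokens. The Python sketch below is a hypothetical illustration, not a method prescribed by the guidelines, and the compute criterion alone does not settle classification, which also depends on the model's generative capabilities.

```python
GPAI_THRESHOLD_FLOP = 1e23  # indicative criterion from the guidelines

def estimate_training_flop(parameters: float, training_tokens: float) -> float:
    """Rough architecture-based estimate: ~6 FLOP per parameter per token."""
    return 6 * parameters * training_tokens

# Hypothetical example: a 1-billion-parameter model trained on 20 trillion tokens
flop = estimate_training_flop(1e9, 2e13)
print(f"Estimated training compute: {flop:.2e} FLOP")                 # 1.20e+23
print(f"Exceeds indicative threshold: {flop > GPAI_THRESHOLD_FLOP}")  # True
```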

For models exceeding 10²⁵ floating-point operations in cumulative training compute, additional systemic risk obligations apply. Providers must notify the Commission within two weeks when models meet or are expected to meet this threshold. The notification requires detailed technical documentation, including the estimated amount of compute and the methodology used to calculate it.
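
Taken together, the two indicative thresholds suggest a simple tiering logic. The sketch below is an illustration under those thresholds, not an official test; the function and field names are hypothetical, and classification in practice also weighs capabilities and other evidence beyond raw compute.

```python
from dataclasses import dataclass

GPAI_THRESHOLD_FLOP = 1e23           # indicative general-purpose criterion
SYSTEMIC_RISK_THRESHOLD_FLOP = 1e25  # cumulative-compute presumption

@dataclass
class Classification:
    general_purpose: bool
    systemic_risk: bool
    notify_commission: bool  # due within two weeks of (expected) threshold crossing

def classify_model(cumulative_flop: float, generates_content: bool) -> Classification:
    """Hypothetical tiering based on the guidelines' indicative thresholds."""
    gpai = generates_content and cumulative_flop > GPAI_THRESHOLD_FLOP
    systemic = cumulative_flop > SYSTEMIC_RISK_THRESHOLD_FLOP
    return Classification(gpai, systemic, notify_commission=systemic)

print(classify_model(3e25, generates_content=True))
# Classification(general_purpose=True, systemic_risk=True, notify_commission=True)
```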

The Commission addresses lifecycle management requirements throughout model development phases. According to the guidelines, "the Commission considers the lifecycle of a general-purpose AI model to begin at the start of the large pre-training run." All subsequent development activities performed by the provider constitute part of the same model lifecycle rather than creating new models.

Documentation obligations under Article 53(1) require providers to maintain current technical information for downstream providers and regulatory authorities. The guidelines specify that this documentation must be "drawn up for each model placed on the market and kept up to date throughout its entire lifecycle."

Copyright compliance receives significant attention in the framework. Providers must implement policies addressing Union copyright law throughout the model lifecycle. The guidelines require "identifying and complying with a reservation of rights expressed under Article 4(3) of Directive (EU) 2019/790."

The Commission establishes specific conditions for open-source exemptions from certain transparency obligations. Models released under free and open-source licenses may qualify for exemptions provided they allow access, usage, modification, and distribution without commercial restrictions. However, these exemptions do not apply to models classified as having systemic risk.

The guidelines clarify that monetization strategies disqualify models from open-source exemptions. Dual licensing approaches requiring payment for commercial usage constitute monetization under the framework. Additionally, collecting personal data for model access represents a form of monetization unless strictly limited to security purposes without commercial gain.
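
Read as decision logic, these conditions combine into a single eligibility check. The sketch below is an illustrative reading with hypothetical parameter names, not a legal test.

```python
def qualifies_for_open_source_exemption(
    allows_access: bool,
    allows_use: bool,
    allows_modification: bool,
    allows_distribution: bool,
    monetized: bool,               # e.g. a paid commercial tier under dual licensing
    collects_personal_data: bool,  # beyond what is strictly needed for security
    has_systemic_risk: bool,
) -> bool:
    """Illustrative check of the exemption conditions described above."""
    if has_systemic_risk:
        return False  # the exemption never applies to systemic-risk models
    if monetized or collects_personal_data:
        return False  # any form of monetization disqualifies the model
    return all([allows_access, allows_use, allows_modification, allows_distribution])
```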

Downstream modification thresholds establish when actors modifying existing models become subject to provider obligations. The Commission sets an indicative criterion of modification training compute exceeding one-third of the original model's training compute. For models where this information is unavailable, alternative thresholds apply based on the original model's systemic risk classification.
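
In code, the indicative one-third criterion reduces to a single comparison; the figures in this sketch are hypothetical.

```python
def modifier_becomes_provider(modification_flop: float,
                              original_training_flop: float) -> bool:
    """Indicative criterion: modification compute exceeding one third
    of the original model's training compute."""
    return modification_flop > original_training_flop / 3

# Hypothetical: fine-tuning with 5e22 FLOP on a model originally trained with 1.2e23 FLOP
print(modifier_becomes_provider(5e22, 1.2e23))  # True, since 5e22 > 4e22
```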

The framework addresses enforcement mechanisms beginning August 2, 2025. The AI Office will supervise compliance through a "collaborative, staged, and proportionate approach." Enforcement powers include information requests, model evaluations, mitigation measures, and fines up to 3% of global annual turnover or €15 million, whichever is higher, starting August 2, 2026.
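
The fine ceiling itself is a simple maximum of the two amounts; the turnover figure in this sketch is hypothetical.

```python
def maximum_fine_eur(global_annual_turnover_eur: float) -> float:
    """Cap on fines: 3% of global annual turnover or EUR 15 million,
    whichever is higher."""
    return max(0.03 * global_annual_turnover_eur, 15_000_000)

# Hypothetical provider with EUR 2 billion in global annual turnover
print(f"EUR {maximum_fine_eur(2_000_000_000):,.0f}")  # EUR 60,000,000
```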

For models placed on the market before August 2, 2025, providers have until August 2, 2027, to achieve compliance. The Commission acknowledges implementation challenges, noting that "providers may face various challenges to comply with their obligations under the AI Act by 2 August 2027."

The guidelines establish detailed methodologies for estimating training compute. Providers may use hardware-based approaches that track GPU usage or architecture-based methods that estimate operations directly. Either way, estimates must be accurate to within a 30% margin.
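
A hardware-based estimate typically multiplies installed peak throughput by realized utilization over the training duration. The sketch below illustrates that arithmetic with hypothetical figures; a provider would need to document and justify assumptions such as the utilization rate to stay within the 30% margin.

```python
def hardware_based_flop(num_gpus: int,
                        peak_flop_per_second: float,
                        utilization: float,
                        training_seconds: float) -> float:
    """Estimate training compute from hardware usage records."""
    return num_gpus * peak_flop_per_second * utilization * training_seconds

# Hypothetical run: 1,024 GPUs at ~1e15 FLOP/s peak, 40% utilization, 30 days
estimate = hardware_based_flop(1024, 1e15, 0.40, 30 * 24 * 3600)
print(f"Estimated training compute: {estimate:.2e} FLOP")  # ~1.06e+24
```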

Code of practice adherence offers a streamlined compliance pathway. Providers adhering to an adequate code of practice can demonstrate compliance with their obligations under Articles 53(1) and 55(1). The Commission may approve codes through implementing acts, giving them general validity within the Union.

Systemic risk assessment requirements apply to models exceeding the 10²⁵ FLOP threshold. Providers must "continuously assess and mitigate systemic risks," including cybersecurity protection, throughout the model lifecycle. The framework defines serious incidents by reference to Article 3(49) of the AI Act, which covers incidents or malfunctions of AI systems leading to specified harms.

The Commission addresses specific scenarios for integrated AI systems. When providers integrate their own general-purpose models into AI systems made available on the market, both model and system obligations apply. For downstream integration scenarios, upstream actors remain responsible for model obligations unless they explicitly exclude Union market distribution.

Worked examples illustrate how the threshold applies across different model types. The guidelines cite eight reference models with parameter counts from 600 million to 3.8 billion and training compute from 7.5 × 10²² to 6.5 × 10²³ FLOP, situating the 10²³ FLOP general-purpose threshold among realistic training runs.

The guidelines were informed by a multi-stakeholder process, including a public consultation held from April 22 to May 22, 2025. The European Artificial Intelligence Board provided input on June 30, 2025, incorporating expertise from the Joint Research Centre's pool of experts.

The framework acknowledges technological evolution requiring periodic updates. The Commission will review guidelines based on practical implementation experience, enforcement actions, and Court of Justice interpretations. This living document approach ensures continued relevance as AI capabilities advance.

Industry response to the guidelines has been mixed, with significant opposition from major technology companies. Meta announced it would not sign the European Commission's Code of Practice for general-purpose AI models, citing legal uncertainties and measures extending beyond the AI Act's scope. According to Joel Kaplan, Meta's Chief Global Affairs Officer, "Europe is heading down the wrong path on AI. We have carefully reviewed the European Commission's Code of Practice for general-purpose AI (GPAI) models and Meta won't be signing it."

Kaplan detailed specific concerns about the regulatory framework, stating "This Code introduces a number of legal uncertainties for model developers, as well as measures which go far beyond the scope of the AI Act." Meta's opposition centers on what the company views as regulatory overreach that exceeds the original legislative intent of the AI Act.

The company argues that the implementation approach threatens European competitiveness in artificial intelligence development. Kaplan warned that the regulations "will throttle the development and deployment of frontier AI models in Europe, and stunt European companies looking to build businesses on top of them." This position reflects broader concerns about Europe's ability to compete with other regions in AI innovation.

Meta's stance aligns with widespread industry criticism of the Commission's approach. According to Kaplan, "Businesses and policymakers across Europe have spoken out against this regulation. Earlier this month, over 40 of Europe's largest businesses signed a letter calling for the Commission to 'Stop the Clock' in its implementation." The company shares concerns that regulatory overreach will harm both AI development and European businesses dependent on AI technologies.

The opposition highlights tensions between regulatory oversight and technological innovation as European authorities attempt to balance safety requirements with competitive positioning in the global AI market.

For organizations developing or deploying general-purpose AI models in the EU market, these guidelines provide essential clarity on compliance requirements. The specific technical thresholds and procedural requirements offer concrete implementation pathways as August 2025 enforcement approaches.

The guidelines represent a significant milestone in global AI regulation, establishing the world's first comprehensive framework for general-purpose AI model governance. As regulatory frameworks continue evolving worldwide, the EU's approach may influence international standards for AI oversight and compliance.

Why this matters for marketing

These EU guidelines create the first comprehensive regulatory framework for AI models powering marketing technologies. For the advertising industry, the implications extend beyond immediate compliance requirements to fundamental shifts in how AI-powered tools can be developed and deployed.

PPC Land has extensively covered the intersection of AI regulation and marketing technology, particularly as 68% of marketers plan to increase social media spending in 2025 and generative AI emerges as the leading consumer technology trend. The documentation requirements under these guidelines will influence how AI-powered advertising tools integrate with major platforms.

Marketing organizations using general-purpose AI models for content creation, customer targeting, or campaign optimization must now demonstrate compliance with detailed transparency and safety requirements. The regulatory landscape continues evolving as marketing leaders seek adaptive AI solutions while navigating new compliance obligations.

Companies integrating AI models into marketing platforms face specific considerations around the downstream modification thresholds. When marketing technology providers modify general-purpose models for advertising applications, they may become subject to provider obligations depending on the computational resources used for modification.

The copyright compliance requirements particularly impact marketing applications that generate creative content. Models used for advertising copy generation, visual content creation, or social media automation must implement policies addressing EU copyright law throughout their operational lifecycle.

Key terms explained

General-purpose AI models: These are artificial intelligence systems trained on vast datasets that can perform multiple tasks across different domains, unlike specialized AI designed for single functions. In marketing, these models power tools for content generation, customer service chatbots, predictive analytics, and campaign optimization. Examples include large language models that can write advertising copy, analyze customer sentiment, and generate creative assets across multiple channels.

Training compute (FLOP): Floating-point operations measure the computational resources required to train AI models, expressed in scientific notation like 10²³ FLOP. Marketing teams need to understand these thresholds because they determine regulatory obligations for AI tools. Models exceeding 10²³ FLOP face transparency requirements, while those above 10²⁵ FLOP must implement systemic risk management, affecting how marketing technologies can be developed and deployed.

Downstream modification: This occurs when companies take existing AI models and adapt them for specific marketing applications, such as fine-tuning a general language model for brand-specific content generation. The guidelines establish that modifications using more than one-third of the original model's training compute create new provider obligations, potentially requiring marketing technology companies to comply with full regulatory requirements.

Systemic risk obligations: Advanced AI models with capabilities that could significantly impact markets or society face additional safety requirements including continuous risk assessment, incident reporting, and cybersecurity measures. For marketing applications, this affects large-scale personalization systems, automated bidding platforms, and AI tools that influence consumer behavior at scale across multiple markets.

Open-source exemptions: AI models released under free and open-source licenses may qualify for reduced regulatory obligations, provided they allow unrestricted access, usage, modification, and distribution without monetization. Marketing teams considering open-source AI tools should understand that exemptions don't apply to models with systemic risk, and any commercial restrictions disqualify the exemption.

Provider obligations: Organizations that develop or place AI models on the EU market must comply with documentation, transparency, and safety requirements. Marketing technology vendors integrating AI capabilities need to determine whether they qualify as providers, which depends on factors like model modification extent, market placement activities, and control over model development processes.

Code of practice adherence: Voluntary compliance frameworks allow AI providers to demonstrate regulatory compliance through standardized measures rather than developing custom approaches. Marketing organizations can reduce compliance costs by implementing AI tools from providers who adhere to approved codes of practice, though this requires ongoing monitoring of provider compliance status.

Lifecycle management requirements: AI models must maintain compliance throughout their entire operational period, from initial training through deployment, updates, and retirement. Marketing teams using AI tools must ensure their providers maintain current documentation, copyright compliance, and safety measures throughout the model's active use, not just at initial deployment.

Copyright compliance policies: AI providers must implement procedures ensuring training data respects intellectual property rights, including identifying and following rights reservations expressed through protocols like robots.txt. Marketing applications using AI for content generation must verify their tools comply with copyright requirements to avoid legal liability for generated materials.
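
As a minimal sketch, an ingestion pipeline might honor robots.txt-style reservations as follows; the crawler token and URLs are hypothetical, and robots.txt is only one machine-readable way a rights reservation can be expressed.

```python
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")  # hypothetical site
rp.read()

CRAWLER_TOKEN = "ExampleAIBot"  # hypothetical training-data crawler user agent
url = "https://example.com/article"
if not rp.can_fetch(CRAWLER_TOKEN, url):
    print(f"Rights reservation detected: exclude {url} from training data")
```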

Cumulative training compute: This measurement includes all computational resources contributing to model capabilities, from initial pre-training through fine-tuning and synthetic data generation. Marketing organizations need to understand this concept because it determines which regulatory category their AI tools fall under, affecting everything from documentation requirements to safety obligations and compliance costs.
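
As a hypothetical illustration, the cumulative figure simply aggregates the phases that count toward the total:

```python
# Hypothetical phase breakdown; all compute contributing to model
# capabilities counts toward the cumulative total.
phases_flop = {
    "pre_training": 9.0e24,
    "fine_tuning": 4.0e23,
    "synthetic_data_generation": 8.0e23,
}

cumulative = sum(phases_flop.values())
print(f"Cumulative training compute: {cumulative:.2e} FLOP")    # 1.02e+25
print(f"Exceeds systemic-risk threshold: {cumulative > 1e25}")  # True
```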

Summary

Who: The European Commission's AI Office published guidelines affecting providers of general-purpose AI models, downstream modifiers, and organizations using AI systems across the European Union. Nearly 1,000 stakeholders contributed to the development process through multi-stakeholder consultations.

What: Comprehensive guidelines establishing the scope of obligations for general-purpose AI model providers under the EU AI Act, including specific technical criteria for model classification, transparency requirements, copyright compliance, systemic risk management, and enforcement procedures.

When: The guidelines were published on July 18, 2025, with AI Act obligations for general-purpose AI models entering into application on August 2, 2025, and enforcement powers becoming effective one year later on August 2, 2026.

Where: The guidelines apply throughout the European Union market, affecting any organization that develops, modifies, or places general-purpose AI models on the EU market, regardless of whether they are established within the Union or in third countries.

Why: The guidelines aim to provide legal certainty for AI value chain actors while ensuring high levels of protection for health, safety, and fundamental rights as general-purpose AI models play increasingly significant roles in innovation and AI system integration across the Union.