EU clarifies AI model thresholds in new regulatory guidelines
Commission sets 10²³ FLOP computational benchmark for general-purpose model classification under AI Act enforcement framework.

The European Commission released detailed implementation guidelines on July 18, 2025, establishing specific technical thresholds for determining when artificial intelligence models qualify as general-purpose systems under the EU AI Act. The 36-page framework covers model classification criteria, provider identification, open-source exemptions, and enforcement procedures as compliance requirements enter into application on August 2, 2025.
According to document C(2025) 5045 final, the guidelines introduce an "indicative criterion": models that exceed 10²³ floating-point operations (FLOP) of training compute and can generate language, text-to-image, or text-to-video content are presumed to qualify as general-purpose AI models. This threshold corresponds roughly to the compute typically required to train a model with one billion parameters on a large dataset.
The technical specifications draw on eight reference models analyzed by Commission experts, ranging from 600 million to 3.8 billion parameters. These examples span compute requirements from 7.5 × 10²² to 6.5 × 10²³ FLOP, validating the established threshold for regulatory classification. Model A, described as a language model with 3.8 billion parameters trained on 3.3 trillion tokens, utilized 7.5 × 10²² FLOP. Meanwhile, Model H, a language model with 1 billion parameters trained for 370,000 GPU-hours on H100 80GB GPUs, consumed 6.5 × 10²³ FLOP.
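As a sanity check, the widely used 6 × parameters × tokens heuristic for dense-transformer training compute (a common industry approximation, not a formula from the guidelines themselves) reproduces the Commission's figure for Model A:

```python
def training_compute_flop(params: float, tokens: float) -> float:
    """Approximate training compute using the common heuristic of
    6 FLOP per parameter per training token (an assumption, not
    a formula taken from the Commission's guidelines)."""
    return 6 * params * tokens

# Model A: 3.8 billion parameters trained on 3.3 trillion tokens.
model_a = training_compute_flop(3.8e9, 3.3e12)
print(f"{model_a:.2e}")  # ≈ 7.52e+22, matching the reported 7.5 × 10²² FLOP
```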
AI governance expert Luiza Jarovsky analyzed the implications on social media, stating "Many missed it, but the two paragraphs below are probably the most important part of the EU Commission's recently published guidelines for providers of general-purpose AI models under the EU AI Act." She highlighted how models trained specifically for narrow tasks escape regulatory oversight despite meeting computational thresholds.
"The model can generate text and its training compute is greater than 10^23 FLOP. Therefore, the criterion from paragraph 17 indicates that the model should be a general-purpose AI model. However, if the model can only competently perform a narrow set of tasks (transcribing speech), it is not actually a general-purpose AI model," Jarovsky quoted from the guidelines.
The framework establishes distinct classification levels with increasing obligations. Standard general-purpose AI models must comply with transparency and documentation requirements under Article 53. However, models consuming over 10²⁵ FLOP during training face designation as systems with "systemic risk," triggering additional safety assessments and risk mitigation obligations under Article 55.
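The two compute tiers described above can be sketched as a minimal classifier; the function name and return strings are illustrative, and compute alone is only indicative, since a narrow-task model above the threshold still falls outside the classification:

```python
GPAI_THRESHOLD_FLOP = 1e23           # indicative general-purpose criterion
SYSTEMIC_RISK_THRESHOLD_FLOP = 1e25  # "systemic risk" designation trigger

def classify(training_flop: float) -> str:
    """Map training compute to the tier it indicates.
    Note: compute is only an indicative criterion; a model that can
    competently perform only a narrow set of tasks is not actually a
    general-purpose AI model even above the threshold."""
    if training_flop > SYSTEMIC_RISK_THRESHOLD_FLOP:
        return "general-purpose AI model with systemic risk (Art. 55)"
    if training_flop > GPAI_THRESHOLD_FLOP:
        return "general-purpose AI model (Art. 53)"
    return "below indicative general-purpose threshold"

print(classify(6.5e23))  # Model H's compute → "general-purpose AI model (Art. 53)"
```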
Providers must notify the Commission within two weeks when models meet or are expected to meet systemic risk thresholds. This notification requirement extends to planning phases, as training of large models "takes considerable planning which includes the upfront allocation of compute resources," enabling providers to forecast threshold compliance before training completion.
The Commission acknowledges implementation challenges through graduated enforcement approaches. Models placed on the market before August 2, 2025, receive extended compliance timelines until August 2, 2027. New models developed after the enforcement date must demonstrate immediate compliance, though the Commission indicates willingness to provide guidance during initial phases.
The guidelines were informed by multi-stakeholder input, including a public consultation held from April 22 to May 22, 2025. The European Artificial Intelligence Board provided input on June 30, 2025, and the Joint Research Centre established a pool of experts to advise the AI Office on model categorization and systemic risk assessment procedures.
For downstream integration scenarios, upstream actors remain responsible for model obligations unless they explicitly exclude Union market distribution. The guidelines specify that when upstream providers develop models and downstream actors integrate them into systems placed on the EU market, the upstream provider maintains general-purpose AI model obligations while downstream actors assume AI system compliance responsibilities.
The framework addresses modification scenarios where downstream actors alter existing models. Modifications utilizing compute exceeding one-third of the original model's training compute trigger provider obligations for the modifying entity. This relative threshold aims to identify substantial changes warranting separate regulatory oversight while avoiding excessive burden on minor adjustments.
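The one-third rule reduces to a simple comparison; the figures below are hypothetical and used only to illustrate the arithmetic:

```python
def modifier_becomes_provider(original_flop: float,
                              modification_flop: float) -> bool:
    """True when a modification uses more than one third of the
    original model's training compute, triggering provider
    obligations for the modifying entity."""
    return modification_flop > original_flop / 3

# Hypothetical: fine-tuning a 6.5e23-FLOP model with 1e23 FLOP of
# additional compute stays under the one-third line (≈ 2.17e23 FLOP).
print(modifier_becomes_provider(6.5e23, 1e23))  # False
```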
Open-source exemptions apply to models released under free licenses allowing access, modification, and distribution without monetization. However, these exemptions exclude models classified as having systemic risk, ensuring the most powerful systems maintain full regulatory oversight regardless of licensing arrangements.
Enforcement mechanisms include information requests, model evaluations, mitigation measures, and financial penalties of up to 3% of global annual turnover or EUR 15 million. The AI Office assumes supervision responsibilities on August 2, 2025, with full enforcement powers taking effect on August 2, 2026.
The guidelines complement the General-Purpose AI Code of Practice published July 10, 2025, which provides voluntary compliance pathways for industry. Companies adhering to Commission-approved codes demonstrate regulatory compliance while potentially reducing administrative burden and enforcement scrutiny.
Training compute estimation requires providers to account for all computational resources directly contributing to parameter updates. This includes pre-training, synthetic data generation, fine-tuning, and other capability-enhancing activities. The Commission provides two estimation approaches: hardware-based tracking of GPU utilization and architecture-based calculation of operations performed during training.
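The hardware-based approach can be sketched as GPU time multiplied by peak throughput and an achieved-utilization factor. The H100 peak figure and 50% utilization below are illustrative assumptions, not values from the guidelines, yet they land in the range reported for Model H:

```python
def hardware_based_flop(gpu_hours: float,
                        peak_flop_per_s: float,
                        utilization: float) -> float:
    """Estimate training compute from logged GPU time, the hardware's
    peak throughput, and an assumed utilization fraction."""
    return gpu_hours * 3600 * peak_flop_per_s * utilization

# Illustrative: 370,000 H100-hours at an assumed ~9.9e14 FLOP/s dense
# BF16 peak and ~50% utilization (both figures are assumptions).
est = hardware_based_flop(370_000, 9.9e14, 0.5)
print(f"{est:.1e}")  # ≈ 6.6e+23, consistent with Model H's 6.5 × 10²³ FLOP
```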
The regulatory framework reflects European priorities of establishing global AI governance standards while maintaining technological competitiveness. Implementation timelines accommodate existing industry practices while ensuring compliance with fundamental rights protections and safety requirements.
European AI Office enforcement will emphasize collaborative approaches during initial implementation phases. The Commission expects proactive reporting from providers of systemic risk models and encourages informal cooperation throughout model development lifecycles to facilitate timely market placement.
Industry responses vary significantly: Microsoft has indicated likely participation in voluntary compliance frameworks, while Meta has refused involvement, citing legal uncertainties. These divergent approaches highlight ongoing debates about regulatory scope and implementation requirements.
The guidelines establish precedent for international AI governance as other jurisdictions develop similar frameworks. Technical thresholds and enforcement mechanisms may influence global standardization efforts while demonstrating European leadership in AI regulatory development.
Timeline
- April 22 - May 22, 2025: Commission conducts public consultation on preliminary guidelines
- May 2, 2025: Original deadline for General-Purpose AI Code of Practice completion
- June 30, 2025: European Artificial Intelligence Board reviews guidelines draft
- July 10, 2025: Commission receives final General-Purpose AI Code of Practice
- July 18, 2025: Commission publishes final implementation guidelines
- August 2, 2025: AI Act obligations for general-purpose AI models enter application
- August 2, 2026: Full enforcement powers become applicable
- August 2, 2027: Compliance deadline for models placed on market before August 2, 2025
Subscribe to the PPC Land newsletter ✉️ for stories like this one. Receive the news every day in your inbox. Free of ads. 10 USD per year.
Key Terms in AI Regulation
Compliance Requirements: The framework establishes mandatory obligations for AI model providers operating in European markets. Marketing organizations utilizing AI-powered advertising platforms must understand these compliance requirements to ensure their technology partners meet regulatory standards. This includes verifying that AI systems used for campaign optimization, audience targeting, and content personalization comply with transparency and documentation obligations under the AI Act.
Content Generation: Automated creation of marketing materials using artificial intelligence systems falls under the regulatory framework's copyright compliance requirements. Marketing teams employing AI for generating ad copy, social media content, or visual assets must ensure their tools implement policies addressing Union copyright law throughout operational lifecycles. The guidelines require AI models to respect rightsholders' wishes and implement technical safeguards against reproducing protected content.
Regulatory Oversight: The AI Office assumes comprehensive supervision responsibilities for general-purpose AI model providers beginning August 2025. Marketing professionals should anticipate increased scrutiny of AI systems integrated into advertising technology platforms. This oversight extends to downstream applications, meaning marketing tools powered by regulated AI models face indirect regulatory influence through their underlying technology providers.
Risk Assessment: Providers of AI models exceeding systemic risk thresholds must conduct continuous risk evaluations throughout model lifecycles. Marketing organizations using advanced AI systems for predictive analytics, customer segmentation, or automated decision-making should understand how risk assessment requirements affect their technology suppliers. These assessments particularly impact models capable of influencing consumer behavior at scale.
Documentation Obligations: The framework mandates comprehensive technical documentation for general-purpose AI models, including training processes, evaluation results, and capability descriptions. Marketing teams selecting AI-powered tools should request transparency about underlying model documentation to ensure vendor compliance. This documentation enables informed decisions about tool selection and helps marketing organizations understand the capabilities and limitations of their AI systems.
Enforcement Mechanisms: Financial penalties up to 3% of global annual turnover or EUR 15 million create significant compliance incentives for AI providers. Marketing organizations should evaluate their technology partners' regulatory compliance strategies to avoid potential service disruptions. Understanding enforcement timelines helps marketing teams plan technology adoption and vendor relationships around regulatory implementation schedules.
Transparency Standards: The guidelines establish specific transparency requirements for AI model providers to share information with downstream system integrators. Marketing organizations benefit from these standards through improved understanding of AI tool capabilities, limitations, and appropriate use cases. Enhanced transparency enables more effective evaluation of AI marketing technologies and supports better integration strategies.
Market Integration: The framework addresses how AI models integrate into broader technology ecosystems, including marketing platforms and advertising systems. Marketing professionals must consider how regulatory requirements for upstream AI providers affect downstream marketing applications. This includes understanding responsibilities for AI system providers versus model providers in complex technology stacks.
Competitive Development: European regulations aim to balance innovation incentives with safety requirements, affecting the competitive landscape for AI-powered marketing tools. Marketing organizations should monitor how regulatory compliance costs influence pricing and feature development for AI marketing platforms. The framework's approach to innovation support versus risk mitigation directly impacts the availability and advancement of marketing AI technologies.
Technology Adoption: Implementation timelines and compliance requirements influence marketing organizations' AI adoption strategies. The staged enforcement approach provides opportunities for marketing teams to evaluate AI tool compliance and plan technology investments around regulatory certainty. Understanding vendor compliance timelines helps marketing organizations make informed decisions about technology partnerships and implementation schedules.
Summary
Who: The European Commission and AI Office establish regulatory oversight for providers of general-purpose AI models placed on the Union market, regardless of provider location.
What: Comprehensive guidelines defining when AI models qualify as general-purpose systems subject to transparency, safety, and copyright obligations under the EU AI Act, with specific focus on the 10²³ FLOP computational threshold.
When: Guidelines published July 18, 2025, with compliance obligations entering application August 2, 2025, and full enforcement beginning August 2026.
Where: The framework applies throughout the European Union market, affecting AI model providers globally who distribute systems to EU customers or integrate models into EU-deployed applications.
Why: The regulations aim to ensure general-purpose AI models remain transparent, comply with copyright law, and mitigate systemic risks while maintaining innovation and competitive development within established safety boundaries.