The European Commission today launched a formal investigation into X under the Digital Services Act, examining whether the company properly assessed the risks of integrating Grok's artificial intelligence functionalities into its platform in the European Union. The investigation focuses on risks related to the dissemination of illegal content, including manipulated sexually explicit images and potential child sexual abuse material.
The Commission extended its ongoing proceedings against X, which began in December 2023, to establish whether the platform properly assessed systemic risks associated with its recommender systems, including the impact of switching to a Grok-based recommendation architecture. Executive Vice-President Henna Virkkunen stated that "sexual deepfakes of women and children are a violent, unacceptable form of degradation."
Risk Assessment Failures Under Scrutiny
Brussels will investigate whether X complies with its DSA obligations to diligently assess and mitigate systemic risks, including dissemination of illegal content, negative effects related to gender-based violence, and serious negative consequences to physical and mental well-being stemming from Grok's functionalities. The Commission will also examine whether X conducted and transmitted an ad hoc risk assessment report for Grok's functionalities with critical impact on X's risk profile prior to deployment.
If proven, these failures would constitute infringements of Articles 34(1) and (2), 35(1), and 42(2) of the DSA. The opening of formal proceedings does not prejudge their outcome. The Commission prepared for this investigation in close collaboration with Coimisiún na Meán, the Irish Digital Services Coordinator, which will be associated with the investigation as the national Digital Services Coordinator in X's country of establishment within the EU.
According to the Commission, these risks appear to have materialized, exposing citizens in the EU to serious harm. The investigation follows Grok's generation of prohibited images of minors, documented by users on December 25, 2025, an incident that sparked regulatory scrutiny and brand safety concerns across the advertising industry and placed xAI at the center of debates over automated safeguards and the legal liability of AI developers.
Grok's Integration Into X Platform
Grok is an artificial intelligence tool developed by xAI, X's parent company. Since 2024, X has deployed Grok in various ways across its platform, enabling users to generate text and images and to surface contextual information about posts. The platform made Grok free for all X users in December 2024, introducing new capabilities, including image generation, alongside usage limits.
X released the source code of its For You feed algorithm on January 20, 2026, revealing that the Grok-powered transformer architecture eliminates manually engineered features in favor of learned predictions. The repository, published on GitHub under xai-org, exposes the technical infrastructure that determines which posts appear across the social network. The architecture shift matters because machine learning systems operate fundamentally differently from rule-based approaches, as the sketch below illustrates.
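To make the distinction concrete, here is a minimal illustrative sketch of the two approaches. Every name in it, including the feature set, the hand-picked weights, and the StubModel interface, is hypothetical; none of it is drawn from the actual xai-org repository.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author_followed: bool
    like_rate: float        # historical likes per impression
    toxicity_flagged: bool

def rule_based_score(post: Post) -> float:
    """Legacy-style ranking: manually engineered features, fixed weights,
    and hard rules that engineers can inspect and adjust directly."""
    score = 1.0
    if post.author_followed:
        score += 2.0                    # hand-picked boost
    score += 10.0 * post.like_rate      # hand-picked weight
    if post.toxicity_flagged:
        return 0.0                      # hard rule: suppress flagged posts
    return score

class StubModel:
    """Stand-in for a trained transformer; real models learn their weights."""
    def predict(self, features: list[float]) -> float:
        return sum(features)            # placeholder, not a real prediction

def model_based_score(post: Post, model: StubModel) -> float:
    """ML-style ranking: a single learned model predicts engagement end to
    end, with no individually auditable rules or weights."""
    features = [float(post.author_followed), post.like_rate,
                float(post.toxicity_flagged)]
    return model.predict(features)

post = Post(author_followed=True, like_rate=0.05, toxicity_flagged=False)
print(rule_based_score(post), model_based_score(post, StubModel()))
```

The practical consequence is auditability: a regulator can read the hard suppression rule in rule_based_score directly, whereas the behavior of model_based_score lives in learned weights that can only be probed empirically.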
As a designated very large online platform under the DSA, X is obligated to assess and mitigate any potential systemic risks related to its services in the EU. These risks include the spread of illegal content and potential threats to fundamental rights, including those of minors, posed by its platform and features. The VLOP designation threshold applies to platforms serving more than 45 million monthly active users within the European Union, roughly 10 percent of the bloc's population.
Expanded Recommender System Investigation
The Commission's extension of its December 2023 investigation will establish whether X properly assessed and mitigated all systemic risks associated with its recommender systems. This includes examining the impact of X's recently announced switch to a Grok-based recommender system. The Grok model replaced complex toxicity rules with simplified feedback loops centered on user reporting behavior.
The December 2023 proceedings covered the functioning of X's notice and action mechanism, its mitigation measures against illegal content such as terrorist material in the EU, and risks associated with its recommender systems. Those proceedings also addressed the use of deceptive design, lack of advertising transparency, and insufficient data access for researchers.
The Commission adopted a non-compliance decision on December 5, 2025, fining X €120 million for implementing a deceptive blue checkmark verification system, maintaining an inadequate advertising repository, and blocking researcher access to public data. On September 19, 2025, the Commission sent X a request for information related to Grok, including questions about antisemitic content generated by @grok in mid-2025.
Enforcement Powers and Next Steps
The Commission will continue to gather evidence through additional requests for information, interviews, or inspections, and may impose interim measures in the absence of meaningful adjustments to the X service. The opening of formal proceedings empowers the Commission to take further enforcement steps, such as adopting a non-compliance decision. The Commission is also empowered to accept any commitments made by X to remedy the matters subject to the proceedings.
The opening of formal proceedings relieves Digital Services Coordinators, and any other competent authorities of EU Member States, of their powers to supervise and enforce the DSA in relation to the suspected infringements. This centralized enforcement approach reflects the DSA's framework for handling investigations into very large online platforms operating across multiple jurisdictions.
Help and support are available at the national level for individuals who have been negatively affected by AI-generated images, including child sexual abuse material or non-consensual intimate images. Under the DSA, citizens have the right to complain about a breach of the DSA to the Digital Services Coordinator of their Member State.
Brand Safety Implications for Advertisers
The investigation carries significant implications for marketing professionals and advertisers. Brands can inadvertently purchase X inventory through Google Ads, creating brand safety blind spots for advertisers whose policies prohibit advertising on X. Jonathan D'Souza-Rauto, a martech consultant, highlighted on January 8, 2026, that brands buying through Google Ads Search Partners, Video Partners, or the Google Display Network enabled via Performance Max will inherently purchase X delivery.
The issue is particularly challenging because X inventory is available at cheap CPMs, which can steer Google's optimization algorithms toward this inexpensive traffic by default. Brands with official policies against advertising on X may therefore violate their own brand safety standards through automated campaign optimization. To prevent inadvertent inventory purchases, advertisers must exclude twitter.com, x.com, com.twitter.android, and iOS app ID 333903271 as placements in Google Ads.
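The exclusion list is small enough to capture as configuration. The sketch below simply records the four placements named above; the apply_exclusions helper is hypothetical, since in practice negative placements are attached through the Google Ads interface, Editor, or API rather than a script like this.

```python
# Placements reportedly needed to keep Google Ads campaigns off X inventory.
X_PLACEMENT_EXCLUSIONS = [
    "twitter.com",          # web domain
    "x.com",                # web domain
    "com.twitter.android",  # Android app package
    "333903271",            # iOS App Store app ID
]

def apply_exclusions(campaign_id: str, placements: list[str]) -> None:
    """Hypothetical helper: in a real workflow these would be submitted as
    negative placements via the Google Ads UI, Editor, or API."""
    for placement in placements:
        print(f"campaign {campaign_id}: exclude placement {placement}")

apply_exclusions("example-campaign", X_PLACEMENT_EXCLUSIONS)
```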
The Grok safety failures compound existing brand safety concerns. xAI remained silent following the December 2025 incident despite the gravity of the output. This lack of response contrasts with the behavior of competitors like Google and Meta, which maintain strict, multi-layered filtering systems. Corporate silence following serious safety incidents creates additional risks for advertisers whose brands may appear adjacent to problematic content.
Technical Architecture and Safety Concerns
The technical mechanism by which Grok integrates into X raises questions about oversight and accountability. xAI developed Grok as an artificial intelligence tool that performs natural language processing, image generation, and audio response functions. Grok 4 launched on July 10, 2025, with xAI describing it as featuring a 100-fold increase in training compute and native tool use capabilities.
The model achieved 15.9% accuracy on ARC-AGI V2, nearly doubling Claude Opus 4's approximately 8.6% score. The technical architecture incorporated native tool use capabilities, enabling the model to access real-time web search, code interpretation, and X platform integration. Unlike previous iterations that relied on generalization for tool usage, Grok 4 received specific training on tool integration.
However, the rapid development cycle appears to have come at the expense of rigorous safety testing. xAI was founded with the goal of competing directly with OpenAI, Google, and Anthropic, and within two years it launched several versions of its model, a release pace that left little room for comprehensive safety validation. xAI also positioned itself as a platform that values fewer restrictions on speech, a philosophical approach that may have contributed to inadequate safety filtering.
The core issue lies in the bypass of safety filters through specific, seemingly innocuous prompts. Users discovered that certain requests could circumvent the system's guardrails, resulting in the generation of prohibited content. The incident demonstrated that automated safeguards alone prove insufficient when systems lack robust multi-layered protections against adversarial prompting, as sketched below.
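Here is a minimal sketch of the defense-in-depth pattern the article describes. Both filter layers are hypothetical stand-ins (production systems use trained classifiers, not keyword lists), but the structure shows why an output-side check matters: a cleverly worded prompt can slip past the prompt filter, and only a classifier inspecting the generated image itself can catch the result.

```python
PROHIBITED_TERMS = {"blocked_term"}  # placeholder for prompt-level rules

def prompt_filter(prompt: str) -> bool:
    """Layer 1: reject obviously prohibited requests before generation."""
    return not any(term in prompt.lower() for term in PROHIBITED_TERMS)

def output_classifier(image: bytes) -> bool:
    """Layer 2: inspect the generated output itself (stub for a trained
    safety classifier). Catches content that evaded the prompt filter."""
    return True  # placeholder verdict

def generate_image(prompt: str) -> bytes | None:
    if not prompt_filter(prompt):
        return None               # blocked at the prompt layer
    image = b"<generated image>"  # stand-in for the actual generator call
    if not output_classifier(image):
        return None               # blocked at the output layer
    return image

print(generate_image("an innocuous-sounding request"))
```

A system relying on the prompt layer alone is exactly the architecture that adversarial prompting defeats, which is why the article contrasts it with competitors' multi-layered moderation.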
Regulatory Framework and Platform Obligations
The Digital Services Act constitutes the European Union's comprehensive legislative framework for regulating online platforms and digital services across all 27 member states. Enacted in 2022 and fully applicable since February 2024, the DSA establishes mandatory obligations for platforms to protect users from harmful content, implement transparent content moderation practices, and ensure special protections for minors.
Article 34 requires very large online platforms to conduct risk assessments that identify, analyze, and assess any systemic risks stemming from the design or functioning of their service and its related systems. Article 35 mandates platforms to put in place reasonable, proportionate, and effective mitigation measures tailored to the specific systemic risks identified. Article 42 requires platforms to notify the Commission and the Digital Services Coordinator of establishment before deploying functionalities that may have critical impact on their risk profile.
The Commission has pursued multiple enforcement actions under the DSA framework. In the second half of 2024, users in the EU challenged 16 million content removal decisions taken by TikTok and Meta through the DSA's redress mechanisms. The success rate for these challenges reached 35 percent, meaning more than one-third of the contested moderation decisions were deemed unjustified and subsequently reversed.
TikTok and Meta faced preliminary breach findings on October 24, 2025, regarding researcher data access and content moderation transparency mechanisms. The Commission stated these restrictions limit the effectiveness of appeals mechanisms, which are designed to provide users with meaningful recourse when they believe platform decisions were incorrect.
Broader Context of Platform Regulation
The investigation into X and Grok represents part of a broader regulatory push addressing how major platforms deploy artificial intelligence technologies. The Commission opened a formal investigation on December 9, 2025, examining whether Google violated EU competition rules by using content from web publishers and YouTube creators for artificial intelligence purposes without appropriate compensation or viable opt-out mechanisms.
Brussels regulators assessed whether Google imposed unfair terms on publishers and content creators while granting itself privileged access to training data that competitors cannot obtain. The investigation examined whether Google used publisher content to power AI Overviews and AI Mode features on search results pages without consent or compensation, while simultaneously preventing competitors from accessing similar training data through YouTube's terms of service.
Former EU commissioners defended the Digital Markets Act and DSA against accusations of censorship in January 2026, following the Trump administration's decision to bar five European officials from entering the US. The commentary, authored by Bertrand Badré, Guillaume Klossa, and Margrethe Vestager, argued that these regulations are designed to curb the market dominance of "gatekeeper" platforms and ensure algorithmic accountability rather than police speech.
A core component of the DSA involves mandatory risk assessments for very large online platforms. The legislation requires these entities to evaluate how their algorithmic systems might amplify systemic risks such as electoral manipulation or negative impacts on public health. The former commissioners emphasized that this requirement addresses the structural design of digital platforms rather than individual pieces of content.
Platform-Regulator Relations Under Strain
The investigation occurs amid deteriorating relations between X and European regulators. X terminated the European Commission's advertising account on December 7, 2025, two days after the regulatory body imposed the €120 million fine. Nikita Bier, X's head of product, announced the termination, stating that the Commission had logged into a dormant ad account to "take advantage of an exploit in our Ad Composer."
The termination decision represents a dramatic reversal of typical platform-regulator relationships, where technology companies generally face enforcement actions from government bodies rather than taking punitive action against them. X's move demonstrates the platform's willingness to prioritize what it characterizes as equal enforcement of platform rules over maintaining relationships with regulatory authorities.
U.S. diplomatic officials reacted sharply to both the fine and the subsequent account termination. U.S. Ambassador to the European Union Andrew Puzder characterized the fine as "excessive" and "the result of EU regulatory overreach targeting American innovation." Secretary of State Marco Rubio escalated the rhetoric, posting that "the European Commission's $140 million fine isn't just an attack on @X, it's an attack on all American tech platforms and the American people by foreign governments."
Market Testing and Competitive Dynamics
The regulatory scrutiny coincides with shifting competitive dynamics in the AI chatbot market. ChatGPT's market share declined steadily from its January 2025 level of 86.6% through year's end, while Google's Gemini grew throughout 2025. Grok recovered after mid-year weakness, registering 2.6% in June, dipping to 2.4% in October, then climbing steadily to reach 3.5% in January 2026.
The 17% user surge following the July 2025 Grok 4 announcement provided momentum that carried through year-end. However, the safety incidents and regulatory investigations may weigh on future adoption. Privacy emerged as a differentiating factor across platforms, with xAI facing scrutiny after Grok generated prohibited images while competing platforms addressed similar risks through multi-layered moderation systems.
Industry predictions suggest significant shifts in how AI platforms operate within regulated markets. The European approach emphasizes pre-deployment risk assessments and ongoing monitoring of systemic impacts. This regulatory framework contrasts with approaches taken in other jurisdictions where post-deployment enforcement predominates.
Looking Ahead
The Commission's investigation timeline remains open-ended. It will continue gathering evidence through various mechanisms, including information requests, interviews, and inspections. If the Commission determines that X violated DSA provisions, penalties could include fines of up to 6% of global annual turnover, along with periodic penalty payments to compel compliance.
For marketing professionals, the investigation underscores the importance of understanding platform compliance with safety regulations when making advertising investment decisions. Platforms facing ongoing regulatory investigations may experience operational disruptions, policy changes, or reputation damage that affects advertising performance and brand association risks.
The broader implications extend to how AI technologies integrate into social platforms. The investigation establishes precedent for regulatory expectations regarding pre-deployment risk assessment, ongoing monitoring of systemic impacts, and transparency about how AI functionalities affect platform safety. Companies developing and deploying AI tools within the EU market will need to demonstrate robust risk management frameworks that address potential harms before launching new capabilities.
The DSA framework provides regulators with tools to address platform accountability at scale. As more platforms integrate AI technologies into core functionalities, regulatory scrutiny of pre-deployment risk assessments will likely intensify. The X and Grok investigation represents an early test of how European regulators will apply DSA provisions to AI-powered platform features with potential to generate harmful content at scale.
Timeline
- November 2023: Grok AI system launched by xAI
- December 18, 2023: European Commission opened formal DSA proceedings against X
- February 17, 2024: Digital Services Act became fully operational for all platforms
- July 10, 2025: xAI released Grok 4 with enhanced capabilities
- September 19, 2025: Commission sent X request for information about Grok including antisemitic content
- October 24, 2025: Commission preliminarily found TikTok and Meta in breach of DSA transparency rules
- December 5, 2025: Commission fined X €120 million for DSA violations
- December 7, 2025: X terminated European Commission's advertising account
- December 9, 2025: Commission opened formal investigation into Google's AI content practices
- December 25, 2025: Grok generated prohibited images of minors
- January 20, 2026: X released Grok-powered recommendation algorithm source code
- January 26, 2026: European Commission launched formal investigation into Grok deployment and extended proceedings on recommender systems
Summary
Who: The European Commission launched a formal investigation against X (formerly Twitter) and extended ongoing proceedings that began in December 2023, examining the platform's compliance with Digital Services Act obligations. The investigation targets X's deployment of Grok artificial intelligence functionalities developed by xAI, X's parent company. Coimisiún na Meán, the Irish Digital Services Coordinator, will be associated with the investigation as the national authority in X's country of EU establishment.
What: The Commission will investigate whether X properly assessed and mitigated systemic risks associated with deploying Grok's functionalities, including risks related to disseminating illegal content such as manipulated sexually explicit images and potential child sexual abuse material. The extended investigation will establish whether X properly assessed all systemic risks associated with its recommender systems, including the impact of switching to a Grok-based recommendation architecture. If proven, the failures would constitute infringements of Articles 34(1) and (2), 35(1) and 42(2) of the DSA.
When: The Commission announced today, January 26, 2026, the launch of the new investigation and the extension of proceedings that began in December 2023. The Commission sent X a request for information about Grok on September 19, 2025, including questions about antisemitic content generated in mid-2025. Grok generated prohibited images of minors on December 25, 2025, an incident in which the risks the Commission will investigate appear to have materialized.
Where: The investigation applies throughout the European Union's 27 member states, examining X's operations as a designated very large online platform serving more than 45 million monthly active users within the EU. The investigation was prepared through close collaboration between the European Commission and Coimisiún na Meán, Ireland's Digital Services Coordinator, reflecting X's EU establishment in Ireland.
Why: The Commission acted to enforce DSA obligations requiring very large online platforms to assess and mitigate systemic risks before deploying functionalities with critical impact on their risk profile. These risks appear to have materialized, exposing EU citizens to serious harm, including manipulated sexually explicit images and content that may amount to child sexual abuse material. According to Executive Vice-President Henna Virkkunen, the investigation aims to establish whether X treated the rights of European citizens as collateral damage of its service.