The European Commission has launched a formal investigation under the Digital Services Act into Grok, the AI chatbot embedded in Elon Musk’s social platform X. The probe, announced on 26 January 2026, will assess whether Grok poses systemic risks to users and public safety, and whether X has met its obligations to mitigate such harms.
The Digital Services Act gives EU regulators broad authority over very large online platforms, a category that includes X, with powers to order remedial measures and impose fines of up to 6% of global annual turnover for serious breaches. The DSA requires such platforms to identify, assess and mitigate systemic risks, including the spread of illegal content, misinformation, manipulation of public debate and harms to fundamental rights. Those obligations are now being tested against an AI-driven conversational agent integrated into a social network.
X positions Grok as a built-in assistant for users, able to generate text and respond to queries in real time. Critics argue that conversational AI on a social media feed can amplify falsehoods, evade moderation, and interact with recommendation systems in unpredictable ways. Regulators will examine the model's outputs, its guardrails, X's transparency about the system's capabilities and limits, and how X responds to incidents flagged by users or authorities.
The investigation is part of a broader European push to assert regulatory control over U.S. tech giants and emerging AI tools. Brussels has already advanced the AI Act, which targets high-risk AI uses, and the DSA serves as a tool to enforce platform-level responsibilities. Together these instruments aim to set global norms for safety, accountability and transparency in digital services and AI deployment.
For X and its owner, Elon Musk, the probe presents both a commercial and a reputational challenge. A formal finding of non-compliance could force technical changes to Grok, require stricter content moderation or transparency measures, and expose X to heavy fines or restrictions in the single market. The move also increases legal and operational costs at a moment when platform business models are already under scrutiny.
Beyond X, this enforcement action will be watched by other platform operators and AI developers worldwide. The EU’s application of the DSA to an embedded chatbot will help define how regulators balance innovation with the prevention of harm, and it may encourage other jurisdictions to pursue similar oversight. The case could set precedents on auditing AI behaviour, disclosing training and safety measures, and the duties platforms owe their users and the public sphere.
