A wave of abuse complaints has swept across the internet following the rollout of an 'edit image' feature on Grok, an AI chatbot developed by Elon Musk’s startup xAI and integrated into the social media platform X.
The tool allowed users to manipulate images with prompts asking it to remove someone's outfit, triggering widespread outrage.
The surge in so-called digital undressing has heightened concerns among technology watchdogs, particularly amid the growing spread of AI-powered “nudify” applications. Governments and regulators in several countries, including France, India, and Malaysia, have either launched investigations or demanded immediate corrective measures.
The European Commission, acting as the EU’s digital regulator, said on Monday that it was “very seriously” examining complaints related to Grok. EU digital affairs spokesperson Thomas Regnier condemned the feature, stating that Grok was offering a so-called “spicy mode” that generated explicit sexual content, including imagery resembling children.
“This is not spicy. This is illegal. This is appalling,” Regnier said, adding, “This has no place in Europe.”
In the UK, media regulator Ofcom confirmed it had made “urgent contact” with X and xAI to assess whether they were meeting their legal obligations to protect users. Depending on the response, Ofcom said it would decide whether a formal investigation was warranted.
Malaysia-based lawyer Azira Aziz also expressed shock after a user, reportedly based in the Philippines, used Grok to alter her profile photo into a bikini image. While she said playful and harmless uses of AI may be acceptable, she strongly condemned the use of such tools against non-consenting women and children.
“Gender-based violence weaponising AI must be firmly opposed,” she said, urging users to report violations to both X and local authorities.
Several users directly appealed to Elon Musk to intervene, raising alarms about prompts allegedly asking Grok to sexualize images of children.
Ashley St. Clair, the mother of one of Musk’s children, claimed that the tool had altered childhood photos of her.
“This is objectively horrifying and illegal,” she wrote on X.
When contacted for comment, xAI issued a brief automated reply dismissing the reports as “Legacy Media Lies.”
As criticism mounted, Grok acknowledged on Friday that it had identified flaws in its safeguards and said it was urgently working to fix them. The statement reiterated that child sexual abuse material (CSAM) is illegal and prohibited.
Last week, Grok also issued an apology for generating and sharing an AI-created image depicting two young girls in sexualized clothing based on a user prompt.
The controversy follows a decision by prosecutors in Paris to expand an existing investigation into X, adding new allegations that Grok was being used to create and distribute child pornography. That probe initially began in July over concerns that X’s algorithm was being manipulated for foreign interference.
In India, authorities last Friday ordered X to remove sexualized content, take action against offending accounts, and submit a compliance report within 72 hours, warning of legal consequences. The deadline passed on Monday without any public confirmation of a response from the platform.
Meanwhile, Malaysia’s Communications and Multimedia Commission said it was “seriously concerned” by reports of indecent and offensive content on X. The regulator confirmed it is investigating the matter and plans to summon representatives of the platform.
The latest controversy adds to mounting scrutiny of Grok, which has previously been criticized for spreading misinformation related to major global events, including the war in Gaza, the India-Pakistan conflict, and a mass shooting in Australia.