From Grok to YouTube, AI moderation gaps draw regulatory fire

Sexually explicit images generated by artificial intelligence are resurfacing across major online platforms, renewing regulatory scrutiny over how companies police rapidly advancing generative tools, according to reporting by MediaPost.

The images, some of which appear to depict real people — including minors — have circulated on Grok, the AI chatbot developed by Elon Musk’s xAI and integrated into X, as well as on Google-owned YouTube. The content has prompted action from authorities in India and France, who on Friday accused the platform of facilitating the creation of illegal, non-consensual sexual material.

French officials said they were examining whether the incident constituted a breach of the European Union’s Digital Services Act (DSA), though it remains unclear whether the images were directly generated by the AI system or uploaded by users after being created elsewhere.

The DSA requires digital platforms to actively identify and reduce risks linked to the spread of illegal content, the French government said in a statement, adding that failures to do so could trigger enforcement measures.

In India, the Ministry of Electronics and Information Technology issued X a 72-hour deadline to remove obscene content linked to Grok, following media reports that the chatbot was being used to create non-consensual deepfake images of women and children.

Grok’s official account on X acknowledged the allegations on Friday, responding to claims that the AI had altered images of minors. “We appreciate you raising this,” the account said. “We’ve identified lapses in safeguards and are urgently fixing them—CSAM is illegal and prohibited.”

The company had already issued a public apology on January 1, stating that it had “generated and shared an AI image of two young girls (estimated ages 12–16) in sexualized attire based on a user’s prompt.” The post described the incident as a violation of ethical standards and a failure of internal safeguards.

The episode underscores broader challenges facing AI developers as generative tools become more realistic and more widely accessible, increasing the risk of misuse and placing pressure on companies to strengthen moderation systems. The apology post attracted more than 2.3 million views, amplifying public concern.

Similar issues have emerged beyond Musk’s platforms. Users on Reddit have complained in recent days about explicit material appearing in YouTube search results, including thumbnails showing nudity, MediaPost reported.

Technology publication Android Authority said the images were visible even to users who were not logged into YouTube, raising fresh questions about the effectiveness of content controls on one of the world’s largest video platforms.
