Concerns raised over Grok AI being used to digitally remove women’s clothing

Jan 3, 2026 | Crime & Law

A woman has told the BBC she felt “dehumanised and reduced into a sexual stereotype” after an artificial intelligence tool was used to digitally alter her image to remove clothing without her consent, raising renewed concerns over online safety and the misuse of AI-generated content.

The woman, Samantha Smith, said images resembling her were manipulated using Grok, an AI chatbot developed by xAI and integrated into the social media platform X. According to the BBC, multiple examples were found on X in which users asked the chatbot to “undress” women, make them appear in bikinis or place them in sexualised scenarios, despite the individuals never consenting to such use of their images.

Ms Smith said that while the altered images were not actually of her in states of undress, they closely resembled her appearance and felt deeply violating. “It looked like me and it felt like me,” she said, adding that the experience was comparable to having explicit images shared online.

After she posted about her experience on X, other users responded saying they had faced similar treatment. Some comments, however, went further, reportedly asking Grok to generate additional altered images of her.

xAI did not provide a substantive response to the BBC’s request for comment, issuing only an automated reply stating: “Legacy media lies.”

The UK communications regulator Ofcom said that technology companies are required to assess the risk of users in the UK being exposed to illegal content on their platforms. However, it did not confirm whether it was currently investigating X or Grok in relation to the creation or sharing of AI-generated images.

The rapid growth of AI image-generation tools since the launch of ChatGPT in 2022 has intensified debate around consent, digital manipulation and online harm. Experts and campaigners have warned that such technologies have made it easier to create so-called “deepfake” images, including non-consensual sexual content involving real people.

In a related development, Grok acknowledged on Friday that weaknesses in its safeguards had allowed some users to generate “images depicting minors in minimal clothing” on X. Screenshots shared by users showed Grok’s public media feed displaying altered images after photos were uploaded and prompts were given to modify them.

In a post on X, Grok said there had been “isolated cases” in which such images were generated, adding that improvements were being made to prevent these requests. The company stressed that child sexual abuse material (CSAM) is illegal and prohibited, and said it was urgently addressing the lapses.

The incidents have renewed calls for stronger regulation, clearer accountability for AI developers and more robust protections to prevent misuse of emerging technologies.