Fresh regulatory pressure is building around Elon Musk’s AI chatbot Grok, as questions persist over whether new safeguards are working as intended. Authorities in the US, Europe and Asia are examining how the system handles sensitive imagery, even after public restrictions were announced.
The debate comes amid wider concerns about consent, harm and accountability in generative AI.
Legal spotlight first
Reuters reports that regulators have intensified scrutiny of X and its AI unit, xAI, over Grok’s ability to create sexualised images. In Britain, legal experts say users who generate nonconsensual sexualised images could face criminal prosecution, while companies could be exposed to fines or civil action under the UK’s Online Safety Act.
In the US, 35 state attorneys general have written to xAI seeking assurances about how it will prevent the creation of nonconsensual images.
California’s attorney general has gone further, issuing a cease-and-desist letter ordering X and Grok to stop producing such content. Investigations by the European Commission and Britain’s media regulator Ofcom are ongoing.
Curbs welcomed cautiously
X announced new limits on Grok after global outrage over its mass generation of nonconsensual sexualised images, including images of women and some children, according to Reuters.
The company said Grok would no longer generate sexualised images in public posts on X and would face additional restrictions in jurisdictions “where such content is illegal”.
British regulator Ofcom described the move as “a welcome development”, while officials in Malaysia and the Philippines lifted earlier blocks on Grok.
The European Commission responded more cautiously, saying it would “carefully assess these changes”.
Private use tells a different story
Despite those announcements, Reuters found that Grok continued to produce sexualised images when prompted in private interactions. Nine Reuters reporters in the US and UK tested the chatbot across two periods in January, submitting fully clothed photos of themselves and colleagues.
They asked Grok to modify the images into sexualised or humiliating scenarios, repeatedly warning that the subjects did not consent or would be distressed.
What the testing showed
In an initial round of 55 prompts, Grok generated sexualised images in 45 cases, Reuters said. In many of those instances, the chatbot had been told the subject was vulnerable or that the images would be used to humiliate them.
A second batch of 43 prompts produced sexualised images in 29 cases. Reuters said it was unclear whether the reduced rate reflected changes to the system or random variation.
Responses and comparisons
X and xAI did not answer detailed questions from Reuters. xAI repeatedly sent a standard reply saying, “Legacy Media Lies.”
Rival AI tools behaved differently. Reuters reports that OpenAI’s ChatGPT, Google’s Gemini and Meta’s Llama all refused similar requests and issued warnings about ethics, consent and potential harm.
Pressure set to grow
Legal experts told Reuters that companies could face escalating consequences if regulators conclude safeguards are inadequate. While Grok’s public output has been tightened, the findings suggest private use remains a key concern as investigations continue.
Source: Reuters