
In brief
- Australia’s eSafety Commissioner flagged a spike in complaints about Elon Musk’s Grok chatbot creating non-consensual sexual images, with reports doubling since late 2025.
- Some complaints involve potential child sexual exploitation material, while others relate to adults subjected to image-based abuse.
- The concerns come as governments worldwide investigate Grok’s lax content moderation, with the EU declaring the chatbot’s “Spicy Mode” illegal.
Australia’s independent online safety regulator issued a warning Thursday about the rising use of Grok to generate sexualized images without consent, revealing that complaints about the AI chatbot have doubled in recent months.
The country’s eSafety Commissioner, Julie Inman Grant, said some reports involve potential child sexual exploitation material, while others relate to adults subjected to image-based abuse.
“I’m deeply concerned about the increasing use of generative AI to sexualise or exploit people, particularly where children are involved,” Inman Grant posted on LinkedIn on Thursday.
The comments come amid mounting international backlash against Grok, a chatbot built by billionaire Elon Musk’s AI startup xAI, which can be prompted directly on X to alter users’ photos.
Inman Grant warned that AI’s ability to generate “hyper-realistic content” is making it easier for bad actors to create synthetic abuse and harder for regulators, law enforcement, and child-safety groups to respond.
Unlike competitors such as ChatGPT, Musk’s xAI has positioned Grok as an “edgy” alternative that generates content other AI models refuse to produce. Last August, it launched “Spicy Mode” specifically to create explicit content.
Inman Grant noted that Australia’s enforceable industry codes require online services to implement safeguards against child sexual exploitation material, whether AI-generated or not.
Last year, eSafety took enforcement action against widely used “nudify” services, forcing their withdrawal from Australia, she added.
“We’ve now entered an age where companies must ensure generative AI products have appropriate safeguards and guardrails built in across every stage of the product lifecycle,” Inman Grant said, noting that eSafety will “investigate and take appropriate action” using its full range of regulatory tools.
Deepfakes on the rise
In September, Inman Grant secured Australia’s first deepfake penalty when the federal court fined Gold Coast man Anthony Rotondo $212,000 (A$343,500) for posting deepfake pornography of prominent Australian women.
The eSafety Commissioner took Rotondo to court in 2023 after he defied removal notices, saying they “meant nothing to him” because he was not an Australian resident, and then emailed the images to 50 addresses, including her office and media outlets, according to an ABC News report.
Australian lawmakers are pushing for stronger protections against non-consensual deepfakes beyond existing laws.
Independent Senator David Pocock introduced the Online Safety and Other Legislation Amendment (My Face, My Rights) Bill 2025 in November, which would impose up-front fines of $102,000 (A$165,000) on individuals who share non-consensual deepfakes, with companies facing penalties of up to $510,000 (A$825,000) for non-compliance with removal notices.
“We are now living in a world where increasingly anyone can create a deepfake and use it however they want,” Pocock said in a statement, criticizing the government for being “asleep at the wheel” on AI protections.