Unlike rivals such as Google’s Veo or OpenAI’s Sora, which include strict guardrails to block celebrity deepfakes, Grok Imagine’s spicy mode reportedly lacks comparable safeguards. Users can generate sexualized portrayals with minimal checks, often nothing more than an age confirmation. This loophole has sparked serious ethical and legal concerns over consent, exploitation, and AI misuse.
The controversy adds to Grok Imagine’s growing list of moderation failures. Since launch, the tool has generated more than 34 million images, and Musk has promoted it heavily on social media, touting its rapid adoption. Critics say its lax controls enable harmful applications that other platforms actively prevent.
With U.S. legislative measures like the Take It Down Act mandating swift removal of non-consensual explicit imagery, xAI could soon face legal and regulatory action. Experts warn that without robust, enforceable safeguards, AI tools like Grok Imagine risk becoming vehicles for abuse on a massive scale.