When AI Crosses a Line: Deepnudes, Grok, and Real Concerns

I’ve been talking about deepnudes for years in my trainings.

A deepnude is an image where someone’s clothes are digitally removed—or a nude body is entirely created—using AI. Five years ago, this technology was clunky and fairly easy to spot. Today, it’s nearly indistinguishable from a real photo. Research consistently shows that this isn’t happening evenly across genders: approximately 99% of non-consensual sexual deepfake images target women and girls. Over the years, I’ve spent a lot of time talking about the emotional, social, and psychological impact this technology can have on women and teens—and that impact only grows as the images become more realistic and easier to create.

That’s why a recent update to Grok caught my attention.

Grok is one of my go-to AI image and video creation tools. Over the holiday break, I had a lot of fun with it—making videos of my dog, our Elf on the Shelf, and even wooden statues talking and moving. It was harmless, creative, and honestly fun for the whole family.

But here’s where my concern comes in.

Unlike some other platforms, Grok does not add a visible watermark to show that content is AI-generated. And while I enjoy using the tool, the potential for misuse is high. Very high.

Recently, I came across an article that confirmed many of the concerns I’ve been raising for years. The article describes global backlash over Grok’s ability to generate sexualized images—including realistic images of real people without their consent. This is exactly the direction many of us worried AI image tools could head if safeguards didn’t keep pace with the technology.

Key takeaways:

  • AI image tools are advancing faster than policies, protections, and public understanding.
  • The ability to create realistic, sexualized images of real people—especially women and teens—raises serious ethical and psychological concerns.
  • When AI content isn’t clearly labeled, it becomes harder to tell what’s real, what’s manipulated, and what’s intentionally harmful.
  • These tools don’t just live in adult spaces; they inevitably trickle down into teen culture and peer dynamics.

What adults can do right now:

  • Talk early and often. Don’t wait for an incident. Ask kids what AI tools they’ve seen or heard about and how people are using them.
  • Explain consent clearly. Help young people understand that creating or sharing AI images of someone’s body without permission is harmful—even if it’s “fake.”
  • Discuss permanence and impact. AI-generated images can spread quickly and cause genuine emotional harm, even though they aren’t real.
  • Pay attention to tools, not just platforms. Many risks today come from standalone AI apps, not just social media.
  • Model curiosity with caution. It’s okay to explore AI creatively, but it’s also important to name where fun ends and harm begins.


If something does happen:

One of the best, most practical resources I recommend is from the National Center for Missing & Exploited Children. They’ve put together clear guidance on what to do if AI-generated or manipulated images are created or shared—especially involving minors. It walks adults through documentation, reporting, and next steps in a calm, actionable way.

I’m not anti-AI. I use it. I teach about it. But as the tools become more powerful, our conversations—and our guardrails—need to become stronger.

As always, my goal isn’t fear—it’s awareness, conversation, and thoughtful use in a world that’s changing fast.

Stay connected. 

~ Ryan