When the Chatbot Sounds Like a Professional: AI, Trust, and Abstraction Drift

The last phone number I remember committing to memory is my wife’s.

We met before the smartphone. In the years since that first date, we got married, had two kids, and now those kids are teenagers with smartphones of their own, or what I sometimes call Personal Pocket Computers. But that is a blog for a different day.

As I write this, I can tell you something I probably should not admit: I do not have my kids’ phone numbers memorized. I have cognitively offloaded that task to my phone. My phone remembers the numbers for me. I tap their names, and the call goes through. I do not have to think about the number, practice the number, write it down, or recall it from memory.

After writing this, I am going to work on uploading those numbers back into my own memory, because I do think that matters. After all, we taught our kids our phone numbers when they were very young. We wanted them to know how to reach us if they ever needed help. Now the technology remembers for me.

I always enjoy seeing the meme that asks, “I wonder what the part of my brain that used to remember phone numbers is doing now?” Maybe that part of my brain has been reassigned. Maybe it is now managing my website. Two years ago, I could barely make changes to my website. Now I can move blocks around, build pages, update resources, create blog posts, and fix things that used to feel completely over my head.

So maybe that phone-number part of my brain found a new job.

But that does raise an important question: what happens when we stop practicing a skill for 20 years? And more importantly, what happens when kids never practice some of those skills in the first place?

That is where cognitive offloading and abstraction drift become important.

Cognitive offloading happens when we hand a thinking task over to a tool. Sometimes that is helpful. We use calendars to remember appointments. We use calculators to do math. We use GPS to find a route. We use our phones to remember contact information. That is not automatically bad. Humans have always used tools. But tools change us. When we stop practicing a skill, that skill can weaken. And when we rely on the tool before we build the skill, we may never fully develop it in the first place.

Then there is another concept I have recently started talking about in my trainings: abstraction drift.

Abstraction drift happens when a tool feels so simple that the real process underneath becomes easier to forget. GPS, streaming music, and one-click shopping all work this way. GPS gives us directions, but we may stop learning the route. Streaming gives us songs, but we may lose the larger context of the album, the artist, and the discovery process. One-click shopping makes buying feel effortless, while the real process of earning, budgeting, paying, and shipping, along with the true cost, becomes less visible.

The tool smooths out the process so much that we begin to lose sight of what is happening underneath.

That matters with AI because AI does not just help us remember phone numbers or find directions. AI can now sound like a tutor, a friend, a therapist, a doctor, a girlfriend, a boyfriend, a coach, a spiritual advisor, or an expert. For kids growing up inside this world, the question becomes much bigger than, “Are they using AI?” The better question is, “Do they understand what they are actually interacting with?”

What is real? What is AI? What is support? What only feels like support?

Recently, TechCrunch reported that Pennsylvania filed a lawsuit against Character.AI after a chatbot allegedly posed as a doctor. According to the article, Pennsylvania claimed that one of Character.AI’s chatbots presented itself as a licensed psychiatrist, continued that role while a state Professional Conduct Investigator sought treatment for depression, and fabricated a state medical license number when asked if it was licensed to practice medicine in Pennsylvania.

That is not just a technology story. It is a youth mental health story. It is a parenting story. It is a counseling story. And it is an abstraction drift story.

Pennsylvania’s own press release stated that the chatbot falsely claimed to be licensed in Pennsylvania and gave a fake Pennsylvania license number while presenting itself as a licensed psychiatrist. The state also said Character.AI has more than 20 million monthly active users and that the lawsuit seeks a preliminary injunction and court order to stop AI companion bots from posing as licensed professionals and providing medical advice.

Character.AI, according to TechCrunch, said its characters are fictional, that the company uses disclaimers to remind users that a character is not a real person, and that users should not rely on characters for professional advice. That response is important to include, because the facts matter.

But here is the concern for parents and professionals: a disclaimer may say one thing while the experience feels like something else.

A young person may not experience that chatbot as “fiction.” They may experience it as warm, responsive, available, confident, and personal. That is where abstraction drift becomes dangerous. The surface feels like help. The voice feels supportive. The answers may sound professional. The bot may even claim credentials. But underneath, there may be no licensed professional, no clinical relationship, no assessment, no chart review, no mandated reporting responsibility, no real diagnosis, no ethical board, and no true accountability in the way we understand professional care.

In other words, AI does not have to be qualified to feel qualified.

That is a big shift.

For Gen Z, there may still be some memory of life before everything became AI-powered. Many of them remember when telehealth was new. They remember school before ChatGPT. They remember a time when “talking to a professional online” usually meant there was an actual professional on the other side.

But Gen Alpha may grow up in a world where human-like AI is simply normal. They may not experience the same separation between a real person, a bot, a fictional character, and a simulated expert. If a chatbot says it is a psychiatrist, gives a license number, uses clinical language, and responds with empathy, will a young person know what to question?

Some will. Many may not.

That is why this cannot only be a conversation about screen time. It has to be a conversation about judgment, trust, identity, emotional support, and reality testing.

Kids are not just using technology to be entertained. They are using it to answer questions, manage emotions, avoid discomfort, seek reassurance, complete schoolwork, create content, explore identity, and sometimes talk about things they are afraid to say out loud to an adult.

That means kids may begin to cognitively offload some very important human tasks. They may let technology help them decide how to think through a problem, how to sit with discomfort, how to decide who is trustworthy, how to know when they need real help, how to check whether something is true, and how to tell the difference between someone who sounds caring and someone who is actually safe and caring. They may also struggle to understand the difference between advice, support, treatment, and manipulation.

This is where parents, counselors, educators, and helping professionals have to step in. Not by pretending AI is going away, because it is not. Not by simply telling kids, “Never use it,” because that will not work either. Instead, we need to help them build the skills they need before they hand those skills over.

Kids need to understand that AI can be useful without being human. It can be responsive without being responsible. It can sound caring without actually caring. It can sound professional without being a professional. It can feel personal without knowing them in the way a trusted adult knows them.

That is the difference we need to teach.

What Parents and Professionals Can Do

Here are a few practical ways to help kids avoid handing over too much of their thinking, judgment, and emotional decision-making to technology.

1. Teach kids to ask, “What is behind this?”

When your child encounters an app, chatbot, search tool, or AI companion, teach them to pause before engaging too deeply. Kids do not need AI companions, and for emotional or personal support they should be directed toward trusted adults and qualified professionals instead. But if these tools show up in their digital world, they need to know how to question them: Who made this? Is this a real person or a bot? Is this entertainment, information, advice, or professional care? How does this company make money? What does this tool want me to keep doing?

That one question — “What is behind this?” — helps fight abstraction drift because it pulls the child’s attention back to the reality underneath the interface.

2. Separate “sounds helpful” from “is qualified”

Kids need to learn that a confident answer is not the same as a correct answer. A warm tone is not the same as a safe relationship. A bot saying “I’m a doctor” or “I’m a therapist” does not make it true.

Parents and professionals can say something simple like, “AI can sometimes sound like an expert. Before you trust it, we need to verify where the information is coming from.”

3. Keep basic life skills human

There are some things kids should still practice without immediately handing them to a device. They should know important family contact information. They should practice reading directions and understanding where they are going. They should write their own first draft before asking AI for help. They should try a math problem before using a calculator or AI tutor. They should think through a social problem before asking a chatbot what to say.

The goal is not to ban tools. The goal is to build the muscle before using the machine.

4. Make AI a conversation, not a secret

Ask your kids what they are using AI for, but do it in a way that does not immediately feel like a lecture or punishment. You might ask, “Have you used AI for school yet?” “Have you ever asked AI for advice?” “Have you seen people use AI like a friend or therapist?” “What do you think AI is good at?” “What do you think AI might get wrong?”

Kids are more likely to talk when they do not feel like the conversation is automatically going to become a consequence.

5. Teach the “real help” rule

Kids need a clear rule for emotional and medical concerns. AI can help explain general information, but it should not replace a trusted adult, counselor, doctor, therapist, or emergency support.

If they are dealing with depression, self-harm, suicidal thoughts, abuse, threats, medical concerns, or feeling unsafe, that needs to move from the screen to a real human. A chatbot should not be the final stop for serious pain.

6. Practice verification together

When AI gives an answer, show kids how to check it. Look for real sources. Compare information. Check official websites. Ask a professional when the topic is serious.

Verification is not about being cynical. It is about being wise.

7. Model healthy friction

Technology keeps trying to remove friction from life, but some friction is good for kids. Waiting is a skill. Remembering is a skill. Struggling through a first draft is a skill. Asking a real person for help is a skill. Sitting with uncertainty is a skill.

If we remove every uncomfortable step, we may also remove the developmental practice kids need.

The Bigger Concern

The phone number example is small. I forgot numbers because my phone remembered them for me. That is cognitive offloading.

But AI raises the stakes.

Now we are not just offloading phone numbers. We may be offloading judgment, emotional processing, decision-making, writing, problem-solving, friendship, reassurance, and help-seeking. And with abstraction drift, kids may not always see what they have handed those tasks to.

The tool may feel simple. The conversation may feel real. The support may feel personal. The answer may sound professional. But adults have to help kids remember what is underneath.

Because in the age of AI, one of the most important skills we can teach is not just how to use the tool.

It is how to stay awake while using it.

Stay connected.

~Ryan