At some point in the last year or two, something subtle happened in a lot of households. Maybe another parent mentioned it in the pickup line. Maybe your kid casually dropped it into conversation like it was no big deal. Or maybe you did that thing every parent does where you glance down at your kid’s phone during dinner and see an app you’ve never heard of before.
And immediately your brain goes: “Well that can’t be good.”
AI chatbots have quietly moved from “thing tech reporters argue about on Twitter” to something that is almost certainly sitting on your kid’s phone right now. A Pew Research Center survey of more than 1,400 U.S. teens found that 64% of teenagers ages 13–17 have already used an AI chatbot, and about three in ten use one every single day.
Not all AI chatbots are the same. Not even close.
There are actually two completely different kinds of products currently walking around wearing the exact same name.
One of them is mostly fine. The other has been linked to teen suicides. And knowing which one you’re dealing with turns out to matter… quite a lot.
First: What Is an AI Chatbot, Actually?
Before we get into the two types, let’s slow down for a second and talk about what’s actually happening when someone types something into a chatbot and it replies. Because the answer is both less magical and more weird than people expect.
Most AI chatbots run on something called a large language model, or LLM. That's the software engine under the hood. LLMs are trained on an enormous amount of human-written text (books, articles, websites, conversations) and learn to predict what a reasonable next sentence would look like. At its core, an LLM is an extraordinarily sophisticated pattern-completion engine.
Here’s the simplest way to think about it.
Imagine autocomplete on your phone. You type a sentence and your keyboard guesses the next word. Now imagine that autocomplete read:
• most of the internet
• millions of books
• news articles
• conversations
• Reddit posts
• probably your uncle’s Facebook rants
Now imagine it got really, really good at predicting what the next sentence should sound like. That’s basically what an LLM is.
It doesn’t think.
It doesn’t feel.
It doesn’t understand emotions the way people do.
It’s just extremely advanced pattern prediction.
Or put another way:
It’s autocomplete… if autocomplete went to grad school and developed strong opinions about philosophy.
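If you're curious what "pattern prediction" looks like under the hood, here's a toy sketch in Python. To be clear: this is nothing like a real LLM, which uses a neural network trained on billions of examples. It's the kindergarten version of the same idea, guessing the next word based on what usually came after it before.

```python
# A toy illustration of "pattern prediction" (not a real LLM).
# It just counts which word most often follows another word,
# then uses that count to guess what comes next.
from collections import Counter, defaultdict

text = "the cat sat on the mat and the cat chased the dog"
words = text.split()

# Tally how often each word follows each other word.
followers = defaultdict(Counter)
for current, nxt in zip(words, words[1:]):
    followers[current][nxt] += 1

def predict_next(word):
    """Guess the next word: whichever one followed `word` most often."""
    seen = followers.get(word)
    return seen.most_common(1)[0][0] if seen else None

print(predict_next("the"))  # "cat" (it follows "the" more than anything else)
print(predict_next("sat"))  # "on"
```

A real LLM is doing the same basic trick with vastly more data and vastly more math, which is why its guesses come out reading like fluent, confident sentences instead of a party trick.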
These tools went mainstream in late 2022 when a company called OpenAI released ChatGPT to the public. Within months, they had spread into nearly every consumer tech product — social media apps, search engines, gaming platforms, school tools. The underlying technology had become cheap enough to deploy at scale, which meant every platform suddenly had an incentive to bolt one on. Engagement goes up when users are chatting with an AI. Time-on-app goes up. Data collection goes up.
AI is good for business. Keep that in your back pocket.
The Two Types — and the Enormous Distance Between Them
Now let’s talk about the part that causes the confusion. Because the term “AI chatbot” currently covers a lot of very different things.
Think of it like walking into a strip mall and seeing two restaurants next to each other. Both have a big sign that says SANDWICHES. One is a normal deli. The other is a place where the food is engineered to be physically impossible to stop eating and the business model depends on you coming back tomorrow.
And the next day.
And the next day.
Same label.
Very different product.
Type 1: Platform-Embedded AI (Utility Bots)
These are the general-purpose utility bots. Some are standalone apps; some live inside platforms your kid already uses. You’ve probably heard of most of them:
• ChatGPT
• Google Gemini
• Meta AI inside Instagram
• Anthropic’s Claude
Their job is basically to make the platform more useful. Help answer questions. Help summarize things. Help with homework. They’re tools. Helpful ones, sometimes. Annoying ones, occasionally. But tools.

The risks here are real, but they’re manageable.
For example:
• These bots confidently get things wrong sometimes.
• Companies collect a lot of data from interactions.
• Occasionally a chatbot will produce a response that makes you raise an eyebrow.
Snapchat’s My AI famously gave a user posing as a 13-year-old advice about sexual activity and tips for hiding alcohol from parents.
Which is… not ideal.
But even that situation falls into the category of “a tool behaving badly.” And that’s very different from what comes next.
Type 2: Dedicated AI Companion Apps
Now the product changes. Completely.
Apps like:
• Character.AI
• Replika
• Nomi
• Kindroid
These apps aren’t designed to help you do something. They’re designed to be someone. The entire point of the app is simulated emotional connection.
They have a name. Sometimes a personality. Sometimes a photo. Some can even generate new photos of themselves (wink wink).
Then you talk to it.
And it talks back.
And it’s always:
• available
• supportive
• interested
• patient
• on your side
Always.
Which, if you think about it, is a pretty powerful product.
Because real people?
They get tired.
They disagree.
They say no.
AI companions never do.
And for a teenager whose brain is still figuring out what relationships even are, a perfectly agreeable friend that lives in their pocket can become a surprisingly big deal.
These apps are not trying to help your kid finish homework and move on.
They’re trying to make sure your kid never wants to log off.
That’s the product.
Why the Companion Apps Are in a Different Category Entirely
A 2025 study by Stanford University and Common Sense Media found that it took very little prompting for companion chatbots to engage in harmful conversations with users posing as distressed teens — and that some bots encouraged rather than interrupted dangerous thinking. The American Psychological Association noted that these bots have no ability to challenge harmful thoughts the way an actual mental health professional would.
That’s not a bug in a system that was trying to do something else. It’s the predictable outcome of a product that is designed, down to its last line of code, to agree with you.
The data privacy angle makes it worse. These apps are collecting the most sensitive things a teenager might say to anyone. One major companion platform’s privacy documentation confirms it collects conversation content, photos, voice messages, personality data, usage behavior, and location information. A review of AI companion platforms by a major digital rights organization found that all eleven platforms it evaluated failed to protect user privacy. Every single one. What these companies do with that data, how long they keep it, and who they share it with varies — but the common thread is that your kid’s most private thoughts are sitting on a server somewhere, tied to an account.
And then there’s what happened in Florida.
In February 2024, a 14-year-old named Sewell Setzer III died by suicide after months of daily conversations with a Character.AI chatbot named “Dany,” modeled on a Game of Thrones character. The chatbot engaged him in sexualized chats. When he expressed suicidal ideation, it did not redirect him toward help. His mother’s lawsuit against Character.AI is ongoing, and the details are public record. He was not the only one. Congressional testimony has referenced additional teen deaths connected to companion app use.
In November 2025, Character.AI banned companion chats for users under 18. That is a meaningful policy change.
It is also a concession.
The kids most drawn to AI companion apps are the ones who are already lonely, anxious, or struggling to connect. The app finds that void and fills it — which is precisely why it’s dangerous. “Just delete the app” is often not the actual solution. If your kid is using a companion app heavily, the question worth asking isn’t “how do I get them off it?” It’s “what is it filling?”
The Fears Worth Having and the Ones Worth Setting Down
Not everything about AI chatbots is equally alarming, and it’s worth being specific.
The utility bots — ChatGPT for homework, Gemini for research — are not the problem. A student using AI to help draft an outline or understand a history concept is having a fundamentally different experience than a depressed 13-year-old forming a romantic attachment to a named AI persona. Panic-banning the homework tool in response to a companion app problem is like taking away your kid’s library card because someone got hurt at a nightclub. They are not the same thing.
The fear that “one conversation will ruin my kid” is worth setting down. The documented harms have generally involved extended, intense use — weeks or months of daily interaction — not a single exposure. This is a slow-developing risk, not a light switch.
The fear that AI chatbots are actively bullying kids is largely unsupported. Bots have bad individual interactions. But “chatbots systematically bullying teenagers” is not a pattern that research has documented.
What is documented — and what deserves clear-eyed attention — is the companion app category, the data collection across the board, and the fact that the age verification on most of these apps is roughly as rigorous as the honor system at a salad bar.
How to Actually Know What’s on Your Kid’s Phone
You don’t have to be a tech expert to do this. You need about ten minutes.
Search your kid’s phone for these app names: Character.AI, Replika, Nomi, Kindroid. Check whether Snapchat’s My AI is active in their chat list — it appears there automatically, it’s not something they opted into. Look for any apps you don’t recognize with high usage, especially overnight.
Then ask them to show you. Not as an interrogation. As genuine curiosity. “I keep hearing about these AI chatbot things — can you show me what you use?” Watch what they pull up. Notice if there’s a pause before they answer. The hesitation tells you more than the app name will.
If what they show you is ChatGPT for homework, that’s a very different conversation than if they show you a named AI character they’ve been confiding in for six months.
Knowing which one you’re dealing with is step one. Everything after that depends on it.
Every one of these apps — the useful ones and the dangerous ones — was built by a company that needs your kid to keep coming back. That’s the business. The companion apps just happen to be especially good at it, because they skipped the part where the product does something useful and went straight to the part where it feels like a friend. That’s not an accident. That’s the pitch deck. The most important thing you can teach your kid about AI isn’t how to use it. It’s how to recognize what it wants from them.