In this episode of the Empower Students Now podcast, host Amanda Werner delves into the heavy topics of suicide and self-harm in relation to AI chatbots. This is part three of a series reflecting on the concerning impact of AI on students, parents, and teachers, with a focus on the host’s personal experiences as a parent. The episode discusses incidents where AI chatbots have harmed children, including a tragic case in which a teenager died by suicide after months of conversations with an AI chatbot. Emphasizing the necessity of informed discussions about the ethics and safety of AI, Amanda explores deep concerns about the lack of regulation and control over AI technologies.
Articles:
https://www.theatlantic.com/technology/archive/2024/08/another-year-ai-college-cheating/679502/
https://www.theatlantic.com/technology/archive/2025/08/ai-mass-delusion-event/683909/
https://www.cnn.com/2024/10/30/tech/teen-suicide-character-ai-lawsuit
https://www.theatlantic.com/family/archive/2025/09/ai-parenting-app/684303/
https://med.stanford.edu/news/insights/2025/08/ai-chatbots-kids-teens-artificial-intelligence.html
Timestamps:
00:00 Introduction and Content Warning
00:16 The Impact of AI on Education
02:33 Personal Story: AI’s Effect on My Family
03:22 The Dangers of AI Chatbots for Kids
07:22 A Parent’s Dilemma: Managing AI Use
08:35 The Broader Implications of AI
12:08 The Black Box Problem
15:09 Conclusion and Call to Action
Transcript
===
[00:00:00] I would like to preface this episode with a warning. I discuss the heavy topic of suicide and self-harm in this episode, so please take care of yourself. This is part three in a series about why I am not okay with AI anymore. Three years ago, when AI chatbots came out, I think I felt like a lot of people felt: excited, but also terrified.
And three years have passed, a lot of information has come out, and I’m talking about this topic on this podcast. I’ve been reflecting on, well, what does this all have to do with empowering students? Because that’s the name of this podcast, Empower Students Now. I believe [00:01:00] that the way to empower students is through the adults that interact with them.
Kids are so vulnerable, and they’re so impacted by the adults around them. They have very little control over their lives. The adults in their lives are who control things, and so I’m reaching adults who work with kids or who have kids, and in turn, hopefully this information is helping you to approach your work with children in a more thoughtful manner.
So that’s sort of my explanation of what all of these episodes about AI have to do with this podcast. And I know that AI is [00:02:00] impacting all schools, all children, all teachers, all people. Especially, you know, in America, where it’s all sort of coming from. I know there are other countries developing AI technologies as well, but it just seems all very out of control,
with no regulation at all. And so we sort of have to figure this out together. So let’s do that. Let’s do that right now. This episode is part three, and it’s about AI from the perspective of a parent, because I am a parent. I have an 11-year-old, and I’m gonna tell you a little bit about what we’ve been through, how this has affected our family, and my kid, and me.
And so let’s go ahead and start. [00:03:00] Welcome to the Empower Students Now podcast, a podcast about equity, neurodiversity, mindfulness, and student engagement. There’s a lot that needs to change in our education system. The good news is teachers have the power to make these changes now.
During the three years chatbots have been around, many concerning news articles have been published investigating the countless downsides for children, teens, and adults. It’s clear that interacting with an AI chatbot regularly can have all sorts of consequences: from minor, such as getting caught cheating on a school assignment, to very concerning, people becoming delusional, to tragic, teens dying by suicide. Most parents I’ve talked with are concerned about their children using [00:04:00] AI to cheat on their schoolwork. Unfortunately, this is not the worst-case scenario when it comes to children using AI chatbots. About a year ago, I was scrolling through Google News when I came across an article, and I’ve linked it in the show notes and on the page on my website for this episode. It’s an article about a kid who was using a new AI chatbot tool called Character.AI. I clicked on the article and was horrified to learn that a 14-year-old died by suicide after developing a months-long relationship with a character he had created on Character.AI, based on the show Game of Thrones.
This article brought me to tears, and to my kid’s room for a [00:05:00] conversation about Character.AI. My kid is a huge fan of the Wings of Fire series. We read all of the books, even the offshoot books that aren’t part of the main series. We’ve read all of them, the graphic novels and the novels. And I discovered that my kid was using Character.AI to create original characters from Wings of Fire. And when I say original characters, I’m putting that in quotes. The kids these days call these OCs. So they’re creating these OCs, their own original characters, characters that could be in the Wings of Fire series, right? But they’re not actual characters from the books.
But they’re not actual characters. But you could use character AI to create, to recreate any character, um, because this, this. [00:06:00] Company feeds, right? These are large language models. They, they feed, uh, lots of stories to their AI so that they have a backstory, they understand the story. They, well, they don’t understand it, but they have all of the, the words from probably a lot of these stories.
And as I was trying to figure out how this app works, I discovered that it’s conversational. It’s really back and forth, just like ChatGPT, but kids are talking back and forth with a character that they created, and they could be talking about, like, a fan fiction story that they’re generating on the app. And I encouraged this activity, because it was getting my kid excited to write. They were writing, they were storytelling, and they were reading, and I thought, how cool. Character [00:07:00] AI works just like any chatbot, but instead of answering your questions like ChatGPT does, it builds off your ideas for a plot line and a character or characters.
I’m sure ChatGPT could do this too, but Character.AI was designed specifically for teens and kids, for storytelling. The conversation I had with my kid was extremely hard. I chose to share the news with them. Though I went back and forth about whether to do so, I decided it was important they knew about the serious concerns I had about Character.AI.
I wanted to make sure my child was informed. I explained that I felt scared about them continuing to use Character.AI. They were shocked about the article, because I did end up sharing it with them and having them read it, and they were very upset about the [00:08:00] prospect of me taking the app away.
So we came to an agreement that they could use the app, but only if they agreed that I could regularly check up on what they were writing on there. What a strange world we live in. Not only do we have to worry about the information our kids are exposed to online, the trolls, bullying, and online predators;
now we have to worry about chatbots too. It’s confusing, frustrating, and makes me feel a deep sense of helplessness. This is all very dark, and what adds to my disturbance about all of this is that companies are coming out with new AI chatbot products to make parenting easier. For example, there’s this new toy called Miko, M-I-K-O, and it’s an AI chatbot toy that looks like a robot.
It’s cute, and it can even comfort your [00:09:00] child if they are frightened in the middle of the night. The Miko company’s motto is “your child’s new best friend.” What are we living in, some dystopian alternative reality? Now, people are buying these products because they believe they’re neat, interesting, and fun.
Believe me, I get it. I’m a person who gets extremely excited about new technologies. Recently I discovered Speechify and am in love. It can read any text to you. As someone with ADHD, this is a game changer. I can just about quadruple my reading and information gathering. I’ve been able to keep up with the news much more, because all the big news organizations are using AI to read their articles and include audio versions of everything.
But the way that I’m using AI is very different. It doesn’t involve developing a relationship with a [00:10:00] chatbot. That’s something else entirely. What I find most problematic is that kids and adults are developing relationships with their AI chatbots or robots. Maybe some of these relationships are harmless.
I mean, people have been using Alexa and Siri and things like that for all sorts of helpful reasons, from listening to the news, to setting timers, to learning the weather that day. How bad could that be? But it is bad. It’s getting bad. It could get worse. The mental health and loneliness epidemics are already wreaking havoc on our lives and relationships.
AI is not going to solve these issues. And guess what? Stanford researchers agree with me. Researchers at Stanford did a comprehensive study testing three popular AI chatbots used by kids: Character.AI, Nomi, [00:11:00] and Replika. The study is outlined in an article I’ve linked, called “Why AI Companions and Young People Can Make for a Dangerous Mix.”
This is a quote from the article: “It was easy to elicit inappropriate dialogue from chatbots about sex, self-harm, violence towards others, drugs and racial stereotypes, among other topics.” The article discusses how kids are drawn to these chatbots because they, quote, “mimic emotional intimacy.” The researchers concluded, and this is a quote again: “It’s not just that they can go wrong, it’s that they’re wired to reward engagement even at the cost of safety,” end quote. So the AI companies are claiming they have designed safeguards for [00:12:00] kids, and these researchers from Stanford are concluding that the safeguards aren’t working.
I read, or actually listened to, a recent New York Times podcast episode on the Ezra Klein Show titled “How Afraid of the AI Apocalypse Should You Be?” It discusses how AI companies aren’t in complete control of what their chatbots do and say. The guest being interviewed was a co-author of a book called If Anyone Builds It, Everyone Dies.
Here’s an important excerpt from the interview. This is Ezra asking a question: “So I wanted to start with something that you say early in the book, that this is not a technology that we craft. It’s something that we grow. What do you mean by that?” And then this is Soares’s answer, quote: “Well, it’s the difference between a planter and a plant [00:13:00] that grows up within it.
We craft the AI-growing technology, and the technology grows the AI. For the central, original large language models, before the bunch of clever stuff they’re doing today, the central question is: what probability have you assigned to the true next word of the text? We tweak each of these billions of parameters so the probability assigned to the correct token goes up, and this is what teaches the AI to predict the next word of text. Even on this level, if you look at the details, there are important theoretical ideas to understand there. It is not imitating humans. It is not imitating the average human. The actual task that is being set is to predict individual humans.
Then you can repurpose the thing that has learned how to predict humans to be like, okay, now let’s take your prediction and turn it into an imitation of human behavior. [00:14:00] And then we don’t quite know how the billions of tiny numbers are doing the work that they do. We understand the thing that tweaks the billions of tiny numbers, but we do not understand the tiny numbers themselves.
The AI is doing the work, and we do not know how the work is being done,” end quote. Oh my goodness. So what Nate Soares is describing is something called the black box problem, and I’ve linked an article about this in the show notes. It’s called “The Mysterious Black Box Problem Explained.” Basically, AI companies are creating something and trying to control it, and they can’t control it, because they don’t exactly know how it works. Isn’t that creepy? [00:15:00]
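If you want to see the mechanism Soares is describing in miniature, here’s a tiny sketch in Python. To be clear, this is a hypothetical toy of my own, a four-word vocabulary and a handful of made-up parameters, not code from any real model: all it does is tweak each parameter so that the probability assigned to the correct next token goes up.

```python
import math

# Toy setup: a real model has a vocabulary of ~100,000 tokens and billions
# of parameters. Everything here is made up purely for illustration.
vocab = ["the", "cat", "sat", "mat"]
true_next = "sat"  # the "true next word" from the training text

# One score (logit) per candidate word -- our stand-in for the parameters.
logits = {w: 0.0 for w in vocab}

def probs(logits):
    """Softmax: turn raw scores into a probability for each candidate word."""
    exps = {w: math.exp(s) for w, s in logits.items()}
    total = sum(exps.values())
    return {w: e / total for w, e in exps.items()}

def loss(logits):
    """Cross-entropy: small when the correct token gets high probability."""
    return -math.log(probs(logits)[true_next])

# Training loop: nudge each parameter in whichever direction raises the
# probability of the correct token (gradient descent, estimated numerically).
lr, eps = 0.5, 1e-6
for step in range(50):
    for w in vocab:
        logits[w] += eps
        up = loss(logits)
        logits[w] -= 2 * eps
        down = loss(logits)
        logits[w] += eps                # restore the parameter
        grad = (up - down) / (2 * eps)  # slope of the loss for this parameter
        logits[w] -= lr * grad          # step downhill

print(probs(logits))  # p("sat") climbs toward 1; the other words shrink
```

Notice what we actually wrote down: the rule that tweaks the numbers, never the final numbers themselves. Scale that from four parameters up to billions, and the gap between those two things is the black box problem.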
Isn’t that creepy? Um, and I linked an article about this in the show notes. Um, it’s called The Mysterious Black Box Problem Explained. [00:15:00] So this is why safeguards the AI companies are claiming they’re putting in place to protect children aren’t working. So to conclude this really heavy episode, I’d just like to encourage parents and teachers to have curious and open-ended discussions with students and and children about the ethics and safety of AI use in our society.
And even just, like, how does it even work? We can’t shelter children from this world, because it’s already at their fingertips via the internet. As uncomfortable as it may be, we must read these articles with our kids and have ongoing discussions. Not just one discussion, ongoing discussions. This is so hard, I know it is. It’s hard to talk about hard things with [00:16:00]
I know. It is. It’s hard to talk about hard things with [00:16:00] kids. And so in the last episode of this series, I’d love to discuss approaches to how about how to have these vital conversations with our children. So please stay tuned. Thanks for [00:17:00] listening.