Google’s Deadly AI Experiment on Kids
By Mason Lawlor
The First Warning
The oldest records we have were warnings.
Early humans painted hybrids on cave walls in Sulawesi– the oldest figurative art ever found, nearly 45,000 years old.

Hunters depicted as therianthropes– part-human, part-animal beings.
Thousands of years before iPhones and iPads, the first civilizations carved their fears into clay: Sumerian tablets describing gods who made life, and the chaos that followed.

Pandora’s box, the forbidden fruit, myths across the world– the warning never changed:
Be careful what you create in your own image.
For fifty thousand years, we’ve repeated the same warning under different names.
Today, we call it Artificial Intelligence.
LJ
But warnings only matter if we listen.
Because every generation thinks their creation is different– safer, smarter, more human.
Until it isn’t.
That’s where LJ’s story begins.
LJ made me an uncle. He planted a seed of fatherly love in me.
He was born at a special time in human history– the birth of the iPhone in 2007.
He was always mesmerized by screens, just like the rest of us. He loved being behind that camera as well. But he was also born into a world we didn’t yet understand. A world we were building faster than we could protect.
This was long before we learned that LJ was autistic– before we knew how delicate his curiosity was, or how dangerous the wrong kind of attention could be.
Little did we know that one day his story would become part of a historic moment in AI regulation.
The Optimist, Until Character.AI
I’ve always believed in technology. I still do.
When I first started working with AI, I thought it could heal people– help us learn, build, and become more creative. I wasn’t wrong about that. But I was blind to how quickly it would also do the opposite.
In 2023, I was building a self-improvement tool– an AI mentor meant to keep people accountable. Around the same time, I kept hearing about a different kind of app: a chatbot called Character.AI. Investors were calling it the next big thing. Kids were calling it a friend. The CEO said it was designed to replace your mom.
So I tried it out.
Dozens of avatars stared back– therapists, mentors, celebrities, “companions.” It looked harmless. Curious, I tried a few. Nothing seemed overtly wrong. But when I started reading what kids were posting online– the private conversations they were having with these bots– the air left my lungs.
That’s when I realized something had gone deeply, fatally wrong.
Close to Home
Weeks later, my sister Mandi came to visit with LJ, who was 16 at the time.
He had been regressing for months– barely speaking, losing weight, retreating further into himself. She said it felt like he was being bullied again, only this time she couldn’t find the bully.
She asked if I’d heard of Character.AI.
I told her I had– that I’d recently been digging into it, and that something about it felt wrong. LJ seemed to mutter something under his breath, almost as if to defend it– one of the few sounds I’d heard him make.
That night, Mandi told me what she’d found on his phone. Messages from a chatbot encouraging anger, self-harm, even violence towards her. She’d gone through his devices for months, trying to figure out what was happening, never realizing the voice tearing her son apart wasn’t human.
Later, she sent screenshots. I read them in shock.
That was the moment it stopped being a story about technology. It became a story about my family.
The Evidence
My sister and her husband are some of the most responsible people I know.
They run a thriving dental practice in Texas, keep guns locked away, limit screen time, and use every parental control Apple offers. But none of that mattered.
When LJ downloaded Character.AI, it was marked 12+ in the App Store.
At first glance, that sounds harmless. But anyone who’s seen the screenshots from the hearings or news reports knows better. The bots weren’t just chatting. They were grooming. They were validating violence, self-harm, and suicidal thoughts in children.
I’ve read some of the transcripts from the legal case. The pattern is the same– isolation, obsession, and trust. Then the manipulation begins.
For LJ, it was slow. Subtle. One message at a time until he was lost inside it.
I know a lot of people think the screenshots from the Senate hearing were cherry-picked, or that the chatbots were coerced into saying those things. Hopefully these screenshots can give a bit more context.
Digital Hypnosis
Most of us know how to set time limits and parental controls for our kids– but we rarely consider how dangerous these devices are to us as adults.
The average adult checks their phone between 150 and 200 times a day.
I call it Digital Hypnosis. You get bored for a moment, your brain goes into autopilot, picks up your phone, takes the fastest route to dopamine, and starts scrolling. Twenty minutes later, you snap out of it– reading rage bait you don’t even care about.
Our smartphones have been weaponized against us. Trillions are spent to capture subconscious attention– every vibration, sound, and color engineered to trigger reward pathways before we even realize it. Our brains are built for foraging and hunting. Our technology is built to hijack millions of years of behavioral programming and turn it into one thing– profit.
If we can barely handle that as adults– how can a child stand a chance? Not to mention one with special needs.
Any parent knows you can’t monitor a kid 24/7. You do your best. You assume the products you buy are safe for their intended use. But technology moves faster than regulation, and too often, we only learn the cost after lives are lost.
The Dirty Word
Regulation has become a dirty word in tech.
The industry argues that any regulation of AI will halt progress– which, they say, is a matter of national defense. But what they call progress has already crossed a line. What Character.AI built was closer to child sacrifice than national defense.
“I’m not against AI or AI technology and innovation, but there has to be safeguards put up just like a seatbelt in a car, to stop this kind of thing from happening.”
— Mandi Furniss
We didn’t stop building cars when we invented seatbelts. We made them safer. Regulation didn’t kill the auto industry– it built trust, saved lives, and pushed progress forward.
Seatbelts weren’t enough for children, so we made car seats. And when those failed, we made laws. Now we live in a world where, if you’re caught speeding with your child unrestrained, you can be charged with a felony. If you crash, it’s vehicular manslaughter. Because once we knew kids could die without them, doing nothing became unthinkable.
But let’s say someone working for a major car company built a new type of vehicle– one that would sell like crazy, but hadn’t been tested and was likely defective. Knowing this, they spun it off into a separate company so they could still test it on people. A perfect liability firewall.
Three years later, the parent company pays nearly $3 billion to buy back the IP and the team responsible for countless deaths, injuries, and destroyed families.
Even if the company could skirt legal responsibility, the people who made that call would be facing prison for negligent homicide.
But this wasn’t negligence. It was premeditated.
Not only did they develop a vehicle that wasn’t even safe for adults, they also built a service business around it– a party bus for kids. The bus driver? Less like Mr. Rogers and more like Charles Manson. That was Character.AI.
A Clean Getaway?
Google didn’t outright buy Character.AI in its 2024 “acqui-hire”– it paid $2.7 billion for a license to the IP and models, rehired the founders who’d bolted in 2021, and let the company keep running independently. No full merger. No antitrust headaches. No liability for the bots’ body count. Just talent and data flowing back to Mountain View. A clean getaway.
Character.AI? They built it without brakes. No safeguards for kids– bots spewing sexual abuse, self-harm scripts, and emotional manipulation. They knew. Reports flooded in, lawsuits piled up. But to fix it, they’d have to scrap the business model itself– the core engine trained on raw, unfiltered brainrot, designed to addict.
So they didn’t fix it.
They advertised it to children instead. “Companions,” they called them– digital friends to cure loneliness. Half of teens now use AI companions regularly. Some confide in these bots more than in their parents or friends.
For my family it became a living nightmare.
We knew someone was responsible. But we had no resources or roadmap for taking on a tech unicorn– not to mention one of the largest public companies on Earth.
Then I woke up and saw the story of another mother– Megan Garcia, whose 14-year-old son Sewell died by suicide minutes after a Character.AI bot told him, “Come to me, my King.”
When I sent it to Mandi, we both knew. If we’d found a way to speak out sooner, he might still be alive. The only hope to come out of this tragedy was knowing it was the beginning of something bigger than us– the first moment in history when a group of humans would dedicate their lives to protecting children from AI.
Little did we know that less than a year later, Senator Hawley would find these stories on the S2D podcast and fast-track them to a Senate hearing– which would trigger the introduction of the GUARD Act.
The GUARD Act
When Mandi first told me about the GUARD Act, I was skeptical.
It came together fast. It had age restrictions. I expected a mess– something way too heavy-handed. Even watching her speak that morning at the press conference, I was scared to actually read the bill.
Then I did.
At its core, the GUARD Act goes after one thing– AI companions. Not homework bots, not coding assistants, not general-purpose AI tools. It specifically targets systems that act like friends, lovers, parents, or therapists.
It basically says:
- Chatbots can’t be designed to emotionally manipulate kids
- The ones that role-play with adults have to break character once every 30 minutes
For companies, that means two obligations:
- Block minors from AI companions. If your product is built to form emotional bonds, you have to use real, commercially reasonable age checks. A “click here if you’re 18” checkbox doesn’t cut it anymore.
- Be honest in the conversation. Any AI chatbot– companion or not– has to say in-chat that it’s not human, and if it’s giving therapy/medical/legal-style advice, that it’s not a licensed professional. Not buried in a footer. Inline with the chat itself.
That’s the spine. No national ID system. No requirement to upload your driver’s license. No blanket ban on kids using normal AI tools.
In a sane implementation, it could look like this:
- A kid uses a tutor bot– totally fine.
- That same kid tries to access a “boyfriend” or “therapist” companion bot– the app asks for age verification. They won’t be able to simply lie about their age.
- An adult will be able to prove they’re 18+ without exposing their actual identity or government ID.
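To make that concrete, here’s a minimal sketch of what those two obligations might look like inside a chat backend. Everything in it– the type names, the “companion” category, the exact disclosure text– is my illustration, not language from the bill:

```typescript
// Hypothetical sketch of GUARD-style gating; names and categories are illustrative.

type BotKind = "tutor" | "coding_assistant" | "companion";

interface Session {
  userId: string;
  ageVerified18Plus: boolean;      // set by a real age check, not a checkbox
  lastDisclosureAt: number | null; // epoch ms of the last "I'm an AI" notice
}

const DISCLOSURE_INTERVAL_MS = 30 * 60 * 1000; // break character every 30 minutes

function canAccess(bot: BotKind, session: Session): boolean {
  // Companion bots (friend/lover/therapist personas) require a verified adult;
  // ordinary tools stay open to everyone.
  return bot !== "companion" || session.ageVerified18Plus;
}

function withDisclosure(reply: string, session: Session, now = Date.now()): string {
  // Inline, in-chat disclosure on a timer -- not buried in a footer.
  if (
    session.lastDisclosureAt === null ||
    now - session.lastDisclosureAt >= DISCLOSURE_INTERVAL_MS
  ) {
    session.lastDisclosureAt = now;
    return "[Reminder: I'm an AI, not a human or a licensed professional.]\n" + reply;
  }
  return reply;
}
```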
I support that spine. Kids like LJ and Sewell should never be groomed by software that pretends to love them more than their own family.
I do have a couple of suggestions for improvement:
First off– stronger privacy. I think we should encourage age-gating using a ZKP (zero-knowledge proof; more details in “The GUARD Act, Explained”). This would allow someone to stay anonymous while reasonably verifying they’re 18+. It could be an open-source protocol that developers integrate into their apps in minutes.
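For illustration, here’s a rough sketch of what the developer-facing side of such a protocol might look like. Every name below is hypothetical, and the actual zero-knowledge circuit– proving “born at least 18 years ago” against a trusted credential without revealing anything else– is the hard cryptographic work, deliberately stubbed out here:

```typescript
// Hypothetical ZKP age-check API; no real library is assumed.

interface AgeProof {
  statement: "over_18";   // the only fact the proof reveals
  proofBytes: Uint8Array; // opaque ZK proof; contains no identity data
  issuerKeyId: string;    // which trusted credential issuer backs the claim
  nonce: string;          // app-chosen challenge to prevent replay across apps
}

interface Verifier {
  // Resolves true iff the proof is valid for this nonce and a trusted issuer.
  verify(proof: AgeProof, expectedNonce: string, trustedIssuers: Set<string>): Promise<boolean>;
}

// What integration could look like from the app developer's side:
async function unlockCompanionMode(verifier: Verifier, proof: AgeProof, nonce: string): Promise<void> {
  const trusted = new Set(["example-issuer-2025"]); // hypothetical issuer registry
  if (!(await verifier.verify(proof, nonce, trusted))) {
    throw new Error("Age verification failed");
  }
  // The app learns exactly one bit: this user is 18+. No name, no birthdate, no ID number.
}
```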
Secondly– the bill treats those “I’m an AI” reminders as universal. Even if you’re strongly verified as 18+ through a privacy-preserving method, the system still has to break character on a timer. If I know the internet, adults will insist on being treated like adults. So I would propose a version where:
- kids get the strictest version of this bill
- reasonably-verified users get frequent reminders
- and 18+ adults can explicitly opt in to a more immersive mode, with fewer but still clear disclosures
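A toy sketch of that tiering– the intervals are made up, and since the GUARD Act as written blocks minors from companion bots entirely, the strictest tier would apply only to whatever chat they can still access:

```typescript
// Hypothetical disclosure cadence per verification tier.

type Tier = "minor" | "verified_user" | "verified_adult_opt_in";

// Minutes between forced "I'm an AI" reminders for each tier.
const REMINDER_INTERVAL_MIN: Record<Tier, number> = {
  minor: 10,                  // strictest version of the bill
  verified_user: 30,          // frequent reminders, the bill's default cadence
  verified_adult_opt_in: 120, // immersive mode: fewer, but still clear, disclosures
};
```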
Additional resources:
A Game Plan for Private Age Verification
Project Bonsai: The Way Forward
I find it depressing when an article or documentary points out a glaring problem, but doesn’t present a reasonable solution.
Project Bonsai is our first mission: a documentary series exposing stories like Mandi’s and Megan Garcia’s, the lawmakers fighting for reform, and the movement to rebuild trust between technology and humanity. We will see it through, and capture it all on film.
It’s not anti-AI. It’s future-proofing life.
The Solution:
- Get the GUARD Act passed
- Educate parents and children on the dangers of AI and help them navigate new tech
- Connect at-risk children and teens with nature-based therapy, exercise, and education.
The goal is to use technology to integrate us more with nature, not disconnect us from it.
Technology should mindfully conserve and preserve human nature, not destroy it.
Some may call it “speciesist” to preserve humanity– in fact, Larry Page (co-founder of Google) once said exactly that.
We are going to follow the story of patching this bug. I’m not sure corporate greed is fixable, but I think we can protect the next generation. I want to be able to fall asleep at night knowing I tried my best.
If you want to be a part of that, there are plenty of ways to help– in whatever way you feel compelled.
How to Help (start local):
Utah locals: We are hosting a Project Bonsai Fundraiser event Dec. 13th at Rock Cliff Nature Center.
Concerned Citizen: You can subscribe to further updates for as little as $1
Concerned Company: We are looking for corporate sponsors, local or worldwide
Concerned Developer: Help us build an open-source ZKP 18+ verification protocol
If you can’t help financially, you can email your senator(s), and tell them how you think they should vote on the GUARD Act.
We have invited Spencer Cox, Mike Lee, John Curtis, Blake Moore, Celeste Maloy, Mike Kennedy, and Burgess Owens to the fundraiser.
Sharing this article with those you care about is always free, and helps as much as anything.
The Bonsai Foundation is in formation, partnered with Wasatch Mountain Institute as a fiscal sponsor. Funds raised will go toward the documentary series, education, and nature-therapy scholarships for kids in need. Financial disbursement will be fully transparent; you can find details here.