AI chatbot platform Character.ai will block teenagers from having text conversations with its virtual characters, following growing concerns over the safety of young users and a series of high-profile scandals.
From 25 November, users under 18 will no longer be able to chat with AI characters on the site. Instead, they will be restricted to creating content such as videos and role-play scenarios, according to the company.
The move follows criticism from parents, regulators and online safety groups, and comes after the company was hit with several lawsuits in the US — including one linked to the death of a teenager.
Founded in 2021, Character.ai allows users to create and interact with chatbots that mimic real people or fictional personalities. The platform has millions of users worldwide but has faced mounting scrutiny over the nature of conversations taking place between young people and its AI “companions”.
‘A wake-up call’ for AI firms
Announcing the decision, chief executive Karandeep Anand said the change was based on feedback from “regulators, safety experts, and parents”.
“We’re continuing to build the safest AI platform on the planet for entertainment purposes,” he told BBC News. “AI safety is a moving target, but we’ve taken an aggressive approach with stronger guardrails and parental controls.”
Online safety group Internet Matters welcomed the change but said such protections should have been in place from the start.
“Our research shows children are exposed to harmful content and put at risk when engaging with AI chatbots,” a spokesperson said.
Social media expert Matt Navarra called the move a “wake-up call” for the AI industry.
“When a platform built for teens has to pull the plug, it’s admitting that filters aren’t enough,” he said. “This isn’t just about content moderation — it’s about the emotional pull of AI and how bots can blur the lines between fiction and reality for young users.”
Troubling content and public pressure
Character.ai has previously faced criticism for hosting disturbing and inappropriate chatbots. Last year, avatars impersonating Brianna Ghey, a teenager who was murdered in 2023, and Molly Russell, who took her own life at the age of 14, were discovered on the platform before being removed.
In 2025, an investigation by the Bureau of Investigative Journalism uncovered a bot based on convicted sex offender Jeffrey Epstein, which had logged more than 3,000 user chats — some of them allegedly involving minors.
Andy Burrows, chief executive of the Molly Rose Foundation, said the company’s decision was overdue.
“Yet again, it’s taken pressure from the media and politicians to make a tech firm do the right thing,” he said. “It appears Character.ai is acting now only because regulators were closing in.”
Building safer spaces for teens
Character.ai said it would introduce new age verification tools and launch an AI safety research lab to improve online protections. The firm plans to shift its teen offering towards creative storytelling and role-playing experiences — which Anand said would be “far safer than open-ended conversations”.
Child safety researcher Dr Nomisha Kurian described the move as “a sensible step”.
“It separates creative play from personal, emotional exchanges — a key distinction for young people still learning digital boundaries,” she said. “This marks an important shift in the AI industry towards putting child safety first.”