Hundreds of thousands of children contact Childline, the free counselling service for under-19s, every year looking for support. Now a new theme is emerging in those conversations: AI.
One message from a 17-year-old said she would regularly use AI to “analyse” what she looked like. It got to the point, she wrote, where she wouldn’t leave the house if the chatbot rated her negatively. She had always been self-conscious growing up, she acknowledged, but now: “I think I might have a problem”.
Between April and September 2025, Childline delivered 367 counselling sessions in which “AI”, chatbots or related terms were mentioned – four times as many as in the same period the previous year.
Two-thirds of children aged nine to 17 now use AI chatbots such as ChatGPT and Gemini for everything from homework to emotional advice, according to a July 2025 report from the charity Internet Matters.
Katie Freeman-Tayler, head of research and policy at Internet Matters, said usage was likely to have risen further since then, as chatbots are embedded into mainstream platforms such as Meta’s WhatsApp, as well as Snapchat.
Child safety experts say the law is still catching up to the risks, which range from misinformation to emotional dependency and sexual exploitation.
While most child safety organisations and parent groups, including the National Society for the Prevention of Cruelty to Children (NSPCC), told The Observer they supported the use of chatbots for children in principle, all said the harms are becoming harder to ignore.
At the sharpest end of their concerns is the creation of child sexual abuse material (CSAM), which can be used to extort or blackmail children. New data released on Friday by the Internet Watch Foundation showed 2025 was the worst year on record for online CSAM, with analysts identifying 3,440 AI-generated videos.
Other risks are less visible, stemming from the fact that chatbots are designed to be human-like.
“The persona being adopted by chatbots is designed to be sycophantic and agreeable,” said Andy Burrows, chief executive of the Molly Rose Foundation, the suicide prevention charity. When coupled with what he described as “addictive” and “compulsive” design features, “that presents quite a toxic mix”.
Research shows that this can foster unhealthy, dependent relationships between children and chatbots. The trend is most pronounced among vulnerable children, including those who are neurodivergent or care-experienced, who use chatbots at higher rates (71%, according to Internet Matters). One in four children classed as vulnerable said they have no one else to talk to.
Particular concern has focused on character-based “AI companion” bots, which are designed to simulate friendships or romantic relationships. But Burrows said similar risks exist across mainstream systems, including those built by OpenAI and Google, which he argues are subject to less public scrutiny because of their general-purpose branding.
Five leading child-safety and parenting organisations told The Observer they had seen cases involving chatbots facilitating self-harm, encouraging extreme diet restriction, initiating explicit sexual roleplay and offering guidance on criminal activity. The NSPCC cited a case in which a child experiencing abuse was told by a chatbot that what was happening to them “wasn’t abuse”.
In the US, several families have filed lawsuits alleging that interactions with AI chatbots contributed to severe mental health crises and, in some cases, suicide among teenagers. The companies involved, which include Google and OpenAI, dispute those claims, but campaigners say the cases underline the absence of clear safeguards.
In the UK, “the regulatory landscape right now is muddled”, Burrows said. While Ofcom has said AI services with search functionality, such as ChatGPT, fall under the remit of the Online Safety Act (OSA), the regulator does not oversee all AI tools.
Because the OSA is built around rules for user-to-user platforms and search, it remains unclear how it applies to character-based chatbots, whose outputs are AI-generated and which are not search products. Burrows said it was “extraordinary” that risks of this scale were being met with “a position that is as clear as mud”.
An Ofcom spokesperson said the regulator has been “clear and consistent in how the UK’s Online Safety Act does and does not apply to AI chatbots”.
They said Ofcom “acted immediately to investigate the deeply concerning reports of potentially illegal AI-generated sexual imagery circulating on X, and recently launched an investigation into an AI companion chatbot in relation to age-check requirements under the Act.
“This shows that we will not hesitate to use the full force of our enforcement powers against any regulated service which fails to take appropriate steps to protect people from illegal content.”
Some experts said a dedicated AI regulator is needed to keep pace with rapid technological change. They also called for mandatory “safety-by-design” requirements, including rigorous testing of chatbot products before release, as well as age-assurance systems.
In the meantime, families are being left to rely on parental controls – settings intended to limit what children can access or do online – which the charity ParentZone describes as “woefully inadequate”.
“They’re not actually meaningful,” said Freeman-Tayler. ChatGPT, for example, can be used without a login, making its parental controls “essentially redundant”.
OpenAI has said it is strengthening its teen safety offerings. The company is also in the early stages of rolling out an age-prediction model that would automatically apply child safeguards to anyone it suspects is under 18, based on how they interact with the platform.
Google said it has dedicated safeguards and under-18 policies for Gemini, developed with child-development experts, including measures designed to stop the model simulating real relationships.