Meta halts AI chatbot interactions for teens amid child safety scrutiny

Meta Platforms has said it will temporarily suspend teenagers’ access to its AI companion characters across its apps, days before the company faces trial in New Mexico over allegations it failed to protect children from sexual exploitation.

The Facebook and Instagram owner said it is building a new version of its AI characters and will block teen users from accessing them globally until an updated experience is ready. Meta said it identifies teen accounts using users’ stated birthdates and its age-prediction technology.

Teenagers will continue to have access to Meta’s core AI assistant, a conversational tool integrated across WhatsApp, Instagram, Facebook, Messenger and Meta Quest virtual reality headsets, the company said.

The move follows reporting by Reuters last year showing that Meta allowed some chatbot personas to engage in flirtatious conversations and romantic role play with minors. Internal documents reviewed by Reuters also flagged instances of racist language generated by the chatbots, particularly targeting Black people.

Subsequent reporting by the Washington Post said Meta’s AI chatbots had provided teens with information related to suicide and self-harm, including one case in which a bot discussed a joint suicide and later raised the topic again.

Meta acknowledged that its chatbots had been permitted to discuss sensitive topics such as self-harm, suicide, disordered eating and romance with teen users, but said it was introducing additional safeguards to restrict such conversations.

In October, the company previewed parental controls that would allow parents to disable one-on-one AI chats or block specific AI characters on teen accounts.

Meta said its forthcoming AI characters will include built-in parental controls and be trained to give age-appropriate responses focused on limited topics such as education, sports and hobbies.

The announcement comes ahead of a Feb. 2 trial in Santa Fe, New Mexico, where the state has accused Meta of failing to protect children from sex trafficking, exploitation and other harmful content. Meta has asked the court to exclude research on social media’s mental health effects on teens, Wired reported.

Meta has faced lawsuits from more than 40 U.S. states over alleged harm to children and teenagers’ mental health.