
FTC Launches Probe Into AI Chatbots Over Child Safety Concerns
The US Federal Trade Commission (FTC) has launched investigations into several AI chatbot providers over the potential risks their products pose to children and teenagers, the agency announced on September 11.
“The FTC inquiry seeks to understand what steps, if any, companies have taken to evaluate the safety of their chatbots when acting as companions, to limit the products’ use by and potential negative effects on children and teens, and to apprise users and parents of the risks associated with the products,” reads the FTC’s release.
According to the document, the companies it intends to investigate “include” Alphabet (owner of Google and its AI chatbot Gemini), Character Technologies (owner of Character.ai), Instagram, Meta Platforms, OpenAI, Snap, and X.AI, though it is currently unclear if other companies will be investigated as well.
In particular, the FTC seeks to learn how the companies monetize user engagement, monitor and enforce compliance with their own terms of service, collect and use users' personal information, and track the negative impacts their products can have on underage users, among other areas.
“We have invested a tremendous amount of resources in Trust and Safety, especially for a startup. In the past year we’ve rolled out many substantive safety features, including an entirely new under-18 experience and a Parental Insights feature,” said Character.AI in response to the FTC’s announcement. “We have prominent disclaimers in every chat to remind users that a Character is not a real person and that everything a Character says should be treated as fiction.”
Last year, a mother filed a lawsuit against Character.AI and Google after her 14-year-old son died by suicide, which a character on Character.AI's platform had allegedly encouraged. In May, the judge in the case issued a historic ruling that chatbots do not have “free speech” rights under US law, rejecting the company's argument to the contrary.
Snap also welcomed the FTC's investigation in a public statement, while the other companies named either declined to comment or have yet to respond to requests for comment from various media outlets.
Recently, a US senator called for a separate investigation into Meta after a leaked document showed the company explicitly allowed AI chatbots to have “romantic” chats with children.
“I expect that the study will provide valuable information regarding children’s and teens’ use of AI companion chatbots,” wrote FTC commissioner Melissa Holyoak in a separate statement.