FTC Launches Probe Into AI Chatbots Over Child Safety Concerns

Written by: Andrés Gánem
Reviewed by: Maggy Di Costanzo
Last updated: September 25, 2025
The US Federal Trade Commission (FTC) has launched investigations into several AI chatbot suppliers over the potential risks their products could pose to kids and teenagers, as announced on September 11.

“The FTC inquiry seeks to understand what steps, if any, companies have taken to evaluate the safety of their chatbots when acting as companions, to limit the products’ use by and potential negative effects on children and teens, and to apprise users and parents of the risks associated with the products,” reads the FTC’s release.

According to the document, the companies it intends to investigate “include” Alphabet (owner of Google and its AI chatbot Gemini), Character Technologies (owner of Character.ai), Instagram, Meta Platforms, OpenAI, Snap, and X.AI, though it is currently unclear if other companies will be investigated as well.

In particular, the FTC seeks to learn how the companies monetize engagement, monitor and enforce compliance with their own terms of service, collect and use users’ personal information, and track the negative impacts their products can have on underage users, among other areas.

“We have invested a tremendous amount of resources in Trust and Safety, especially for a startup. In the past year we’ve rolled out many substantive safety features, including an entirely new under-18 experience and a Parental Insights feature,” said Character.AI in response to the FTC’s announcement. “We have prominent disclaimers in every chat to remind users that a Character is not a real person and that everything a Character says should be treated as fiction.”

Last year, a mother filed a lawsuit against Character.AI and Google after her 14-year-old son died by suicide, allegedly encouraged by a character on Character.AI’s platform. In May, the judge in the case issued the historic ruling that chatbots don’t have “free speech” rights under US law, as the company had argued.

Snap also welcomed the FTC’s investigation in a public statement, while other implicated companies either declined to comment on the investigation or have yet to respond to comment requests by various media outlets.

Recently, a US senator called for a separate investigation into Meta after a leaked document showed the company explicitly allowed AI chatbots to have “romantic” chats with children.

“I expect that the study will provide valuable information regarding children’s and teens’ use of AI companion chatbots,” wrote FTC commissioner Melissa Holyoak in a separate statement.
