Character.AI is walking the safety tightrope

AI TECH

Character.AI has launched a new Parental Insights feature that lets teens send a weekly summary of their chatbot activity to a parent or guardian.

The report includes how much time they’ve spent on the app, which characters they’ve spoken to most, and how long each conversation lasted.

What it doesn’t include is the content of those chats.

The feature is optional: teens turn it on themselves in their settings, and no parental account is required.

This update is part of a wider effort to address ongoing concerns about young people spending too much time on AI chat apps or being exposed to inappropriate content.

Character.AI, which is especially popular with teens, has faced legal complaints over exposing minors to sexually explicit material and references to self-harm.

In response, the company made a few changes:

  • Under-18s are now on a filtered version designed to avoid sensitive topics.

  • There are clearer warnings reminding users that the bots aren’t real.

  • Content moderation tools have been improved.

Fixing things... under pressure

With growing scrutiny of AI regulation and children’s online safety, this likely won’t be the last safety move we see from platforms like Character.AI.

POV: You’ve just been snitched on by your own app.