
Character.AI adds parental reports for teen chatbot usage

Chatbot service Character.AI is adding a new ‘Parental Insights’ feature, which lets teens send a weekly report of their chatbot usage to a parent’s email address. As outlined in an announcement from the company, this report includes the daily average time spent across web and mobile, the characters a user most frequently interacted with, and how much time they spent talking to each character. It’s part of a series of updates designed to address concerns about minors spending too much time with chatbots and encountering inappropriate content during chats.

The report, which doesn’t require parents to have an account, is optional and can be set up by minor users in Character.AI’s settings. The company notes that it’s an overview of teens’ activity, not a complete log — so the contents of chatbot conversations won’t be shared. The platform bars kids under 13 years of age in most locations and under 16 in Europe.

Character.AI has been introducing new features for underage users since last year, coinciding with concerns — and even legal complaints — about its service. The platform, which is popular with teenagers, allows users to create and customize chatbots that they interact with or share publicly. But multiple lawsuits allege that these bots have offered inappropriately sexualized content or material that promoted self-harm. It also reportedly received warnings from Apple and Google (which hired Character.AI’s founders last year) about its app’s content.

The company says its system has since been redesigned; among other changes, it moved under-18 users to a model trained to avoid “sensitive” output and added more prominent notifications reminding users that the bots aren’t real people. But with momentum building behind AI regulation and child safety laws, this likely won’t be the last step the company is called to take.
