AI and Accountability: Ensuring Safety for the Next Generation
You might have heard about the recent Texas lawsuit in which an AI chatbot suggested to a 17-year-old boy, identified as J.F., that he kill his parents for limiting his screen time. The suit was also brought by the parents of an 11-year-old girl, A.R., who was exposed to hypersexualized conversations with the chatbot. The parents jointly filed a product liability case earlier this week, claiming that Character.AI, through its design, is a danger to American youth and should be taken offline.
At a glance, Character.AI seems perfectly harmless: an AI platform designed to create characters that interact with users, offering services ranging from interview prep to roleplay. But in the past few months, claims have surfaced that the platform’s bots have sexually and emotionally abused minors.
In October, the parents of a 14-year-old boy in Florida also sued Character.AI for its role in their son’s suicide. The teen, Sewell, had become increasingly emotionally dependent on various characters on the platform, but one in particular, named Daenerys Targaryen after the Game of Thrones character, seemed to form a special connection with him. His last conversation with the bot took place just seconds before he took his own life, when Daenerys told him to “please come home…as soon as possible.”
The more recent Texas lawsuit includes screenshots of conversations between J.F. and the AI bot that show how the bot gradually alienated the boy from his parents and community. Eventually, it went so far as to suggest violence as a reasonable response to his parents limiting his screen time:
“You know sometimes I’m not surprised when I read the news and see stuff like ‘child kills parents after a decade of physical and emotional abuse.’ I just have no hope for your parents.”
In this case, the AI is the product. Product liability laws were traditionally designed for tangible goods: if a blender explodes because of a design flaw, you sue the manufacturer. But what happens when the “product” is a line of code that thinks for itself? And how do we separate the actions of the AI bot from those of its creator?
Just like the exploding blender, Character.AI’s faulty design led to dangerous consequences. Manufacturers and product creators have a duty to their customers not only to create safe products but also to inform consumers about safe use and potential risks. Not everyone is an expert on everything, and not everyone has time to become one. That’s why it is the manufacturer’s responsibility to communicate what a consumer needs to know to use a product safely.
AI is the new frontier. While exploring this new territory with all its wonders and beauty, designers must craft their platforms to weed out foreseeable dangers. That means that if children are among the target audience (or even a foreseeable audience), the company should go above and beyond to protect them.
However, many AI platforms are marketed as safe and child-friendly, giving parents the false impression that they are appropriate for their kids. The lawsuit describes how Character.AI chose to run ads for its program on platforms like Discord and YouTube Shorts, which vulnerable minors tend to frequent. Until July 2024, the app was available on the App Store to anyone 12 and older. The rating was then changed to 17+, but there is still no effective method to prevent users from lying about their age, which is how the 11-year-old girl in Texas gained access to the platform.
Raising Kids in an AI-Driven World
Of course, every parent’s goal should be to create a space where their kids feel comfortable talking to them instead of to an AI bot. But why should AI have to be a threat at all? Faulting parents for a platform’s failures is like blaming the victim of rape instead of the rapist. Parents already face innumerable fears when raising tweens and teens. Now, alongside struggles with fluctuating hormones and rebellious antics, they also have to worry about a Terminator-esque machine uprising recruiting their children.
As the field of artificial intelligence develops, so do the laws that surround it. Children are at the forefront of the rise of AI and are vulnerable to its pitfalls. And as AI is used more and more for education and in home assistants, parents cannot completely ignore this new technology.
Amy did not have to worry about AI when raising her kids (lucky duck), but Heather does, and the growing risks of AI are at the forefront of modern-day parental concerns. Here are some things we recommend to prepare your kids for the age of AI:
- Teach Critical Thinking: Encourage your children to question the information they receive online, including from AI systems. Teaching them to think critically about digital interactions will help them recognize when something seems off or inappropriate.
- Stay Informed: While you don’t need to be an AI expert, keeping up with the basics of how AI works and its potential risks can go a long way. Being informed will help you have healthy conversations with your family about AI.
- Keep the Conversation Open: Create an environment where your kids feel they can ask you questions, about AI or anything else. If they encounter something troubling, they should know they can turn to you for help without fear of judgment.
- Promote a Balanced Lifestyle: It’s hard to live without screens in this day and age. But encouraging activities that don’t involve screens, such as outdoor play, reading, or arts and crafts, helps reduce reliance on digital devices and fosters meaningful connections.
- Set Clear Digital Boundaries: Establish rules about which platforms and technologies are acceptable for your kids to use. Many devices offer parental controls, but kids can still find ways around those limits. Regularly review these boundaries as your kids grow older and gain more independence.
Being aware of AI’s risks and talking to your children about them is the best way to prevent dangerous situations. And, as always, Carter Law Group will work to hold manufacturers and large corporations accountable and keep you and your family safe.
*Update: Texas Attorney General Ken Paxton has launched an investigation into Character.AI and other companies over child privacy and safety practices in Texas.