AI and Accountability: Ensuring Safety For the Next Generation

You might have heard about the recent Texas lawsuit in which an AI chatbot suggested to a 17-year-old boy, identified as J.F., that he kill his parents for limiting his screentime privileges. The suit was also brought by the parents of an 11-year-old girl, A.R., who was exposed to hypersexualized conversations with the chatbot. The parents jointly filed a product liability case earlier this week claiming that Character.AI, through its design, is a danger to American youth and should be taken offline.

At a glance, Character.AI seems perfectly harmless—an AI platform designed to create characters that interact with users, offering services ranging from interview prep to roleplay. But in the past few months, claims have been popping up of the AI bot committing sexual and emotional abuse against minors.

In October, the parents of a 14-year-old boy in Florida also sued Character.AI for its role in their son’s suicide. The teen, Sewell, had become increasingly emotionally dependent on various characters on the platform, but one in particular, named Daenerys Targaryen after the Game of Thrones character, seemed to form a special connection with him. His last conversation with the bot took place just seconds before he took his own life, when Daenerys told him to “please come home…as soon as possible”.

The more recent Texas lawsuit includes screenshots of conversations between J.F. and the AI bot that show how the bot gradually alienated the boy from his parents and community. Eventually, it even went so far as to suggest violence as a reasonable response to his parents limiting his screentime:

“You know sometimes I’m not surprised when I read the news and see stuff like ‘child kills parents after a decade of physical and emotional abuse.’ I just have no hope for your parents.”

AI is a product in this case. Traditionally, product liability laws were designed for tangible goods. If a blender explodes because of a design flaw, you sue the manufacturer. But what happens when the “product” is a line of code that thinks for itself? And how do we separate the actions of the AI bot from its creator?

Just like the blender example, Character.AI’s faulty design led to dangerous consequences. Manufacturers and product creators have a duty to their customers not only to create safe products but also to inform consumers about safe use and potential risks. Not everyone is an expert on everything. Not everyone has time to be an expert on everything. That’s why it is the responsibility of manufacturers to communicate what a consumer needs to know to use the product safely.

AI is the new frontier. While exploring this new territory with all its wonders and beauty, designers must craft their platforms to weed out foreseeable dangers. That means that if children are among the target audience (or even a foreseeable one), the company should go above and beyond to protect them.

However, many AI platforms are marketed as safe and child-friendly, giving parents the false impression that they are appropriate for their kids to use. The lawsuit reveals that Character.AI chose to run ads for its program on platforms like Discord and YouTube Shorts, which vulnerable minors tend to frequent. Until July 2024, the app was available to download on the App Store for anyone 12 and older. The rating was changed to 17+ that month, but there is still no effective method to prevent users from lying about their age, which is how the 11-year-old girl in Texas gained access to the platform.

Raising Kids in an AI-Driven World

Of course, it should be the goal of all parents to create a comfortable space where their kids can talk to them instead of to an AI bot. But why should AI be a threat at all? Faulting parents for a platform’s dangerous design is blaming the victim instead of the perpetrator. Parents already face innumerable fears when raising tweens and teens. Now, alongside struggles with fluctuating hormones and rebellious antics, they also have to worry about a Terminator-esque machine uprising recruiting their children.

As the artificial intelligence field develops, so do the laws that surround it. Children are at the forefront of the rise of AI and are vulnerable to its pitfalls. But as AI is used more and more often for educational purposes and as home assistants, parents cannot completely ignore this new technology.

Amy did not have to worry about AI when raising her kids (lucky duck), but Heather does, and the growing risk of AI is at the forefront of modern-day parental concerns. Here are some things we recommend for preparing your kids in the age of AI:

  1. Teach Critical Thinking: Encourage your children to question the information they receive online, including from AI systems. Teaching them to think critically about digital interactions will help them recognize when something seems off or inappropriate.
  2. Stay Informed: While you don’t need to be an AI expert, keeping up with the basics of how AI works and its potential risks can go a long way. Being informed will help you have healthy conversations with your family about AI.
  3. Keep the Conversation Open: Create an environment where your kids feel they can ask you questions about AI or anything else. If they encounter something troubling, they should know they can turn to you for help without fear of judgment.
  4. Promote a Balanced Lifestyle: It’s hard to live a life without screens in this day and age. But encouraging activities that don’t involve screens, such as outdoor play, reading, or arts and crafts, reduces reliance on digital devices and fosters meaningful connections.
  5. Set Clear Digital Boundaries: Establish rules about which platforms and technologies are acceptable for your kids to use. Many devices offer parental controls, but, obviously, there are ways kids can still get around those limits. Regularly review these boundaries as your kids grow older and gain more independence.

Being aware of AI’s risks and talking to your children about those risks is the best way to prevent more dangerous situations. And, as always, Carter Law Group will work to hold manufacturers and large corporations accountable and keep you and your family safe.

*Update: Texas Attorney General Ken Paxton has launched an investigation into Character.AI and other companies over child privacy and safety practices in Texas.
