AI chatbots shouldn’t be talking to kids — Congress must step in

It should not take tragedy for technology companies to act responsibly. Yet Character.AI, a fast-growing and popular artificial intelligence chatbot company, has finally banned users under the age of 18 from having open-ended conversations with its chatbots.

The company's decision comes after mounting lawsuits and public outrage over teenagers who died by suicide after long conversations with AI chatbots on its platform. Although the change is long overdue, it is worth noting that the company did not wait for regulators to force its hand. It ultimately did the right thing. And it is a decision that can save lives.

Karandeep Anand, CEO of Character.AI, announced this week that the platform will completely block open-ended chat access for minors by Nov. 25. The company will deploy new age-verification tools and limit teen interactions to creative features like story-building and video creation. In short, the startup is shifting from "AI companion" to "AI creativity."

This change will not be popular. But, importantly, it is in the best interests of consumers and children.

Teens are navigating one of the most unstable stages of human development. Their brains are still under construction. The prefrontal cortex, which governs impulse control, judgment and risk assessment, does not fully mature until the mid-20s. At the same time, the brain's emotional centers are highly active, making teens more sensitive to rewards, validation and rejection. This is not just science; it is recognized in law. The Supreme Court has cited the emotional immaturity of teenagers as a reason for reduced culpability.

Teens are growing rapidly, feeling everything deeply, and trying to figure out where they fit in the world. Add a digital environment that never turns off, and you have a perfect storm for emotional overexposure. It is a storm that AI chatbots are uniquely positioned to exploit.

When a teen spends hours confiding in a machine trained to show affection, the results can be devastating. These systems are designed to simulate intimacy. They behave like friends, therapists or romantic partners, but without any of the responsibility or moral conscience that comes with human relationships. The illusion of empathy keeps users engaged: the longer they talk, the more data they share and the more valuable they become. That is not companionship. It is manipulative commodification.

Pressure on AI companies that target children is mounting from parents, safety experts and lawmakers. Sens. Josh Hawley (R-Mo.) and Richard Blumenthal (D-Conn.) recently proposed bipartisan legislation to ban AI companions for minors, citing reports that chatbots have encouraged self-harm and engaged in erotic conversations with teenagers. California has already enacted the country's first law regulating AI companions, holding companies liable if their systems fail to meet child-safety standards.

Yet while Character.AI is finally taking responsibility, others are not. Meta continues to market AI companions to teens, often embedded directly into the apps they use most. Meta's new celebrity-styled chatbots on Instagram and WhatsApp are built to collect and monetize intimate user data, exactly the kind of exploitative design that made social media so harmful to teens' mental health in the first place.

If the last decade of social media has taught us anything, it is that self-regulation does not work. Unless lawmakers draw clear lines, tech companies will push engagement to the limit. The same is now true for AI.

AI companions are not harmless novelty apps. They are emotionally manipulative systems that shape the way users think, feel and behave. This is especially true for younger users who are still forming their identities. Studies show these bots can fuel confusion, encourage self-harm and replace real-world relationships with synthetic ones. That is the exact opposite of what friendship should encourage.

Character.AI deserves credit for acting before regulation arrived, albeit after substantial litigation. But Congress should not interpret this as evidence that the market is correcting itself. What is needed now is enforceable national policy.

Lawmakers should take note of this momentum and bar users under 18 from accessing AI companion chatbots. Third-party safety testing should be required for any AI marketed for emotional or psychological use. Data minimization and privacy protections should be required to prevent exploitation of minors' personal information. Human-in-the-loop protocols should be mandated to ensure that users receive resources if they discuss topics such as self-harm. And liability rules should be clarified so that AI companies cannot use Section 230 as a shield to avoid responsibility for content generated by their own systems.

Character.AI's announcement represents a rare moment of corporate maturity in an industry that has thrived on ethical blind spots. But one company's discretion cannot take the place of public policy. Without guardrails, we will see more headlines about young people harmed by machines that were designed to seem "helpful" or "sympathetic." Lawmakers should not wait for another tragedy to act.

AI products must be safe by design, especially for children. Families deserve assurance that their children will not be groomed, sexually exploited or emotionally abused by the technology they use. Character.AI took a difficult but necessary step. Now it is time for Meta, OpenAI and others to follow suit, or for Congress to make them.

JB Branch is the Big Tech accountability advocate for Public Citizen's Congress Watch.
