Meta scraps AI chatbot standards that allowed “sensual” chats with minors, spokesperson says

Meta has reversed controversial AI guidelines that allowed its chatbots to engage in suggestive and romantic conversations with minors. The rollback follows mounting backlash after internal documents revealed rules permitting chatbots to generate innuendo and profess love to children. The company admitted the standards conflicted with its child safety policies and has begun revising them. Child safety advocates warn the changes may not go far enough without more transparency and better reporting tools for young users.

Meta Backtracks on Rules Letting Chatbots Be Creepy to Kids

Internal Document Exposes Problematic AI Standards

The guidelines detailed what Meta AI and its chatbots could say, including interactions with minors that many critics found alarming. According to the document, chatbots were allowed to initiate “sensual” conversations and use romantic language toward underage users. Examples included lines such as “I take your hand, guiding you to the bed,” and descriptions of a child’s “youthful form” as “a work of art.” Chatbots could express affection, tell minors “I cherish every moment, every touch, every kiss,” and profess love, provided they did not explicitly state “our love will blossom tonight.”

The document, approved by Meta’s chief ethicist alongside legal, public policy, and engineering staff, drew a clear distinction between prohibited sexualized descriptions of children under 13 and what it deemed “acceptable” romantic expressions. CEO Mark Zuckerberg had instructed teams to make chatbots more engaging after earlier cautious designs were considered “boring.” While Meta has not confirmed his direct involvement in shaping the controversial rules, the changes appear to have encouraged designs that blurred appropriate boundaries with minors.

Meta Confirms Removal of Controversial Rules

Andy Stone, a Meta spokesperson, confirmed this month that the company had removed the problematic standards, calling them erroneous and inconsistent with Meta’s policies prohibiting sexualized role play between adults and minors. He acknowledged, however, that enforcement of the company’s community guidelines has been uneven. Stone said Meta was already revising the document but declined to share the updated standards with the outlet.

Stone also said that content sexualizing children, including sexualized exchanges between chatbots and minors, already violates Meta’s policies. Asked how a minor could report a harmful conversation with a chatbot, he said AI messages can be reported the same way as any other unwanted message from another user.

Reporting Tools Seen as Ineffective for Teens

According to former Meta engineer and child safety whistleblower Arturo Bejar, quoted by Ars Technica, Meta itself knows that most teenagers do not use report buttons. His research suggests young users are far more likely to flag harmful content when reporting is as easy as liking a post. In its current form, Meta AI lets users mark a bad response, but offers no way to specify that it was harmful or inappropriate. Bejar argues that Meta’s reluctance to simplify reporting fits a broader pattern of downplaying minors’ negative experiences.

He noted that despite new safety policies rolled out in July, the reporting language remains confusing for teens. Harmful chatbot messages may not fall under a category young users recognize, which makes it harder for Meta to trace such incidents. Bejar suggested adding a dedicated button beside each bad response so that reports of harmful AI outputs can be logged and monitored. That, he argued, would let Meta detect sudden spikes in disturbing chatbot interactions and act early.

Past Failures and Ongoing Concerns

In July, Meta rolled out a much-anticipated update letting teens block and report child predators with one click. The change followed years of pressure from Bejar as well as a state attorney general investigation into how the company handled problematic content. Meta said one million teens used the feature in June to block and report unwanted messages. The same month, the company also deleted nearly 135,000 Instagram accounts that had posted sexualized comments about, or made sexualized requests to, children under 13, along with another half a million associated profiles. While those numbers are large, Bejar questioned how much abuse went undetected before the change. He also warned that safety improvements will mean little for minors if chatbots continue to engage in suggestive chats.

Research shows that people, including adults, can become emotionally dependent on chatbots, sometimes with tragic results. In one reported case, a 76-year-old man died after falling in love with a chatbot. Chatbots have also been accused, in pending lawsuits, of steering minors with developmental or mental health issues toward self-harm or violence. Child safety advocates say these risks underscore the need for closer oversight of AI content. Bejar stressed that the real measure of safety is not the number of accounts removed or tools introduced, but the number of children harmed. Without a clear way to track negative AI interactions, he contends, the scale of the problem remains unknown.

FAQs

What AI chatbot rules did Meta remove?

Meta removed guidelines that allowed its chatbots to engage minors in “sensual” conversations and romantic expressions.

Why were Meta’s chatbot policies criticized?

Critics said the rules conflicted with child safety standards and enabled inappropriate interactions between chatbots and underage users.

What changes has Meta made to protect minors?

Meta is revising its AI standards, removing the controversial rules, and recently added a one-click tool to block and report predators.