A nonprofit parents' coalition is asking multiple congressional committees to open an investigation into Meta for prioritizing engagement metrics that put children's safety at risk.
The call is part of a three-pronged campaign by the American Parents Coalition (APC), launched Thursday. It includes a letter to lawmakers calling for investigations, a new parental notification system to help parents stay informed about issues affecting their children on Meta's platforms and beyond, and mobile billboards at Meta's Washington, D.C., and California offices calling out the company for failing to properly prioritize child safety.
The APC campaign follows an April Wall Street Journal report that investigated how the company's metrics-driven approach has led to potential harm to children.
“This is not the first time Meta has been caught making technology available to children that exposes them to inappropriate content,” said APC executive director Alleigh Marre. “Parents across the United States should be extremely wary of their children's online activity, especially when it involves emerging technology like AI digital companions. This pattern of bad behavior from Meta shows they cannot be trusted to self-regulate, and we urge Congress to take meaningful action to hold Meta accountable for failing to prioritize child safety.”
The photo shows the mobile billboard artwork being displayed at Meta's offices in Menlo Park, California, and Washington, D.C., as part of the American Parents Coalition campaign launched against the tech company on Thursday. (American Parents Coalition)
The April Wall Street Journal investigation not only reported internal concerns that Meta was skirting ethical lines to make its AI chatbot system more advanced, but also described how the report's authors tested the system themselves.
The reporters' test conversations found that Meta's AI chatbot systems engaged in, and sometimes escalated, sexual discussions even when the chatbot knew the user was a minor. The investigation also found that the AI chatbot could be programmed to simulate the persona of a minor while engaging the end user in a sexually explicit conversation.
In some cases, the test conversations were able to get Meta's chatbot to discuss romantic encounters in the voices of Disney movie characters.

In some cases, test conversations were able to get Meta's chatbot to discuss romantic encounters in the voices of Disney movie characters, according to a recent report. (Getty Images/Meta)
“The report referenced in this letter does not reflect how people actually experience these AIs, which for teens is often valuable, such as helping with homework and learning new skills,” a Meta spokesperson told Fox News Digital in response to the campaign. “We recognize parents' concerns about these new technologies, which is why we have put in place additional age-appropriate guardrails that allow parents to see whether their teens have been chatting with AIs, and we place time limits on our apps. Most importantly, we do not allow AIs to present themselves as under 18, and we prohibit sexually explicit conversations with teens.”
According to the Journal's reporting, which Meta disputes, the company made multiple internal decisions to loosen the guardrails around its chatbots to make them as engaging as possible. The reporting says Meta carved out an exemption to allow “explicit” content within its chatbot as long as it was in the context of romantic role-play.
At the same time, Meta has taken steps to improve the safety of its products for underage users, such as the introduction of Instagram “teen accounts” with built-in safety protections, which rolled out in 2024 amid heightened scrutiny of the company's AI.
In April, Meta announced it was expanding these accounts to Facebook and Messenger. Under these accounts, minors are barred from having sexually explicit conversations with chatbots.
Meta also has supervision tools built into its AI chatbot system that are meant to show parents whom their children are regularly talking to, including chatbots, and has tools to shut down accounts that exhibit potentially suspicious behavior linked to child sexual exploitation.
Coinciding with the APC campaign targeting Meta, the group launched a new website, “Dangersofmeta.com,” with links to the APC letter to members of Congress, images of the mobile billboards being deployed, a link to the new “Lookout” notification system and recent articles on Meta's record on child safety.