Senator Dick Durbin, the Democratic chairman of the committee, highlighted the alarming increase in financial “sexual extortion” reported last year by the National Center for Missing and Exploited Children (NCMEC). These cases involve predators who trick minors into sending explicit photos and videos. “This alarming increase in child exploitation is due solely to changes in technology,” Durbin said during the hearing.
At the start of the hearing, the committee played a video, also posted to social media, featuring children who had suffered exploitation. One child, shown in shadow, said, “I was sexually exploited on Facebook,” underscoring the personal toll of the crisis.
In a tense room filled with parents clutching photos of their children, Sen. Lindsey Graham addressed tech leaders, pointedly telling Meta CEO Mark Zuckerberg, “Your product is killing people,” calling on companies to share responsibility for keeping children safe online.
TikTok CEO Shou Zi Chew, appearing before US lawmakers for the first time since facing heated questioning in March, defended the short-video company’s commitment to keeping young people safe. “We carefully design our product to be inhospitable to those who want to harm teenagers,” Chew said, citing the TikTok community’s strict rules against exploitation.
Chew also said that TikTok’s monthly US user base has grown to more than 170 million, up 20 million from the previous year. In response to a question from Senator Graham, Chew pointed to TikTok’s $2 billion investment in trust and safety, but declined to compare that figure with the company’s total revenue.
Zuckerberg, along with X Corp CEO Linda Yaccarino, Snap CEO Evan Spiegel and Discord CEO Jason Citron, testified about the ongoing challenge of protecting young users from abuse on their platforms. Zuckerberg reiterated Meta’s decision to halt development of a children’s version of Instagram in response to heightened safety concerns.
Spiegel compared Snap’s parental controls to real-world supervision, striking a balance between awareness and privacy: “Parents want to know who their teens are spending time with, but they don’t need to listen in on every private conversation.”
Legislative action and a call for technical accountability
Last year, the committee approved several bills, first proposed in 2020, that would strip tech companies of their immunity from civil and criminal liability under child sexual abuse laws, but none has become law.
Sen. Amy Klobuchar on Wednesday criticized what she called inaction by the tech industry, comparing it to the swift response after a Boeing plane lost a panel mid-flight earlier this month. “Why don’t we take the same strong action against the dangers of these platforms when we know that children are dying?” Klobuchar asked.
Risks of AI-Generated Content
NCMEC said it received 4,700 reports of AI-generated content depicting child sexual exploitation last year, a figure expected to rise as AI technology advances. “We receive reports from the AI companies themselves, online platforms and the public. It really is happening,” said John Shehan, senior vice president of NCMEC, emphasizing the tangible threats posed by generative AI.
Researchers at the Stanford Internet Observatory warned in a June report that abusers could use generative artificial intelligence to repeatedly revictimize real children by creating new images in their likeness. The increasingly realistic nature of AI-generated content makes it difficult to determine whether a victim is a real person, said Fallon McNulty, director of NCMEC’s CyberTipline.
OpenAI, creator of the popular ChatGPT, has set up a process for reporting material to NCMEC and is in discussions with other AI companies to strengthen efforts against online child exploitation, McNulty said.