TECHNOLOGY
Deepfakes: New Law Protects Victims from AI Harms
USA · Tuesday, May 20, 2025
Deepfakes, especially sexually explicit ones, have become a growing problem. These are AI-generated images in which a person's face is superimposed onto a nude body. Victims range from celebrities such as Taylor Swift to high school students across the country. After months of public outcry, a new federal law has been passed to tackle the issue.
The Take It Down Act was signed into law by the President. The law makes it illegal to share non-consensual explicit images, whether real or AI-generated, and requires tech platforms to remove such images within 48 hours of being notified. It marks a significant step forward in protecting victims of revenge porn and non-consensual, AI-generated sexual imagery.
Before this law, federal protections were limited. Laws varied by state and didn't cover all cases. This new law provides clarity for law enforcement and increases accountability for tech platforms. It also addresses the potential harms from AI-generated content as the technology continues to advance.
The law passed with strong bipartisan support, with only two dissenting votes in the House. More than 100 organizations, including major tech companies such as Meta, TikTok, and Google, backed the legislation. The First Lady also championed the bill, inviting a teenage victim as her guest at a joint session of Congress.
The story of Elliston Berry, a Texas high school student, highlights the need for the law. A classmate used AI to alter a photo of her so that she appeared nude, then shared it on Snapchat. Berry and other teens have faced similar harassment. The Take It Down Act will provide legal protections for victims like Berry, ensuring that those who share such images face consequences.
Tech platforms have already taken some steps to address this issue. Some have forms for users to request the removal of explicit images, and others have partnered with non-profits to facilitate the removal of such images. However, bad actors often seek out platforms that don't take action, underscoring the need for legal accountability.
The Center for Countering Digital Hate praised the law, stating that it compels social media platforms to protect women from intimate and invasive breaches of their rights. Public Citizen's Ilana Beller also emphasized the importance of sending a clear signal that such behavior is unacceptable.
QUESTIONS
Will there be a black market for 'deepfake insurance' to protect against potential legal action?
How will the law differentiate between consensual and non-consensual AI-generated content?
What measures are in place to ensure tech platforms comply with the 48-hour removal requirement?