Deepfakes just got criminal, and it’s about time

AI POLICY

The US has just passed a new law, the “Take It Down Act”, aimed at cracking down on non-consensual sharing of intimate images, including AI-generated deepfakes.

Signed by President Trump, the law makes it a federal crime to knowingly publish, or threaten to publish, intimate images of someone without their consent.

Social media platforms and other sites now have just 48 hours to take this kind of content down once a victim reports it, and they must scrub any duplicates too.
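The Act doesn't spell out how platforms should find those duplicates, but a common technique in content moderation is perceptual hashing: visually similar images produce similar fingerprints, so re-uploads and lightly edited copies can be matched against a reported original. Here's a minimal sketch in Python (using Pillow; the distance threshold and the flag_for_review hook are hypothetical illustrations, not anything the law specifies):

    from PIL import Image

    def average_hash(path: str, size: int = 8) -> int:
        """Compute a simple 64-bit average hash of an image."""
        img = Image.open(path).convert("L").resize((size, size))
        pixels = list(img.getdata())
        avg = sum(pixels) / len(pixels)
        # Each bit records whether a pixel is brighter than the mean,
        # so small edits (recompression, resizing) barely change the hash.
        bits = 0
        for p in pixels:
            bits = (bits << 1) | (1 if p > avg else 0)
        return bits

    def hamming_distance(a: int, b: int) -> int:
        """Count differing bits; a small distance suggests a near-duplicate."""
        return bin(a ^ b).count("1")

    # Hypothetical usage: compare a new upload against a reported image.
    # if hamming_distance(average_hash("reported.jpg"),
    #                     average_hash("upload.jpg")) <= 10:
    #     flag_for_review("upload.jpg")  # hypothetical moderation hook

In practice, large platforms rely on more robust shared hash-matching systems, but the principle is the same: match copies against a reported original without having to re-share the image itself.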

Backed by both Republicans and Democrats, the act also drew public support from Melania Trump, who spoke out about the emotional toll this kind of abuse takes, especially on young girls.

The bill was partly inspired by a case involving a 14-year-old girl whose deepfake image circulated online for months without being removed.

Meta, the parent company of Facebook and Instagram, supports the legislation, calling it a necessary step for protecting users.

Tech policy groups have also praised it as a move toward holding platforms and perpetrators more accountable.

In brief:

  • Targets both real and AI-generated non-consensual intimate imagery

  • Platforms must delete flagged content within 48 hours

  • Digital rights groups warn it could lead to over-censorship and create new privacy risks

Not everyone is convinced, though.

Free speech advocates have warned that the law’s wording is too vague and might result in legitimate content being taken down.

The Electronic Frontier Foundation pointed out that automated systems could mislabel content, including news photos, protest footage, or even consensual adult material, leading platforms to remove it out of caution rather than certainty.

Smaller sites, in particular, might not have the time or resources to verify what’s legal and what’s not, and could over-censor as a result.

There’s also concern that the law might push platforms to actively monitor encrypted content, which raises privacy issues of its own.

Finally holding Big Tech to account?