YouTube is cracking down on deepfakes


YouTube is stepping up its efforts to deal with AI-generated content that mimics real people’s faces or voices.

The platform is expanding a pilot programme to detect this kind of content and has come out in support of the NO FAKES Act, a proposed US law designed to crack down on unauthorised AI replicas of people's voices and likenesses.

The idea is to stop people from being tricked by fake videos or audio that look or sound real.

YouTube worked with senators, the music industry, and film associations to help shape the bill.

In a blog post, the company said AI has loads of creative potential, but also comes with risks.

Platforms need ways to handle those risks responsibly, especially by giving people a way to flag AI-generated content that misuses their likeness.

The AI detection tool first launched in December 2024 in partnership with Creative Artists Agency. It's based on the same concept as Content ID, which automatically flags copyrighted material in uploads.

This version instead scans uploads for AI-generated faces or voices.

Here’s what you should know:

  • YouTube is trialling AI tools to detect deepfake content.

  • MrBeast, Marques Brownlee, and others are early testers.

  • New tools let users manage and report fake content that misuses their image.

Swipe left on synthetic you

Well-known creators like MrBeast, Marques Brownlee, and Doctor Mike are among the first to test the system.

YouTube will use their feedback to improve the tech before expanding it further. There's no date yet for a wider rollout.

The platform has also introduced new privacy tools that let people request takedowns of fake or synthetic content, and manage how their likeness is used on YouTube.

MrBeast is in the pilot? You know they’re taking it seriously.