YouTube Takes Action: Supports No Fakes Act to Combat Deepfakes

YouTube has declared its support for the No Fakes Act of 2025, legislation aimed at shielding its top content creators from the misuse of deepfakes.

On Wednesday, April 9, YouTube published a blog post outlining the role it expects artificial intelligence (AI) to play on its platform. The company is optimistic that AI can be a valuable tool for content creators, but it also acknowledges the risks the technology presents.

The dangers of deepfakes are well documented: in 2023, several prominent Twitch broadcasters discovered a website exploiting their likenesses in explicit content without their consent.

YouTube intends to tackle this problem by publicly endorsing the No Fakes Act of 2025, a bill designed to protect every individual's voice and likeness from unauthorized replicas created with generative AI and related technologies.

YouTube supports legislation to stop malicious deepfakes

In the blog post, YouTube detailed several measures it is taking to protect content creators from having their likenesses misused by bad actors. One step is an update to its privacy policy that allows users to request the removal of any altered or synthetic videos featuring their likeness.

In addition, YouTube is developing "image representation controls," a feature intended to help creators understand and manage how AI portrays them on the platform. A trial version of the feature has already launched.

Finally, YouTube reiterated its endorsement of both the No Fakes Act and the Take It Down Act, the latter of which penalizes the distribution of nonconsensual intimate media, helping to ensure a safer environment for all users.

Increasingly, influencers are finding AI-generated versions of themselves on platforms such as Character.AI, where users can create and converse with artificial intelligence personas, a sign that AI is now encroaching on copyright territory within the creator sphere.

Despite influencers' efforts to counteract this technology, they face a challenging path ahead. QTCinderella, one of the streamers targeted by the deepfake site in 2023, expressed her frustration that it was almost impossible to take legal action against the individuals behind the website, which hosted deepfaked content featuring her and other female streamers.

Protections remain thin for individuals whose intimate images are deepfaked online without their consent. While many U.S. states have laws against sharing such material, few prohibit the creation of these fake images in the first place. With giants like YouTube advocating for legislation like the No Fakes Act, victims are one step closer to having a legal defense against this malicious misuse of the technology.


2025-04-10 00:48