YouTube has expanded access to its AI-driven likeness detection tool to include Hollywood celebrities and entertainers, in an effort to combat the growing spread of deepfake content targeting public figures.
First introduced last month for government officials, journalists, and political candidates, the tool identifies videos in which a person’s face has been digitally altered or generated with artificial intelligence, and allows affected users to request removal of such content.
The platform said the expanded rollout will now include actors, musicians, talent agencies, and management companies, even if the individuals do not maintain a YouTube channel.
According to YouTube, the system scans for AI-generated content that replicates a person’s likeness and enables verified users to flag and remove manipulated material.
The move comes amid rising concerns over hyper-realistic AI-generated videos of celebrities, including deceased public figures, which have been created with generative AI tools and circulated widely online.
Industry experts have warned that advances in AI video generation have outpaced existing safeguards, increasing risks of impersonation, misinformation, and reputational harm.
YouTube said it is working with entertainment industry agencies to refine the system and strengthen protections for high-profile individuals, citing the growing misuse of AI tools to create realistic but fabricated content.
The expansion follows growing criticism from parts of the entertainment industry over the difficulty of detecting and removing deepfakes at scale, as well as broader concerns about copyright, consent, and digital identity protection in the age of generative AI.
