OpenAI Tightens Sora 2 Deepfake Rules After SAG-AFTRA, Cranston Push

OpenAI has introduced new safeguards for its video-generation platform, Sora 2, following growing criticism over the misuse of celebrity likenesses. The decision comes after actor Bryan Cranston and the Screen Actors Guild–American Federation of Television and Radio Artists (SAG-AFTRA) raised concerns about deepfakes that replicated voices and faces without consent.
To strengthen protections, OpenAI announced partnerships with major Hollywood representation agencies, including United Talent Agency (UTA), Creative Artists Agency (CAA), and the Association of Talent Agents (ATA). The collaboration aims to establish clearer consent requirements and improve mechanisms that prevent unauthorized use of performers’ identities.
Previously, Sora 2’s more permissive framework allowed users to create AI-generated videos resembling real people. Under the new guidelines, OpenAI will adopt a stricter “opt-in” system, requiring individuals to give explicit permission before their voice or likeness can be recreated on the platform. The company will also deploy improved detection tools to identify and block content that violates its updated policies.
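OpenAI has not published the details of how this gate works internally, but the basic idea of an “opt-in” system can be illustrated with a minimal sketch. The snippet below is purely hypothetical (the names ConsentRegistry, grant, revoke, and may_generate are illustrative, not OpenAI APIs): a generation request is blocked unless every recognized real-person identity in it has explicitly granted consent, and consent can be withdrawn at any time.

```python
from dataclasses import dataclass, field


@dataclass
class ConsentRegistry:
    """Hypothetical registry of people who have explicitly opted in."""
    opted_in: set = field(default_factory=set)

    def grant(self, person_id: str) -> None:
        # Explicit permission: the person actively opts in.
        self.opted_in.add(person_id)

    def revoke(self, person_id: str) -> None:
        # Consent can be withdrawn, after which generation is blocked again.
        self.opted_in.discard(person_id)

    def has_consent(self, person_id: str) -> bool:
        return person_id in self.opted_in


def may_generate(registry: ConsentRegistry, detected_identities: list) -> bool:
    """Allow generation only if every detected real-person identity has opted in."""
    return all(registry.has_consent(pid) for pid in detected_identities)


# Example: a request depicting two recognized performers is blocked
# until both have granted consent.
registry = ConsentRegistry()
registry.grant("performer_123")
print(may_generate(registry, ["performer_123", "performer_456"]))  # False
registry.grant("performer_456")
print(may_generate(registry, ["performer_123", "performer_456"]))  # True
```

In practice the hard part is the detection step that maps generated content back to real identities; the sketch simply assumes that list is already available.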
Cranston, who has been an outspoken advocate for actors’ rights in the digital age, praised the move, emphasizing the importance of protecting performers from the unauthorized use of their voices and likenesses. SAG-AFTRA echoed the sentiment, calling it a necessary step in safeguarding the creative community as generative AI continues to evolve.
By tightening Sora 2’s deepfake policies and collaborating with key industry partners, OpenAI seeks to position itself as a leader in ethical AI innovation while addressing one of the most pressing concerns in modern entertainment technology.
