Clooney, Hanks Back 'Human Consent Standard' for AI
A coalition of A-list actors and SAG-AFTRA is backing RSL's new Human Consent Standard, a machine-readable licensing framework designed to control how AI models train on and replicate human likenesses, voices, and performances.
A coalition of Hollywood's most recognizable names — including George Clooney, Tom Hanks, Meryl Streep, Scarlett Johansson, and Cynthia Erivo — has thrown its weight behind a new licensing framework designed to govern how AI systems train on and reproduce human likenesses, voices, and performances. The initiative, called the Human Consent Standard, is being spearheaded by the Really Simple Licensing (RSL) Collective with backing from SAG-AFTRA and a growing roster of talent agencies.
The announcement marks one of the most concrete industry-led attempts so far to build technical infrastructure for performer consent in the age of generative video, voice cloning, and synthetic media.
What the Human Consent Standard Actually Does
The Human Consent Standard extends RSL's existing machine-readable licensing protocol — originally designed for publishers controlling how AI crawlers ingest text and images — into the domain of biometric and performance data. In practical terms, it provides a structured, machine-readable way for individuals, agencies, and studios to declare the terms under which a person's face, voice, body, or performance may be used in AI training datasets and generative outputs.
Rather than relying on opaque opt-out lists or after-the-fact takedown requests, the standard is designed to be embedded directly in the metadata of media files and at the protocol level, similar to how robots.txt instructs web crawlers. AI developers scraping or licensing content would be expected to read and respect these declarations before ingestion.
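The exact discovery mechanism has not been published. As a rough illustration only, assuming (hypothetically) that a consent declaration is exposed as a JSON sidecar file next to each media asset, a compliant crawler's pre-ingestion check might look something like this:

```python
import json
import urllib.request
from urllib.error import HTTPError, URLError

# Hypothetical convention: the consent declaration is published as a JSON
# "sidecar" next to the media asset (e.g. clip.mp4 -> clip.mp4.consent.json).
# The real Human Consent Standard's discovery mechanism has not been published.

def fetch_consent_declaration(media_url: str) -> dict | None:
    """Return the consent declaration for a media URL, or None if absent."""
    sidecar_url = media_url + ".consent.json"  # assumed naming convention
    try:
        with urllib.request.urlopen(sidecar_url, timeout=10) as resp:
            return json.load(resp)
    except (HTTPError, URLError):
        return None  # nothing published for this asset

def may_ingest_for_training(media_url: str) -> bool:
    """Conservative pre-ingestion check a compliant crawler could run."""
    declaration = fetch_consent_declaration(media_url)
    if declaration is None:
        return False  # default-deny: absence of a record is not consent
    return bool(declaration.get("permissions", {}).get("training", False))

if __name__ == "__main__":
    print(may_ingest_for_training("https://example.com/media/clip.mp4"))
```

Note the default-deny posture in the sketch: the absence of a record is treated as no consent, which mirrors the standard's stated aim of moving away from after-the-fact opt-outs.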
Key components reportedly include the following (see the sketch after the list):
- Identity declarations tying a performer's likeness or voice to a unique consent record
- Granular permissions covering training, fine-tuning, generation, and commercial reproduction
- Scope controls — e.g., permitting use in dubbing but not in fully synthetic scenes
- Expiration and revocation mechanisms so consent isn't perpetual by default
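RSL has not published a final schema for these records. The sketch below is purely illustrative: every field name is hypothetical, and it simply shows how an identity declaration, granular permissions, scope controls, and an expiration/revocation check could fit together in one machine-readable record.

```python
from datetime import datetime, timezone

# All field names are hypothetical; RSL has not published the final schema.
consent_record = {
    "identity": {                      # identity declaration
        "performer": "Jane Example",
        "consent_id": "hcs:0000-example-record",  # placeholder identifier
    },
    "permissions": {                   # granular permissions
        "training": True,
        "fine_tuning": False,
        "generation": True,
        "commercial_reproduction": False,
    },
    "scope": {                         # scope controls
        "allowed_uses": ["dubbing"],
        "prohibited_uses": ["fully_synthetic_scenes"],
    },
    "expires": "2027-01-01T00:00:00+00:00",  # consent is not perpetual by default
    "revoked": False,                  # revocation flag checked before each use
}

def use_is_permitted(record: dict, action: str, use_case: str) -> bool:
    """Evaluate a requested (action, use_case) pair against a consent record."""
    if record.get("revoked"):
        return False
    if datetime.now(timezone.utc) >= datetime.fromisoformat(record["expires"]):
        return False
    if not record.get("permissions", {}).get(action, False):
        return False
    scope = record.get("scope", {})
    if use_case in scope.get("prohibited_uses", []):
        return False
    return use_case in scope.get("allowed_uses", [])

print(use_is_permitted(consent_record, "generation", "dubbing"))                 # True
print(use_is_permitted(consent_record, "generation", "fully_synthetic_scenes"))  # False
```

In this sketch, a performer could allow their voice to be used for dubbing while blocking fully synthetic scenes, and the permission evaluates to false automatically once the record expires or is revoked.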
Why Hollywood Is Pushing Now
The timing reflects mounting anxiety in the entertainment industry. Since the 2023 SAG-AFTRA strike — which secured baseline AI protections in union contracts — the pace of generative video and voice technology has accelerated dramatically. Models like OpenAI's Sora 2, Runway Gen-4, and Google Veo 3, along with voice cloning systems such as the commercial ElevenLabs and the open-source F5-TTS, can now produce convincing synthetic performances from minimal reference material.
High-profile incidents have sharpened the concern. Scarlett Johansson publicly clashed with OpenAI over the "Sky" voice in ChatGPT, which she said sounded "eerily similar" to her own. Deepfake ads featuring Hanks and Clooney hawking products they never endorsed have circulated widely on social platforms. And the rise of "likeness farms" — datasets scraped from publicly available footage to fine-tune custom LoRAs (lightweight adapters that teach a model a specific person's appearance or voice) — has eroded the boundary between fair use and unauthorized commercial exploitation.
The Technical Challenge: Enforcement
The standard's real test will be enforcement. Machine-readable licensing only works if AI labs honor it, and history with robots.txt suggests compliance is uneven. Major model providers — OpenAI, Anthropic, Google, Meta — have shown willingness to respect opt-outs when legally pressured or commercially incentivized, but countless smaller open-source projects and offshore operators routinely ignore such signals.
To address this, RSL is reportedly working on pairing the consent standard with content provenance technologies such as signed C2PA Content Credentials and watermarking schemes. The idea is to create a chain of evidence: if a generated video features a recognizable likeness, regulators and rights holders can trace whether the underlying training data carried a valid consent declaration.
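Neither the provenance manifest format nor the linkage between consent records and training data has been specified publicly. The following is a conceptual sketch using hypothetical data structures (it does not use the real C2PA SDK or any RSL API) of how an auditor might trace a generated asset back to its training ingredients and flag any that lack a valid consent declaration:

```python
# Conceptual sketch of the "chain of evidence" idea. The structures below are
# hypothetical stand-ins, not the actual C2PA manifest format or an RSL API.

def audit_generated_asset(provenance_manifest: dict,
                          consent_index: dict) -> list[str]:
    """Return training assets in the manifest that lack a valid, unrevoked
    consent declaration permitting training use."""
    violations = []
    for ingredient in provenance_manifest.get("training_ingredients", []):
        consent_id = ingredient.get("consent_id")
        record = consent_index.get(consent_id) if consent_id else None
        if (record is None
                or record.get("revoked")
                or not record.get("permissions", {}).get("training", False)):
            violations.append(ingredient["asset"])
    return violations

# Example: one ingredient carries a valid record, one carries none.
manifest = {
    "training_ingredients": [
        {"asset": "interview_clip.mp4", "consent_id": "hcs:0000-example-record"},
        {"asset": "scraped_red_carpet.mp4", "consent_id": None},
    ]
}
index = {
    "hcs:0000-example-record": {
        "revoked": False,
        "permissions": {"training": True},
    }
}
print(audit_generated_asset(manifest, index))  # ['scraped_red_carpet.mp4']
```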
Broader Implications for Synthetic Media
For the synthetic media ecosystem, the Human Consent Standard could become a meaningful baseline — particularly if paired with regulatory backing. The NO FAKES Act currently moving through the U.S. Congress would create federal liability for unauthorized digital replicas, and a machine-readable consent layer offers AI developers a defensible compliance pathway.
For deepfake detection and authenticity verification companies — including Truepic, Reality Defender, and Hive AI — the standard creates new commercial surface area: verifying consent records, auditing training datasets, and certifying compliant generative pipelines.
Whether the major foundation model labs adopt the standard voluntarily remains the open question. But with Clooney, Hanks, and Streep lending their names — and SAG-AFTRA's contract leverage behind it — the pressure to engage is significant.
Stay informed on AI video and digital authenticity. Follow Skrew AI News.