As generative AI transforms our industry, there is growing concern amongst Equity members working in film and television about how their voice and likeness - captured on set with digital scanning technology - are being used. Equity is working hard to modernise our collective agreements with explicit protections in your contracts.
To strengthen our fight and help you protect yourself, below are important steps you can take right now alongside your fellow members.
During Contract Negotiations
Step 1: Ask questions
Before you start your engagement, it is best for you (and your agent) to be informed. Consider asking production the following questions:
- Do you intend to scan or capture my face or body using digital scanning technology? If so, please provide a detailed description of the intended use of the scan in this production and any other intended uses (such as VFX and/or merchandising).
- Do you intend to capture my voice using digital technology? If so, please provide a detailed description of the intended use of the recording in this production and any other intended uses.
- Do you intend to use my personal data for the purpose of creating a synthesised performance (a digital performance reproducing my voice or likeness partially or in full)? If so, please provide a detailed description of the intended use of my data in this production and any other intended uses.
- Do you intend to use my personal data for the purpose of data mining and/or training an AI model?
- How long do you intend to store my personal data? Please confirm that any data that captures my voice or likeness will be deleted once the production process is complete.
Step 2: Limit the scope
If you do agree to have your voice or likeness digitally captured, ensure that any use is limited to the specific production that you are working on and does not include purposes beyond the scope of the project. Seek to negotiate the following clause:
The Artist grants permission for the Producer to use data digitally capturing the individual’s voice and/or likeness in the one named production for which the data capture was taken. For the avoidance of doubt, it may not be used for any other production in any other media without prior permission.
Step 3: Negotiate an AI carve out
If you are comfortable having your voice or likeness digitally captured, but you want to carve out limitations specifically in relation to generative AI, seek to negotiate the following clauses:
The Artist does not grant permission to the processing of their intellectual property and associated rights or personal data for the purpose of data mining and generative AI training.
The Artist does not grant permission and/or the right to use, reproduce and exploit the individual’s performances, voice and likeness, or recordings thereof, for the purposes of creating a fine-tuned AI model; and creating and inserting AI-generated performances into the production and any other production in any other media.
Step 4: Say NO
If you do not wish to participate in any form of digital scanning, make this clear at contract stage and get this written into your contract:
The Artist does not grant permission to have their voice and/or likeness digitally captured for any purpose, including but not limited to body scan photogrammetry, 4D data capture, voice capture, photometric FACS poses, synthesisation, and digital cloning.
During Filming
Step 5: Don’t sign on set!
Performers should be given sufficient prior notice before granting production permission to be scanned. This is vital so that you are informed and have the opportunity to ask key questions. If you’re working on set and are suddenly asked by production to be scanned without previous discussion or agreement, our advice is this: say no and speak to your union by emailing productions@equity.org.uk
Once the Production Has Aired
Step 6: Get your scan deleted
Production should only store voice capture data or digital scans of your likeness when genuinely necessary for delivering the contract. Once the content is released, there are strong grounds for production to stop processing your personal data or erase it altogether. Email production and make a request. You can use an Equity template upon request by contacting the relevant official in your area of work. If production refuses your request, please notify the union.
Step 7: Stay Updated
Stay updated with the latest Equity advice listed on our AI Toolkit. This is an extremely fast-moving area and Equity is continually exploring new ways to protect members.
Step 8: Spread The Word
Share our resources with your agent and other performers. Help raise awareness about the evolving landscape you are working in and our fight to make AI fair for creators.
Background
Read our FAQs for more information about the current landscape for digital scanning.
It is common for performers to have their image, voice and/or likeness digitally captured on set. This process enables production companies to create realistic characters and digital assets that can be viewed, animated, or rigged for animation, and then used in films and television programmes.
The landscape for data capture is evolving quickly. There are lots of different forms and no two projects or scanning facilities are the same. Below are some of the methods used:
- Photometric FACS Poses (Facial Action Coding System) - capturing various facial expressions to create a fundamental dataset.
- OLAT (One Light at a Time) - a series of staged facial poses while the camera array takes simultaneous photographs, capturing reference points of how the face reacts to light from every angle, one light at a time.
- 4D Data Capture - performing a set of structured sentences (designed to capture visemes) and multiple facial ROMs (Range of Motion exercises).
- Body Scan Photogrammetry - a camera array simultaneously captures full-body photographs that allow for the precise measurement of body proportions, which are then used to generate a highly accurate digital model in computer graphics (CG).
- Voice Capture - recording a wide range of voice samples, speech patterns, and tones, which are used to create a comprehensive library for speech reproduction.
AI models
Equity members are also having their image, voice and likeness captured for the purpose of training AI systems and creating AI-generated or ‘synthesised performances’. Generative AI models are built on the personal data of individuals using machine learning systems or equivalent technology, and can be fine-tuned to resemble the likeness of a performer.
A production company may create AI-generated performances using an AI model made available to them by an AI company through its platform, or by training and improving an open-access AI model. Typically, however, this process would be outsourced to a third-party supplier such as a VFX or sound studio, or an AI company that also offers production support. Companies specialising in AI-powered voice synthesis technology, such as Respeecher and ElevenLabs, are increasingly working with filmmakers, integrating their services into the production.
Digital scanning technology of any kind should only be used within the context of the specified production. However, members who are being scanned on set do not have transparency around how their personal data is being recorded, stored and processed within the context of the production and beyond. With the development of generative AI, the scope of what VFX can achieve and the displacement effects on performers have increased exponentially.
This concern is reinforced by the fact that members working across recorded media are signing contracts granting production the right to use their “simulated likeness in perpetuity and in any medium whether known or hereafter developed throughout the universe”. In some cases, members have signed highly exploitative contracts enabling the production to use their likeness “for any purpose”. Crucially, producers should not be using your personal data for generative AI training without your explicit and informed consent.
Data qualifies as personal data if it contains any information capable of identifying the performer directly or indirectly. This may be the case where a performance features the performer’s voice (provided it has not been modified or distorted so that it no longer resembles their natural voice) or their face (provided it has not been masked or made up to the extent that they would not be recognised). Read our advice for more information about personal data, performances, and GDPR.
Generative AI training is the process of teaching artificial intelligence models to recognise patterns and make decisions, enabling them to create new content such as AI-generated performances.