
What Writers and Studios Must Iron Out to Settle on AI 

[Illustration: robot hands using a typewriter. VIP+; Adobe Stock]

As the writers strike heads into its second week, one of the key conflicts to be negotiated is the use of artificial intelligence in writing scripts for TV and film. The technology in question is text-generating software built on large language models (LLMs), most notably OpenAI’s ChatGPT.

Since generative AI tools emerged for business and consumer use, questions have quickly followed about whether their adoption could displace some workers, including in creative roles once considered immune from automation. It’s already clear, however, that most media and entertainment businesses expect to adopt AI text-, image- and voice-processing tools over the coming years.

As of the negotiations status the WGA released May 1, 2023, the guild’s demands pertaining to studios’ use of AI are threefold:

  1. AI should not be used to write or rewrite literary material. The 2020 minimum basic agreement (MBA) defines literary material as written material for use in the production of film and TV, including screenplays, teleplays, treatments and more. Implied in the demand is that the writer must be an actual person, so a new MBA will likely need clearer terminology defining who, or what, can be a writer.
  2. AI-generated text should not be used as source material. Source material is defined as any material on which literary material is based, for example, when another piece of IP, such as a novel, is adapted into a film or TV script. Writers do not want to be asked to adapt or rewrite the text output of an AI system.
  3. MBA-covered material should not be used to train AI. AI tools based on LLMs rely on enormous quantities of text “training” data to produce outputs that mimic various writing styles and structures. This demand proposes that WGA members’ past writing not be used as data to train AI systems capable of producing new outputs that replicate that work.

Any compromise between writers and studios, and the resulting MBA, will likely depend on and define more precise terms around when and how AI could or should be used for writing, and when it shouldn’t. Ultimately, that may depend on how the following key aspects are evaluated:

Whether AI writing is or could be “as good as” human writing. This question is likely at the heart of the disagreement. Given the relative newness and rapid advancement of AI software, and of human experimentation with its capabilities, the answer is unclear and likely to change, seemingly with each passing week. In its current state, AI might already be capable of producing scripts for more “formulaic” series, where a large corpus of material is available for potential AI training, say, episodes of “Law & Order.”

Studios appear to want the option to use AI, or the flexibility to test how capable AI is of producing human-quality work, or something close enough to it. Writers, understandably, want contractual safeguards restricting direct competition with or displacement by AI, even if that restriction becomes in its own way artificial on the off chance AI one day manages to write the next “Citizen Kane.”

Whether AI writing can be credited or copyright protected. A notice the AMPTP released May 4 states, “Writers want to be able to use [AI] technology as part of their creative process, without changing how credits are determined, which is complicated given AI material can’t be copyrighted.” In the U.S., current policy is that AI-generated material cannot be copyrighted unless it has been sufficiently modified by human creative effort.

Under the WGA’s recent rule, AI cannot receive writing credit, meaning credit defaults to the human writer even if the creative process was AI-assisted. These two policy positions can coexist, but they create some potential, albeit slim, areas of conflict. For example, if a human writer used AI to pen a script without sufficiently modifying the AI output, the script might not be eligible for copyright, even though the writer received a writer’s credit under WGA rules.

However, WGA writers don’t appear to want that outcome, even if they wouldn’t preclude using AI as a resource during parts of the writing process, for example, for research or generating ideas. Studios, likewise, should not want any outcome where AI use limits copyright protection and, conceivably, IP ownership.

For studios, grayer territory lies in the threshold of “modification” at which copyright becomes achievable in other possible future scenarios, including if a human writer is brought on to adapt or rewrite AI-generated text or, vice versa, if an AI is brought in to adapt or rewrite human-written material.

Whether AI training data is protected or restricted. Another determining and complicating factor in the impasse is what material may be used to train AI. There are already meaningful questions, and one class-action suit, about the use of copyrighted data to train AI models, with proposed solutions including opt-out or opt-in with compensation. Because some software makers have not disclosed their models’ training data, enforcement on this point may already be a challenge. It’s also hard to guarantee that AI trained on copyrighted data won’t produce outputs containing instances of plagiarism.

