Generative AI: Media And Entertainment Considerations – Trade Secrets

The future is now. Generative artificial intelligence
(“AI”) can be used to generate new content, including
text, images, animation, video, software code and music. Although
AI itself is not new, it has come into sharp focus in recent
months, accelerated by the availability and widespread adoption of
user-friendly programs such as ChatGPT,[1] DALL-E, Stable
Diffusion/DreamStudio, Midjourney, Jasper and CopyAI.

There are numerous content-related applications for AI among
media and entertainment companies, depending on the type of company
(e.g., publisher, video game company, creative agency,
studio/production company) and the specific use cases. Potential
applications include generating ideas and content, fact-checking,
editing text, moderating misinformation, and targeting and ranking
content. To name a few existing use cases among media and
entertainment companies: publishers such as CNET have used AI to
assist in the writing of certain content,[2] and BuzzFeed has
embraced AI to enhance and personalize certain types of content
offerings; video game creators such as Naughty Dog have used AI to
create the environment for the highly popular “The Last of
Us” video game; creative agencies are using AI for various
content-creation tasks; and entertainment companies have adopted AI
technologies to augment visual effects, preserve and colorize film,
localize content, alter actors’ facial expressions,[3] age and
de-age actors’ faces,[4] and generate synthetic human voices. This
article focuses on content (not code) generation applications and
the key U.S. legal considerations bearing upon them.

The legal landscape surrounding the use of AI for content
applications is uncertain and rapidly evolving; some have likened
the current stage of AI to the early days of Napster.[5]
The use of AI tools involves legal and reputational risks that
clients (and media and entertainment companies in particular) must
carefully manage. We address below some of the legal and ethical
considerations associated with the creation of so-called
“synthetic media.”

  • Intellectual Property and Confidentiality:

    • Copyright Infringement: Can AI-generated
      content (“outputs”) be considered a derivative work or
      implicate the right of reproduction?[6] Is it infringement or
      inspiration? Providers of AI tools use data-scraping to train
      their AI models (“inputs”).[7] To the extent that such
      data-scraping constitutes copyright infringement, or a Digital
      Millennium Copyright Act violation due to the removal of
      copyright management information, are there arguments that such
      uses are permitted under the fair use doctrine or an implied
      license?[8] With respect to the fair use question, are AI
      outputs sufficiently transformative to be eligible for a fair
      use defense?[9] A few closely watched lawsuits are expected to
      provide some clarity on these issues.[10] Who is liable when an
      AI tool’s output is deemed to infringe someone else’s
      copyright? If an AI tool provider is found to have infringed
      third-party copyrights, could a media company using such a tool
      be liable for infringement as well? It bears mentioning that
      copyright infringement can be direct, contributory, or
      vicarious. To complicate matters further, some publishers and
      other content producers could find themselves on both sides of
      the AI usage aisle: as content owners seeking to be paid for
      the use of their content to train AI models, and as users of AI
      tools to generate their own content.[11]

    • Copyright Protectability: The U.S. Copyright
      Office recently denied an attempt to register copyright in
      individual images created using the Midjourney AI tool.[12] At
      the same time, the Office said that it “will register works
      that contain otherwise unprotectable material that has been
      edited, modified, or otherwise revised by a human author, but
      only if the new work contains ‘a sufficient amount of original
      authorship’ to itself qualify for copyright
      protection.”[13] While this decision could be appealed, it
      provides directional guidance and suggests that the more human
      alteration and involvement in the creative process, the more
      likely the creator will be able to claim copyright in the
      finished work. On March 10, 2023, the U.S. Copyright Office
      published guidance on the copyrightability of works created
      using generative AI.[14] That guidance reaffirms that some
      amount of “creative input or intervention from a human
      author” is required. But, of course, that raises the
      question: how much?


    • Trademark Infringement and Unfair Competition:
      In its lawsuit against Stability AI in the United States
      District Court for the District of Delaware, Getty Images
      asserts, inter alia, that the inclusion in Stable
      Diffusion/DreamStudio’s outputs of Getty Images marks, or
      visually degraded versions thereof, gives rise to claims of
      trademark infringement, unfair competition, trademark dilution
      and deceptive trade practices.[15] Could media companies that
      publish such outputs face liability for doing so?


    • Right of Publicity: AI may be used to alter an
      individual’s voice and image, thereby raising questions about
      whether an individual can control the right to use their voice or
      image for AI purposes. The actor James Earl Jones reportedly
      granted a Ukrainian startup a license to use his voice,
      allowing the company to recreate his iconic Darth Vader voice
      using AI. Depending on the nature of the AI use and whether the
      individual’s voice or image is recognizable, state right of
      publicity statutes may protect the individual (and, in some
      states and under certain circumstances, the individual’s
      heirs) against use of the individual’s voice and image. Such
      statutes exist in some, but not all, states and vary among the
      states in which they exist. Separate copyright claims may also
      arise where the voice or image was taken from, or resembles
      elements of, a prior copyright-protected work.


    • Trade Secrets and Confidential Information: A
      media company’s inputs into an AI tool may be used to train the
      AI tool’s model, thereby leading to the risk that those inputs
      could be included in outputs to a third-party user.[16]
      Given that trade secret laws require that trade secrets be
      maintained in secrecy, the inputting of trade secrets creates a
      risk of loss of trade secret protection. The inputting of
      third-party confidential information held by a media company could
      similarly run afoul of the media company’s non-use or
      non-disclosure obligations.


    • Insurance Considerations: Media companies
      should ascertain the position of their Errors & Omissions
      liability insurance carriers on the use of AI and what
      pre-publication or pre-broadcast review processes their carriers
      may require.

  • Consumer Protection: With the proliferation of
    virtual influencers,[17] which could potentially be AI-powered,
    the Federal Trade Commission has proposed revisions to its
    Guides Concerning the Use of Endorsements and Testimonials
    in Advertising[18] (the “Guides”) that would include virtual
    influencers.[19] Thus, brands that work with virtual influencers
    would need to disclose their connection and otherwise comply
    with the Guides. This raises the question of how, in the context
    of a virtual influencer, the Federal Trade Commission would
    enforce the requirements that an influencer’s endorsement
    reflect the influencer’s honest opinion and that the influencer
    be a genuine user of the product. Further, companies should
    consider disclosing that the influencer is not human.

  • Content Integrity: AI outputs may be factually
    inaccurate or even false, creating a risk that publication of
    content based on those outputs could lead to defamation claims.
    AI tools may inadvertently plagiarize a previous work. AI also
    has the potential to complicate efforts to validate the identity
    of sources and to make reliance on purported media reports or
    social media during the research process more challenging. AI
    data set inputs and algorithms may include biases that result in
    biased output content. Further, given the risk that AI tool
    inputs could be included in outputs to a third-party user, the
    identity of confidential sources could be exposed if inputted
    into AI tools. Having clear content integrity guidelines
    relating to the use of AI[20] may help media companies mitigate
    some of these risks, and companies should consider requiring
    human review of any investigative or other news reportage
    generated using AI,[21] or even a wholesale prohibition on the
    use of AI by their employees and contractors in connection with
    that content.

  • Regulatory and Compliance:

    • Computer Fraud and Abuse Act of 1986
      (“CFAA”): Does data-scraping by AI tool providers of sites
      whose terms of service prohibit such activities create the
      risk of claims under the CFAA, which prohibits accessing a
      computer without, or in excess of, authorization and carries
      criminal liability? If so, are there circumstances in which a
      publisher or other media company could be found guilty of
      aiding and abetting the commission of such an offense? The
      Supreme Court narrowed the application of the CFAA a couple of
      years ago,[22] but data-scraping, and in particular the manner
      in which it is performed, may still subject one to a CFAA
      charge and remains fraught with legal peril.[23]


    • Data Privacy and Protection Violations: The
      use of data sets containing personal data by providers of AI tools
      to train AI models or as inputs by media companies to generate
      content implicates applicable data protection laws. Users of
      personal data for these purposes may be subject to substantial
      penalties[24] if they do not obtain that personal data in
      compliance with such laws.


    • Section 230 of the Communications Decency Act:
      Section 230 generally provides online computer services with
      immunity with respect to third-party content generated by
      their users. For media companies that host user-generated
      content, could the fact that hosted content is generated by AI
      affect such companies’ immunity under the Act?

  • Labor: How will labor unions such as
    SAG-AFTRA, WGA and the NewsGuild address use of their members’
    performances and creative material in connection with AI inputs and
    outputs?[25]

  • Contractual Risks: For companies that license or
    syndicate content, or are commissioned to produce it, how does
    the use of AI tools affect their ability to make representations
    and warranties regarding originality, ownership and
    non-infringement, and their exposure under indemnification
    provisions? For example, the Terms of Use for OpenAI (provider
    of ChatGPT and DALL-E) provide that, as between OpenAI and the
    user and to the extent permitted by applicable law, the user
    owns all input. The Terms of Use further provide that OpenAI
    assigns all rights in the output to the user.[26] Similarly, the
    API Terms of Service for Stability AI (provider of Stable
    Diffusion and DreamStudio) provide that, as between Stability AI
    and the user, the user owns the output to the extent permitted
    by applicable law, and the user represents that it owns the
    input.[27] At the same time, OpenAI’s Terms of Use make an
    exception for output generated by a prompt that has been
    inputted by another user, and provide that this output cannot be
    owned by any of the parties making that same prompt. Companies
    may not be in a position to provide a clear chain of title and
    make representations and warranties with respect to content
    generated using AI tools. This consideration may also arise in
    an M&A context. As with open source software, buyers may want to
    consider diligencing the seller’s use of AI and addressing any
    associated risks in the relevant purchase or merger documents.

As media and entertainment companies determine how AI can play a
role in optimizing their content-generation processes, they should
consider developing robust governance around the use of AI,
hand-in-hand with responsible, self-regulatory codes of
conduct[28] and best practices, to mitigate legal and
reputational risk and preserve brand safety.

Consideration should be given to adopting the following specific
practical steps and guardrails:

  • Identification of Use Cases and Risk Assessment:
    Companies should identify the specific types of applications for
    which their employees and contractors could use AI in connection
    with the generation of content. Once these potential
    applications have been identified, companies should rank the
    different types of use cases and categories of synthetic media
    by level of risk. For example, content that is being syndicated
    to third parties and investigative and other news content may be
    deemed high risk, such that a company prohibits the use of AI in
    connection with that content. As another example, companies
    should consider prohibiting the use of trade secrets and
    confidential materials as inputs into AI tools.

  • Tracking: Companies should develop
    rights-management mechanisms for tracking the use of AI and any
    content generated using AI. These tracking mechanisms should
    identify the AI tool used to generate the relevant outputs and
    include a copy of the AI tool’s terms of use/service as posted
    on the date of use. Such tracking mechanisms should also
    identify the role that humans played in generating the content,
    including the degree of human alteration (an illustrative sketch
    of such a record follows this list).

  • Auditing: Companies should perform periodic
    internal audits of content to determine whether it was generated by
    AI tools in compliance with company policies.[29]

  • Oversight: Companies should consider
    designating content integrity personnel to oversee and monitor the
    use of AI, especially in permitted higher-risk use cases.

  • Training: Employees and contractors who
    generate content should periodically undergo training in the
    appropriate use of AI and related company policies. Companies may
    want to further consider requiring their employees and contractors
    to certify that they have reviewed company policies regarding
    AI.

  • Transparency and Disclosure: Companies should
    consider identifying (e.g., by applying disclosures to) content
    generated using AI when publishing or otherwise disseminating or
    sharing that content.[30] This identification may specify
    the particular manner in which AI was used in connection with
    generation of that particular content. As an example (and to help
    support the argument for copyright registrability), for content
    created using AI tools, companies could include explanations in the
    end credits detailing exactly how AI impacted the final work and
    how much the work was altered by humans.

  • Contractual Restrictions and Terms of Use:
    Just as companies include restrictions in third-party contractor
    agreements on the use of open source software, companies should
    also consider restricting the use of AI tools without company
    approval by third-party contractors who create content. Companies
    should develop policies related to the outbound licensing or
    assignment to third parties of rights in employee-created and
    contractor-created synthetic media. Companies should also review
    their website Terms of Use to ensure that they explicitly
    prohibit data-scraping of their websites (see the illustrative
    robots.txt sketch following this list).
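
As a purely illustrative aid to the Tracking and Auditing
recommendations above, the following minimal sketch shows one
possible shape for such a provenance record, together with a simple
audit check. It assumes a Python-based rights-management system;
all class, field, and function names are hypothetical, not an
established standard or any particular vendor’s API.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical provenance record for a piece of AI-assisted content.
# Field names are illustrative; adapt them to your own rights-management system.
@dataclass
class AIContentRecord:
    content_id: str              # internal identifier for the work
    ai_tool: str                 # e.g., "DALL-E" or "DreamStudio"
    terms_snapshot: str          # copy (or archive reference) of the tool's
                                 # terms of use/service as posted on the date of use
    date_of_use: date
    human_role: str              # description of the role humans played
    human_alteration_pct: int    # rough degree of human alteration, 0-100
    disclosures: list[str] = field(default_factory=list)  # disclosures applied on publication

def audit_records(records: list[AIContentRecord]) -> list[str]:
    """Flag content whose provenance record is incomplete under company policy."""
    return [
        r.content_id
        for r in records
        if not (r.ai_tool and r.terms_snapshot and r.human_role)
    ]

# Example: a periodic internal audit over the content store.
records = [
    AIContentRecord("article-0042", "DALL-E", "terms-archive/openai-2023-03-14.txt",
                    date(2023, 3, 20), "editor selected and cropped images", 35,
                    ["Image generated with AI assistance"]),
    AIContentRecord("article-0043", "DreamStudio", "", date(2023, 3, 21), "", 10),
]
print(audit_records(records))  # -> ['article-0043']
```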
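Similarly, as a complement to the contractual anti-scraping point
in the last bullet, a site operator can signal its position to
crawlers with a robots.txt file. The sketch below is illustrative
only: robots.txt is a voluntary convention rather than an
enforcement mechanism, and the crawler user-agent names shown are
assumptions that should be verified against each operator’s
current documentation before use.

```
# robots.txt - requests that named AI-training crawlers stay off the site
# (advisory only; pair with an explicit Terms of Use prohibition on scraping)
User-agent: CCBot        # Common Crawl, a corpus widely used for AI training
Disallow: /

User-agent: GPTBot       # OpenAI's web crawler
Disallow: /
```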

The introduction of AI for creation of synthetic media is a
potentially transformative moment for the media and entertainment
industries, but carries with it significant uncertainties. As the
landscape rapidly evolves, the implementation of robust governance
and frameworks may help to mitigate some of the legal and
reputational risks that media and entertainment companies face when
deploying AI as part of their content generation processes.

Footnotes

1. ChatGPT reached 100 million users within two months
after its launch, becoming “the fastest-growing internet
service ever.” Will Douglas Heaven, ChatGPT is everywhere.
Here’s where it came from, MIT Technology Review (Feb. 8,
2023).

2. Connie Guglielmo, CNET Is Testing an AI Engine.
Here’s What We’ve Learned, Mistakes and All, CNET (Jan. 25,
2023).
3. Brian Contreras, A.I. is here, and it’s making
movies. Is Hollywood ready?, Los Angeles Times (Dec. 19, 2022).

4. George Winslow, Metaphysic Partners with CAA to Expand
Use of Generative AI in Film, TV, TVTech (Jan. 31, 2023).

5. James Vincent, The lawsuit that could rewrite the
rules of AI copyright, The Verge (Nov. 8, 2022).

6. There is disagreement over the likelihood that AI
tools will copy existing works in their outputs. A recent research
study found that certain AI models memorize and regenerate
individual images used as inputs to train the model. See Extracting
Training Data from Diffusion Models, arXiv:2301.13188 (Jan. 30,
2023).

7. The USPTO has stated that the training process
“will almost by definition involve the reproduction of entire
works or substantial portions thereof.” See Generative Artificial
Intelligence and Copyright Law, Congressional Research Service
(Feb. 24, 2023).

8. To determine whether the use of a work is fair use,
four non-exclusive statutory factors must be considered: (1) the
purpose and character of the use, including whether it is of a
commercial nature or is for nonprofit educational purposes; (2) the
nature of the copyrighted work; (3) the amount and substantiality
of the portion used in relation to the copyrighted work as a whole;
and (4) the effect of the use on the potential market for or value
of the copyrighted work. See Generative Artificial Intelligence and
Copyright Law, Congressional Research Service (Feb. 24, 2023); 17
U.S.C. § 107.

9. The first of the four statutory fair use factors
focuses on the purpose and character of the use. For this factor,
courts typically inquire into (1) whether the use is of a
commercial nature, and (2) whether the use is transformative.
Courts are more likely to consider transformative uses as fair. A
use is transformative if it “add[s] something new, with a
further purpose or different character, and do[es] not substitute
for the original use of the work.” U.S. Copyright Office Fair
Use Index (Feb. 2023).

10. Getty Images (US), Inc. v. Stability AI, Inc., No.
1:23-cv-00135 (D. Del., filed Feb. 3, 2023); Andersen et al. v.
Stability AI Ltd. et al., No. 3:23-cv-00201 (N.D. Cal., filed Jan.
13, 2023).

11. Keach Hagey, Alexandra Bruell, Tom Dotan and Miles
Kruppa, Publishers Prepare for Showdown With Microsoft, Google Over
AI Tools, WSJ (Mar. 22, 2023).

12. U.S. Copyright Office, Zarya of the Dawn (Feb. 21,
2023) (reasoning that the images generated by the Midjourney
technology were “not the product of human authorship”).

13. Id. at 11.

14. Copyright Registration Guidance: Works Containing
Material Generated by Artificial Intelligence, 88 Fed. Reg. 16190
(Mar. 16, 2023).

15. Getty Images (US), Inc. v. Stability AI, Inc., No.
1:23-cv-00135 (D. Del., filed Feb. 3, 2023).

16. See Terms of Use, OpenAI (Mar. 14, 2023) (stating “Input
and Output are collectively ‘Content.’ [. . .] OpenAI may
use Content as necessary to provide and maintain the Services,
comply with applicable law, and enforce our policies.”); Sharing
& Publication Policy, OpenAI; Usage Policies, OpenAI (Mar. 23,
2023); see also STABILITY AI API Terms of Service, Stability AI
(Dec. 14, 2022) (stating “Stability and our affiliates may use the
Content to develop and improve the Services [. . .]”).

17. In 2020, it was proclaimed that virtual influencers
commanded “three times higher engagement than human
influencers.” Matt Klein, The Problematic Fakery of Lil
Miquela Explained—An Exploration of Virtual Influencers and
Realness, Forbes (Nov. 20, 2020). Lil Miquela, a “19-year-old
robot,” currently has 2.8 million followers on Instagram.

18. Guides Concerning the Use of Endorsements and
Testimonials in Advertising, A Proposed Rule by the Federal Trade
Commission, Federal Register (Jul. 26, 2022).

19. Id. (stating “The Commission proposes a modification
indicating that an endorser could instead simply appear to be an
individual, group, or institution. Thus, the Guides would clearly
apply to endorsements by fabricated endorsers.”).

20. For example, WIRED has published an official AI
policy that spells out how the publication plans to use AI
technology. See How WIRED Will Use Generative AI Tools, WIRED,
https://www.wired.com/about/generative-ai-policy/.

21. For example, although BuzzFeed is using AI to assist
with content creation, at the moment it will not use artificial
intelligence to write news stories. See Oliver Darcy, BuzzFeed says
it will use AI to help create content, stock jumps 150%, CNN
Business.

22. See Van Buren v. United States, 141 S. Ct.
1648 (2021).

23. Even if the CFAA is deemed not to apply, a data
scraper (or someone who enables it) could still face claims under a
host of legal theories, including trespass to chattels, copyright
infringement, misappropriation, unjust enrichment, conversion,
breach of contract or breach of privacy.

24. In addition to steep financial penalties, there is
also a risk of disgorgement of the outputs. The FTC has required
companies that used deceptive data practices to build AI models to
destroy the data used as well as the models developed using such
data. See Mary Ashley Salvino, ANALYSIS: FTC Privacy
Authority Is Poised for Breakthrough Year, Bloomberg Law (Nov. 13,
2022).

25. SAG-AFTRA Executive Vice President Ben Whitehair has
noted the importance of protecting digital performances and
artists’ likenesses as AI-generated content grows. See
Entertainment in the Age of A.I., SAG-AFTRA (Aug. 10, 2022).

26. See Terms of Use, OpenAI (Mar. 14, 2023); Service
Terms, OpenAI (Mar. 1, 2023).

27. See STABILITY AI API Terms of Service, Stability AI
(Dec. 14, 2022).

28. Ethical self-regulatory frameworks may help to
provide conceptual guidance. One such example is the Partnership on
AI’s Responsible Practices for Synthetic Media, a set of
recommendations to support the ethical and responsible development
and deployment of synthetic media. See PAI’s Responsible Practices
for Synthetic Media: A Framework for Collective Action, Partnership
on AI (last visited Mar. 15, 2023); see also Artificial
Intelligence Risk Management Framework (AI RMF 1.0), NIST (Jan.
2023).

29. Commercially available tools exist that enable users
to detect AI writing; examples include AI Writing Check, GPTZero
and AI Text Classifier.

30. In certain instances, disclaimers informing users
that AI is being used may be required by the AI provider. See,
e.g., Usage Policies, OpenAI (Mar. 23, 2023) (“Consumer-facing
uses of our models… in news generation or news summarization…
must provide a disclaimer to users informing them that AI is being
used and of its potential limitations.”).

The content of this article is intended to provide a general
guide to the subject matter. Specialist advice should be sought
about your specific circumstances.
