Landmark Ruling in Generative AI Copyright Case Establishes New Standards for Platform Liability

Foundin
[ 2025-02-12 ]

Written by Jirui Zheng

December 31, 2024 — The Hangzhou Intermediate People’s Court issued a final judgment in a groundbreaking copyright infringement case involving a generative AI technology platform. This ruling marks the first judicial clarification of the "reasonable use" principle for AI data training and establishes a new framework for determining liability among AI service providers. The decision provides critical guidance for the compliance-driven growth of the AI industry. Below are the key takeaways from the court’s decision:


1. Addressing Technological Challenges: Evolving Copyright Law Theory


The court directly addressed the core disputes in the case: Does training an AI model infringe copyright? What legal responsibilities do AI service providers bear? The court emphasized that, when determining whether an AI service provider has committed infringement, the different application scenarios must be considered and liability assessed through a categorized, layered approach.


2. Reasonable Use for AI Training: A Breakthrough in Copyright Law


The court affirmed that data training for AI models can fall under the "reasonable use" principle of copyright law. It clarified that the purpose of AI model training is to learn and analyze the ideas, styles, and other elements of prior works—not to reproduce their original expressions. As such, this process qualifies as reasonable use.


3. Liability for AI Service Providers: Defining Responsibility for New Online Services


The court classified AI service providers as a new type of online service provider. In assessing whether a provider was negligent, the court applied the "reasonable person" standard for operators in the same industry and weighed several factors, including:

- The nature of the generative AI service
- The current development level of AI technology
- The severity of the alleged infringement
- The platform's profit model
- The potential consequences of infringement
- The necessary preventive measures
- The potential impact of liability on the industry


The court concluded that a platform’s duty of care should align with its information management capabilities.


In this case, although users generated the infringing content, the court identified five factors that established the platform’s negligence:

1. As a commercial service provider, the platform owed a higher duty of care for specific use cases.

2. The Ultraman intellectual property (IP) is highly recognizable and was prominently displayed in the generated content, making the infringement easily detectable.

3. The LoRA (low-rank adaptation) model hosted on the platform consistently reproduced the infringing features, indicating that the platform had the ability to intervene in the output.

4. The platform benefited directly from its profit model, including membership subscriptions and point rewards.

5. The platform failed to take reasonable preventive measures, such as keyword filtering, to avoid infringement.
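The keyword filtering the court refers to can be as simple as screening generation prompts against a blocklist of protected-IP terms before the model runs. The sketch below is a minimal illustration of that idea only; the blocklist entries and function name are hypothetical and are not taken from the judgment or from any actual platform.

```python
# Minimal sketch of prompt keyword filtering as a preventive measure.
# The blocked terms below are hypothetical examples of protected-IP names.
BLOCKED_TERMS = {"ultraman", "奥特曼"}

def is_prompt_allowed(prompt: str) -> bool:
    """Return False if the generation prompt contains a blocked term."""
    lowered = prompt.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)
```

A real deployment would go further than exact substring matching (for example, handling misspellings and transliterations), but even this basic screen is the kind of low-cost measure the court weighed when assessing the platform's duty of care.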


Based on these factors, the court ruled that the platform failed to fulfill its duty of care and was liable for aiding the infringement.


This landmark decision sets a significant precedent in the evolving landscape of AI technology and copyright law. It offers clear guidelines for service providers and industry stakeholders on navigating infringement risks and underscores the importance of compliance in the rapidly growing AI sector.