
In 2024, authors Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson filed a lawsuit against Anthropic, one of the giants of artificial intelligence (AI), accusing it of using their works to train its Claude language model.
This case is part of a series of similar disputes: at least 47 lawsuits have already been filed in the United States, targeting various AI companies. The main issue? The AI models were allegedly trained using copyrighted works without prior authorization from the authors, thus violating their exclusive rights.
A universal problem
But this type of conflict is not limited to the United States: similar disputes are emerging around the world.
Everywhere, then, it falls to judges, in the absence of clear legal precedents, to decide complex cases ("hard cases," as American legal scholars call them). Copyright law varies from one country to another, of course, but the heart of the conflict remains universal: human creators confronting a non-human technology that disrupts their place, their legitimacy and their future.
AI companies see the matter quite differently: they argue that using copyrighted content to train models falls under fair use, an exception to authors' exclusive rights under US law. In other words, they believe they need neither to ask authors' permission nor to pay them royalties.
This position fuels a growing fear among human authors: that of being dispossessed of their works or, worse still, of being replaced by AI capable of producing content in a few seconds, sometimes of quality comparable to that of a human.
A war of narratives
This debate is now at the heart of a war of narratives, relayed both in the media and on social networks. On one side are defenders of traditional copyright and human creators; on the other, supporters of disruptive technologies and rapid advances in AI. Behind these narratives, we are witnessing a real confrontation between economic models: that of human authors and industries based on "traditional" copyright (publishers, film and music producers, among others), and that of companies and investors developing "revolutionary" AI technologies.
Although copyright was conceived in early 18th-century England, in a technological paradigm that can be called "analog," it has so far managed to adapt to, and even take advantage of, technologies considered "disruptive" in their day, such as photography, the phonograph and cinema, and, later, the digital paradigm and the Internet. Today, however, it is being challenged once more, perhaps more seriously than ever, by generative AI. Will copyright adapt to AI as well, or does it this time face radical change, insignificance or even extinction?
Public, geopolitical, and geoeconomic interests also weigh on these legal cases. In the current multipolar and conflictual world order, countries make no secret of their ambitions to make artificial intelligence a strategic asset.
This is the case in the United States under Donald Trump, who does not hesitate to use public policy to support American companies' leadership in AI, a true "reason of state." The result is a policy of deregulation aimed at removing existing rules seen as obstacles to national innovation. For similar strategic reasons, the European Union has chosen to move in the opposite direction, giving AI innovation a more restrictive legal framework.
US court decisions serve as a compass
On June 23, 2025, Judge William Alsup of the US District Court for the Northern District of California issued the first ruling, on summary judgment, in the case brought by the aforementioned authors against Anthropic. He held that using legally acquired copyrighted works to train large language models (LLMs) constitutes fair use and therefore does not infringe the rights of their owners.
The decision also established that downloading copyrighted works from pirate sites can never be a legitimate use and thus constitutes copyright infringement, even if those works are not used to train LLMs and are merely stored in a general-purpose library.
US court decisions on copyright, although they have legal effect only within the United States, generally serve as a compass for how the regulation of disruptive new technologies evolves. This is due not so much to the prestige of American legal culture as to the fact that the largest technology companies, along with the film, television and music industries, are based in the United States.
It was American case law that established, in its day, that video cassette recorders did not infringe copyright (the Sony Betamax case). It was also American courts that ruled that peer-to-peer file-sharing networks did infringe copyright (the Napster and Grokster cases), leading to the mass closure of those sites.
Currently, the technology accused of infringing copyright is therefore generative artificial intelligence.
Cases of alleged copyright infringement against US generative AI giants (such as OpenAI, Anthropic, Microsoft, etc.) can be grouped into two categories:
the use of protected works to train algorithms (the "input" problem),
and the total or partial reproduction of protected works in the results generated by generative AI (the "output" problem).
A Pyrrhic Victory for AI Companies?
The dispute between Bartz and Anthropic falls into the first category. Bartz and her co-plaintiffs accuse Anthropic of using their works to train its algorithms without payment or permission. It is worth remembering that, in the United States as elsewhere, all exploitation rights in a work belong to its author. Anthropic, for its part, argued that this use should be considered legitimate, requiring neither payment nor prior permission.
The fair use doctrine, codified in Section 107 of Title 17 of the US Code, provides that, in determining whether a use of a work complies with the law, the judge must assess case by case whether the four factors set out in the statute favor the copyright owner (the plaintiff) or the person who used the work (the defendant).
These four factors are:
the purpose and character of the use, including whether it is commercial or for nonprofit educational purposes;
the nature of the copyrighted work;
the amount and substantiality of the portion used in relation to the work as a whole;
the effect of the use upon the potential market for, or value of, the copyrighted work.
In the case in question, Judge Alsup distinguished between legitimate uses of legally acquired works and those that are not.
In the first case, Anthropic claimed to have purchased copyrighted works in paper format, scanned them, converted them to digital format, and then destroyed the physical copies (a legally important point, because the operation then amounts to a simple change of format, without reproduction of the original work), in order to use them to train its LLMs. Judge Alsup found this use legitimate, given the lawful acquisition of the works, giving priority to the first factor and relying on the case law on "transformative use" (the more innovative the use of a work, the more likely it is to be considered fair use).
As for the illegally acquired works (approximately 7 million downloaded from pirate libraries such as Library Genesis and Pirate Library Mirror), fair use was not upheld. On the one hand, Anthropic could not have been unaware of the illicit origin of these works, which rules out any subsequent legitimate use. On the other hand, simply storing them in a digital repository, even without using them to train its algorithms, is no defense, because Anthropic had no right to copy or store them.
As a result, proceedings over the works downloaded from pirate sites are continuing, and Anthropic faces a trial on the merits that could prove very costly: US copyright law allows statutory damages of up to US$150,000 per work in cases of willful infringement.
Divergent reactions
Some commentators have hailed this decision as a resounding victory for AI companies. A more nuanced reading is in order. While it is the first decision to recognize the legitimate use of legally acquired copyrighted works to train an AI system, it also establishes that the use of works obtained from pirate sites, even when put to transformative use, can never be legitimate. In other words, using pirated works will always remain illegal. The decision could accelerate the negotiation of licenses for the legal acquisition of works for LLM training, a trend already under way.
Criticism of the decision was swift. Some accuse Judge Alsup of misinterpreting federal law and case law, in particular the Supreme Court's 2023 decision in Warhol v. Goldsmith, which established that the first factor may be set aside when the use significantly undermines the fourth: namely, when a derivative work competes with or diminishes the value of the original work.
It should also be noted that this is the decision of a lower court. We will have to wait for the opinion of the court of appeals or even, ultimately, of the United States Supreme Court, the final interpreter of the law. In any case, the decision appears to carry significant symbolic weight.
The situation in Europe
Similar lawsuits have also been filed in Europe and other countries. Although their copyright laws resemble those of the United States, there are notable differences.
In continental Europe, there is no equivalent to the fair use doctrine: the law is based on a system of exceptions and limitations strictly listed by law, the interpretation of which is restrictive for judges, unlike the flexibility enjoyed by their American counterparts.
Moreover, although the 2019 European directive established a specific exception for text and data mining, its scope appears narrower than that of American fair use. In addition, for commercial uses, copyright holders can object to it ("opt out").
Finally, the European Union has other instruments that can regulate AI, such as the AI Act, which establishes various guarantees for the respect of copyright that have no equivalent in US legislation.
International repercussions?
In conclusion, it should be noted that the conflict between copyright and AI goes beyond purely legal considerations.
The race for leadership in AI also has a strong national dimension. To this end, countries are competing with each other to promote their companies by all means at their disposal, including legal ones.
Given the minimalist regulatory policy and guidelines issued by the Trump administration, it would not be surprising if judges ideologically close to the president adopted interpretations along these lines, favoring the interests of AI companies over those of copyright holders. This prospect reflects US legal pragmatism.
Richard Posner, a former federal judge, suggested that when faced with "hard cases," judges should not blindly follow logical and procedural rules but should resolve them pragmatically, taking into account the possible consequences of their decisions and the political and economic context.
From a lawfare perspective, copyright could well become a new battleground in the global race for technological dominance between the United States, the European Union and China.
Maximiliano Marzetti, Associate Professor of Law, IÉSEG School of Management, Univ. Lille, CNRS, UMR 9221 - LEM - Lille Economy Management
This article is republished from The Conversation under a Creative Commons license. Read the original article.