Publishing. Historic $1.5 billion agreement between authors and Anthropic for alleged copyright infringement. Does it also apply to Italian authors?
What are the boundaries of fair use, of the fair compensation of works covered by copyright for the purpose of training Artificial Intelligence models such as those of Anthropic?
A question that currently has no unambiguous answer. Publishers often send cease-and-desist letters or file lawsuits, hoping to extract billions from the wealthy Silicon Valley companies even for books nobody buys anymore. The platforms start from the opposite position, arguing that an A.I. that reads a book in order to be able to talk about it is no different from a human doing the same thing.
A first answer comes from the out-of-court settlement between Anthropic and the promoters of the class action Bartz v. Anthropic, an agreement that could also set a precedent for Italian publishers, including radio publishers who produce original content.
Summary
On September 5, 2025, the case Bartz v. Anthropic was settled out of court: the parties agreed on compensation of 1.5 billion dollars, the first major worldwide precedent on copyright in A.I. training. Anthropic agreed to pay 3,000 dollars for each of the 500,000 works illegally downloaded from pirate archives such as Library Genesis and Pirate Library Mirror. The agreement obliges the company to destroy all pirated copies and also extends to non-American authors, including Italian ones.
The agreement, still subject to judicial ratification, explicitly excludes future content generated by A.I. and covers only violations prior to August 25, 2025.
The valuation of 3,000 dollars for each work could become the international benchmark for similar lawsuits against OpenAI, Google and other big tech companies.
However, the questions about the rights of derivative works generated by A.I. remain unresolved, since American law does not automatically recognize copyright on content created without significant human contribution.
Not just Anthropic
Anthropic (or rather its model Claude, at claude.ai) is one of the large A.I. players, alongside Gemini by Google, GPT-5 by OpenAI, and Grok-4 by xAI: all companies currently facing dozens of other class actions. The company co-led by Dario Amodei, of Italian descent, is therefore the first to have reached an agreement, and a very interesting one for authors: 3,000 dollars per title.
Why books
Let’s start by saying that the quality of sources is essential to make an A.I. “intelligent”: exactly as with teenagers, quality reading builds a solid culture, one capable of discerning truth from falsehood. It is therefore obvious that in the pre-training phase of A.I. systems, beyond easily digestible online content, developers also seek to acquire knowledge found in works often available only on paper.
The Bartz v. Anthropic case
On September 5, 2025, the case Bartz et al. v. Anthropic PBC (No. 3:24-cv-05417-WHA, U.S. District Court for the Northern District of California) marked a point in favor of authors. Andrea Bartz, Kirk Wallace Johnson and Charles Graeber dragged Anthropic to court, accusing it of having illegally downloaded and used thousands of copyrighted books from Library Genesis (LibGen) and Pirate Library Mirror (PiLiMi), pirate archives known for distributing unauthorized copies of hundreds of thousands of works.
Shadow archives and old books
According to the complaint, Anthropic allegedly exploited these “shadow archives” to train its artificial intelligence models, without authorization or compensation for the authors. Only partial use, from what is known: the company had also bought enormous quantities of used books, destroying their covers and bindings so that the pages could be scanned without difficulty. But, unable to track down every text ever published, the company at some point allegedly decided to draw from the forbidden archives as well.
Not just USA authors
An official site, reachable at the address https://www.anthropiccopyrightsettlement.com/, declares: “If you believe that Anthropic may have downloaded your book(s) from LibGen or PiLiMi you can use our form where you provide all contact information”.

Are our texts included?
How can you know whether your titles (we are obviously speaking to those who have published some) are on the list, and therefore hope for 3,000 dollars falling from the sky? A specific site provides the answer, as can be seen in the image above.
The terms of the agreement
According to Bloomberg, the agreement provides for:
- Record payment: Anthropic will pay 1.5 billion dollars, plus interest, into a compensation fund, equal to about 3,000 dollars for each of the 500,000 works included in the certified class.
- Destruction of pirate copies: Anthropic is obliged to eliminate all files downloaded from LibGen and PiLiMi, in addition to any derivative copy.
- Time limitations: The agreement covers only violations that occurred before August 25, 2025 and does not grant licenses for future uses.
- Exclusion of outputs: No rights are released for content generated by Anthropic’s A.I. models.
- Works not included: Authors maintain rights on all works not present in the certified list.
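As a back-of-the-envelope check (not part of the settlement documents), the reported figures are internally consistent, as this minimal sketch shows:

```python
# Rough sketch of the per-work arithmetic reported by Bloomberg:
# a 1.5 billion dollar fund split across roughly 500,000 certified works.
SETTLEMENT_FUND_USD = 1_500_000_000
CERTIFIED_WORKS = 500_000

per_work_usd = SETTLEMENT_FUND_USD / CERTIFIED_WORKS
print(f"Payout per work: ${per_work_usd:,.0f}")  # $3,000
```

Interest accrued on the fund, mentioned in the agreement, would sit on top of this base figure.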
Italian authors too
The agreement also seems to concern Italian authors. The Atlantic has published a tool (“search libgen data”) useful for determining whether a given title was part of the pirated archives.
In the example reported, the well-known former blogger Dottoressa Dania (Daniela Farnese) appears to be entitled to 9,000 dollars (about 7,697 euros at the September 2025 exchange rate), paid directly by the San Francisco company.
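For a multi-title author, the payout is simply the per-work figure multiplied by the number of certified titles. The sketch below assumes three titles; the USD-to-EUR rate is not an official figure but is back-derived from the article's own numbers (9,000 USD ≈ 7,697 EUR):

```python
# Hypothetical payout for an author with three titles in the certified list.
# USD_TO_EUR is an assumed rate, implied by the article's quoted figures.
PER_WORK_USD = 3_000
TITLES = 3
USD_TO_EUR = 7_697 / 9_000  # ≈ 0.855, September 2025 (assumption)

payout_usd = PER_WORK_USD * TITLES
payout_eur = payout_usd * USD_TO_EUR
print(f"{payout_usd:,} USD ≈ {payout_eur:,.0f} EUR")
```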
Not so easy
Done deal? Not quite…
The agreement is subject to approval by the federal court, and indeed on September 8, 2025, federal judge William Alsup said he was “disappointed” that the parties had left “important issues” pending. For example, the deal still lacks a list of covered works and procedures for notifying potential members of the class action.
Too aggressive lawyers?
Alsup reprimanded the class action lawyers for having enlisted an army of lawyers in charge of distributing the settlement funds, including some from the Authors Guild and the Association of American Publishers.
He therefore specified that the additional lawyers will not be paid out of the settlement funds, and that attorneys’ fees will be calculated on the basis of how much is actually distributed to class members.
Multiple authors
Regarding works on which multiple authors may hold rights, Alsup ordered the parties to prepare a form that provides for adherence to the settlement by anyone who holds copyright on the work. And if even just one of the holders refuses to adhere, that work will not be covered by the agreement.
Controversies
He also added that disputes over ownership will have to be resolved in state courts. Alsup set September 15 as the deadline for presenting a definitive list of works, which currently stands at about 465,000 titles.
Derivative works
The numerous articles covering the story do not address the question of who owns the rights to any future works generated by Claude.ai on the basis of titles for which the famous 3,000 dollars have been paid. We therefore asked Claude.ai (cross-checking the answer with Grok) to analyze the filings and documents and give us an answer. Here is what it replied.
Who holds the rights on the “new books” created by Claude?
According to current copyright law in the United States (as interpreted by the U.S. Copyright Office), works generated by A.I. are not automatically protected by copyright if created without significant human contribution: the author must be human. Therefore, if Claude.ai generates a new book entirely by itself (for example, through a prompt), Anthropic might not hold a copyright on it; the work could enter the public domain.
The importance of the Prompt
However, if a human user provides a prompt to Claude.ai and contributes creatively (for example, by modifying or structuring the content), that user could claim ownership of the final product. As of September 2025, Anthropic’s terms of service generally grant users ownership of the outputs generated with Claude.ai, but Anthropic retains the right to use such results to improve its own models.
Derivative works
However, once again, if the new book is based on an original work by an author (for example, Claude.ai summarizes, paraphrases or lifts plots and ideas from a pirated book used for training), it could be considered a derivative work, or a violation. In other words, Anthropic (or the user) could own the new book in a practical sense, but if it infringes copyright, the original author could contest its ownership or seek damages or injunctions.
No automatism
But (and for one last time we must write “however”) the agreement explicitly excludes outputs and future training. If Claude.ai produces something based on a certain author’s work after the agreement, that author is not automatically entitled to a share: they must file a new lawsuit demonstrating the violation, no easy task, considering that A.I. models are (necessarily, or rather intrinsically) black boxes.
Output/input
Tracing a specific output back to a training input is technically complex if not, for the moment, impossible.
A precedent for the future
However the story concludes, the Bartz v. Anthropic case marks a historic precedent, not only for the economic magnitude of the agreement, but for its impact on A.I. regulation. In particular, the valuation of 3,000 dollars per work, even if calibrated to Anthropic’s specific financial capacity (high enough to weigh on its accounts, but not so high as to push the company to trial), could become a worldwide reference point for all lawsuits of a similar nature. (M.H.B. for NL)




