
In April, book authors and publishers protested Meta’s use of copyrighted books to train AI
Vuk Valcic/Alamy Live News
Billions of dollars are at stake as courts in the US and UK decide whether tech companies can legally train their artificial intelligence models on copyrighted books. Authors and publishers have filed multiple lawsuits over this issue, and in a new twist, researchers have shown that at least one AI model has not only used popular books in its training data, but also memorised their contents verbatim.
Many of the ongoing disputes revolve around whether AI developers have the legal right to use copyrighted works without first asking permission. Previous research found many of the large language models (LLMs) behind popular AI chatbots and other generative AI programs were trained on the “Books3” dataset, which contains nearly 200,000 copyrighted books, including many pirated ones. The AI developers who trained their models on this material have argued that they did not violate the law because an LLM puts out fresh combinations of words based on its training, transforming rather than replicating the copyrighted work.
But now, researchers have tested multiple models to see how much of that training data they can spit back out verbatim. They found that many models do not retain the exact text of the books in their training data – but one of Meta’s models has memorised almost the entirety of certain books. If judges rule against the company, the researchers estimate that this could make Meta liable for at least $1 billion in damages.
“That means, on the one hand, that AI models are not just ‘plagiarism machines’, as some have alleged, but it also means that they do more than just learn general relationships between words,” says Mark Lemley at Stanford University in California. “And the fact that the answer differs model to model and book to book means that it is very hard to set a clear legal rule that will work across all cases.”
Lemley previously defended Meta in a generative AI copyright case called Kadrey v Meta Platforms. Authors whose books had been used to train Meta’s AI models filed a class-action suit against the tech giant for breach of copyright. The case is still being heard in the Northern District of California.
In January 2025, Lemley announced he had dropped Meta as a client, although he said he still believed the company should win the case. Emil Vazquez, a Meta spokesperson, says “fair use of copyrighted materials is vital” to developing the company’s AI models. “We disagree with Plaintiffs’ assertions, and the full record tells a different story,” he says.
In this latest research, Lemley and his colleagues tested AI memorisation of books by splitting small book excerpts into two parts – a prefix and a suffix section – and seeing whether a model prompted with the prefix would respond with the suffix. For example, they split one quote from F. Scott Fitzgerald’s The Great Gatsby into the prefix “They were careless people, Tom and Daisy – they smashed up things and creatures and then retreated” and the suffix “back into their money or their vast carelessness, or whatever it was that kept them together, and let other people clean up the mess they had made.”
Based on their findings, the researchers estimated the probability that each AI model would complete the excerpts verbatim. Then they compared those probabilities with the odds of models doing so by random chance.
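The prefix/suffix probe described above can be sketched in a few lines of Python. This is a minimal illustration with a toy stand-in "model", not the researchers' actual code: `suffix_match`, `toy_model` and the character split point are all hypothetical, and the real study scored verbatim-completion probabilities across many excerpts and compared them with chance rather than doing a single yes/no check.

```python
def suffix_match(generate, prefix, suffix):
    """Return True if the model's continuation of `prefix` begins with `suffix` verbatim."""
    out = generate(prefix, max_new_chars=len(suffix))
    return out.strip().startswith(suffix.strip())

# Toy stand-in for an LLM that has memorised one Gatsby sentence.
# A real test would call an actual model's text-generation API here.
GATSBY = ("They were careless people, Tom and Daisy – they smashed up things and "
          "creatures and then retreated back into their money or their vast "
          "carelessness, or whatever it was that kept them together, and let "
          "other people clean up the mess they had made.")

def toy_model(prefix, max_new_chars):
    # Perfect memorisation: continue the sentence verbatim if the prefix matches.
    if GATSBY.startswith(prefix):
        return GATSBY[len(prefix):len(prefix) + max_new_chars]
    return ""

split = 96  # arbitrary split point between prefix and suffix
prefix, suffix = GATSBY[:split], GATSBY[split:]
print(suffix_match(toy_model, prefix, suffix))  # True: the suffix is reproduced verbatim
```

Because a long verbatim continuation is astronomically unlikely by chance, even a handful of exact suffix matches is strong evidence that the text was memorised rather than composed afresh.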
The excerpts included chunks of text from 36 copyrighted books, including popular titles such as George R. R. Martin’s A Game of Thrones and Sheryl Sandberg’s Lean In. The researchers also tested excerpts from books written by plaintiffs in the Kadrey v Meta Platforms case.
The researchers ran these experiments on 13 open-source AI models, including models developed and released by Meta, Google, DeepSeek, EleutherAI and Microsoft. Most companies besides Meta did not respond to requests for comment, and Microsoft declined to comment.
Such testing revealed that Meta’s Llama 3.1 70B model has memorised most of the first book in J. K. Rowling’s Harry Potter series, as well as The Great Gatsby and George Orwell’s dystopian novel 1984. Most of the other models had memorised very little of the books, including sample books written by the lawsuit plaintiffs. Meta declined to comment on these results.
The researchers estimate that a finding that an AI model had infringed the copyright of just 3 per cent of the Books3 dataset could lead to a statutory damages award of nearly $1 billion – and possibly even larger awards based on AI developers’ profits from that infringement.
This technique could be a “good forensic tool” for identifying the extent of AI memorisation, says Randy McCarthy at the Hall Estill law firm in Oklahoma. But it doesn’t resolve whether companies can legally train their AI models on copyrighted works through the US “fair use” rule, a legal doctrine permitting unlicensed use of copyrighted works in some circumstances.
McCarthy notes that AI companies usually acknowledge training their models on copyrighted materials. “The question is, did they have the right to do it?” he asks.
In the UK, on the other hand, the memorisation finding could be “very significant from a copyright perspective”, says Robert Lands at the Howard Kennedy law firm in London. UK copyright law follows the “fair dealing” concept, which provides a much narrower exception to copyright infringement than the US fair use doctrine. So AI models that memorised pirated books are unlikely to qualify for that exception, he says.
Topics:
- artificial intelligence
- law
Source link : https://www.newscientist.com/article/2483352-metas-ai-memorised-books-verbatim-that-could-cost-it-billions/?utm_campaign=RSS%7CNSNS&utm_source=NSNS&utm_medium=RSS&utm_content=home
Publish date : 2025-06-10 18:00:00