Generative AI has set off a legal earthquake in the world of copyright. Disruptive technologies like the VCR in the early days of home video and peer-to-peer file-sharing sites triggered their own legal storms, but this time is different: the lawsuits are aimed directly at the developers of the technology, and they are arriving with unprecedented speed and in unprecedented numbers.
Generative AI has arguably sparked a more focused legal challenge than any technology before it. In just over two years, authors, artists, movie studios, record labels and news organizations have filed dozens of lawsuits against its developers. So far we have three early, and notably divergent, summary judgment decisions, which offer a preliminary glimpse into how courts are wrestling with whether AI training is “fair use.”
The biggest takeaway from these decisions is that we are in for a period of prolonged uncertainty.
The first decision, from a Delaware court in February 2025 (Thomson Reuters v. ROSS Intelligence), rejected a fair use defense. Just last week, two more decisions landed days apart from the same federal district court in California, reaching contradictory conclusions on several key points.
While on the surface the results show two California judges finding for fair use, a closer look reveals a shared judicial struggle. In Kadrey v. Meta, for instance, the judge reluctantly granted summary judgment for Meta, not because the conduct was definitively fair use, but because the plaintiffs failed to build a sufficient record. The crucial point of that case: who “won” or “lost” matters less than the court’s reasoning. All three decisions are likely to be appealed, with the ROSS case already on that path. The legal fog is here to stay.
The differences in approach are striking. All three judges draw on the same body of fair use precedent, yet each develops a different rationale:
- In Thomson Reuters v. ROSS: Judge Bibas, after reversing his own earlier opinion, found that the AI training was not transformative and harmed the market for the original work. He conceded the difficulty, writing, “I acknowledge that these questions are hard under existing precedent.”
- In Bartz v. Anthropic: Judge Alsup found the training process itself to be a “spectacularly” transformative fair use. However, he drew a sharp line at how the data was acquired, ruling that creating a permanent library from unauthorized sources was not fair use, stating Anthropic “stole the works for its central library by downloading them from pirated libraries.”
- In Kadrey v. Meta: Judge Chhabria, despite ruling for Meta on the specific facts presented, offered a broad warning for the AI industry. He argued that the potential for market dilution is so significant that fair use will often not apply. He wrote, “The upshot is that in many circumstances it will be illegal to copy copyright-protected works to train generative AI models without permission.”
This uncertainty is likely to persist for years as these cases move through the appellate courts, possibly ending up at the Supreme Court. Even then, we will likely have clarity for only a limited set of fact patterns. What is clear is that fair use is a highly fact-specific analysis, and the outcome of each case will hinge on its particular details.
In the meantime, this legal ambiguity is creating its own market dynamics. AI developers and AI users are entering into licensing agreements with a growing number of rightsholders, and these deals offer a practical path through the fog. Ultimately, the clearest route forward is for the parties themselves, the AI developers and the content owners, to set the rules of the game through mutually agreed licenses in an already functioning marketplace.