AI Lawsuits Will Continue Until Regulation Catches Up With Innovation

Anthropic, maker of the Claude AI model, is the latest AI company facing a lawsuit—this one alleging that Anthropic used over 6 million copyrighted books to train its language model. Litigation will keep coming until federal regulations set standards governing how AI companies can collect, use, and monetize data.

By Ian Romero | August 7, 2025

What happens when innovation outpaces regulation? We're finding out in real time. Last week, the internet lit up with headlines about Anthropic, maker of the Claude AI model, being sued over the alleged use of more than 6 million copyrighted books to train its language model. Damage estimates run in excess of $1 trillion. That's not just eye-popping; according to Fortune, it's potentially business-ending.

As someone who works at the intersection of technology and business enablement, I'm not surprised by this news. Lawsuits against large language model (LLM) companies will keep coming until national laws establish how they can and can't use data, especially data that's protected under copyright.

The Legal Landscape Is Murky—and Crowded

Anthropic isn't alone. The rapid evolution and commercialization of artificial intelligence (AI) tools have sparked a string of litigation. OpenAI has already faced, and temporarily defeated, copyright lawsuits of its own. In one recent case, a federal judge dismissed claims that OpenAI's ChatGPT unlawfully used articles from news outlets to train its AI, citing insufficient harm. But the judge also allowed the plaintiffs to refile, signaling this isn't over yet.

Litigation centering on copyright infringement, including the latest case, raises a nuanced issue. Individual words and facts can't be copyrighted, but the particular expression of them can. So if a model gives you an answer based solely on a copyrighted book or article, that's where damages can become provable. It all hinges on whether these AI models are producing something derivative or merely generating new combinations of information based on public knowledge. For now, there's no settled law about where those lines are drawn.

A Looming Risk for Every AI Innovator

Let’s be real: Every AI company is vulnerable. Whether they’re training on books, news articles, images, or YouTube videos, the fact is that much of what’s used to create modern LLMs wasn't explicitly cleared by rights holders. Some companies, like Google, may skirt legal challenges by training on data they already index or own, such as YouTube content. Others might just roll the dice—especially companies outside the U.S. or those operating under vague or non-existent local laws.

But if the U.S. gets serious about AI regulation, even international players could be subject to rules about how their systems interact with U.S.-based users.

Why the U.S. Needs Clear AI Data Laws

Here's the problem: we’re operating in a legal vacuum.

There is no comprehensive federal law in the U.S. that governs how AI companies can collect, use, and monetize data—especially copyrighted content—for model training. That absence leaves courts to handle piecemeal disputes, creating uncertainty for innovators and rights holders alike.

From our vantage point helping businesses adopt AI securely and effectively, this patchwork approach presents both a risk and an opportunity. Without national standards, companies face growing legal exposure. Clear regulations would let companies move forward with confidence, knowing what is and isn't permissible.

What Businesses Should Be Thinking About

If you're currently using or planning to use AI in your business, here are several questions worth asking:

  • Am I using AI in a public or private setting?
  • What data is my AI vendor training on?
  • Does that data raise copyright or privacy issues?
  • What contractual protection do I have if there's a legal challenge?
  • Am I collecting and using my own data in a way that's compliant and future-proof?

We're entering an era where compliance and AI ethics can't be an afterthought; they need to be part of every business's strategy right out of the gate.

Innovation Needs Guardrails

AI has incredible potential to revolutionize how we work, but without clear national data-use laws, we're in for a future full of lawsuits, confusion, and risk. Until there's legal clarity, one practical safeguard is setting up your own private GPT in a secure cloud environment. K3 Technology builds this custom solution, and we'd love to discuss how the tool can benefit your business.
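
To make that concrete, here's a minimal sketch of what "private" can look like in practice: a standard client pointed at a self-hosted, OpenAI-compatible endpoint running inside your own cloud, so prompts and company data never leave your environment. The endpoint URL, API key, and model name below are hypothetical placeholders, not a reference to any specific K3 deployment.

```python
# Minimal sketch: route AI requests to a privately hosted,
# OpenAI-compatible endpoint instead of a public provider.
from openai import OpenAI

client = OpenAI(
    base_url="https://llm.internal.example.com/v1",  # hypothetical private endpoint
    api_key="YOUR_INTERNAL_KEY",  # issued by your own gateway, not a public vendor
)

response = client.chat.completions.create(
    model="private-gpt",  # placeholder name for a self-hosted model
    messages=[
        {"role": "system", "content": "You are an internal assistant."},
        {"role": "user", "content": "Summarize the key risks in our vendor contracts."},
    ],
)

print(response.choices[0].message.content)
```

Because only the base URL changes, teams can keep their existing tooling while routing every request through infrastructure they control.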

If you’re wondering how to navigate the gray areas of AI and data use, we’re here to help. At K3, we focus on making AI work for your business—securely, ethically, and legally.