ANTHROPIC

The judge just gave Anthropic a homework pass

A U.S. judge just handed Anthropic a partial win in one of the first major AI copyright cases.

The court ruled it’s fair use to train AI models on physical books the company legally purchased and digitised, marking the first time a judge has taken this stance.

But the ruling is narrow.

It only applies to books Anthropic bought, scanned, and then used for training.

The judge said this process is “transformative” enough to fall under fair use, comparing it to teaching students how to write by using examples.

In short, using lawfully obtained books to help an AI “learn” isn’t the same as copying or replacing the original work.

However, this doesn’t mean Anthropic is off the hook.

The company still faces a separate trial over claims it stored and potentially used millions of pirated books from the internet.

In brief:

  • Training AI on purchased, digitised books is fair use

  • Separate trial coming over pirated books allegedly stored by Anthropic

  • The ruling doesn’t cover whether AI outputs break copyright laws

Like DIY data or something

The judge made it clear: downloading material from pirate websites, when it could’ve been bought legally, is not protected by fair use, no matter how it’s used later on.

The ruling also didn’t address a key question in other copyright lawsuits: whether AI-generated content itself could violate copyright.

That issue is still up in the air.

"Transformative" is just legal speak for "don’t come for me."
