AI Lawsuits Reveal the Extent of the Damage These Systems Can Cause IP Holders

AI has fast become part of the everyday workplace, with tools designed to simplify routine tasks. But the new technology brings familiar pitfalls, including questions about intellectual property and how to protect it from unauthorized use.

AI, in many ways, is a product of what is put into it. Done correctly, AI is a wonderful tool, and many design companies have defended using it to manage the finer details of their work. On the other hand, the only way for AI to learn is for people to feed the machine, so to speak. This has led companies like X (formerly Twitter) to adopt policies stating that anything posted on the platform may be used to train its AI chatbot Grok. (1) This, unsurprisingly, has led many users to cry foul.

In a case decided in 2025, AI was at the center of a dispute over IP rights and what constitutes fair use. Under US copyright law, Section 107 outlines what qualifies as fair use of copyrighted material: “Notwithstanding the provisions of sections 106 and 106A, the fair use of a copyrighted work, including such use by reproduction in copies or phonorecords or by any other means specified by that section, for purposes such as criticism, comment, news reporting, teaching (including multiple copies for classroom use), scholarship, or research, is not an infringement of copyright. In determining whether the use made of a work in any particular case is a fair use, the factors to be considered shall include—

(1) the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes;

(2) the nature of the copyrighted work;

(3) the amount and substantiality of the portion used in relation to the copyrighted work as a whole; and

(4) the effect of the use upon the potential market for or value of the copyrighted work.

The fact that a work is unpublished shall not itself bar a finding of fair use if such finding is made upon consideration of all the above factors.” (2) This section, along with provisions of the Digital Millennium Copyright Act, is what allows online videos to use copyrighted material without being taken down by the copyright holders, so long as the use is considered transformative.

The case began as a class action lawsuit filed in 2024, alleging that AI companies rely on these fair use arguments to justify training their models on copyrighted work. In the suit against Anthropic, several authors contended that the company not only took their work without permission but also failed to compensate them for it. Anthropic countered that because the books were available online, it was free to use them. In June 2025, the judge ruled that while training the AI on the books qualified as fair use, downloading them from pirate sites did not. The case ended in a $1.5 billion settlement, roughly $3,000 per pirated work. (3)

Though it was the first such case to end in a settlement, the Anthropic suit shows how central copyright infringement has become to AI as more cases work their way through the courts. Someone could easily take a copyrighted work, modify it with AI tools, and pass it off as their own, and as those tools grow more sophisticated, it becomes harder to tell the real from the fake. In a series of studies published in the journal Cognitive Research: Principles and Implications, researchers found it was extremely difficult, even with a reference picture, to distinguish fake images from real ones.

In one experiment, which involved participants from the US, Canada, the UK, Australia, and New Zealand, subjects were shown a series of facial images, both real and artificially generated, and asked to identify which was which. The team said participants' tendency to mistake the AI-generated faces for real photographs showed just how plausible those images had become.

Another experiment asked participants whether they could tell genuine pictures of Hollywood stars such as Paul Rudd and Olivia Wilde from computer-generated versions. Again, the results showed how difficult it can be to spot the authentic image.

Professor Jeremy Tree, from the School of Psychology, said: “Studies have shown that face images of fictional people generated using AI are indistinguishable from real photographs. But for this research we went further by generating synthetic images of real people.

“The fact that everyday AI tools can do this not only raises urgent concerns about misinformation and trust in visual media but also the need for reliable detection methods as a matter of urgency.”(4)

AI has become entrenched in our personal and commercial lives. The information these systems gather is used to train their algorithms, which creates legal headaches for everyone involved. IP and copyright holders should continue to protect their work, even when AI companies believe they are entitled to use it. As more of these lawsuits play out, the line between the real and the AI-generated will keep blurring, and the need to define it will only grow.
