OpenAI CEO Sam Altman's words haunt Claude AI as authors' lawsuit alleges: "Anthropic’s model seeks to profit from strip-mining the human expression and ingenuity behind each one of those works"

Anthropic Claude (Image credit: Anthropic)

What you need to know

  • Anthropic has been slapped with a lawsuit by a group of authors for copyright infringement.
  • The company is allegedly training its Claude AI model using the authors' content without consent or compensation.
  • OpenAI CEO Sam Altman had previously admitted it's impossible to create ChatGPT-like tools without copyrighted content.

Maybe OpenAI CEO Sam Altman was right: it's impossible to create tools like ChatGPT without copyrighted content. Over the past few years, Microsoft and OpenAI have been sued multiple times over copyright infringement. Now, Anthropic is joining the fray after multiple authors filed a lawsuit against the company over the same issue.

Per the complaint, Anthropic used the authors' work to train its Claude AI chatbot to respond to human prompts. The company has indicated that it is aware of the class action lawsuit filed against it by authors Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson, and is currently assessing the copyright infringement claims.

According to the lawsuit: 

"It is no exaggeration to say that Anthropic’s model seeks to profit from strip-mining the human expression and ingenuity behind each one of those works. Humans who learn from books buy lawful copies of them, or borrow them from libraries that buy them, providing at least some measure of compensation to authors and creators."

For context, Anthropic was founded in 2021 to advance the generative AI landscape and deploy safe and reliable models for everyone. As it happens, OpenAI co-founder John Schulman recently announced his departure from the ChatGPT maker to focus on AI alignment at Anthropic. The company is often viewed as an OpenAI rival, and their flagship models share multiple similarities.

For instance, its recently unveiled Claude 3.5 Sonnet model competes on an even footing with OpenAI's GPT-4o, with vision capabilities and a great sense of humor.

Meanwhile, Anthropic is also fighting a separate lawsuit for allegedly using lyrics from copyrighted songs without consent or compensation. Major tech corporations in the AI space, like Microsoft and OpenAI, often brand the training of their models on copyrighted content as "fair use." They have also argued that copyright law doesn't forbid training AI models on copyrighted content.

What would happen if AI models were barred from using copyrighted content?

Anthropic Claude 3.5 Sonnet model (Image credit: Anthropic)

Even though tech companies haven't yet been restricted from using copyrighted content to train their AI models, multiple reports indicate that chatbots are seemingly getting dumber and often go off the rails by generating inaccurate responses.

AI chatbots have been spotted hallucinating, erroneously recommending a food bank as a tourist attraction, and even asking readers to take part in a poll to determine the cause of a woman's unfortunate passing. Even Google's AI Overviews feature recommended eating glue and rocks. The situation could only worsen if AI chatbots are restricted from accessing copyrighted content.


Kevin Okemwa
Contributor

Kevin Okemwa is a seasoned tech journalist based in Nairobi, Kenya with lots of experience covering the latest trends and developments in the industry at Windows Central. With a passion for innovation and a keen eye for detail, he has written for leading publications such as OnMSFT, MakeUseOf, and Windows Report, providing insightful analysis and breaking news on everything revolving around the Microsoft ecosystem. You'll also catch him occasionally contributing at iMore about Apple and AI. While AFK and not busy following the ever-emerging trends in tech, you can find him exploring the world or listening to music.

  • fjtorres5591
    Trying to go after AI companies with copyright is a fool's quest.
    Copyright has very specific rules that set bounds that AI training does not cross.
    The Google HathiTrust case, among many, clearly establishes the legality of crawling/scanning copyrighted material to produce a vastly different product. In this case, transforming content into executable software. Memorize that word: transforming.

    It's the wrong tool for the job.

    As the name clearly states, copy-right is about making and distributing *exact* (or near-exact) copies that can *substitute* for a protected product. (Another word to memorize.) Creating and distributing a vastly different product is literally FAIR USE. A hundred years of precedent says so. It is a waste of time and money going after the chatbot and the training database, both of which are *software* and not content. Technical and legal illiteracy combined.

    The cases claiming unjust enrichment will probably fail, too, but at least they have the "virtue" of novelty and that might fly to some extent in the nutty ninth.