
SoylentNews is people

posted by martyb on Friday December 29 2023, @03:13PM   Printer-friendly

New York Times Sues Microsoft, ChatGPT Maker OpenAI Over Copyright Infringement

The New York Times on Wednesday filed a lawsuit against Microsoft and OpenAI, the company behind popular AI chatbot ChatGPT, accusing the companies of creating a business model based on "mass copyright infringement," stating their AI systems "exploit and, in many cases, retain large portions of the copyrightable expression contained in those works:"

Microsoft both invests in and supplies OpenAI, providing it with access to the Redmond, Washington, giant's Azure cloud computing technology.

The publisher said in a filing in the U.S. District Court for the Southern District of New York that it seeks to hold Microsoft and OpenAI to account for the "billions of dollars in statutory and actual damages" it believes it is owed for the "unlawful copying and use of The Times's uniquely valuable works."

[...] The Times said in an emailed statement that it "recognizes the power and potential of GenAI for the public and for journalism," but added that journalistic material should only be used for commercial gain with permission from the original source.

"These tools were built with and continue to use independent journalism and content that is only available because we and our peers reported, edited, and fact-checked it at high cost and with considerable expertise," the Times said.

"Settled copyright law protects our journalism and content. If Microsoft and OpenAI want to use our work for commercial purposes, the law requires that they first obtain our permission. They have not done so."

[...] OpenAI has tried to allay news publishers' concerns. In December, the company announced a partnership with Axel Springer — the parent company of Business Insider, Politico, and European outlets Bild and Welt — which would license its content to OpenAI in return for a fee.

Also at CNBC and The Guardian.

Previously:

NY Times Sues Open AI, Microsoft Over Copyright Infringement

NY Times sues Open AI, Microsoft over copyright infringement:

In August, word leaked out that The New York Times was considering joining the growing legion of creators that are suing AI companies for misappropriating their content. The Times had reportedly been negotiating with OpenAI regarding the potential to license its material, but those talks had not gone smoothly. So, four months after the company was reportedly considering suing, the suit has now been filed.

The Times is targeting various companies under the OpenAI umbrella, as well as Microsoft, an OpenAI partner that both uses it to power its Copilot service and helped provide the infrastructure for training the GPT Large Language Model. But the suit goes well beyond the use of copyrighted material in training, alleging that OpenAI-powered software will happily circumvent the Times' paywall and ascribe hallucinated misinformation to the Times.

Journalism is expensive

The suit notes that The Times maintains a large staff that allows it to do things like dedicate reporters to a huge range of beats and engage in important investigative journalism, among other things. Because of those investments, the newspaper is often considered an authoritative source on many matters.

All of that costs money, and The Times earns that by limiting access to its reporting through a robust paywall. In addition, each print edition has a copyright notification, the Times' terms of service limit the copying and use of any published material, and it can be selective about how it licenses its stories. Beyond driving revenue, these restrictions also help it to maintain its reputation as an authoritative voice by controlling how its works appear.

The suit alleges that OpenAI-developed tools undermine all of that. "By providing Times content without The Times's permission or authorization, Defendants' tools undermine and damage The Times's relationship with its readers and deprive The Times of subscription, licensing, advertising, and affiliate revenue," the suit alleges.

Part of the unauthorized use The Times alleges came during the training of various versions of GPT. Prior to GPT-3.5, information about the training dataset was made public. One of the sources used is a large collection of online material called "Common Crawl," which the suit alleges contains information from 16 million unique records from sites published by The Times. That places The Times as the third most-referenced source, behind Wikipedia and a database of US patents.

OpenAI no longer discloses as many details of the data used for training of recent GPT versions, but all indications are that full-text NY Times articles are still part of that process. [...] Expect access to training information to be a major issue during discovery if this case moves forward.

Not just training

A number of suits have been filed regarding the use of copyrighted material during training of AI systems. But the Times' suit goes well beyond that to show how the material ingested during training can come back out during use. "Defendants' GenAI tools can generate output that recites Times content verbatim, closely summarizes it, and mimics its expressive style, as demonstrated by scores of examples," the suit alleges.


Original Submission #1 · Original Submission #2 · Original Submission #3

 
This discussion was created by martyb (76) for logged-in users only, but now has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 5, Interesting) by Rosco P. Coltrane on Friday December 29 2023, @03:24PM (1 child)

    by Rosco P. Coltrane (4757) on Friday December 29 2023, @03:24PM (#1338238)

    Azure cloud computing honeypot.

    There. FTFY.

    The cloud isn't about providing online services: it's about collecting as much private data as possible and monetizing it. It's always been about that.

    The New York Times is absolutely right: it is a business model based on massive copyright infringement. But it's wrong on two things: it's not new, and it's not just Microsoft and OpenAI. It's been going on for decades, and all Big Data players essentially owe their very existence to the business of exploiting data they have no right to exploit.

    The difference between then and now is that the data they had no right to exploit wasn't exploited directly: it was digested and used indirectly for the purpose of advertisement. For example, when Google's surveillance collective has your medical file because your healthcare provider put your medical data in their cloud, and it knows you have some disease, and you keep getting advertisement more or less closely related to that disease, you have a hunch that Google is using data it shouldn't be using but you can't prove it.

    With AI, you can prove the infringement clear as day: chat with the stupid bot long enough and it will regurgitate your own data back to you verbatim. That's the difference.

  • (Score: 2) by The Vocal Minority on Sunday December 31 2023, @05:58AM

    by The Vocal Minority (2765) on Sunday December 31 2023, @05:58AM (#1338446) Journal

If you have any actual proof this is happening, then please provide it. The contractual arrangements around the use of Azure, and other similar cloud platforms, provide guarantees of privacy for customer data; otherwise customers wouldn't use them. I also believe the data is actually encrypted at rest with the private key in the customer's possession. This is not Gmail, where Google explicitly tells you they are going to look through your e-mails.

Yes, there are no guarantees, and trust is required that the cloud infrastructure is actually doing what you are told it is doing. But that is no different from running closed-source software in general and/or using a third-party data center.

Personally, I am very suspicious of these cloud platforms, and I think they give big American tech companies way too much power, but if I am to convince people not to use them, then I need proof that these privacy abuses are happening. Otherwise it is all just a bunch of paranoid ranting.