
posted by martyb on Friday July 09 2021, @12:52AM
from the we-violate-all-open-source-licenses-equally dept.

GitHub’s automatic coding tool rests on untested legal ground:

The Copilot tool has been trained on mountains of publicly available code

[...] When GitHub announced Copilot on June 29, the company said that the algorithm had been trained on publicly available code posted to GitHub. Nat Friedman, GitHub’s CEO, has written on forums like Hacker News and Twitter that the company is legally in the clear. “Training machine learning models on publicly available data is considered fair use across the machine learning community,” the Copilot page says.

But the legal question isn’t as settled as Friedman makes it sound — and the confusion reaches far beyond just GitHub. Artificial intelligence algorithms only function due to massive amounts of data they analyze, and much of that data comes from the open internet. An easy example would be ImageNet, perhaps the most influential AI training dataset, which is entirely made up of publicly available images that ImageNet creators do not own. If a court were to say that using this easily accessible data isn’t legal, it could make training AI systems vastly more expensive and less transparent.

Despite GitHub’s assertion, there is no direct legal precedent in the US that upholds publicly available training data as fair use, according to Mark Lemley and Bryan Casey of Stanford Law School, who published a paper last year about AI datasets and fair use in the Texas Law Review.

[...] And there are past cases to support that opinion, they say. They consider the Google Books case, in which Google downloaded and indexed more than 20 million books to create a literary search database, to be similar to training an algorithm. The Second Circuit upheld Google’s fair use claim (and the Supreme Court declined to review the decision), on the grounds that the new tool was transformative of the original work and broadly beneficial to readers and authors.

Microsoft’s GitHub Copilot Met with Backlash from Open Source Copyright Advocates:

The GitHub Copilot system runs on a new AI platform developed by OpenAI, known as Codex. Copilot is designed to help programmers across a wide range of languages, including popular ones such as JavaScript, Ruby, Go, Python, and TypeScript, as well as many others.

“GitHub Copilot understands significantly more context than most code assistants. So, whether it’s in a docstring, comment, function name, or the code itself, GitHub Copilot uses the context you’ve provided and synthesizes code to match. Together with OpenAI, we’re designing GitHub Copilot to get smarter at producing safe and effective code as developers use it.”
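To illustrate the workflow the quote describes, here is a hypothetical sketch (not actual Copilot output): a developer writes only a function name, signature, and docstring, and a Copilot-style assistant uses that context to propose a body.

```python
from datetime import date

def days_between(start: str, end: str) -> int:
    """Return the number of days between two ISO-formatted dates."""
    # A Copilot-style assistant would read the function name, signature,
    # and docstring above as context and suggest an implementation
    # along these lines; the developer accepts or edits the suggestion.
    return abs((date.fromisoformat(end) - date.fromisoformat(start)).days)
```

The controversy is that a suggestion like this may echo patterns, or in rare cases near-verbatim snippets, from the publicly available code the model was trained on.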

One of the main criticisms of Copilot is that it goes against the ethos of open source because it is a paid service. Microsoft could arguably justify this by pointing to the cost of the resources needed to train the AI. Still, the training is problematic for some people, who argue that Copilot learns from snippets of open source code and then charges users for the result.

Is it fair use to auto-suggest snippets of code that are under an open source copyright license? Does that potentially bring your code under that license by using Copilot?

One glorious day, code will write itself without developers.

See Also:
Copilot on GitHub
Twitter: GitHub Support just straight up confirmed in an email that yes, they used all public GitHub code, for Codex/Copilot regardless of license.
Hacker News: GitHub confirmed using all public code for training copilot regardless license
OpenAI warns AI behind GitHub’s Copilot may be susceptible to bias


Original Submission

  • (Score: 0) by Anonymous Coward on Friday July 09 2021, @03:57AM (2 children)

    by Anonymous Coward on Friday July 09 2021, @03:57AM (#1154184)
    Actually, the license doesn't give them the right to take your code and copy-pasta it into a separate derivative work. Especially since such use is not necessary for providing the github service. Read it again.

    Though why anyone would use github, knowing it's going to be abused, is beyond me.

  • (Score: 2) by HiThere on Friday July 09 2021, @03:08PM (1 child)

    by HiThere (866) Subscriber Badge on Friday July 09 2021, @03:08PM (#1154310) Journal

    The question then appears to be: "Can they choose to add new features and call them part of the same service?" Certainly this wasn't part of the service when most people agreed to it, but if their "new AI application" is offered by the same organization to those capable of using the prior service, can they define it as part of the same service?

    It's not as if people never used code repositories as examples of how to do things before. They've just automated that as a new feature of their service. Or is that stretching things beyond where a court would agree?

    --
    Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
    • (Score: 3, Insightful) by JoeMerchant on Friday July 09 2021, @03:44PM

      by JoeMerchant (3937) on Friday July 09 2021, @03:44PM (#1154338)

      Depends on the court, of course.

      What I wonder is: if this goes on for 5, 20, or 100 years without being tested in court, at what point is it immune from contest? I mean, of course as the practice spreads over time and various service providers there will be fewer and fewer courts willing to find against it, but Mickey Mouse made a mockery of fair use for nearly 100 years before the political climate denied him (and the industry as a whole) another copyright extension.

      --
      🌻🌻 [google.com]