posted by hubie on Friday January 12 2024, @05:24PM
from the can-we-crush-AI-instead? dept.

Media outlets are crying foul over AI companies using their content to build chatbots. They may find friends in the Senate:

More than a decade ago, the normalization of tech companies carrying content created by news organizations without directly paying them — cannibalizing readership and ad revenue — precipitated the decline of the media industry. With the rise of generative artificial intelligence, those same firms threaten to further tilt the balance of power between Big Tech and news.

On Wednesday, lawmakers on the Senate Judiciary Committee pointed to their failure to adopt legislation that would've barred the exploitation of content by Big Tech as they backed proposals that would require AI companies to strike licensing deals with news organizations.

Richard Blumenthal, Democrat of Connecticut and chair of the committee, joined several other senators in supporting calls for a licensing regime and for a framework clarifying that intellectual property laws don't protect AI companies using copyrighted material to build their chatbots.

[...] The fight over the legality of AI firms eating content from news organizations without consent or compensation is split into two camps: Those who believe the practice is protected under the "fair use" doctrine in intellectual property law that allows creators to build upon copyrighted works, and those who argue that it constitutes copyright infringement. Courts are currently wrestling with the issue, but an answer to the question is likely years away. In the meantime, AI companies continue to use copyrighted content as training materials, endangering the financial viability of media in a landscape in which readers can bypass direct sources in favor of search results generated by AI tools.

[...] A lawsuit from The New York Times, filed last month, pulled back the curtain on negotiations over the price and terms of licensing its content. Before suing, the Times said it had been talking with OpenAI and Microsoft for months about a deal, though the talks never produced one. With AI companies crawling the internet for high-quality written content, news organizations have been backed into a corner, having to decide whether to accept lowball offers to license their content or expend the time and money to sue. Some companies, like Axel Springer, took the money.

It's important to note that under intellectual property laws, facts are not protected.

Also at Courthouse News Service and Axios.



Original Submission

 
This discussion was created by hubie (1068) for logged-in users only, but now has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 4, Interesting) by RS3 on Friday January 12 2024, @07:25PM (6 children)

    by RS3 (6367) on Friday January 12 2024, @07:25PM (#1340065)

    I hate to be cynical or pessimistic, but my money is on the AI figuring it out much faster than our bio brains can try to block it. It's going to be up to the AI admins to know what the AI is doing, and to programmers and admins to put limits on it. I'm sure most reading this can see the morass this is heading toward.

  • (Score: 4, Informative) by ikanreed on Friday January 12 2024, @07:54PM (4 children)

    by ikanreed (3164) Subscriber Badge on Friday January 12 2024, @07:54PM (#1340068) Journal

    AI(as this current crop of AI companies pitch it) doesn't "figure" anything out. It scrapes a shit-ton of shit, and then finds patterns in it.

    "Putting limits on it" at this point has amounted to a second layer of training it to not say anything too offensive. That's it.

    The problem we face now is a pissload of absolutely useless content produced to grab pennies of advertising dollars. And any attempt to "limit" that will face inherent problems with the inoffensiveness of the "desired" content. The only problem is quantity and there's no way to police that.
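    A toy stand-in for "scrapes a shit-ton of shit, and then finds patterns in it": the two made-up sentences and the bigram counting below are purely illustrative, nothing like a production pipeline, but they show pattern-replay rather than "figuring out".

    ```python
    # Toy illustration only: "scrape" two made-up sentences, then "find patterns"
    # by counting which word tends to follow which (a bigram table).
    from collections import Counter, defaultdict

    corpus = [
        "ai companies scrape a lot of text from the web",
        "ai companies find patterns in the text they scrape",
    ]  # imagine this came from a crawler instead

    follows = defaultdict(Counter)
    for doc in corpus:
        words = doc.split()
        for prev, nxt in zip(words, words[1:]):
            follows[prev][nxt] += 1   # the "pattern": which word tends to follow which

    # "Generation" is then just replaying the most common continuation.
    word = "ai"
    out = [word]
    for _ in range(6):
        if not follows[word]:
            break
        word = follows[word].most_common(1)[0][0]
        out.append(word)
    print(" ".join(out))
    ```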

    • (Score: 2) by RS3 on Friday January 12 2024, @08:16PM (3 children)

      by RS3 (6367) on Friday January 12 2024, @08:16PM (#1340071)

      AI (as this current crop of AI companies pitch it) doesn't "figure" anything out.

      I understand you, but it might be a very fine-line definition. And maybe some AI have much more capability than we're being told. I'm reasonably certain there's much more research into much higher levels of reasoning, "figuring out", etc.

      I'm more interested in what constitutes moral and ethical values in AI as AI development ensues.

      • (Score: 5, Interesting) by ikanreed on Friday January 12 2024, @08:26PM (1 child)

        by ikanreed (3164) Subscriber Badge on Friday January 12 2024, @08:26PM (#1340073) Journal

        No. It really doesn't. When not processing or generating text, the transformer model doesn't "think" on its own.

        It has two modes where the matrix of weights is being read or written in memory (outside of debugging tools, of course), and they are:

        1: it's being trained. If it's reading in new data, the weights are changed based on the difference between what it sees and what it expected to see.
        2: it's being asked to generate content, then it processes the input data through a convolutional matrix and spits out output.

        Nowhere in that process is it abstractly considering trying to solve a problem "out of scope" of anticipating the outputs for the inputs. The code simply does not work that way.
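        A minimal sketch of those two modes, assuming nothing about any particular vendor's model: the tiny NumPy "network" below is purely illustrative. Weights are only written during the training step; generation just reads them and spits out an output.

        ```python
        # Mode 1 (training) writes the weights; Mode 2 (generation) only reads them.
        import numpy as np

        rng = np.random.default_rng(0)
        W = rng.normal(size=(4, 4))              # the "matrix of weights"

        def generate(x):
            """Generation mode: read the weights, produce an output, change nothing."""
            return np.tanh(W @ x)

        def train_step(x, expected, lr=0.1):
            """Training mode: nudge the weights based on the difference between
            what the model produces and what it was expected to produce."""
            global W
            seen = generate(x)
            error = seen - expected
            W -= lr * np.outer(error * (1 - seen**2), x)   # crude gradient step
            return float(np.mean(error**2))

        x = rng.normal(size=4)
        target = np.array([0.5, -0.5, 0.0, 0.25])
        for _ in range(200):
            loss = train_step(x, target)          # weights are written here

        print("training loss:", round(loss, 6))
        print("generated output:", generate(x))   # no weights change here
        ```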

        • (Score: 3, Insightful) by RS3 on Friday January 12 2024, @10:46PM

          by RS3 (6367) on Friday January 12 2024, @10:46PM (#1340086)

          Not an arguer; don't mean to argue. I'm sure you're right, for some given code, meaning some very specific (and limiting) definition of "AI".

          But you can't be sure that nobody is working on much higher levels of "thinking", even if it's mostly iterative: guessing outcomes and pattern-matching them against known conclusions. Our brains pretty much work that way; hopefully we learn that touching the hot stove is not one of the better possible paths. Even sci-fi authors have envisioned "thinking" computers that have a huge database of iterative, multi-step events / processes and their outcomes, some degree of random generator that conjures possibilities, tests them against known outcomes, and databanks (caches) those too. I dunno, doesn't seem all that far-fetched, but I'm not deep in that world.

      • (Score: 3, Insightful) by hendrikboom on Friday January 12 2024, @11:45PM

        by hendrikboom (1125) on Friday January 12 2024, @11:45PM (#1340095) Homepage Journal

        Yes, there is research into "figuring out".
        One team is having an AI generate some computer code and its proof of correctness. Then they feed that into another system that checks the proof using traditional formal logical systems, feeding the errors back to the generator.
        Apparently they claim to reach about 60% success. I do not know how big the programs are. My guess is small.
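        Roughly, that loop would look like the sketch below; generate_candidate and check_proof are hypothetical stand-ins for the language model and the formal proof checker, not real APIs.

        ```python
        # Hypothetical sketch of the generate-and-check loop described above.
        from typing import Callable, List, Optional, Tuple

        def generate_and_verify(
            spec: str,
            generate_candidate: Callable[[str, List[str]], Tuple[str, str]],  # LLM stand-in
            check_proof: Callable[[str, str], Optional[str]],                 # proof checker stand-in
            max_rounds: int = 5,
        ) -> Optional[Tuple[str, str]]:
            """Ask for (code, proof); feed checker errors back until the proof passes."""
            feedback: List[str] = []
            for _ in range(max_rounds):
                code, proof = generate_candidate(spec, feedback)
                error = check_proof(code, proof)   # None means the proof checked out
                if error is None:
                    return code, proof             # success: verified code plus its proof
                feedback.append(error)             # "feeding the errors back to the generator"
            return None                            # gave up; the other ~40% of cases
        ```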

  • (Score: 3, Insightful) by looorg on Friday January 12 2024, @08:01PM

    by looorg (578) on Friday January 12 2024, @08:01PM (#1340069)

    I'm not saying the solution won't be utter snake oil, but I wouldn't be all too surprised if I soon get calls from people trying to sell the latest and greatest in AI-blocking technology.