Media outlets are crying foul over AI companies using their content to build chatbots. They may find friends in the Senate:
More than a decade ago, the normalization of tech companies carrying content created by news organizations without directly paying them — cannibalizing readership and ad revenue — precipitated the decline of the media industry. With the rise of generative artificial intelligence, those same firms threaten to further tilt the balance of power between Big Tech and news.
On Wednesday, lawmakers on the Senate Judiciary Committee invoked their earlier failure to pass legislation barring Big Tech from exploiting news content as they backed proposals that would require AI companies to strike licensing deals with news organizations.
Richard Blumenthal, Democrat of Connecticut and chair of the committee, joined several other senators in supporting calls for a licensing regime and for a framework clarifying that intellectual property law does not protect AI companies that use copyrighted material to build their chatbots.
[...] The fight over the legality of AI firms ingesting content from news organizations without consent or compensation is split between two camps: those who believe the practice is protected under the "fair use" doctrine in intellectual property law, which allows creators to build upon copyrighted works, and those who argue that it constitutes copyright infringement. Courts are currently wrestling with the issue, but an answer is likely years away. In the meantime, AI companies continue to use copyrighted content as training material, endangering the financial viability of media in a landscape where readers can bypass direct sources in favor of search results generated by AI tools.
[...] A lawsuit from The New York Times, filed last month, pulled back the curtain on negotiations over the price and terms of licensing its content. Before suing, the Times said it had been talking with OpenAI and Microsoft for months about a deal, though the talks never produced one. With AI companies crawling the internet for high-quality written content, news organizations have been backed into a corner: accept lowball offers to license their content, or spend the time and money to litigate. Some companies, like Axel Springer, took the money.
It's important to note that under intellectual property laws, facts are not protected.
Also at Courthouse News Service and Axios.
(Score: 4, Interesting) by RS3 on Friday January 12 2024, @07:25PM (6 children)
I hate to be cynical or pessimistic, but my money is on the AI figuring out workarounds much faster than our bio brains can block them. It's going to be up to the AI admins to know what the AI is doing, and up to the programmers and admins to put limits on it. I'm sure most reading this can see the morass this is heading toward.
(Score: 4, Informative) by ikanreed on Friday January 12 2024, @07:54PM (4 children)
AI (as this current crop of AI companies pitch it) doesn't "figure" anything out. It scrapes a shit-ton of shit, and then finds patterns in it; a toy version of that pattern-finding is sketched at the end of this comment.
"Putting limits on it" at this point has amounted to a second layer of training it to not say anything too offensive. That's it.
The problem we face now is a pissload of absolutely useless content produced to grab pennies of advertising dollars, and any attempt to "limit" that runs into an inherent problem: the junk is every bit as inoffensive as the "desired" content. The only difference is quantity, and there's no way to police that.
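To make "scrape a ton of text and find patterns in it" concrete, here's a toy bigram model in Python that predicts the next word purely from co-occurrence counts. The names (build_table, next_word) are invented for this sketch; it's nobody's real pipeline, just the pattern-replay idea at its smallest:

```python
import random
from collections import Counter, defaultdict

def build_table(corpus):
    # "Scrape" the corpus and count which word follows which: this
    # counting is the entire extent of the "pattern finding" here.
    table = defaultdict(Counter)
    words = corpus.split()
    for a, b in zip(words, words[1:]):
        table[a][b] += 1
    return table

def next_word(table, word):
    followers = table.get(word)
    if not followers:
        return None
    # Sample in proportion to raw frequency: no understanding, no goals,
    # just replaying the statistics of whatever was scraped.
    choices, weights = zip(*followers.items())
    return random.choices(choices, weights=weights)[0]

table = build_table("the cat sat on the mat and the cat ran")
print(next_word(table, "the"))   # "cat" is twice as likely as "mat"
```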
(Score: 2) by RS3 on Friday January 12 2024, @08:16PM (3 children)
I understand you, but "figuring out" might come down to a very fine-line definition. And maybe some AI systems have much more capability than we're being told; I'm reasonably certain there's research into much higher levels of reasoning, "figuring out", and so on.
I'm more interested in what will constitute moral and ethical values in AI as its development continues.
(Score: 5, Interesting) by ikanreed on Friday January 12 2024, @08:26PM (1 child)
No. It really doesn't. When it's not processing or generating text, the transformer model doesn't "think" on its own.
It has exactly two modes in which the matrix of weights is read or written in memory (outside of debugging tools, of course):
1: It's being trained. As it reads in new data, the weights are changed based on the difference between what it sees and what it expected to see.
2: It's being asked to generate content, in which case it pushes the input through the weight matrices and spits out output.
Nowhere in that process does it abstractly consider solving a problem "out of scope" of anticipating the outputs for the inputs. The code simply does not work that way.
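Here's a minimal sketch of those two modes, using a single weight matrix in NumPy as a stand-in for a full transformer. The function names are invented and this is obviously not any vendor's actual code; the point is just that the weights are only ever read (generation) or nudged by an error signal (training):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))          # the "matrix of weights"

def generate(x):
    # Mode 2: inference. The weights are only read; the input is pushed
    # through a forward pass and nothing is remembered afterward.
    return x @ W

def train_step(x, expected, lr=0.05):
    # Mode 1: training. The weights are written, nudged in proportion to
    # the difference between what it sees and what it expected to see.
    global W
    error = generate(x) - expected
    W -= lr * np.outer(x, error)     # gradient step for a squared-error loss

x = np.array([1.0, 0.0, -1.0, 0.5])
y = np.array([0.2, -0.3, 0.1, 0.0])
for _ in range(200):
    train_step(x, y)
print(np.abs(generate(x) - y).max())  # error has shrunk toward 0
```

There's no third mode in which the weights sit there and deliberate.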
(Score: 3, Insightful) by RS3 on Friday January 12 2024, @10:46PM
Not an arguer; don't mean to argue. I'm sure you're right, for some given code, meaning some very specific (and limiting) definition of "AI".
But you can't be sure that nobody is working on much higher levels of "thinking", even if it's mostly iterative: guessing outcomes and pattern-matching them against known conclusions. Our brains pretty much work that way; hopefully we learn that things like touching the hot stove are not among the better possible paths. Even sci-fi authors have envisioned "thinking" computers with a huge database of iterative and multi-step events / processes and their outcomes, some kind of random generator that conjures possibilities, tests them against known outcomes, and databanks (caches) those too. I dunno, doesn't seem all that far-fetched, but I'm not deep in that world.
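That generate-and-test idea is easy enough to sketch. Everything below — the action names, the scores, the cache — is hypothetical, just to show the loop: conjure possibilities at random, test them against known outcomes, and cache (databank) the conclusions so they don't have to be re-learned:

```python
import random

KNOWN_OUTCOMES = {"touch_hot_stove": -10, "eat_lunch": 5, "read_manual": 3}
cache = {}

def evaluate(action):
    if action in cache:                    # a cached conclusion: no re-test
        return cache[action]
    score = KNOWN_OUTCOMES.get(action, 0)  # "test against known outcomes"
    cache[action] = score
    return score

def choose(actions, trials=20):
    best, best_score = None, float("-inf")
    for _ in range(trials):
        candidate = random.choice(actions)  # random generator of possibilities
        score = evaluate(candidate)
        if score > best_score:
            best, best_score = candidate, score
    return best

print(choose(list(KNOWN_OUTCOMES)))   # reliably avoids the hot stove
```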
(Score: 3, Insightful) by hendrikboom on Friday January 12 2024, @11:45PM
Yes, there is research into "figuring out".
One team is having an AI generate some computer code and its proof of correctness. Then they feed that into another system that checks the proof using traditional formal logic, feeding the errors back to the generator.
Apparently they claim about a 60% success rate. I do not know how big the programs are; my guess is small.
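Something like the following loop, I'd guess. The two stub functions stand in for the language model and the formal proof checker; none of this is the team's actual code, just the shape of the feedback cycle described above:

```python
def generate_with_proof(spec, feedback=None):
    # Stand-in for the code-generating model; in the described setup it
    # would condition on the spec plus any errors from the previous round.
    code = f"# candidate program for: {spec}"
    proof = "valid-proof" if feedback else "broken-proof"
    return code, proof

def check_proof(code, proof):
    # Stand-in for the traditional formal proof checker; returns a list
    # of errors, where an empty list means the proof holds.
    return [] if proof == "valid-proof" else ["step 3 does not follow"]

def verified_synthesis(spec, max_rounds=5):
    feedback = None
    for _ in range(max_rounds):
        code, proof = generate_with_proof(spec, feedback)
        errors = check_proof(code, proof)
        if not errors:
            return code          # proof accepted: the code is verified
        feedback = errors        # feed the checker's errors back in
    return None                  # give up; hence success rates below 100%

print(verified_synthesis("sort a list"))
```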
(Score: 3, Insightful) by looorg on Friday January 12 2024, @08:01PM
I'm not saying the solution won't be utter snake oil, but I wouldn't be all too surprised if I soon start getting calls from people trying to sell the latest and greatest in AI-blocking technology.