A proposed set of rules by the European Union would, among other things, require makers of generative AI tools such as ChatGPT to disclose any copyrighted material used by the technology platforms to create content of any kind.
A new draft of the European Parliament's legislation, a copy of which was obtained by The Wall Street Journal, would allow the original creators of content used by generative AI applications to share in any profits that result.
The European Union's "Artificial Intelligence Act" (AI Act) is the first of its kind from a Western bloc of nations. The proposed legislation relies heavily on existing rules, such as the General Data Protection Regulation (GDPR), the Digital Services Act, and the Digital Markets Act. The AI Act was originally proposed by the European Commission in April 2021.
The bill's provisions also require that the large language models (LLMs) behind generative AI tech, such as GPT-4, be designed with adequate safeguards against generating content that violates EU laws; that could include child pornography or, in some EU countries, denial of the Holocaust, according to The Washington Post.
[...] But the solution to keeping AI honest isn't easy, according to Avivah Litan, a vice president and distinguished analyst at Gartner Research. It's likely that LLM creators, such as San Francisco-based OpenAI and others, will need to develop powerful LLMs to check whether the models trained initially contain any copyrighted materials. Rules-based systems to filter out copyrighted materials are likely to be ineffective, Litan said.
[...] Regulators should consider that LLMs are effectively operating as a black box, she said, and it's unlikely that the algorithms will provide organizations with the needed transparency to conduct the requisite privacy impact assessment. "This must be addressed," Litan said.
"It's interesting to note that at one point the AI Act was going to exclude oversight of generative AI models, but they were included later," Litan said. "Regulators generally want to move carefully and methodically so that they don't stifle innovation and so that they create long-lasting rules that help achieve the goals of protecting societies without being overly prescriptive in the means."
[...] "The US and the EU are aligned in concepts when it comes to wanting to achieve trustworthy, transparent, and fair AI, but their approaches have been very different," Litan said.
So far, the US has taken what Litan called a "very distributed approach to AI risk management," and it has yet to create new regulations or regulatory infrastructure. Instead, the US has focused on guidelines and an AI risk management framework.
[...] Key to the EU's AI Act is a classification system that determines the level of risk an AI technology could pose to the health and safety or fundamental rights of a person. The framework includes four risk tiers: unacceptable, high, limited, and minimal, according to the World Economic Forum.
[...] While AI has been around for decades, it has "reached new capacities fueled by computing power," Thierry Breton, the EU's Commissioner for Internal Market, said in a statement in 2021. The Artificial Intelligence Act, he said, was created to ensure that "AI in Europe respects our values and rules, and harness the potential of AI for industrial use."
Related:
Yet Again, the Copyright Industry Demands to be Shielded From Technological Progress
Inside the Secret List of Websites That Make AI Like ChatGPT Sound Smart
Bad News: Copyright Industry Attacks on the Internet's Plumbing are Increasing – and Succeeding
Stable Diffusion Copyright Lawsuits Could be a Legal Earthquake for AI
Paper: Stable Diffusion "Memorizes" Some Images, Sparking Privacy Concerns
(Score: 4, Interesting) by Barenflimski on Wednesday May 03, @01:42PM (1 child)
The first regulation is with regards to copyrights? Google has been using their algorithms to scrape the web for two decades. Now these bots can't do the same?
I think it's pretty clear. These folks couldn't care less about watching the world burn as long as they all make a crap ton of money. The only reason one would do this is to lock in the big players and slow down competition.
I have zero problems with these AI bots so far. All they do is regurgitate what they've been trained on. If one doesn't place these things on a pedestal, treating them like all knowing gods, I think we're all fine.
What worries me is this instant push by the talkers about how these things are sentient, smart and 'like humans but without the flaws.' It seems to me that all these folks would rather trust a bot than their fellow human. It's like they've drunk the same kool-aid they've been spewing themselves about how terrible everything and everyone else is. While the news is bad, the people I meet on a daily basis are kind, witty, fun, positive and don't short circuit when having a beer.
If these lawmakers gave half a shit, they'd create regulations around pairing these things with robots that actually DO something. Maybe they could even rein in the people that continually gaslight the world?
(Score: 3, Insightful) by DeathMonkey on Wednesday May 03, @06:55PM
Google already does publish the list of material used because they link you to the site it's on. And I would generally consider it fair use because it's a snippet used in furtherance of describing the content at the link.
As copyright law stands now these chat bots should probably be fully disallowed from using any copyrighted materials on the internet to train their model because that is then creating a derivative work.
So this sounds like a compulsory licensing scheme, like ASCAP, to allow some public usage of the data in exchange for a share of any proceeds.
Sounds like a pretty good idea to me, actually.