posted by janrinok on Wednesday February 04, @11:31PM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

[...] When the UK government launched a public consultation on AI and copyright in early 2025, it likely didn't expect to receive a near-unanimous dressing-down. But of the roughly 10,000 responses submitted through its official “Citizen Space” platform, just 3% supported the government's preferred policy for regulating how AI uses copyrighted material for training. A massive 88% backed a stricter approach focused on rights-holders.

The survey asked for opinions on four possible routes the UK might take on the question of what rules should apply when AI developers train their models on books, songs, art, and other copyrighted works. The government’s favored route, labeled Option 3, offered a compromise: AI developers would have a default right to use copyrighted material as long as they disclosed what they used and offered a way for rights-holders to opt out. But most who responded disagreed.

Option 3 received the least support. Even the “do nothing” option of just leaving the law vague and inconsistent polled better. More people would prefer no reform at all than accept the government's suggestion. That level of disapproval is hard to spin.

It's a triumph for the campaign by writers’ unions, music industry groups, visual artists, and game developers seeking exactly this result. They spent months warning about a future where creative work becomes free fuel for unlicensed AI engines.

The artists argued that the fight was over consent as much as royalties. They argued that having creative work swept up into a training dataset without permission means the damage is done, even if you can opt out months later. And they pointed out that the UK’s copyright laws weren’t built for AI. Copyright in the UK is automatic, not registered, which is great for flexibility, but tough for any enforcement, as there's no central database of copyright ownership.

Officials crafted Option 3 to try to appease all sides. The government's stated aim was to stimulate AI innovation while still respecting creators. A transparent opt-out mechanism would let developers build useful models while giving artists a way to refuse. But it ultimately felt to many creators like all the burden fell on them, and they would have to constantly monitor how their work is used, sometimes across borders, languages, and platforms they’ve never heard of.

That's likely why 88% of respondents chose requiring licenses for everything as their preferred option. If an AI developer wanted to train on your book, your voice, your illustration, or your photography, it would have to ask, and potentially pay, first.

A final report and economic impact assessment from the government is due in March. It will evaluate the legal, commercial, and cultural implications of each option. Officials say they will consider input from creators, tech firms, small businesses, and other stakeholders. Clearly the government's hope to smoothly start implementing its preferred approach won't happen.

For now, the confusing status quo remains. Without a court ruling or legislative fix, uncertainty reigns. AI developers don’t know what’s allowed. Creators don’t know what’s protected. Everyone's waiting for clarity that keeps getting delayed.

What happens next could shape the UK's digital economy for years. If officials side with the 3% who backed their initial plan, they risk alienating the very creators whose work is so valuable. But stronger licensing rules would undoubtedly face resistance from AI startups and international tech firms. Either way, the fighting is far from over.


Original Submission

 
This discussion was created by janrinok (52) for logged-in users only, but now has been archived. No new comments can be posted.
  • (Score: 5, Insightful) by Thexalon on Thursday February 05, @03:06AM (2 children)

    by Thexalon (636) on Thursday February 05, @03:06AM (#1432597)

    AI developers don’t know what’s allowed.

    Hogwash. AI developers like the status quo because in practice they're allowed to do whatever they want. And they probably "convinced" the government to back their preferred policy because they know enforcing it is basically impossible - good luck proving in court that you opted out properly and that an AI bot definitely used your work.

    --
    "Think of how stupid the average person is. Then realize half of 'em are stupider than that." - George Carlin
  • (Score: 4, Interesting) by Anonymous Coward on Thursday February 05, @04:48AM

    by Anonymous Coward on Thursday February 05, @04:48AM (#1432606)

    In the recent Anthropic class action settlement (over $1B), the lawyers were able to obtain lists of the books used for training. Having a registered copyright (in the USA?) appears to have been sufficient to prove the desire to opt out. Average class action payouts are anticipated at about USD $3000 per infringed work (book).

    I saw recently that another AI infringer is facing a similar (perhaps larger) class action suit; sorry, I forgot the details, but it sounded similar to the Anthropic case.

  • (Score: 1) by khallow on Thursday February 05, @05:40PM

    by khallow (3766) Subscriber Badge on Thursday February 05, @05:40PM (#1432686) Journal
    It also involves an enormous hassle (which I grant affects the bottom line too). Your AI product licenses my copyrighted works and trains on them. So far so good. But then I have a falling out with my fans ("I'm going to take my ball and go home! Boohoo!") and, as part of the fallout from the falling out, I revoke your license to my copyrighted works. How do you untrain an AI product? How do you show in court that you untrained it?