The latest idea in the long gestation of the online harms legislation:
The UK government is putting forward changes to the law which would require social media platforms to give users the option to avoid seeing and engaging with harmful — but legal — content.
Presenting the amended Online Safety Bill to Parliament this week, Michelle Donelan, the minister for digital, culture, media and sport, pledged to create a "third shield" to protect users from harmful content. She promised that the mechanism, to be built by platform providers if the bill makes it into law, "transfers power away from Silicon Valley algorithms to ordinary people."
"Our new triple-shield mechanism puts accountability, transparency and choice at the heart of the way we interact with each other online. If [the content] is illegal, it has to go. If it violates a company's terms and conditions, it has to go. Under the third and final layer of the triple-shield, platforms must offer users tools to allow them to choose what kind of content they want to see and engage with," Donelan told Parliament.
[...] Rather than strict ID-based age verification, platform providers would be forced to publish data revealing the risk of children viewing such content on their systems.
[...] However, Lucy Powell, shadow minister for digital, culture, media and sport, said: "Simply holding platforms to account for their own terms and conditions – the Secretary of State referred to that earlier – which, as we saw just this week at Twitter, can be rewritten or changed at whim, will not constitute robust enough regulation to deal with the threat that these platforms present.
"To protect children, the government are relying on age verification, but as those with teenage children are well aware – including many of us in the House – most of them pass themselves off as older than they are, and verification is easy to get around. The proposed three shields for adults are just not workable and do not hold up to scrutiny. Let us be clear that the raft of new amendments that have been tabled by the government this week are nothing more than a major weakening and narrowing of this long-awaited legislation," Powell said.
(Score: 2) by Mojibake Tengu on Friday December 16, @12:32AM (2 children)
If information can do harm, what will you do about harmful thinking?
The edge of 太玄 cannot be defined, for it is beyond every aspect of design
(Score: 2) by choose another one on Friday December 16, @11:45AM
Without information, thinking will only be abstract.
Triple shield = off switch + contemplate black screen*
*Optional pale pastel coloured screens available for those who find black offensive.
Start worrying when they take away the off switch...
(Score: 2) by Joe Desertrat on Saturday December 17, @01:57AM
When you connect your newly required brain receptacle to the internet, harmful thinking will be noticed and stamped out.
On a more serious note, who designates how content is defined? Do the platforms preview and define content before allowing it to be posted? Do posters check off what category their content fits? Do viewers get to define what content fits in which category? How do they determine that? If viewers mark it as such and such content after first viewing, will that rating apply only to them or are we all to be subject to the whims of the most conservative of viewers? Will there be any way to appeal any of this? And so on.
I suspect any such system will be easily gamed by the "bad actors", while ordinary users will have to jump through hoops to view much of the same benign content they easily view today.
(Score: 2) by Gaaark on Friday December 16, @03:20AM (2 children)
You can say you don't want to see any ads?
No tracking, profiling, nothing?
A'ight!
--- Please remind me if I haven't been civil to you: I'm channeling MDC. ---Gaaark 2.0 ---
(Score: 0) by Anonymous Coward on Friday December 16, @12:03PM (1 child)
Funny how such schemes disappear when you think them through logically like that.
Just like the doomed-to-fail woke experiment of withholding job applicants' names and other personal information until the interview stage: after a significant drop in female and "minority" hires, the policy was reversed.
Go ahead and try it though as it could be fun to watch.
(Score: 2) by Gaaark on Friday December 16, @12:07PM
Do you mean Faceplant, Twatter, etc. are going to disappear? A'ight!!!
Disclaimer:
I DON'T CARE! I DON'T USE SHITE 'SERVICES'!
--- Please remind me if I haven't been civil to you: I'm channeling MDC. ---Gaaark 2.0 ---
(Score: 2) by PinkyGigglebrain on Friday December 16, @03:35AM
Yet another thinly veiled "Think of the Children" law to "protect" everyone, not just children, from "harmful" (as defined by whom?) content; it is little more than an attempt to manipulate and control what the Public sees, hears, and knows.
The disgusting part is that it will probably get discussed in Parliament and actually stand a chance of passing.
"Beware those who would deny you Knowledge, For in their hearts they dream themselves your Master."
(Score: 2) by Nuke on Friday December 16, @11:54AM (1 child)
How can you hide something with a transparent shield? Anyway, I want an opaque shield for ads.
(Score: 3, Informative) by maxwell demon on Saturday December 17, @01:24PM
Well, ads are clearly harmful media, especially harmful for children, so they should be blockable, right?
The Tao of math: The numbers you can count are not the real numbers.