Scientists urge EU governments to reject Chat Control rules:
As the final vote draws closer, an open letter has highlighted significant risks that remain in the EU's controversial 'Chat Control' regulation.
617 of the world's top scientists, cryptographers and security researchers have released an open letter today (10 September) calling on governments to reject the EU's 'Chat Control' legislation in its upcoming final vote.
The group of scientists and researchers – hailing from 35 countries and including the likes of AI expert Dr Abeba Birhane – has warned that the EU's proposed legislation targeting online child sexual abuse material (CSAM), known colloquially as Chat Control, would undermine the region's digital security and privacy protections and, in the letter's words, "endangers the digital safety of our society in Europe and beyond".
The group also warned that the new rules would create "unprecedented capabilities" for surveillance, control and censorship, and carry an "inherent risk for function creep and abuse by less democratic regimes".
This is not the first time this collective has warned against the regulation, having previously published its recommendations in July 2023, May 2024 and September 2024.
The proposed legislation would require providers of messaging services such as WhatsApp, Signal and Instagram, as well as email providers, to scan their users' private digital communications and chats for CSAM. This scanning would even apply to end-to-end encrypted communications, regardless of a provider's own security protections.
Any content flagged as potential CSAM by the scanning algorithms would then be automatically reported to authorities.
Currently, 15 EU member states – including Ireland – have expressed support for the legislation. Six member states oppose the rules, while six remain undecided.
While the latest draft of the legislation has been amended to exclude the detection of audio and text communications – limiting detection to "visual content", such as images and URLs – the scientists argue that the legislation in its current form is still unacceptable.
The group argues that none of the legislation's changes address its major concerns, namely the infeasibility of scanning hundreds of millions of users for CSAM with appropriate accuracy, the undermining of end-to-end encryption protections and the heightened privacy risks to EU citizens.
While the latest draft of the regulation has reduced the scope of targeted material (limited to visual content and URLs), the group of scientists states that this reduction will not improve effectiveness.
"There is no scientific basis to argue that detection technology would work any better on images than on text," reads the letter, with further assertions that CSAM detection methods can be easily evaded. The group states that just changing a few bits in an image is "sufficient to ensure that an image will not trigger state-of-the-art detectors".
The group also criticises the EU's proposal of using AI and machine learning to detect CSAM imagery due to the technology's unreliability.
"We reiterate to the best of our knowledge there is no machine-learning algorithm that can perform such detection without committing a large number of errors, and that all known algorithms are fundamentally susceptible to evasion."
When it comes to URLs, the group says that evading detectors is even easier, due to the ease with which users can redirect to other URLs.
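As a hypothetical illustration (the domains below are placeholders, not real services), a blocklist can only match the link text it actually sees, so wrapping a flagged address in a freshly created redirect produces a URL that matches nothing in the database while still leading to the same destination:

```python
# Illustrative only; domains are placeholders, not real services.
blocklist = {"https://blocked.example/abuse-page"}

shared_url = "https://redirect.example/a1b2c3"   # freshly minted redirect
destination = "https://blocked.example/abuse-page"

print(shared_url in blocklist)    # False: the detector only sees the redirect
print(destination in blocklist)   # True, but the target URL was never transmitted
```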
In terms of end-to-end encryption, the group says that the legislation violates the core principles of the practice – ensuring that only the intended two endpoints can access the data, and avoiding a single point of failure.
"Enforcing a detection mechanism to scan private data before it gets encrypted – with the possibility to transmit it to law enforcement upon inspection – inherently violates both principles," says the group.
The researchers also call into question the proposal of service providers using age verification and age assessment measures, pointing to the recent backlash against similar requirements in the UK's Online Safety Act. The group states that these age verification rules could become a reason to ban the use of virtual private networks (VPNs), thus threatening freedom of speech and freedom of information and undermining "the tools needed by whistleblowers, journalists and human rights activists".
Lastly, the researchers find that the current "techno-solutionist proposal" has little potential to achieve its stated ambition – the eradication of abuse perpetrated against children.
The group calls on administrations to focus instead on measures recommended by the UN, such as education, trauma-sensitive reporting hotlines and keyword-search based interventions.
"By eliminating abuse, these measures will also eradicate abusive material without introducing any risk to secure digital interactions which are essential for the safety of the children the proposed regulation aims to protect."
The Chat Control legislation has been under fire for a number of years now from digital rights groups and advocates including the Pirate Party's Patrick Breyer.
Last year, a planned vote on the legislation was withdrawn by the EU Council in a move believed to have been influenced by prominent pushback against the regulation.
Irish cybersecurity expert Brian Honan told SiliconRepublic.com that Chat Control could potentially "put everyone under mass surveillance by scanning all messages on our personal devices before they are sent, even encrypted ones, undermining the security of the messaging platforms and imposing on our rights to privacy".
"The proposals of client-side scanning also introduces a significant risk of that software being targeted by criminals, hostile nation states, and being abused by authoritarian governments."
He added that while Chat Control's goal of stopping the spread of CSAM is worthy and should be supported, "the proposed EU Chat Control is not the appropriate mechanism to do so".
"Real progress against dealing with CSAM will come from investing in more resources for police forces to investigate and prosecute those behind this material, stronger sanctions against platforms and countries that allow this material, and increased support for hotlines like the Irish Internet Hotline."
See also: Encrypted email provider Tuta warns EU privacy is at risk with Chat Control law:
Related Stories
Once Again, Chat Control Flails After Strong Public Pressure:
The European Union Council pushed for a dangerous plan to scan encrypted messages, and once again, people around the world loudly called out the risks, leading the current Danish presidency to withdraw the plan.
EFF has strongly opposed Chat Control since it was first introduced in 2022. The zombie proposal comes back time and time again, and time and time again, it's been shot down because there's no public support. The fight is delayed, but not over.
It's time for lawmakers to stop attempting to compromise encryption under the guise of public safety. Instead of making minor tweaks and resubmitting this proposal over and over, the EU Council should accept that any sort of client-side scanning of devices undermines encryption, and move on to developing real solutions that don't violate the human rights of people around the world.
As long as lawmakers continue to misunderstand the way encryption technology works, there is no way forward with message-scanning proposals, not in the EU or anywhere else. This sort of surveillance is not just an overreach; it's an attack on fundamental human rights.
The coming EU presidencies should abandon these attempts and work on finding a solution that protects people's privacy and security.
Previously:
• Scientists Urge EU Governments to Reject Chat Control Rules
• EU Chat Control Law Proposes Scanning Your Messages — Even Encrypted Ones
• EU Parliament's Research Service Confirms: Chat Control Violates Fundamental Rights
• Client Side Scanning May Cost More Than it Delivers
(Score: 4, Informative) by Anonymous Coward on Saturday September 13, @08:17AM
More information about how to fight Chat Control and to protect digital privacy in the EU:
https://fightchatcontrol.eu/ [fightchatcontrol.eu]
(Score: 5, Interesting) by turgid on Saturday September 13, @09:25AM (2 children)
Pervasive, intrusive surveillance is a terrible idea. It's abhorrent. The powers that be are too ignorant and lazy to actually deal with crime and criminals properly so they're saying, "We'll treat you all as potential criminals just waiting to be caught." Good grief.
There are already idiot politicians who don't understand that data encryption is necessary for privacy and safety. They don't even understand, or at least don't appear to, that commerce would be impossible over communications media without it.
Yes but if you've got nothing to hide you've got nothing to fear. True, until the Bad Guys(TM) hack the surveillance system, the back doors. Then all havoc will be let loose.
And what about false positives? Anyone who has dealt with the various legal systems will tell you that it's a life-changing experience (financially and in terms of career, reputation, criminal record...)
The other elephant in the room is hostile regimes. Just suppose (and I'll phrase this in language friendly to the Alt-Wrong stupid signallers) some far-left communist dictator got in power. That dictator would now know absolutely everything about you. You would not be able to secretly plot her overthrow with your patriotic friends.
It really is that simple.
I refuse to engage in a battle of wits with an unarmed opponent [wikipedia.org].
(Score: 3, Insightful) by khallow on Saturday September 13, @05:02PM
(Score: 2, Insightful) by whibla on Tuesday September 16, @11:13AM
What about planted real positives? Now everyone has something to fear.
To clarify, if a malicious actor sends a 'qualifying' image to you via WhatsApp (etc.) the software reports you to the authorities. Sure, it also reports the sender, but SIM cards are a disposable asset in situations like this.
Undoubtedly you're going to claim you have no knowledge of the sender, and either refused the message or immediately deleted it, but I'm not sure that will help salve the "we're arresting you on suspicion of child sex offences" fallout.
Of course, this does rather suggest a simple solution to the problem of dozy lawmakers: start sending those responsible for scrutinising this legislation some less than salubrious images. Either they recognise the shortcomings of the proposals, or they get arrested and replaced.
[Just to be clear, and to cma - and what a sad world we live in where I worry this might be necessary - I am not really suggesting that anyone actually do this last. Think of it more as a humorous hypothetical, not an incitement to commit a crime.]
(Score: 2) by Frosty Piss on Saturday September 13, @07:42PM (1 child)
"Scientists" want this, "Scientists" want that. Which ones? ALL OF THEM? There is a union that has agreed on these demands?
(Score: 2) by FunkyLich on Saturday September 13, @08:34PM
Using the word "Stience" gives everything a higher level of credibility. I for one am glad humanity is itching to modernize away from the other one, the legacy "by God's decree".
(Score: 3, Funny) by anotherblackhat on Sunday September 14, @12:10AM
Detecting CSAM requires tech we don't currently understand.
We don't understand how AI works.
Therefore, AI must be able to detect CSAM.