Scientists urge EU governments to reject Chat Control rules [siliconrepublic.com]:
As the final vote draws closer, an open letter has highlighted significant risks that remain in the EU’s controversial ‘Chat Control’ regulation.
617 of the world’s top scientists, cryptographers and security researchers have released an open letter today (10 September) calling on governments to reject the upcoming final vote on the EU’s ‘Chat Control’ legislation.
The group of scientists and researchers – hailing from 35 countries and including the likes of AI expert Dr Abeba Birhane – has warned that the EU’s proposed legislation targeting online child sexual abuse material (CSAM), known colloquially as Chat Control, would undermine the region’s digital security and privacy protections and “endangers the digital safety of our society in Europe and beyond”.
The group also warned that the new rules will create “unprecedented capabilities” for surveillance, control and censorship, and carry an “inherent risk for function creep and abuse by less democratic regimes”.
This is not the first time this collective has warned against the regulation, having previously published its recommendations in July 2023, May 2024 and September 2024.
The proposed legislation would require providers of messaging and email services such as WhatsApp, Signal and Instagram to scan their users’ private digital communications and chats for CSAM. This scanning would apply even to end-to-end encrypted communications, regardless of a provider’s own security protections.
Any content flagged as potential CSAM by the scanning algorithms would then be automatically reported to authorities.
Currently, 15 EU member states have expressed support for the legislation, including Ireland. Six member states oppose the rules, while six remain undecided.
While the latest draft of the legislation has been amended to exclude the detection of audio and text communications – limiting detection to “visual content”, such as images and URLs – the scientists argue that the legislation in its current form is still unacceptable.
The group argues that none of the legislation’s changes address its major concerns, namely the infeasibility of accurately scanning the communications of hundreds of millions of users for CSAM, the undermining of end-to-end encryption protections and the heightened privacy risks to EU citizens.
The major concerns
While the latest draft of the regulation has reduced the scope of targeted material (limited to visual content and URLs), the group of scientists states that this reduction will not improve effectiveness.
“There is no scientific basis to argue that detection technology would work any better on images than on text,” reads the letter, with further assertions that CSAM detection methods can be easily evaded. The group states that just changing a few bits in an image is “sufficient to ensure that an image will not trigger state-of-the-art detectors”.
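To make the evasion point concrete, here is a minimal, hypothetical sketch of the simplest form of known-image detection, exact fingerprint matching; the blocklist and image bytes are illustrative and not drawn from the letter or any real system. Flipping a single bit yields a completely different fingerprint, so a near-identical copy no longer matches. Deployed detectors typically use perceptual hashes instead, which tolerate small changes but are likewise reported to be evadable with modest perturbations.

```python
import hashlib
import os

# Toy blocklist of known-image fingerprints (illustrative only).
blocklist = set()

original = os.urandom(1024)  # stand-in for the bytes of a known flagged image
blocklist.add(hashlib.sha256(original).hexdigest())

def is_flagged(image_bytes: bytes) -> bool:
    """Return True if the image's fingerprint matches the blocklist."""
    return hashlib.sha256(image_bytes).hexdigest() in blocklist

# Flip a single bit: the fingerprint changes entirely, so the
# near-identical copy sails past an exact-match detector.
modified = bytearray(original)
modified[0] ^= 0x01

print(is_flagged(original))         # True
print(is_flagged(bytes(modified)))  # False
```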
The group also criticises the EU’s proposal to use AI and machine learning to detect CSAM imagery, citing the technology’s unreliability.
“We reiterate to the best of our knowledge there is no machine-learning algorithm that can perform such detection without committing a large number of errors, and that all known algorithms are fundamentally susceptible to evasion.”
When it comes to URLs, the group says that evading detectors is even easier, given how easily users can redirect to other URLs.
In terms of end-to-end encryption, the group says that the legislation violates the core principles of the practice: ensuring that only the two intended endpoints can access the data, and avoiding a single point of failure.
“Enforcing a detection mechanism to scan private data before it gets encrypted – with the possibility to transmit it to law enforcement upon inspection – inherently violates both principles,” says the group.
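For illustration only, the pattern the researchers object to can be sketched as below; the function names and the toy XOR “cipher” are hypothetical placeholders, not any real messaging protocol. The point is that the scan, and any resulting report, operate on the plaintext before encryption, so the guarantee that only the two endpoints can read the message no longer holds, and the scanner itself becomes a single point of failure.

```python
# Hypothetical sketch of client-side scanning; names and the toy XOR
# "cipher" are placeholders, not any real protocol.

def scan_for_flagged_content(plaintext: bytes) -> bool:
    # Placeholder detector; a real deployment would use hash matching or
    # a machine-learning classifier, both criticised in the open letter.
    return b"flagged" in plaintext

def report_to_authority(plaintext: bytes) -> None:
    # A third party receives content that end-to-end encryption was meant
    # to keep between the two endpoints.
    print("plaintext forwarded outside the encrypted channel")

def send_message(plaintext: bytes, key: bytes) -> bytes:
    if scan_for_flagged_content(plaintext):   # runs on unencrypted data
        report_to_authority(plaintext)        # bypasses the encryption
    # Toy XOR "encryption", purely for illustration.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(plaintext))

ciphertext = send_message(b"this message happens to be flagged", b"secret key")
```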
The researchers also call into question the proposal of service providers using age verification and age assessment measures, pointing to recent backlash against the UK’s Online Safety Act [sky.com] in relation to similar requirements. The group states that these age verification rules could become a reason to ban the use of virtual private networks (VPNs), thus threatening freedom of speech, freedom of information and undermining “the tools needed by whistleblowers, journalists and human rights activists”.
Lastly, the researchers find that the current “techno-solutionist proposal” has little potential to achieve its stated ambition – the eradication of abuse perpetrated against children.
The group calls on administrations to focus instead on measures recommended by the UN, such as education, trauma-sensitive reporting hotlines and keyword-search-based interventions.
“By eliminating abuse, these measures will also eradicate abusive material without introducing any risk to secure digital interactions which are essential for the safety of the children the proposed regulation aims to protect.”
The Chat Control legislation has been under fire for a number of years now from digital rights groups and advocates including the Pirate Party’s Patrick Breyer.
Last year, voting on the legislation was temporarily withdrawn by the EU Council in a move that was believed to have been influenced by prominent pushback against the regulation.
Irish cybersecurity expert Brian Honan told SiliconRepublic.com that Chat Control could potentially “put everyone under mass surveillance by scanning all messages on our personal devices before they are sent, even encrypted ones, undermining the security of the messaging platforms and imposing on our rights to privacy”.
“The proposals of client-side scanning also introduce a significant risk of that software being targeted by criminals and hostile nation states, and being abused by authoritarian governments.”
He added that while Chat Control’s goal of stopping the spread of CSAM is worthy and should be supported, “the proposed EU Chat Control is not the appropriate mechanism to do so”.
“Real progress against dealing with CSAM will come from investing in more resources for police forces to investigate and prosecute those behind this material, stronger sanctions against platforms and countries that allow this material, and increased support for hotlines like the Irish Internet Hotline.”