ETH Zurich boffins exploit branch prediction race condition to steal info from memory, fixes have mild perf hit
by Thomas Claburn // Tue 13 May 2025
Researchers at ETH Zurich in Switzerland have found a way around Intel's defenses against Spectre, a family of data-leaking flaws in the x86 giant's processor designs that simply won't die.
Sandro Rüegge, Johannes Wikner, and Kaveh Razavi have identified a class of security vulnerabilities they're calling Branch Predictor Race Conditions (BPRC), which they describe in a paper [PDF] scheduled to be presented at USENIX Security 2025 and Black Hat USA 2025 later this year.
Spectre refers to a set of hardware-level processor vulnerabilities, identified in 2018, that can be used to break the security isolation between software. The flaws exploit speculative execution - a performance optimization in which the CPU anticipates future code paths (branch prediction) and executes down those paths before they're actually needed.
In practice, this all means malware running on a machine, or a rogue logged-in user, can potentially abuse Spectre flaws within vulnerable Intel processors to snoop on and steal data – such as passwords, keys, and other secrets – from other running programs or even from the kernel, the heart of the operating system itself, or from adjacent virtual machines on a host, depending on the circumstances. In terms of real-world risk, we haven't seen the Spectre family exploited publicly in a significant way, yet.
There are several Spectre variants. One of these, Spectre v2, enables an attacker to manipulate indirect branch predictions across different privilege modes to read arbitrary memory; it effectively allows a malicious program to extract secrets from the kernel and other running applications.
Intel has added various hardware-based defenses against these sorts of attacks over the years, which include Indirect Branch Restricted Speculation (IBRS/eIBRS) for restricting indirect branch target prediction, a sanitizing technique called Indirect Branch Predictor Barrier (IBPB), and other microarchitectural speculation controls.
eIBRS, the researchers explain, is designed to restrict indirect branch predictions to their originating privilege domain, preventing them from leaking across boundaries. Additional protection provided by IBPB is recommended in scenarios where different execution contexts, like untrusted virtual machines (VMs), share the same privilege level and hardware domain.
But Rüegge, Wikner, and Razavi found that branch predictors on Intel processors are updated asynchronously inside the processor pipeline, meaning there are potential race conditions – situations when two or more processes or threads attempt to access and update the same information concurrently, resulting in unpredictable behavior.
[...] Razavi said there are several possible attack scenarios.
"You could start a VM in your favorite cloud and this VM could then leak information from the hypervisor, including information that belongs to other VMs owned by other customers," he explained.
"While such attacks are in theory possible, and we have shown that BPI enables such attacks, our particular exploit leaks information in the user-to-kernel scenario. In such a scenario, the attacker runs inside an unprivileged user process (instead of a VM), and leaks information from the OS (instead of the hypervisor)."
Essentially, BPI allows the attacker to inject branch predictions tagged with elevated privileges from user mode, bypassing the security guarantees of eIBRS and IBPB. Thereafter, a Spectre v2 attack (sometimes called Branch Target Injection, or BTI) can be carried out to gain access to sensitive data in memory.
[...] Razavi said Spectre-related flaws are likely to continue to haunt us for a while – which El Reg did warn about back in 2018.
"Speculative execution is quite fundamental in how we build high-performance CPUs, so as long as we build CPUs this way, there is always a chance for such vulnerabilities to happen," he said. "That said, CPU vendors are now more aware of these issues and hopefully also more careful when introducing new designs and new features.
"Furthermore, while much more work still needs to be done, there is some progress in building the necessary tooling for detecting such issues. Once we have better tooling, it becomes easier to find and fix these issues pre-silicon. To summarize, things should hopefully slowly get better, but we are not there yet."
Of note: Security experts have discovered new Intel Spectre vulnerabilities
New Intel CPU flaws leak sensitive data from privileged memory:
A new "Branch Privilege Injection" flaw in all modern Intel CPUs allows attackers to leak sensitive data from memory regions allocated to privileged software like the operating system kernel.
Typically, these regions are populated with information like passwords, cryptographic keys, memory of other processes, and kernel data structures, so protecting them from leakage is crucial.
According to ETH Zurich researchers Sandro Rüegge, Johannes Wikner, and Kaveh Razavi, Spectre v2 mitigations held for six years, but their latest "Branch Predictor Race Conditions" exploit effectively bypasses them.
The flaw, named 'branch privilege injection' and tracked as CVE-2024-45332, is a race condition in the branch predictor subsystem of Intel CPUs.
Branch predictors like Branch Target Buffer (BTB) and Indirect Branch Predictor (IBP) are specialized hardware components that try to guess the outcome of a branch instruction before it's resolved to keep the CPU pipeline full for optimal performance.
These predictions are speculative: if they turn out to be wrong, their effects are undone; if they are correct, the pipeline stays full and performance improves.
The researchers found that Intel's branch predictor updates are not synchronized with instruction execution, resulting in these updates traversing privilege boundaries.
If a privilege switch happens, like from user mode to kernel mode, there is a small window of opportunity during which the update is associated with the wrong privilege level.
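The race described above can be illustrated with a deliberately simplified toy model (this is a sketch of the concept, not of real Intel hardware or the researchers' actual exploit): a predictor update is queued in user mode, but its privilege tag is read only when the update commits, so an update that commits after a mode switch is mistagged.

```python
# Toy model of the BPRC race: branch-predictor updates commit asynchronously,
# so an update issued in user mode can land after a switch to kernel mode and
# be tagged with kernel privilege, defeating an eIBRS-style tag check.

USER, KERNEL = 0, 1

class ToyBranchPredictor:
    def __init__(self):
        self.entries = {}      # branch address -> (predicted target, privilege tag)
        self.pending = []      # updates queued but not yet committed
        self.privilege = USER  # current mode, read at commit time

    def train(self, branch, target):
        # The privilege tag is NOT captured here, when the branch executes,
        # but later at commit time -- this gap is the race window.
        self.pending.append((branch, target))

    def switch_privilege(self, level):
        self.privilege = level

    def commit_pending(self):
        for branch, target in self.pending:
            self.entries[branch] = (target, self.privilege)
        self.pending.clear()

    def predict(self, branch, privilege):
        entry = self.entries.get(branch)
        if entry and entry[1] == privilege:  # eIBRS-style same-domain check
            return entry[0]
        return None

bp = ToyBranchPredictor()
bp.train(0x401000, 0xDEADBEEF)  # attacker trains a branch in user mode
bp.switch_privilege(KERNEL)     # privilege switch lands before the commit...
bp.commit_pending()             # ...so the entry is mistagged as KERNEL

# The attacker-chosen target is now served to kernel-mode execution.
assert bp.predict(0x401000, KERNEL) == 0xDEADBEEF
```

In this model, the tag check itself works as designed; the leak comes entirely from the update committing under the wrong privilege level, which mirrors the asynchronous-update behavior the researchers describe.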
[...] CVE-2024-45332 impacts all Intel CPUs from the ninth generation onward, including Coffee Lake, Comet Lake, Rocket Lake, Alder Lake, and Raptor Lake.
"All Intel processors since the 9th generation (Coffee Lake Refresh) are affected by Branch Privilege Injection," the researchers explain.
"However, we have observed predictions bypassing the Indirect Branch Prediction Barrier (IBPB) on processors as far back as 7th generation (Kaby Lake)."
ETH Zurich researchers did not test older generations at this time, but since they do not support Enhanced Indirect Branch Restricted Speculation (eIBRS), they're less relevant to this specific exploit and likely more prone to older Spectre v2-like attacks.
Arm Cortex-X1, Cortex-A76, and AMD Zen 5 and Zen 4 chips were also examined, but they do not exhibit the same asynchronous predictor behavior, so they are not vulnerable to CVE-2024-45332.
[...] The risk to regular users is low, as a realistic attack has multiple demanding prerequisites. That said, applying the latest BIOS/UEFI and OS updates is recommended.
ETH Zurich will present the full details of their exploit in a technical paper at the upcoming USENIX Security 2025.
Arthur T Knackerbracket has processed the following story:
This week, meet a reader we'll Regomize as "Colin" who told us about his time working as a front-end developer for an education company that decided the time was right to expand from the UK to the US.
"Suddenly we needed to localize thousands of online articles, lessons, and other documents into American English."
Inconveniently, all that content was static HTML. "There was no CMS, no database, nothing I could harness on the server side," Colin lamented to Who, Me?
After due consideration, Colin and his team decided to use regular expressions to do the job.
"Our system combined tackling spelling swaps like changing 'ae' to 'e' in words like 'archaeology' and word/phrase swaps so that British terms like 'post' were changed to the American 'mail.'" Colin knew this could go pear-shaped if the system changed a term like "post-modern" to "mail-modern," so compound words were exempt.
As Colin and his workmates considered all the necessary changes, they realized they needed a lot of rules.
"The fact it was running the replacements directly on the body HTML, and causing lots of page repaints, meant we had to build a REST API to cache which rules ran and didn't run for each page, so as to not cause slowdown by running unnecessary rules," he explained.
Which worked well until it didn't.
"One day we got a call asking why a lesson about famous artists referred to the great painter 'Vincent Truck Gogh.'"
Readers are doubtless familiar with Vincent Van Gogh, and the different names for midsize vehicles on each side of the North Atlantic.
That was just the start. Next came complaints about a religious studies lesson that explained how Adam and Eve lived in the "Yard of Eden" – not the garden. Another religion class mentioned sinister-sounding "Easter hoods" instead of the daintier "Easter bonnets."
Colin figured out that the word swaps he coded failed to consider cases where it should just skip a word altogether. A van, after all, is a truck if you're American.
"In the end, we managed to get the system to be context-aware, so that certain swaps could be suppressed if the article contained a certain trigger word which suggested it shouldn't run, and the problems went away. But it was a very entertaining bug to be involved with!"
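A minimal sketch of the kind of rule system Colin describes - word swaps with word boundaries, a compound-word exemption, and per-rule trigger words that suppress a swap. The word lists and trigger words here are illustrative stand-ins, not his actual ruleset:

```python
import re

# Illustrative swap rules; the real system had far more of each.
WORD_SWAPS = {"post": "mail", "van": "truck", "bonnet": "hood"}
SPELLING_SWAPS = [(re.compile(r"ae"), "e")]  # e.g. archaeology -> archeology

# Per-rule trigger words that suppress a swap when present anywhere in the
# text -- the context-awareness added after "Vincent Truck Gogh".
SUPPRESS_IF = {"van": {"gogh"}}

def localize(text):
    lower = text.lower()
    for british, american in WORD_SWAPS.items():
        if any(trigger in lower for trigger in SUPPRESS_IF.get(british, ())):
            continue  # a trigger word says this swap shouldn't run here
        # \b keeps whole words; (?!-) exempts compounds like "post-modern".
        pattern = re.compile(rf"\b{british}\b(?!-)", re.IGNORECASE)
        text = pattern.sub(american, text)
    for pattern, replacement in SPELLING_SWAPS:
        text = pattern.sub(replacement, text)
    return text

print(localize("The post arrived."))     # The mail arrived.
print(localize("A post-modern essay."))  # unchanged: compound exempted
print(localize("Vincent van Gogh"))      # unchanged: "gogh" suppresses the swap
```

Even this tiny version shows why the rule count exploded: every new swap potentially needs its own exemptions and triggers, which is exactly the bookkeeping that drove Colin's team to cache which rules ran per page.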
Processed by jelizondo
Arthur T Knackerbracket has processed the following story:
The HoloBoard augmented-reality system lets people type independently.
Jeremy is a 31-year-old autistic man who loves music and biking. He's highly sensitive to lights, sounds, and textures, has difficulty initiating movement, and can say only a few words. Throughout his schooling, it was assumed he was incapable of learning to read and write. But for the past 30 minutes, he's been wearing an augmented-reality (AR) headset and spelling single words on the HoloBoard, a virtual keyboard that hovers in the air in front of him. And now, at the end of a study session, a researcher asks Jeremy (not his real name) what he thought of the experience.
Deliberately, poking one virtual letter at a time, he types, "That was good."
It was not obvious that Jeremy would be able to wear an AR headset, let alone use it to communicate. The headset we use, Microsoft's HoloLens 2, weighs 566 grams (more than a pound), and the straps that encircle the head can be uncomfortable. Interacting with virtual objects requires precise hand and finger movements. What's more, some people doubt that people like Jeremy can even understand a question or produce a response. And yet, in study after study, we have found that most nonspeaking autistic teenage and adult participants can wear the HoloLens 2, and most can type short words on the HoloBoard.
The HoloBoard prototype that Jeremy first used in 2023 was three years in the making. It had its origins in an interdisciplinary feasibility study that considered whether individuals like Jeremy could tolerate a commercial AR headset. That study was led by the three of us: a developmental psychologist (Vikram Jaswal at the University of Virginia), an electrical and software engineer (Diwakar Krishnamurthy at the University of Calgary), and a computer scientist (Mea Wang, also at the University of Calgary).
Our journey to this point was not smooth. Some autism researchers told us that nonspeaking autistic people "do not have language" and so couldn't possibly communicate by typing. They also said that nonspeaking autistic people are so sensitive to sensory experiences that they would be overwhelmed by augmented reality. But our data, from more than a half-dozen peer-reviewed studies, have shown both assumptions to be wrong. And those results have informed the tools we're creating, like the HoloBoard, to enable nonspeaking autistic people to communicate more effectively.
Nonspeaking autistic people may also appear inattentive, engage in impulsive behavior, and score poorly on standard intelligence tests (many of which require spoken responses within a set amount of time). Historically, these challenges have led to unfounded assumptions about these individuals' ability to understand language and their capacity for symbolic thought. To put it bluntly, it has sometimes been assumed that someone who can't talk is also incapable of thinking.
Most attempts to provide nonspeaking autistic people with an alternative to speech have been rudimentary. Picture-based communication systems, often implemented on an iPad or tablet, are frequently used in schools and therapy clinics. If a user wants a cookie, they can tap a picture of a cookie. But the vocabulary of these systems is limited to the concepts that can be represented by a simple picture.
There are other options. Some nonspeaking autistic people have learned, over the course of many years and guided by parents and professionals, to communicate by spelling words and sentences on a letterboard that's held by a trained human assistant - a communication and regulation partner, or CRP. Part of the CRP's role is to provide attentional and emotional support, which can help with conditions that commonly accompany severe autism and that interfere with communication, including anxiety, attention-deficit hyperactivity disorder, and obsessive-compulsive disorder. Having access to such assisted methods of communication has allowed nonspeaking autistic people to graduate from college, write poetry, and publish a best-selling memoir.
But the role of the CRP has generated considerable controversy. Critics contend that the assistants can subtly guide users to point to particular letters, which would make the CRP, rather than the user, the author of any words produced. If nonspeaking autistic people who use a letterboard really know how to spell, critics ask, why is the CRP necessary? Some professional organizations, including the American Speech-Language-Hearing Association, have even cautioned against teaching nonspeaking autistic people communication methods that involve assistance from another person.
And yet, research suggests that CRP-aided methods can teach users the skills to communicate without assistance; indeed, some individuals who previously required support now type independently. And a recent study by coauthor Jaswal showed that, contrary to critics' assumptions, most of the nonspeaking autistic individuals in his study (which did not involve a CRP) knew how to spell. For example, in a string of text without any spaces, they knew where one word ended and the next word began. Using eye tracking, Jaswal's team also showed that nonspeaking autistic people who use a letterboard look at and point to letters too quickly and accurately to be responding to subtle cues from a human assistant.
Our focus then was not on improving the underlying AR hardware and system software, but on finding the most productive ways to adapt it for our users.
We knew we wanted to design a typing system that would allow users to convey anything they wanted. And given the ongoing controversy about assisted communication, we wanted a system that could build the skills needed to type independently. We envisioned a system that would give users more agency and potentially more privacy if the tool is used outside a research setting.
Augmented reality has various features that, we reasoned, make it attractive for these purposes. AR's eye- and hand-tracking capabilities could be leveraged in activities that train users in the motor skills needed to type, such as isolating and tapping targets. Some of the CRP's tasks, like offering encouragement to a user, could be automated and rolled into an AR device. Also, AR allows users to move around freely as they engage with virtual objects, which may be more suitable for autistic people who have trouble staying still: A HoloBoard can "follow" the user around a room using head tracking. What's more, virtual objects in AR are overlaid on a user's actual environment, making it safer and less immersive than virtual reality (VR) - and potentially less overwhelming for our target population.
We carefully considered our choice of hardware. While lightweight AR glasses like the Ray-Ban Meta AI glasses and Snap's AI Spectacles would have been less cumbersome for users, they don't have the high-fidelity hand-tracking and gaze-tracking we needed. Headsets like the HoloLens 2 and Meta's Quest 3 provide greater computing power and support a broader range of interaction modalities.
We aren't the first researchers to consider how AR can help autistic people. Other groups have used AR to offer autistic children real-time information about the emotions people show on their faces, for example, and to gamify social- and motor-skill training. We drew inspiration from those efforts as we took on the new idea of using AR to help nonspeaking autistic people communicate.
Our efforts have been powered by our close collaboration with nonspeaking autistic people. They are, after all, the experts about their condition, and they're the people best suited to guide the design of any tools intended for them. Everything we do is informed by their input, including the design of prototypes and the studies to test those prototypes.
When neurotypical people see someone who cannot talk, whose body moves in unusual ways, and who acts in socially unconventional ways, they may assume that the person wouldn't be interested in collaborating or wouldn't be able to do so. But, as noted by Anne M. Donnellan and others who conduct research with disabled people, behavioral differences don't necessarily reflect underlying capacities or a lack of interest in social engagement. These researchers have emphasized the importance of presuming competence - in our case, that means expecting nonspeaking autistic people to be able to learn, think, and participate.
Thus, throughout our project, we have invited nonspeaking autistic people to offer suggestions and feedback in whatever manner they prefer, including by pointing to letters on a physical letterboard while supported by a CRP. Although critics of assisted forms of communication may object to this inclusive approach, we have found the contributions of nonspeakers invaluable. Through Zoom meetings, email correspondence, comments after research sessions, and shared Google docs, these participants have provided essential input about whether and how the AR technology we're developing could be a useful communication tool. In keeping with the community's interest in more independent communication, our tests of the technology have focused on nonspeakers' performance without the assistance of a CRP.
In early conversations, our collaborators raised several concerns about using AR. For example, they worried that wearing a head-mounted device wouldn't be comfortable. Our first study investigated this topic and found that, with appropriate support and sufficient time, 15 of 17 nonspeakers wore the device without difficulty. We now have 3D-printed models that replicate the shape and weight of the HoloLens 2, to allow participants to build up tolerance before they participate in actual experiments.
Some users also expressed concern about the potential for sensory overload, and their concerns made us realize that we hadn't adequately explained the difference between AR and VR. We now provide a video before each study that explains exactly what participants will do and see and shows how AR is less immersive than VR.
Some participants told us that they like the tactile input from interacting with physical objects, including physical letterboards, and were concerned that virtual objects wouldn't replicate that experience. We currently address this concern using sensory substitution: Letters on the HoloBoard hover slightly in front of a semitransparent virtual backplate. Activating a letter requires the user to "push" it approximately 3 centimeters toward the backplate, and successful activation is accompanied by an audible click and a recorded voice saying the letter aloud.
Our users' needs and preferences have helped us set priorities for our research program. One person noted that an AR communication system seemed "cool," but worried that the motor skills required to interact in AR might not be possible without practice. So from the very first app we developed, we built in activities to let users practice the motor skills they needed to succeed.
Participants also told us they wanted to be able to customize the holograms - not just to suit their aesthetic preferences but also to better fit their unique sensory, motor, and attentional profiles. As a result, users of the HoloBoard can choose its color scheme and the size of the virtual letterboard, and whether the letters are said aloud as they're pressed. We've also provided several ways to activate letters: by pressing them, looking at them, or looking at them while using a physical clicker.
We had initially assumed that users would be interested in predictive text capabilities for the HoloBoard - having it autofill likely words based on the first letters typed. However, several people explained that although such a system could theoretically speed up communication, they would find it distracting. We've put this idea on the back burner for now; it may eventually become an option that users can toggle on if they wish.
To make things easier for users, we've investigated whether the HoloBoard could be positioned automatically in space, dynamically adjusting to the user's motor skills and movement patterns throughout a session. To this end, we used a behavioral cloning approach: During real-world interactions between nonspeakers and their CRPs, we observed the position of the user's fingers, palms, head, and physical letterboard. We then used that data to train a machine learning model to automatically adapt the placement of a virtual letterboard for a specific user.
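The learned placement model itself isn't described in enough detail here to reproduce, but the core idea - keep the board tracking the user's body while filtering out jitter - can be sketched with a much simpler stand-in. The offset and smoothing factor below are assumed values, not from the researchers' system:

```python
# Deliberately simple stand-in for the learned placement model described
# above: hold the virtual board at a fixed offset from the user's palm,
# smoothed with an exponential moving average so it doesn't jitter.

ALPHA = 0.2                    # per-frame smoothing factor (assumed)
OFFSET = (0.0, -0.05, 0.30)    # board offset from palm in metres (assumed)

def update_board_position(board_pos, palm_pos):
    target = tuple(p + o for p, o in zip(palm_pos, OFFSET))
    # EMA: move a fraction ALPHA of the remaining distance each frame.
    return tuple(b + ALPHA * (t - b) for b, t in zip(board_pos, target))

pos = (0.0, 0.0, 0.0)
for palm in [(0.1, 1.2, 0.4)] * 50:  # palm held steady for 50 frames
    pos = update_board_position(pos, palm)
# After enough frames the board converges to palm + OFFSET.
```

A trained model replaces the fixed offset and smoothing constant with behavior cloned from real CRP sessions, but the interface - observed body pose in, board placement out, updated every frame - is the same.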
Many nonspeaking participants who currently communicate with human assistance see the HoloBoard as a way to communicate with more autonomy. Indeed, we've found that after a 10-minute training procedure, most users of the HoloBoard can, like Jeremy, type short words independently. We recently began a six-month study with five participants who have regular sessions to build their typing skills on the HoloBoard.
One of the most common questions from our nonspeaking participants, as well as from parents and professionals, is whether AR could teach the skills needed to type on a standard keyboard. It seems possible, in theory. As a first step, we're creating other types of AR teaching tools, including an educational AR app that teaches typing in the context of engaging and age-appropriate lessons.
We've also begun developing a virtual CRP that can offer support and feedback as a user interacts with the virtual letterboard. This virtual assistant, named ViC, can demonstrate motor movements as a user is learning to spell with the HoloBoard, and also offers verbal prompts and encouragement during a training session. There aren't many professionals who know how to teach nonspeakers typing skills, so a virtual CRP could be a game changer for this population.
Although nonspeakers have responded enthusiastically to our AR communication tools, our conversations and studies have revealed a number of practical challenges with the current technology.
For starters, most people can't afford Microsoft's HoloLens 2, which costs US $3,500. (It's also recently been discontinued!) So we've begun testing our software on less expensive mixed-reality products such as Meta's $500 Quest 3, and preliminary results have been promising. But regardless of which device is used, most headsets are bulky and heavy. It's unlikely that someone would wear one throughout a school day, for example. One idea we're pursuing is to design a pair of AR glasses that's just for virtual typing; a device customized for a single function would weigh much less than a general-purpose headset.
We've also encountered technical challenges. For example, the HoloLens 2's field of view is only 52 degrees. This restricts the size and placement of holograms, as larger holograms or those positioned incorrectly may be partially or entirely invisible to the user. So when participants use their fingers to point at virtual letters on the HoloBoard, some letters near the edges of the board may fall outside the visible area, which is frustrating to users. To address these issues, we used a vertical layout in our educational app so that the multiple-choice buttons always remain within a user's field of view. Our systems also allow a researcher or caregiver to monitor an AR session and, if necessary, adjust the size of virtual objects so they're always in view.
We have a few other ideas for dealing with the field-of-view issue, including deploying devices that have a larger field of view. Another strategy is to use eye tracking to select letters, which would eliminate the reliance on hand movements and the problem of the user's pointing fingers obscuring the letters. And some users might prefer using a joystick or other handheld controller to navigate and select letters. Together, these techniques should make the system more accessible while working within hardware constraints.
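A quick back-of-the-envelope check shows why the 52-degree field of view constrains hologram size. The board width and viewing distances below are assumed values for illustration, not measurements from the study:

```python
import math

def angular_size_deg(width_m, distance_m):
    """Angle subtended by a flat object of the given width at the given distance."""
    return math.degrees(2 * math.atan(width_m / (2 * distance_m)))

FOV_DEG = 52  # approximate field of view of the HoloLens 2

# A hypothetical letterboard 0.6 m wide, floating 0.5 m from the user:
print(f"{angular_size_deg(0.6, 0.5):.0f} degrees")  # 62 degrees: wider than
                                                    # the FOV, so edges clip
# Pushed back to 0.8 m, the same board fits inside the FOV:
print(f"{angular_size_deg(0.6, 0.8):.0f} degrees")  # 41 degrees
```

This is the trade-off the vertical layout sidesteps: moving the board farther away keeps it fully visible but shrinks each letter as a pointing target, which matters for users still building fine motor skills.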
We have also been developing cross-reality apps, which allow two or more people wearing AR headsets to interact within the same virtual space. That's the setup we use to enable researchers to monitor study sessions in real time. Based on our development experience, we created an open-source tool called SimpleShare for the development of multiuser extended-reality apps in a device-agnostic way. A related issue is that many of our users make sudden movements; a sudden shake of a head can interfere with the sensors on the AR headset and upset the spatial alignment between multiple headsets. So our apps and SimpleShare instruct the headset to routinely scan the environment and use that data to automatically realign multiple devices, if necessary.
We've had to find solutions to cope with the limited computing power available on AR headsets. Running the AI model that automates the custom placement of the HoloBoard for each user can cause a lag in letterboard interactions and can cause the headset to heat up. We solved this problem by simplifying the AI model and decreasing the frequency of the model's interventions. Rendering a realistic virtual CRP via a headset is also computationally intensive. In our virtual CRP work, we're now rendering the avatar on an edge device, such as a laptop with a state-of-the-art GPU, and streaming it to the display.
As we continue to tackle these technology challenges, we're well aware that we don't have all the answers. That's why we discuss the problems that we're working on with the nonspeaking autistic people who will use the technology. Their perspectives are helping us make progress toward a truly usable and useful device.
So many assumptions are made about people who cannot speak, including that they don't have anything to say. We went into this project presuming competence in nonspeaking people, and yet we still weren't sure if our participants would be able to adapt to our technology. In our initial work, we were unsure whether nonspeakers could wear the AR device or interact with virtual buttons. They easily did both. In our evaluation of the HoloBoard prototype, we didn't know if users could type on a virtual letterboard hovering in front of them. They did so while we watched. In a recent study investigating whether nonspeakers could select letters using eye-gaze tracking, we wondered if they could complete the built-in gaze-calibration procedure. They did.
The ability to communicate - to share information, memories, opinions - is essential to well-being. Unfortunately, most autistic people who can't communicate using speech are never provided an effective alternative. Without a way to convey their thoughts, they are deprived of educational, social, community, and employment opportunities.
We aren't so naive as to think that AR is a silver bullet. But we're hopeful that there will be more community collaborations like ours, which take seriously the lived experiences of nonspeaking autistic people and lead to new technologies to support them. Their voices may be stuck inside, but they deserve to be heard.
Regeneron has agreed to buy 23andMe, the once buzzy genetic testing company, out of bankruptcy for $256 million under a court-supervised sale process:
23andMe declared bankruptcy in March and announced it would seek a buyer, while also saying that co-founder and CEO Anne Wojcicki would resign.
Under the proposed agreement with Regeneron, the Tarrytown, New York, drugmaker will acquire 23andMe's assets, including its personal genome service and total health and research services. Regeneron said Monday that it will abide by 23andMe's privacy policies and applicable law to protect customer data.
Data privacy experts had raised concerns about 23andMe's storehouse of data for about 15 million customers, including their DNA.
23andMe's consumer-genome services will continue uninterrupted, the purchaser said. Regeneron will not acquire 23andMe's Lemonaid Health telehealth business.
Also at ZeroHedge.
Previously: 23andMe Reportedly Faces Bankruptcy — What Will Happen to Everyone's DNA Samples?
The Verge, Space News, and The South China Morning Post are reporting that Red China has begun assembling a 744-TOPS supercomputer in Earth orbit. The advantages of an orbital supercomputer include better access to solar energy, easier radiation of waste heat, and, above all, shorter communication times with other satellites.
The satellites communicate with each other at up-to-100Gbps using lasers, and share 30 terabytes of storage between them, according to Space News. The 12 launched last week carry scientific payloads, including an X-ray polarization detector for picking up brief cosmic phenomena such as gamma-ray bursts. The satellites also have the capability to create 3D digital twin data that can be used for purposes like emergency response, gaming, and tourism, ADA Space says in its announcement.
— China begins assembling its supercomputer in space. The Verge.
They are part of the Three-Body Computing Constellation, space-based infrastructure being developed by Zhejiang Lab. Once complete, the constellation would support real-time, in-orbit data processing with a total computing capacity of 1,000 peta operations per second (POPS) – or one quintillion operations per second – the report said.
— China launches satellites to start building the world's first supercomputer in orbit. The South China Morning Post.
The satellites feature advanced AI capabilities, up to 100 Gbps laser inter-satellite links and remote sensing payloads—data from which will be processed onboard, reducing data transmission requirements. One satellite also carries a cosmic X-ray polarimeter developed by Guangxi University and the National Astronomical Observatories of the Chinese Academy of Sciences (NAOC), which will detect, identify and classify transient events such as gamma-ray bursts, while also triggering messages to enable followup observations by other missions.
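Putting the reported figures in the same units shows how far the first batch is from the stated goal. This arithmetic assumes the 744 TOPS applies to the initial 12-satellite batch and that later batches would match its capacity, neither of which the reports guarantee:

```python
# Unit check on the reported figures: 744 TOPS for the first batch versus a
# constellation goal of 1,000 POPS (peta-operations per second).
TOPS = 1e12  # tera-ops/s
POPS = 1e15  # peta-ops/s

first_batch = 744 * TOPS  # 7.44e14 ops/s
goal = 1000 * POPS        # 1e18 ops/s, i.e. one quintillion ops/s

print(goal / first_batch)  # ~1344 batches of this capacity to hit the goal
```

So 1,000 POPS is indeed one quintillion (10^18) operations per second, and reaching it would take on the order of a thousand times the computing capacity launched so far - unless per-satellite capacity grows substantially.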
Maintenance will be difficult.
Previously:
(2025) PA's Largest Coal Plant to Become 4.5GW Gas-Fired AI Hub
(2025) FTC Removes Posts Critical of Amazon, Microsoft, and AI Companies
(2025) Real Datacenter Emissions Are A Dirty Secret
(2022) Amazon and Microsoft Want to Go Big on Data Centres, but the Power Grid Can't Support Them
EU users have less than two weeks to opt out of Meta's AI training:
Privacy watchdog Noyb sent a cease-and-desist letter to Meta Wednesday, threatening to pursue a potentially billion-dollar class action to block Meta's AI training, which starts soon in the European Union.
In the letter, Noyb noted that Meta only recently notified EU users on its platforms that they had until May 27 to opt their public posts out of Meta's AI training data sets. According to Noyb, Meta is also requiring users who already opted out of AI training in 2024 to opt out again or forever lose their opportunity to keep their data out of Meta's models, as training data likely cannot be easily deleted. That's a seeming violation of the General Data Protection Regulation (GDPR), Noyb alleged.
"Meta informed data subjects that, despite the fact that an objection to AI training under Article 21(2) GDPR was accepted in 2024, their personal data will be processed unless they object again—against its former promises, which further undermines any legitimate trust in Meta's organizational ability to properly execute the necessary steps when data subjects exercise their rights," Noyb's letter said.
[...] The letter accused Meta of further deceptions, like planning to seize data that users may not consider "public," like disappearing stories typically only viewed by small audiences. That, Noyb said, differs significantly from AI crawlers scraping information posted on a public website.
According to Noyb, there would be no issue with Meta's AI training in the EU if Meta would use a consent-based model rather than requiring rushed opt-outs. As Meta explained in a blog following a threatened preliminary injunction on AI training in Germany, the company plans to collect AI training data using a "legitimate interest" legal basis, which supposedly "follows the clear guidelines of the European Data Protection Committee of December 2024, which reflect the consensus between EU data protection authorities."
But Noyb Chairman Max Schrems doesn't believe that Meta has a legitimate interest in sweeping data collection for AI training.
"The European Court of Justice has already held that Meta cannot claim a 'legitimate interest' in targeting users with advertising," Schrems said in a press release. "How should it have a 'legitimate interest' to suck up all data for AI training? While the 'legitimate interest' assessment is always a multi-factor test, all factors seem to point in the wrong direction for Meta. Meta simply says that its interest in making money is more important than the rights of its users."
In a statement, Meta's spokesperson defended the opt-out approach, noting that "we've provided EU users with a clear way to object to their data being used for training AI at Meta, notifying them via email and in-app notifications that they can object at any time."
The spokesperson criticized "Noyb's copycat actions" as "part of an attempt by a vocal minority of activist groups to delay AI innovation in the EU, which is ultimately harming consumers and businesses who could benefit from these cutting-edge technologies."
[...] Meta argued that its AI training efforts in the EU are far more transparent than efforts from competitors Google and OpenAI, which, Meta noted, "have already used data from European users to train their AI models," supposedly without taking the steps Meta has to inform users.
Also echoing a common refrain in the AI industry, another Meta blog warned that efforts to further delay Meta's AI training in the EU could lead to "major setbacks," pushing the EU behind rivals in the AI race.
"Without a reform and simplification of the European regulatory system, Europe threatens to fall further and further behind in the global AI race and lose ground compared to the USA and China," Meta warned.
[...] "This fight is essentially about whether to ask people for consent or simply take their data without it," Schrems said, adding, "Meta's absurd claims that stealing everyone's personal data is necessary for AI training is laughable. Other AI providers do not use social network data—and generate even better models than Meta."
https://www.theregister.com/2025/05/13/nextcloud_play_store_complaint/
https://nextcloud.com/blog/nextcloud-android-file-upload-issue-google/
Exclusive: European software vendor Nextcloud has accused Google of deliberately crippling its Android Files application, which it says has more than 800,000 users.
The problem lies with the "All files access" permission, where broad access to files on a device is required. While most applications can make do with Google's more privacy-friendly storage access tools, such as Storage Access Framework (SAF) or the MediaStore API, others require more permissions – hence the "All files access" privilege.
Nextcloud's Android Files app is a file synchronization tool that, according to the company, has long had permission to read and write all file types. "Nextcloud has had this feature since its inception in 2016," it said, "and we never heard about any security concerns from Google about it."
That changed in 2024, when someone or something at Google's Play Store decided to revoke the permission, effectively crippling the application. Nextcloud was instructed to use "a more privacy-aware replacement."
According to Nextcloud, "SAF cannot be used, as it is for sharing/exposing our files to other apps ... MediaStore API cannot be used as it does not allow access to other files, but only media files."
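For reference, the "All files access" privilege at issue corresponds to Android's MANAGE_EXTERNAL_STORAGE permission, which an app declares in its manifest. This is a generic illustration with a made-up package name, not Nextcloud's actual manifest:

```xml
<!-- Generic illustration: an app requesting broad file access on Android 11+.
     The package name is hypothetical; Nextcloud's real manifest may differ. -->
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.example.filesync">
    <!-- "All files access": broad read/write beyond SAF/MediaStore scopes.
         Play Store policy requires a documented justification to ship this. -->
    <uses-permission android:name="android.permission.MANAGE_EXTERNAL_STORAGE" />
</manifest>
```

Declaring the permission is not enough on its own: the user must still grant it manually in system settings (reachable via the Settings.ACTION_MANAGE_ALL_FILES_ACCESS_PERMISSION intent), and the app can check whether the grant is in place with Environment.isExternalStorageManager(). It is this grant, rather than the code, that Play Store review can effectively revoke.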
Scientists have recalculated when the universe will end, and it is a lot sooner than previously expected.
"Sooner" still means a mind-bending 10 to the power of 78 years from now. That is a 1 followed by 78 zeros, which is unimaginably far into the future. However, in cosmic terms, this estimate is a dramatic revision of the previous prediction of 10 to the power of 1,100 years, made by Falcke and his team in 2023.
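To get a feel for the size of the revision, Python's arbitrary-precision integers can compare the two estimates exactly:

```python
# Previous (2023) estimate vs. the revised estimate, both in years
old_estimate = 10**1100
new_estimate = 10**78

# The projected lifetime of the universe shrank by a factor of 10^1022
ratio = old_estimate // new_estimate
print(len(str(ratio)) - 1)  # number of zeros in the ratio: 1022
```

Both numbers are so vast that the change has no practical consequence; it matters only to the physics of how black holes and stellar remnants are assumed to decay.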
Let's hope the next prediction does not come with the same result. The logarithmic end of time.
M&S forces customer password resets after data breach:
Marks and Spencer (M&S) has confirmed that customer data was stolen during the Easter DragonForce ransomware attack on its server infrastructure and will be prompting all online customers to reset their account passwords as a precautionary move.
The attack unfolded three weeks ago and is thought to have been the work of a white-label affiliate of DragonForce – possibly the notorious Scattered Spider operation, which uses social engineering tactics to conduct its intrusions.
The stolen tranche of data is understood to include contact details such as email addresses, postal addresses, and phone numbers; personal information including names and dates of birth; and data on customer interactions with the chain, including online order histories, household information, and 'masked' payment card details.
M&S added that customer reference numbers, but not payment information, belonging to holders of M&S credit cards or Sparks Pay cards – including former cardholders – may also have been taken.
"We have written to customers today to let them know that unfortunately, some personal customer information has been taken," said M&S chief exec Stuart Machin.
"Importantly there is no evidence that the information has been shared and it does not include useable card or payment details, or account passwords, so there is no need for customers to take any action."
[...] NordVPN chief technology officer Marijus Briedis described M&S' assertion that the attackers have not yet leaked or shared the stolen data as "overly optimistic" under the circumstances, and warned that even if passwords or credit card details were not exposed, the data that was taken is still very useful to cyber criminals.
"This type of data can be used in phishing campaigns or combined with other leaked information to commit identity theft," explained Briedis.
"Consumers often underestimate how damaging 'harmless' data like order history or email addresses can be in the wrong hands. These M&S hackers could use this data to build highly personalised phishing emails, designed to look identical to what the retailer would send, and these are much harder to spot.
"This breach highlights how companies must not only secure financial data, but also treat seemingly less sensitive information – like customer profiles and purchase records – as critical assets that require protection."
Max Vetter, vice president of cyber at Immersive and a former money laundering investigator with London's Metropolitan Police, also had harsh words for M&S.
"M&S saying that customers could change their passwords 'for extra peace of mind' does little to reassure those worried about who has access to their personal information," he said. "As the fallout from this attack continues, customers want clear assurances about their personal data and what M&S is doing to keep it safe from being published online.
"M&S want to appear in control and are telling people to be more vigilant; however, telling customers there's no need to act risks sending the wrong message. We recommend all customers reset their password."
Vetter reaffirmed that the stolen data would be prime material for downstream social engineering and phishing attacks, especially if it is indeed in the hands of Scattered Spider who, he said, "often play a long game".
See also: RansomHub Went Dark April 1; Affiliates Fled to Qilin, DragonForce Claimed Control
Co-op cyber attack affects customer data, firm admits, after hackers contact BBC:
Cyber criminals have told BBC News their hack against Co-op is far more serious than the company previously admitted.
Hackers contacted the BBC with proof they had infiltrated IT networks and stolen huge amounts of customer and employee data.
After being approached on Friday, a Co-op spokesperson said the hackers "accessed data relating to a significant number of our current and past members".
Co-op had previously said that it had taken "proactive measures" to fend off hackers and that it was only having a "small impact" on its operations.
It also assured the public that there was "no evidence that customer data was compromised".
The cyber criminals claim to have the private information of 20 million people who signed up to Co-op's membership scheme, but the firm would not confirm that number.
The criminals, who are using the name DragonForce, say they are also responsible for the ongoing attack on M&S and an attempted hack of Harrods.
The attacks have led government minister Pat McFadden to warn companies to "treat cyber security as an absolute priority".
[...] Co-op has more than 2,500 supermarkets as well as 800 funeral homes and an insurance business.
[...] On Thursday, it was revealed Co-op staff were being urged to keep their cameras on during Teams meetings, ordered not to record or transcribe calls, and to verify that all participants were genuine Co-op staff.
The security measure now appears to be a direct result of the hackers having access to internal Teams chats and calls.
[...] Since the BBC contacted Co-op about the hackers' evidence, the firm has disclosed the full extent of the breach to its staff and the stock market.
"This data includes Co-op Group members' personal data such as names and contact details, and did not include members' passwords, bank or credit card details, transactions or information relating to any members' or customers' products or services with the Co-op Group," a spokesperson said.
DragonForce want the BBC to report the hack - they are apparently trying to extort the company for money.
But the criminals wouldn't say what they plan to do with the data if they don't get paid.
They refused to talk about M&S or Harrods and when asked about how they feel about causing so much distress and damage to business and customers, they refused to answer.
[...] It's not known who is ultimately using the DragonForce service to attack the retailers, but some security experts say the tactics seen are similar to that of a loosely coordinated group of hackers who have been called Scattered Spider or Octo Tempest.
The gang operates on Telegram and Discord channels and is English-speaking and young – in some cases only teenagers.
Harrods is latest British retailer to be hit by cyber attack:
London department store Harrods said on Thursday hackers had attempted to break into its systems, the third high-profile cyber attack on a UK retailer in two weeks, following incidents at Marks & Spencer and the Co-op Group.
British companies, public bodies and institutions have been hit by a wave of cyber attacks in recent years, costing them tens of millions of pounds and often months of disruption.
"We recently experienced attempts to gain unauthorised access to some of our systems," a statement from Harrods, owned by the Qatar Investment Authority, said.
"Our seasoned IT security team immediately took proactive steps to keep systems safe and as a result we have restricted internet access at our sites today."
It said all its sites, including its flagship Knightsbridge store in London, H beauty stores and airport stores remained open and customers could also continue to shop online.
The Harrods and Co-op incidents appear to have had less of an impact than the attack on M&S, one of Britain's best known retailers, which has paused taking clothing and home orders through its website and app for the last seven days.
[...] Technology specialist site BleepingComputer, citing multiple sources, said a ransomware attack that encrypted M&S's servers was believed to have been conducted by a hacking collective known as "Scattered Spider".
Arthur T Knackerbracket has processed the following story:
Police and federal agencies have found a controversial new way to skirt the growing patchwork of laws that curb how they use facial recognition: an AI model that can track people using attributes like body size, gender, hair color and style, clothing, and accessories.
The tool, called Track and built by the video analytics company Veritone, is used by 400 customers, including state and local police departments and universities all over the US. It is also expanding federally: US attorneys at the Department of Justice began using Track for criminal investigations last August. Veritone’s broader suite of AI tools, which includes bona fide facial recognition, is also used by the Department of Homeland Security—which houses immigration agencies—and the Department of Defense, according to the company.
“The whole vision behind Track in the first place,” says Veritone CEO Ryan Steelberg, was “if we’re not allowed to track people’s faces, how do we assist in trying to potentially identify criminals or malicious behavior or activity?” In addition to tracking individuals where facial recognition isn’t legally allowed, Steelberg says, it allows for tracking when faces are obscured or not visible.
The product has drawn criticism from the American Civil Liberties Union, which—after learning of the tool through MIT Technology Review—said it was the first instance they'd seen of a nonbiometric tracking system used at scale in the US. They warned that it raises many of the same privacy concerns as facial recognition but also introduces new ones at a time when the Trump administration is pushing federal agencies to ramp up monitoring of protesters, immigrants, and students.
Veritone gave us a demonstration of Track in which it analyzed people in footage from different environments, ranging from the January 6 riots to subway stations. You can use it to find people by specifying body size, gender, hair color and style, shoes, clothing, and various accessories. The tool can then assemble timelines, tracking a person across different locations and video feeds. It can be accessed through Amazon and Microsoft cloud platforms.
In an interview, Steelberg said that the number of attributes Track uses to identify people will continue to grow. When asked if Track differentiates on the basis of skin tone, a company spokesperson said it’s one of the attributes the algorithm uses to tell people apart but that the software does not currently allow users to search for people by skin color. Track currently operates only on recorded video, but Steelberg claims the company is less than a year from being able to run it on live video feeds.
Agencies using Track can add footage from police body cameras, drones, public videos on YouTube, or so-called citizen upload footage (from Ring cameras or cell phones, for example) in response to police requests.
“We like to call this our Jason Bourne app,” Steelberg says. He expects the technology to come under scrutiny in court cases but says, “I hope we’re exonerating people as much as we’re helping police find the bad guys.” The public sector currently accounts for only 6% of Veritone’s business (most of its clients are media and entertainment companies), but the company says that’s its fastest-growing market, with clients in places including California, Washington, Colorado, New Jersey, and Illinois.
[...] Track’s expansion comes as laws limiting the use of facial recognition have spread, sparked by wrongful arrests in which officers have been overly confident in the judgments of algorithms. Numerous studies have shown that such algorithms are less accurate with nonwhite faces. Laws in Montana and Maine sharply limit when police can use it—it’s not allowed in real time with live video—while San Francisco and Oakland, California have near-complete bans on facial recognition. Track provides an alternative.
Though such laws often reference "biometric data," Nathan Wessler of the ACLU says this phrase is far from clearly defined. It generally refers to immutable characteristics like faces, gait, and fingerprints rather than things that change, like clothing. But certain attributes, such as body size, blur this distinction.
Consider also, Wessler says, someone in winter who frequently wears the same boots, coat, and backpack. “Their profile is going to be the same day after day,” Wessler says. “The potential to track somebody over time based on how they’re moving across a whole bunch of different saved video feeds is pretty equivalent to face recognition.”
In other words, Track might provide a way of following someone that raises many of the same concerns as facial recognition, but isn’t subject to laws restricting use of facial recognition because it does not technically involve biometric data. Steelberg said there are several ongoing cases that include video evidence from Track, but that he couldn’t name the cases or comment further. So for now, it’s unclear whether it’s being adopted in jurisdictions where facial recognition is banned.
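Veritone has not published Track's internals, but the workflow described above — filter stored detections by a set of attributes, then order the matches in time — can be sketched in a few lines of Python. Every name and field below is hypothetical, not anything from the actual product:

```python
from dataclasses import dataclass, field

@dataclass
class Detection:
    """One person sighting in one video feed (hypothetical schema)."""
    feed: str
    timestamp: float            # seconds since some epoch
    attributes: dict = field(default_factory=dict)

def build_timeline(detections, query):
    """Return time-ordered sightings matching every attribute in the query."""
    matches = [d for d in detections
               if all(d.attributes.get(k) == v for k, v in query.items())]
    return sorted(matches, key=lambda d: d.timestamp)

sightings = [
    Detection("subway-cam-3", 100.0, {"coat": "red", "backpack": True}),
    Detection("street-cam-7", 40.0,  {"coat": "red", "backpack": True}),
    Detection("street-cam-7", 60.0,  {"coat": "blue", "backpack": False}),
]
timeline = build_timeline(sightings, {"coat": "red", "backpack": True})
print([d.feed for d in timeline])  # ['street-cam-7', 'subway-cam-3']
```

The privacy concern is visible even in this toy version: nothing here is biometric, yet a stable combination of clothing and accessories is enough to follow one person across feeds.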
On Tuesday, someone posted a video on X of a procession of crosses, with a caption reading, "Each cross represents a white farmer who was murdered in South Africa." Elon Musk, South African by birth, shared the post, greatly expanding its visibility. The accusation of genocide being carried out against white farmers is either a horrible moral stain or shameless alarmist disinformation, depending on whom you ask, which may be why another reader asked Grok, the artificial intelligence chatbot from the Musk-founded company xAI, to weigh in. Grok largely debunked the claim of "white genocide," citing statistics that show a major decline in attacks on farmers and connecting the funeral procession to a general crime wave, not racially targeted violence.
By the next day, something had changed. Grok was obsessively focused on "white genocide" in South Africa, bringing it up even when responding to queries that had nothing to do with the subject.
How much do the Toronto Blue Jays pay the team's pitcher, Max Scherzer? Grok responded by discussing white genocide in South Africa. What's up with this picture of a tiny dog? Again, white genocide in South Africa. Did Qatar promise to invest in the United States? There, too, Grok's answer was about white genocide in South Africa.
Arthur T Knackerbracket has processed the following story:
Mars may still be home to oceanic quantities of liquid water, according to a recent paper published by the National Science Review.
Titled “Seismic evidence of liquid water at the base of Mars' upper crust”, the paper [PDF] notes that liquid water once flowed freely on the surface of Mars before the planet’s magnetic field faded, its atmosphere thinned, and it became the dry and frozen hellscape we know today.
The paper’s authors – from the Chinese Academy of Sciences, the Australian National University, and the University of Milano-Bicocca – note the generally accepted theory that Mars’ water either evaporated into space or was somehow stored in the planet’s crust, but worry there’s little evidence to help us understand how much water may remain.
They think they found that evidence in data gathered by Mars InSight, the sadly defunct lander that studied the Red Planet’s interior, which recorded two meteorite impacts in 2021 and a marsquake in 2022.
Those incidents produced seismic waves that slowed as they passed through a layer between 5.4 and 8 kilometers below the surface.
The authors cite studies on how quickly seismic waves travel through porous rocks, plus research on how such waves behave as they pass through layers in Earth’s crust and conclude that Mars is home to a “water-soaked layer 5.4 to 8 kilometers deep.”
In a summary of the paper, Australian and Chinese researchers characterize that layer as “most likely highly porous rock filled with liquid water, like a saturated sponge” and akin to Earth’s aquifers. The paper estimates the porous rocks contain enough water to cover Mars in a global ocean 520–780 m deep.
Journal Reference: Weijia Sun, Hrvoje Tkalčić, Marco G Malusà, Yongxin Pan, Seismic evidence of liquid water at the base of Mars' upper crust, National Science Review, 2025, nwaf166, https://doi.org/10.1093/nsr/nwaf166
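As a sanity check, the 520–780 m range is what a simple saturated-porosity estimate yields: the equivalent global ocean depth is roughly layer thickness times water-filled porosity. The 20–30% porosity range below is our assumption for illustration; the paper's actual rock-physics model is more involved:

```python
top_km, bottom_km = 5.4, 8.0               # depth range of the water-soaked layer
thickness_m = (bottom_km - top_km) * 1000  # ~2600 m of porous rock

# Equivalent global ocean depth = layer thickness x water-filled porosity
for porosity in (0.20, 0.30):
    print(round(thickness_m * porosity), "m")  # 520 m and 780 m
```

That the two bounds fall straight out of this one-line calculation suggests the headline figure is dominated by the inferred layer thickness and porosity, both of which carry the seismic uncertainty.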
Last week, a U.S. congressman announced a plan to introduce a bill that would mandate producers of high-performance AI processors to track them geographically in a bid to limit their usage by unauthorized foreign actors, such as China. Senator Tom Cotton of Arkansas then introduced a legislative measure later in the week. The bill covers hardware that goes way beyond just AI processors, and would give the Commerce Secretary power to verify the location of hardware, and put mandatory location controls on commercial companies. To make matters even more complicated, geo-tracking features would be required for high-performance graphics cards as well.
The bill covers a wide range of products classified under the 3A090, 4A090, 4A003.z, and 3A001.z export control classification numbers (ECCNs): advanced processors for AI, AI servers (including rack-scale solutions), HPC servers, and general-purpose electronics of strategic concern due to potential military utility or dual-use risk. It should be noted that many high-end graphics cards (such as Nvidia's GeForce RTX 4090 and RTX 5090) are also classified as 3A090 products, so it looks like such add-in boards will also have to add geo-tracking capabilities.
The first and central provision of the bill is the requirement for tracking technology to be embedded in any high-end processor module or device that falls under the U.S. export restrictions. This condition would take effect six months after the legislation is enacted, which will make life harder for companies like AMD, Intel, and Nvidia, as adding a feature to already-developed products is a tough task. The mechanism must allow verification of a chip's or device's physical location, enabling the U.S. government to confirm whether it remains at the approved endpoint. In addition, exporters would be obliged to keep track of their products.
The bill authorizes the Secretary of Commerce to verify the ownership and location of regulated processors and systems after export and maintain a centralized registry of current locations and end-users. Nvidia, as well as other exporters, would also be obligated to inform the Bureau of Industry and Security if there is evidence that a component has been redirected from its authorized destination. Additionally, any indications of tampering or manipulation must be reported.
The bill, if supported by lawmakers, will mandate a one-year study to be conducted jointly by the Department of Commerce and the Department of Defense, which will identify additional protective measures that could be introduced in the future. Beyond the initial study, the same two departments are required to conduct yearly assessments for three consecutive years following the bill's enactment. These reviews must evaluate the most current advancements in security technologies applicable to products under export control. Based on these assessments, the departments may determine whether new requirements should be imposed.
If the assessment concludes that additional mechanisms are appropriate, the Commerce Department must finalize rules within two years requiring covered chips and systems to incorporate these secondary features. A detailed implementation roadmap must also be submitted to the relevant congressional committees. All development and deployment of these mechanisms must preserve the confidentiality of sensitive commercial technologies.
Finally, the legislation emphasizes confidentiality in all stages of developing and applying these new technical requirements. Any proposed safeguards or tracking features must be designed and implemented in a way that protects the proprietary information and trade secrets of American developers, such as AMD, Intel, and Nvidia. This condition ensures that while national security is strengthened, industrial competitiveness is not undermined.
Is it even possible? Does the "tracking" stop if an American purchases the GPU?
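The bill does not specify a mechanism, and the question above is a fair one. One approach discussed for hardware attestation in general is having each device sign a server-issued nonce with a key fused at manufacture, so a registry at least knows which unit produced a location report. A minimal sketch in Python, with HMAC standing in for a real hardware-backed key — every name here is hypothetical, not anything from the bill:

```python
import hmac, hashlib, secrets

# Per-device secret fused at manufacture (hypothetical registry of keys)
DEVICE_KEYS = {"gpu-001": secrets.token_bytes(32)}

def attest(device_id, nonce, reported_location):
    """Device side: sign the nonce plus claimed location with the fused key."""
    key = DEVICE_KEYS[device_id]
    msg = nonce + reported_location.encode()
    return hmac.new(key, msg, hashlib.sha256).digest()

def verify(device_id, nonce, reported_location, tag):
    """Registry side: recompute the tag and compare in constant time."""
    expected = attest(device_id, nonce, reported_location)
    return hmac.compare_digest(expected, tag)

nonce = secrets.token_bytes(16)
tag = attest("gpu-001", nonce, "us-east-dc1")
print(verify("gpu-001", nonce, "us-east-dc1", tag))  # True
print(verify("gpu-001", nonce, "elsewhere", tag))    # False
```

Even this only proves which device signed the claim, not that the claimed location is truthful — a resold or relocated card could simply report whatever its operator tells it to, which is exactly why the feasibility question is hard.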
See also: Nvidia says it is not sending GPU designs to China after reports of new Shanghai operation [JR]
Processed by drussell
https://www.theregister.com/2025/05/15/voyager_1_survives_with_thruster_fix/
NASA has revived a set of thrusters on the nearly 50-year-old Voyager 1 spacecraft after declaring them inoperable over two decades ago.
It's a nice long-distance engineering win for the team at NASA's Jet Propulsion Laboratory, responsible for keeping the venerable Voyager spacecraft flying - and a critical one at that, as clogging fuel lines threatened to derail the backup thrusters currently in use.
The things you have to deal with when your spacecraft is operating more than four decades beyond its original mission plan, eh? Voyager 1 launched in 1977.
JPL reported Wednesday that the maneuver, completed in March, restarted Voyager 1's primary roll thrusters, which are used to keep the spacecraft aligned with a tracking star. That guide star helps keep its high-gain antenna aimed at Earth, now over 15.6 billion miles (25 billion kilometers) away, and far beyond the reach of any telescope.
Those primary roll thrusters stopped working in 2004 after a pair of internal heaters lost power. Voyager engineers long believed they were broken and unfixable. The backup roll thrusters in use are now at risk due to residue buildup in their fuel lines, which could cause failure as early as this fall.
Without roll thrusters, Voyager 1 would lose its ability to stay properly oriented and eventually drift out of contact.
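At that distance the fix really was a long-distance one: even light needs nearly a full day each way, so every command is sent blind and the result learned almost two days later. A quick check of the one-way signal delay from the figures in the article:

```python
distance_km = 25e9           # Voyager 1's distance, per the article
c_km_s = 299_792.458         # speed of light in km/s

one_way_hours = distance_km / c_km_s / 3600
print(round(one_way_hours, 1))  # roughly 23.2 hours each way
```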
White House scraps plan to block data brokers from selling Americans' sensitive data:
A senior Trump administration official has scrapped a plan that would have blocked data brokers from selling Americans' personal and financial information, including Social Security numbers.
The Consumer Financial Protection Bureau (CFPB) said in December 2024 it planned to close a loophole under the Fair Credit Reporting Act, the federal law that protects Americans' personal data collected by consumer reporting agencies, such as credit bureaus and renter-screening companies. The rule would have treated data brokers no differently than any other company covered under the federal law and would have required them to comply with the law's privacy rules.
The rule was withdrawn early Tuesday, according to its listing in the Federal Register. The CFPB's acting director, Russell Vought, who also serves as the director of the White House's Office of Management and Budget, wrote that the rule is "not aligned with the Bureau's current interpretation" of the Fair Credit Reporting Act.
[...] Privacy advocates have long called for the government to use the Fair Credit Reporting Act to rein in data brokers.
The decision by CFPB to cancel the rule comes days after the Financial Technology Association, an industry lobby group representing non-bank fintech companies, wrote to Vought in his capacity as the White House's budget director. The lobby group asked the administration to withdraw the CFPB's rule, claiming it would be "harmful to financial institutions' efforts to detect and prevent fraud."