Arthur T Knackerbracket has processed the following story:
Imagine an inverse Black Hat conference, an Alcoholics Anonymous for CISOs, where everyone commits to frank disclosure and debate on the underlying structural causes of persistently failing cybersecurity syndrome.
It's been a devastating few weeks for UK retail giants. Marks and Spencer, the Co-Op, and now uber-posh Harrods have had massive disruptions due to ransomware attacks taking systems down for prolonged periods.
If the goods these people sold were one-tenth as shoddy as their corporate cybersecurity, they'd have been out of business years ago. It's a wake-up call, says the UK's National Center for Stating the Obvious. And what will happen? The industry will just press the snooze button again, as we hear reports that other retailers are "patching like crazy."
The bare fact that entire sectors remain exquisitely vulnerable to what is, by now, a very familiar form of attack is a diagnostic of systematic failure in the way such sectors are run. There are few details of what exactly happened, but it's not the details that matter, it's the fact that so little was made public.
We see only silence, deflection, and grudging admission as the undeniable effects multiply - which is a very familiar pattern. The only surprise is that there is no surprise. This isn't part of the problem, it is the problem. Like alcoholics, organizations cannot get better until they admit, confront, and work with others to mitigate the compulsions that bring them low. The raw facts are not in doubt; it's the barriers to admitting and drawing out their sting that perpetuate the problem.
We know this because there is so much evidence of corporate IT's fundamental flaws. If you have been in the business for a few years, you'll already know what they are – just as surely as you'll have despaired of progress. If you are a joyfully innocent newbie, then look at the British Library's report into its own 2023 ransomware catastrophe. It took many core systems down, some of them forever, while leaking huge amounts of data that belonged to staff and customers. As a major public institution established by law, and one devoted to knowledge as a social good, the British Library wasn't just free to be frank about what happened, it had a moral obligation to do so.
[...] This is basic human psychology that operates at every scale. Getting the boiler serviced or buying a sparkling new gaming rig - there's a right decision and one you'll actually make. Promising to run a state well while starving it of funds is again hardly unknown. Such an act is basic, but toxic, and it admits of its toxicity by being something that polite people are loath to discuss in public.
Where there's insufficient discipline to Do The Right Thing in private, though, making it public is a powerful corrective. Self-help groups for alcohol abuse work for many. Religions are big on public confession for a reason. Democracy forces periodic public review of promises kept or truths disowned. What might work for the toxic psychology of organizations that keeps them addicted to terrible cybersecurity?
It's unlikely that entrenched corporate culture will reform itself. You are welcome to look for historic examples; they're filed alongside tobacco companies moving into tomato farming and the Kalashnikov Ploughshare Company.
[...] What then? A protocol for ensuring, or at least encouraging, the security lifecycle of a project or component. How long will it live, how much will it cost to watch and maintain it, what mechanisms are there to reassess it regularly as the threat environment evolves, what dependencies need safeguarding, and, lastly, what is the threat surface of third party elements? In short, we must agree to accept that there is no such thing as "legacy IT," no level of technical debt that can be quietly shoved off the books. If all that isn't signed off at the start of a system's life, it doesn't happen.
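To make the idea concrete, here is one rough sketch of how such a sign-off might be checked mechanically at a system's inception (an illustration only: the field names, thresholds, and checklist below are invented for the example, not drawn from any published methodology):

```python
from dataclasses import dataclass, field

@dataclass
class SecurityLifecycle:
    """Hypothetical sign-off record for a new system or component."""
    expected_lifetime_years: float          # how long will it live?
    annual_maintenance_budget: float        # cost to watch and maintain it
    reassessment_interval_months: int       # how often the threat model is revisited
    safeguarded_dependencies: list = field(default_factory=list)
    third_party_threat_surface: str = ""    # exposure via third-party elements

    def signed_off(self) -> bool:
        # No "legacy IT": every question must have an answer before launch.
        return (self.expected_lifetime_years > 0
                and self.annual_maintenance_budget > 0
                and self.reassessment_interval_months > 0
                and bool(self.safeguarded_dependencies)
                and bool(self.third_party_threat_surface))

manifest = SecurityLifecycle(10, 250_000, 6,
                             ["openssl", "payment gateway"],
                             "vendor SaaS APIs, CDN")
assert manifest.signed_off(), "if it isn't signed off, it doesn't happen"
```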
No silver bullet, nor proof against toxic psychology. It would be a tool for everyone who knows what the right decision is, but who can't see how to make it happen. There are plenty of accepted methodologies for characterizing the shape of a project at its inception and development, and all came about to fix previous problems.
Several planets orbiting two stars at once, like the fictional Star Wars world Tatooine, have been discovered in recent years. These planets typically occupy orbits that roughly align with the plane in which their host stars orbit each other. There have previously been hints that planets on perpendicular, or polar, orbits around binary stars could exist: in theory, these orbits are stable, and planet-forming discs on polar orbits around stellar pairs have been detected. However, until now, we lacked clear evidence that such polar planets actually exist.
"I am particularly excited to be involved in detecting credible evidence that this configuration exists," says Thomas Baycroft, a PhD student at the University of Birmingham, UK, who led the study published today in Science Advances.
The unprecedented exoplanet, named 2M1510 (AB) b, orbits a pair of young brown dwarfs — objects bigger than gas-giant planets but too small to be proper stars. The two brown dwarfs produce eclipses of one another as seen from Earth, making them part of what astronomers call an eclipsing binary. This system is incredibly rare: it is only the second pair of eclipsing brown dwarfs known to date, and it contains the first exoplanet ever found on a path at right angles to the orbit of its two host stars.
"A planet orbiting not just a binary, but a binary brown dwarf, as well as being on a polar orbit is rather incredible and exciting," says co-author Amaury Triaud, a professor at the University of Birmingham.
The team found this planet while refining the orbital and physical parameters of the two brown dwarfs by collecting observations with the Ultraviolet and Visual Echelle Spectrograph (UVES) instrument on ESO's VLT at Paranal Observatory, Chile. The pair of brown dwarfs, known as 2M1510, were first detected in 2018 by Triaud and others with the Search for habitable Planets EClipsing ULtra-cOOl Stars (SPECULOOS), another Paranal facility.
The astronomers observed the orbital path of the two stars in 2M1510 being pushed and pulled in unusual ways, leading them to infer the existence of an exoplanet with its strange orbital angle. "We reviewed all possible scenarios, and the only one consistent with the data is if a planet is on a polar orbit about this binary," says Baycroft.
"The discovery was serendipitous, in the sense that our observations were not collected to seek such a planet, or orbital configuration. As such, it is a big surprise," says Triaud. "Overall, I think this shows to us astronomers, but also to the public at large, what is possible in the fascinating Universe we inhabit."
This research was presented in a paper to appear in Science Advances titled "Evidence for a polar circumbinary exoplanet orbiting a pair of eclipsing brown dwarfs" (https://doi.org/10.1126/sciadv.adu0627).
Arthur T Knackerbracket has processed the following story:
Tech buyers should purchase refurbished devices to push vendors to make hardware more repairable and help the shift to a more circular economy, according to a senior analyst at IDC.
Presenting a TED talk, IDC's vice president of devices for EMEA, Francisco Jeronimo, said that in 2022 there were 62 million tons of electronic waste generated, while the average e-waste per person amounted to 11.2 kg annually.
While governments and manufacturers should press for more ethical sourcing and better recycling practices in consumer tech, buyers are not entirely powerless.
"When we look into all this waste, we know there's a problem, but we don't look into what we are doing to fix it," he said. "We blame governments, we blame corporations, we blame the brands. Because at the end of the day, how can I make my smartphone more sustainable? I can't. It needs to be the brand [and] governments [that] are bringing legislation to force the brands, but we have a superpower."
The buyer's superpower comes in the form of extending the life of devices we own, and choosing to buy secondhand refurbished devices when we need new ones, he said.
"Circularity is the answer. We need to decide whether we're going to keep buying new devices or take action to extend the life of the devices we use and make better choices when we buy new products."
If users in the European Union were able to extend by one year the lifespan of washing machines, notebooks, vacuum cleaners and smartphones, roughly four million tons of CO2 emissions would be saved, a European Environmental Bureau study claimed in 2019.
Jeronimo said the popularity of secondhand clothing has taken off on platforms like Vinted and eBay, but more could be done in technology.
"When we need a new smartphone or tablet or PC, we rush to the store to buy it new, and that needs to change, and there are 62 million tons of reasons why it matters."
[...] In March 2024, research showed the tech industry was creating electronic waste almost five times faster than it was recycling it (using documented methods). A United Nations report found that e-waste recycling has benefits estimated to include $23 billion of monetized value from avoided greenhouse gas emissions and $28 billion of recovered materials like gold, copper, and iron. It also comes at a cost – $10 billion associated with e-waste treatment and $78 billion of externalized costs to people and the environment.
Of the 62 million tons of e-waste generated globally in 2022, an estimated 13.8 million tons were documented, collected, and properly recycled, the report found.
Apple is boldly embracing brain-computer interface (BCI) technology to enable users to control its devices using only their thoughts—a novel frontier for the company.
Earlier this week, it was announced that the tech giant is working with Synchron, a company that has been pioneering BCI research and work for more than a decade. The company was founded by Dr. Tom Oxley, a neurointerventionalist and technologist. Synchron has developed a stent-like implant that can be inserted using a (relatively) minimally invasive procedure on an individual's motor cortex. The stent was reportedly granted FDA clearance for human trials in 2021, and works to detect brain signals and translate them into software-enabled relays; in the case of an Apple device, the relays can select icons on an iPhone or iPad.
The video below shows a user's experience with Synchron's BCI in conjunction with the Apple Vision headset.
(see site for video)
Apple is working to establish the standards for BCI devices and protocolize what their use could look like across its device landscape. The company is expected to open up the technology and protocols to third-party developers in short order.
One of the primary goals of BCI technology is to enable the millions of individuals worldwide who have limited physical function to use devices. For example, the World Health Organization reports that globally, over 15 million people are living with spinal cord injuries. Many of these individuals may experience some type of loss of physical or sensory function over the course of their lifetimes.
This is where BCIs can truly make a difference—enabling individuals to control electronic devices purely with their thoughts. In fact, reports indicate that the BCI industry is expected to grow at a CAGR of 9.35% from 2025 to 2030 and has huge potential to become a trillion dollar market within the next decade.
Arthur T Knackerbracket has processed the following story:
The UK needs more nuclear energy generation just to power all the AI datacenters that are going to be built, according to the head of Amazon Web Services (AWS).
In an interview with the BBC, AWS chief executive Matt Garman said the world is going to have to build new technologies to cope with the projected energy demands of all the bit barns that are planned to support AI.
"I believe nuclear is a big part of that, particularly as we look ten years out," he said.
AWS has already confirmed plans to invest £8 billion ($10.6 billion) on building out its digital and AI infrastructure in Britain between now and the end of 2028 to meet "the growing needs of our customers and partners."
Yet the cloud computing arm of Amazon isn't the only biz popping up new bit barns in Blighty. Google started building a $1 billion campus at Waltham Cross near London last year, while Microsoft began construction of the Park Royal facility in West London in 2023, and made public its plans for another datacenter on the site of a former power station in Leeds last year.
Earlier this year, approval was granted for what is set to become Europe's largest cloud and AI datacenter at a site in Hertfordshire, while another not far away has just been granted outline planning permission by a UK government minister, overruling the local district authority.
This activity is accelerating thanks to the government's AI Opportunities Action Plan, which includes streamlined planning processes to expedite the building of more data facilities in the hope this will drive AI development.
As The Register has previously reported, the infrastructure needed for AI is getting more power-hungry with each generation, and the datacenter expansion to serve the growth in AI services has led to concerns over the amount of energy required.
[...] "AI is driving exponential demand for compute, and that means power. Ultimately, a long-term, resilient energy strategy is critical," said Séamus Dunne, managing director in the UK and Ireland for datacenter biz Digital Realty.
"For the UK to stay competitive in the global digital economy, we need a stable, scalable, and low-carbon energy mix to support the next generation of data infrastructure. With demand already outpacing supply, and the UK aiming to establish itself as an AI powerhouse, it's vital we stay open to a range of solutions. That also means building public trust and working with government to ensure the grid can keep pace."
Garman told the BBC that nuclear is a "great solution" to datacenter energy requirements as it is "an excellent source of zero-carbon, 24/7 power."
This might be true, but new atomic capacity simply can't be delivered fast enough to meet near-term demand, as we reported earlier this year. The World Nuclear Association says that an atomic plant typically takes at least five years to construct, whereas natural gas plants are often built in about two years.
ETH Zurich boffins exploit branch prediction race condition to steal info from memory, fixes have mild perf hit
by Thomas Claburn // Tue 13 May 2025
Researchers at ETH Zurich in Switzerland have found a way around Intel's defenses against Spectre, a family of data-leaking flaws in the x86 giant's processor designs that simply won't die.
Sandro Rüegge, Johannes Wikner, and Kaveh Razavi have identified a class of security vulnerabilities they're calling Branch Predictor Race Conditions (BPRC), which they describe in a paper [PDF] scheduled to be presented at USENIX Security 2025 and Black Hat USA 2025 later this year.
Spectre refers to a set of hardware-level processor vulnerabilities identified in 2018 that can be used to break the security isolation between software. It does this by exploiting speculative execution - a performance optimization technique that involves the CPU anticipating future code paths (also known as branch prediction) and executing down those paths before they're actually needed.
In practice, this all means malware running on a machine, or a rogue logged-in user, can potentially abuse Spectre flaws within vulnerable Intel processors to snoop on and steal data – such as passwords, keys, and other secrets – from other running programs or even from the kernel, the heart of the operating system itself, or from adjacent virtual machines on a host, depending on the circumstances. In terms of real-world risk, we haven't seen the Spectre family exploited publicly in a significant way, yet.
There are several Spectre variants. One of these, Spectre v2, enables an attacker to manipulate indirect branch predictions across different privilege modes to read arbitrary memory; it effectively allows a malicious program to extract secrets from the kernel and other running applications.
Intel has added various hardware-based defenses against these sorts of attacks over the years, which include Indirect Branch Restricted Speculation (IBRS/eIBRS) for restricting indirect branch target prediction, a sanitizing technique called Indirect Branch Predictor Barrier (IBPB), and other microarchitectural speculation controls.
eIBRS, the researchers explain, is designed to restrict indirect branch predictions to their originating privilege domain, preventing them from leaking across boundaries. Additional protection provided by IBPB is recommended in scenarios where different execution contexts, like untrusted virtual machines (VMs), share the same privilege level and hardware domain.
But Rüegge, Wikner, and Razavi found that branch predictors on Intel processors are updated asynchronously inside the processor pipeline, meaning there are potential race conditions – situations when two or more processes or threads attempt to access and update the same information concurrently, resulting in unpredictable behavior.
[...] Razavi said there are several possible attack scenarios.
"You could start a VM in your favorite cloud and this VM could then leak information from the hypervisor, including information that belongs to other VMs owned by other customers," he explained.
"While such attacks are in theory possible, and we have shown that BPI enables such attacks, our particular exploit leaks information in the user-to-kernel scenario. In such a scenario, the attacker runs inside an unprivileged user process (instead of a VM), and leaks information from the OS (instead of the hypervisor)."
Essentially, BPI (Branch Privilege Injection) allows the attacker, from user mode, to inject branch predictions tagged with elevated privileges, bypassing the security guarantees of eIBRS and IBPB. Thereafter, a Spectre v2 attack (sometimes called Branch Target Injection, or BTI) can be carried out to gain access to sensitive data in memory.
[...] Razavi said Spectre-related flaws are likely to continue to haunt us for a while – which El Reg did warn about back in 2018.
"Speculative execution is quite fundamental in how we build high-performance CPUs, so as long as we build CPUs this way, there is always a chance for such vulnerabilities to happen," he said. "That said, CPU vendors are now more aware of these issues and hopefully also more careful when introducing new designs and new features.
"Furthermore, while much more work still needs to be done, there is some progress in building the necessary tooling for detecting such issues. Once we have better tooling, it becomes easier to find and fix these issues pre-silicon. To summarize, things should hopefully slowly get better, but we are not there yet."
Of note: Security experts have discovered new Intel Spectre vulnerabilities
New Intel CPU flaws leak sensitive data from privileged memory:
A new "Branch Privilege Injection" flaw in all modern Intel CPUs allows attackers to leak sensitive data from memory regions allocated to privileged software like the operating system kernel.
Typically, these regions are populated with information like passwords, cryptographic keys, memory of other processes, and kernel data structures, so protecting them from leakage is crucial.
According to ETH Zurich researchers Sandro Rüegge, Johannes Wikner, and Kaveh Razavi, Spectre v2 mitigations held for six years, but their latest "Branch Predictor Race Conditions" exploit effectively bypasses them.
The flaw, which is named 'branch privilege injection' and tracked under CVE-2024-45332, is a race condition on the subsystem of branch predictors used in Intel CPUs.
Branch predictors like Branch Target Buffer (BTB) and Indirect Branch Predictor (IBP) are specialized hardware components that try to guess the outcome of a branch instruction before it's resolved to keep the CPU pipeline full for optimal performance.
These predictions are speculative, meaning they are undone if they turn out to be wrong; when they are correct, they improve performance.
The researchers found that Intel's branch predictor updates are not synchronized with instruction execution, resulting in these updates traversing privilege boundaries.
If a privilege switch happens, like from user mode to kernel mode, there is a small window of opportunity during which the update is associated with the wrong privilege level.
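The race itself happens deep inside the processor pipeline, but its shape can be illustrated with an ordinary software analogy (ours, not the researchers' code): an asynchronous update samples its context when it lands, not when it was issued.

```python
import threading
import time

# Loose analogy only: a "predictor update" issued in user mode completes
# asynchronously and tags itself with whatever privilege level happens to
# be current when it finally lands.
current_privilege = "user"

def async_predictor_update():
    time.sleep(0.01)              # the update is still in flight...
    tag = current_privilege       # ...and samples the privilege level too late
    print(f"prediction stored with tag: {tag}")   # prints "kernel", not "user"

t = threading.Thread(target=async_predictor_update)
t.start()                         # branch executed while in "user" mode
current_privilege = "kernel"      # privilege switch races the in-flight update
t.join()
```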
[...] CVE-2024-45332 impacts all Intel CPUs from the ninth generation onward, including Coffee Lake, Comet Lake, Rocket Lake, Alder Lake, and Raptor Lake.
"All intel processors since the 9th generation (Coffee Lake Refresh) are affected by Branch Privilege Injection," explains the researchers.
"However, we have observed predictions bypassing the Indirect Branch Prediction Barrier (IBPB) on processors as far back as 7th generation (Kaby Lake)."
ETH Zurich researchers did not test older generations at this time, but since they do not support Enhanced Indirect Branch Restricted Speculation (eIBRS), they're less relevant to this specific exploit and likely more prone to older Spectre v2-like attacks.
Arm Cortex-X1, Cortex-A76, and AMD Zen 5 and Zen 4 chips were also examined, but they do not exhibit the same asynchronous predictor behavior, so they are not vulnerable to CVE-2024-45332.
[...] The risk is low for regular users, and attacks have multiple strong prerequisites to open up realistic exploitation scenarios. That being said, applying the latest BIOS/UEFI and OS updates is recommended.
ETH Zurich will present the full details of their exploit in a technical paper at the upcoming USENIX Security 2025.
Arthur T Knackerbracket has processed the following story:
This week, meet a reader we'll Regomize as "Colin", who told us about his time working as a front-end developer for an education company that decided the time was right to expand from the UK to the US.
"Suddenly we needed to localize thousands of online articles, lessons, and other documents into American English."
Inconveniently, all that content was static HTML. "There was no CMS, no database, nothing I could harness on the server side," Colin lamented to Who, Me?
After due consideration, Colin and his team decided to use regular expressions to do the job.
"Our system combined tackling spelling swaps like changing 'ae' to 'e' in words like 'archaeology' and word/phrase swaps so that British terms like 'post' were changed to the American 'mail.'" Colin knew this could go pear-shaped if the system changed a term like "post-modern" to "mail-modern," so compound words were exempt.
As Colin and his workmates considered all the necessary changes, they realized they needed a lot of rules.
"The fact it was running the replacements directly on the body HTML, and causing lots of page repaints, meant we had to build a REST API to cache which rules ran and didn't run for each page, so as to not cause slowdown by running unnecessary rules," he explained.
Which worked well until it didn't.
"One day we got a call asking why a lesson about famous artists referred to the great painter 'Vincent Truck Gogh.'"
Readers are doubtless familiar with Vincent Van Gogh, and the different names for midsize vehicles on each side of the North Atlantic.
That was just the start. Next came complaints about a religious studies lesson that explained how Adam and Eve lived in the "Yard of Eden" – not the garden. Another religion class mentioned sinister-sounding "Easter hoods" instead of the daintier "Easter bonnets."
Colin figured out that the word swaps he coded failed to consider cases where it should just skip a word altogether. A van, after all, is a truck if you're American.
"In the end, we managed to get the system to be context-aware, so that certain swaps could be suppressed if the article contained a certain trigger word which suggested it shouldn't run, and the problems went away. But it was a very entertaining bug to be involved with!"
Processed by jelizondo
Arthur T Knackerbracket has processed the following story:
The HoloBoard augmented-reality system lets people type independently.
Jeremy is a 31-year-old autistic man who loves music and biking. He's highly sensitive to lights, sounds, and textures, has difficulty initiating movement, and can say only a few words. Throughout his schooling, it was assumed he was incapable of learning to read and write. But for the past 30 minutes, he's been wearing an augmented-reality (AR) headset and spelling single words on the HoloBoard, a virtual keyboard that hovers in the air in front of him. And now, at the end of a study session, a researcher asks Jeremy (not his real name) what he thought of the experience.
Deliberately, poking one virtual letter at a time, he types, "That was good."
It was not obvious that Jeremy would be able to wear an AR headset, let alone use it to communicate. The headset we use, Microsoft's HoloLens 2, weighs 566 grams (more than a pound), and the straps that encircle the head can be uncomfortable. Interacting with virtual objects requires precise hand and finger movements. What's more, some people doubt that people like Jeremy can even understand a question or produce a response. And yet, in study after study, we have found that most nonspeaking autistic teenage and adult participants can wear the HoloLens 2, and most can type short words on the HoloBoard.
The HoloBoard prototype that Jeremy first used in 2023 was three years in the making. It had its origins in an interdisciplinary feasibility study that considered whether individuals like Jeremy could tolerate a commercial AR headset. That study was led by the three of us: a developmental psychologist (Vikram Jaswal at the University of Virginia), an electrical and software engineer (Diwakar Krishnamurthy at the University of Calgary), and a computer scientist (Mea Wang, also at the University of Calgary).
Our journey to this point was not smooth. Some autism researchers told us that nonspeaking autistic people "do not have language" and so couldn't possibly communicate by typing. They also said that nonspeaking autistic people are so sensitive to sensory experiences that they would be overwhelmed by augmented reality. But our data, from more than a half-dozen peer-reviewed studies, have shown both assumptions to be wrong. And those results have informed the tools we're creating, like the HoloBoard, to enable nonspeaking autistic people to communicate more effectively.
Nonspeaking autistic people may also appear inattentive, engage in impulsive behavior, and score poorly on standard intelligence tests (many of which require spoken responses within a set amount of time). Historically, these challenges have led to unfounded assumptions about these individuals' ability to understand language and their capacity for symbolic thought. To put it bluntly, it has sometimes been assumed that someone who can't talk is also incapable of thinking.
Most attempts to provide nonspeaking autistic people with an alternative to speech have been rudimentary. Picture-based communication systems, often implemented on an iPad or tablet, are frequently used in schools and therapy clinics. If a user wants a cookie, they can tap a picture of a cookie. But the vocabulary of these systems is limited to the concepts that can be represented by a simple picture.
There are other options. Some nonspeaking autistic people have learned, over the course of many years and guided by parents and professionals, to communicate by spelling words and sentences on a letterboard that's held by a trained human assistant - a communication and regulation partner, or CRP. Part of the CRP's role is to provide attentional and emotional support, which can help with conditions that commonly accompany severe autism and that interfere with communication, including anxiety, attention-deficit hyperactivity disorder, and obsessive-compulsive disorder. Having access to such assisted methods of communication has allowed nonspeaking autistic people to graduate from college, write poetry, and publish a best-selling memoir.
But the role of the CRP has generated considerable controversy. Critics contend that the assistants can subtly guide users to point to particular letters, which would make the CRP, rather than the user, the author of any words produced. If nonspeaking autistic people who use a letterboard really know how to spell, critics ask, why is the CRP necessary? Some professional organizations, including the American Speech-Language-Hearing Association, have even cautioned against teaching nonspeaking autistic people communication methods that involve assistance from another person.
And yet, research suggests that CRP-aided methods can teach users the skills to communicate without assistance; indeed, some individuals who previously required support now type independently. And a recent study by coauthor Jaswal showed that, contrary to critics' assumptions, most of the nonspeaking autistic individuals in his study (which did not involve a CRP) knew how to spell. For example, in a string of text without any spaces, they knew where one word ended and the next word began. Using eye tracking, Jaswal's team also showed that nonspeaking autistic people who use a letterboard look at and point to letters too quickly and accurately to be responding to subtle cues from a human assistant.
Our focus then was not on improving the underlying AR hardware and system software, but on finding the most productive ways to adapt it for our users.
We knew we wanted to design a typing system that would allow users to convey anything they wanted. And given the ongoing controversy about assisted communication, we wanted a system that could build the skills needed to type independently. We envisioned a system that would give users more agency and potentially more privacy if the tool is used outside a research setting.
Augmented reality has various features that, we reasoned, make it attractive for these purposes. AR's eye- and hand-tracking capabilities could be leveraged in activities that train users in the motor skills needed to type, such as isolating and tapping targets. Some of the CRP's tasks, like offering encouragement to a user, could be automated and rolled into an AR device. Also, AR allows users to move around freely as they engage with virtual objects, which may be more suitable for autistic people who have trouble staying still: A HoloBoard can "follow" the user around a room using head tracking. What's more, virtual objects in AR are overlaid on a user's actual environment, making it safer and less immersive than virtual reality (VR) - and potentially less overwhelming for our target population.
We carefully considered our choice of hardware. While lightweight AR glasses like the Ray-Ban Meta AI glasses and Snap's AI Spectacles would have been less cumbersome for users, they don't have the high-fidelity hand-tracking and gaze-tracking we needed. Headsets like the HoloLens 2 and Meta's Quest 3 provide greater computing power and support a broader range of interaction modalities.
We aren't the first researchers to consider how AR can help autistic people. Other groups have used AR to offer autistic children real-time information about the emotions people show on their faces, for example, and to gamify social- and motor-skill training. We drew inspiration from those efforts as we took on the new idea of using AR to help nonspeaking autistic people communicate.
Our efforts have been powered by our close collaboration with nonspeaking autistic people. They are, after all, the experts about their condition, and they're the people best suited to guide the design of any tools intended for them. Everything we do is informed by their input, including the design of prototypes and the studies to test those prototypes.
When neurotypical people see someone who cannot talk, whose body moves in unusual ways, and who acts in socially unconventional ways, they may assume that the person wouldn't be interested in collaborating or wouldn't be able to do so. But, as noted by Anne M. Donnellan and others who conduct research with disabled people, behavioral differences don't necessarily reflect underlying capacities or a lack of interest in social engagement. These researchers have emphasized the importance of presuming competence - in our case, that means expecting nonspeaking autistic people to be able to learn, think, and participate.
Thus, throughout our project, we have invited nonspeaking autistic people to offer suggestions and feedback in whatever manner they prefer, including by pointing to letters on a physical letterboard while supported by a CRP. Although critics of assisted forms of communication may object to this inclusive approach, we have found the contributions of nonspeakers invaluable. Through Zoom meetings, email correspondence, comments after research sessions, and shared Google docs, these participants have provided essential input about whether and how the AR technology we're developing could be a useful communication tool. In keeping with the community's interest in more independent communication, our tests of the technology have focused on nonspeakers' performance without the assistance of a CRP.
In early conversations, our collaborators raised several concerns about using AR. For example, they worried that wearing a head-mounted device wouldn't be comfortable. Our first study investigated this topic and found that, with appropriate support and sufficient time, 15 of 17 nonspeakers wore the device without difficulty. We now have 3D-printed models that replicate the shape and weight of the HoloLens 2, to allow participants to build up tolerance before they participate in actual experiments.
Some users also expressed concern about the potential for sensory overload, and their concerns made us realize that we hadn't adequately explained the difference between AR and VR. We now provide a video before each study that explains exactly what participants will do and see and shows how AR is less immersive than VR.
Some participants told us that they like the tactile input from interacting with physical objects, including physical letterboards, and were concerned that virtual objects wouldn't replicate that experience. We currently address this concern using sensory substitution: Letters on the HoloBoard hover slightly in front of a semitransparent virtual backplate. Activating a letter requires the user to "push" it approximately 3 centimeters toward the backplate, and successful activation is accompanied by an audible click and a recorded voice saying the letter aloud.
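Reduced to its essentials, that activation scheme is a depth threshold with edge triggering, as in this sketch (our own reconstruction, not HoloBoard source code; only the 3-centimeter travel figure comes from the article):

```python
PRESS_DEPTH_M = 0.03   # a letter must travel ~3 cm toward the backplate

def update_key(depth_m: float, was_pressed: bool):
    """One frame of fingertip tracking for a single virtual key.

    Returns (is_pressed, fire): fire is True only on the frame the key
    crosses the threshold, so the click and spoken letter play just once.
    """
    is_pressed = depth_m >= PRESS_DEPTH_M
    fire = is_pressed and not was_pressed
    return is_pressed, fire

pressed = False
for depth in (0.00, 0.01, 0.031, 0.035, 0.01):   # fingertip travel per frame
    pressed, fire = update_key(depth, pressed)
    if fire:
        print("click: letter activated")          # fires exactly once
```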
Our users' needs and preferences have helped us set priorities for our research program. One person noted that an AR communication system seemed "cool," but worried that the motor skills required to interact in AR might not be possible without practice. So from the very first app we developed, we built in activities to let users practice the motor skills they needed to succeed.
Participants also told us they wanted to be able to customize the holograms - not just to suit their aesthetic preferences but also to better fit their unique sensory, motor, and attentional profiles. As a result, users of the HoloBoard can choose its color scheme and the size of the virtual letterboard, and whether the letters are said aloud as they're pressed. We've also provided several ways to activate letters: by pressing them, looking at them, or looking at them while using a physical clicker.
We had initially assumed that users would be interested in predictive text capabilities for the HoloBoard - having it autofill likely words based on the first letters typed. However, several people explained that although such a system could theoretically speed up communication, they would find it distracting. We've put this idea on the back burner for now; it may eventually become an option that users can toggle on if they wish.
To make things easier for users, we've investigated whether the HoloBoard could be positioned automatically in space, dynamically adjusting to the user's motor skills and movement patterns throughout a session. To this end, we used a behavioral cloning approach: During real-world interactions between nonspeakers and their CRPs, we observed the position of the user's fingers, palms, head, and physical letterboard. We then used that data to train a machine learning model to automatically adapt the placement of a virtual letterboard for a specific user.
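In outline, that pipeline might look like the sketch below (simplified under our own assumptions: synthetic stand-in data and a plain linear model in place of whatever the team actually recorded and trained):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Stand-in for data recorded during sessions with a human CRP: each row is
# a tracked pose (head x/y/z, palm x/y/z, fingertip x/y/z), and the target
# is where the assistant actually held the letterboard at that moment.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 9))                               # poses
y = X[:, :3] * 0.2 + rng.normal(scale=0.01, size=(500, 3))  # board positions

model = LinearRegression().fit(X, y)   # behavioral cloning, simplest form

def place_holoboard(head, palm, fingertip):
    """Predict a comfortable board position for the user's current pose."""
    pose = np.concatenate([head, palm, fingertip]).reshape(1, -1)
    return model.predict(pose)[0]      # (x, y, z) for the virtual board

print(place_holoboard(np.zeros(3), np.ones(3), np.ones(3)))
```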
Many nonspeaking participants who currently communicate with human assistance see the HoloBoard as providing a way to communicate with more autonomy. Indeed, we've found that after a 10-minute training procedure, most users of the HoloBoard can, like Jeremy, use it to type short words independently. We recently began a six-month study with five participants who have regular sessions to build their typing skills on the HoloBoard.
One of the most common questions from our nonspeaking participants, as well as from parents and professionals, is whether AR could teach the skills needed to type on a standard keyboard. It seems possible, in theory. As a first step, we're creating other types of AR teaching tools, including an educational AR app that teaches typing in the context of engaging and age-appropriate lessons.
We've also begun developing a virtual CRP that can offer support and feedback as a user interacts with the virtual letterboard. This virtual assistant, named ViC, can demonstrate motor movements as a user is learning to spell with the HoloBoard, and also offers verbal prompts and encouragement during a training session. There aren't many professionals who know how to teach nonspeakers typing skills, so a virtual CRP could be a game changer for this population.
Although nonspeakers have responded enthusiastically to our AR communication tools, our conversations and studies have revealed a number of practical challenges with the current technology.
For starters, most people can't afford Microsoft's HoloLens 2, which costs US $3,500. (It's also recently been discontinued!) So we've begun testing our software on less expensive mixed-reality products such as Meta's $500 Quest 3, and preliminary results have been promising. But regardless of which device is used, most headsets are bulky and heavy. It's unlikely that someone would wear one throughout a school day, for example. One idea we're pursuing is to design a pair of AR glasses that's just for virtual typing; a device customized for a single function would weigh much less than a general-purpose headset.
We've also encountered technical challenges. For example, the HoloLens 2's field of view is only 52 degrees. This restricts the size and placement of holograms, as larger holograms or those positioned incorrectly may be partially or entirely invisible to the user. So when participants use their fingers to point at virtual letters on the HoloBoard, some letters near the edges of the board may fall outside the visible area, which is frustrating to users. To address these issues, we used a vertical layout in our educational app so that the multiple-choice buttons always remain within a user's field of view. Our systems also allow a researcher or caregiver to monitor an AR session and, if necessary, adjust the size of virtual objects so they're always in view.
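A back-of-envelope calculation shows how tight that constraint is (assuming, for simplicity, that the 52-degree figure applies horizontally and that the board floats about half a meter from the user's eyes):

```python
import math

fov_deg = 52        # HoloLens 2 field of view, treated as horizontal here
distance_m = 0.50   # assumed distance from eyes to the virtual letterboard

visible_span = 2 * distance_m * math.tan(math.radians(fov_deg / 2))
print(f"visible span at {distance_m} m: ~{visible_span:.2f} m")   # ~0.49 m
```

Anything much wider than roughly half a meter at that distance starts to spill past the edge of the display, which is exactly the effect users found frustrating.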
We have a few other ideas for dealing with the field-of-view issue, including deploying devices that have a larger field of view. Another strategy is to use eye tracking to select letters, which would eliminate the reliance on hand movements and the problem of the user's pointing fingers obscuring the letters. And some users might prefer using a joystick or other handheld controller to navigate and select letters. Together, these techniques should make the system more accessible while working within hardware constraints.
We have also been developing cross-reality apps, which allow two or more people wearing AR headsets to interact within the same virtual space. That's the setup we use to enable researchers to monitor study sessions in real time. Based on our development experience, we created an open-source tool called SimpleShare for the development of multiuser extended-reality apps in a device-agnostic way. A related issue is that many of our users make sudden movements; a sudden shake of a head can interfere with the sensors on the AR headset and upset the spatial alignment between multiple headsets. So our apps and SimpleShare instruct the headset to routinely scan the environment and use that data to automatically realign multiple devices, if necessary.
We've had to find solutions to cope with the limited computing power available on AR headsets. Running the AI model that automates the custom placement of the HoloBoard for each user can cause a lag in letterboard interactions and can cause the headset to heat up. We solved this problem by simplifying the AI model and decreasing the frequency of the model's interventions. Rendering a realistic virtual CRP via a headset is also computationally intensive. In our virtual CRP work, we're now rendering the avatar on an edge device, such as a laptop with a state-of-the-art GPU, and streaming it to the display.
As we continue to tackle these technology challenges, we're well aware that we don't have all the answers. That's why we discuss the problems that we're working on with the nonspeaking autistic people who will use the technology. Their perspectives are helping us make progress toward a truly usable and useful device.
So many assumptions are made about people who cannot speak, including that they don't have anything to say. We went into this project presuming competence in nonspeaking people, and yet we still weren't sure if our participants would be able to adapt to our technology. In our initial work, we were unsure whether nonspeakers could wear the AR device or interact with virtual buttons. They easily did both. In our evaluation of the HoloBoard prototype, we didn't know if users could type on a virtual letterboard hovering in front of them. They did so while we watched. In a recent study investigating whether nonspeakers could select letters using eye-gaze tracking, we wondered if they could complete the built-in gaze-calibration procedure. They did.
The ability to communicate - to share information, memories, opinions - is essential to well-being. Unfortunately, most autistic people who can't communicate using speech are never provided an effective alternative. Without a way to convey their thoughts, they are deprived of educational, social, community, and employment opportunities.
We aren't so naive as to think that AR is a silver bullet. But we're hopeful that there will be more community collaborations like ours, which take seriously the lived experiences of nonspeaking autistic people and lead to new technologies to support them. Their voices may be stuck inside, but they deserve to be heard.
Regeneron has agreed to buy 23andMe, the once buzzy genetic testing company, out of bankruptcy for $256 million under a court-supervised sale process:
23andMe declared bankruptcy in March and announced it would seek a buyer, while also saying that co-founder and CEO Anne Wojcicki would resign.
Under the proposed agreement with Regeneron, the Tarrytown, New York, drugmaker will acquire 23andMe's assets, including its personal genome service and total health and research services. Regeneron said Monday that it will abide by 23andMe's privacy policies and applicable law to protect customer data.
Data privacy experts had raised concerns about 23andMe's storehouse of data for about 15 million customers, including their DNA.
23andMe's consumer-genome services will continue uninterrupted, the purchaser said. Regeneron will not acquire 23andMe's Lemonaid Health telehealth business.
Also at ZeroHedge.
Previously: 23andMe Reportedly Faces Bankruptcy — What Will Happen to Everyone's DNA Samples?
The Verge, Space News, and The South China Morning Post are reporting that China has begun assembling a 744 TOPS supercomputer in Earth's orbit. The advantages of an orbital supercomputer include better access to solar energy, easier radiation of waste heat, and, above all, shorter communication times with other satellites.
The satellites communicate with each other at up to 100 Gbps using lasers, and share 30 terabytes of storage between them, according to Space News. The 12 launched last week carry scientific payloads, including an X-ray polarization detector for picking up brief cosmic phenomena such as gamma-ray bursts. The satellites also have the capability to create 3D digital twin data that can be used for purposes like emergency response, gaming, and tourism, ADA Space says in its announcement.
— China begins assembling its supercomputer in space, The Verge.
They are part of the Three-Body Computing Constellation, space-based infrastructure being developed by Zhejiang Lab. Once complete, the constellation would support real-time, in-orbit data processing with a total computing capacity of 1,000 peta operations per second (POPS) – or one quintillion operations per second – the report said.
— China launches satellites to start building the world's first supercomputer in orbit, The South China Morning Post.
The satellites feature advanced AI capabilities, up to 100 Gbps laser inter-satellite links and remote sensing payloads—data from which will be processed onboard, reducing data transmission requirements. One satellite also carries a cosmic X-ray polarimeter developed by Guangxi University and the National Astronomical Observatories of the Chinese Academy of Sciences (NAOC), which will detect, identify and classify transient events such as gamma-ray bursts, while also triggering messages to enable followup observations by other missions.
Maintenance will be difficult.
Previously:
(2025) PA's Largest Coal Plant to Become 4.5GW Gas-Fired AI Hub
(2025) FTC Removes Posts Critical of Amazon, Microsoft, and AI Companies
(2025) Real Datacenter Emissions Are A Dirty Secret
(2022) Amazon and Microsoft Want to Go Big on Data Centres, but the Power Grid Can't Support Them
EU users have less than two weeks to opt out of Meta's AI training:
Privacy watchdog Noyb sent a cease-and-desist letter to Meta Wednesday, threatening to pursue a potentially billion-dollar class action to block Meta's AI training, which starts soon in the European Union.
In the letter, Noyb noted that Meta only recently notified EU users on its platforms that they had until May 27 to opt their public posts out of Meta's AI training data sets. According to Noyb, Meta is also requiring users who already opted out of AI training in 2024 to opt out again or forever lose their opportunity to keep their data out of Meta's models, as training data likely cannot be easily deleted. That's a seeming violation of the General Data Protection Regulation (GDPR), Noyb alleged.
"Meta informed data subjects that, despite that fact that an objection to AI training under Article 21(2) GDPR was accepted in 2024, their personal data will be processed unless they object again—against its former promises, which further undermines any legitimate trust in Meta's organizational ability to properly execute the necessary steps when data subjects exercise their rights," Noyb's letter said.
[...] The letter accused Meta of further deceptions, like planning to seize data that users may not consider "public," like disappearing stories typically only viewed by small audiences. That, Noyb said, differs significantly from AI crawlers scraping information posted on a public website.
According to Noyb, there would be no issue with Meta's AI training in the EU if Meta would use a consent-based model rather than requiring rushed opt-outs. As Meta explained in a blog following a threatened preliminary injunction on AI training in Germany, the company plans to collect AI training data using a "legitimate interest" legal basis, which supposedly "follows the clear guidelines of the European Data Protection Committee of December 2024, which reflect the consensus between EU data protection authorities."
But Noyb Chairman Max Schrems doesn't believe that Meta has a legitimate interest in sweeping data collection for AI training.
"The European Court of Justice has already held that Meta cannot claim a 'legitimate interest' in targeting users with advertising," Schrems said in a press release. "How should it have a 'legitimate interest' to suck up all data for AI training? While the 'legitimate interest' assessment is always a multi-factor test, all factors seem to point in the wrong direction for Meta. Meta simply says that its interest in making money is more important than the rights of its users."
In a statement, Meta's spokesperson defended the opt-out approach, noting that "we've provided EU users with a clear way to object to their data being used for training AI at Meta, notifying them via email and in-app notifications that they can object at any time."
The spokesperson criticized "Noyb's copycat actions" as "part of an attempt by a vocal minority of activist groups to delay AI innovation in the EU, which is ultimately harming consumers and businesses who could benefit from these cutting-edge technologies."
[...] Meta argued that its AI training efforts in the EU are far more transparent than efforts from competitors Google and OpenAI, which, Meta noted, "have already used data from European users to train their AI models," supposedly without taking the steps Meta has to inform users.
Also echoing a common refrain in the AI industry, another Meta blog warned that efforts to further delay Meta's AI training in the EU could lead to "major setbacks," pushing the EU behind rivals in the AI race.
"Without a reform and simplification of the European regulatory system, Europe threatens to fall further and further behind in the global AI race and lose ground compared to the USA and China," Meta warned.
[...] "This fight is essentially about whether to ask people for consent or simply take their data without it," Schrems said, adding, "Meta's absurd claims that stealing everyone's personal data is necessary for AI training is laughable. Other AI providers do not use social network data—and generate even better models than Meta."
https://www.theregister.com/2025/05/13/nextcloud_play_store_complaint/
https://nextcloud.com/blog/nextcloud-android-file-upload-issue-google/
Exclusive: European software vendor Nextcloud has accused Google of deliberately crippling its Android Files application, which it says has more than 800,000 users.
The problem lies with the "All files access" permission, where broad access to files on a device is required. While most applications can make do with Google's more privacy-friendly storage access tools, such as Storage Access Framework (SAF) or the MediaStore API, others require more permissions – hence the "All files access" privilege.
Nextcloud's Android Files app is a file synchronization tool that, according to the company, has long had permission to read and write all file types. "Nextcloud has had this feature since its inception in 2016," it said, "and we never heard about any security concerns from Google about it."
That changed in 2024, when someone or something at Google's Play Store decided to revoke the permission, effectively crippling the application. Nextcloud was instructed to use "a more privacy-aware replacement."
According to Nextcloud, "SAF cannot be used, as it is for sharing/exposing our files to other apps ... MediaStore API cannot be used as it does not allow access to other files, but only media files."
Scientists have recalculated when the universe will end, and it's a lot sooner than previously expected.

"Sooner" still means a mind-bending 10 to the power of 78 years from now. That is a 1 followed by 78 zeros, which is unimaginably far into the future. However, in cosmic terms, this estimate is a dramatic revision of the previous prediction of 10 to the power of 1,100 years, made by Falcke and his team in 2023.

Let's hope the next predictions don't follow the same trend. The logarithmic end of time.
M&S forces customer password resets after data breach:
Marks and Spencer (M&S) has confirmed that customer data was stolen during the Easter DragonForce ransomware attack on its server infrastructure and will be prompting all online customers to reset their account passwords as a precautionary move.
The attack unfolded three weeks ago and is thought to have been the work of a white-label affiliate of DragonForce – possibly the notorious Scattered Spider operation, which uses social engineering tactics to conduct its intrusions.
The stolen tranche of data is understood to include contact details such as email addresses, postal addresses, and phone numbers; personal information including names and dates of birth; and data on customer interactions with the chain, including online order histories, household information, and 'masked' payment card details.
M&S added that customer reference numbers, but not payment information, belonging to holders of M&S credit cards or Sparks Pay cards – including former cardholders – may also have been taken.
"We have written to customers today to let them know that unfortunately, some personal customer information has been taken," said M&S chief exec Stuart Machin.
"Importantly there is no evidence that the information has been shared and it does not include useable card or payment details, or account passwords, so there is no need for customers to take any action."
[...] NordVPN chief technology officer Marijus Briedis described M&S' assertion that the attackers have not yet leaked or shared the stolen data as "overly optimistic" under the circumstances, and warned that even if passwords or credit card details were not exposed, the data that was taken was still very useful to cyber criminals.
"This type of data can be used in phishing campaigns or combined with other leaked information to commit identity theft," explained Briedis.
"Consumers often underestimate how damaging 'harmless' data like order history or email addresses can be in the wrong hands. These M&S hackers could use this data to build highly personalised phishing emails, designed to look identical to what the retailer would send, and these are much harder to spot.
"This breach highlights how companies must not only secure financial data, but also treat seemingly less sensitive information – like customer profiles and purchase records – as critical assets that require protection."
Max Vetter, vice president of cyber at Immersive and a former money laundering investigator with London's Metropolitan Police, also had harsh words for M&S.
"M&S saying that customers could change their passwords "for extra peace of mind" does little to reassure those worried about who has access to their personal information," he said. "As the fallout from this attack continues, customers want clear assurances about their personal data and what M&S is doing to keep it safe from being published online.
"M&S want to appear in control and telling people to be more vigilant, however, telling customers there's no need to act risks does potentially the wrong message. We recommend all customers reset their password.
Vetter reaffirmed that the stolen data would be prime material for downstream social engineering and phishing attacks, especially if it is indeed in the hands of Scattered Spider who, he said, "often play a long game".
See also: RansomHub Went Dark April 1; Affiliates Fled to Qilin, DragonForce Claimed Control
Co-op cyber attack affects customer data, firm admits, after hackers contact BBC:
Cyber criminals have told BBC News their hack against Co-op is far more serious than the company previously admitted.
Hackers contacted the BBC with proof they had infiltrated IT networks and stolen huge amounts of customer and employee data.
After being approached on Friday, a Co-op spokesperson said the hackers "accessed data relating to a significant number of our current and past members".
Co-op had previously said that it had taken "proactive measures" to fend off hackers and that it was only having a "small impact" on its operations.
It also assured the public that there was "no evidence that customer data was compromised".
The cyber criminals claim to have the private information of 20 million people who signed up to Co-op's membership scheme, but the firm would not confirm that number.
The criminals, who are using the name DragonForce, say they are also responsible for the ongoing attack on M&S and an attempted hack of Harrods.
The attacks have led government minister Pat McFadden to warn companies to "treat cyber security as an absolute priority".
[...] Co-op has more than 2,500 supermarkets as well as 800 funeral homes and an insurance business.
[...] On Thursday, it was revealed Co-op staff were being urged to keep their cameras on during Teams meetings, ordered not to record or transcribe calls, and to verify that all participants were genuine Co-op staff.
These security measures now appear to be a direct result of the hackers having access to internal Teams chats and calls.
[...] Since the BBC contacted Co-op about the hackers' evidence, the firm has disclosed the full extent of the breach to its staff and the stock market.
"This data includes Co-op Group members' personal data such as names and contact details, and did not include members' passwords, bank or credit card details, transactions or information relating to any members' or customers' products or services with the Co-op Group," a spokesperson said.
DragonForce want the BBC to report the hack - they are apparently trying to extort money from the company.
But the criminals wouldn't say what they plan to do with the data if they don't get paid.
They refused to talk about M&S or Harrods and when asked about how they feel about causing so much distress and damage to business and customers, they refused to answer.
[...] It's not known who is ultimately using the DragonForce service to attack the retailers, but some security experts say the tactics seen are similar to those of a loosely coordinated group of hackers who have been called Scattered Spider or Octo Tempest.
The gang operates on Telegram and Discord channels and is English-speaking and young – in some cases only teenagers.
Harrods is latest British retailer to be hit by cyber attack:
London department store Harrods said on Thursday hackers had attempted to break into its systems, the third high-profile cyber attack on a UK retailer in two weeks, following incidents at Marks & Spencer and the Co-op Group.
British companies, public bodies and institutions have been hit by a wave of cyber attacks in recent years, costing them tens of millions of pounds and often months of disruption.
"We recently experienced attempts to gain unauthorised access to some of our systems," a statement from Harrods, owned by the Qatar Investment Authority, said.
"Our seasoned IT security team immediately took proactive steps to keep systems safe and as a result we have restricted internet access at our sites today."
It said all its sites, including its flagship Knightsbridge store in London, H beauty stores and airport stores remained open and customers could also continue to shop online.
The Harrods and Co-op incidents appear to have had less of an impact than the attack on M&S, one of Britain's best known retailers, which has paused taking clothing and home orders through its website and app for the last seven days.
[...] Technology specialist site BleepingComputer, citing multiple sources, said a ransomware attack that encrypted M&S's servers was believed to have been conducted by a hacking collective known as "Scattered Spider".
Arthur T Knackerbracket has processed the following story:
Police and federal agencies have found a controversial new way to skirt the growing patchwork of laws that curb how they use facial recognition: an AI model that can track people using attributes like body size, gender, hair color and style, clothing, and accessories.
The tool, called Track and built by the video analytics company Veritone, is used by 400 customers, including state and local police departments and universities all over the US. It is also expanding federally: US attorneys at the Department of Justice began using Track for criminal investigations last August. Veritone’s broader suite of AI tools, which includes bona fide facial recognition, is also used by the Department of Homeland Security—which houses immigration agencies—and the Department of Defense, according to the company.
“The whole vision behind Track in the first place,” says Veritone CEO Ryan Steelberg, was “if we’re not allowed to track people’s faces, how do we assist in trying to potentially identify criminals or malicious behavior or activity?” In addition to tracking individuals where facial recognition isn’t legally allowed, Steelberg says, it allows for tracking when faces are obscured or not visible.
The product has drawn criticism from the American Civil Liberties Union, which—after learning of the tool through MIT Technology Review—said it was the first instance they'd seen of a nonbiometric tracking system used at scale in the US. They warned that it raises many of the same privacy concerns as facial recognition but also introduces new ones at a time when the Trump administration is pushing federal agencies to ramp up monitoring of protesters, immigrants, and students.
Veritone gave us a demonstration of Track in which it analyzed people in footage from different environments, ranging from the January 6 riots to subway stations. You can use it to find people by specifying body size, gender, hair color and style, shoes, clothing, and various accessories. The tool can then assemble timelines, tracking a person across different locations and video feeds. It can be accessed through Amazon and Microsoft cloud platforms.
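To make the mechanism concrete, here is a minimal toy sketch in Python of how attribute-based matching might assemble such a timeline. It is purely illustrative and assumes nothing about Veritone's actual implementation, which presumably matches learned visual embeddings rather than exact attribute tags; the Detection class, build_timeline function, and feed names below are all hypothetical.

    from dataclasses import dataclass, field

    # Hypothetical toy model: each detection is one sighting of a person
    # in one video feed, tagged with appearance attributes.
    @dataclass
    class Detection:
        feed_id: str                  # which camera or clip
        timestamp: float              # seconds since some epoch
        attributes: dict = field(default_factory=dict)

    def build_timeline(detections, query):
        """Keep detections matching every queried attribute, ordered in time."""
        matches = [
            d for d in detections
            if all(d.attributes.get(k) == v for k, v in query.items())
        ]
        return sorted(matches, key=lambda d: d.timestamp)

    # Example: follow a person with a red backpack and a green coat.
    feeds = [
        Detection("subway_cam_3", 1000.0, {"backpack": "red", "coat": "green"}),
        Detection("street_cam_7", 1900.0, {"backpack": "red", "coat": "green"}),
        Detection("street_cam_7", 1500.0, {"backpack": "blue", "coat": "green"}),
    ]
    for d in build_timeline(feeds, {"backpack": "red", "coat": "green"}):
        print(d.feed_id, d.timestamp)

A production system would score fuzzy similarity between appearance features rather than demand exact matches, but the timeline-assembly step, filtering sightings by a profile and sorting them in time, is the same idea, and it shows why a stable combination of clothing and accessories can function much like a biometric identifier.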
In an interview, Steelberg said that the number of attributes Track uses to identify people will continue to grow. When asked if Track differentiates on the basis of skin tone, a company spokesperson said it’s one of the attributes the algorithm uses to tell people apart but that the software does not currently allow users to search for people by skin color. Track currently operates only on recorded video, but Steelberg claims the company is less than a year from being able to run it on live video feeds.
Agencies using Track can add footage from police body cameras, drones, public videos on YouTube, or so-called citizen upload footage (from Ring cameras or cell phones, for example) in response to police requests.
“We like to call this our Jason Bourne app,” Steelberg says. He expects the technology to come under scrutiny in court cases but says, “I hope we’re exonerating people as much as we’re helping police find the bad guys.” The public sector currently accounts for only 6% of Veritone’s business (most of its clients are media and entertainment companies), but the company says that’s its fastest-growing market, with clients in places including California, Washington, Colorado, New Jersey, and Illinois.
[...] Track’s expansion comes as laws limiting the use of facial recognition have spread, sparked by wrongful arrests in which officers have been overly confident in the judgments of algorithms. Numerous studies have shown that such algorithms are less accurate with nonwhite faces. Laws in Montana and Maine sharply limit when police can use it—it’s not allowed in real time with live video—while San Francisco and Oakland, California, have near-complete bans on facial recognition. Track provides an alternative.
Though such laws often reference “biometric data,” the ACLU's Wessler says this phrase is far from clearly defined. It generally refers to immutable characteristics like faces, gait and fingerprints rather than things that change, like clothing. But certain attributes, such as body size, blur this distinction.
Consider also, Wessler says, someone in winter who frequently wears the same boots, coat, and backpack. “Their profile is going to be the same day after day,” Wessler says. “The potential to track somebody over time based on how they’re moving across a whole bunch of different saved video feeds is pretty equivalent to face recognition.”
In other words, Track might provide a way of following someone that raises many of the same concerns as facial recognition, but isn’t subject to laws restricting use of facial recognition because it does not technically involve biometric data. Steelberg said there are several ongoing cases that include video evidence from Track, but that he couldn’t name the cases or comment further. So for now, it’s unclear whether it’s being adopted in jurisdictions where facial recognition is banned.