AI arms dealer relies on Taiwanese advanced packaging plants for top-specced GPUs:
US manufacturing of Nvidia GPUs is underway and CEO Jensen Huang is celebrating the first Blackwell wafer to come out of TSMC's Arizona chip factory. However, to be part of a complete product, those chips may need to visit Taiwan.
Nvidia first announced plans to produce chips at Fab21 just six months ago.
Speaking during an event in Phoenix on Friday, Huang lauded TSMC's manufacturing prowess while pandering to US President Donald Trump's America First agenda.
"This is the vision of President Trump of reindustrialization — to bring back manufacturing to America, to create jobs, of course, but also this is the single most vital manufacturing industry and the most important technology industry in the world," he said.
But while the silicon may be homegrown, Nvidia remains reliant on Taiwanese packaging plants to turn those wafers into its most powerful and highest-demand GPUs.
Modern GPUs are composed of multiple compute and memory dies. The company's Blackwell family of datacenter chips features two reticle-sized compute dies along with eight stacks of HBM3e memory, all stitched together using TSMC's CoWoS packaging tech.
Up to this point, all of TSMC's packaging facilities have been located in Taiwan. Amkor, an outsourced semiconductor assembly and test services (OSAT) provider, is working on building an advanced packaging plant in the US capable of stitching together silicon dies using TSMC's chip-on-wafer-on-substrate (CoWoS) tech. But until it's done – expected in 2027 or 2028 – the next stop for Nvidia's wafers will likely be Taiwan.
During TSMC's Q3 earnings call last week, CEO C.C. Wei confirmed the Amkor plan was moving forward, but the site was only now breaking ground.
It's worth noting that, while Nvidia's most potent accelerators rely on CoWoS, not all of its Blackwell chips do. The RTX Pro 6000, a 96GB workstation and server card aimed at AI inference, data visualization, and digital twins, features a single GPU die fed by GDDR7 memory rather than HBM3e. This means Nvidia doesn't need CoWoS to produce the chip. The same is true for much of Nvidia's RTX family of gaming cards.
Long-term, Nvidia isn't limited to TSMC or Amkor for packaging either. Nvidia has already announced plans to produce GPU tiles built by TSMC for Intel client processors that will presumably make use of the x86 giant's EMIB and/or Foveros advanced packaging technologies.
Nvidia hasn't said which are the first Blackwell wafers to roll off Fab21's production line. El Reg has reached out for clarification; we'll let you know what we hear back.
Plus spy helping spy: Typhoons teaming up:
Security researchers now say more Chinese crews - likely including Salt Typhoon - than previously believed exploited a critical Microsoft SharePoint vulnerability, and used the flaw to target government agencies, telecommunications providers, a university, and a finance company across multiple continents.
Threat intel analysts at Broadcom-owned Symantec and Carbon Black uncovered additional victims and malware tools the intruders used, and published those and other details about the attacks in a Wednesday report.
In July, Microsoft patched the so-called ToolShell vulnerability (CVE-2025-53770), a critical remote code execution bug in on-premises SharePoint servers. But before Redmond fixed the flaw, Chinese attackers found and exploited it as a zero-day, compromising more than 400 organizations, including the US Energy Department.
Trend Micro's research team says they've uncovered additional evidence of China-aligned groups, specifically Salt Typhoon and its Beijing botnet-building brethren Flax Typhoon, collaborating in "what looks like a single cyber campaign at first sight."
In these attacks, Salt Typhoon (aka Earth Estries, FamousSparrow) performs the initial break-in, then hands the compromised org over to Flax Typhoon (aka Earth Naga).
"This phenomenon, which we have termed 'Premier Pass,' represents a new level of coordination in cyber campaigns, particularly among China-aligned APT actors," the Trend researchers said.
At the time, Microsoft attributed the break-ins to three China-based groups. These included two government-backed groups: Linen Typhoon (aka Emissary Panda, APT27), which typically steals intellectual property, and Violet Typhoon (aka Zirconium, Judgment Panda, APT31), which focuses on espionage and targets former government and military personnel and other high-value individuals.
Microsoft also accused a suspected China-based criminal org, Storm-2603, of exploiting the bug to infect victims with Warlock ransomware.
It now appears other Beijing crews – including Salt Typhoon, which famously hacked America's major telecommunications firms and stole information belonging to nearly every American – also joined in the attacks.
In a study published last month (https://www.nature.com/articles/s41562-025-02297-0), researchers analyzed internal sentence representations in both humans and LLMs. It turns out that humans and LLMs use similar tree structures. From their conclusions: "The results also add to the literature showing that the human brain and LLM, albeit fundamentally different in terms of the implementation, can have aligned internal representations of language."
Originally seen on TechXplore (https://techxplore.com/news/2025-10-humans-llms-sentences-similarly.html):
A growing number of behavioral science and psychology studies have thus started comparing the performance of humans to those of LLMs on specific tasks, in the hope of shedding new light on the cognitive processes involved in the encoding and decoding of language. As humans and LLMs are inherently different, however, designing tasks that realistically probe how both represent language can be challenging.
Researchers at Zhejiang University have recently designed a new task for studying sentence representation and tested both LLMs and humans on it. Their results, published in Nature Human Behaviour, show that when asked to shorten a sentence, humans and LLMs tend to delete the same words, hinting at commonalities in their representation of sentences.
"Understanding how sentences are represented in the human brain, as well as in large language models (LLMs), poses a substantial challenge for cognitive science," wrote Wei Liu, Ming Xiang, and Nai Ding in their paper. "We develop a one-shot learning task to investigate whether humans and LLMs encode tree-structured constituents within sentences."
[...] Interestingly, the researchers' findings suggest that the internal sentence representations of LLMs are aligned with linguistic theory. In the task they designed, both humans and ChatGPT tended to delete full constituents (i.e., coherent grammatical units) as opposed to random word sequences. Moreover, the word strings they deleted appeared to vary based on the language they were completing the task in (i.e., Chinese or English), following language-specific rules.
"The results cannot be explained by models that rely only on word properties and word positions," wrote the authors. "Crucially, based on word strings deleted by either humans or LLMs, the underlying constituency tree structure can be successfully reconstructed."
Overall, the team's results suggest that when processing language, both humans and LLMs are guided by latent syntactic representations, specifically tree-structured sentence representations. Future studies could build on this recent work to further investigate the language representation patterns of LLMs and humans, either using adapted versions of the team's word deletion task or entirely new paradigms.
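To make the word-deletion task concrete, here is a toy sketch. The sentence and tree are invented for illustration, and the paper's stimuli and scoring are far more controlled; the point is only that a constituency tree defines which word spans a "constituent-respecting" shortener may delete whole:

```python
# Toy constituency tree: ("label", child, child, ...); leaves are plain strings.
# Enumerating its constituents gives the spans a grammar-aware deletion
# strategy could remove intact, as opposed to arbitrary word runs.

sentence_tree = ("S",
    ("NP", "the", "cat"),
    ("VP", "chased",
        ("NP", "the", "small", "mouse")))

def leaves(tree):
    """Yield the words of the tree in left-to-right order."""
    for child in tree[1:]:
        if isinstance(child, str):
            yield child
        else:
            yield from leaves(child)

def constituent_spans(tree, start=0):
    """Yield (start, end, label) for every constituent via a left-to-right walk."""
    pos = start
    for child in tree[1:]:
        if isinstance(child, str):
            pos += 1
        else:
            yield from constituent_spans(child, pos)
            pos += sum(1 for _ in leaves(child))
    yield start, pos, tree[0]

words = list(leaves(sentence_tree))
for s, e, label in constituent_spans(sentence_tree):
    print(label, words[s:e])  # each line is a deletable grammatical unit
```

Deleting the span covered by the inner NP, for example, yields the still-grammatical "the cat chased", whereas deleting a random run like "cat chased the" does not.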
Journal Reference: Liu, W., Xiang, M. & Ding, N. Active use of latent tree-structured sentence representation in humans and large language models. Nat Hum Behav (2025). https://doi.org/10.1038/s41562-025-02297-0
Trump Eyes Government Control of Quantum Computing Firms With Intel-Like Deals
Donald Trump is eyeing taking equity stakes in quantum computing firms in exchange for federal funding, The Wall Street Journal reported.
At least five companies are weighing whether allowing the government to become a shareholder would be worth it to snag funding that the Trump administration has "earmarked for promising technology companies," sources familiar with the potential deals told the WSJ.
IonQ, Rigetti Computing, and D-Wave Quantum are currently in talks with the government over potential funding agreements, with minimum awards of $10 million each, some sources said. Quantum Computing Inc. and Atom Computing are reportedly "considering similar arrangements," as are other companies in the sector, which is viewed as critical for scientific advancements and next-generation technologies.
No deals have been completed yet, sources said, and terms could change as quantum-computing firms weigh the potential risks of government influence over their operations.
ESA astronauts take to helicopters for Moon landing training:
European Space Agency (ESA) astronauts have completed a helicopter training course to prepare them for upcoming lunar landings.
The astronauts in question include Alexander Gerst, Matthias Maurer, Samantha Cristoforetti, and Thomas Pesquet.
The course consisted of one week of simulator instruction followed by two weeks of practical flying in Airbus EC135 helicopters. ESA said: "Helicopter training offers a realistic analogue for the dynamics of planetary landings, requiring capabilities such as vertical take-off and landing, terrain-based decision-making, and high levels of coordination and situational awareness."
The Apollo astronauts also honed their Moon landing skills using helicopters, although with occasional catastrophic consequences. On January 23, 1971, the Bell 47G helicopter flown by Apollo 14 backup commander Gene Cernan crashed into the Indian River lagoon near Malabar, Florida. An accident investigation board, headed by Apollo 13 commander Jim Lovell, pinned much of the blame on Cernan. He'd found the altitude difficult to judge when skimming the surface of the water and accidentally ditched the helicopter. The incident didn't stop Cernan from being the last person on the Moon on the Apollo 17 mission.
A better real-world simulator was the Lunar Landing Training Vehicle (LLTV), which featured a vertically mounted turbofan engine capable of lifting the machine – nicknamed "the flying bedstead" – to simulate the reduced lunar gravity. Astronauts spoke highly of it. Apollo 11 commander Neil Armstrong called it a "most valuable training experience." He was almost killed by its predecessor, the Lunar Landing Research Vehicle (LLRV), in 1968.
Cernan said: "Although there is nothing quite like the real thing, flying the LLTV had been a step toward realism from 'flying' the stationary simulators.
"In the LLTV you had your butt strapped to a machine that you had to land safely or you didn't make it."
ESA has yet to strap its astronauts into something as potentially hazardous as the LLTV. However, the helicopter training raises an interesting question: what does ESA expect its astronauts to use for a lunar landing?
Landing the towering Starship manually would be a challenge, while the other Human Landing System (HLS) contender from Blue Origin won't be ready until Artemis V.
Andreas Mogensen, ESA's Human Exploration Group Leader, told The Register:
The helicopter training is an introductory course that will give ESA astronauts the skills and knowledge to participate in advanced helicopter courses, like NASA's HAATS, which is a requirement for participating in Artemis lunar landing missions.
The purpose of HAATS and similar courses is to train astronauts in vertical landing profiles and to recognize the visual and optical illusions that can arise from a visual environment characterized by mono-colours and stark shadows. Helicopter pilots are well aware of these illusions, especially when flying in snow and mountain environments. The goal is thus to equip astronauts with the skills to visually monitor the descent and judge obstacles and risks, regardless of the actual vehicle used to land on the Moon.
An approach Google calls "quantum echoes" takes 13,000 times longer on a supercomputer
[...] Today, Google and a large collection of academic collaborators are publishing a paper describing a computational approach that demonstrates a quantum advantage compared to current algorithms—and may actually help us achieve something useful.
Google's latest effort centers on something it's calling "quantum echoes." The approach could be described as a series of operations on the hardware qubits that make up its machine. These qubits hold a single bit of quantum information in a superposition between two values, with probabilities of finding the qubit in one value or the other when it's measured. Each qubit is entangled with its neighbors, allowing its probability to influence those of all the qubits around it. The operations that allow computation, called gates, are ways of manipulating these probabilities. Most current hardware, including Google's, performs manipulations on one or two qubits at a time (termed one- and two-qubit gates, respectively).
[...] So how do you turn quantum echoes into an algorithm? On its own, a single "echo" can't tell you much about the system—the probabilities ensure that any two runs might show different behaviors. But if you repeat the operations multiple times, you can begin to understand the details of this quantum interference. And performing the operations on a quantum computer ensures that it's easy to simply rerun the operations with different random one-qubit gates and get many instances of the initial and final states—and thus a sense of the probability distributions involved.
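The forward-perturb-reverse idea behind an echo can be illustrated in a few lines of linear algebra. The sketch below is a toy two-qubit version, not Google's protocol: it runs a random circuit forward, flips one qubit, runs the circuit in reverse, and measures how much of the starting state survives:

```python
import numpy as np

# Toy "echo" on 2 qubits: forward circuit, local kick, reversed circuit.
# Purely illustrative of the echo concept; the real experiment uses many
# qubits, random single-qubit gates, and repeated sampling.

rng = np.random.default_rng(0)

def random_unitary(dim: int) -> np.ndarray:
    """Random unitary via QR decomposition with a phase fix."""
    z = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))  # scale columns to make the distribution uniform

X = np.array([[0, 1], [1, 0]])        # single-qubit bit flip
perturb = np.kron(X, np.eye(2))       # flip qubit 0, leave qubit 1 alone

U = random_unitary(4)                 # stands in for a random 2-qubit circuit
psi0 = np.zeros(4, dtype=complex)
psi0[0] = 1.0                         # start in |00>

echoed = U.conj().T @ (perturb @ (U @ psi0))   # forward, kick, reverse
overlap = abs(np.vdot(psi0, echoed)) ** 2       # how "echoed back" the state is
print("echo overlap:", overlap)
```

Rerunning this with many different random circuits, as the text describes, is what builds up a picture of the probability distributions involved.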
This is also where Google's quantum advantage comes from. Everyone involved agrees that the precise behavior of a quantum echo of moderate complexity can be modeled using any leading supercomputer. But doing so is very time-consuming, so repeating those simulations a few times becomes unrealistic. The paper estimates that a measurement that took its quantum computer 2.1 hours to perform would take the Frontier supercomputer approximately 3.2 years. Unless someone devises a far better classical algorithm than what we have today, this represents a pretty solid quantum advantage.
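As a sanity check, the headline "13,000 times longer" figure follows directly from those two numbers:

```python
# 2.1 hours on the quantum machine versus an estimated 3.2 years on Frontier.
hours_per_year = 365.25 * 24
slowdown = (3.2 * hours_per_year) / 2.1
print(round(slowdown))  # on the order of the ~13,000x quoted in the headline
```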
But is it a useful algorithm? The repeated sampling can act a bit like the Monte Carlo sampling done to explore the behavior of a wide variety of physical systems. Typically, however, we don't view algorithms as modeling the behavior of the underlying hardware they're being run on; instead, they're meant to model some other physical system we're interested in. That's where Google's announcement stands apart from its earlier work—the company believes it has identified an interesting real-world physical system with behaviors that the quantum echoes can help us understand.
[...] For now, the team stuck to demonstrations on very simple molecules, making this work mostly a proof of concept. But the researchers are optimistic that there are many ways the system could be used to extract structural information from molecules at distances that are currently unobtainable using NMR. In the paper's discussion, they list a lot of potential upsides that should be explored, and there are plenty of smart people who would love to find new ways of using their NMR machines, so the field is likely to figure out pretty quickly which of these approaches turns out to be practically useful.
The fact that the demonstrations were done with small molecules, however, means that the modeling run on the quantum computer could also have been done on classical hardware (it only required 15 hardware qubits). So Google is claiming both quantum advantage and quantum utility, but not at the same time. The sorts of complex, long-distance interactions that would be out of range of classical simulation are still a bit beyond the reach of the current quantum hardware. O'Brien estimated that the hardware's fidelity would have to improve by a factor of three or four to model molecules that are beyond classical simulation.
The quantum advantage issue should also be seen as a work in progress. Google has collaborated with enough researchers at enough institutions that there's unlikely to be a major improvement in algorithms that could allow classical computers to catch up. Until the community as a whole has some time to digest the announcement, though, we shouldn't take that as a given.
The other issue is verifiability. Some quantum algorithms will produce results that can be easily verified on classical hardware—situations where it's hard to calculate the right result but easy to confirm a correct answer. Quantum echoes isn't one of those, so we'll need another quantum computer to verify the behavior Google has described.
Journal: "Observation of constructive interference at the edge of quantum ergodicity", Nature, 2025. DOI: 10.1038/s41586-025-09526-6
No tech, no snark, no politics, just good pictures
Your day is about to get a lot better! After so much anticipation, the Nikon Comedy Wildlife Awards entry finalists have finally been revealed, and they are great. They're hilarious. Witty. Dynamic. And they're inspiring us to pick up the camera, too.
Today, we're featuring the finalist photos in all their glory, so scroll down to add a bit of humor and sunshine to your life. If anyone you know needs their spirits picked up, be sure to send them this way.
New AI-powered web browsers such as OpenAI's ChatGPT Atlas and Perplexity's Comet are trying to unseat Google Chrome as the front door to the internet for billions of users. A key selling point of these products is their web browsing AI agents, which promise to complete tasks on a user's behalf by clicking around on websites and filling out forms.
But consumers may not be aware of the major risks to user privacy that come along with agentic browsing, a problem that the entire tech industry is trying to grapple with.
Cybersecurity experts who spoke to TechCrunch say AI browser agents pose a larger risk to user privacy compared to traditional browsers. They say consumers should consider how much access they give web browsing AI agents, and whether the purported benefits outweigh the risks.
[...] There are a few practical ways users can protect themselves while using AI browsers. Rachel Tobac, CEO of the security awareness training firm SocialProof Security, tells TechCrunch that user credentials for AI browsers are likely to become a new target for attackers. She says users should ensure they're using unique passwords and multi-factor authentication for these accounts to protect them.
Tobac also recommends that users consider limiting what these early versions of ChatGPT Atlas and Comet can access, and siloing them from sensitive accounts related to banking, health, and personal information. Security around these tools will likely improve as they mature, and Tobac recommends waiting before giving them broad control.
Based on these concerns, would you use such browsers?
An Anonymous Coward has submitted the following:
A December update to Microsoft Teams will reportedly add a location-tracking feature, disabled by default, that reports where a user is signing in from. If enabled, it will allow bosses to tell whether an employee is in the office or working from home and set their status accordingly. It will also be able to flag when a user is not at their normal home logon location, providing employers with evidence of the user's whereabouts. Workers who have been taking mini holidays while claiming to be working from home may be affected by this new feature.
The idea of the new feature is to eliminate confusion for bosses about where a worker is within the building and to see if they are working remotely.
But those who work from home argue it is an invasion of privacy.
"Micro management at peak? All online work doesn't need you to be in the office, we can do it from home," one X user said.
"Why is this needed?" another added.
Almost half of Gen Z workers surveyed (44 per cent) revealed last year that they took a secret trip, with most giving their workplace the impression they were working normal hours and using a virtual background in meetings to trick their employer.
Ella Maree, 26, started hush-tripping after Covid when her corporate workplace adopted a 3:2 work week, which meant she could work from home on Mondays and Fridays.
"Since travel options were limited, hush trips became my go-to choice," she said.
"I flew out Thursday evening and worked by the hotel pool, restaurant and room on Friday. I maintained the same level of productivity as if I were physically in the office or working from home, so really, a win-win situation.
"Most of my office work from home Friday, so really, I'm just making the most of our remote work flexibility."
Ms Maree insisted her boss "wouldn't mind" given workplaces are mostly connected online and that she was always getting her work done.
How many Soylentils still have the ability to WFH, either full-time or part-time? I thought one of the attractions of WFH is the ability to work when the hours suit you and not the standard 9-5 (for non-Usians). Would you consider working from a different location a breach of your contract?
Alibaba Cloud says it cut Nvidia AI GPU use by 82% with new pooling system:
Alibaba Cloud claims its new Aegaeon pooling system reduces the number of Nvidia GPUs required to serve large language models by 82% during a multi-month beta test inside its Model Studio marketplace. The result, published in a peer-reviewed paper presented at the 2025 ACM Symposium on Operating Systems Principles (SOSP) in Seoul, suggests that cloud providers may be able to extract significantly more inference capacity from existing silicon, especially in constrained markets like China, where the supply of Nvidia's latest H20s remains limited.
Unlike training-time breakthroughs that chase model quality or speed, Aegaeon is an inference-time scheduler designed to maximize GPU utilization across many models with bursty or unpredictable demand. Instead of pinning one accelerator to one model, Aegaeon virtualizes GPU access at the token level, allowing it to schedule tiny slices of work across a shared pool. This means one H20 could serve several different models simultaneously, with system-wide “goodput” — a measure of effective output — rising by as much as nine times compared to older serverless systems.
The system was tested in production over several months, according to the paper, which lists authors from both Peking University and Alibaba’s infrastructure division, including CTO Jingren Zhou. During that window, the number of GPUs needed to support dozens of different LLMs — ranging in size up to 72 billion parameters — fell from 1,192 to just 213.
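The pooling idea itself can be caricatured in a few lines. The sketch below is a made-up, minimal token-level round-robin scheduler, nothing like Aegaeon's real machinery (which must also handle KV-cache movement and autoscaling), but it shows how one device can interleave slices of work from several models instead of being pinned to one:

```python
from collections import deque

# Toy token-level scheduler: rather than dedicating a GPU to one model,
# interleave small token-generation slices from several models on one device.
# Model names and token counts below are invented for illustration.

def pooled_schedule(requests: dict[str, int], slice_tokens: int = 4):
    """requests maps model name -> tokens still to generate.
    Yields (model, tokens_run) slices in round-robin order."""
    queue = deque(requests.items())
    while queue:
        model, remaining = queue.popleft()
        step = min(slice_tokens, remaining)
        yield model, step            # this slice runs on the shared GPU
        if remaining - step > 0:
            queue.append((model, remaining - step))  # re-queue unfinished work

schedule = list(pooled_schedule({"llm-a": 6, "llm-b": 3, "llm-c": 5}))
print(schedule)
```

Every model makes progress on the same device, which is why utilization (and thus "goodput") rises when demand per model is bursty and would otherwise leave a dedicated GPU idle.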
https://intezer.com/blog/beginners-guide-to-malware-analysis-and-reverse-engineering/
https://archive.ph/U2ZWQ
Malware analysis and reverse engineering are powerful but can also be challenging and time-consuming. Performing a thorough analysis typically requires deep knowledge, specialized tools, and extensive experience. However, not every security analyst has the expertise or the resources to conduct an exhaustive investigation for every suspicious file they encounter. Moreover, a comprehensive, in-depth reverse engineering effort isn’t always necessary or practical, for example, if another researcher has already reported and documented the file.
This blog series on “Breaking down malware” introduces a flexible, practical approach to malware analysis. Our goal is to guide you through determining the level of analysis required based on the context and initial findings. We will explore various techniques and tools that can help you efficiently assess a suspicious file, quickly determining whether a deeper dive is warranted or if initial triage provides sufficient insight.
[...] Malware (short for malicious software) analysis involves examining malicious software to understand its behavior, capabilities, and effects. By gaining insights into how malware functions, security teams can create effective detection, mitigation, and prevention strategies. It resembles digital forensics, where analysts serve as detectives, dissecting malware to uncover its mechanisms and defense methods. Just as doctors research diseases to develop cures, security researchers study malware to improve defense systems.
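To make the triage stage concrete, here is a minimal, purely illustrative first pass: hash the sample so it can be looked up against prior reports, and pull out printable strings, which is often enough to decide whether a deeper reverse-engineering effort is warranted. The sample bytes are fabricated for the example:

```python
import hashlib
import re

# Minimal static triage: a hash for reputation lookups, plus printable
# ASCII strings (like the Unix `strings` tool) that may reveal URLs,
# file paths, or commands without any disassembly.

def triage(data: bytes, min_len: int = 6):
    """Return (sha256_hex, printable_strings) for a file's raw bytes."""
    digest = hashlib.sha256(data).hexdigest()
    runs = re.findall(rb"[\x20-\x7e]{%d,}" % min_len, data)
    return digest, [s.decode("ascii") for s in runs]

# Fabricated "sample": a fake header, padding, and an embedded URL.
sample = b"\x7fELF\x02\x01\x01\x00" + b"\x00" * 8 + b"http://evil.example/c2" + b"\x00\x01"
digest, strings = triage(sample)
print(digest[:16], strings)
```

In practice the hash would be checked against threat-intelligence databases first; only samples that come back unknown, or with suspicious strings, earn a deeper look.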
Ford will ramp up production of the F-150 and F-Series Super Duty in 2026, but the Lightning will pay the price:
A fire at a Novelis aluminum plant has disrupted operations for several automakers, including Ford and its top-selling F-150. The setback has been costly, but the Blue Oval plans to bounce back next year by ramping up truck production.
Under the plan, the Dearborn Truck Plant will add a third shift with roughly 1,200 employees. This will be supported by more than 90 new workers at Dearborn Stamping as well as more than 80 additional employees at Dearborn Diversified Manufacturing.
Thanks to these workers and the extra shift, Ford aims to produce an additional 45,000+ F-150s in 2026. They’ll have traditional powertrains as the F-150 Lightning hasn’t lived up to expectations.
[...] In total, the automaker will increase production by more than 50,000 units and create up to 1,000 new jobs. Ford’s Chief Operating Officer, Kumar Galhotra, said “The people who keep our country running depend on America’s most popular vehicle – F-Series trucks – and we are mobilizing our team to meet that demand.”
Related:
Nexperia, a Chinese-owned semiconductor manufacturer headquartered in the Netherlands, was seized by Dutch authorities last week in response to embargo pressures.
A Dutch seizure of Chinese-owned computer chip maker Nexperia came after rising U.S. pressure on the company, a court ruling released on Tuesday showed, underscoring how the firm has been caught in the crossfire between Washington and Beijing.
The government said on Sunday that it had intervened in Netherlands-based Nexperia, which makes chips for cars and consumer electronics. It cited worries about possible transfer of technology to its Chinese parent company, Wingtech.
[...] Nexperia is one of the largest makers globally of basic chips such as transistors that are not technically sophisticated but are needed in large volumes.
[...] The source said that company executives in the meeting believed that Dutch authorities were acquiescing to the United States and added that the company was very confident that it could have the decision reversed.
The Dutch government said on Tuesday there was no U.S. involvement or pressure in the decision to intervene in Nexperia.
Do you live in Australia and have an old model Samsung phone? If so, check your SMS messages, as your phone may soon no longer work. Due to recent issues with triple-0 calls and subsequent lawsuits, Australian telcos are blocking devices that cannot fall back to the national 000 number to make emergency calls. Emergency Management Minister Kristy McBain has ruled out government assistance for Australians whose mobile phones may be unable to call triple-0. Devices affected by this block will no longer work after 26/11/2025.
The mobile devices affected by the issue are the Galaxy A7 (2017), Galaxy A5 (2017), Galaxy J1 (2016), Galaxy J3 (2016), Galaxy J5 (2017), Galaxy S6, Galaxy S6 Edge, Galaxy S6 Edge+, Galaxy S7, and Galaxy S7 Edge.
Ms McBain said the telecommunications companies were working to assess how many devices were impacted, but the number was estimated to be about 10,000.
[...] In a statement, TPG Telecom, which owns Vodafone, said it had identified a cohort of older Samsung handsets leading into the 3G network shutdown in 2024 that were unable to make triple-0 calls on the TPG/Vodafone mobile network and could not be fixed with a software upgrade.
"These devices were blocked from the Vodafone network as part of the 3G shutdown process," a spokesman said.
"Recently, we became aware that some of those same handsets that worked on other networks were unable to connect to triple-0 when only Vodafone coverage was available.
"These Samsung devices were found to be configured in a way that permanently locked them to making triple-0 calls on the Vodafone 3G network even if being used with the SIM of another mobile operator and able to make triple-0 calls on their 4G network. This limitation was not previously known to TPG Telecom."
[...] An Optus spokesman earlier said that during emergencies, at times when mobile phones could not connect to their regular network, phones were designed to search for another available network to reach triple-0.
"These situations relate to rare occasions when both the Optus and Telstra networks are unavailable and the phone needs to switch to Vodafone in order to contact emergency services," a spokesman said.
"This only happens under very specific conditions, but it's critically important that all devices can reach triple-0."
Is it the Year of the Linux Phone yet?
Cache Poisoning Vulnerabilities Found in 2 DNS Resolving Apps
The makers of BIND, the Internet's most widely used software for resolving domain names, are warning of two vulnerabilities that allow attackers to poison entire caches of results and send users to malicious destinations that are indistinguishable from the real ones.
The vulnerabilities, tracked as CVE-2025-40778 and CVE-2025-40780, stem from a logic error and a weakness in generating pseudo-random numbers, respectively. They each carry a severity rating of 8.6. Separately, makers of the Domain Name System resolver software Unbound warned of similar vulnerabilities that were reported by the same researchers; the Unbound flaw carries a severity score of 5.6.
[...] In 2008, researcher Dan Kaminsky revealed one of the more severe Internet-wide security threats ever. Known as DNS cache poisoning, it made it possible for attackers to send users en masse to imposter sites instead of the real ones belonging to Google, Bank of America, or anyone else. With industry-wide coordination, thousands of DNS providers around the world—in coordination with makers of browsers and other client applications—implemented a fix that averted this doomsday scenario.
[...] What Kaminsky realized was that there were only 65,536 possible transaction IDs. An attacker could exploit this limitation by flooding a DNS resolver with lookup results for a specific domain. Each result would use a slight variation in the domain name, such as 1.arstechnica.com, 2.arstechnica.com, 3.arstechnica.com, and so on. Each result would also include a different transaction ID. Eventually, an attacker would reproduce the correct number of an outstanding request, and the malicious IP would get fed to all users who relied on the resolver that made the request. The attack was called DNS cache poisoning because it tainted the resolver's store of lookups.
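The arithmetic behind Kaminsky's observation is easy to sketch. The toy function below (illustrative numbers only, not a model of any real resolver) estimates an attacker's odds of matching an outstanding 16-bit transaction ID when racing legitimate replies across many forced lookups:

```python
# Why 16-bit transaction IDs alone are a weak defense (Kaminsky-style attack).
# All parameters below are assumed for illustration, not measured.

TXID_SPACE = 2 ** 16  # 65,536 possible transaction IDs

def poison_probability(spoofed_per_query: int, queries: int) -> float:
    """Chance that at least one forged response matches an outstanding TXID,
    given `queries` forced lookups (1.example.com, 2.example.com, ...) and
    `spoofed_per_query` distinct forged packets racing each legitimate reply."""
    hit = min(spoofed_per_query, TXID_SPACE) / TXID_SPACE
    return 1 - (1 - hit) ** queries

# e.g. 100 spoofed packets racing each of 1,000 forced lookups:
print(f"{poison_probability(100, 1000):.2%}")
```

Even modest packet rates push the success probability toward certainty over enough attempts, which is why modern resolvers also randomize source ports and why the quality of that randomness (the weakness behind CVE-2025-40780) matters so much.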
[...] "Because exploitation is non-trivial, requires network-level spoofing and precise timing, and only affects cache integrity without server compromise, the vulnerability is considered Important rather than Critical," Red Hat wrote in its disclosure of CVE-2025-40780.
The vulnerabilities nonetheless have the potential to cause harm in some organizations. Patches for all three should be installed as soon as practicable.