The Guardian has a long and very interesting article about pain and its psychology and physiology, with some gripping anecdotes, like the soldier who picks his torn arm up from the ground and walks to receive medical attention, or the woman who worked and walked around for 10 hours with a burst cyst and "a belly full of blood."
Why can some people withstand intense pain while others cry over a little knock to the knee?
Some say it was John Sattler's own fault. The lead-up to the 1970 rugby league grand final had been tense; the team he led, the South Sydney Rabbitohs, had lost the 1969 final. Here was an opportunity for redemption. The Rabbitohs were not about to let glory slip through their fingers again.
Soon after the starting whistle, Sattler went in for a tackle. As he untangled – in a move not uncommon in the sport at the time – he gave the Manly Sea Eagles' John Bucknall a clip on the ear.
Seconds later – just three minutes into the game – the towering second-rower returned the favour with force: Bucknall's mighty right arm bore down on Sattler, breaking his jaw in three places and tearing his skin; he would later need eight stitches. When his teammate Bob McCarthy turned to check on him, he saw his captain spurting blood, his jaw hanging low. Forty years later Sattler would recall that moment. One thought raged in his shattered head: "I have never felt pain like this in my life."
But he played on. Tackling heaving muscular players as they advanced. Being tackled in turn, around the head, as he pushed forward. All the while he could feel his jaw in pieces.
At half-time the Rabbitohs were leading. In the locker room, Sattler warned his teammates, "Don't play me out of this grand final."
McCarthy told him, "Mate, you've got to go off."
He refused. "I'm staying."
Sattler played the whole game. The remaining 77 minutes. At the end, he gave a speech and ran a lap of honour. The Rabbitohs had won. The back page of the next day's Sunday Mirror screamed "BROKEN JAW HERO".
[...]
How can a person bitten by a shark calmly paddle their surfboard to safety, then later liken the sensation of the predator clamping down on their limb to the feeling of someone giving their arm "a shake"? How is it that a woman can have a cyst on her ovary burst, her abdomen steadily fill with blood, but continue working at her desk for six hours? Or that a soldier can have his legs blown off then direct his own emergency treatment? [16:06 and quite moving.]
Each one of us feels pain. We all stub our toes, burn our fingers, knock our knees. And worse. The problem with living in just one mind and body is that we can never know whether our six out of 10 on the pain scale is the same as the patient in the chair next to us.
[...] But what is happening in the body and mind of a person who does not seem to feel the pain they "should" be feeling? Do we all have the capacity to be one of these heroic freaks?
And how did John Sattler play those 77 minutes?
Questions like these rattled around the mind of Lorimer Moseley when he showed up at Sydney's Royal North Shore hospital years ago as an undergraduate physiotherapy student. He wanted to interrogate a quip made by a neurology professor as he left the lecture theatre one day, that the worst injuries are often the least painful. So Moseley sat in the emergency room and watched people come in, recording their injuries and asking them how much they hurt.
"And this guy came in with a hammer stuck in his neck – the curly bit had got in the back and was coming out the front and blood was pouring all down," Moseley recalls. "But he was relaxed. He just walked in holding the hammer, relaxed. Totally fine."
Then the man turned around, hit his knee on a low table and began jumping up and down at the pain of the small knock.
"And I think, 'Whoa, what is happening there?'"
The curious student ruled out drugs, alcohol, shock. He realised that the reason the man did not feel pain from his hammer injury lay in the very point of pain itself.
"Pain is a feeling that motivates us to protect ourselves," says Moseley, now the chair in physiotherapy and a professor of clinical neurosciences at the University of South Australia.
"One of the beautiful things about pain is that it will motivate us to protect the body part that's in danger, really anatomically specific – it can narrow it right down to a tiny little spot."
[...] Prof Michael Nicholas is used to stories like these. "You can see it in probably every hospital ward. If you stay around long enough you'll hear comments like 'this person has more pain than they should have' or 'you might be surprised that they're not in pain'," he says. "What that highlights to me is the general tendency for all of us to think there should be a close relationship between a stimulus like an injury or a noxious event and the degree of pain the person feels.
"In fact, that's generally wrong. But it doesn't stop us believing it."
The reason we get it wrong, Nicholas says, "is that we have a sort of mind-body problem".
Eastern medicine and philosophy have long recognised the interconnectedness of body and mind, and so too did the west in early civilisations. In ancient Greece the Algea, the gods of physical pain, were also gods associated with psychic pain – with grief and distress. But in the 1600s the French philosopher René Descartes set western thinking on a different course, asserting that the mind and body were separate entities.
"When people come to see me, they're often worried they're being told it's all in their head," Nicholas says.
"Of course pain is in your head. It's in your brain. You know, it's the brain that is where you get that experience ... It's never all physical."
This is true of people who tolerate acute pain. It's never all physical. And it has little to do with heroism or freakishness.
[...] And so the experience of acute pain is caught in the realm of mystery and mythology; where we can understand much of what is happening in a body and part of what is happening in a brain but never actually know what another person feels.
The legend of John Sattler goes that after that fateful right hook from Bucknall, the bloodied captain turned to his teammate Matthew Cleary. That no one – perhaps not even Sattler himself – knew the damage that had been done to him became his mythological power.
"Hold me up," he said. "So they don't know I'm hurt."
FuguIta has been mentioned here recently.
The creator has released a "FuguIta desktop environment demo version" featuring:
- Desktop environment: xfce-4.20.0
- Web browser: firefox-137.0
- Mailer: thunderbird-128.9.0
- Office: libreoffice-25.2.1.2v0
- Media player: vlc-3.0.21p2
- Audio player: audacious-4.4.2
- Fonts: noto-cjk-20240730, noto-emoji-20240730, noto-fonts-24.9.1v0
From the creator:
I made a demo version of FuguIta with a desktop environment. This demo version demonstrates that FuguIta can be used with a desktop environment as easily as a regular live system.
This demo version uses the following features of FuguIta and OpenBSD:
- Automatic file saving at shutdown using the /etc/rc.shutdown file
- Automatic startup using the noasks file
- Automatic login using the xenodm-config file
- Additional partition mounting using the /etc/fuguita/fstab.tail file
- Initialization only at first startup using /etc/rc.firsttime
There's also an example of how to set up the Fluxbox window manager.
Arthur T Knackerbracket has processed the following story:
Satellite data suggests cloud darkening is responsible for much of the warming since 2001, and the good news is that it is a temporary effect due to a drop in sulphate pollution
Clouds have been getting darker and reflecting less sunlight as a result of falling sulphate air pollution, and this may be responsible for a lot of recent warming beyond that caused by greenhouse gases.
“Two-thirds of the global warming since 2001 is SO2 reduction rather than CO2 increases,” says Peter Cox at the University of Exeter in the UK.
Some of the sunshine that reaches Earth is reflected and some is absorbed and later radiated as heat. Rising carbon dioxide levels trap more of that radiant heat – a greenhouse effect that causes global warming. But the planet’s albedo – how reflective it is – also has a big influence on its temperature.
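To see why albedo matters so much, consider the standard zero-dimensional energy-balance relation (a textbook sketch of ours, not something from the article):

```latex
% Absorbed sunlight balances emitted heat at equilibrium:
(1-\alpha)\,\frac{S_0}{4} \;=\; \sigma\,T_{\mathrm{eff}}^{4}
% \alpha: planetary albedo (about 0.3), S_0: solar constant (about 1361 W/m^2),
% \sigma: the Stefan-Boltzmann constant. Any fall in \alpha raises T_eff.
```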
Since 2001, satellite instruments called CERES have been directly measuring how much sunlight is reflected versus how much is absorbed. These measurements show a fall in how much sunlight is being reflected, meaning the planet is getting darker – its albedo is falling – and this results in additional warming.
There are many reasons for the falling albedo, from less snow and sea ice to less cloud cover. But an analysis of CERES data from 2001 to 2019 by Cox and Margaux Marchant, also at Exeter, suggests the biggest factor is that clouds are becoming darker.
It is known that sulphate pollution from industry and ships can increase the density of droplets in clouds, making them brighter or more reflective. This is the basis of one proposed form of geoengineering, known as marine cloud brightening. But these emissions have been successfully reduced in recent years, partly by moving away from high-sulphur fuels such as coal.
So Marchant and Cox looked at whether the decline in cloud brightness corresponded with areas with falling levels of SO2 pollution, and found that it did. The pair presented their preliminary results at the Exeter Climate Forum earlier this month.
The results are encouraging because the rapid warming in recent years has led some researchers to suggest that Earth’s climate sensitivity – how much it warms in response to a given increase in atmospheric CO2 – is on the high side of estimates. If the extra warming is instead due to falling pollution, it will be short-lived, whereas if the cloud darkening were a feedback caused by rising CO2, it would mean ever more warming as CO2 levels keep rising.
“If this darkening is a change in cloud properties due to the recent decrease in SO2 emissions, rather than a change in cloud feedbacks that indicate a higher-than-anticipated climate sensitivity, then this is great news,” says Laura Wilcox at the University of Reading in the UK, who wasn’t involved in the study.
There are some limitations with the datasets Marchant and Cox used, says Wilcox. For instance, the data on SO2 pollution has been updated since the team did their analysis.
And two recent studies have suggested the darkening is mainly due to a reduction in cloud cover, rather than darker clouds, she says. “The drivers of the recent darkening trends are a hotly debated topic at the moment.”
Overall, though, Wilcox says her own work also supports the conclusion that the recent acceleration in global warming has been primarily driven by the decrease in air pollution, and that it is likely to be a temporary effect.
Brothers-in-law use construction knowledge to compete against Comcast in Michigan:
Samuel Herman and Alexander Baciu never liked using Comcast's cable broadband. Now, the residents of Saline, Michigan, operate a fiber Internet service provider that competes against Comcast in their neighborhoods and has ambitions to expand.
[...] "Many times we would have to call Comcast and let them know our bandwidth was slowing down... then they would say, 'OK, we'll refresh the system.' So then it would work again for a week to two weeks, and then again we'd have the same issues," he said.
Herman, now 25, got married in 2021 and started building his own house, and he tried to find another ISP to serve the property. He was familiar with local Internet service providers because he worked in construction for his father's company, which contracts with ISPs to build their networks.
But no fiber ISP was looking to compete directly against Comcast where he lived, though Metronet and 123NET offer fiber elsewhere in the city, Herman said. He ended up paying Comcast $120 a month for gigabit download service with slower upload speeds. Baciu, who lives about a mile away from Herman, was also stuck with Comcast and was paying about the same amount for gigabit download speeds.
Herman said he was the chief operating officer of his father's construction company and that he shifted the business "from doing just directional drilling to be a turnkey contractor for ISPs." Baciu, Herman's brother-in-law (having married Herman's oldest sister), was the chief construction officer. Fueled by their knowledge of the business and their dislike of Comcast, they founded a fiber ISP called Prime-One.
Now, Herman is paying $80 a month to his own company for symmetrical gigabit service. Prime-One also offers 500Mbps for $75, 2Gbps for $95, and 5Gbps for $110. The first 30 days are free, and all plans have unlimited data and no contracts.
[...] Comcast seems to have noticed, Herman said. "They've been calling our clients nonstop to try to come back to their service, offer them discounted rates for a five-year contract and so on," he said.
A Comcast spokesperson told Ars that "we have upgraded our network in this area and offer multi-gig speeds there, and across Michigan, as part of our national upgrade that has been rolling out."
Meanwhile, Comcast's controversial data caps are being phased out. With Comcast increasingly concerned about customer losses, it recently overhauled its offerings with four plans that come with unlimited data. The Comcast data caps aren't quite dead yet because customers with caps have to switch to a new plan to get unlimited data.
Comcast told us that customers in Saline "have access to our latest plans with simple and predictable all-in pricing that includes unlimited data, Wi-Fi equipment, a line of Xfinity Mobile, and the option for a one or five-year price guarantee."
https://www.eff.org/deeplinks/2025/07/radio-hobbyists-rejoice-good-news-lora-mesh
A set of radio devices and technologies are opening the doorway to new and revolutionary forms of communication. These have the potential to break down the over-reliance on traditional network hierarchies, and present collaborative alternatives where resistance to censorship, control and surveillance are baked into the network topography itself. Here, we look at a few of these technologies and what they might mean for the future of networked communications.
The idea of what is broadly referred to as mesh networking isn't new: the resilience and scalability of mesh technology has seen it adopted in router and IoT protocols for decades. What's new is cheap devices that can be used without a radio license to communicate over (relatively) large distances, or LOng RAnge, thus the moniker LoRa.
Although using different operating frequencies in different countries, LoRa works in essentially the same way everywhere. It uses Chirp Spread Spectrum to broadcast digital communications across a physical landscape, with a range of several kilometers in the right environmental conditions. When other capable devices pick up a signal, they can then pass it along to other nodes until the message reaches its destination—all without relying on a single centralized host.
These communications are of very low bit-rate—often less than a few KBps (kilobytes per second) at a distance—and use very little power. You won't be browsing the web or streaming video over LoRa, but it is useful for sending messages in a wide range of situations where traditional infrastructure is lacking or intermittent, and communication with others over dispersed or changing physical terrain is essential. For instance, a growing body of research is showing how Search and Rescue (SAR) teams can greatly benefit from the use of LoRa, specifically when coupled with GPS sensors, and especially when complemented by line-of-sight LoRa repeaters.
By far the most popular of these indie LoRa communication systems is Meshtastic. For hobbyists just getting started in the world of LoRa mesh communications, it is the easiest way to get up, running, and texting with others in your area who also happen to have a Meshtastic-enabled device. It also facilitates direct communication with other nodes using end-to-end encryption. And by default, a Meshtastic device will repeat messages to others if they originate from 3 or fewer nodes (or "hops") away. This means messages tend to propagate farther, with the power of the mesh collaborating to make delivery possible. As a single-application use of LoRa, it is an exciting experiment to take part in.
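Getting a message onto the mesh really is minimal. Here is a short sketch using the official Meshtastic Python library, assuming the meshtastic package from PyPI and a node attached over USB:

```python
# Minimal sketch: broadcast a text message through a USB-attached
# Meshtastic node. Assumes `pip install meshtastic` and a connected device.
import meshtastic.serial_interface

iface = meshtastic.serial_interface.SerialInterface()  # auto-detects the serial port
iface.sendText("hello from the mesh")                  # broadcast on the default channel
iface.close()
```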
While Reticulum is often put into the same category as Meshtastic, and although both enable communication over LoRa, the comparison breaks down quickly after that. Reticulum is not a single application, but an entire network stack that can be arbitrarily configured to connect through existing TCP/IP, the anonymizing I2P network, directly through a local WiFi connection, or through LoRa radios. The Reticulum network's LXMF transfer protocol allows arbitrary applications to be built on top of it, such as messaging, voice calls, file transfer, and light-weight, text-only browsing. And that's only to name a few applications which have already been developed—the possibilities are endless.
[...] On a more somber note, let's face it: we live in an uncertain world. With the frequency of environmental disasters, political polarization, and infrastructure attacks increasing, the stability of networks we have traditionally relied upon is far from assured.
Yet even with the world as it is, developers are creating new communications networks that have the potential to help in unexpected situations we might find ourselves in. Not only are these technologies built to be useful and resilient, they also empower individuals by circumventing censorship and platform control—giving people a way to support each other through sharing resources.
In that way, it can be seen as a technological inheritor of the hopefulness and experimentation—and yes, fun!—that was so present in the early internet. These technologies offer a promising path forward for building our way out of tech dystopia.
The Wall Street Journal published a look at new automation for farms, as reported by Mint:
In the verdant hills of Washington state's Palouse region, Andrew Nelson's tractor hums through the wheat fields on his 7,500-acre farm. Inside the cab, he's not gripping the steering wheel—he's on a Zoom call or checking messages.
A software engineer and fifth-generation farmer, Nelson, 41, is at the vanguard of a transformation that is changing the way we grow and harvest our food. The tractor isn't only driving itself; its array of sensors, cameras, and analytic software is also constantly deciding where and when to spray fertilizer or whack weeds.
Many modern farms already use GPS-guided tractors and digital technology such as farm-management software systems. Now, advances in artificial intelligence mean that the next step—the autonomous farm, with only minimal human tending—is finally coming into focus.
Imagine a farm where fleets of autonomous tractors, drones and harvesters are guided by AI that tweaks operations minute by minute based on soil and weather data. Sensors would track plant health across thousands of acres, triggering precise sprays or irrigation exactly where needed. Farmers could swap long hours in the cab for monitoring dashboards and making high-level decisions. Every seed, drop of water and ounce of fertilizer would be optimized to boost yields and protect the land—driven by a connected system that gets smarter with each season.
[...] "We're just getting to a turning point in the commercial viability of a lot of these technologies," says David Fiocco, a senior partner at McKinsey & Co. who leads research on agricultural innovation.
[...] Automation, now most often used on large farms with wheat or corn laid out in neat rows, is a bigger challenge for crops like fruits and berries, which ripen at different times and grow on trees or bushes. Maintaining and harvesting these so-called specialty crops is labor-intensive. "In specialty crops, the small army of weeders and pickers could soon be replaced by just one or two people overseeing the technology. That may be a decade out, but that's where we're going," says Fiocco of McKinsey.
Fragile fruits like strawberries and grapes pose a huge challenge. Tortuga, an agriculture tech startup in Denver, developed a robot to do the job. Tortuga was acquired in March by vertical farming company Oishii. The robot resembles NASA's Mars Rover with fat tires and extended arms. It rolls along a bed of strawberries or grapes and uses a long pincher arm to reach into the vine and snip off a single berry or a bunch of grapes, placing them gingerly into a basket.
[...] A crop is only as healthy as its soil. Traditionally, farmers send topsoil samples to a lab to have them analyzed. New technology that uses sensors to scan the soil on-site is enabling a precise diagnosis covering large areas of farms rather than spot checks.
The diagnosis includes microbial analysis as well as identifying areas of soil compaction, when the soil becomes dense, hindering water infiltration, root penetration and gas exchange. Knowing this can help a farmer plan where to till and make other decisions about the new season.
New technology is also changing livestock management. The creation of virtual fences, which are beginning to be adopted in the U.S., Europe and Australia, has the potential to help ranchers save money on expensive fencing and help them better manage their herds.
Livestock are given GPS-enabled collars, and virtual boundaries are drawn on a digital map. If an animal approaches the virtual boundary, it first gets an auditory warning. If it continues, it gets zapped with a mild but firm electric shock.
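The escalation logic is conceptually simple. A toy sketch (our own construction, not any vendor's firmware) of a circular virtual fence might look like this:

```python
# Toy sketch (ours, not any vendor's firmware): the escalation logic of a
# virtual fence, using a simple circular boundary in a local metre grid.
from dataclasses import dataclass
from math import hypot

@dataclass
class CircularFence:
    cx: float                 # fence centre x (metres)
    cy: float                 # fence centre y (metres)
    radius: float             # fence radius (metres)
    warn_margin: float = 5.0  # begin warning this far inside the boundary

    def check(self, x: float, y: float) -> str:
        d = hypot(x - self.cx, y - self.cy)
        if d >= self.radius:
            return "shock"    # crossed the boundary: mild electric pulse
        if d >= self.radius - self.warn_margin:
            return "audio"    # nearing the boundary: auditory warning
        return "ok"

fence = CircularFence(0.0, 0.0, 100.0)
for pos in [(10.0, 20.0), (0.0, 96.0), (0.0, 105.0)]:
    print(pos, "->", fence.check(*pos))  # ok, audio, shock
```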
Is this what Bill Gates is doing with all the farmland he owns?
DOGE staffer with access to Americans' personal data leaked private xAI API key:
A DOGE staffer with access to the private information on millions of Americans held by the U.S. government reportedly exposed a private API key used for interacting with Elon Musk's xAI chatbot.
Independent security journalist Brian Krebs reports that Marko Elez, a special government employee who in recent months has worked on sensitive systems at the U.S. Treasury, the Social Security Administration, and Homeland Security, recently published code to his GitHub containing the private key. The key allowed access to dozens of models developed by xAI, including Grok.
Philippe Caturegli, founder of consultancy firm Seralys, alerted Elez to the leak earlier this week. Elez removed the key from his GitHub but the key itself was not revoked, allowing continued access to the AI models.
"If a developer can't keep an API key private, it raises questions about how they're handling far more sensitive government information behind closed doors," Caturegli told KrebsOnSecurity.
Arthur T Knackerbracket has processed the following story:
Despite claims that layoffs target mostly mid-level managers.
Intel this month officially began to cut down its workforce in the U.S. and other countries, thus revealing actual numbers of positions to be cut. The Oregonian reports that the company will cut as many as 2,392 positions in Oregon and around 4,000 positions across its American operations, including Arizona, California, and Texas.
To put the 2,392 number into context, Intel is the largest employer in Oregon, with around 20,000 workers there; 2,392 is around 12% of that workforce, at the lower end of layoff expectations, yet 2,400 is still a lot of people. The Oregon reduction rose sharply from an initial count of around 500 to a revised figure of 2,392, making it one of the largest layoffs in the state's history. Intel began reducing staff earlier in the week but confirmed the larger number by Friday evening through a filing with Oregon state authorities.
Intel's Oregon operations have already seen 3,000 jobs lost over the past year through earlier buyouts and dismissals. This time around, Intel is not offering voluntary retirement or buyouts; it is laying off personnel outright in Aloha (192) and Hillsboro (2,200).
Although Intel officially says that it is trying to get rid of mid-level managers to flatten the organization and focus on engineers, the list of positions that Intel is cutting is led by module equipment technicians (325), module development engineers (302), module engineers (126), and process integration development engineers (88). In fact, based on the Oregon WARN filing, a total of 190 employees with 'Manager' in their job titles (8% of personnel being laid off) were included among those laid off by Intel. These comprised various software, hardware, and operational management roles across the affected sites.
[...] Interestingly, Intel is implementing a new approach to workforce reductions, allowing individual departments to decide how to meet financial goals rather than announcing large, centralized cuts. This decentralized process has led to ongoing job losses across the company, with marketing functions being outsourced to Accenture and the automotive division completely shut down.
Engineering the Origin of the Wheel:
Some historians believe the wheel is the most significant invention ever created. Historians and archeologists have artifacts from the wheel's history that go back thousands of years, but knowing that the wheel originated around 3900 B.C. doesn't tell the entire story of this essential technology's development.
A recent study [2024] by Daniel Guggenheim School of Aerospace Engineering Associate Professor Kai James, Lee Alacoque, and Richard Bulliet analyzes the wheel's invention and its evolution. Their analysis supports a new theory that copper miners from the Carpathian Mountains in southeastern Europe may have invented the wheel. However, the study also recognizes that the wheel's evolution occurred incrementally over time — and likely through considerable trial and error. The findings suggest that the original developers of the wheel benefited from uniquely favorable environmental conditions that augmented their human ingenuity. The study, published in the journal Royal Society Open Science, has gained the worldwide attention of experts and more than 58 media outlets, including Popular Mechanics, Interesting Engineering, and National Geographic en Español.
"The way technology evolves is very complex. It's never as simple as somebody having an epiphany, going to their lab, drawing up a perfect prototype, and manufacturing it — and then end of story," said James. "The evidence, even before our theory, suggests that the wheel evolved over centuries, across a very broad geographical range, with contributions from many different people, and that's true of all engineering systems. Understanding this complexity and seeing the process as a journey, rather than a moment in time, is one of the main outcomes of our study."
[...] James and his team use computational analysis and design as a forensic tool to learn about the past, studying engineered systems designed by prehistoric people. Computational analysis offers a deeper understanding of how these systems were created.
"We have to interpret clues from ancient societies without a writing system — artifacts like bows and arrows, flutes, or boats — but we need to use additional tools to do this," James explained. "Carbon dating tells us when, but it doesn't tell us how or why. Using solid mechanics and computational modeling to recreate these environments and scenarios that gave rise to these technologies is a potential game-changer."
Their theory suggests that the wheel evolved from simple rollers, which took the form of a series of untethered cylinders, poles, or tree trunks. These rollers were arranged side-by-side in a row on the ground, and the workers would transport their cargo on top of the rollers to avoid the friction caused by dragging. "Over time, the shape of these rollers evolved such that the central portion of the cylinder grew progressively narrower, eventually leaving only a slender axle capped on either end by round discs, which we now refer to as wheels," James explained.
The researchers derived a series of mathematical equations that describe the physics of the rollers. They then created a computer algorithm that simulates the progression from roller to wheel-and-axle by repeatedly solving these equations.
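The paper's actual model is more sophisticated, but the flavor of such a design-evolution loop can be sketched in a few lines. This is a toy of our own construction with made-up cost terms, not the study's equations:

```python
# Toy sketch of a design-evolution loop (ours, not the study's model): a
# uniform roller is allowed to narrow in its middle, trading lower rolling
# friction against rising bending stress, until neither term can improve.
import numpy as np

def cost(radii: np.ndarray, load: float = 1.0) -> float:
    waist = radii.min()
    friction = waist / radii.max()   # thinner waist -> less contact drag
    stress = load / waist**3         # thinner waist -> higher bending stress
    return friction + 1e-3 * stress

rng = np.random.default_rng(0)
radii = np.ones(50)                  # start as a uniform cylinder
for _ in range(5000):
    trial = radii.copy()
    i = rng.integers(10, 40)         # only the central span may narrow
    trial[i] *= 0.99
    if cost(trial) < cost(radii):    # keep mutations that improve the design
        radii = trial

print(f"waist radius after evolution: {radii.min():.2f}")  # ~0.23 of the ends
```

The loop "discovers" a narrow waist between disc-like ends, echoing the roller-to-wheel-and-axle progression the authors describe.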
"Our investigation also indicates that environmental conditions played a key role in this evolutionary process," he said. "Previous studies have shown that rollers are only effective under very specific circumstances. They require flat, firm, and level terrain, as well as a straight path. Neolithic mines, with their human-made tunnels and covered terrain would have offered an environment highly conducive to roller-based transport."
Journal Reference: Alacoque, L. R., Bulliet, R. W., & James, K. A. (2024). Reconstructing the invention of the wheel using computational structural analysis and design. Royal Society Open Science, 11(10). https://doi.org/10.1098/rsos.240373
GPUhammer is the first to flip bits in onboard GPU memory. It likely won't be the last:
Nvidia is recommending a mitigation for customers of one of its GPU product lines that will degrade performance by up to 10 percent in a bid to protect users from exploits that could let hackers sabotage work projects and possibly cause other compromises.
The move comes in response to an attack a team of academic researchers demonstrated against Nvidia's RTX A6000, a widely used GPU for high-performance computing that's available from many cloud services. A vulnerability the researchers discovered opens the GPU to Rowhammer, a class of attack that exploits physical weakness in DRAM chip modules that store data.
Rowhammer allows hackers to change or corrupt data stored in memory by rapidly and repeatedly accessing—or hammering—a physical row of memory cells. By repeatedly hammering carefully chosen rows, the attack induces bit flips in nearby rows, meaning a digital zero is converted to a one or vice versa. Until now, Rowhammer attacks have been demonstrated only against memory chips for CPUs, used for general computing tasks.
[...] The researchers' proof-of-concept exploit was able to tamper with deep neural network models used in machine learning for things like autonomous driving, healthcare applications, and medical imaging for analyzing MRI scans. GPUHammer flips a single bit in the exponent of a model weight—for example in y, where a floating-point number is represented as x × 2^y. The single bit flip can increase the exponent value by 16. The result is an alteration of the model weight by a whopping factor of 2^16, degrading model accuracy from 80 percent to 0.1 percent, said Gururaj Saileshwar, an assistant professor at the University of Toronto and co-author of an academic paper demonstrating the attack.
"This is like inducing catastrophic brain damage in the model: with just one bit flip, accuracy can crash from 80% to 0.1%, rendering it useless," Saileshwar wrote in an email. "With such accuracy degradation, a self-driving car may misclassify stop signs (reading a stop sign as a speed limit 50 mph sign), or stop recognizing pedestrians. A healthcare model might misdiagnose patients. A security classifier may fail to detect malware."
In response, Nvidia is recommending users implement a defense that could degrade overall performance by as much as 10 percent. Among machine learning inference workloads the researchers studied, the slowdown affects the "3D U-Net ML Model" the most. This model is used for an array of HPC tasks, such as medical imaging.
The performance hit is caused by the resulting reduction in bandwidth between the GPU and the memory module, which the researchers estimated as 12 percent. There's also a 6.25 percent loss in memory capacity across the board, regardless of the workload. Performance degradation will be the highest for applications that access large amounts of memory.
A figure in the researchers' academic paper provides the overhead breakdowns for the workloads tested.
Belkin shows tech firms getting too comfortable with bricking customers' stuff:
In a somewhat anticipated move, Belkin is killing most of its smart home products. On January 31, the company will stop supporting the majority of its Wemo devices, leaving users without core functionality and future updates.
In an announcement emailed to customers and posted on Belkin's website, Belkin said:
After careful consideration, we have made the difficult decision to end technical support for older Wemo products, effective January 31, 2026. After this date, several Wemo products will no longer be controllable through the Wemo app. Any features that rely on cloud connectivity, including remote access and voice assistant integrations, will no longer work.
The company said that people with affected devices that are under warranty on or after January 31 "may be eligible for a partial refund" starting in February.
The 27 affected devices have last sold dates that go back to August 2015 and are as recent as November 2023.
The announcement means that soon, features like the ability to work with Amazon Alexa will suddenly stop working on some already-purchased Wemo devices. The Wemo app will also stop working and being updated, removing the simplest way to control Wemo products, including connecting to Wi-Fi, monitoring usage, using timers, and activating Away Mode, which is supposed to make it look like people are in an empty home by turning the lights on and off randomly. Of course, the end of updates and technical support has security implications for the affected devices, too.
[...] Belkin acknowledged that some people who invested in Wemo devices will see their gadgets rendered useless soon: "For any Wemo devices you have that are out of warranty, will not work with HomeKit, or if you are unable to use HomeKit, we recommend disposing of these devices at an authorized e-waste recycling center."
Belkin started selling Wemo products in 2011, but said that "as technology evolves, we must focus our resources on different parts of the Belkin business."
Belkin currently sells a variety of consumer gadgets, including power adapters, charging cables, computer docks, and Nintendo Switch 2 charging cases.
For those who follow smart home news, Belkin's discontinuation of Wemo was somewhat expected. Belkin hasn't released a new Wemo product since 2023, when it announced that it was taking "a big step back" to "regroup" and "rethink" whether it would support Matter in Wemo products.
Even with that inkling that Belkin's smart home commitment may waver, that's little comfort for people who have to reconfigure their smart home system.
Belkin's abandonment of most of its Wemo products is the latest example of an Internet of Things (IoT) company ending product support and turning customer devices into e-waste. The US Public Interest Research Group (PIRG) nonprofit estimates that "a minimum of 130 million pounds of electronic waste has been created by expired software and canceled cloud services since 2014," Lucas Gutterman, director of the US PIRG Education Fund's Designed to Last Campaign, said in April.
What Belkin is doing has become a way of life for connected device makers, suggesting that these companies are getting too comfortable with selling people products and then reducing those products' functionality later.
Belkin itself pulled something similar in April 2020, when it said it would end-of-life its Wemo NestCam home security cameras the following month (Belkin eventually extended support until the end of June 2020). At the time, Forbes writer Charles Radclyffe mused that "Belkin May Never Be Trusted Again After This Story." But five years later, Belkin is telling customers a similar story—at least this time, its customers have more advance notice.
IoT companies face fierce challenges around selling relatively new types of products, keeping old and new products secure and competitive, and making money. Sometimes companies fail in those endeavors, and sometimes they choose to prioritize the money part.
[...] With people constantly buying products that stop working as expected a few years later, activists are pushing for legislation [PDF] that would require tech manufacturers to tell shoppers how long they will support the smart products they sell. In November, the FTC warned that companies that don't disclose how long they will support their connected devices could be violating the Magnuson Moss Warranty Act.
I don't envy the obstacles facing IoT firms like Belkin. Connected devices are central to many people's lives, and without companies like Belkin figuring out how to keep their (and customers') lights on, modern tech would look very different today.
But it's alarming how easy it is for smart device makers to decide that your property won't work anymore. There's no easy solution to this problem. However, the lack of accountability for companies that brick customer devices fails the very people who keep smart tech companies in business. If tech firms can't support the products they make, then people—and perhaps the law one day—may be less supportive of their business.
Arthur T Knackerbracket has processed the following story:
Details behind HoloMem’s holographic tape innovations are beginning to come into clearer view. The UK-based startup recently chatted with Blocks & Files about its potentially disruptive technology for long-term cold storage. HoloMem is another emerging storage idea that relies on optical technology to enable holographic storage. However, it cleverly melds the durability and density advantages of optical formats with a flexible polymer ribbon-loaded cartridge, so it can usurp entrenched LTO magnetic tape storage systems with minimal friction.
According to the inventors of HoloMem, their new cold storage technology offers far greater capacity than magnetic tape, a much longer shelf life, and “zero energy storage” costs. HoloMem carts can fit up to 200TB, more than 11x the capacity of LTO-10 magnetic tape. And the optical tech's touted 50-year life is 10x that of magnetic tape.
Magnetic tape has been around for 70 years or more, so it isn't surprising that a new technology has at last been designed as a serious replacement, beating it on all key metrics. However, the HoloMem makers have revealed quite a few more attractive features of their new storage solution, which could or should lead to success.
Probably one of the biggest attractions of HoloMem is that it minimizes friction for users who may be interested in replacing existing tape storage. The firm claims that a HoloDrive can be integrated into a legacy cold storage system “with minimal hardware and software disruption.” This allows potential customers to phase-in HoloMem use, reducing the chance of abrupt transition issues. Moreover, its LTO-sized cartridges can be transported by a storage library’s robot transporters with no change.
Another feather in HoloMem’s cap is the technology’s reliance on cheap and off-the-shelf component products. Blocks & Files says that the holographic read/write head is just a $5 laser diode, for example. As for media, it makes use of mass-produced polymer sheets which sandwich a 16 micron thick light-sensitive polymer that “costs buttons.” The optical ribbon tapes produced, claimed to be robust and around 120 microns thick in total, work in a WORM (write-once, read-many) format.
Thanks to the storage density enabled by the multiple layers of holograms written on these ribbons, HoloMem tapes need only be around 100m long to hold 200TB. Contrast that with the 1,000m of fragile magnetic tape required for LTO-10's up-to-18TB capacity.
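In per-metre terms (our arithmetic from the figures above, as a quick sanity check):

```latex
% Linear storage density implied by the quoted capacities and lengths:
\frac{200\ \mathrm{TB}}{100\ \mathrm{m}} = 2\ \mathrm{TB/m}
\qquad\text{vs.}\qquad
\frac{18\ \mathrm{TB}}{1000\ \mathrm{m}} = 0.018\ \mathrm{TB/m}
% i.e. roughly a 110x advantage per metre of ribbon.
```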
Blocks & Files shares some insight gained from talking to HoloMem founder Charlie Gale, who earned his stripes at Dyson, working on products like robot vacuum cleaners and hair dryers. During his time at Dyson, Gale helped devise the firm’s multi-hologram security sticker labels. This work appears to have planted the seed from which HoloMem has blossomed.
Rival would-be optical storage revolutionaries like Cerabyte or Microsoft’s Project Silica may face far greater friction for widespread adoption, we feel. Their systems require more expensive read/write hardware to work with their inflexible slivers of silica glass, and will find it harder to deliver such easy swap-out upgrades versus companies buying into HoloDrives.
HoloMem has a working prototype now and is backed by notable investors such as Intel Ignite and Innovate UK. However, there is no official ‘launch date’ set. Blocks & Files says the first HoloDrives will be used at TechRe consultants, in its UK data centers to verify product performance, reliability, and robustness.
AI research nonprofit METR conducted an in-depth study of seasoned developers earlier this year while they used Cursor, a popular AI coding assistant, to help them complete tasks in open-source projects they were familiar with.
Before the study, the open-source developers believed using AI would speed them up, estimating it would decrease task completion time by 24%. Even after completing the tasks with AI, the developers believed that they had decreased task times by 20%. But the study found that using AI did the opposite: it increased task completion time by 19%.
The study's lead authors, Joel Becker and Nate Rush, said they were shocked by the results: prior to the study, Rush had written down that he expected "a 2x speed up, somewhat obviously."
The findings challenge the belief that AI always makes expensive human engineers much more productive, a factor that has attracted substantial investment into companies selling AI products to aid software development.
This is now a loaded question: "Do artificial-intelligence tools speed up your work?"
-- Hendrik Boom
Industrial Waste Is Turning Into Rock in Just Decades, Study Suggests:
The geological processes that create rocks usually take place over thousands if not millions of years. With the help of a coin and a soda can tab, researchers have identified rocks in England that formed in less than four decades. Perhaps unsurprisingly, the cause is human activity.
Researchers from the University of Glasgow's School of Geographical and Earth Sciences discovered that slag (a waste product of the steel industry) formed a new type of rock in West Cumbria in 35 years—at most. As detailed in a study published April 10 in the journal Geology, the researchers claim to be the first to fully document and date a complete "rapid anthropoclastic rock cycle" on land: a significantly accelerated rock cycle that incorporates human-made materials. They suggest that this phenomenon is likely harming ecosystems and biodiversity at similar industrial waste locations around the world.
"When waste material is first deposited, it's loose and can be moved around as required. What our finding shows is that we don't have as much time as we thought to find somewhere to put it where it will have minimal impact on the environment–instead, we may have a matter of just decades before it turns into rock, which is much more difficult to manage," co-author Amanda Owen said in a university statement.
During the 19th and 20th centuries, Derwent Howe in West Cumbria hosted heavy iron and steel industries. The 953 million cubic feet (27 million cubic meters) of slag generated by the factories turned into cliffs along the coastline, where strange formations in the human-made deposits caught Owen and her colleagues' attention, according to the statement.
By analyzing 13 sites along the coast, the researchers concluded that Derwent Howe's slag contains deposits of calcium, magnesium, iron, and manganese. When exposed to seawater and air through coastal erosion, these reactive elements create natural cements such as brucite, calcite, and goethite—the same ones that bind natural sedimentary rocks together over thousands to millions of years.
"What's remarkable here is that we've found these human-made materials being incorporated into natural systems and becoming lithified–essentially turning into rock–over the course of decades instead," Owen explained. "It challenges our understanding of how a rock is formed, and suggests that the waste material we've produced in creating the modern world is going to have an irreversible impact on our future."
Modern objects stuck in the lithified slag, such as a King George V coin from 1934 and an aluminum can tab from no earlier than 1989, substantiated the team's dating of the material. Because slag clearly has all the necessary ingredients to create rocks in the presence of seawater and air, co-author David Brown suggested that the same process is likely happening at similar coastal slag deposits around the world.
Whether it's in England or elsewhere, "that rapid appearance of rock could fundamentally affect the ecosystems above and below the water, as well as change the way that coastlines respond to the challenges of rising sea levels and more extreme weather as our planet warms," Owen warned. "Currently, none of this is accounted for in our models of erosion or land management, which are key to helping us try to adapt to climate change."
Moving forward, the team hopes to continue investigating this new Earth system cycle by analyzing other slag deposits. Ultimately, the study suggests that humans aren't just driving global warming—we're also accelerating the ancient geological processes unfolding beneath our very feet.
Also, at The Register, "Plastic is the new rock, say geologists":
Geologists have identified what they say is a new class of rock.
'Plastiglomerates', as the new rocks are called, form when plastic debris washes up on beaches, breaks down into small pieces, becomes mixed in sand or sticks to other rocks and solidifies into an agglomerate mixing all of the above. Such rocks, say US and Canadian boffins in a paper titled An anthropogenic marker horizon in the future rock record, have "great potential to form a marker horizon of human pollution, signalling the occurrence of the informal Anthropocene epoch."
The paper identifies four types of plastiglomerate, namely:
A: In situ plastiglomerate, wherein molten plastic is adhered to the surface of a basalt flow
B: Clastic plastiglomerate containing molten plastic and basalt and coral fragments
C: Plastic amygdales in a basalt flow
About a fifth of plastiglomerates consist of "fishing-related debris" such as "netting, ropes, nylon fishing line, as well as remnants of oyster spacer tubes". "Confetti", the "embrittled remains of intact products, such as containers" is also very prevalent, but whole containers and lids are also found in plastiglomerates.
The paper explains that the plastiglomerates studied come mainly from a single Hawaiian beach that, thanks to local currents, collects an unusual amount of plastic. But the authors also note that as some samples were formed when trapped within organic material, while others were the result of plastic being melted onto rock, plastiglomerates can pop up anywhere.
Journal Reference:
Amanda Owen, John Murdoch MacDonald, David James Brown. Evidence for a rapid anthropoclastic rock cycle, Geology (DOI: 10.1130/G52895.1)
Merger of two massive black holes is one for the record books:
Physicists with the LIGO/Virgo/KAGRA collaboration have detected the gravitational wave signal (dubbed GW231123) of the most massive merger between two black holes yet observed, resulting in a new black hole that is 225 times more massive than our Sun. The results were presented at the Edoardo Amaldi Conference on Gravitational Waves in Glasgow, Scotland.
The LIGO/Virgo/KAGRA collaboration searches the universe for gravitational waves produced by the mergers of black holes and neutron stars. LIGO detects gravitational waves via laser interferometry, using high-powered lasers to measure tiny changes in the distance between two objects positioned kilometers apart. LIGO has detectors in Hanford, Washington, and in Livingston, Louisiana. A third detector in Italy, Advanced Virgo, came online in 2016. In Japan, KAGRA is the first gravitational-wave detector in Asia and the first to be built underground. Construction began on LIGO-India in 2021, and physicists expect it will turn on sometime after 2025.
To date, the collaboration has detected dozens of merger events since its first Nobel Prize-winning discovery. Early detected mergers involved either two black holes or two neutron stars. In 2021, LIGO/Virgo/KAGRA confirmed the detection of two separate "mixed" mergers between black holes and neutron stars.
LIGO/Virgo/KAGRA started its fourth observing run in 2023, and by the following year had announced the detection of a signal indicating a merger between two compact objects, one of which was most likely a neutron star. The other had an intermediate mass—heavier than a neutron star and lighter than a black hole. It was the first gravitational-wave detection of a mass-gap object paired with a neutron star and hinted that the mass gap might be less empty than astronomers previously thought.
Until now, the most massive black hole merger was GW190521, announced in 2020. It produced a new black hole with an intermediate mass—about 140 times as heavy as our Sun. Also found in the fourth run, GW231123 dwarfs the prior merger. According to the collaboration, the two black holes that merged were about 100 and 140 solar masses, respectively. It took some time to announce the discovery because the objects were spinning rapidly, near the limits imposed by the general theory of relativity, making the signal much more difficult to interpret.
The discovery is also noteworthy because it conflicts with current theories about stellar evolution. The progenitor black holes are too big to have formed from a supernova. Like its predecessor, GW190521, GW231123 may be an example of a so-called "hierarchical merger," meaning the two progenitor black holes were themselves each the result of a previous merger before they found each other and merged.
"The discovery of such a massive and highly spinning system presents a challenge not only to our data analysis techniques but will have a major effect on the theoretical studies of black hole formation channels and waveform modeling for many years to come," said Ed Porter of CNRS in Paris.
Arthur T Knackerbracket has processed the following story:
The two black holes had masses bigger than any before confirmed in such a collision. One had about 140 times the mass of the sun, and the other about 100 solar masses. And both were spinning at nearly the top speed allowed by physics.
“We don’t think it’s possible to form black holes with those masses by the usual mechanism of a star collapsing after it has died,” says Mark Hannam of Cardiff University in Wales, a physicist working on the Laser Interferometer Gravitational-Wave Observatory, or LIGO, which detected the crash. That has researchers considering other black hole backstories.
Scientists deduced the properties of the black holes from shudders of the fabric of spacetime called gravitational waves. Those waves were detected on November 23, 2023, by LIGO’s two detectors in Hanford, Wash., and Livingston, La.
The two black holes spiraled around one another, drawing closer and closer before coalescing into one, blasting out gravitational waves in the process. The merger produced a black hole with a mass about 225 times that of the sun, researchers report in a paper posted July 13 at arXiv.org and to be presented at the International Conference on General Relativity and Gravitation and the Edoardo Amaldi Conference on Gravitational Waves in Glasgow, Scotland, on July 14. The biggest bang-up previously confirmed produced a black hole of about 140 solar masses, researchers announced in 2020. In the new event, one of the two black holes alone had a similar mass.
Black holes with masses below about 60 times that of the sun are formed when a star collapses at the end of its life. But there’s a window of masses for black holes — between about 60 and 130 solar masses — where this mechanism is thought not to work. The stars that would form the black holes in that mass range are expected to fully explode when they die, leaving behind no remnant black hole.
For the newly reported black holes, uncertainties on the mass estimates mean it’s likely that at least one of them — and possibly both — fell in that forbidden mass gap.
The prediction of this mass gap is “a hill at least some people were willing to get wounded on, if not necessarily die on,” says Cole Miller of the University of Maryland in College Park, who was not involved with the research. So, to preserve the mass gap idea, scientists are looking for other explanations for the two black holes’ birth.
One possibility is that they were part of a family tree, with each black hole forming from an earlier collision of smaller black holes. Such repeated mergers might happen in dense clusters of stars and black holes. And they would result in rapidly spinning black holes, like the ones seen.
Every black hole has a maximum possible spinning speed, depending on its mass. One of the black holes in the collision was spinning at around 90 percent of its speed limit, and the other close to 80 percent. These are among the highest black hole spins that LIGO has confidently measured, Hannam says. Those high spins strengthen the case for the repeated-merger scenario, Hannam says. “We’ve seen signs of this sort of thing before but nothing as extreme as this.”
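For context, the "speed limit" here is the Kerr bound from general relativity (a standard result, not something specific to this paper): a black hole's angular momentum J is capped by its mass M, and the quoted percentages refer to the dimensionless spin

```latex
\chi \;=\; \frac{c\,J}{G\,M^{2}} \;\le\; 1
% "Spinning at around 90 percent of its speed limit" means \chi \approx 0.9.
```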
But there’s an issue with that potential explanation, Miller says. The black holes’ masses are so large that, if they came from a family tree, that tree might have required multiple generations of ancestors. That would suggest black holes that are spinning fast, but not quite as fast as these black holes are, Miller says. That’s because the black holes that merged in previous generations could have been spinning in a variety of different directions.
An alternative explanation is that the black holes bulked up in the shadow of a much bigger black hole, in what’s called an active galactic nucleus. This is a region of a galaxy surrounding a centerpiece supermassive black hole that is feeding on a disk of gas. If the black holes were born or fell into that disk, they could gobble up gas, ballooning in mass before merging.
Here, the spin also raises questions, Miller says. There’s a hint that the two black holes that merged in the collision weren’t perfectly aligned: They weren’t spinning in the same direction. That conflicts with expectations for black holes all steeping in the same disk.
“This event doesn’t have a clear and obvious match with any of the major formation mechanisms,” Miller says. None fit perfectly, but none are entirely ruled out. Even the simplest explanation, with black holes formed directly from collapsing stars, could still be on the table if one is above the mass gap and the other is below it.
Because the black holes are so massive, the scientists were able to capture only the last few flutters of gravitational waves, about 0.1 second from the tail end of the collision. That makes the event particularly difficult to interpret. What’s more, these black holes were so extreme that the models the scientists use to interpret their properties didn’t fully agree with one another. That led to less certainty about the characteristics of the black holes. Further work could improve the understanding of the black holes’ properties and how they formed.
Some physicists have reported hints that there are even more huge black holes out there. In a reanalysis of earlier public LIGO data, a team of physicists found evidence for five smashups that created black holes with masses around 100 to 300 times that of the sun, astrophysicist Karan Jani and colleagues reported May 28 in Astrophysical Journal Letters. The new discovery further supports the existence of this population of massive black holes.
Before LIGO’s discoveries, such massive black holes were thought not to exist, says Jani, of Vanderbilt University in Nashville, who is also a member of the LIGO collaboration. “It’s very exciting that there is now a new population of black holes of this mass.”
The LIGO Scientific Collaboration, the Virgo Collaboration and the KAGRA Collaboration. GW231123: a Binary Black Hole Merger with Total Mass 190-265 M⊙. Published online July 13, 2025.
S. Bini. New results from the LIGO, Virgo and KAGRA Observatory Network. The International Conference on General Relativity and Gravitation and the Edoardo Amaldi Conference on Gravitational Waves. Glasgow, July 14, 2025.
K. Ruiz-Rocha et al. Properties of “lite” intermediate-mass black hole candidates in LIGO-Virgo’s third observing run. Astrophysical Journal Letters. Vol. 985, May 28, 2025, doi: 10.3847/2041-8213/adc5f8