

posted by jelizondo on Monday September 29, @07:46PM   Printer-friendly

https://phys.org/news/2025-09-world-screwworm-parasite-northern-mexico.html

A dangerous parasite once eliminated in the United States has been detected in northern Mexico, close to the U.S. border.

Mexico's agriculture ministry confirmed Sunday that an 8-month-old cow in Nuevo León tested positive for New World screwworm. The animal was part of a shipment of 100 cattle from Veracruz, but only one showed signs of infestation.

The cow was treated, and all others received ivermectin, an antiparasitic medication, officials said.

The case was found in Sabinas Hidalgo, a small city less than 70 miles from Texas. It is the northernmost detection so far, moving much closer to the U.S. border than earlier outbreaks in other parts of Mexico.

Screwworm flies lay eggs in wounds and their larvae feed on living tissue, causing serious injury in livestock. The parasite was eradicated from the U.S. in the 1960s by mass-producing and releasing sterile flies to contain the flies' range, but recent outbreaks in Central America and Mexico have caused concerns again.

It is a "national security priority" U.S. Agriculture Secretary Brooke Rollins said in a statement.

The U.S. Department of Agriculture (USDA) and multiple other agencies are "executing a phased response strategy that includes early detection, rapid containment and long-term eradication efforts," the statement said.

Further, the USDA has invested nearly $30 million this year to expand sterile fly production in Panama and build a new facility in Texas, The New York Times reported.

Thousands of fly traps have also been placed along the border, with no infected flies detected so far.

Mexican President Claudia Sheinbaum said U.S. officials recently inspected local control measures and will issue a report soon. U.S. ports remain closed to livestock, bison and horse imports from Mexico until further notice, The Times said.


Original Submission

posted by jelizondo on Monday September 29, @03:02PM   Printer-friendly

8,000 years of human activities have caused wild animals to shrink and domestic animals to grow:

Humans have caused wild animals to shrink and domestic animals to grow, according to a new study out of the University of Montpellier in southern France. Researchers studied tens of thousands of animal bones from Mediterranean France covering the last 8,000 years to see how the size of both types of animals has changed over time.

Scientists already know that human choices, such as selective breeding, influence the size of domestic animals, and that environmental factors also impact the size of both. However, little is known about how these two forces have influenced the size of wild and domestic animals over such a prolonged period. This latest research, published in the Proceedings of the National Academy of Sciences, fills a major gap in our knowledge.

The scientists analyzed more than 225,000 bones from 311 archaeological sites in Mediterranean France. They took thousands of measurements of things like the length, width, and depth of bones and teeth from wild animals, such as foxes, rabbits and deer, as well as domestic ones, including goats, cattle, pigs, sheep and chickens.

But the researchers didn't just focus on the bones. They also collected data on the climate, the types of plants growing in the area, the number of people living there and what they used the land for. And then, with some sophisticated statistical modeling, they were able to track key trends and drivers behind the change in animal size.

The research team's findings reveal that for around 7,000 years, wild and domestic animals evolved along similar paths, growing and shrinking together in sync with their shared environment and human activity. However, all that changed around 1,000 years ago. Their body sizes began to diverge dramatically, especially during the Middle Ages.

Domestic animals started to get much bigger as they were being actively bred for more meat and milk. At the same time, wild animals began to shrink in size as a direct result of human pressures, such as hunting and habitat loss. In other words, human activities replaced environmental factors as the main force shaping animal evolution.

"Our results demonstrate that natural selection prevailed as an evolutionary force on domestic animal morphology until the last millennium," commented the researchers in their paper. "Body size is a sensitive indicator of systemic change, revealing both resilience and vulnerability within evolving human–animal–environment relationships."

This study is more than a look at ancient bones. By providing a long-term historical record of how our actions have affected the animal kingdom, the findings can also help with modern-day conservation efforts.


Original Submission

posted by jelizondo on Monday September 29, @10:17AM   Printer-friendly

Physicists nearly double speed of superconducting qubit readout in quantum computers

RIKEN physicists have found a way to speed up the readout of qubits in superconducting quantum computers, which should help to make them faster and more reliable.

After decades of being theoretical propositions, working quantum computers are just starting to emerge. For experimentalists such as Peter Spring of the RIKEN Center for Quantum Computing (RQC), it's an auspicious time to be working in the field.

"It's very exciting. It feels like this is a very fast-moving field that has a lot of momentum," says Spring. "And it really feels like experiments are catching up with theory."

When they come online, mature quantum computers promise to revolutionize computing, being able to perform calculations that are well beyond the capabilities of today's supercomputers. And it feels like that prospect is not so far off.

Currently, half a dozen technologies are jockeying to become the preferred platform for tomorrow's quantum computers. A leading contender is a technology based on superconducting electrical circuits. One of its advantages is the ability to perform calculations faster than other technologies.

Because of the very sensitive nature of quantum states, it is vital to regularly correct any errors that may have crept in. This necessitates repeatedly measuring a selection of qubits, the building blocks of quantum computers. But this operation is slower than quantum gate operations, making it a bit of a bottleneck.

"If qubit measurement is much slower than the other things you're doing, then basically it becomes a bottleneck on the clock speed," explains Spring. "So we wanted to see how fast we could perform qubit measurements in a superconducting circuit."

Now, Spring, Yasunobu Nakamura, also of RQC, and their co-workers have found a way to simultaneously measure four qubits in superconducting quantum computers in a little over 50 nanoseconds, which is about twice as fast as the previous record. The findings are published in the journal PRX Quantum.

A special filter ensures that the measurement line used to send the measurement signals doesn't interfere with the qubit itself. Spring and colleagues realized the filter by "coupling" a readout resonator with a filter resonator in such a way that energy from the qubits wasn't able to escape through the measurement line.
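For context, a standard textbook relation (background, not a result from the paper) explains why such a filter matters: a qubit coupled to its readout resonator can decay through the measurement line at the so-called Purcell rate,

    \Gamma_{\mathrm{Purcell}} \approx \kappa \, (g / \Delta)^2

where g is the qubit-resonator coupling strength, \Delta their frequency detuning, and \kappa the resonator linewidth. A Purcell filter suppresses this decay channel at the qubit frequency while leaving the readout signal path open, which is what allows a strong, fast measurement without shortening the qubit's lifetime.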

They were able to measure the qubits at very high accuracies, or "fidelities." "We were surprised at how high fidelity the readout turned out to be," says Spring. "On the best qubit, we achieved a fidelity of more than 99.9%. We hadn't expected that in such a short measurement time."

The team aims to achieve even faster qubit measurements by optimizing the shape of the microwave pulse used for the measurement.

More information: Peter A. Spring et al, Fast Multiplexed Superconducting-Qubit Readout with Intrinsic Purcell Filtering Using a Multiconductor Transmission Line, PRX Quantum (2025). DOI: 10.1103/prxquantum.6.020345
       


Original Submission

posted by jelizondo on Monday September 29, @05:35AM   Printer-friendly
from the data-goldmine dept.

The most alluring aspect of a CRM system, its centralized collection of customer data, is also its Achilles' heel:

Customer relationship management (CRM) systems sit at the heart of modern business. They store personal data, behavioral histories, purchase records, and every digital breadcrumb that shapes customer identity.

Yet while these platforms are marketed as engines of efficiency, they've become prime targets for cybercriminals.

The uncomfortable truth is that CRMs are often riddled with blind spots. Companies invest heavily in deployment, but treat cybersecurity as an afterthought. That oversight has left the door wide open to sophisticated attacks that exploit both technical gaps and human error. Let's take a look at how to fortify your defenses.

[...] More than anything, centralization multiplies risk. A breach doesn't just compromise one isolated dataset; it unlocks a holistic map of customer interactions. Sophisticated actors exploit these unified records to fuel identity theft and targeted phishing campaigns [PDF].

Worse still, because CRMs often integrate with marketing automation, billing, and support systems, a single compromise can cascade through multiple business-critical platforms.

The article goes on to discuss the human element of CRM insecurity, how integration fuels exploitation, and the costs of neglecting CRM security and convenience.


Original Submission

posted by hubie on Monday September 29, @12:49AM   Printer-friendly

If the technology can be scaled up, it could help make AI cheaper and more efficient:

A regular computer chip cannot reuse energy. All the electrical energy it draws to perform computations immediately becomes useless heat. Your phone or laptop will "use energy once and then throw it away," says Michael Frank, a scientist at Vaire Computing, the London company where the new test chip was made. When your device is working hard, you can feel the warmth of all that wasted energy.

[...] The new chip, tested in August, drew around 30 percent less energy than a regular chip performing the same computation. The system was reusing a portion of its electrical energy instead of wasting it as heat. "This is quite exciting," says Aatmesh Shrivastava, a computer engineer at Northeastern University in Boston. "We all want a computing system where we can recover energy."

To develop Ice River, Frank and the team at Vaire reimagined two inefficient features of modern computer chips.

First, chips sold now waste energy by erasing information. A typical chip's logic — the circuitry and rules that determine the way the chip processes information — only works in one direction. When you do a computation, the original 1s and 0s are erased, generating heat. Ice River instead uses reversible logic, which allows it to un-compute and get the original information back. This avoids losing heat to erasures.
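As a minimal illustration of what reversible logic means (a generic sketch, not Vaire's actual gate set or code), compare an ordinary AND gate, which discards information, with a reversible CNOT-style gate, which can be run backwards to recover its inputs:

    // Irreversible: two input bits in, one bit out; the inputs cannot be
    // reconstructed from the output, and the lost information ends up as heat.
    const and = (a: number, b: number): number => a & b;

    // Reversible (CNOT): maps (a, b) -> (a, a XOR b). Applying it a second time
    // "un-computes" the result and restores the original inputs.
    const cnot = (a: number, b: number): [number, number] => [a, a ^ b];

    const [a1, b1] = cnot(1, 0);    // forward step:    [1, 1]
    const [a0, b0] = cnot(a1, b1);  // un-compute step: back to [1, 0]
    console.log(and(1, 0), a0, b0); // 0 1 0

Because nothing is erased in the reversible case, in principle no minimum energy has to be dissipated by the logic itself; Landauer's principle sets that floor only for erasure.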

Second, modern chips waste energy when their voltage rapidly changes. Like a hammer coming down, the power supply slams 1s into 0s or vice versa. This allows for very fast computation, but those rapid changes give off heat.

In contrast, Ice River uses an approach called adiabatic computing, in which voltages gradually go up and down. "You can think of [the energy] as sloshing back and forth," Frank says. It's more like a pendulum than a hammer. The system can partly keep itself going and reuse energy in the next operations. Importantly, the power supply doing all this is housed on the chip itself.

Computer scientists have known since the 1960s that this sort of system was theoretically possible. In the 1990s at MIT, Frank worked on test systems that showed reversible logic working. But Ice River is the first physical chip to combine reversible logic with a pendulum-like power supply on board, he says. With just one or the other, Frank says, you can't reuse a meaningful amount of energy.

[...] Erik DeBenedictis, who runs the computing company Zettaflops in Albuquerque, N. M., says Vaire is "much closer" to a reversible chip that would be useful in the real world than anybody has come before.

However, there's still a long way to go. "This type of technology will take a long time to become more mainstream," Shrivastava says. For one thing, adiabatic computing "is a slow process," he says. Because these chips don't heat up like usual ones, you can pack them more closely together to make up for the slower speed, but that ups the cost. Vaire will need to find ways to scale up effectively and to reuse even more energy. "They have a challenge ahead of them," DeBenedictis says.


Original Submission

posted by hubie on Sunday September 28, @08:01PM   Printer-friendly

A "nation-state" is said to be involved:

The US Secret Service announced this morning that it has located and seized a cache of telecom devices large enough to "shut down the cellular network in New York City." And it believes a nation-state is responsible.

According to the agency, "more than 300 co-located SIM servers and 100,000 SIM cards" were discovered at multiple locations within the New York City area. Photos of the seized gear show what appear to be "SIM boxes" bristling with antennas and stuffed with SIM cards, then stacked on six-shelf racks. (SIM boxes are often used for fraud.) One photo even shows neatly stacked towers of punched-out SIM card packaging, suggesting that whoever assembled the system invested some quality time in just getting the whole thing set up.

The gear was identified as part of a Secret Service investigation into "anonymous telephonic threats" made against several high-ranking US government officials, but the setup seems designed for something larger than just making a few threats. The Secret Service believes that the system could have been capable of activities like "disabling cell phone towers, enabling denial of services attacks, and facilitating anonymous, encrypted communication between potential threat actors and criminal enterprises."

Analysis of data from so many devices will take time, but preliminary investigation already suggests that "nation-state threat actors" were involved; that is, this is probably some country's spy hardware. With the UN General Assembly taking place this week in New York, it is possible that the system was designed to spy on or disrupt delegates, but the gear was found in various places up to 35 miles from the UN. BBC reporting suggests that the equipment was "seized from SIM farms at abandoned apartment buildings across more than five sites," and the ultimate goal remains unclear.

While the gear has been taken offline, no arrests have yet been made, and the investigation continues.


Original Submission

posted by hubie on Sunday September 28, @03:15PM   Printer-friendly
from the neurons-of-a-model-flock-together dept.

https://phys.org/news/2025-09-physics-ai-local-flocking-motion.html

Researchers at Seoul National University and Kyung Hee University report a framework for controlling collective motions, such as rings, clumps, mills and flocks, by training a physics-informed AI to learn the local rules that govern interactions among individuals.

The paper is published in the journal Cell Reports Physical Science.

The approach specifies when an ordered state should appear from random initial conditions and tunes geometric features (average radius, cluster size, flock size). Furthermore, trained on published GPS trajectories of real pigeons, the model uncovers interaction mechanisms observed in real flocks.

Collective motion is an emergent phenomenon in which many self-propelled individuals (birds, fish, insects, robots, even human crowds) produce large-scale patterns without any central decision-making. Each individual reacts only to nearby neighbors, yet the group exhibits coherent collective motion. Analyzing how simple local interactions give rise to such global order is challenging because these systems are noisy and nonlinear, and perception is often directional.

To address these challenges, the team built neural networks that obey the laws of dynamics and are trained on simple pattern characteristics and, when available, experimental trajectories.

The neural networks infer two basic types of local interaction rules: distance-based rules that set spacing and velocity-based rules that align headings, as well as their combination. The team also showed that self-propelled agents following these rules reproduce the intended target collective patterns with specified geometrical characteristics.

Examples include adjusting ring radius, cluster size in clumps, and rotational mode (either single or double) in mill; inducing continuous transitions among different collective modes; and achieving motions near obstacles and within confined areas.
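To make those two rule families concrete, here is a minimal agent-update sketch (a generic boids-style illustration of distance-based spacing plus velocity-based alignment, not the authors' trained networks; all constants and type names are assumptions):

    type Agent = { x: number; y: number; vx: number; vy: number };

    const NEIGHBOR_RADIUS = 5; // assumed interaction range
    const TARGET_SPACING = 2;  // assumed preferred distance between agents
    const K_SPACING = 0.05;    // assumed gain of the distance-based rule
    const K_ALIGN = 0.1;       // assumed gain of the velocity-based rule
    const DT = 0.1;

    function step(agents: Agent[]): Agent[] {
      return agents.map((a) => {
        let fx = 0, fy = 0, avx = 0, avy = 0, n = 0;
        for (const b of agents) {
          if (b === a) continue;
          const dx = b.x - a.x, dy = b.y - a.y;
          const d = Math.hypot(dx, dy);
          if (d === 0 || d > NEIGHBOR_RADIUS) continue;
          // Distance-based rule: attract when too far apart, repel when too close.
          fx += K_SPACING * (d - TARGET_SPACING) * (dx / d);
          fy += K_SPACING * (d - TARGET_SPACING) * (dy / d);
          // Velocity-based rule: accumulate neighbors' velocities for alignment.
          avx += b.vx; avy += b.vy;
          n++;
        }
        if (n > 0) {
          // Nudge own velocity toward the neighborhood average heading.
          fx += K_ALIGN * (avx / n - a.vx);
          fy += K_ALIGN * (avy / n - a.vy);
        }
        const vx = a.vx + fx * DT, vy = a.vy + fy * DT;
        return { x: a.x + vx * DT, y: a.y + vy * DT, vx, vy };
      });
    }

    // Example: two nearby agents drift toward the preferred spacing and a shared heading.
    console.log(step([
      { x: 0, y: 0, vx: 1, vy: 0 },
      { x: 1, y: 0, vx: 0, vy: 1 },
    ]));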

The same framework can be fit to short segments of real trajectories by incorporating an anisotropic field of view, yielding interaction laws consistent with the leader-follower hierarchy observed in nature.

By turning collective behavior into something that can be decoded, this approach offers practical engineering and scientific benefits. In robotics, it provides a blueprint for programming drone and ground-robot swarms to form and switch patterns on demand.

In the natural sciences, it helps quantitatively identify which local interactions are sufficient to explain observed flocking, enabling hypothesis testing about sensory ranges and alignment strength.

More broadly, the method could guide the design of active materials that self-assemble into target shapes and help generate realistic synthetic datasets for studying complex, decentralized systems.

More information: Dongjo Kim et al, Commanding emergent behavior with neural networks, Cell Reports Physical Science (2025). DOI: 10.1016/j.xcrp.2025.102857


Original Submission

posted by hubie on Sunday September 28, @10:29AM   Printer-friendly

"They have to have on-orbit refueling because they don't access space as frequently as we do."

SpaceX scored its 500th landing of a Falcon 9 first stage booster on an otherwise routine flight earlier this month, sending 28 Starlink communications satellites into orbit. Barring any unforeseen problems, SpaceX will mark the 500th re-flight of a Falcon first stage later this year.

A handful of other US companies, including Blue Origin, Rocket Lab, Relativity Space, and Stoke Space, are on the way to replicating or building on SpaceX's achievements in recycling rocket parts. These launch providers are racing a medley of Chinese rocket builders to become the second company to land and reuse a first stage booster.

But it will be many years—perhaps a decade or longer—until anyone else matches the kinds of numbers SpaceX is racking up in the realm of reusable rockets. SpaceX's dominance in this field is one of the most important advantages the United States has over China as competition between the two nations extends into space, US Space Force officials said Monday.

"It's concerning how fast they're going," said Brig. Gen. Brian Sidari, the Space Force's deputy chief of space operations for intelligence. "I'm concerned about when the Chinese figure out how to do reusable lift that allows them to put more capability on orbit at a quicker cadence than currently exists."

[...] "They've put more satellites on orbit," Sidari said, referring to China. "They still do not compare to the US, but it is concerning once they figure out that reusable lift. The other one is the megaconstellations. They've seen how the megaconstellations provide capability to the US joint force and the West, and they're mimicking it. So, that does concern me, how fast they're going, but we'll see. It's easier said than done. They do have to figure it out, and they do have some challenges that we haven't dealt with."

One of those challenges is China's continued reliance on expendable rockets. This has made it more important for China to make "game-changing" advancements in other areas, according to Chief Master Sgt. Ron Lerch, the Space Force's senior enlisted advisor for intelligence.

Lerch pointed to the recent refueling of a Chinese satellite in geosynchronous orbit, more than 22,000 miles (nearly 36,000 kilometers) over the equator. China's Shijian-21 and Shijian-25 satellites, known as SJ-21 and SJ-25 for short, came together on July 2 and have remained together ever since, according to open source orbital tracking data.

No one has refueled a spacecraft so far from Earth before. SJ-25 appears to be the refueler for SJ-21, a Chinese craft capable of latching onto other satellites and towing them to different orbits. Chinese officials say SJ-21 is testing "space debris mitigation" techniques, but US officials have raised concerns that China is testing a counter-space weapon that could sidle up to an American or allied satellite and take control of it.

Lerch said satellite refueling is more important to China than it is to the United States. With refueling, China can achieve a different kind of reuse in space while the government waits for reusable rockets to enter service.

"They have to have on-orbit refueling as a capability because they don't access space as frequently as we do," Lerch said Monday at the Air Force Association's Air, Space, and Cyber Conference. "When it comes to replenishing our toolkit, getting more capability (on orbit) and reconstitution, having reusable launch is what affords us that ability, and the Chinese don't have that. So, pursuing things like refueling on orbit, it is game-changing for them."

[...] Meanwhile, China recently started deploying its own satellite megaconstellations. Chinese officials claim these new satellite networks will be used for Internet connectivity. That may be so, but Pentagon officials worry China can use them for other purposes, just as the Space Force is doing with Starlink, Starshield, and other programs.

[...] China's military has "observed how we fight, the techniques we use, the weapons systems we have," Pearson said. "When you combine that with intellectual property theft that has fueled a lot of their modernization, they have deliberately developed and modernized to counter our American way of war."


Original Submission

posted by hubie on Sunday September 28, @05:44AM   Printer-friendly
from the if-only-there-was-another-OS-that-could-run-on-that-hardware dept.

Consumer Reports slams Microsoft for Windows 10 mess, urges extension of free updates:

Consumer Reports (CR), the venerable consumer rights organization known for its in-depth product testing, sent a letter to Microsoft CEO Satya Nadella this week. The letter, authored by the nonprofit's policy fellow Stacey Higginbotham and director of technology policy Justin Brookman, expressed "concern about Microsoft's decision to end free ongoing support for Windows 10 next month."

Consumer Reports isn't the first organization to come to the defense of the soon-to-be-orphaned Windows 10. Nearly two years ago, in October 2023, the Public Interest Research Group (PIRG) urged Microsoft to reconsider its decision, calling it "a bad deal for both users and the planet." The group warned that up to 400 million perfectly functional PCs could be discarded simply because they don't meet Windows 11's hardware requirements.

PIRG issued a new plea this week, bringing together a group of consumer and environmental organizations, including the European Right to Repair coalition, iFixit, and Consumer Reports.

In its letter, CR argues on behalf of its 5 million members that Microsoft's decision "will strand millions of consumers who have computers that are incompatible with Windows 11, and force them to pay $30 for a one-year extension of support, spend hundreds on a new Windows 11-capable computer, or do nothing and see the security and functionality of their computer degrade over time."

And this isn't just a consumer issue: Having hundreds of millions of unprotected PCs that can be commandeered for attacks on other entities is a risk to national security.

The group cites a member survey from earlier this year, covering more than 100,000 laptop and desktop computer owners. "More than 95% of all laptop and desktop computers purchased since the beginning of 2019 and owned for no more than five years were still in use," they reported. Those members tend to keep their Windows-based computers for a long time, the group concluded. "[I]t's clear that consumers purchased machines before Microsoft announced the hardware needs for Windows 11, expecting to be able to operate them through the next Microsoft OS transition."

The letter's authors also spotlight a fundamental contradiction in Microsoft's plans. "Arguing that Windows 11 is an essential upgrade to boost cybersecurity while also leaving hundreds of millions of machines more vulnerable to cyber attacks is hypocritical." The decision to offer extended security updates for one year is also consumer-hostile, they contend, with customers forced to pay $30 to preserve their machine's security, or use unrelated Microsoft products and services "just so Microsoft can eke out a bit of market share over competitors."

[...] After all that, the group is making a fairly modest request. "Consumer Reports asks Microsoft to extend security updates for free to all users who are unable to update their machine while also working to entice more people to get off Windows 10. ... [W]e also ask that Microsoft create a partnership to provide recycling of those machines to consumers abandoning their hardware."

This probably isn't the publicity that Microsoft wants as it urges its customers to buy a new Windows 11 PC. The Consumer Reports brand is also likely to break through to mainstream media in a way that more technical organizations can't.

Will this be enough to change hearts and minds in Redmond? It looks unlikely. I asked Microsoft for comment, and after a week, a spokesperson responded that the company had "nothing to share" on the subject.


Original Submission

posted by hubie on Sunday September 28, @12:56AM   Printer-friendly

Airlines Seen as Vulnerable as Ransomware Confirmed in Weekend Cyberattack

A ransomware attack was confirmed as the source of the weekend's airport disruption:

While no crew has claimed responsibility for the attack that disrupted a number of European airports this weekend, including those in Brussels, Berlin, London, Dublin and Cork, Europe's cybersecurity agency (ENISA) confirmed to the BBC that a ransomware attack was behind the chaos.

"The type of ransomware has been identified. Law enforcement is involved to investigate," the agency told Reuters.

The cyberattack disrupted check-in and baggage systems last Friday (19 September), targeting 'Muse' (multi-user system environment), a software tool made by Collins Aerospace, which provides a range of aircraft technologies, including baggage tagging and handling.

Experts had been warning for some time that airlines are particularly susceptible to widespread attacks. In July, after UK retailers were hit hard with Scattered Spider attacks, the FBI and cyber experts warned that airlines were likely to be next in line. Hackers using Scattered Spider tactics are renowned for targeting one sector at a time, although there is no indication as yet that they were behind this attack.

[...] "The aviation sector, with its complex network of third-party suppliers and contractors, presents an attractive target," said Haris Pylarinos, founder and CEO of cybersecurity company Hack the Box back in July. "If just one weak link is compromised, the ripple effects could be massive."

While the effects of the weekend attack were limited, it is certainly a major wake-up call for the airline industry.

"I'm deeply concerned but not surprised by the scale of the cyberattack on European airports," said Adam Blake, CEO and founder of cybersecurity company ThreatSpike

"Businesses are pouring vast sums of money into advanced security tools and bolt-on solutions, but it's just fragmenting security posture, creating overlapping controls and gaps for adversaries to exploit.

"Cybersecurity needs to be treated a lot more holistically, as a strategic priority built on end-to-end visibility, consistent monitoring and response, and proactive threat detection," he warned. "Where organisations stitch together a patchwork of vendors, vulnerabilities will inevitably emerge."

UK Arrests Man Linked to Ransomware Attack That Caused Airport Disruptions Across Europe


The U.K.'s National Crime Agency (NCA) said on Wednesday that a man was arrested in connection to the ransomware attack that has caused delays and disruptions at several European airports since the weekend.

The hack, which began Friday, targeted check-in systems provided by Collins Aerospace, causing delays at Brussels, Berlin, and Dublin airports, as well as London's Heathrow, which lasted until yesterday.

While the NCA did not name the arrested man, the agency said he is "in his forties" and that he was arrested in the southern county of West Sussex on Tuesday under the country's Computer Misuse Act "as part of an investigation into a cyber incident impacting Collins Aerospace."

The man was released on conditional bail, according to the agency.

"Although this arrest is a positive step, the investigation into this incident is in its early stages and remains ongoing," said Paul Foster, deputy director and head of the NCA's National Cyber Crime Unit, in a statement.


Original Submission #1 | Original Submission #2

posted by hubie on Saturday September 27, @08:13PM   Printer-friendly

How Your Utility Bills Are Subsidizing Power-Hungry AI:

This summer, across the Eastern United States, home electricity bills have been rising. From Pittsburgh to Ohio, people are paying $10 to $27 more per month for electricity. The reason? The rising costs of powering data centers running AI. As providers of the largest and most compute-intensive AI models keep adding them into more and more aspects of our digital lives with little regard for efficiency (and without giving users much of a choice), they grow increasingly dependent on a growing share of the existing energy and natural resources, leading to rising costs for everyone else.

In particular, this means that average citizens living in states that host data centers bear the cost of these choices, even though they rarely reap any benefits themselves. This is because data centers are connected to the whole world via the Internet, but use energy locally, where they're physically located. And unlike the apartments, offices, and buildings connected to a traditional energy grid, the energy use of AI data centers is highly concentrated; think as much as an entire metal smelting plant over a location the size of a small warehouse. For example, the state of Virginia is home to 35% of all known AI data centers worldwide, and together they use more than a quarter of the state's electricity. And they're expanding fast — in the last 7 years, global energy use by data centers has grown 12% a year, and it's set to more than double by 2030, using as much electricity as the whole of Japan.

The costs of this brash expansion of data centers for AI are reflected first and foremost in the energy bills of everyday consumers. In the United States, utility companies fund infrastructure projects by raising the costs of their services for their entire client base (who often have no choice in who provides them with electricity). These increased rates are then leveraged to expand the energy grid to connect new data centers to new and existing energy sources and build mechanisms to keep the grid balanced despite the increased ebb and flow of supply and demand, particularly in places like Virginia that have a high concentration of data centers. Also, on top of amortizing the base infrastructure cost, electricity prices fluctuate based on demand, which means that the cost of having your lights on or running your AC will rise when there is a high demand from data centers on the same grid.

These costs also come with dire impacts on the stability of an energy infrastructure that is already stretched to the breaking point by growing temperatures and extreme weather. In fact, last summer, a lightning storm caused a surge protector to fail near Fairfax, Virginia, which resulted in 200 data centers switching to local generators, causing the demand on the local energy grid to plummet drastically. This nearly caused a grid-wide blackout and, for the first time, made federal regulators recognize data centers as a new source of instability in power supplies, on top of natural disasters and accidents.

[...] While recent decisions at the federal level in the US have been more aligned with a "full speed ahead" approach to data center infrastructure, states such as Ohio and Georgia are passing laws that would put the onus on data centers, not consumers, to pay the cost of new investments to expand the power grid. Countries such as the Netherlands and Ireland have gone a step further, putting moratoriums on the construction of new data centers in key regions until grid operators can stabilize existing grids and make sure that they don't get overwhelmed.

But we still need to rethink our relationship with multi-purpose, generative AI approaches. At a high level, we need to move away from treating AI as a centrally developed commodity, where developers need to push adoption across the board to justify rising costs in a vicious cycle that leads to ever costlier technology, and toward AI that is developed based on specific demands, using the right tool and the right model to solve real problems at reasonable cost. This would mean choosing smaller, task-specific models for tasks like question answering and classification, using and sharing open-source models to allow incremental progress and reuse by the community, and incentivizing the measurement and disclosure of models' energy consumption using approaches like the AI Energy Score.

The next few years will be pivotal for determining the future of AI and its impact on energy grids worldwide. It's important to keep efficiency and transparency at the heart of the decisions we make. A future when AI is more decentralized, efficient and community-driven can ensure that we are not collectively paying the price for the profits of a few.

How AI and data center growth drive copper demand in the US:

The growing demand for copper is no secret: over the last year the need for copper — as well as the motivation for more domestically produced copper in the US — has been widely discussed among market participants, major companies, the current administration and even the general public. Copper demand in data centers is also increasing rapidly. The country and the world need copper, and in the next ten years they're only going to need more.

At the same time, conversations surrounding sustainability and energy needs in the US have changed over the last year, with the focus shifting from electric vehicles (EVs) to artificial intelligence (AI). And with this rise comes heightened demand for data centers — where copper plays a huge part.

According to Fastmarkets analysts' latest 10-year outlook report, published in May, copper consumption from energy transition sectors will rise at a compound annual growth rate (CAGR) of 8.9% over the next 10 years — including 10.4% for the EV sector, 6.8% for the solar power industry and 7.8% for the wind power industry — while consumption from traditional non-energy transition sectors will rise at a CAGR of 1.1%. A recent report by Macquarie also estimated that between 330,000 and 420,000 tonnes of copper will be used in data centers by 2030.
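For readers unfamiliar with the term, a compound annual growth rate simply compounds year over year; a tiny sketch (the figures below are illustrative, not taken from the reports):

    // Project a quantity forward under a compound annual growth rate (CAGR).
    function projectCAGR(current: number, cagr: number, years: number): number {
      return current * Math.pow(1 + cagr, years);
    }

    // An 8.9% CAGR scales a sector by about 2.35x over 10 years,
    // versus roughly 1.12x at a 1.1% CAGR.
    console.log(projectCAGR(1, 0.089, 10).toFixed(2)); // "2.35"
    console.log(projectCAGR(1, 0.011, 10).toFixed(2)); // "1.12"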

[...] According to the CDA, copper demand in North America's data center sector is being driven by several converging factors: the pace of new construction is fast, pushing demand higher, and the data centers themselves require large volumes of copper for power distribution and grounding.

Leavens said the data centers themselves are the biggest factor: an estimated 30-40% of data center construction involves electrical work, all of which contains copper.

[...] "Particularly now with the rise of AI, we are using more energy, which means there's more heat... which means you need more cooling," Leavens said.

"Newer data centers — especially those built for artificial intelligence — are designed to handle far greater power loads, which means even more copper is needed to safely manage, transmit and ground electricity," Kotbra told Fastmarkets on Wednesday September 3.

"Together, rapid capacity growth, higher power intensity and advanced cooling methods are making copper more critical than ever to the data center industry," Kotbra said.

Leavens also noted that when the US government calculates the data for construction, it is only looking at what is permanent in the building, which includes the shell of the building and the wires within. Since the servers are not permanent, they are not included in the same construction statistics.

"So, it's a little confusing how the government records it," Leavens said.

[...] According to a National Mining Association (NMA) report published on August 19, copper consumption is expected to increase by more than 110 percent by 2050, while US energy demand is projected to increase by as much as 30% in the same period of time, and the AI industry is projected to reach trillions of dollars in market value in just the next few years.

[...] As the need for copper grows and data centers expand, so does the potential for innovation, sources said, and sustainability is a key factor in the copper business and the energy industry, especially globally.

Sustainability goals are reshaping how data centers think about copper procurement, Kotbra said, with operators increasingly focused on sourcing copper products with higher recycled content, underscoring the importance of accurately quantifying and verifying those amounts.

"This push for transparency not only supports corporate sustainability commitments, but it also drives demand for recycling infrastructure that keeps copper in circulation," Kotbra said.

On the topic of recyclability, Leavens said "our companies are facilitating that [recyclability] through design of products. We recognize that if you bury the copper where it's difficult to get it out of the device, you're not making it as sustainable as it could be; so they're thinking ahead [about the] end-of-life of this product: how easy could it be to take it apart and get the copper out."

Kotbra agreed that recyclability is one of the most important innovations in the data center industry, citing similar considerations for end-of-life recovery that make it easier to separate copper from other materials and ensure it can be reclaimed at high rates.

"Since copper is infinitely recyclable without losing performance, these design improvements are helping the industry capture even more of its value and keep it in circulation for generations to come," Kotbra told Fastmarkets.

[...] "Copper is a vital part of every economy globally," Leavens told Fastmarkets. "I hear so much about the security risk for copper — it's a critical material now, and the government recognizes it as that. We need it; but also there are huge sources of it around, through stockpiles or friendly countries such as Chile or Australia. As an economist, I'm just not concerned that there would be an absolute shutdown like you saw with oil in the 1970s," Leavens said.

Journal Reference: DOI: https://dl.acm.org/doi/10.1145/3630106.3658542


Original Submission

posted by jelizondo on Saturday September 27, @03:27PM   Printer-friendly

Cloudflare DDoSed itself with React useEffect hook blunder:

Cloudflare has confessed to a coding error using a React useEffect hook, notorious for being problematic if not handled carefully, that caused an outage for the platform's dashboard and many of its APIs.

The outage was on September 12, lasted for over an hour, and was triggered by a bug in the dashboard, which caused "repeated, unnecessary calls to the Tenant Service API," according to VP of engineering Tom Lianza. This API is part of the API request authorization logic and therefore affected other APIs.

The cause was hard to troubleshoot since the apparent issue was with the API availability, disguising the fact that it was the dashboard that was overloading it.

Lianza said the core issue was a React useEffect hook with a "problematic object in its dependency array." The useEffect hook is a function that takes a setup function, which may optionally return a cleanup function, plus an optional list of dependencies. The setup function runs again whenever one of those dependencies changes.

In this Cloudflare case, the function made calls to the Tenant Service API, and one of the dependencies was an object that was "recreated on every state or prop change." The consequence was that the hook ran repeatedly during a single render of the dashboard, when it was only intended to run once. The function ran so often that the API was overloaded, causing the outage.
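A minimal sketch of that failure mode and the usual fix (illustrative only, not Cloudflare's code; fetchTenantInfo, accountId and page are hypothetical names):

    import { useEffect } from "react";

    // Hypothetical API call, declared here only to keep the sketch self-contained.
    declare function fetchTenantInfo(params: { accountId: string; page: number }): Promise<void>;

    // Problematic pattern: `params` is a brand-new object on every render, so React
    // sees the dependency as changed and re-runs the effect (and the API call) each time.
    function DashboardBuggy({ accountId, page }: { accountId: string; page: number }) {
      const params = { accountId, page };
      useEffect(() => {
        void fetchTenantInfo(params);
      }, [params]);
      return null;
    }

    // Safer pattern: depend on the primitive values (or memoize the object with useMemo),
    // so the effect re-runs only when those values actually change.
    function DashboardFixed({ accountId, page }: { accountId: string; page: number }) {
      useEffect(() => {
        void fetchTenantInfo({ accountId, page });
      }, [accountId, page]);
      return null;
    }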

The useEffect hook is powerful but often overused. The documentation is full of warnings about misuse and common errors, and encouragement to use other approaches where possible. Performance pitfalls with useEffect are common.

The incident triggered a discussion in the community about the pros and cons of useEffect. One developer said on Reddit there were too many complaints about useEffect, that it is an essential part of React, and "the idea that it is a bad thing to use is just silly." Another reaction, though, was "the message has not yet been received. Nearly everyone I know continues to put tons of useEffects everywhere for no reason."

Another remarked: "the real problem is the API going down by excessive API calls... in a company that had dedicated services to prevent DDoS [Distributed Denial of Service]."

Lianza said the Tenant Service had not been allocated sufficient capacity to "handle spikes in load like this" and more resources have now been allocated to it, along with improved monitoring. In addition, new information has been added to API calls from the dashboard to distinguish retries from new requests, since if the team had known that it was seeing "a large volume of new requests, it would have made it easier to identify the issue as a loop in the dashboard."

Cloudflare accidentally DDoS-attacked itself:

Cloudflare, a platform that provides network services, was the victim of a DDoS attack last week. It was also accidentally the cause of it.

You might remember Cloudflare was linked to a massive outage in June of this year. When Cloudflare went down, so did sites like Spotify, Google, Snapchat, Discord, Character.ai, and more, all of which rely on Cloudflare's services. That time, the disruption was sparked by a Google Cloud outage. Earlier this month, Cloudflare had another blunder, albeit much less disruptive than its outage from the summer — but this time, it did it to itself.

"We had an outage in our Tenant Service API which led to a broad outage of many of our APIs and the Cloudflare Dashboard," Tom Lianza, the vice president of engineering for Cloudflare and Joaquin Madruga, the vice president of engineering for the developer platform at Cloudflare, wrote in a Sept. 13 blog post. "The incident's impact stemmed from several issues, but the immediate trigger was a bug in the dashboard."

The bug, according to Lianza and Madruga, caused "repeated, unnecessary calls to the Tenant Service API." By accident, Cloudflare included a "problematic object in its dependency array," which was recreated and treated as new on each change, causing the effect to re-run until, eventually, the "API call executed many times during a single dashboard render instead of just once."

"When the Tenant Service became overloaded, it had an impact on other APIs and the dashboard because Tenant Service is part of our API request authorization logic. Without Tenant Service, API request authorization can not be evaluated. When authorization evaluation fails, API requests return 5xx status codes," the blog reads.

Everything is back on track at Cloudflare for now.

"We're very sorry about the disruption," the blog post reads. "We will continue to investigate this issue and make improvements to our systems and processes."


Original Submission

posted by jelizondo on Saturday September 27, @10:41AM   Printer-friendly

Magma displacement triggered tens of thousands of earthquakes, Santorini swarm study finds

Tens of thousands of earthquakes shook the Greek island of Santorini and the surrounding area at the beginning of the year. Now, researchers have published a comprehensive geological analysis of the seismic crisis in the journal Nature.

The researchers—from GFZ Helmholtz Center for Geosciences and GEOMAR Helmholtz Center for Ocean Research Kiel, together with international colleagues—integrated data from earthquake stations and ocean bottom instruments deployed at the Kolumbo underwater volcano seven kilometers away from Santorini and used a newly developed AI-based method for locating earthquakes.

This enabled the researchers to reconstruct the subsurface processes in unique detail, revealing that around 300 million cubic meters of magma rose from the deep crust and came to rest at a depth of around four kilometers below the ocean floor. During its ascent through the crust, the molten magma generated thousands of earthquakes and seismic tremors.

Santorini is located in the eastern Mediterranean and forms part of the Hellenic volcanic arc, a highly active geological zone. This world-famous island group forms the rim of a caldera, which was created by a massive volcanic eruption around 3,600 years ago.

The active underwater volcano Kolumbo lies in the immediate vicinity. In addition, the region is crossed by several active geological fault zones, which are the result of the African Plate pushing northeast against the Hellenic Plate. Earth's crust beneath the Mediterranean region has broken up into several microplates that shift against each other, and in some cases subduct and melt, thus sourcing volcanic activity.

Santorini has produced multiple eruptions in historic times, most recently in 1950. In 1956, two severe earthquakes occurred in the southern Aegean Sea, only 13 minutes apart, between Santorini and the neighboring island of Amorgos. These had magnitudes of 7.4 and 7.2 respectively, triggering a tsunami.

The earthquake swarm that initiated in late January 2025 took place in exactly this region. During the crisis, more than 28,000 earthquakes were recorded. The strongest of these reached magnitudes of over 5.0. The severe shaking caused great public concern during the seismic crisis, partly because the cause was initially unclear, being potentially either tectonic or volcanic.

The new study now shows that the earthquake swarm was triggered by the deep transport of magma. The chain of events had already begun in July 2024, when magma rose into a shallow reservoir beneath Santorini. This initially led to a barely noticeable uplift of Santorini by a few centimeters. At the beginning of January 2025, seismic activity intensified, and from the end of January, magma began to rise from the depths, accompanied by intense seismic activity.

However, the seismic activity shifted away from Santorini over a distance of more than 10 kilometers to the northeast. During this phase, the foci of the quakes moved in several pulses from a depth of 18 kilometers upwards to a depth of only 3 kilometers below the seafloor. The high-resolution temporal and spatial analysis of the earthquake distribution, combined with satellite radio interferometry (InSAR), GPS ground stations and seafloor stations, made it possible to model the events.

Dr. Marius Isken, geophysicist at the GFZ and one of the two lead authors of the study, says, "The seismic activity was typical of magma ascending through Earth's crust. The migrating magma breaks the rock and forms pathways, which causes intense earthquake activity. Our analysis enabled us to trace the path and dynamics of the magma ascent with a high degree of accuracy."

As a result of the magma movement, the island of Santorini subsided again, which the authors interpret as evidence of a previously unknown hydraulic connection between the two volcanoes.

Dr. Jens Karstens, marine geophysicist at GEOMAR and also lead author of the study, explains, "Through close international cooperation and the combination of various geophysical methods, we were able to follow the development of the seismic crisis in near real time and even learn something about the interaction between the two volcanoes. This will help us to improve the monitoring of both volcanoes in the future."

Two factors in particular enabled the exceptionally detailed mapping of the subsurface. The first was an AI-driven method developed at the GFZ for the automatic evaluation of large seismic data sets. The second was that GEOMAR had already deployed underwater sensors at the crater of the underwater volcano Kolumbo at the beginning of January as part of the MULTI-MAREX project. These sensors not only measured seismic signals directly above the reservoir, but also pressure changes resulting from the subsidence of the seabed by up to 30 centimeters during the intrusion of magma beneath Kolumbo.

Scientific research activity on Santorini is continuing despite the decline in seismic activity. The GFZ is conducting repeated gas and temperature measurements on Santorini, while GEOMAR currently has eight seabed sensor platforms in operation.

Prof. Dr. Heidrun Kopp, Professor of Marine Geodesy at GEOMAR and project manager of MULTI-MAREX, says, "The joint findings were always shared with the Greek authorities in order to enable the fastest and most accurate assessment of the situation possible in the event of new earthquakes."

Co-author Prof. Dr. Paraskevi Nomikou is Professor of Geological Oceanography at the University of Athens and works closely with the German partner institutes on the MULTI-MAREX project. She adds, "This long-standing cooperation made it possible to jointly manage the events at the beginning of the year and to analyze them so precisely from a scientific point of view. Understanding the dynamics in this geologically highly active region as accurately as possible is crucial for the safety and protection of the population."

More information: Marius Isken et al, Volcanic crisis reveals coupled magma system at Santorini and Kolumbo, Nature (2025). DOI: 10.1038/s41586-025-09525-7. www.nature.com/articles/s41586-025-09525-7
       


Original Submission

posted by jelizondo on Saturday September 27, @05:56AM   Printer-friendly

China's latest GPU arrives with claims of CUDA compatibility and RT support:

While Innosilicon Technology's products may not be prominently featured on the list of the best graphics cards, the company has been hard at work developing its Fenghua (translated as Fantasy) series of graphics cards. As ITHome reported, Innosilicon recently unveiled the Fenghua No. 3, the company's latest flagship GPU. The company promises that its third GPU iteration is a significant advancement over its predecessors.

While previous Fenghua No.1 and Fenghua No.2 graphics cards were based on Imagination Technologies' PowerVR IP, the new Fenghua No.3 leverages the open-source RISC-V architecture instead. The graphics card reportedly borrows a page from OpenCore Institute's Nanhu V3 project.

The company representative didn't provide any more details on the Fenghua No.3 during the launch event, only that it features a home-grown design from the ground up. The Fenghua No.3 is also purportedly compatible with Nvidia's proprietary CUDA platform, which could open many doors for the graphics card if it holds true.

The Fenghua No.3 is designed for a bunch of different workloads, as Innosilicon describes it as an "all-function GPU" (translation). The company plans to deploy the graphics card in different sectors, including AI, scientific computing, CAD work, medical imaging, and gaming. Therefore, it's safe to assume there will be other variants of the Fenghua No.3.

From a gaming perspective, the Fenghua No.3 claims support for the latest APIs, including DirectX 12, Vulkan 1.2, and OpenGL 4.6. The graphics card is also reportedly equipped to support ray tracing. The team demonstrated the Fenghua No.3 in titles such as Tomb Raider, Delta Force, and Valorant at the press conference, and reports claim that the gameplay was smooth. However, there was no available information on game settings, resolution, and actual frame rates, so take these claims with a grain of salt.

The Fenghua No.3 reportedly comes equipped with 112GB+ of HBM memory, making it an ideal product for AI. A single Fenghua No. 3 can handle 32B and 72B LLM models, while eight of them in unison work with 671B and 685B parameter models. Innosilicon claims unconditional support for the DeepSeek V3, R1, and V3.1 models, as well as the Qwen 2.5 and Qwen 3 model families.
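Some rough back-of-envelope arithmetic (our own, not vendor data) shows why those splits are plausible if the models are run with 8-bit weights:

    // Weight storage for an LLM is roughly parameter count x bytes per parameter,
    // ignoring activations and KV cache. Assumes 8-bit (1 byte) weights.
    const weightGB = (paramsBillions: number, bytesPerParam: number): number =>
      paramsBillions * bytesPerParam; // 1e9 params x 1 byte ~= 1 GB per billion params

    console.log(weightGB(72, 1));  // ~72 GB  -> fits within a single card's 112 GB of HBM
    console.log(weightGB(671, 1)); // ~671 GB -> needs several cards (8 x 112 GB ~= 896 GB)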

Innosilicon also boasted that the Fenghua No.3 is China's first graphics card to support the YUV444 format, which offers the best color detail and fidelity—a feature particularly beneficial for users who perform extensive CAD industrial work or video editing. The manufacturer also highlighted the Fenghua No.3's support for 8K (7680 x 4320) displays. The graphics card can drive up to six 8K monitors at 30 Hz.

The Fenghua No.3 is the world's first graphics card to offer native support for DICOM (Digital Imaging and Communication in Medicine). It enables the precise visualization of X-rays, MRIs, CT scans, and ultrasounds on standard monitors, eliminating the need for costly, specialized grayscale medical displays.

China's semiconductor industry is gradually improving. Although it is unlikely to rival that of the United States in the near future, it may not necessarily need to do so. China's primary goal is to achieve self-sufficiency in key areas. Announcements such as the Fenghua No.3 may seem insignificant individually. Collectively, they might amass into substantial progress, akin to accumulating grains of sand that eventually form a small beach.


Original Submission

posted by jelizondo on Saturday September 27, @01:06AM   Printer-friendly

https://phys.org/news/2025-09-india-unplanned-hydropower-tunnels-disrupting.html

Uttarakhand, referred to as the land of gods, is also known as the energy state of India. It is home to several fast-flowing rivers at high altitudes that serve as the perfect backdrop for harnessing energy from water to produce hydroelectric power.

In this state, the Tehri dam, situated in Garhwal, is the highest dam in India. The amalgamation of rivers and high mountains in this area is ideally suited to producing electricity for rural and urban areas through hydropower and other renewable energy sources such as solar and wind.

In the neighboring state of Ladakh, the Zoji La is one of the highest mountain passes in the world. It's surrounded by the rugged terrain of Trans-Himalayas, with cold desert slopes, snow-capped peaks and alpine meadows. This biodiverse region is home to snow leopards, Himalayan brown bears, wolves, Pallas cats, yaks and lynx.

Zoji La also serves as a gateway for the movement of Indian military troops, enabling a constant armed force presence at the Indo-Chinese border. The construction of the Zoji La tunnel, poised to become the longest tunnel in Asia, allows India to rapidly deploy troops near the border with China while claiming to promote economic development in rural areas. Existing roads remain blocked by snow for up to six months each year, so without the new tunnel, access is limited.

Its construction, however, uses extensive blasting and carving of the mountain slopes using dynamite, which disrupts fragile geological structures of the already unstable terrain, generating severe noise and air pollution, thereby putting wildlife at risk.

Hydropower harnesses the power of flowing water as it moves from higher to lower elevations. Through a series of turbines and generators, hydroelectric power plants convert the movement of water from rivers and waterfalls into electrical energy. This so-called "kinetic energy" contributes 14.3% of the global renewable energy mix.

However, development of hydropower projects and rapid urbanization in the Indian Himalayas are actively degrading the environmental and ecological landscape, particularly in the ecologically sensitive, seismically active and fragile regions of Joshimath in Uttarakhand and Zoji La in Ladakh.

The construction of hydropower plants, along with associated railways, all-weather highways and tunnels across the Himalayan mountains, is being undertaken without adequate urban planning, design or implementation.

At an altitude of 1,800m in the Garhwal region, land is subsiding or sinking in the town of Joshimath, where more than 850 homes have been deemed uninhabitable due to cracks. Subsidence occurs naturally as a result of flash flooding, for example, but is also being accelerated by human activities, such as the construction of hydropower projects in this fragile, soft-slope area.

Satellite data shows that Joshimath sank by 5.4cm within 12 days between December 27 2022 and January 8 2023. Between April and November 2022, the town experienced a rapid subsidence of 9cm.

One 2024 study analyzed land deformation in Joshimath using remote sensing data. The study found significant ground deformation during the year 2022–23, with the maximum subsidence in the north-western part of the town coinciding with the near completion of the Tapovan Vishnugad hydropower project in 2023. Another 2025 study highlights that hydropower projects, particularly the Tapovan Vishnugad plant near Joshimath, play a significant role in destabilizing the region.

As part of my Ph.D. research, I've been interviewing locals about how this is affecting them. "The subsidence in Joshimath is not solely the result of natural calamities," said apple farmer Rivya Dimri, who once lived in the town but relocated to Lansdowne due to the inhospitable conditions of her ancestral home. She believes that a significant part of the problem stems from dam construction, frequent tunneling and blasting, plus the widespread deforestation that has taken place to accommodate infrastructure development.

Farmer Tanzong Le from Leh told me that "the government is prioritizing military agendas over the safety and security of local communities and the ecology of Ladakh." He believes that "the use of dynamite for blasting through mountains not only destabilizes the geological foundations of the Trans-Himalayan mountains but also endangers wildlife and the surrounding natural environment, exacerbating vulnerability in these already sensitive mountain regions."

The twin challenges of haphazard and unplanned infrastructure development in Joshimath and Zoji La represent two sides of the same coin: poorly executed infrastructure projects that prioritize economic, energy, military and geopolitical ambitions over the safeguarding of nature and communities. Hydropower plants, tunnels and highways may bring economic benefits and geopolitical advantages, but without urgent safeguards, India risks undermining the very mountains that protect its people, wildlife, ecosystems and borders.


Original Submission