posted by hubie on Sunday September 28, @05:44AM   Printer-friendly
from the if-only-there-was-another-OS-that-could-run-on-that-hardware dept.

Consumer Reports slams Microsoft for Windows 10 mess, urges extension of free updates:

Consumer Reports (CR), the venerable consumer rights organization known for its in-depth product testing, sent a letter to Microsoft CEO Satya Nadella this week. The letter, authored by the nonprofit's policy fellow Stacey Higginbotham and director of technology policy Justin Brookman, expressed "concern about Microsoft's decision to end free ongoing support for Windows 10 next month."

Consumer Reports isn't the first organization to come to the defense of the soon-to-be-orphaned Windows 10. Nearly two years ago, in October 2023, the Public Interest Research Group (PIRG) urged Microsoft to reconsider its decision, calling it "a bad deal for both users and the planet." The group warned that up to 400 million perfectly functional PCs could be discarded simply because they don't meet Windows 11's hardware requirements.

PIRG issued a new plea this week, bringing together a group of consumer and environmental organizations, including the European Right to Repair coalition, iFixit, and Consumer Reports.

In its letter, CR argues on behalf of its 5 million members that Microsoft's decision "will strand millions of consumers who have computers that are incompatible with Windows 11, and force them to pay $30 for a one-year extension of support, spend hundreds on a new Windows 11-capable computer, or do nothing and see the security and functionality of their computer degrade over time."

And this isn't just a consumer issue: Having hundreds of millions of unprotected PCs that can be commandeered for attacks on other entities is a risk to national security.

The group cites a member survey from earlier this year, covering more than 100,000 laptop and desktop computer owners. "More than 95% of all laptop and desktop computers purchased since the beginning of 2019 and owned for no more than five years were still in use," they reported. Those members tend to keep their Windows-based computers for a long time, the group concluded. "[I]t's clear that consumers purchased machines before Microsoft announced the hardware needs for Windows 11, expecting to be able to operate them through the next Microsoft OS transition."

The letter's authors also spotlight a fundamental contradiction in Microsoft's plans. "Arguing that Windows 11 is an essential upgrade to boost cybersecurity while also leaving hundreds of millions of machines more vulnerable to cyber attacks is hypocritical." The decision to offer extended security updates for one year is also consumer-hostile, they contend, with customers forced to pay $30 to preserve their machine's security, or use unrelated Microsoft products and services "just so Microsoft can eke out a bit of market share over competitors."

[...] After all that, the group is making a fairly modest request. "Consumer Reports asks Microsoft to extend security updates for free to all users who are unable to update their machine while also working to entice more people to get off Windows 10. ... [W]e also ask that Microsoft create a partnership to provide recycling of those machines to consumers abandoning their hardware."

This probably isn't the publicity that Microsoft wants as it urges its customers to buy a new Windows 11 PC. The Consumer Reports brand is also likely to break through to mainstream media in a way that more technical organizations can't.

Will this be enough to change hearts and minds in Redmond? It looks unlikely. I asked Microsoft for comment, and after a week, a spokesperson responded that the company had "nothing to share" on the subject.


Original Submission

posted by hubie on Sunday September 28, @12:56AM   Printer-friendly

Airlines Seen as Vulnerable as Ransomware Confirmed in Weekend Cyberattack

A ransomware attack was confirmed as the source of the weekend's airport disruption:

While no group has claimed responsibility for the attack that disrupted a number of European airports, including in Brussels, Berlin, London, Dublin and Cork this weekend, Europe's cybersecurity agency (ENISA) confirmed to the BBC that a ransomware attack was behind the chaos.

"The type of ransomware has been identified. Law enforcement is involved to investigate," the agency told Reuters.

The cyberattack disrupted check-in and baggage systems last Friday (19 September), targeting 'Muse' (multi-user system environment), a software tool made by Collins Aerospace, which provides a range of aircraft technologies, including baggage tagging and handling.

Experts had been warning for some time that airlines are particularly susceptible to widespread attacks. In July, after UK retailers were hit hard with Scattered Spider attacks, the FBI and cyber experts warned that airlines were likely to be next in line. Hackers using Scattered Spider tactics are renowned for targeting one sector at a time, although there is no indication as yet that they were behind this attack.

[...] "The aviation sector, with its complex network of third-party suppliers and contractors, presents an attractive target," said Haris Pylarinos, founder and CEO of cybersecurity company Hack the Box back in July. "If just one weak link is compromised, the ripple effects could be massive."

While the effects of the weekend attack were limited, it is certainly a major wake-up call for the airline industry.

"I'm deeply concerned but not surprised by the scale of the cyberattack on European airports," said Adam Blake, CEO and founder of cybersecurity company ThreatSpike

"Businesses are pouring vast sums of money into advanced security tools and bolt-on solutions, but it's just fragmenting security posture, creating overlapping controls and gaps for adversaries to exploit.

"Cybersecurity needs to be treated a lot more holistically, as a strategic priority built on end-to-end visibility, consistent monitoring and response, and proactive threat detection," he warned. "Where organisations stitch together a patchwork of vendors, vulnerabilities will inevitably emerge."

UK Arrests Man Linked to Ransomware Attack That Caused Airport Disruptions Across Europe

The U.K.'s National Crime Agency (NCA) said on Wednesday that a man was arrested in connection to the ransomware attack that has caused delays and disruptions at several European airports since the weekend.

The hack, which began Friday, targeted check-in systems provided by Collins Aerospace, causing delays at Brussels, Berlin, and Dublin airports, as well as London's Heathrow, which lasted until yesterday.

While the NCA did not name the arrested man, the agency said he is "in his forties" and that he was arrested in the southern county of West Sussex on Tuesday under the country's Computer Misuse Act "as part of an investigation into a cyber incident impacting Collins Aerospace."

The man was released on conditional bail, according to the agency.

"Although this arrest is a positive step, the investigation into this incident is in its early stages and remains ongoing," said Paul Foster, deputy director and head of the NCA's National Cyber Crime Unit, in a statement.


Original Submission #1 | Original Submission #2

posted by hubie on Saturday September 27, @08:13PM   Printer-friendly

How Your Utility Bills Are Subsidizing Power-Hungry AI:

This summer, across the Eastern United States, home electricity bills have been rising. From Pittsburgh to Ohio, people are paying $10 to $27 more per month for electricity. The reason? The rising costs of powering data centers running AI. As providers of the largest and most compute-intensive AI models keep adding them into more and more aspects of our digital lives with little regard for efficiency (and without giving users much of a choice), they consume a growing share of the existing energy and natural resources, driving up costs for everyone else.

In particular, this means that average citizens living in states that host data centers bear the cost of these choices, even though they rarely reap any benefits themselves. This is because data centers are connected to the whole world via the Internet, but use energy locally, where they're physically located. And unlike the apartments, offices, and buildings connected to a traditional energy grid, the energy use of AI data centers is highly concentrated; think of a site the size of a small warehouse drawing as much power as an entire metal smelting plant. For example, the state of Virginia is home to 35% of all known AI data centers worldwide, and together they use more than a quarter of the state's electricity. And they're expanding fast — in the last 7 years, global energy use by data centers has grown 12% a year, and it's set to more than double by 2030, using as much electricity as the whole of Japan.
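As a quick sanity check on those figures, a minimal sketch using the article's 12% annual growth rate (the six-to-seven-year horizon to 2030 is an assumption for illustration, not a number from the article) shows that compounding does indeed roughly double consumption:

// Illustrative only: compound the article's 12%/year growth rate to see
// whether it is consistent with "more than double by 2030".
function compoundGrowth(annualRate: number, years: number): number {
  return Math.pow(1 + annualRate, years);
}

console.log(compoundGrowth(0.12, 6).toFixed(2)); // ~1.97x after 6 years
console.log(compoundGrowth(0.12, 7).toFixed(2)); // ~2.21x after 7 years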

The costs of this brash expansion of data centers for AI are reflected first and foremost in the energy bills of everyday consumers. In the United States, utility companies fund infrastructure projects by raising the costs of their services for their entire client base (who often have no choice in who provides them with electricity). These increased rates are then leveraged to expand the energy grid to connect new data centers to new and existing energy sources and build mechanisms to keep the grid balanced despite the increased ebb and flow of supply and demand, particularly in places like Virginia that have a high concentration of data centers. Also, on top of amortizing the base infrastructure cost, electricity prices fluctuate based on demand, which means that the cost of having your lights on or running your AC will rise when there is a high demand from data centers on the same grid.

These costs also come with dire impacts on the stability of an energy infrastructure that is already stretched to the breaking point by growing temperatures and extreme weather. In fact, last summer, a lightning storm caused a surge protector to fail near Fairfax, Virginia, which resulted in 200 data centers switching to local generators, causing the demand on the local energy grid to plummet drastically. This nearly caused a grid-wide blackout and, for the first time, made federal regulators recognize data centers as a new source of instability in power supplies, on top of natural disasters and accidents.

[...] While recent decisions at the federal level in the US have been more aligned with a "full speed ahead" approach to data center infrastructure, states such as Ohio and Georgia are passing laws that would put the onus on data centers, not consumers, to pay the cost of new investments to expand the power grid. Countries such as the Netherlands and Ireland have gone a step further, putting moratoriums on the construction of new data centers in key regions until grid operators can stabilize existing grids and make sure that they don't get overwhelmed.

But we still need to rethink our relationship with multi-purpose, generative AI approaches. At a high level, we need to move away from treating AI as a centrally developed commodity, where developers need to push adoption across the board to justify rising cost in a vicious cycle that leads to ever costlier technology, and toward AI that is developed based on specific demands, using the right tool and the right model to solve real problems at reasonable cost. This would mean choosing smaller, task-specific models for tasks like question answering and classification, using and sharing open-source models to allow incremental progress and reuse of models by the community, and incentivizing the measurement and disclosure of models' energy consumption using approaches like AI Energy Score.

The next few years will be pivotal for determining the future of AI and its impact on energy grids worldwide. It's important to keep efficiency and transparency at the heart of the decisions we make. A future when AI is more decentralized, efficient and community-driven can ensure that we are not collectively paying the price for the profits of a few.

How AI and data center growth drive copper demand in the US:

The growing demand for copper is no secret: over the last year the need for copper — as well as motivation for more domestically produced copper in the US — has been widely discussed among market participants, major companies, the current administration and even the general public. Copper demand in data centers is also increasing rapidly. The country and the world need copper, and in the next ten years they're only going to need more.

At the same time, conversations surrounding sustainability and energy needs in the US have changed over the last year, with the focus shifting from electric vehicles (EVs) to artificial intelligence (AI). And with this rise comes heightened demand for data centers — where copper plays a huge part.

According to Fastmarkets analysts' latest 10-year outlook report, published in May, copper consumption from energy transition sectors will rise at a compound annual growth rate (CAGR) of 8.9% in the next 10 years — including 10.4% for the EV sector, 6.8% for the solar power industry and 7.8% for the wind power industry — while consumption from traditional non-energy transition sectors will rise at a CAGR of 1.1%. A recent report by Macquarie also estimated that between 330,000 and 420,000 tonnes of copper will be used in data centers by 2030.

[...] According to the CDA, copper demand in North America's data center sector is being driven by several converging factors: the pace of new construction is fast, pushing demand higher, and data centers themselves require large volumes of copper for power distribution and grounding.

Leavens said the data centers themselves are the biggest factor: an estimated 30-40% of data center construction involves electrical work, all of which contains copper.

[...] "Particularly now with the rise of AI, we are using more energy, which means there's more heat... which means you need more cooling," Leavens said.

"Newer data centers — especially those built for artificial intelligence — are designed to handle far greater power loads, which means even more copper is needed to safely manage, transmit and ground electricity," Kotbra told Fastmarkets on Wednesday September 3.

"Together, rapid capacity growth, higher power intensity and advanced cooling methods are making copper more critical than ever to the data center industry," Kotbra said.

Leavens also noted that when the US government calculates the data for construction, it is only looking at what is permanent in the building, which includes the shell of the building and the wires within. Since the servers are not permanent, they are not included in the same construction statistics.

"So, it's a little confusing how the government records it," Leavens said.

[...] According to a National Mining Association (NMA) report published on August 19, copper consumption is expected to increase by more than 110% by 2050, while US energy demand is projected to increase by as much as 30% over the same period, and the AI industry is projected to reach trillions of dollars in market value in just the next few years.

[...] As the need for copper grows and data centers expand, so does the potential for innovation, sources said, and sustainability is a key factor in the copper business and the energy industry, especially globally.

Sustainability goals are reshaping how data centers think about copper procurement, Kotbra said, with operators increasingly focused on sourcing copper products with higher recycled content, underscoring the importance of accurately quantifying and verifying those amounts.

"This push for transparency not only supports corporate sustainability commitments, but it also drives demand for recycling infrastructure that keeps copper in circulation," Kotbra said.

On the topic of recyclability, Leavens said "our companies are facilitating that [recyclability] through design of products. We recognize that if you bury the copper where it's difficult to get it out of the device, you're not making it as sustainable as it could be; so they're thinking ahead [about the] end-of-life of this product: how easy could it be to take it apart and get the copper out."

Kotbra agreed that recyclability is one of the most important innovations in the data center industry, citing similar considerations for end-of-life recovery that make it easier to separate copper from other materials and ensure it can be reclaimed at high rates.

"Since copper is infinitely recyclable without losing performance, these design improvements are helping the industry capture even more of its value and keep it in circulation for generations to come," Kotbra told Fastmarkets.

[...] "Copper is a vital part of every economy globally," Leavens told Fastmarkets. "I hear so much about the security risk for copper — it's a critical material now, and the government recognizes it as that. We need it; but also there are huge sources of it around, through stockpiles or friendly countries such as Chile or Australia. As an economist, I'm just not concerned that there would be an absolute shutdown like you saw with oil in the 1970s," Leavens said.

Journal Reference: DOI: https://dl.acm.org/doi/10.1145/3630106.3658542


Original Submission

posted by jelizondo on Saturday September 27, @03:27PM   Printer-friendly

Cloudflare DDoSed itself with React useEffect hook blunder:

Cloudflare has confessed to a coding error involving a React useEffect hook (a feature notorious for being problematic if not handled carefully) that caused an outage for the platform's dashboard and many of its APIs.

The outage was on September 12, lasted for over an hour, and was triggered by a bug in the dashboard, which caused "repeated, unnecessary calls to the Tenant Service API," according to VP of engineering Tom Lianza. This API is part of the API request authorization logic and therefore affected other APIs.

The cause was hard to troubleshoot since the apparent issue was with the API availability, disguising the fact that it was the dashboard that was overloading it.

Lianza said the core issue was a React useEffect hook with a "problematic object in its dependency array." The useEffect hook is a function that takes a setup function (which may return a cleanup function) and an optional list of dependencies. The setup function runs again whenever a dependency changes.

In this Cloudflare case, the function made calls to the Tenant Service API, and one of the dependencies was an object that was "recreated on every state or prop change." The consequence was that the hook ran repeatedly during a single render of the dashboard, when it was only intended to run once. The function ran so often that the API was overloaded, causing the outage.
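A minimal sketch of this failure mode (hypothetical component and endpoint names, not Cloudflare's actual dashboard code): an object literal in the dependency array gets a new reference on every render, so the effect fires again after each render-triggering state update, and each run issues another API call. Memoizing the object, or depending on its primitive fields instead, breaks the loop.

import { useEffect, useMemo, useState } from "react";

// Hypothetical illustration of the bug class, not Cloudflare's real code.
function TenantPanel({ accountId, region }: { accountId: string; region: string }) {
  const [tenants, setTenants] = useState<unknown[]>([]);

  // BUG: a fresh object on every render. React compares dependencies by
  // reference, so the effect sees a "changed" dependency after each render
  // and re-fetches, hammering the API.
  // const query = { accountId, region };

  // FIX: memoize so the reference only changes when the inputs change
  // (or simply list the primitives [accountId, region] as the dependencies).
  const query = useMemo(() => ({ accountId, region }), [accountId, region]);

  useEffect(() => {
    fetch(`/api/tenants?account=${query.accountId}&region=${query.region}`) // hypothetical endpoint
      .then((res) => res.json())
      .then(setTenants)
      .catch(() => setTenants([]));
  }, [query]);

  return <pre>{JSON.stringify(tenants, null, 2)}</pre>;
}

The runaway cycle in the buggy version is: effect runs, state update triggers a re-render, the re-render builds a new dependency object, and the effect runs again, which is exactly the kind of self-inflicted traffic amplification the post describes.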

The useEffect hook is powerful but often overused. The documentation is full of warnings about misuse and common errors, and encouragement to use other approaches where possible. Performance pitfalls with useEffect are common.

The incident triggered a discussion in the community about the pros and cons of useEffect. One developer said on Reddit there were too many complaints about useEffect, that it is an essential part of React, and "the idea that it is a bad thing to use is just silly." Another reaction, though, was "the message has not yet been received. Nearly everyone I know continues to put tons of useEffects everywhere for no reason."

Another remarked: "the real problem is the API going down by excessive API calls... in a company that had dedicated services to prevent DDoS [Distributed Denial of Service]."

Lianza said the Tenant Service had not been allocated sufficient capacity to "handle spikes in load like this" and more resources have now been allocated to it, along with improved monitoring. In addition, new information has been added to API calls from the dashboard to distinguish retries from new requests, since if the team had known that it was seeing "a large volume of new requests, it would have made it easier to identify the issue as a loop in the dashboard."
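Telling retries apart from genuinely new requests is commonly done by tagging each logical request with a stable ID plus an attempt counter. A rough sketch under that assumption follows; the header names, retry count, and backoff policy are invented for illustration and are not details from Cloudflare's post-mortem.

// Illustrative only: header names, retry policy, and backoff are assumptions,
// not Cloudflare's actual implementation.
async function fetchWithRetry(url: string, maxAttempts = 3): Promise<Response> {
  const requestId = crypto.randomUUID(); // stays the same across retries of this logical request
  let lastError: unknown;

  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      const res = await fetch(url, {
        headers: {
          "X-Request-Id": requestId,           // hypothetical header
          "X-Attempt-Number": String(attempt), // hypothetical header
        },
      });
      if (res.ok) return res;
      lastError = new Error(`HTTP ${res.status}`);
    } catch (err) {
      lastError = err;
    }
    // Exponential backoff so retries don't pile onto an already overloaded API.
    await new Promise((resolve) => setTimeout(resolve, 2 ** attempt * 250));
  }
  throw lastError;
}

With tagging like this, a spike of requests sharing IDs but with rising attempt numbers points to retries, while a spike of distinct IDs points to a genuine new-request loop like the one in the dashboard.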

Cloudflare accidentally DDoS-attacked itself:

Cloudflare, a platform that provides network services, was the victim of a DDoS attack last week. It was also accidentally the cause of it.

You might remember Cloudflare was linked to a massive outage in June of this year. When Cloudflare went down, so did sites like Spotify, Google, Snapchat, Discord, Character.ai, and more, all of which rely on Cloudflare's services. That time, the disruption was sparked by a Google Cloud outage. Earlier this month, Cloudflare had another blunder, albeit much less disruptive than its outage from the summer — but this time, it did it to itself.

"We had an outage in our Tenant Service API which led to a broad outage of many of our APIs and the Cloudflare Dashboard," Tom Lianza, the vice president of engineering for Cloudflare and Joaquin Madruga, the vice president of engineering for the developer platform at Cloudflare, wrote in a Sept. 13 blog post. "The incident's impact stemmed from several issues, but the immediate trigger was a bug in the dashboard."

The bug, according to Lianza and Madruga, caused "repeated, unnecessary calls to the Tenant Service API." By accident, Cloudflare included a "problematic object in its dependency array," which was recreated on each render, treated as new, and caused the effect to re-run until, eventually, the "API call executed many times during a single dashboard render instead of just once."

"When the Tenant Service became overloaded, it had an impact on other APIs and the dashboard because Tenant Service is part of our API request authorization logic. Without Tenant Service, API request authorization can not be evaluated. When authorization evaluation fails, API requests return 5xx status codes," the blog reads.

Everything is back on track at Cloudflare for now.

"We're very sorry about the disruption," the blog post reads. "We will continue to investigate this issue and make improvements to our systems and processes."


Original Submission

posted by jelizondo on Saturday September 27, @10:41AM   Printer-friendly

Magma displacement triggered tens of thousands of earthquakes, Santorini swarm study finds

Tens of thousands of earthquakes shook the Greek island of Santorini and the surrounding area at the beginning of the year. Now, researchers have published a comprehensive geological analysis of the seismic crisis in the journal Nature.

The researchers—from GFZ Helmholtz Center for Geosciences and GEOMAR Helmholtz Center for Ocean Research Kiel, together with international colleagues—integrated data from earthquake stations and ocean bottom instruments deployed at the Kolumbo underwater volcano seven kilometers away from Santorini and used a newly developed AI-based method for locating earthquakes.

This enabled them to reconstruct the subsurface processes in unique detail, revealing that around 300 million cubic meters of magma rose from the deep crust and came to rest at a depth of around four kilometers below the ocean floor. During its ascent through the crust, the molten magma generated thousands of earthquakes and seismic tremors.

Santorini is located in the eastern Mediterranean and forms part of the Hellenic volcanic arc, a highly active geological zone. This world-famous island group forms the rim of a caldera, which was created by a massive volcanic eruption around 3,600 years ago.

The active underwater volcano Kolumbo lies in the immediate vicinity. In addition, the region is crossed by several active geological fault zones, which are the result of the African Plate pushing northeast against the Hellenic Plate. Earth's crust beneath the Mediterranean region has broken up into several microplates that shift against each other, and in some cases subduct and melt, thus sourcing volcanic activity.

Santorini has produced multiple eruptions in historic times, most recently in 1950. In 1956, two severe earthquakes occurred in the southern Aegean Sea, only 13 minutes apart, between Santorini and the neighboring island of Amorgos. These had magnitudes of 7.4 and 7.2 respectively, triggering a tsunami.

The earthquake swarm that began in late January 2025 took place in exactly this region. During the crisis, more than 28,000 earthquakes were recorded. The strongest of these reached magnitudes of over 5.0. The severe shaking caused great public concern during the seismic crisis, partly because the cause was initially unclear, being potentially either tectonic or volcanic.

The new study now shows that the earthquake swarm was triggered by the deep transport of magma. The chain of events had already begun in July 2024, when magma rose into a shallow reservoir beneath Santorini. This initially led to a barely noticeable uplift of Santorini by a few centimeters. At the beginning of January 2025, seismic activity intensified, and from the end of January, magma began to rise from the depths, accompanied by intense seismic activity.

However, the seismic activity shifted away from Santorini over a distance of more than 10 kilometers to the northeast. During this phase, the foci of the quakes moved in several pulses from a depth of 18 kilometers upwards to a depth of only 3 kilometers below the seafloor. The high-resolution temporal and spatial analysis of the earthquake distribution, combined with satellite radar interferometry (InSAR), GPS ground stations and seafloor stations, made it possible to model the events.

Dr. Marius Isken, geophysicist at the GFZ and one of the two lead authors of the study, says, "The seismic activity was typical of magma ascending through Earth's crust. The migrating magma breaks the rock and forms pathways, which causes intense earthquake activity. Our analysis enabled us to trace the path and dynamics of the magma ascent with a high degree of accuracy."

As a result of the magma movement, the island of Santorini subsided again, which the authors interpret as evidence of a previously unknown hydraulic connection between the two volcanoes.

Dr. Jens Karstens, marine geophysicist at GEOMAR and also lead author of the study, explains, "Through close international cooperation and the combination of various geophysical methods, we were able to follow the development of the seismic crisis in near real time and even learn something about the interaction between the two volcanoes. This will help us to improve the monitoring of both volcanoes in the future."

Two factors in particular enabled the exceptionally detailed mapping of the subsurface. First, an AI-driven method developed at the GFZ for the automatic evaluation of large seismic data sets. Second, GEOMAR had already deployed underwater sensors at the crater of the underwater volcano Kolumbo at the beginning of January as part of the MULTI-MAREX project. These sensors measured not only seismic signals directly above the reservoir, but also pressure changes resulting from the subsidence of the seabed by up to 30 centimeters during the intrusion of magma beneath Kolumbo.

Scientific research activity on Santorini is continuing despite the decline in seismic activity. The GFZ is conducting repeated gas and temperature measurements on Santorini, while GEOMAR currently has eight seabed sensor platforms in operation.

Prof. Dr. Heidrun Kopp, Professor of Marine Geodesy at GEOMAR and project manager of MULTI-MAREX, says, "The joint findings were always shared with the Greek authorities in order to enable the fastest and most accurate assessment of the situation possible in the event of new earthquakes."

Co-author Prof. Dr. Paraskevi Nomikou is Professor of Geological Oceanography at the University of Athens and works closely with the German partner institutes on the MULTI-MAREX project. She adds, "This long-standing cooperation made it possible to jointly manage the events at the beginning of the year and to analyze them so precisely from a scientific point of view. Understanding the dynamics in this geologically highly active region as accurately as possible is crucial for the safety and protection of the population."

More information: Marius Isken et al, Volcanic crisis reveals coupled magma system at Santorini and Kolumbo, Nature (2025). DOI: 10.1038/s41586-025-09525-7. www.nature.com/articles/s41586-025-09525-7


Original Submission

posted by jelizondo on Saturday September 27, @05:56AM   Printer-friendly

China's latest GPU arrives with claims of CUDA compatibility and RT support:

While Innosilicon Technology's products may not be prominently featured on the list of the best graphics cards, the company has been hard at work developing its Fenghua (translated as Fantasy) series of graphics cards. As ITHome reported, Innosilicon recently unveiled the Fenghua No. 3, the company's latest flagship GPU. The company promises that its third GPU iteration is a significant advancement over its predecessors.

While previous Fenghua No.1 and Fenghua No.2 graphics cards were based on Imagination Technologies' PowerVR IP, the new Fenghua No.3 leverages the open-source RISC-V architecture instead. The graphics card reportedly borrows a page from OpenCore Institute's Nanhu V3 project.

The company representative didn't provide any more details on the Fenghua No.3 during the launch event, only that it features a home-grown design from the ground up. The Fenghua No.3 is also purportedly compatible with Nvidia's proprietary CUDA platform, which could open many doors for the graphics card if it holds true.

The Fenghua No.3 is designed for a bunch of different workloads, as Innosilicon describes it as an "all-function GPU" (translation). The company plans to deploy the graphics card in different sectors, including AI, scientific computing, CAD work, medical imaging, and gaming. Therefore, it's safe to assume there will be other variants of the Fenghua No.3.

From a gaming perspective, the Fenghua No.3 claims support for the latest APIs, including DirectX 12, Vulkan 1.2, and OpenGL 4.6. The graphics card is also reportedly equipped to support ray tracing. The team demonstrated the Fenghua No.3 in titles such as Tomb Raider, Delta Force, and Valorant at the press conference, and reports claim that the gameplay was smooth. However, there was no available information on game settings, resolution, and actual frame rates, so take these claims with a grain of salt.

The Fenghua No.3 reportedly comes equipped with 112GB+ of HBM memory, making it an ideal product for AI. A single Fenghua No. 3 can handle 32B and 72B LLM models, while eight of them in unison work with 671B and 685B parameter models. Innosilicon claims unconditional support for the DeepSeek V3, R1, and V3.1 models, as well as the Qwen 2.5 and Qwen 3 model families.
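As rough context for those memory figures, a back-of-the-envelope estimate of weight storage alone (ignoring activations, KV cache and runtime overhead, and making no claim about how Innosilicon actually configures the card) suggests the single-card 72B figure would rely on 8-bit or lower-precision weights:

// Back-of-the-envelope only: weight memory = parameters x bytes per parameter.
// This ignores activations, KV cache and overhead, and is not a claim about
// how the Fenghua No.3 actually runs these models.
function weightMemoryGB(paramsBillions: number, bytesPerParam: number): number {
  // 1 billion params * N bytes per param is roughly N GB (decimal)
  return paramsBillions * bytesPerParam;
}

console.log(weightMemoryGB(72, 2));      // ~144 GB at FP16 -- more than 112 GB
console.log(weightMemoryGB(72, 1));      // ~72 GB at 8-bit -- fits within 112 GB
console.log(weightMemoryGB(671, 1) / 8); // ~84 GB per card for a 671B model split across 8 cards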

Innosilicon also boasted that the Fenghua No.3 is China's first graphics card to support the YUV444 format, which offers the best color detail and fidelity—a feature particularly beneficial for users who perform extensive CAD industrial work or video editing. The manufacturer also highlighted the Fenghua No.3's support for 8K (7680 x 4320) displays. The graphics card can drive up to six 8K monitors at 30 Hz.

The Fenghua No.3 is the world's first graphics card to offer native support for DICOM (Digital Imaging and Communication in Medicine). It enables the precise visualization of X-rays, MRIs, CT scans, and ultrasounds on standard monitors, eliminating the need for costly, specialized grayscale medical displays.

China's semiconductor industry is gradually improving. Although it is unlikely to rival that of the United States in the near future, it may not necessarily need to do so. China's primary goal is to achieve self-sufficiency in key areas. Announcements such as the Fenghua No.3 may seem insignificant individually. Collectively, they might amass into substantial progress, akin to accumulating grains of sand that eventually form a small beach.


Original Submission

posted by jelizondo on Saturday September 27, @01:06AM   Printer-friendly

https://phys.org/news/2025-09-india-unplanned-hydropower-tunnels-disrupting.html

Uttarakhand, referred to as the land of gods, is also known as the energy state of India. It is home to several fast-flowing rivers at high altitudes that serve as the perfect backdrop for harnessing energy from water to produce hydroelectric power.

In this state, the Tehri dam, situated in Garhwal, is the highest dam in India. The amalgamation of rivers and high mountains in this area is ideally suited to producing electricity for rural and urban areas through hydropower and other renewable energy sources such as solar and wind.

In the neighboring state of Ladakh, the Zoji La is one of the highest mountain passes in the world. It's surrounded by the rugged terrain of Trans-Himalayas, with cold desert slopes, snow-capped peaks and alpine meadows. This biodiverse region is home to snow leopards, Himalayan brown bears, wolves, Pallas cats, yaks and lynx.

Zoji La also serves as a gateway for the movement of Indian military troops, enabling a constant armed force presence at the Indo-Chinese border. The construction of the Zoji La tunnel, poised to become the longest tunnel in Asia, allows India to rapidly deploy troops near the border with China while claiming to promote economic development in rural areas. Existing roads remain blocked by snow for up to six months each year, so without the new tunnel, access is limited.

Its construction, however, uses extensive blasting and carving of the mountain slopes using dynamite, which disrupts fragile geological structures of the already unstable terrain, generating severe noise and air pollution, thereby putting wildlife at risk.

Hydropower harnesses the power of flowing water as it moves from higher to lower elevations. Through a series of turbines and generators, hydroelectric power plants convert the movement of water from rivers and waterfalls into electrical energy. This so-called "kinetic energy" contributes 14.3% of the global renewable energy mix.

However, development of hydropower projects and rapid urbanization in the Indian Himalayas are actively degrading the environmental and ecological landscape, particularly in the ecologically sensitive, seismically active and fragile regions of Joshimath in Uttarakhand and Zoji La in Ladakh.

The construction of hydropower plants, along with associated railways, all-weather highways and tunnels across the Himalayan mountains, is being undertaken without adequate urban planning, design or implementation.

At an altitude of 1,800m in the Garhwal region, land is subsiding or sinking in the town of Joshimath, where more than 850 homes have been deemed uninhabitable due to cracks. Subsidence occurs naturally as a result of flash flooding, for example, but is also being accelerated by human activities, such as the construction of hydropower projects in this fragile, soft-slope area.

Satellite data shows that Joshimath sank by 5.4cm within 12 days between December 27 2022 and January 8 2023. Between April and November 2022, the town experienced a rapid subsidence of 9cm.

One 2024 study analyzed land deformation in Joshimath using remote sensing data. The study found significant ground deformation during the year 2022–23, with the maximum subsidence in the north-western part of the town coinciding with the near completion of the Tapovan Vishnugad hydropower project in 2023. Another 2025 study highlights that hydropower projects, particularly the Tapovan Vishnugad plant near Joshimath, play a significant role in destabilizing the region.

As part of my Ph.D. research, I've been interviewing locals about how this is affecting them. "The subsidence in Joshimath is not solely the result of natural calamities," said apple farmer Rivya Dimri, who once lived in the town but relocated to Lansdowne due to the inhospitable conditions of her ancestral home. She believes that a significant part of the problem stems from dam construction, frequent tunneling and blasting, plus the widespread deforestation that has taken place to accommodate infrastructure development.

Farmer Tanzong Le from Leh told me that "the government is prioritizing military agendas over the safety and security of local communities and the ecology of Ladakh." He believes that "the use of dynamite for blasting through mountains not only destabilizes the geological foundations of the Trans-Himalayan mountains but also endangers wildlife and the surrounding natural environment, exacerbating vulnerability in these already sensitive mountain regions."

The twin challenges of haphazard and unplanned infrastructure development in Joshimath and Zoji La represent two sides of the same coin: poorly executed infrastructure projects that prioritize economic, energy, military and geopolitical ambitions over the safeguarding of nature and communities. Hydropower plants, tunnels and highways may bring economic benefits and geopolitical advantages, but without urgent safeguards, India risks undermining the very mountains that protect its people, wildlife, ecosystems and borders.


Original Submission

posted by jelizondo on Friday September 26, @08:20PM   Printer-friendly

American Kids Can't Read or Do Math Anymore:

Teenagers are dumb, regardless of generation. I was dumb as a teenager. You were dumb as a teenager. Now that we've established that baseline of stupidity, the latest data from the National Assessment of Educational Progress (NAEP) [DOCX --JE], nicknamed "the Nation's Report Card," shows that today's American high school seniors are even less educated than we suspected. Their math and reading skills in 2024 have plunged to historic lows.

The NAEP, a federally run assessment that regularly quizzes America's youth on their ability to do basic math and understand words, just released its first post-pandemic results. They're bleak. If the children are our future, we're f**ked.

12th graders scored the lowest in math since 2005 and the lowest in reading since the assessments began in 1992. That is a dramatic way of saying it's the weakest performance since the assessments began.

Nearly half of all seniors performed below the "Basic" level in math, and almost a third couldn't hit that low bar in reading. Meanwhile, eighth-graders are flailing in science, with their worst scores since 2009.

So, what the f**k happened? You could point to the pandemic. Remote learning, while keeping the kids and their families alive and healthy, didn't spark academic excellence.

Studies have shown it also torpedoed kids' mental health and educational development. The growing use of generative AI probably isn't helping, since it's more than likely doing their homework and writing their papers for them rather than assisting them to learn how to do it themselves.

That's a recipe for churning out a lot of poorly educated teenagers who are about to enter the real world without any of the tools necessary to survive in it.

But there's a twist: the instinct to blame COVID and pandemic lockdowns solely doesn't explain why those numbers were already on the downturn before the pandemic. According to Matthew Soldner, acting commissioner at the National Center for Education Statistics, the decline has been brewing for years, especially among the lowest-performing students. COVID just kicked the crumbling foundation out from under the whole system.

As the kids fail, the adults in charge of ensuring that we prep the future generations for success are failing even more miserably. The Department of Education is hemorrhaging staff and funding due to Trump-led efforts to dismantle the department entirely.

The Department of Education, currently being headed by Linda McMahon, the wife of WWE's Vince McMahon [...], gutted funding of the Institute of Education Sciences, a subunit of the Department of Education that monitors the state of US education and funds research to improve academic outcomes.

In other words, our children are going to get dumber. And for at least the next 3 ½-ish years, we aren't going to be gathering enough data to figure out how to fix any of it. It's all going to get much worse before it gets better.


Original Submission

posted by jelizondo on Friday September 26, @03:35PM   Printer-friendly

https://phys.org/news/2025-09-world-coastal-settlements-retreating-seas.html

Human settlements around the world are moving inland and relocating away from coastlines as sea levels rise and coastal hazards grow more severe, but a new international study shows the poorest regions are being forced to stay put or even move closer to danger.

The study, published in Nature Climate Change, analyzed decades (1992–2019) of satellite nighttime light data across 1,071 coastal regions in 155 countries.

It found that human settlements in 56% of the regions analyzed relocated further inland, 28% stayed put, and 16% moved closer to the coast.

Low-income groups were more likely to move closer to the coast, driven largely by the growth of informal settlements and the search for better livelihoods. Human settlements shifted most towards coastlines in South America (up to 17.7%) and Asia (17.4%), followed by Europe (14.8%), Oceania (13.8%), Africa (12.4%) and North America (8.8%).

Lead author Xiaoming Wang, an adjunct professor based at the Monash Department of Civil and Environmental Engineering, said relocation was largely driven by vulnerability and the capacity to respond.

"For the first time, we've mapped how human settlements are relocating from coasts around the world. It's clear that moving inland is happening, but only where people have the means to do so.

"In poorer regions, people may have to be forced to stay exposed to climate risks, either for living or no capacity to move. These communities can face increasingly severe risk in a changing climate," said Wang.

Oceania had some of the closest settlements to the coast globally, reflecting the region's reliance on coastal economies.

"In Oceania, we see a common reality where wealthier and poorer communities are both likely to relocate towards coastlines in addition to moving inland," adjunct professor Wang said.

"On one hand, the movement closer to coastlines can expose vulnerable populations to the impacts of storms, erosion, and sea-level rise. On the other hand, it can expose those wealthy communities to the growing coastal hazards."

The study also highlights concerns that overconfidence in protective infrastructure encouraged risky development close to the coast.

"It is interesting to note that high-income groups also had a relatively higher likelihood to remain on coastlines, such as in Europe and North America. This can be due to their capacity and wealth accumulated in coastal areas," adjunct professor Wang said.

The study warns that relocation inland may become unavoidable as sea levels rise and climate change intensifies.

"Relocating away from the coast must be part of a long-term climate strategy, and the rationale for policy and planning to relocate people requires meticulous consideration of both economic and social implications across individuals, communities and regions," adjunct professor Wang said.

"Alongside climate change mitigation, it needs to be combined with efforts to reduce coastal hazard exposure and vulnerability, improve informal settlements, balance coastal risks with livelihoods and maintain sustainable lifestyles in the long-term. Without this, coastal adaptation gaps will continue to be widened and leave the world's poorest behind."

The study was an international collaboration on climate adaptation research between adjunct professor Wang, the Institute for Disaster Management and Reconstruction at Sichuan University, and researchers from Denmark and Indonesia.

The collaboration aims to understand how communities cope with recurring coastal hazards and highlights gaps in adaptation that need urgent attention.

More information: Lilai Xu et al, Global coastal human settlement retreat driven by vulnerability to coastal climate hazards, Nature Climate Change (2025). DOI: 10.1038/s41558-025-02435-6


Original Submission

posted by janrinok on Friday September 26, @10:53AM   Printer-friendly

Huntington's disease successfully treated for first time:

One of the cruellest and most devastating diseases – Huntington's – has been successfully treated for the first time, say doctors.

The disease runs through families, relentlessly kills brain cells and resembles a combination of dementia, Parkinson's and motor neurone disease.

An emotional research team became tearful as they described how data shows the disease was slowed by 75% in patients.

It means the decline you would normally expect in one year would take four years after treatment, giving patients decades of "good quality life", Prof Sarah Tabrizi told BBC News.

The new treatment is a type of gene therapy given during 12 to 18 hours of delicate brain surgery.

The first symptoms of Huntington's disease tend to appear in a person's 30s or 40s, and the disease is normally fatal within two decades – opening the possibility that earlier treatment could prevent symptoms from ever emerging.

Prof Tabrizi, director of the University College London Huntington's Disease Centre, described the results as "spectacular".

"We never in our wildest dreams would have expected a 75% slowing of clinical progression," she said.

None of the patients who have been treated are being identified, but one was medically retired and has returned to work. Others in the trial are still walking despite being expected to need a wheelchair.

Treatment is likely to be very expensive. However, this is a moment of real hope in a disease that hits people in their prime and devastates families.

Huntington's runs through Jack May-Davis' family. He has the faulty gene that causes the disease, as did his dad, Fred, and his grandmother, Joyce.

[...] This mutation turns a normal protein needed in the brain – called the huntingtin protein – into a killer of neurons.

The goal of the treatment is to reduce levels of this toxic protein permanently, in a single dose.

The therapy uses cutting edge genetic medicine combining gene therapy and gene silencing technologies.

It starts with a safe virus that has been altered to contain a specially designed sequence of DNA.

This is infused deep into the brain using real-time MRI scanning to guide a microcatheter to two brain regions - the caudate nucleus and the putamen. This takes 12 to 18 hours of neurosurgery.

The virus then acts like a microscopic postman – delivering the new piece of DNA inside brain cells, where it becomes active.

This turns the neurons into a factory for making the therapy to avert their own death.

The cells produce a small fragment of genetic material (called microRNA) that is designed to intercept and disable the instructions (called messenger RNA) being sent from the cells' DNA for building mutant huntingtin.

This results in lower levels of mutant huntingtin in the brain.

Results from the trial - which involved 29 patients - have been released in a statement by the company uniQure, but have not yet been published in full for review by other specialists.

The data showed that three years after surgery there was an average 75% slowing of the disease based on a measure which combines cognition, motor function and the ability to manage in daily life.

The data also shows the treatment is saving brain cells. Levels of neurofilaments in spinal fluid – a clear sign of brain cells dying – should have increased by a third if the disease had continued to progress, but were actually lower than at the start of the trial.

"This is the result we've been waiting for," said Prof Ed Wild, consultant neurologist at the National Hospital for Neurology and Neurosurgery at UCLH. "There was every chance that we would never see a result like this, so to be living in a world where we know this is not only possible, but the actual magnitude of the effect is breathtaking, it's very difficult to fully encapsulate the emotion."

[...] The treatment was considered safe, although some patients did develop inflammation from the virus that caused headaches and confusion that either resolved or needed steroid treatment.

Prof Wild anticipates the therapy "should last for life" because brain cells are not replaced by the body in the same manner as blood, bone and skin are constantly renewed.

Approximately 75,000 people have Huntington's disease in the UK, US and Europe with hundreds of thousands carrying the mutation meaning they will develop the disease.

UniQure says it will apply for a licence in the US in the first quarter of 2026 with the aim of launching the drug later that year. Conversations with authorities in the UK and Europe will start next year, but the initial focus is on the US. Dr Walid Abi-Saab, the chief medical officer at uniQure, said he was "incredibly excited" about what the results mean for families, and added that the treatment had "the potential to fundamentally transform" Huntington's disease.

However, the drug will not be available for everyone due to the highly complex surgery and the anticipated cost.

"It will be expensive for sure," says Prof Wild. There isn't an official price for the drug. Gene therapies are often pricey, but their long-term impact means that can still be affordable. In the UK, the NHS does pay for a £2.6m-per-patient gene therapy for haemophilia B.


Original Submission

posted by janrinok on Friday September 26, @06:11AM   Printer-friendly

https://phys.org/news/2025-09-facebook-reveal-devastating-real-world.html

Twenty-one years after Facebook's launch, Australia's top 25 news outlets now have a combined 27.6 million followers on the platform. They rely on Facebook's reach more than ever, posting far more stories there than in the past.

With access to Meta's Content Library (Meta is the owner of Facebook), our big data study analyzed more than three million posts from 25 Australian news publishers. We wanted to understand how content is distributed, how audiences engage with news topics, and the nature of misinformation spread.

The study enabled us to track de-identified Facebook comments and take a closer look at examples of how misinformation spreads. These included cases about election integrity, the environment (floods) and health misinformation such as hydroxychloroquine promotion during the COVID pandemic.

The data reveal misinformation's real-world impact: it isn't just a digital issue, it's linked to poor health outcomes, falling public trust, and significant societal harm.

Take the example of the false claims that antimalarial drug hydroxychloroquine was a viable COVID treatment.

In Australia, as in the United States, political figures and media played leading roles in the spread of this idea. Mining billionaire and then leader of the United Australia Party, Clive Palmer, actively promoted hydroxychloroquine as a COVID treatment. In March 2020 he announced he would fund trials, manufacture, and stockpile the drug.

He placed a two-page advertisement in The Australian. Federal Coalition MPs Craig Kelly and George Christensen also championed hydroxychloroquine, coauthoring an open letter advocating its use.

We examined 7,000 public comments responding to 100 hydroxychloroquine posts from the selected media outlets during the pandemic. Contrary to concerns that public debate is siloed in echo chambers, we found robust online exchanges about the drug's effectiveness in combating COVID.

Yet, despite fact-checking efforts, we find that facts alone fail to stop the spread of misinformation and conspiracy theories about hydroxychloroquine. This misinformation targeted not only the drug, but also the government, media and "big pharma."

To put the real-world harm in perspective, public health studies estimate hydroxychloroquine use was linked to at least 17,000 deaths worldwide, though the true toll is likely higher.

The topic modeling also highlighted the personal toll caused by this misinformation spread. These include the secondary harm of the drug's unavailability (due to stockpiling) for legitimate treatment of non-COVID conditions such as rheumatoid arthritis and lupus, leading to distress, frustration and worsening symptoms.

In other instances, we saw how misinformation can hurt public trust in institutions and non-government organizations. Following the 2022 floods in Queensland and New South Wales, we again saw that despite fact-checking efforts, misinformation about the Red Cross charity flourished online and was amplified by political commentary.

Without repeating the falsehoods here, the misinformation led to changes in some public donation behavior, such as buying gift cards for flood victims rather than trusting the Red Cross to distribute much-needed funds. This highlights the significant harm misinformation can inflict on public trust and disaster response efforts.

The data also reveal the cyclical nature of misinformation. We call this misinformation's "stickiness," because it reappears at regular intervals such as elections. In one example, electoral administrators were targeted with false accusations that polling officials rigged the election outcome by rubbing out votes marked with pencils.

While this is an old conspiracy theory about voter fraud that predates social media and it is also not unique to Australia, the data show misinformation's persistence online during state and federal elections, including the 2023 Voice referendum.

Here, multiple debunking efforts from electoral commissioners, fact-checkers, media and social media seem to have limited levels of public engagement compared to a noisy minority. When we examined 60,000 sentences on electoral topics from the past decade, we detected just 418 sentences from informed or official sources.

Again, high-profile figures such as Palmer have played a central role in circulating this misinformation.

Our study has lessons for public figures and institutions. They, especially politicians, must lead in curbing misinformation, as their misleading statements are quickly amplified by the public.

Social media and mainstream media also play an important role in limiting the circulation of misinformation. As Australians increasingly rely on social media for news, mainstream media can provide credible information and counter misinformation through their online story posts. Digital platforms can also curb algorithmic spread and remove dangerous content that leads to real-world harms.

The study offers evidence of a change over time in audiences' news consumption patterns. Whether this is due to news avoidance or changes in algorithmic promotion is unclear. But it is clear that from 2016 to 2024, online audiences increasingly engaged with arts, lifestyle and celebrity news over politics, leading media outlets to prioritize posting stories that entertain rather than inform. This shift may pose a challenge to mitigating misinformation with hard news facts.

Finally, the study shows that fact-checking, while valuable, is not a silver bullet. Combating misinformation requires a multi-pronged approach, including counter-messaging by trusted civic leaders, media and digital literacy campaigns, and public restraint in sharing unverified content.


Original Submission

posted by janrinok on Friday September 26, @01:27AM   Printer-friendly

China starts producing world-first non-binary AI chips for aviation, manufacturing:

China has started mass production of the world's first non-binary chips and is deploying the new technology in key industries such as aviation and manufacturing.

Spearheaded by Professor Li Hongge and his team at Beihang University in Beijing, the project addresses key problems in older systems by blending binary logic with stochastic, probability-based logic. In doing so, it has enabled unprecedented fault tolerance and power efficiency while sidestepping US chip restrictions.

Today's chip technologies face two major challenges – the power wall and the architectural wall, according to Professor Li: they use too much power, and new chips struggle to work with older systems.

His team has been searching for a solution since 2022 and came up with a new system called Hybrid Stochastic Number (HSN), which mixes regular binary numbers with probability-based numbers to improve performance.

Binary logic, used by all computers worldwide, represents values as 0s and 1s to carry out arithmetic operations. However, large-scale binary computations demand substantial hardware resources.

In contrast, probabilistic (stochastic) computing represents values by how often a high-voltage signal appears within a set time window. This method uses less hardware and has already been applied in areas such as image processing, neural networks and deep learning. It has one drawback, however: because of the way it represents values, it takes longer to process information.
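
The HSN format itself is not published in the article, but classic stochastic computing conveys the intuition behind the hardware savings: a value is encoded as the density of 1s in a random bitstream, so multiplying two values needs only a bitwise AND rather than a full binary multiplier. The sketch below is a generic illustration under that assumption, not the Beihang team's design:

```python
# Generic stochastic computing demo (illustrative, not the HSN format).
import random

def to_bitstream(p, length=10_000, rng=None):
    """Encode a probability p in [0, 1] as a bitstream whose density of 1s is p."""
    rng = rng or random.Random(0)
    return [1 if rng.random() < p else 0 for _ in range(length)]

def from_bitstream(bits):
    """Decode a bitstream back to a value: the fraction of 1s."""
    return sum(bits) / len(bits)

# Multiplication reduces to a bitwise AND of two independent bitstreams,
# which is why the hardware cost is so much lower than a binary multiplier.
a = to_bitstream(0.6, rng=random.Random(1))
b = to_bitstream(0.5, rng=random.Random(2))
product = [x & y for x, y in zip(a, b)]

print(from_bitstream(product))  # approximately 0.3, i.e. 0.6 * 0.5
```

The trade-off mentioned above is also visible here: thousands of bits are needed to represent a single value with reasonable precision, which is why purely stochastic approaches take longer to process information.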

Based on probabilistic computation, Professor Li's team developed a new smart chip for touch and display in 2023 using leading Chinese chipmaker Semiconductor Manufacturing International Corporation's mature 110-nanometer process technology.

The project results were published in the IEEE Journal of Solid-State Circuits two years ago. Following that, the team developed another chip for machine learning, fabricated on a standard 28 nm CMOS process.

Apart from HSN, the chip also features in-memory computing algorithms that reduce the need to constantly move data between memory and processors, which saves energy and makes the chip more efficient.

The chip also uses a system-on-chip (SoC) design that combines different computing units to handle multiple tasks simultaneously, unlike traditional chips that process one task at a time.

The chip is now being used in smart control systems, such as touch screens, where it filters background noise to detect weaker signals and improve how users interact with devices.

Professor Li also told Guangming Daily that his team is developing a special set of instructions and chip design tailored for hybrid probabilistic computing. They plan to use the chip in areas like speech and image processing, speeding up large AI models, and handling other complex tasks.

"The current chip already achieves on-chip computing latency at the microsecond level, striking a balance between high-performance hardware acceleration and flexible software programmability," Li said.

While China's move towards non-binary hybrid AI chips is certainly exciting and innovative, it's important not to overhype the breakthrough yet, as several hurdles still need to be crossed, such as compatibility limitations and long-term uncertainties related to the chip's usage.


Original Submission

posted by janrinok on Thursday September 25, @08:42PM   Printer-friendly

The Future Of Nuclear Reactors Is Making Its Way To The US:

America has a checkered history with nuclear power. Safety fears stemming from the Three Mile Island accident in 1979 curtailed the country's development of commercial nuclear reactors; from 1977 until 2013, there were no new construction starts for nuclear power stations. Yet the country remains the world's largest producer of nuclear power, generating close to 30% of the world's total nuclear output. Nuclear is also the third-largest source of electricity generation in the U.S., producing 18% of America's electricity; only natural gas and coal contribute more to the grid. However, the slump in investment in nuclear power may be coming to an end.

Building on President Trump's four executive orders to revitalize the sector, Chris Wright, the U.S. Secretary of Energy, recently announced a "pathway" to streamline the development and deployment of advanced nuclear reactors. Speaking at the International Atomic Energy Agency's (IAEA) General Conference, Secretary Wright cited the growing demand for affordable power and the rise of high-power demand industries like AI as driving forces behind the strategy change. He said, "We established an expedited pathway to approve advanced reactors, set standards to evaluate new construction licenses within 18 months." The goal is to deploy Small Modular Reactors (SMRs) as part of President Trump's plan to add 300 gigawatts of nuclear capacity to the grid by 2050.

The key to that future lies in the aforementioned SMRs, a new generation of reactors that are designed to be smaller, safer, and faster to build.

For those of us who associate nuclear power stations with behemoths like Chernobyl or Japan's Kashiwazaki-Kariwa plant, the new generation being planned by the U.S. might come as a surprise. Rather than large, static plants, the U.S. Government sees the future of nuclear energy in smaller-scale designs. The President's executive order details the need for the U.S. to develop advanced Generation III+ reactors, including small modular reactors (SMRs) and microreactors. It also notes that these should be developed in both stationary and mobile formats to build greater resilience into critical electrical infrastructure.

One of the cornerstones of the executive order is the use of SMRs. As the name suggests, these are small reactors with a power capacity of up to 300 megawatts per module. Because of their small size and scalability, these can be installed in places where traditional reactors are unsuitable. The modular aspect of the design also means they can be pre-built at a factory and quickly installed on site. SMRs can also be quickly — and relatively easily — installed in rural areas with limited electrical infrastructure.

Microreactors are an SMR subclass: smaller reactors that typically generate a maximum of 10 megawatts. They share many of the advantages of larger SMRs and are also a cost-effective option for isolated areas, where they can provide backup power or replace diesel generators. Incidentally, the U.S. Army is developing a microreactor.
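
As back-of-the-envelope arithmetic only (an illustration of the figures quoted above, not a projection from the article), the capacity numbers imply roughly how many modules the 300-gigawatt goal would require if it were met entirely with SMRs or microreactors at their nominal maximum outputs:

```python
# Rough arithmetic from the figures quoted above; illustrative only.
GOAL_GW = 300          # stated 2050 goal for added nuclear capacity
SMR_MAX_MW = 300       # up to 300 MW per SMR module
MICRO_MAX_MW = 10      # microreactors typically top out around 10 MW

goal_mw = GOAL_GW * 1_000
print(goal_mw // SMR_MAX_MW)    # 1,000 SMR modules
print(goal_mw // MICRO_MAX_MW)  # 30,000 microreactors
```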

For the U.S., the renewed focus on nuclear power isn't just about clean and reliable energy sources — it's also about jobs and security. In employment terms, the nuclear industry already employs close to 500,000 workers, and these are well-paid jobs, with salaries around 50% higher than comparable jobs in other energy generation sectors. However, SMR development is still a work in progress, and these reactors are seen as critical to the industry's future. President Trump further signaled America's commitment to their development when he announced a $900 million package, split across two tiers, to support SMRs. The majority of the funding is intended to support new commercial projects; the remainder is to be used to ease deployments by smoothing out obstacles such as design and supply chain issues.

Security is also a driving force behind the re-emergence of the U.S. nuclear power sector. Historically, both the U.S. and Europe were central in developing international safeguards designed to prevent nuclear proliferation, an influence that has waned in recent years. With the advanced technology being developed and the moves by the U.S. Government to support and encourage the sector, the aim is to restore the U.S. influence across global energy markets.

Despite nuclear fusion records continuing to be broken, it's still considered a technology for the future. In the meantime, SMRs may just be the bridge that keeps our lights on as we move away from fossil fuels.


Original Submission

posted by janrinok on Thursday September 25, @03:54PM   Printer-friendly

https://phys.org/news/2025-09-magic-mushrooms-unique-biochemical-paths.html

A German-Austrian team led by Friedrich Schiller University Jena and Leibniz-HKI has been able to biochemically demonstrate for the first time that different types of mushrooms produce the same mind-altering active substance, psilocybin, in different ways.

Both Psilocybe mushrooms and fiber cap mushrooms of the genus Inocybe produce this substance, but use completely different enzymes and reaction sequences for this process. The results are published in Angewandte Chemie International Edition.

"This concerns the biosynthesis of a molecule that has a very long history with humans," explains Prof. Dirk Hoffmeister, head of the research group Pharmaceutical Microbiology at Friedrich Schiller University Jena and the Leibniz Institute for Natural Product Research and Infection Biology (Leibniz-HKI).

"We are referring to psilocybin, a substance found in so-called 'magic mushrooms,' which our body converts into psilocin—a compound that can profoundly alter consciousness. However, psilocybin not only triggers psychedelic experiences, but is also considered a promising active compound in the treatment of therapy-resistant depression," says Hoffmeister.

The study, which was conducted within the Cluster of Excellence "Balance of the Microverse," shows for the first time that fungi have developed the ability to produce psilocybin at least twice independently of each other. While Psilocybe species use a known enzyme toolkit for this purpose, fiber cap mushrooms employ a completely different biochemical arsenal—and yet arrive at the same molecule.

This finding is considered an example of convergent evolution: different species have independently developed a similar trait, with each group of mushrooms arriving at it by its own route.

Tim Schäfer, lead author of the study and doctoral researcher in Hoffmeister's team, explains, "It was like looking at two different workshops, but both ultimately delivering the same product. In the fiber caps, we found a unique set of enzymes that have nothing to do with those found in Psilocybe mushrooms. Nevertheless, they all catalyze the steps necessary to form psilocybin."

The researchers analyzed the enzymes in the laboratory. Protein models created by Innsbruck chemist Bernhard Rupp confirmed that the sequence of reactions differs significantly from that known in Psilocybe.

"Here, nature has actually invented the same active compound twice," says Schäfer.

However, why two such different groups of fungi produce the same active compound remains unclear. "The real answer is that we don't know," emphasizes Hoffmeister. "Nature does nothing without reason. So there must be an advantage to both fiber cap mushrooms in the forest and Psilocybe species on manure or wood mulch producing this molecule—we just don't know what it is yet."

"One possible reason could be that psilocybin is intended to deter predators. Even the smallest injuries cause Psilocybe mushrooms to turn blue through a chemical chain reaction, revealing the breakdown products of psilocybin. Perhaps the molecule is a type of chemical defense mechanism," says Hoffmeister.

Although it is still unclear why different fungi ultimately produce the same molecule, the discovery nevertheless has practical implications.

"Now that we know about additional enzymes, we have more tools in our toolbox for the biotechnological production of psilocybin," explains Hoffmeister.

Schäfer is also looking ahead, stating, "We hope that our results will contribute to the future production of psilocybin for pharmaceuticals in bioreactors without the need for complex chemical syntheses."

At the Leibniz-HKI in Jena, Hoffmeister's team is working closely with the Bio Pilot Plant, which is developing processes for producing natural products such as psilocybin on an industry-like scale.

At the same time, the study provides exciting insights into the diversity of chemical strategies used by fungi and their interactions with their environment.

More information: Dissimilar Reactions and Enzymes for Psilocybin Biosynthesis in Inocybe and Psilocybe Mushrooms, Angewandte Chemie International Edition (2025). DOI: 10.1002/anie.202512017


Original Submission

posted by hubie on Thursday September 25, @11:43AM   Printer-friendly

https://phys.org/news/2025-09-ganges-river-drying-unprecedented.html

The Ganges River is in crisis. This lifeline for around 600 million people in India and neighboring countries is experiencing its worst drying period in 1,300 years. Using a combination of historical data, paleoclimate records and hydrological models, researchers from IIT Gandhinagar and the University of Arizona discovered that human activity is the main cause. They also found that the current drying is more severe than any recorded drought in the river's history.

In their study, published in the Proceedings of the National Academy of Sciences, the researchers first reconstructed the river's flow for the last 1,300 years (700 to 2012 C.E.) by analyzing tree rings from the Monsoon Asia Drought Atlas (MADA) dataset. They then used statistical and hydrological models to combine this tree-ring data with modern records into a continuous timeline of the river's flow. To ensure its accuracy, they cross-checked it against documented historical droughts and famines.
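
As a rough, hypothetical sketch of how such proxy-based reconstructions generally work (the study's actual methodology is more sophisticated and is not reproduced here; all names and numbers below are made up), one calibrates the tree-ring index against instrumental streamflow over an overlap period and then uses that relationship to extend the flow record back in time:

```python
# Hypothetical proxy-calibration sketch with synthetic data; not the study's method.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: ~1,300 years of a tree-ring drought index, with "instrumental"
# streamflow available only for the final 100 years of overlap.
years = np.arange(700, 2013)
ring_index = rng.normal(0.0, 1.0, size=years.size)
true_flow = 12_000 + 1_500 * ring_index                     # assumed relationship
observed_flow = true_flow[-100:] + rng.normal(0, 300, 100)  # noisy gauge record

# Calibrate: least-squares fit of flow on the ring index over the overlap period.
slope, intercept = np.polyfit(ring_index[-100:], observed_flow, deg=1)

# Reconstruct the full 1,300-year record, then compare a recent 30-year mean with
# the long-term mean, analogous in spirit to the study's 1991-2020 comparison.
reconstructed = intercept + slope * ring_index
recent_mean = reconstructed[-30:].mean()
long_term_mean = reconstructed.mean()
print(f"Recent 30-year mean vs long-term mean: {recent_mean:.0f} vs {long_term_mean:.0f}")
```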

The scientists found that the recent drying of the Ganges River from 1991 to 2020 is 76% worse than the previous worst recorded drought, which occurred during the 16th century. Not only is the river drier overall, but droughts are now more frequent and last longer. The main reason, according to the researchers, is human activity. While some natural climate patterns are at play, the primary driver is the weakening of the summer monsoon.

This weakening is linked to human-driven factors such as the warming of the Indian Ocean and air pollution from anthropogenic aerosols. These are liquid droplets and fine solid particles that come from factories, vehicles, power plants and other sources, and they can suppress rainfall. The scientists also found that most climate models failed to capture the severe drying trend.

"The recent drying is well beyond the realm of last millennium climate variability, and most global climate models fail to capture it," the authors wrote in their paper. "Our findings underscore the urgent need to examine the interactions among the factors that control summer monsoon precipitation, including large-scale climate variability and anthropogenic forcings."

The researchers suggest two main courses of action. Given the mismatch between climate models and what they actually found, they are calling for better modeling to account for the regional impacts of human activity.

And because the Ganges is a vital source of water for drinking, agricultural production, industrial use and wildlife, the team also recommends implementing new adaptive water management strategies to mitigate potential water scarcity.

More information: Dipesh Singh Chuphal et al, Recent drying of the Ganga River is unprecedented in the last 1,300 years, Proceedings of the National Academy of Sciences (2025). DOI: 10.1073/pnas.2424613122


Original Submission