

posted by janrinok on Tuesday January 27, @09:02PM

Arthur T Knackerbracket has processed the following story:

[...] Proton VPN has announced a significant push to modernize its Linux offerings.

The Swiss-based company has confirmed that a complete interface overhaul is in the works, while simultaneously dropping a massive feature update for command-line users. For those relying on the best VPN for privacy, this is a welcome signal that the Linux ecosystem remains a top priority. While the provider has spent the last year bringing its Windows and Mac apps to new heights, the Linux VPN client is now getting the "speedrun" treatment to close the gap.

[...] "With the speedrun of additional features added to the ProtonVPN Linux (GUI) client in recent roadmap cycles, most requests are now for a (overdue) GUI refresh," Peterson stated. "Work has been progressing on this behind the scenes, with the first milestone hit last week."

That "first milestone" has been identified in the official release notes as a major under-the-hood update for the Linux GUI beta (version 4.14.0).

The app has officially been updated to GTK4, a modern toolkit for creating graphical user interfaces. While the release notes clarify that "the visual appearance remains unchanged" for now, this architectural shift is critical.

It "refreshes the underlying framework and paves the way for future UI enhancements," effectively building the foundation upon which the new, modern look will sit.

While the graphical update is setting the stage for the future, the immediate value for power users lies in the Command Line Interface (CLI).

"For non-GUI-enjoyers, we are also rapidly fleshing out the features for the Proton VPN Linux CLI that we relaunched last year," Peterson added.

According to the latest release notes, these updates are split between the stable and beta channels, addressing some of the biggest pain points for terminal users.

[...] For current users, the instruction is simple: if you are a CLI user, update your package via your terminal to pull the latest feature set. If you prefer the graphical app, you can test the new GTK4 framework via the beta repos, though the visual facelift is still to come.


Original Submission

posted by janrinok on Tuesday January 27, @04:19PM

https://nand2mario.github.io/posts/2026/80386_multiplication_and_division/

When Intel released the 80386 in October 1985, it marked a watershed moment for personal computing. The 386 was the first 32-bit x86 processor, increasing the register width from 16 to 32 bits and vastly expanding the address space compared to its predecessors. This wasn't just an incremental upgrade—it was the foundation that would carry the PC architecture for decades to come.

...

In addition to its architectural advances, the 386 delivered a major jump in arithmetic performance. On the earlier 8086, multiplication and division were slow — 16-bit multiplication typically required 120–130 cycles, with division taking even longer at over 150 cycles. The 286 significantly improved on this by introducing faster microcode routines and modest hardware enhancements.

The 386 pushed performance further with dedicated hardware that processes multiplication and division at the rate of one bit per cycle, combined with a native 32-bit datapath width. The microcode still orchestrates the operation, but the heavy lifting happens in specialized datapath logic that advances every cycle.
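For readers who want to see the principle rather than the silicon, here is a minimal Python sketch of a radix-2 shift-and-add multiplier that retires one multiplier bit per iteration. It is only an illustration of the one-bit-per-cycle idea, not the 386's actual microcode or datapath, but it shows why a 32-bit multiply can finish in roughly 32 datapath steps plus setup overhead, instead of the 120-130 cycles quoted above for a 16-bit multiply on the 8086.

def shift_add_multiply(a: int, b: int, width: int = 32) -> int:
    """Radix-2 shift-and-add multiply: one multiplier bit per iteration.
    Illustrative model of the one-bit-per-cycle principle described above,
    not the 80386's actual microcode or datapath logic."""
    mask = (1 << width) - 1
    a &= mask                        # treat operands as unsigned 32-bit values
    b &= mask
    product = 0
    for step in range(width):        # about 32 iterations for 32-bit operands
        if b & 1:                    # inspect the lowest remaining multiplier bit
            product += a << step     # conditionally add the shifted multiplicand
        b >>= 1                      # consume one multiplier bit per "cycle"
    return product                   # the full 64-bit product fits in a Python int
assert shift_add_multiply(0xDEADBEEF, 0x12345678) == 0xDEADBEEF * 0x12345678

Division hardware works analogously, retiring one quotient bit per cycle via shift-and-subtract, which is why both operations see a similar order-of-magnitude improvement on the 386.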


Original Submission

posted by hubie on Tuesday January 27, @11:39AM

Arthur T Knackerbracket has processed the following story:

Satya Nadella talked about how AI should benefit people and how it can avoid a bubble.

“The zeitgeist is a little bit about the admiration for AI in its abstract form or as technology. But I think we, as a global community, have to get to a point where we are using it to do something that changes the outcomes of people and communities and countries and industries,” Nadella said. “Otherwise, I don’t think this makes much sense, right? In fact, I would say we will quickly lose even the social permission to actually take something like energy, which is a scarce resource, and use it to generate these tokens, if these tokens are not improving health outcomes, education outcomes, public sector efficiency, private sector competitiveness across all sectors, small and large. And that, to me, is ultimately the goal.”

The rush to build AI infrastructure is putting a strain on many different resources. For example, we’re in the middle of a memory chip shortage because of the massive demand for HBM that AI GPUs require. It’s estimated that data centers will consume 70% of memory chips made this year, with the shortage going beyond RAM modules and SSDs and starting to affect other components and products like GPUs and smartphones.

[...] Aside from talking about the impact of AI on people, the two industry leaders also covered the AI bubble. Many industry leaders and institutions are warning about an AI bubble, especially as tech companies are continually pouring money into its development while only seeing limited benefits. “For this not to be a bubble, by definition, it requires that the benefits of this [technology] are much more evenly spread. I mean, I think, a tell-tale sign of if it’s a bubble would be if all we’re talking about are the tech firms,” said the Microsoft chief. “If all we talk about is what’s happening to the technology side, then it’s just purely supply side.”


Original Submission

posted by hubie on Tuesday January 27, @06:54AM
from the systemd-and-Wayland-enter-the-chat dept.

Arthur T Knackerbracket has processed the following story:

In the fast-paced world of modern web development, we've witnessed an alarming trend: the systematic over-engineering of the simplest HTML elements. A recent deep dive into the popular Shadcn UI library has revealed a shocking reality – what should be a single line of HTML has ballooned into a complex system requiring multiple dependencies, hundreds of lines of code, and several kilobytes of JavaScript just to render a basic radio button.

Let's start with what should be the end: a functional radio button in HTML.

<input type="radio" name="beverage" value="coffee" />

This single line of code has worked reliably for over 30 years. It's accessible by default, works across all browsers, requires zero JavaScript, and provides exactly the functionality users expect. Yet somehow, the modern web development ecosystem has convinced itself that this isn't good enough.

The Shadcn radio button component imports from @radix-ui/react-radio-group and lucide-react, creating a dependency chain that ultimately results in 215 lines of React code importing 7 additional files. This is for functionality that browsers have provided natively since the early days of the web.

Underneath Shadcn lies Radix UI, described as "a low-level UI component library with a focus on accessibility, customization and developer experience." The irony is palpable – in the name of improving developer experience, they've created a system that's exponentially more complex than the native alternative.

[...] The complexity isn't just academic – it has real-world consequences. The Shadcn radio button implementation adds several kilobytes of JavaScript to applications. Users must wait for this JavaScript to load, parse, and execute before they can interact with what should be a basic form element.

[...] The radio button crisis is a symptom of a larger problem in web development: we've lost sight of the elegance and power of web standards. HTML was designed to be simple, accessible, and performant. When we replace a single line of HTML with hundreds of lines of JavaScript, we're not innovating – we're regressing.

The most radical thing modern web developers can do is embrace simplicity. Use native HTML elements. Write semantic markup. Leverage browser capabilities instead of fighting them. Your users will thank you with faster load times, better accessibility, and more reliable experiences.

As the original article author eloquently concluded: "It's just a radio button." And sometimes, that's exactly what it should be – nothing more, nothing less.


Original Submission

posted by hubie on Tuesday January 27, @02:08AM

Arthur T Knackerbracket has processed the following story:

After the failures of the first two Dojo supercomputers, fingers crossed that Dojo 3 will be the first truly successful variant.

Elon Musk has confirmed on X that Tesla has restarted work on the Dojo 3 supercomputer following the recent success of its AI5 chip design. The billionaire stated in a recent X post that the AI5 design is now in "good shape", enabling Tesla to shuffle resources back to the Dojo 3 project. Musk added that he is hiring more people to help build the chips that will eventually power Tesla's next-gen supercomputer.

This news follows Tesla's decision to cancel Dojo's wafer-level processor initiative in late 2025. Dojo 3 has gone through several iterations since Elon Musk first chimed in on the project, but according to his latest comments, it will be the first Tesla-built supercomputer to run purely on in-house hardware. Previous iterations, such as Dojo 2, used a mixture of in-house chips and Nvidia AI GPUs.

[...] According to Musk, Dojo 3 will use AI5, AI6, or AI7 chips, the latter two being part of his new nine-month cadence roadmap. AI5 is almost ready for deployment and is Tesla's most competitive chip yet, yielding Hopper-class performance on a single chip and Blackwell-class performance with two chips working together using "much less power". Work on Dojo 3 coincides with Musk's new nine-month release cycle, under which Tesla will start producing a new chip every nine months, beginning with AI6. AI7, we believe, will likely be an iterative upgrade to AI6; building a brand-new architecture every nine months would be extremely difficult, if not impossible.

It will be interesting to see whether Dojo 3 proves to be successful. Dojo 1 was supposed to be one of the most powerful supercomputers when it was built, but competition from Nvidia, among other problems, prevented that from happening. Dojo 2 was cancelled midway through development. If Tesla can consistently deliver performance competitive with Nvidia GPUs, Dojo 3 has the potential to be Tesla's first truly successful supercomputer. Elon also hinted that Dojo 3 will be used for "space-based AI compute".


Original Submission

posted by hubie on Monday January 26, @09:19PM

Arthur T Knackerbracket has processed the following story:

In a move that signals a fundamental shift in Apple's relationship with its users, the company is quietly testing a new App Store design that deliberately obscures the distinction between paid advertisements and organic search results. This change, currently being A/B tested on iOS 26.3, represents more than just a design tweak — it's a betrayal of the premium user experience that has long justified Apple's higher prices and walled garden approach.

For years, Apple's App Store has maintained a clear visual distinction between sponsored content and organic search results. Paid advertisements appeared with a distinctive blue background, making it immediately obvious to users which results were promoted content and which were genuine search matches. This transparency wasn't just good design — it was a core part of Apple's value proposition.

Now, that blue background is disappearing. In the new design being tested, sponsored results look virtually identical to organic ones, with only a small "Ad" banner next to the app icon serving as the sole differentiator. This change aligns with Apple's December 2025 announcement that App Store search results will soon include multiple sponsored results per query, creating a landscape where advertisements dominate the user experience.

This move places Apple squarely in the company of tech giants who have spent the last decade systematically degrading user experience in pursuit of advertising revenue. Google pioneered this approach, gradually removing the distinctive backgrounds that once made ads easily identifiable in search results. What was once a clear yellow background became increasingly subtle until ads became nearly indistinguishable from organic results.

[...] What makes Apple's adoption of these practices particularly troubling is how it contradicts the company's fundamental value proposition. Apple has long justified its premium pricing and restrictive ecosystem by promising a superior user experience. The company has built its brand on the idea that paying more for Apple products means getting something better — cleaner design, better privacy, less intrusive advertising.

This App Store change represents a direct violation of that promise. Users who have paid premium prices for iPhones and iPads are now being subjected to the same deceptive advertising practices they might encounter on free, ad-supported platforms. The implicit contract between Apple and its users — pay more, get a better experience — is being quietly rewritten.

[...] Apple's motivation for this change is transparently financial. The company's services revenue, which includes App Store advertising, has become increasingly important as iPhone sales growth has plateaued. Advertising revenue offers attractive margins and recurring income streams that hardware sales cannot match.

By making advertisements less distinguishable from organic results, Apple can likely increase click-through rates significantly. Users who would normally skip obvious advertisements might click on disguised ones, generating more revenue per impression. This short-term revenue boost comes at the cost of long-term user trust and satisfaction.

The timing is also significant. As Apple faces increasing regulatory pressure around its App Store practices, the company appears to be maximizing revenue extraction while it still can. This suggests a defensive posture rather than confidence in the sustainability of current business models.

[...] The technical implementation of these changes reveals their deliberate nature. Rather than simply removing the blue background, Apple has carefully redesigned the entire search results interface to create maximum visual similarity between ads and organic results. Font sizes, spacing, and layout elements have been adjusted to eliminate distinguishing characteristics.

[...] This App Store change represents more than just a design decision — it's a signal about Apple's evolving priorities and business model. The company appears to be transitioning from a hardware-first approach that prioritizes user experience to a services-first model that prioritizes revenue extraction.

[...] For Apple, the challenge now is whether to continue down this path or respond to user concerns. The company has historically been responsive to user feedback, particularly when it threatens the brand's premium positioning. However, the financial incentives for advertising revenue are substantial and may override user experience considerations.

Users have several options for responding to these changes. They can provide feedback through Apple's official channels, adjust their App Store usage patterns to account for increased advertising, or consider alternative platforms where available.

Developers face a more complex situation. While the changes may increase the cost of app discovery through advertising, they also create new opportunities for visibility. The long-term impact on the app ecosystem remains to be seen.

[...] As one community member aptly summarized: "The enshittification of Apple is in full swing." Whether this proves to be a temporary misstep or a permanent shift in Apple's priorities remains to be seen, but the early signs are deeply concerning for anyone who values transparent, user-focused design.


Original Submission

posted by hubie on Monday January 26, @04:38PM
from the verbosive-WinDoze dept.

Am not a big fan of Power(s)Hell, but British tech site The Register reports that its creator, Jeffrey Snover, is retiring, having moved from M$ to G$ a few years ago.

In that write-up, Snover details how the original name for Cmdlets was Functional Units, or FUs:
  "This abbreviation reflected the Unix smart-ass culture I was embracing at the time. Plus I was developing this in a hostile environment, and my sense of diplomacy was not yet fully operational."

Reading that sentence, it would seem his "sense of diplomacy" has eventually come online. 😉

While he didn't start at M$ until the late 90s, that kind of thinking would have served him well in an old Usenet Flame War.

Happy retirement, Jeffrey!

(IMHO, maybe he’ll do something fun with his time, like finally embrace bash and python.)


Original Submission

posted by hubie on Monday January 26, @11:55AM

https://www.extremetech.com/internet/psa-starlink-now-uses-customers-personal-data-for-ai-training

Starlink recently updated its Privacy Policy to explicitly allow it to share personal customer data with companies to train AI models. This appears to have been done without any warning to customers (I certainly didn't get any email about it), though some eagle-eyed users noticed a new opt-out toggle on their profile page.

The updated Privacy Policy buries the AI training declaration at the end of its existing data sharing policies. It reads:

"We may share your personal information with our affiliates, service providers, and third-party collaborators for the purposes we outline above (e.g., hosting and maintaining our online services, performing backup and storage services, processing payments, transmitting communications, performing advertising or analytics services, or completing your privacy rights requests) and, unless you opt out, for training artificial intelligence models, including for their own independent purposes."

SpaceX doesn't make it clear which AI companies or AI models it might be involved in training, though xAI's Grok seems the most likely, given that it is owned and operated by SpaceX CEO Elon Musk.

Elsewhere in Starlink's Privacy Policy, it also discusses using personal data to train its own AI models, stating:

"We may use your personal information: [...] to train our machine learning or artificial intelligence models for the purposes outlined in this policy."

Unfortunately, there doesn't appear to be any opt-out option for that. I asked the Grok support bot whether opting out with the toggle would prevent Starlink from using data for AI training, too, and it said it would, but I'm not sure I believe it.

How to Opt Out of Starlink AI Training

To opt out of Starlink's data sharing for AI training purposes, navigate to the Starlink website and log in to your account. On your account page, select Settings from the left-hand menu, then select the Edit Profile button in the top-right of the window.

In the window that appears, look to the bottom, where you should see a toggle box labeled "Share personal data with Starlink's trusted collaborators to train AI models."

Select the box to toggle the option off, then select the Save button. You'll be prompted to verify your identity through an email or SMS code, but once you've done that, Starlink shouldn't be able to share your data with AI companies anymore.

At the time of writing, it doesn't appear you can change this setting in the Starlink app.


Original Submission

posted by hubie on Monday January 26, @07:11AM
from the snap-to-it dept.

https://distrowatch.com/dwres.php?resource=showheadline&story=20123

Alan Pope, a former Ubuntu contributor and current Snap package maintainer, has raised a concern on his blog about attackers sneaking malicious Snap packages into Canonical's package repository.

"There's a relentless campaign by scammers to publish malware in the Canonical Snap Store. Some gets caught by automated filters, but plenty slips through. Recently, these miscreants have changed tactics - they're now registering expired domains belonging to legitimate snap publishers, taking over their accounts, and pushing malicious updates to previously trustworthy applications. This is a significant escalation."

Details on the attack are covered in Pope's blog post.


Original Submission

posted by hubie on Monday January 26, @03:26AM
from the who's-responsible-when-AI-crashes-your-system? dept.

Arthur T Knackerbracket has processed the following story:

UK financial regulators must conduct stress testing to ensure businesses are ready for AI-driven market shocks, MPs have warned.

The Bank of England, Financial Conduct Authority, and HM Treasury risk exposing consumers and the financial system to "potentially serious harm" by taking a wait-and-see approach, according to a House of Commons Treasury Committee report published today.

During its hearings, the committee found a troubling lack of accountability and understanding of the risks involved in spreading AI across the financial services sector.

David Geale, the FCA's Executive Director for Payments and Digital Finance, said individuals within financial services firms were "on the hook" for harm caused to consumers through AI. Yet trade association Innovate Finance testified that management in financial institutions struggled to assess AI risk. The "lack of explainability" of AI models directly conflicted with the regime's requirement for senior managers to demonstrate they understood and controlled risks, the committee argued.

The committee said there should be clear lines of accountability when AI systems produce harmful or unfair outcomes. "For instance, if an AI system unfairly denies credit to a customer in urgent need – such as for medical treatment – there must be clarity on who is responsible: the developers, the institution deploying the model, or the data providers."

[...] Financial services is one of the UK's most important economic sectors. In 2023, it contributed £294 billion to the economy [PDF], or around 13 percent of the gross value added of all economic sectors.

However, successive governments have adopted a light-touch approach to AI regulation for fear of discouraging investment.

Treasury Select Committee chair Dame Meg Hillier said: "Firms are understandably eager to try and gain an edge by embracing new technology, and that's particularly true in our financial services sector, which must compete on the global stage.

"Based on the evidence I've seen, I do not feel confident that our financial system is prepared if there was a major AI-related incident and that is worrying. I want to see our public financial institutions take a more proactive approach to protecting us against that risk."


Original Submission

posted by jelizondo on Sunday January 25, @10:36PM

[Source]: Microsoft Gave FBI a Set of BitLocker Encryption Keys to Unlock Suspects' Laptops

Microsoft provided the FBI with the recovery keys to unlock encrypted data on the hard drives of three laptops as part of a federal investigation, Forbes reported on Friday.

Many modern Windows computers rely on full-disk encryption, called BitLocker, which is enabled by default. This type of technology should prevent anyone except the device owner from accessing the data if the computer is locked and powered off.

But, by default, BitLocker recovery keys are uploaded to Microsoft's cloud, allowing the tech giant — and by extension law enforcement — to access them and use them to decrypt drives encrypted with BitLocker, as with the case reported by Forbes.

[...] Microsoft told Forbes that the company sometimes provides BitLocker recovery keys to authorities, having received an average of 20 such requests per year.

[Also Covered By]: TechCrunch


Original Submission

posted by jelizondo on Sunday January 25, @05:53PM
from the alfred-hitchcock-lover dept.

https://arstechnica.com/features/2026/01/this-may-be-the-grossest-eye-pic-ever-but-the-cause-is-whats-truly-horrifying/

A generally healthy 63-year-old man in the New England area went to the hospital with a fever, cough, and vision problems in his right eye. His doctors eventually determined that a dreaded hypervirulent bacterium—which is rising globally—was ravaging several of his organs, including his brain.
[...]
At the hospital, doctors took X-rays and computed tomography (CT) scans of his chest and abdomen. The images revealed over 15 nodules and masses in his lungs. But that's not all they found. The imaging also revealed a mass in his liver that was 8.6 cm in diameter (about 3.4 inches). Lab work pointed toward an infection, so doctors admitted him to the hospital.
[...]
On his third day, he woke up with vision loss in his right eye, which was so swollen he couldn't open it. Magnetic resonance imaging (MRI) revealed another surprise: There were multiple lesions in his brain.
[...]
In a case report in this week's issue of the New England Journal of Medicine, doctors explained how they solved the case and treated the man.
[...]
There was one explanation that fit the condition perfectly: hypervirulent Klebsiella pneumoniae or hvKP.
[...]
An infection with hvKP—even in otherwise healthy people—is marked by metastatic infection. That is, the bacteria spreads throughout the body, usually starting with the liver, where it creates a pus-filled abscess. It then goes on a trip through the bloodstream, invading the lungs, brain, soft tissue, skin, and the eye (endogenous endophthalmitis). Putting it all together, the man had a completely typical clinical case of an hvKP infection.

Still, definitively identifying hvKP is tricky. Mucus from the man's respiratory tract grew a species of Klebsiella, but there's not yet a solid diagnostic test to differentiate hvKP from the classical variety.
[...]
it was too late for the man's eye. By his eighth day in the hospital, the swelling had gotten extremely severe.
[...]
Given the worsening situation—which was despite the effective antibiotics—doctors removed his eye.


Original Submission

posted by jelizondo on Sunday January 25, @01:02PM

OpenAI has decided to incorporate advertisements into its ChatGPT service for free users and those on the lower-tier Go plan, a shift announced just days ago:

The company plans to begin testing these ads in the United States by the end of January 2026, placing them at the bottom of responses where they match the context of the conversation. Officials insist the ads will be clearly marked, optional to personalize, and kept away from sensitive subjects. Higher-paying subscribers on Plus, Pro, Business, and Enterprise levels will remain ad-free, preserving a premium experience for those willing to pay.

This development comes as OpenAI grapples with enormous operational costs, including a staggering $1.4 trillion infrastructure expansion to keep pace with demand. Annualized revenue reached $20 billion in 2025, a tenfold increase from two years prior, yet the burn rate on computing power and development continues to outstrip income from subscriptions alone. Analysts like Mark Mahaney from Evercore ISI project that if executed properly, ads could bring in $25 billion annually by 2030, providing a vital lifeline for sustainability.

[...] The timing of OpenAI's announcement reveals underlying pressures in the industry. As one observer put it, "OpenAI Moves First on Ads While Google Waits. The Timing Tells You Everything." With ChatGPT boasting 800 million weekly users compared to Gemini's 650 million monthly active ones, OpenAI can't afford to lag in revenue generation. Delaying could jeopardize the company's future, according to tech analyst Ben Thompson, who warned that postponing ads "risks the entire company."

[...] From a broader view, this reflects how Big Tech giants are reshaping technology to serve their bottom lines, often at the expense of individual freedoms. If ads become the norm in AI chatbots, it might accelerate a divide between those who can afford untainted access and those stuck with sponsored content. Critics argue this model echoes past controversies, like Meta's data scandals, fueling distrust in how personal interactions are commodified.

Also discussed by Bruce Schneier.

Related: Google Confirms AI Search Will Have Ads, but They May Look Different


Original Submission

posted by jelizondo on Sunday January 25, @08:30AM
from the as-the-years-go-by-I-am-sinking dept.

Human-driven land sinking now outpaces sea-level rise in many of the world's major delta systems, threatening more than 236 million people:

A study published on Jan. 14 in Nature shows that many of the world's major river deltas are sinking faster than sea levels are rising, potentially affecting hundreds of millions of people in these regions.

The major causes are groundwater withdrawal, reduced river sediment supply, and urban expansion.

[...] The findings show that in nearly every river delta examined, at least some portion is sinking faster than the sea is rising. Sinking land, or subsidence, already exceeds local sea-level rise in 18 of the 40 deltas, heightening near-term flood risk for more than 236 million people.

[...] Deltas experiencing concerning rates of elevation loss include the Mekong, Nile, Chao Phraya, Ganges–Brahmaputra, Mississippi, and Yellow River systems.

"In many places, groundwater extraction, sediment starvation, and rapid urbanization are causing land to sink much faster than previously recognized," Ohenhen said.

Some regions are sinking at more than twice the current global rate of sea-level rise.
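The quantity that drives flood risk is relative sea-level rise at the delta surface: the climate-driven rise of the ocean plus the rate at which the land itself goes down. As a rough illustration (the figures here are assumptions for the sake of the arithmetic, not values from the study):

\[ \text{relative sea-level rise} \;=\; \text{climate-driven rise} \;+\; \text{local land subsidence} \]

With a global-mean rise of roughly 3 to 4 mm per year, a delta surface sinking at 8 mm per year experiences 11 to 12 mm per year of relative rise, about three times what the ocean alone would impose.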

"Our results show that subsidence isn't a distant future problem — it is happening now, at scales that exceed climate-driven sea-level rise in many deltas," said Shirzaei, co-author and director of Virginia Tech's Earth Observation and Innovation Lab.

Groundwater depletion emerged as the strongest overall predictor of delta sinking, though the dominant driver varies regionally.

"When groundwater is over-pumped or sediments fail to reach the coast, the land surface drops," said Werth, who co-led the groundwater analysis. "These processes are directly linked to human decisions, which means the solutions also lie within our control."

Journal Reference: Ohenhen, L.O., Shirzaei, M., Davis, J.L. et al. Global subsidence of river deltas. Nature (2026). https://doi.org/10.1038/s41586-025-09928-6


Original Submission

posted by jelizondo on Sunday January 25, @03:38AM

https://phys.org/news/2026-01-greenwashing-false-stability-companies.html

Companies engaging in 'greenwashing' to appear more favorable to investors don't achieve durable financial stability in the long term, according to a new Murdoch University study.

The paper, "False Stability? How Greenwashing Shapes Firm Risk in the Short and Long Run," is published in the Journal of Risk and Financial Management.

Globally, there has been a rise in Environmental Social Governance (ESG) investing, where lenders prioritize a firm's sustainability performance when allocating capital. As a result, ESG scores have become an important measure for investors when assessing risk.

"However, ESG scores do not always reflect a firm's true environmental performance," said Tanvir Bhuiyan, associate lecturer in finance at the Murdoch Business School.

Greenwashing refers to the gap between what firms claim about their environmental performance and how they actually perform.

"In simple terms, it is when companies talk green but do not act green," Dr. Bhuiyan said. "Firms do this to gain reputational benefits, attract investors, and appear lower-risk and more responsible without necessarily reducing their carbon footprint."

The study examined Australian companies from 2014 to 2023 to understand how greenwashing affects financial risk and stability. To determine whether companies were exaggerating their sustainability performance, the researchers created a comprehensive quantitative framework that directly compares ESG scores with carbon emissions, allowing them to identify when sustainability claims were inflated.
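The summary above doesn't reproduce the paper's exact specification, but the core idea of directly comparing ESG scores with carbon emissions can be sketched as a simple gap measure. The Python below is a hypothetical illustration of that idea; the column names, ranking scheme, and numbers are assumptions, not the authors' framework.

import pandas as pd

def greenwashing_gap(firms: pd.DataFrame) -> pd.Series:
    """Toy greenwashing proxy: how much better a firm's ESG standing is than
    its emissions performance would justify. Hypothetical sketch only; the
    Murdoch study defines its own quantitative framework."""
    # Percentile rank of ESG scores: higher means "greener" claims.
    esg_rank = firms["esg_score"].rank(pct=True)
    # Percentile rank of emissions performance: higher means lower emissions
    # intensity, i.e. better actual environmental performance.
    emissions_rank = (-firms["emissions_intensity"]).rank(pct=True)
    # A positive gap flags firms whose claims outrun their measured emissions.
    return esg_rank - emissions_rank

firms = pd.DataFrame({
    "esg_score": [82, 64, 71],                    # reported ESG ratings (assumed)
    "emissions_intensity": [410.0, 95.0, 220.0],  # e.g. tCO2e per $M revenue (assumed)
})
firms["greenwashing_gap"] = greenwashing_gap(firms)
print(firms.sort_values("greenwashing_gap", ascending=False))

In this toy data, the first firm reports the highest ESG score while also having the highest emissions intensity, so it receives the largest positive gap.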

They then analyzed how greenwashing affected a company's stability by looking at its volatility in the stock market.

According to Dr. Bhuiyan, the key finding from the research was that greenwashing enhances firms' stability in the short term, but that effect fades away over time.

"In the short term, firms that exaggerate their ESG credentials appear less risky in the market, as investors interpret strong ESG signals as a sign of safety," he said.

"However, this benefit fades over time. When discrepancies between ESG claims and actual emissions become clearer, the market corrects its earlier optimism, and the stabilizing effect of greenwashing weakens."

Dr. Ariful Hoque, senior lecturer in finance at the Murdoch Business School, who also worked on the study, said the team also found that greenwashing was a persistent trend among Australian firms from 2014 to 2022.

"On average, firms consistently reported ESG scores that were higher than what their actual carbon emissions would justify," Dr. Hoque said.

However, in 2023, he said there was a noticeable decline in greenwashing, "likely reflecting stronger ASIC enforcement, mandatory climate-risk disclosures policy starting from 2025, and greater investor scrutiny."

"For regulators, our results support the push for tighter ESG disclosure standards and stronger anti-greenwashing enforcement, as misleading sustainability claims distort risk pricing," he said.

"For investors, the findings highlight the importance of looking beyond headline ESG scores and examining whether firms' environmental claims match their actual emissions.

"For companies, this research indicates that greenwashing may buy short-term credibility, but genuine emissions reduction and transparent reporting are far more effective for managing long-term risk."

More information:

Rahma Mirza et al, False Stability? How Greenwashing Shapes Firm Risk in the Short and Long Run, Journal of Risk and Financial Management (2025). DOI: 10.3390/jrfm18120691


Original Submission