


When transferring multiple 100+ MB files between computers or devices, I typically use:

  • USB memory stick, SD card, or similar
  • External hard drive
  • Optical media (CD/DVD/Blu-ray)
  • Network app (rsync, scp, etc.)
  • Network file system (nfs, samba, etc.)
  • The "cloud" (Dropbox, iCloud, Google Drive, etc.)
  • Email
  • Other (specify in comments)


posted by jelizondo on Sunday October 12, @08:50PM   Printer-friendly
from the resistance-is-futile-you-will-be-assimilated dept.

As a very long-time user of MythTV and free OTA ATSC 1.0 TV, I can say reading this one did not make my day:

CordCutters published news of a recent FCC decision to allow broadcasters flexibility on switching to ATSC 3.0 technology:

In a major shift for American television viewers, the Federal Communications Commission (FCC) has decided against setting a hard deadline to end the old digital TV system that powers most broadcasts and cable services today. [...] The agency, now headed by Brendan Carr, had initially pushed for a quicker switch to the advanced ATSC 3.0 technology, known as NextGen TV. But after hearing concerns from consumer groups, cable companies, and satellite providers, the FCC is choosing a more flexible, voluntary approach to make the change easier for everyone involved.

According to the new proposal this would "tentatively conclude that television stations should be allowed to choose when to stop broadcasting in 1.0 and start broadcasting exclusively in 3.0."

To understand this, it's helpful to step back and explain the basics. For over 15 years, U.S. TV stations and multichannel video programming distributors (MVPDs)—think cable giants like Comcast or satellite services like DirecTV and DISH—have relied on ATSC 1.0. This is the standard digital TV technology that replaced fuzzy analog signals in 2009, delivering clearer pictures and more channels. It's the "original" digital TV, or what some call the "OG" of modern broadcasting. ATSC 1.0 works universally across free over-the-air antennas, cable boxes, and satellite dishes, reaching nearly every household without special upgrades.

NextGen TV, built on ATSC 3.0, promises even better features: sharper 4K video, interactive apps, and stronger signals that can cut through buildings or bad weather. It's like upgrading from a reliable old smartphone to one with a bigger screen and faster apps. The transition started voluntarily during the Biden administration, with a handful of cities testing it out. But since Trump's return in January 2025—about nine and a half months ago—the push intensified. FCC leaders wanted a nationwide shutdown of ATSC 1.0 by a set date to speed things up, arguing it would modernize broadcasting and free up airwaves for new uses.

This aggressive stance hit a wall of opposition. Consumer advocates, led by the Consumer Technology Association (CTA) and its president Gary Shapiro, warned that forcing the change too fast could leave millions of viewers in the dark. Older TVs and set-top boxes might stop working, forcing families to buy new equipment they can't afford. Cable and satellite lobbies echoed these fears, pointing out the massive costs of rewiring their networks to carry the new signals. For context, imagine every home suddenly needing a software update or new hardware just to watch local news—disruptive and expensive, especially for low-income or rural households.

The FCC's latest move, outlined in a document called the Fifth Further Notice of Proposed Rulemaking (FNPRM), listens to these voices. Instead of a mandatory cutoff, the agency proposes keeping the transition market-driven and optional. Broadcasters—the TV stations that send out signals—would get to decide when, or even if, they fully drop ATSC 1.0. Many are already "simulcasting," meaning they beam both the old and new signals at the same time, like offering two radio stations on one frequency. The FCC wants to ease rules around this, removing red tape that currently limits how long stations can keep the old signal running. This builds on policies from the Democratic-led FCC, extending the grace period without a strict timeline.

The plan also calls for ways to cut costs and smooth the ride for all players. For consumers, that could mean subsidies or incentives to upgrade TVs or antennas without breaking the bank. Manufacturers might get breaks on producing hybrid devices that handle both standards. Smaller broadcasters in rural areas, who often operate on tight budgets, would benefit from fewer mandates. And MVPDs could phase in NextGen support at their own pace, avoiding a sudden overhaul that might raise monthly bills.

But the FCC isn't stopping at flexibility—it's opening the floor for public input on trickier issues. One big question: Should new TVs sold in stores be required to receive ATSC 3.0 signals right out of the box? This echoes a famous FCC rule from the 1960s, when regulators under Chairman Newton Minow mandated UHF tuners in TVs. That move helped spark the growth of companies like Sinclair Inc., now a leading cheerleader for NextGen TV. Yet today, the CTA and others are pushing back hard, saying it could hike prices for basic sets and slow sales.

This compromise feels like a win for balance. Proponents of NextGen, like Sinclair, get regulatory green lights to experiment and expand. Critics, including the cable industry, avoid the chaos of a rushed shutdown. For everyday viewers, it means no panic-buying of new gear tomorrow. The transition, which began quietly years ago at events like a 2019 FCC symposium, can now evolve naturally. Back then, questions about integrating NextGen into cable systems lingered unanswered by groups like Pearl TV or the ATSC standards body. Today's proposal nods to those gaps, seeking fresh input.

Reflecting on history adds irony. A quarter-century ago, ATSC 1.0 was hailed as revolutionary, even as early tech from firms like Sinclair hinted at what 3.0 could become. Now, with costs in mind, the FCC is ensuring the next leap doesn't repeat past disruptions. As comments roll in over the coming months, this could shape TV for the next generation—literally. For now, Americans can keep flipping channels without fear of a digital cliff.

Hardware requirements aside, ATSC 3.0 will have DRM which, as I understand it, will make recording impossible. I know there are far worse things going on in Washington now, but wow this sucks.


Original Submission

posted by hubie on Sunday October 12, @04:05PM   Printer-friendly
from the AI-earthquake-overlords dept.

https://arstechnica.com/science/2025/10/like-putting-on-glasses-for-the-first-time-how-ai-improves-earthquake-detection/

On January 1, 2008, at 1:59 am in Calipatria, California, an earthquake happened. You haven't heard of this earthquake; even if you had been living in Calipatria, you wouldn't have felt anything. It was magnitude -0.53, about the same amount of shaking as a truck passing by. Still, this earthquake is notable, not because it was large but because it was small—and yet we know about it.

Over the past seven years, AI tools based on computer imaging have almost completely automated one of the fundamental tasks of seismology: detecting earthquakes.
[...]
"In the best-case scenario, when you adopt these new techniques, even on the same old data, it's kind of like putting on glasses for the first time, and you can see the leaves on the trees," said Kyle Bradley, co-author of the Earthquake Insights newsletter.
[...]
Less certain is what comes next. Earthquake detection is a fundamental part of seismology, but there are many other data processing tasks that have yet to be disrupted. The biggest potential impacts, all the way to earthquake forecasting, haven't materialized yet.

"It really was a revolution," said Joe Byrnes, a professor at the University of Texas at Dallas. "But the revolution is ongoing."
[...]
The main tool that scientists traditionally use is a seismometer. These record the movement of the Earth in three directions: up–down, north–south, and east–west. If an earthquake happens, seismometers can measure the shaking in that particular location.
[...]
Before good algorithms, earthquake cataloging had to happen by hand. Byrnes said that "traditionally, something like the lab at the United States Geological Survey would have an army of mostly undergraduate students or interns looking at seismograms."
[...]
"The field of seismology historically has always advanced as computing has advanced," Bradley told me.

There's a big challenge with traditional algorithms, though: They can't easily find smaller quakes, especially in noisy environments.
[...]
earthquakes have a characteristic "shape." The magnitude 7.7 earthquake above looks quite different from the helicopter landing, for instance.

So one idea scientists had was to make templates from human-labeled datasets. If a new waveform correlates closely with an existing template, it's almost certainly an earthquake.

Template matching works very well if you have enough human-labeled examples. In 2019, Zach Ross' lab at Caltech used template matching to find 10 times as many earthquakes in Southern California as had previously been known
[...]
Also, template matching is computationally expensive. Creating a Southern California quake dataset using template matching took 200 Nvidia P100 GPUs running for days on end.
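To make the template-matching idea concrete, here is a minimal sketch using NumPy: it slides a labeled template waveform along a continuous record and flags windows whose normalized cross-correlation exceeds a threshold. The template, noise level, and detection threshold below are made-up illustrations on a synthetic trace, not values from the Caltech study; real pipelines correlate thousands of templates across many stations, which is where the GPU cost comes from.

```python
import numpy as np

def normalized_xcorr(data, template):
    """Slide `template` along `data` and return the normalized
    cross-correlation coefficient (-1..1) at every offset."""
    n = len(template)
    t = (template - template.mean()) / (template.std() * n)
    scores = np.empty(len(data) - n + 1)
    for i in range(len(scores)):
        window = data[i:i + n]
        std = window.std()
        if std == 0:
            scores[i] = 0.0
            continue
        scores[i] = np.dot(t, (window - window.mean()) / std)
    return scores

# Toy example: a synthetic "earthquake" template hidden in continuous noise.
rng = np.random.default_rng(0)
template = np.sin(np.linspace(0, 8 * np.pi, 200)) * np.hanning(200)
data = rng.normal(0, 0.3, 5000)
data[3000:3200] += template                 # buried event at sample 3000

scores = normalized_xcorr(data, template)
hits = np.flatnonzero(scores > 0.6)         # threshold is illustrative
print("candidate detections near sample:", hits[:5])
```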
[...]
AI detection models solve all of these problems:

  • They are faster than template matching.
  • Because AI detection models are very small (around 350,000 parameters compared to billions in LLMs like GPT-4), they can be run on consumer CPUs.
  • AI models generalize well to regions not represented in the original dataset.

[...]
To train an AI model, scientists take large amounts of labeled data, like what's above, and do supervised training.
[...]
Earthquake Transformer was developed around 2020 by a Stanford University team led by S. Mostafa Mousavi, who later became a Harvard professor.

Like many earthquake detection models, Earthquake Transformer adapts ideas from image classification. Readers may be familiar with AlexNet, a famous image-recognition model that kicked off the deep-learning boom in 2012.
[...]
Earthquake Transformer converts raw waveform data into a collection of high-level representations that indicate the likelihood of earthquakes and other seismologically significant events. This is followed by a series of deconvolution layers that pinpoint exactly when an earthquake—and its all-important P and S waves—occurred.

The model also uses an attention layer in the middle of the model to mix information between different parts of the time series. The attention mechanism is most famous in large language models, where it helps pass information between words. It plays a similar role in seismographic detection.
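For readers who want to see what that encoder, attention, decoder shape looks like in code, here is a toy PyTorch sketch. It is loosely inspired by the description above, not the actual Earthquake Transformer architecture (which is far larger and more carefully designed); the layer sizes, window length, and sample rate are arbitrary placeholders.

```python
import torch
import torch.nn as nn

class TinyQuakePicker(nn.Module):
    """Toy encoder/attention/decoder for 3-component waveforms.
    Outputs per-sample probabilities for detection, P arrival, S arrival."""
    def __init__(self, channels=3, hidden=32, heads=4):
        super().__init__()
        # Convolutional encoder: downsample the waveform and build features.
        self.encoder = nn.Sequential(
            nn.Conv1d(channels, hidden, kernel_size=7, stride=2, padding=3),
            nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=7, stride=2, padding=3),
            nn.ReLU(),
        )
        # Self-attention mixes information across the whole time window.
        self.attention = nn.MultiheadAttention(hidden, heads, batch_first=True)
        # Transposed convolutions upsample back to per-sample resolution.
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(hidden, hidden, kernel_size=4, stride=2, padding=1),
            nn.ReLU(),
            nn.ConvTranspose1d(hidden, 3, kernel_size=4, stride=2, padding=1),
        )

    def forward(self, x):                      # x: (batch, 3, samples)
        h = self.encoder(x)                    # (batch, hidden, samples/4)
        h_t = h.transpose(1, 2)                # attention expects (batch, time, features)
        attn, _ = self.attention(h_t, h_t, h_t)
        h = (h_t + attn).transpose(1, 2)       # residual connection
        return torch.sigmoid(self.decoder(h))  # (batch, 3, samples)

# Eight random 60-second, 100 Hz, 3-component traces as stand-in input.
waveforms = torch.randn(8, 3, 6000)
probs = TinyQuakePicker()(waveforms)
print(probs.shape)   # torch.Size([8, 3, 6000])
```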
[...]
Ars has previously reported on how the introduction of ImageNet, an image recognition benchmark, helped spark the deep learning boom. Large, publicly available earthquake datasets have played a similar role in seismology.

Earthquake Transformer was trained using the Stanford Earthquake Dataset (STEAD), which contains 1.2 million human-labeled segments of seismogram data from around the world. (The paper for STEAD explicitly mentions ImageNet as an inspiration). Other models, like PhaseNet, were also trained on hundreds of thousands or millions of labeled segments.
[...]
The holy grail of earthquake science is earthquake prediction. For instance, scientists know that a large quake will happen near Seattle but have little ability to know whether it will happen tomorrow or in a hundred years. It would be helpful if we could predict earthquakes precisely enough to allow people in affected areas to evacuate.

You might think AI tools would help predict earthquakes, but that doesn't seem to have happened yet.
[...]
As in many other scientific fields, seismologists face some pressure to adopt AI methods, whether or not they are relevant to their research.

"The schools want you to put the word AI in front of everything," Byrnes said. "It's a little out of control."

This can lead to papers that are technically sound but practically useless. Hubbard and Bradley told me that they've seen a lot of papers based on AI techniques that "reveal a fundamental misunderstanding of how earthquakes work."
[...]
While these are real issues, and ones Understanding AI has reported on before, I don't think they detract from the success of AI earthquake detection. In the last five years, an AI-based workflow has almost completely replaced one of the fundamental tasks in seismology for the better.

That's pretty cool.

Earthquake in SoylentNews stories:
Earthquake search on SoylentNews


Original Submission

posted by hubie on Sunday October 12, @11:20AM   Printer-friendly

Kessler syndrome is bad; atmospheric incineration may be worse:

If you had to guess how many Starlink satellites burn up in Earth's atmosphere on an average day, how many would you pick? This isn't a trick question - SpaceX is deorbiting about one or two satellites daily, and that number is only going to grow.

What that means for our planet isn't entirely clear, says Harvard astrophysicist and space tracker Jonathan McDowell. Even so, Starlink isn't the space junk risk that some other satellite operations are.

McDowell commented on the massive volume of reentering Starlink satellites to science news site EarthSky last week. He explained that once Starlink and other planned low Earth orbit constellations together total about 30,000 satellites, roughly five could reenter the atmosphere each day, given an average replacement cycle of around five years.

[...] Starlink isn't the biggest concern when it comes to passing the Kessler tipping point, McDowell told us – but it is still a source of worry.

"Active satellite maneuvers to avoid collisions will help avoid Kessler," McDowell said in a phone conversation. "If they're successful. And that's a big if."

The current strategy to de-orbit Starlink satellites, which operate in a low orbit below 600 kilometers, is to use the satellites' thrusters to move them to such a low orbit that they eventually catch drag in the atmosphere and burn up in what McDowell calls an "uncontrolled but assisted" reentry.

Purposeful de-orbiting, plus successful dodging, mean we can avoid Kessler syndrome, McDowell told us.

[...] Excepting the possibility of unplanned disaster, Starlink's operations aren't the biggest concern, McDowell added. China's satellite plans are far more worrying.

"The region of space closest to Kessler is the 600 to 1,000 kilometer range," McDowell said. "It's full of old Soviet rocket stages and other stuff, and the more we add there, the more likely it is for Kessler syndrome to occur."

While many of China's proposed satellite constellations are going to be in low Earth orbit at the same altitude as Starlink, McDowell noted that a number are planning to fly above 1,000 kilometers. Were something to go wrong up there, McDowell noted, "we're probably screwed."

"That higher altitude means the atmosphere won't drag them down for centuries," McDowell added. "And I haven't seen [China] demonstrate any retirement plans for those satellites."

Kessler's bad, but destroying the atmosphere is worse

It would be a tragedy if humanity polluted Earth's orbit so much that we were effectively cut off from space, but were we to poison ourselves by filling the atmosphere with the remnants of burned-up satellites and die before we reached Kessler syndrome, that would arguably be worse.

McDowell is definitely worried about both, explaining that the effects on our planet of "using the upper atmosphere as an incinerator" are largely unknown, and a massive, dangerous blind spot. Not a lot of research has been done on what the growing number of atmospheric reentries could do to Earth and the life it harbors, but it's already shocking how much stuff is floating around above our heads.

According to the US National Oceanic and Atmospheric Administration, around 10 percent of the aerosol particles in the stratosphere (the second layer of Earth's atmosphere where the ozone layer lives) contain aluminum and exotic metals believed to be from rockets and satellites that have burned up on reentry. NOAA believes that number could grow to as much as 50 percent as space launches and reentries increase.

What little research has been done into the effects of so much foreign material burning up in Earth's atmosphere has been inconclusive, McDowell explained.

"So far answers have ranged from 'this is too small to be a problem' to 'we're already screwed,'" McDowell told us. "But the uncertainty is large enough that there's already a possibility we're damaging the upper atmosphere."


Original Submission

posted by hubie on Sunday October 12, @06:37AM   Printer-friendly
from the laughing-in-IRC dept.

Discord has revealed that one of its customer service providers has suffered a data breach. The attackers gained access to government ID images and user details.

Discord doesn't actually mention when the breach took place; it only says it "recently discovered an incident". The fact that government ID images were stolen is important: the U.K.'s Online Safety Act came into effect on July 25, 2025, so the data breach happened sometime between then and October 3rd, when the news about the incident was revealed. It's also worth noting that the victim of the hack was a third-party customer service provider that has not been named.

As for the attack, the incident involved an unauthorized party compromising one of the messaging service's customer service providers, which in turn gave the hackers access to limited customer data pertaining to those who had contacted the Customer Support and/or Trust & Safety teams. Discord says it revoked the breached service provider's access to its ticketing system. It is investigating the matter with the help of a computer forensics firm and is working with law enforcement. Users who were impacted by the incident are being notified via an email sent from [email protected]

Here's what Discord says the hackers managed to access:

  • Name, Discord username, email, and other contact details that were provided to customer support
  • Billing information such as payment type, the last four digits of credit cards, and purchase history of the accounts
  • IP addresses
  • Messages with customer service agents
  • Limited corporate data (training materials, internal presentations)

There was something else.

"The unauthorized party also gained access to a small number of government?ID images (e.g., driver's license, passport) from users who had appealed an age determination. If your ID may have been accessed, that will be specified in the email you receive."

The story continues:

https://www.ghacks.net/2025/10/06/discord-customer-service-data-breached-government-id-images-and-user-details-stolen/


Original Submission

posted by hubie on Sunday October 12, @01:47AM   Printer-friendly

Covert Eavesdropping through Computer Mice

The abstract from the arXiv paper states:

High-Performance Optical Sensors in Mice expose a critical vulnerability — one where confidential user speech can be leaked. Attackers can exploit these sensors' ever-increasing polling rate and sensitivity to emulate a makeshift microphone and covertly eavesdrop on unsuspecting users. We present an attack vector that capitalizes on acoustic vibrations propagated through the user's work surface, and we show that existing consumer-grade mice can detect these vibrations. However, the collected signal is low-quality and suffers from non-uniform sampling, a non-linear frequency response, and extreme quantization. We introduce Mic-E-Mouse, a pipeline consisting of successive signal processing and machine learning techniques to overcome these challenges and achieve intelligible reconstruction of user speech. We measure Mic-E-Mouse against consumer-grade sensors on the VCTK and AudioMNIST speech datasets, and we achieve an SI-SNR increase of +19 dB, a Speaker-Recognition accuracy of 80% on the automated tests and a WER of 16.79% on the human study.

Additional details: Computer mice can eavesdrop on private conversations, researchers discover

High-end computer mice can be used to eavesdrop on the voice conversations of nearby PC users, researchers from the University of California, Irvine, have shown in a new proof-of-concept demonstration.

Given the catchy name 'Mic-E-Mouse' (Microphone-Emulating Mouse), the ingenious technique outlined in Invisible Ears at Your Fingertips: Acoustic Eavesdropping via Mouse Sensors is based on the discovery that some optical mice pick up incredibly small sound vibrations reaching them through the desk surfaces on which they are being used.

These vibrations could then be captured by different types of software on PC, Mac or Linux computers, including non-privileged 'user space' programs such as web browsers or game engines or, failing that, privileged components at OS kernel level.

Although the captured signals were inaudible at first, the team were able to enhance them using Wiener and neural network statistical filtering to boost signal strength relative to noise.
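As a rough illustration of the Wiener-filtering step only (the paper's full pipeline also handles non-uniform sampling, quantization, and adds a neural-network stage), here is a textbook frequency-domain Wiener filter applied to a synthetic stand-in signal. The frequencies, amplitudes, and sample rate below are invented for the demo and are not taken from the Mic-E-Mouse work.

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 8000
t = np.arange(0, 1.0, 1 / fs)

# Weak 200 Hz tone standing in for desk-borne speech vibration,
# buried in broadband sensor noise of similar amplitude.
clean = 0.05 * np.sin(2 * np.pi * 200 * t)
noisy = clean + rng.normal(0, 0.05, t.size)

# Frequency-domain Wiener gain: H = SNR / (1 + SNR) per bin, with the
# noise spectrum estimated from a separate, signal-free recording.
noise_psd = np.abs(np.fft.rfft(rng.normal(0, 0.05, t.size))) ** 2
X = np.fft.rfft(noisy)
snr_bin = np.maximum(np.abs(X) ** 2 - noise_psd, 0) / noise_psd
H = snr_bin / (1 + snr_bin)
denoised = np.fft.irfft(H * X, n=t.size)

def snr_db(reference, estimate):
    err = estimate - reference
    return 10 * np.log10(np.sum(reference ** 2) / np.sum(err ** 2))

print(f"SNR before filtering: {snr_db(clean, noisy):.1f} dB")
print(f"SNR after filtering:  {snr_db(clean, denoised):.1f} dB")
```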

As the video demonstration of this process shows, this made it possible to extract spoken words from an eavesdropped data stream that at first sounded impossibly muffled.

"Through our Mic-E-Mouse pipeline, vibrations detected by the mouse on the victim user's desk are transformed into comprehensive audio, allowing an attacker to eavesdrop on confidential conversations," the researchers wrote.

Moreover, they said, this type of attack would be undetectable by defenders: "This process is stealthy since the vibrations signals collection is invisible to the victim user and does not require high privileges on the attacker's side."

[...] However, there are important caveats that limit the scope of Mic-E-Mouse. The noise level of the environment being eavesdropped upon must be low, with desks no more than 3cm thick, and with the mouse mostly stationary to isolate voice vibrations.

The researchers also used mice with a DPI of at least 20,000, significantly above that of the average mouse in use today.

Under real-world conditions, extracting voice data would be possible but challenging. Attackers would likely only be able to capture some conversation, rather than everything being said.

Another weakness is that defending against it wouldn't be difficult: using a rubber pad or mouse mat under a mouse would stop vibrations from being picked up.


Original Submission

posted by janrinok on Saturday October 11, @08:58PM   Printer-friendly

From Cory Doctorow's blog:

Like you, I'm sick to the back teeth of talking about AI. Like you, I keep getting dragged into discussions of AI. Unlike you, I spent the summer writing a book about why I'm sick of writing about AI, which Farrar, Straus and Giroux will publish in 2026.

A week ago, I turned that book into a speech, which I delivered as the annual Nordlander Memorial Lecture at Cornell, where I'm an AD White Professor-at-Large. This was my first-ever speech about AI and I wasn't sure how it would go over, but thankfully, it went great and sparked a lively Q&A. One of those questions came from a young man who said something like "So, you're saying a third of the stock market is tied up in seven AI companies that have no way to become profitable and that this is a bubble that's going to burst and take the whole economy with it?"

I said, "Yes, that's right."

He said, "OK, but what can we do about that?"

So I re-iterated the book's thesis: that the AI bubble is driven by monopolists who've conquered their markets and have no more growth potential, who are desperate to convince investors that they can continue to grow by moving into some other sector, e.g. "pivot to video," crypto, blockchain, NFTs, AI, and now "super-intelligence." Further: the topline growth that AI companies are selling comes from replacing most workers with AI, and re-tasking the surviving workers as AI babysitters ("humans in the loop"), which won't work. Finally: AI cannot do your job, but an AI salesman can 100% convince your boss to fire you and replace you with an AI that can't do your job, and when the bubble bursts, the money-hemorrhaging "foundation models" will be shut off and we'll lose the AI that can't do your job, and you will be long gone, retrained or retired or "discouraged" and out of the labor market, and no one will do your job. AI is the asbestos we are shoveling into the walls of our society and our descendants will be digging it out for generations:

The only thing (I said) that we can do about this is to puncture the AI bubble as soon as possible, to halt this before it progresses any further and to head off the accumulation of social and economic debt. To do that, we have to take aim at the material basis for the AI bubble (creating a growth story by claiming that defective AI can do your job).

"OK," the young man said, "but what can we do about the crash?" He was clearly very worried.

"I don't think there's anything we can do about that. I think it's already locked in. I mean, maybe if we had a different government, they'd fund a jobs guarantee to pull us out of it, but I don't think Trump'll do that, so –"

[...] I firmly believe the (economic) AI apocalypse is coming. These companies are not profitable. They can't be profitable. They keep the lights on by soaking up hundreds of billions of dollars in other people's money and then lighting it on fire. Eventually those other people are going to want to see a return on their investment, and when they don't get it, they will halt the flow of billions of dollars. Anything that can't go on forever eventually stops.

[...] The data-center buildout has genuinely absurd finances – there are data-center companies that are collateralizing their loans by staking their giant Nvidia GPUs as collateral. This is wild: there's pretty much nothing (apart from fresh-caught fish) that loses its value faster than silicon chips. That goes triple for GPUs used in AI data-centers, where it's normal for tens of thousands of chips to burn out over a single, 54-day training run.

That barely scratches the surface of the funny accounting in the AI bubble. Microsoft "invests" in Openai by giving the company free access to its servers. Openai reports this as a ten billion dollar investment, then redeems these "tokens" at Microsoft's data-centers. Microsoft then books this as ten billion in revenue.

That's par for the course in AI, where it's normal for Nvidia to "invest" tens of billions in a data-center company, which then spends that investment buying Nvidia chips. The same chunk of money is being energetically passed back and forth between these closely related companies, all of which claim it as investment, as an asset, or as revenue (or all three).

[...] Industry darlings like Coreweave (a middleman that rents out data-centers) are sitting on massive piles of debt, secured by short-term deals with tech companies that run out long before the debts can be repaid. If they can't find a bunch of new clients in a couple short years, they will default and collapse.

[...] Plan for a future where you can buy GPUs for ten cents on the dollar, where there's a buyer's market for hiring skilled applied statisticians, and where there's a ton of extremely promising open source models that have barely been optimized and have vast potential for improvement.

[...] The most important thing about AI isn't its technical capabilities or limitations. The most important thing is the investor story and the ensuing mania that has teed up an economic catastrophe that will harm hundreds of millions or even billions of people. AI isn't going to wake up, become superintelligent and turn you into paperclips – but rich people with AI investor psychosis are almost certainly going to make you much, much poorer.


Original Submission

posted by janrinok on Saturday October 11, @04:13PM   Printer-friendly
from the stand-up-to-bullies dept.

Last week, U.S. Education Secretary Linda McMahon produced the latest attack on academia, the "Compact for Academic Excellence in Higher Education," which was addressed to a small group of well-known US universities. If you missed it, there is a description at https://en.wikipedia.org/wiki/Compact_for_Academic_Excellence_in_Higher_Education

Today (10/10/2025), MIT became the first of the group to reject the offer. Here is the letter from MIT's president: https://orgchart.mit.edu/letters/regarding-compact
It's not long and is worth a read; here is the punch line:

In our view, America's leadership in science and innovation depends on independent thinking and open competition for excellence. In that free marketplace of ideas, the people of MIT gladly compete with the very best, without preferences. Therefore, with respect, we cannot support the proposed approach to addressing the issues facing higher education.

And here's one of the letter's bullet points:

MIT opens its doors to the most talented students regardless of their family's finances. Admissions are need-blind. Incoming undergraduates whose families earn less than $200,000 a year pay no tuition. Nearly 88% of our last graduating class left MIT with no debt for their education. We make a wealth of free courses and low-cost certificates available to any American with an internet connection. Of the undergraduate degrees we award, 94% are in STEM fields. And in service to the nation, we cap enrollment of international undergraduates at roughly 10%.


Original Submission

posted by janrinok on Saturday October 11, @11:28AM   Printer-friendly

Baseload power is functionally extinct:

Much has been made of the notion that "renewables can't supply baseload power". This line suggests we need to replace Australia's ageing coal fleet with new coal or nuclear. The fact of the matter is that, already, "baseload" is an outdated concept and baseload generators face extinction.

Traditional utility grid management suggests there are three types of load: baseload, shoulder, and peak. Baseload is the underlying 24/7 energy demand. Peak load covers regular but short-lived periods of high demand, and shoulder load is what lies in between. Under this model, system planning is straightforward – assign different types of energy generation to the different loads according to their price and qualitative characteristics.

[Figure: Traditional, simple dispatch of generation technologies according to cost and flexibility]

Historically in Australia, coal has supplied most baseload demand since it is relatively cheap and very slow to ramp its output up or down. In some countries, baseload is met with nuclear, which is even less flexible than coal, but only two countries generate more than 50% of their electricity from nuclear.

With the roles of different generators clearly delineated, power planners' jobs are much easier in this idealised system than in today's grid.

In a system with lots of solar, prices fall dramatically at around midday because solar has no fuel cost. Because much of Australian solar is on rooftops, grid demand also falls. For those hours, baseload generators must either operate at a loss or shut down. Continuing to generate produces more energy than the grid requires at very low or negative prices. This is not a conscious choice—it is simply the structure of the market: the cheapest bid gets dispatched first.

In practice, most baseload generators are simply not capable of ramping up and down fast enough – they must bear loss-making prices in the middle of the day and try to make it up with high prices at peak periods. Moreover, this daily up/down ramp (called "load-following") brings efficiency losses and extra maintenance costs.

[Figure: The situation in modern Australia – because baseload generators cannot be turned off, cheap solar is curtailed in the middle of the day]

As solar increases, this dynamic makes baseload generators impractical and unprofitable. Already, this is the situation in South Australia – in the last week of Winter 2024, SA ran on more than 100% net renewables. SA is instantaneously meeting 100% of demand from solar alone most days. It is no surprise that SA's last coal-fired power plant shut nearly a decade ago, in 2016, after years of being operated only seasonally.

The rest of Australia has not yet caught up to SA and Tasmania in terms of renewables and there is still a case for coal in the national energy market. However, the trend in solar uptake is abundantly clear and there will be no economic case for coal in just a few short years' time anywhere in Australia.

Excess energy in the middle of the day is useless if no-one wants to use it or if they want to use it overnight; this is where firming is required. When variable renewables are paired with enough storage or back-up power, it is called "firm". For a utility grid, this means large amounts of storage such as batteries and pumped hydro energy storage, as well as flexible generation such as hydro and possibly open cycle gas turbines.

In our transitioning grid, baseload generators run at a loss in the day while storage offtakes cheap solar to sell at peak times. This is called energy arbitrage — buying low and selling high — and it is extremely profitable. It is tempting to think this arrangement could continue, but it cannot. As more batteries come online, the economics of baseload generators gets worse.
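To make the arbitrage point concrete, here is a toy calculation with made-up prices (they are illustrative round numbers, not actual NEM figures): a 100 MWh battery charging at a midday price and discharging at the evening peak.

```python
# Toy battery-arbitrage calculation with illustrative (made-up) numbers.
capacity_mwh = 100           # usable storage
round_trip_efficiency = 0.9  # fraction of charged energy returned
midday_price = 20            # $/MWh when solar floods the market
evening_price = 300          # $/MWh at the evening peak

cost_to_charge = capacity_mwh * midday_price
revenue = capacity_mwh * round_trip_efficiency * evening_price
profit_per_cycle = revenue - cost_to_charge

print(f"Charge cost:            ${cost_to_charge:,.0f}")
print(f"Discharge revenue:      ${revenue:,.0f}")
print(f"Profit per daily cycle: ${profit_per_cycle:,.0f}")
# One cycle a day at these prices is a bit over $9M a year per 100 MWh,
# which is why storage keeps eating into peak prices.
```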

We are set for a storage surge as:

  • utility batteries come online,
  • electric vehicles integrate with the grid,
  • Albanese offers household battery subsidies, and
  • battery prices continue to plummet.

In this future, midday energy is still practically free because storage cannot consume it all, and peak power prices are reduced because of battery arbitrage. Without profitable peak power prices, the economics of baseload generation are well and truly dead.

Power-hungry data centres have been meeting planning roadblocks because they consume more power than local infrastructure can handle. Rather than waiting for third parties to build out infrastructure, big tech companies want to take matters into their own hands. The possibility of big tech companies commissioning or commandeering nuclear reactors to supply new data centres with 24/7 power has created a media buzz.

It is unlikely that a self-reliant data centre would look to 100% renewables. This is not because renewables are unreliable; it is because firming renewables is easier at larger scales – wide geography helps to smooth out locally variable weather. Although nuclear is the most expensive option, big tech has cash to burn. The bigger hurdle to new nuclear is a 10-year-plus build timeline.

But whether or not data centres adopt nuclear is irrelevant for civil electricity because utility electricity grids are not data centres. If big tech builds nuclear to power data centres, it neither proves nor disproves that that technology is a good option for the whole grid.

Peter Dutton, if he succeeds in the upcoming election, faces an uphill battle to enact his nuclear energy policy. Not only must he overturn federal and state bans on nuclear power, he also has to figure out how the plants would make money. If Dutton were to build a nuclear plant, it would require a forever-subsidy to compete in the market.

The industry is aware of this. Daniel Westerman, chief executive of the market operator AEMO, was recently quoted as saying: "Australia's operational paradigm is no longer 'baseload-and-peaking'." AEMO has said competition from renewables is a key reason why coal has been retiring faster than announced.

The market is aware, and the industry is aware, that baseload is not endangered; it is already functionally extinct. If the Coalition do build a nuclear power plant, Australian taxpayers will be the proud owners of an unprofitable, uncompetitive, expensive and unsellable liability.


Original Submission

posted by janrinok on Saturday October 11, @06:42AM   Printer-friendly
from the living-history dept.

David C Brock interviewed Ken Thompson for the Computer History Museum. It's a long interview with a video with a written transcript. The video is just over 4.5 hours long. The transcript weighs in at 64 pages as a downloadable PDF locked behind a CPU- and RAM-chewing web app.

This is an oral history interview with Ken Thompson, created in partnership by the Association for Computing Machinery and the Computer History Museum, in connection with his A.M. Turing Award in 1983. The interview begins with Thompson's family background and youth, detailing the hobbies he pursued intently from electronics and radio projects, to music, cars, and chess. He describes his experience at the University of California, Berkeley, and his deepening engagement with computers and computer programming there.

The interview then moves to his recruitment to the Bell Telephone Laboratories, and his experience of the Multics project. Thompson next describes his development of Unix and, with Dennis Ritchie, the programming language C. He describes the development of Unix and the Unix community at Bell Labs, and then details his work using Unix for the Number 5 Electronic Switching System. Thompson details his Turing Award lecture, the work on compromised compilers that led to it, and his views on computer security.

Next, he details his career in computer chess and work he did for Bell Labs artist Lillian Schwartz. Thompson describes his work on the Plan 9 operating system at Bell Labs with Rob Pike, and his efforts to create a digital music archive. He then details his post Bell Labs career at Entrisphere and then Google, including his role in Google Books and the creation of the Go programming language.

Previously:
(2025) Why Bell Labs Worked
(2022) Unix History: A Mighty Origin Story
(2019) Vintage Computer Federation East 2019 -- Brian Kernighan Interviews Ken Thompson


Original Submission

posted by janrinok on Saturday October 11, @02:01AM   Printer-friendly

From the Trenches

An interesting article about software quality over the years - by Denis Stetskov

The Apple Calculator leaked 32GB of RAM.

Not used. Not allocated. Leaked. A basic calculator app is haemorrhaging more memory than most computers had a decade ago.

Twenty years ago, this would have triggered emergency patches and post-mortems. Today, it's just another bug report in the queue.

We've normalized software catastrophes to the point where a Calculator leaking 32GB of RAM barely makes the news. This isn't about AI. The quality crisis started years before ChatGPT existed. AI just weaponized existing incompetence.

The Numbers Nobody Wants to Discuss:

I've been tracking software quality metrics for three years. The degradation isn't gradual—it's exponential.

Memory consumption has lost all meaning:

  • VS Code: 96GB memory leaks through SSH connections
  • Microsoft Teams: 100% CPU usage on 32GB machines
  • Chrome: 16GB consumption for 50 tabs is now "normal"
  • Discord: 32GB RAM usage within 60 seconds of screen sharing
  • Spotify: 79GB memory consumption on macOS

These aren't feature requirements. They're memory leaks that nobody bothered to fix.
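As a deliberately simplified illustration of the kind of unbounded-growth bug being described (this is generic example code, not taken from any of the applications above), here is a Python sketch of a cache that is never evicted, with the standard-library tracemalloc module showing the memory it pins:

```python
import tracemalloc

# Classic leak pattern: a long-lived cache that only ever grows.
_response_cache = {}

def handle_request(request_id: int) -> bytes:
    payload = bytes(10_000)                 # pretend this is a rendered response
    _response_cache[request_id] = payload   # cached forever, never evicted
    return payload

tracemalloc.start()
for request_id in range(10_000):
    handle_request(request_id)              # every "request" pins ~10 kB for good

current, peak = tracemalloc.get_traced_memory()
print(f"Memory still referenced: {current / 1e6:.0f} MB (peak {peak / 1e6:.0f} MB)")
# A bounded cache (functools.lru_cache, an LRU dict, or explicit eviction)
# keeps this flat instead of growing with every request.
```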

This isn't sustainable. Physics doesn't negotiate. Energy is finite. Hardware has limits.

The companies that survive won't be those that can outspend the crisis. They'll be those that remember how to engineer.

We're living through the greatest software quality crisis in computing history. A Calculator leaks 32GB of RAM. AI assistants delete production databases. Companies spend $364 billion to avoid fixing fundamental problems.


Original Submission

posted by janrinok on Friday October 10, @09:14PM   Printer-friendly
from the messy-wires dept.

Qualcomm on Tuesday said it has acquired Arduino, an Italian firm that makes open-source hardware and software for developing prototypes of robots and other electronic gadgets, Reuters reports at https://www.reuters.com/world/asia-pacific/qualcomm-buys-open-source-electronics-firm-arduino-2025-10-07/.

Arduino's own announcement can be found at https://blog.arduino.cc/2025/10/07/a-new-chapter-for-arduino-with-qualcomm-uno-q-and-you/.

Along with news that might confuse those who could not imagine "Arduino" itself as a tangible sales item, Arduino introduced a new model in the Uno form factor that combines a Qualcomm Dragonwing QRB2210 to run Linux, an STM32U585 microcontroller for hardware interfacing, and a new high-density connector on the bottom side. It is priced at $44 in the Arduino store.

Reception of the news seems to be mixed in various channels; many doubt that Qualcomm, with its history, would be a good steward for an ecosystem like Arduino.

The new Arduino UNO Q moves squarely into Raspberry Pi territory, where the Pi 5 currently sells for around $55 with mostly comparable features, at least if the RP2040-like features in the RP1 I/O controller are counted in.


Original Submission

posted by jelizondo on Friday October 10, @04:31PM   Printer-friendly
from the trading-climate-abatement-for-microplastics-infiltration dept.

Turning dissolved carbon dioxide from seawater to biodegradable plastic is an especially powerful way to clean up the ocean:

Not-so-fun fact: our oceans hold 150 times more carbon dioxide than the Earth's atmosphere. Adding to that causes ocean acidification, which can disrupt marine food chains and reduce biodiversity.

Addressing this could not only help restore balance to underwater ecosystems, but also take advantage of an opportunity to sustainably use this stored CO2 for a variety of purposes – including producing the industrial chemicals needed to make plastic.

The first step towards this, called Direct Ocean Capture (DOC) – which refers to removing dissolved carbon directly from seawater – happens through electrochemical processes. While there are a bunch of companies working on this, it hasn't been applied extensively at scale yet, and the cost-benefit doesn't look great at the moment (it's estimated that removing 1 ton of CO2 from the ocean could cost at least US$373, according to Climate Interventions).

Scientists from the Chinese Academy of Sciences and the University of Electronic Science and Technology of China – both in Shenzhen, China – have devised a DOC method which involves converting the captured CO2 into biodegradable plastic precursors. This approach is also described as operating at 70% efficiency, while consuming a relatively small amount of energy (3 kWh per kg of CO2), and working out to an impressive $230 per ton of CO2.
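As a rough sanity check on how the quoted energy figure relates to the quoted cost, here is a back-of-the-envelope calculation. The electricity price below is an assumption for illustration, not a number from the paper.

```python
# Rough sanity check on the reported energy figure, using an assumed
# electricity price (the $/kWh value below is illustrative, not from the paper).
energy_kwh_per_kg = 3.0      # reported: 3 kWh per kg of captured CO2
kg_per_ton = 1000
electricity_price = 0.05     # assumed $/kWh (e.g. cheap industrial or solar power)

energy_per_ton = energy_kwh_per_kg * kg_per_ton              # 3,000 kWh per ton
electricity_cost_per_ton = energy_per_ton * electricity_price

print(f"Energy per ton of CO2:  {energy_per_ton:,.0f} kWh")
print(f"Electricity cost alone: ${electricity_cost_per_ton:,.0f} per ton")
# At this assumed price, roughly $150 of the quoted ~$230/ton would be
# electricity, so the economics hinge heavily on access to cheap power.
```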

What's also worth noting is the use of modified marine bacteria for the last step. Here's a breakdown of the process, described in a paper appearing in Nature Catalysis:

First, electricity is used in a special reactor to acidify natural seawater. This converts the invisible, dissolved carbon into pure gas, which is collected. The system then restores the water's natural chemistry before returning it to the ocean.

Next, the captured CO2 gas is fed into a second reactor containing a bismuth-based catalyst to yield a concentrated, pure liquid called formic acid. Formic acid is a critical intermediate because it is an energy-rich food source for microbes.

Engineered marine microbes, specifically Vibrio natriegens, are fed the pure formic acid as their sole source of carbon. The microbes metabolize the formic acid and efficiently produce succinic acid, which is then used directly as the essential precursor to synthesize biodegradable plastics, such as polybutylene succinate (PBS).

That's a pretty good start. The researchers note there's room for optimization to boost yields and integrate this system into industrial processes. It could also be altered to produce chemicals for use in fuels, drugs, and foods.

It also remains to be seen how quickly the team can commercialize this DOC method, because it may have formidable competition. For example, Netherlands-based Brineworks says it will get to under $200/ton by 2030 with its electrolysis-based solution. The next couple of years will be worth watching in this fascinating niche of decarbonization.

Journal Reference: Li, C., Guo, M., Yang, B. et al. Efficient and scalable upcycling of oceanic carbon sources into bioplastic monomers. Nat Catal (2025). https://doi.org/10.1038/s41929-025-01416-4


Original Submission

posted by jelizondo on Friday October 10, @11:47AM   Printer-friendly
from the better-late-than-never-news dept.

The transistor was patented 75 years ago today:

75 years ago, the three Bell Labs scientists behind the invention of the transistor would, at last, have the U.S. Patent in their hands. This insignificant-looking semiconductor device with three electrodes sparked the third industrial revolution. Moreover, it ushered in the age of silicon and software, which still dominates business and human society to this day.

The first working transistor was demonstrated in 1947, but it wasn't until October 3, 1950, that the patent was secured by John Bardeen, Walter Brattain, and William Shockley. The patent was issued for a "three-electrode circuit element utilizing semiconductor materials." It would take several more years before the significant impacts transistors would have on business and society were realized.

Transistors replaced the bulky, fragile and power-hungry valves that stubbornly remain present in some guitar amplifiers, audiophile sound systems, and studio gear, where their 'organic' sound profile is sometimes preferred. We also still see valves in some military, scientific, and microwave/RF applications, where transistors might be susceptible to radiation or other interference. There are other niche use cases.

Beyond miniaturization, transistors would deliver dramatic boosts in computational speed, energy efficiency, and reliability. Moreover, they became the foundation for integrated circuits and processors, where billions of transistors can operate reliably in a much smaller footprint than that taken up by a single valve. Processors featuring a trillion transistors are now on the horizon.

For PC enthusiasts, probably the best-known piece of transistor lore comes from Intel co-founder Gordon Moore. Of course, we are talking about Moore's Law, which was an observation by the pioneering American engineer. Moore's most famous prediction was that "the number of transistors on an integrated circuit will double every two years with minimal rise in cost." (The law was revised from doubling every year to every two years in 1975.)

Obviously, prior to 1965, when Moore's Law was set out, the startling advance in transistor technology indicated that such an extrapolation would be reasonable. Even now, certain semiconductor companies, engineers, and commentators reckon that Moore's Law is still alive and well.
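As a back-of-the-envelope illustration of what a two-year doubling implies, here is a small Python sketch. The 2,300-transistor starting point is the commonly cited count for Intel's 1971 4004; the projection is a rough extrapolation, not data from the article.

```python
# Back-of-the-envelope Moore's Law projection: N(t) = N0 * 2**((t - t0) / 2)
n0, t0 = 2_300, 1971      # commonly cited transistor count of the Intel 4004

def moores_law(year: int) -> float:
    """Projected transistor count assuming a doubling every two years."""
    return n0 * 2 ** ((year - t0) / 2)

for year in (1981, 1991, 2001, 2011, 2021):
    print(f"{year}: ~{moores_law(year):,.0f} transistors")
# The 2021 projection lands in the tens of billions, the same ballpark as
# the largest chips actually shipping around then.
```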

Whatever the case, it can't be denied that since the patenting of the transistor, we have seen incredible miniaturization and advances in computing and software, expanding the possibilities of minds and machines. The current tech universe is actually buzzing with firms that reckon they can make machines with minds - artificial intelligence.


Original Submission

posted by janrinok on Friday October 10, @11:11AM   Printer-friendly
from the party-in-stockholm-whoooo! dept.

Venezuelan opposition leader María Corina Machado has been awarded this year's Nobel Peace Prize

Somewhat better than the Ig Nobel prizes are the actual Nobel Prizes. Winners started to be announced this week. So far Medicine and Physics have been revealed; the others will follow in the coming days as I write this.

Physics. "for the discovery of macroscopic quantum mechanical tunnelling and energy quantisation in an electric circuit"
Medicine. "for their discoveries concerning peripheral immune tolerance"

Chemistry.
Literature.

Peace.
Economic Science.

https://www.nobelprize.org/all-nobel-prizes-2025/


Original Submission

posted by jelizondo on Friday October 10, @07:03AM   Printer-friendly

StatCounter reports that Windows 7 has gained almost 10% market share in the last month, just as Windows 10 support is coming to an end. It's clear people aren't ready to switch to Windows 11.

Someone must be wishing really hard, as according to StatCounter, Windows 7 is gaining market share in the year 2025, five years after support for it officially ended. As of this week, Windows 7 is now in use on 9.61% of Windows PCs within StatCounter's pool of data, and that's up from the 3.59% it had just a month ago.

For years, Windows 7 has hovered around 2% market share on StatCounter. After mainstream support ended, the last few holdouts very quickly made the move to Windows 10, but with support for Windows 10 ending now just two weeks away, it looks like many are giving Microsoft's best version of Windows another try.

Of course, StatCounter isn't an entirely accurate measure when it comes to actual usage numbers, but it can give us a rough idea about how the market is trending, and it seems people are not happy with the idea of upgrading to Windows 11 from Windows 10. Windows 7's sudden market share gain is likely a blip, but interesting nonetheless.

Taking a closer look at StatCounter, it appears Windows 11 market share stalled in the last month, maintaining around 48% share. Windows 10 continued to drop, as expected, and is now on just 40% of PCs. While I wouldn't be surprised if some people had experimented with going back to Windows 7 recently, I highly doubt it's a number as high as 9.61%.

[...] Windows 11 failing to gain any market share in the final month before Windows 10's end of support is frankly shocking, and if the numbers are accurate, it should be setting off alarm bells internally at Microsoft. It's clear that much of the market has rejected Windows 11; whether because of its high system requirements or its insistence on AI features, people aren't moving to it.

In recent months, it seems Windows' reputation has fallen off a cliff. With enshittification slowly moving in, a lack of innovative new features and experiences that aren't tied to AI, and monthly updates that consistently introduce unnecessary changes and issues, people are getting tired of Microsoft's antics.


Original Submission