


posted by hubie on Monday January 26, @09:19PM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

In a move that signals a fundamental shift in Apple's relationship with its users, the company is quietly testing a new App Store design that deliberately obscures the distinction between paid advertisements and organic search results. This change, currently being A/B tested on iOS 26.3, represents more than just a design tweak — it's a betrayal of the premium user experience that has long justified Apple's higher prices and walled garden approach.

For years, Apple's App Store has maintained a clear visual distinction between sponsored content and organic search results. Paid advertisements appeared with a distinctive blue background, making it immediately obvious to users which results were promoted content and which were genuine search matches. This transparency wasn't just good design — it was a core part of Apple's value proposition.

Now, that blue background is disappearing. In the new design being tested, sponsored results look virtually identical to organic ones, with a small "Ad" banner next to the app icon serving as the sole differentiator. This change aligns with Apple's December 2025 announcement that App Store search results will soon include multiple sponsored results per query, creating a landscape where advertisements dominate the user experience.

This move places Apple squarely in the company of tech giants who have spent the last decade systematically degrading user experience in pursuit of advertising revenue. Google pioneered this approach, gradually removing the distinctive backgrounds that once made ads easily identifiable in search results. What was once a clear yellow background became increasingly subtle until ads became nearly indistinguishable from organic results.

[...] What makes Apple's adoption of these practices particularly troubling is how it contradicts the company's fundamental value proposition. Apple has long justified its premium pricing and restrictive ecosystem by promising a superior user experience. The company has built its brand on the idea that paying more for Apple products means getting something better — cleaner design, better privacy, less intrusive advertising.

This App Store change represents a direct violation of that promise. Users who have paid premium prices for iPhones and iPads are now being subjected to the same deceptive advertising practices they might encounter on free, ad-supported platforms. The implicit contract between Apple and its users — pay more, get a better experience — is being quietly rewritten.

[...] Apple's motivation for this change is transparently financial. The company's services revenue, which includes App Store advertising, has become increasingly important as iPhone sales growth has plateaued. Advertising revenue offers attractive margins and recurring income streams that hardware sales cannot match.

By making advertisements less distinguishable from organic results, Apple can likely increase click-through rates significantly. Users who would normally skip obvious advertisements might click on disguised ones, generating more revenue per impression. This short-term revenue boost comes at the cost of long-term user trust and satisfaction.

The timing is also significant. As Apple faces increasing regulatory pressure around its App Store practices, the company appears to be maximizing revenue extraction while it still can. This suggests a defensive posture rather than confidence in the sustainability of current business models.

[...] The technical implementation of these changes reveals their deliberate nature. Rather than simply removing the blue background, Apple has carefully redesigned the entire search results interface to create maximum visual similarity between ads and organic results. Font sizes, spacing, and layout elements have been adjusted to eliminate distinguishing characteristics.

[...] This App Store change represents more than just a design decision — it's a signal about Apple's evolving priorities and business model. The company appears to be transitioning from a hardware-first approach that prioritizes user experience to a services-first model that prioritizes revenue extraction.

[...] For Apple, the challenge now is whether to continue down this path or respond to user concerns. The company has historically been responsive to user feedback, particularly when it threatens the brand's premium positioning. However, the financial incentives for advertising revenue are substantial and may override user experience considerations.

Users have several options for responding to these changes. They can provide feedback through Apple's official channels, adjust their App Store usage patterns to account for increased advertising, or consider alternative platforms where available.

Developers face a more complex situation. While the changes may increase the cost of app discovery through advertising, they also create new opportunities for visibility. The long-term impact on the app ecosystem remains to be seen.

[...] As one community member aptly summarized: "The enshittification of Apple is in full swing." Whether this proves to be a temporary misstep or a permanent shift in Apple's priorities remains to be seen, but the early signs are deeply concerning for anyone who values transparent, user-focused design.


Original Submission

posted by hubie on Monday January 26, @04:38PM   Printer-friendly
from the verbosive-WinDoze dept.

I'm not a big fan of Power(s)Hell, but British tech site The Register reports that its creator, Jeffrey Snover, is retiring after moving from M$ to G$ a few years ago.

In that write-up, Snover details how the original name for Cmdlets was Functional Units, or FUs:
  "This abbreviation reflected the Unix smart-ass culture I was embracing at the time. Plus I was developing this in a hostile environment, and my sense of diplomacy was not yet fully operational."

Reading that sentence, it would seem his "sense of diplomacy" has eventually come online. 😉

While he didn't start at M$ until the late 90s, that kind of thinking would have served him well in an old Usenet Flame War.

Happy retirement, Jeffrey!

(IMHO, maybe he’ll do something fun with his time, like finally embrace bash and python.)


Original Submission

posted by hubie on Monday January 26, @11:55AM   Printer-friendly

https://www.extremetech.com/internet/psa-starlink-now-uses-customers-personal-data-for-ai-training

Starlink recently updated its Privacy Policy to explicitly allow it to share personal customer data with companies to train AI models. This appears to have been done without any warning to customers (I certainly didn't get any email about it), though some eagle-eyed users noticed a new opt-out toggle on their profile page.

The updated Privacy Policy buries the AI training declaration at the end of its existing data sharing policies. It reads:

"We may share your personal information with our affiliates, service providers, and third-party collaborators for the purposes we outline above (e.g., hosting and maintaining our online services, performing backup and storage services, processing payments, transmitting communications, performing advertising or analytics services, or completing your privacy rights requests) and, unless you opt out, for training artificial intelligence models, including for their own independent purposes."

SpaceX doesn't make it clear which AI companies or AI models it might be involved in training, though xAI's Grok seems the most likely, given that it is owned and operated by SpaceX CEO Elon Musk.

Elsewhere in Starlink's Privacy Policy, it also discusses using personal data to train its own AI models, stating:

"We may use your personal information: [...] to train our machine learning or artificial intelligence models for the purposes outlined in this policy."

Unfortunately, there doesn't appear to be any opt-out option for that. I asked the Grok support bot whether opting out with the toggle would prevent Starlink from using data for AI training, too, and it said it would, but I'm not sure I believe it.

How to Opt Out of Starlink AI Training

To opt out of Starlink's data sharing for AI training purposes, navigate to the Starlink website and log in to your account. On your account page, select Settings from the left-hand menu, then select the Edit Profile button in the top-right of the window.

In the window that appears, look to the bottom, where you should see a toggle box labeled "Share personal data with Starlink's trusted collaborators to train AI models."

Select the box to toggle the option off, then select the Save button. You'll be prompted to verify your identity through an email or SMS code, but once you've done that, Starlink shouldn't be able to share your data with AI companies anymore.

At the time of writing, it doesn't appear you can change this setting in the Starlink app.


Original Submission

posted by hubie on Monday January 26, @07:11AM   Printer-friendly
from the snap-to-it dept.

https://distrowatch.com/dwres.php?resource=showheadline&story=20123

Alan Pope, a former Ubuntu contributor and current Snap package maintainer, has raised a concern on his blog about attackers sneaking malicious Snap packages into Canonical's package repository.

"There's a relentless campaign by scammers to publish malware in the Canonical Snap Store. Some gets caught by automated filters, but plenty slips through. Recently, these miscreants have changed tactics - they're now registering expired domains belonging to legitimate snap publishers, taking over their accounts, and pushing malicious updates to previously trustworthy applications. This is a significant escalation."

Details on the attack are covered in Pope's blog post.


Original Submission

posted by hubie on Monday January 26, @03:26AM   Printer-friendly
from the who's-responsible-when-AI-crashes-your-system? dept.

Arthur T Knackerbracket has processed the following story:

UK financial regulators must conduct stress testing to ensure businesses are ready for AI-driven market shocks, MPs have warned.

The Bank of England, Financial Conduct Authority, and HM Treasury risk exposing consumers and the financial system to "potentially serious harm" by taking a wait-and-see approach, according to a House of Commons Treasury Committee report published today.

During its hearings, the committee found a troubling lack of accountability and understanding of the risks involved in spreading AI across the financial services sector.

David Geale, the FCA's Executive Director for Payments and Digital Finance, said individuals within financial services firms were "on the hook" for harm caused to consumers through AI. Yet trade association Innovate Finance testified that management in financial institutions struggled to assess AI risk. The "lack of explainability" of AI models directly conflicted with the regime's requirement for senior managers to demonstrate they understood and controlled risks, the committee argued.

The committee said there should be clear lines of accountability when AI systems produce harmful or unfair outcomes. "For instance, if an AI system unfairly denies credit to a customer in urgent need – such as for medical treatment – there must be clarity on who is responsible: the developers, the institution deploying the model, or the data providers."

[...] Financial services is one of the UK's most important economic sectors. In 2023, it contributed £294 billion to the economy [PDF], or around 13 percent of the gross value added of all economic sectors.

However, successive governments have adopted a light-touch approach to AI regulation for fear of discouraging investment.

Treasury Select Committee chair Dame Meg Hillier said: "Firms are understandably eager to try and gain an edge by embracing new technology, and that's particularly true in our financial services sector, which must compete on the global stage.

"Based on the evidence I've seen, I do not feel confident that our financial system is prepared if there was a major AI-related incident and that is worrying. I want to see our public financial institutions take a more proactive approach to protecting us against that risk."


Original Submission

posted by jelizondo on Sunday January 25, @10:36PM   Printer-friendly

[Source]: Microsoft Gave FBI a Set of BitLocker Encryption Keys to Unlock Suspects' Laptops

Microsoft provided the FBI with the recovery keys to unlock encrypted data on the hard drives of three laptops as part of a federal investigation, Forbes reported on Friday.

Many modern Windows computers rely on full-disk encryption, called BitLocker, which is enabled by default. This type of technology should prevent anyone except the device owner from accessing the data if the computer is locked and powered off.

But, by default, BitLocker recovery keys are uploaded to Microsoft's cloud, allowing the tech giant — and by extension law enforcement — to access them and use them to decrypt drives encrypted with BitLocker, as with the case reported by Forbes.

[...] Microsoft told Forbes that the company sometimes provides BitLocker recovery keys to authorities, having received an average of 20 such requests per year.

[Also Covered By]: TechCrunch


Original Submission

posted by jelizondo on Sunday January 25, @05:53PM   Printer-friendly
from the alfred-hitchcock-lover dept.

https://arstechnica.com/features/2026/01/this-may-be-the-grossest-eye-pic-ever-but-the-cause-is-whats-truly-horrifying/

A generally healthy 63-year-old man in the New England area went to the hospital with a fever, cough, and vision problems in his right eye. His doctors eventually determined that a dreaded hypervirulent bacteria—which is rising globally—was ravaging several of his organs, including his brain.
[...]
At the hospital, doctors took X-rays and computed tomography (CT) scans of his chest and abdomen. The images revealed over 15 nodules and masses in his lungs. But that's not all they found. The imaging also revealed a mass in his liver that was 8.6 cm in diameter (about 3.4 inches). Lab work pointed toward an infection, so doctors admitted him to the hospital.
[...]
On his third day, he woke up with vision loss in his right eye, which was so swollen he couldn't open it. Magnetic resonance imaging (MRI) revealed another surprise: There were multiple lesions in his brain.
[...]
In a case report in this week's issue of the New England Journal of Medicine, doctors explained how they solved the case and treated the man.
[...]
There was one explanation that fit the condition perfectly: hypervirulent Klebsiella pneumoniae or hvKP.
[...]
An infection with hvKP—even in otherwise healthy people—is marked by metastatic infection. That is, the bacteria spreads throughout the body, usually starting with the liver, where it creates a pus-filled abscess. It then goes on a trip through the bloodstream, invading the lungs, brain, soft tissue, skin, and the eye (endogenous endophthalmitis). Putting it all together, the man had a completely typical clinical case of an hvKP infection.

Still, definitively identifying hvKP is tricky. Mucus from the man's respiratory tract grew a species of Klebsiella, but there's not yet a solid diagnostic test to differentiate hvKP from the classical variety.
[...]
it was too late for the man's eye. By his eighth day in the hospital, the swelling had gotten extremely severe.
[...]
Given the worsening situation—which was despite the effective antibiotics—doctors removed his eye.


Original Submission

posted by jelizondo on Sunday January 25, @01:02PM   Printer-friendly

OpenAI has decided to incorporate advertisements into its ChatGPT service for free users and those on the lower-tier Go plan, a shift announced just days ago:

The company plans to begin testing these ads in the United States by the end of January 2026, placing them at the bottom of responses where they match the context of the conversation. Officials insist the ads will be clearly marked, optional to personalize, and kept away from sensitive subjects. Higher-paying subscribers on Plus, Pro, Business, and Enterprise levels will remain ad-free, preserving a premium experience for those willing to pay.

This development comes as OpenAI grapples with enormous operational costs, including a staggering $1.4 trillion infrastructure expansion to keep pace with demand. Annualized revenue reached $20 billion in 2025, a tenfold increase from two years prior, yet the burn rate on computing power and development continues to outstrip income from subscriptions alone. Analysts like Mark Mahaney from Evercore ISI project that if executed properly, ads could bring in $25 billion annually by 2030, providing a vital lifeline for sustainability.

[...] The timing of OpenAI's announcement reveals underlying pressures in the industry. As one observer put it, "OpenAI Moves First on Ads While Google Waits. The Timing Tells You Everything." With ChatGPT boasting 800 million weekly users compared to Gemini's 650 million monthly active ones, OpenAI can't afford to lag in revenue generation. Delaying could jeopardize the company's future, according to tech analyst Ben Thompson, who warned that postponing ads "risks the entire company."

[...] From a broader view, this reflects how Big Tech giants are reshaping technology to serve their bottom lines, often at the expense of individual freedoms. If ads become the norm in AI chatbots, it might accelerate a divide between those who can afford untainted access and those stuck with sponsored content. Critics argue this model echoes past controversies, like Meta's data scandals, fueling distrust in how personal interactions are commodified.

Also discussed by Bruce Schneier.

Related: Google Confirms AI Search Will Have Ads, but They May Look Different


Original Submission

posted by jelizondo on Sunday January 25, @08:30AM   Printer-friendly
from the as-the-years-go-by-I-am-sinking dept.

Human-driven land sinking now outpaces sea-level rise in many of the world's major delta systems, threatening more than 236 million people:

A study published on Jan. 14 in Nature shows that many of the world's major river deltas are sinking faster than sea levels are rising, potentially affecting hundreds of millions of people in these regions.

The major causes are groundwater withdrawal, reduced river sediment supply, and urban expansion.

[...] The findings show that in nearly every river delta examined, at least some portion is sinking faster than the sea is rising. Sinking land, or subsidence, already exceeds local sea-level rise in 18 of the 40 deltas, heightening near-term flood risk for more than 236 million people.

[...] Deltas experiencing concerning rates of elevation loss include the Mekong, Nile, Chao Phraya, Ganges–Brahmaputra, Mississippi, and Yellow River systems.

"In many places, groundwater extraction, sediment starvation, and rapid urbanization are causing land to sink much faster than previously recognized," Ohenhen said.

Some regions are sinking at more than twice the current global rate of sea-level rise.

"Our results show that subsidence isn't a distant future problem — it is happening now, at scales that exceed climate-driven sea-level rise in many deltas," said Shirzaei, co-author and director of Virginia Tech's Earth Observation and Innovation Lab.

Groundwater depletion emerged as the strongest overall predictor of delta sinking, though the dominant driver varies regionally.

"When groundwater is over-pumped or sediments fail to reach the coast, the land surface drops," said Werth, who co-led the groundwater analysis. "These processes are directly linked to human decisions, which means the solutions also lie within our control."

Journal Reference: Ohenhen, L.O., Shirzaei, M., Davis, J.L. et al. Global subsidence of river deltas. Nature (2026). https://doi.org/10.1038/s41586-025-09928-6


Original Submission

posted by jelizondo on Sunday January 25, @03:38AM   Printer-friendly

https://phys.org/news/2026-01-greenwashing-false-stability-companies.html

Companies engaging in 'greenwashing' to appear more favorable to investors don't achieve durable financial stability in the long term, according to a new Murdoch University study.

The paper, "False Stability? How Greenwashing Shapes Firm Risk in the Short and Long Run," is published in the Journal of Risk and Financial Management.

Globally, there has been a rise in Environmental Social Governance (ESG) investing, where lenders prioritize a firm's sustainability performance when allocating capital. As a result, ESG scores have become an important measure for investors when assessing risk.

"However, ESG scores do not always reflect a firm's true environmental performance," said Tanvir Bhuiyan, associate lecturer in finance at the Murdoch Business School.

Greenwashing refers to the gap between what firms claim about their environmental performance and how they actually perform.

"In simple terms, it is when companies talk green but do not act green," Dr. Bhuiyan said. "Firms do this to gain reputational benefits, attract investors, and appear lower-risk and more responsible without necessarily reducing their carbon footprint."

The study examined Australian companies from 2014 to 2023 to understand how greenwashing affects financial risk and stability. To measure whether companies were exaggerating their sustainability performance, the researchers created a quantitative framework that directly compares ESG scores with carbon emissions, allowing them to identify when sustainability claims were inflated.

They then analyzed how greenwashing affected a company's stability, by looking at its volatility in the stock market.

According to Dr. Bhuiyan, the key finding from the research was that greenwashing enhances firms' stability in the short term, but that effect fades away over time.

"In the short term, firms that exaggerate their ESG credentials appear less risky in the market, as investors interpret strong ESG signals as a sign of safety," he said.

"However, this benefit fades over time. When discrepancies between ESG claims and actual emissions become clearer, the market corrects its earlier optimism, and the stabilizing effect of greenwashing weakens."

Dr. Ariful Hoque, senior lecturer in finance at the Murdoch Business School, who also worked on the study, said they also found that greenwashing was a persistent trend for Australian firms from 2014–2022.

"On average, firms consistently reported ESG scores that were higher than what their actual carbon emissions would justify," Dr. Hoque said.

However, in 2023, he said there was a noticeable decline in greenwashing, "likely reflecting stronger ASIC enforcement, mandatory climate-risk disclosures policy starting from 2025, and greater investor scrutiny."

"For regulators, our results support the push for tighter ESG disclosure standards and stronger anti-greenwashing enforcement, as misleading sustainability claims distort risk pricing," he said.

"For investors, the findings highlight the importance of looking beyond headline ESG scores and examining whether firms' environmental claims match their actual emissions.

"For companies, this research indicates that greenwashing may buy short-term credibility, but genuine emissions reduction and transparent reporting are far more effective for managing long-term risk."

More information:

Rahma Mirza et al, False Stability? How Greenwashing Shapes Firm Risk in the Short and Long Run, Journal of Risk and Financial Management (2025). DOI: 10.3390/jrfm18120691


Original Submission

posted by janrinok on Saturday January 24, @10:54PM   Printer-friendly
from the written-in-stone dept.

A stunning discovery in a Moroccan cave is forcing scientists to reconsider the narrative of human origins. Unearthed from a site in Casablanca, 773,000-year-old fossils display a perplexing blend of ancient and modern features, suggesting that key traits of our species emerged far earlier and across a wider geographic area than previously believed:

The remains, found in the Grotte à Hominidés cave, include lower jawbones from two adults and a toddler, along with teeth, a thigh bone and vertebrae. The thigh bone bears hyena bite marks, indicating the individual may have been prey. The fossils present a mosaic: the face is relatively flat and gracile, resembling later Homo sapiens, while other features like the brow ridge and overall skull shape remain archaic, akin to earlier Homo species.

This mix of characteristics places the population at a critical evolutionary juncture. Paleoanthropologist Jean-Jacques Hublin, lead author of the study, stated, "I would be cautious about labeling them as 'the last common ancestor,' but they are plausibly close to the populations from which later African H. sapiens and Eurasian Neanderthal and Denisovan lineages ultimately emerged."

[...] The find directly challenges the traditional "out-of-Africa" model, which holds that anatomically modern humans evolved in Africa around 200,000 years ago before migrating and replacing other hominin species. Instead, it supports a more complex picture where early human populations left Africa well before fully modern traits had evolved, with differentiation happening across continents.

"The fossils show a mosaic of primitive and derived traits, consistent with evolutionary differentiation already underway during this period, while reinforcing a deep African ancestry for the H. sapiens lineage," Hublin added.

Detailed analysis reveals the nuanced transition. One jaw shows a long, low shape similar to H. erectus, but its teeth and internal features resemble both modern humans and Neanderthals. The right canine is slender and small, akin to modern humans, while some incisor roots are longer, closer to Neanderthals. The molars present a unique blend, sharing traits with North African teeth, the Spanish species H. antecessor and archaic African H. erectus.



Original Submission

posted by janrinok on Saturday January 24, @06:11PM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

[...] In an unexpected turn of events, Micron announced plans to buy Powerchip Semiconductor Manufacturing Corporation's (PSMC) P5 fabrication site in Tongluo, Miaoli County, Taiwan, for a total cash consideration of $1.8 billion. To a large degree, the transaction marks an evolution of Micron's long-term 'technology-for-capacity' strategy, which it has used for decades. This also signals that DRAM fabs are now so capital-intensive that it is no longer viable for companies like PSMC to build them and get process technologies from companies like Micron. The purchase also comes against the backdrop of the ongoing DRAM supply squeeze, with data centers set to consume 70% of all memory chips made in 2026.

"This strategic acquisition of an existing cleanroom complements our current Taiwan operations and will enable Micron to increase production and better serve our customers in a market where demand continues to outpace supply," said Manish Bhatia, executive vice president of global operations at Micron Technology. "The Tongluo fab's close proximity to Micron's Taichung site will enable synergies across our Taiwan operations."

The deal between Micron and PSMC includes 300,000 square feet of existing 300mm cleanroom space, which will greatly expand Micron's production footprint in Taiwan. By today's standards, a 300,000 square foot cleanroom is a relatively large one, but it will be dwarfed by Micron's next-generation DRAM campus in New York, which will feature four cleanrooms of 600,000 square feet each. However, the first of those fabs will only come online in the late 2020s or in the early 2030s.

The transaction is expected to close by Q2 2026, pending receipt of all necessary approvals. After closing, Micron will gradually equip and ramp the site for DRAM production, with meaningful wafer output starting in the second half of 2027.

The agreement also establishes a long-term strategic partnership under which PSMC will support Micron with assembly services, while Micron will assist PSMC's legacy DRAM portfolio.

While the P5 site in Tongluo isn't producing memory in high volumes today, the change of ownership and inevitable upgrade of the fab itself will have an impact on global DRAM supply, which is good news for a segment that is experiencing unprecedented demand. While it is important that Micron is set to buy a production facility in Taiwan, it is even more important that the transaction marks an end to its technology-for-capacity approach to making memory on the island. In the past, instead of building large numbers of new greenfield fabs in Taiwan, Micron partnered with local foundries (most notably PSMC, but also with Inotera and Nanya) and provided advanced DRAM process technology in exchange for wafer capacity, manufacturing services, or fab access.

This approach allowed Micron to expand output faster and with less capital risk, leveraged Taiwan's mature 300mm manufacturing ecosystem, and avoided duplicating the front-end infrastructure, which was already in place.

However, it looks like the traditional technology-for-capacity model — which worked well in the 90nm – 20nm-class node era — no longer works. It worked well when DRAM fabs cost a few billion dollars, when process ramps were straightforward, and when partners could justify their capital risks in exchange for technologies (which cost billions in R&D investments) and stable wafer demand.

Today’s advanced DRAM fabs require $15 – $25 billion or more of upfront investment. This would go into equipment like pricey EUV scanners, as well as longer and riskier yield ramps. In that environment, a partner running someone else's IP absorbs massive CapEx and execution risk while getting limited advantages, which makes the economics increasingly unattractive: after all, if you can invest over $20 billion in a fab, you can certainly invest $2 billion in R&D.

In recent years, Micron's behavior has reflected this shift in thinking. Early technology-for-capacity deals helped it scale quickly, but once fabs crossed a certain cost and complexity threshold, Micron had to move on and own fabs instead of renting capacity. This is reflected in moves like its Elpida acquisition in 2013, where the company purchased a bankrupt memory maker to secure the company's capacity. This was followed up in 2016 with the Inotera acquisition, and now with PSMC.

[...] Now, the American company will own the site and invest in its transition to its latest process technologies.


Original Submission

posted by janrinok on Saturday January 24, @01:25PM   Printer-friendly
from the splish-splash-I-was-taking-a-bath dept.

Limescale deposits in wells, pipes, and bathing facilities provide information about Pompeii's ancient water supply:

The city of Pompeii was buried by the eruption of Mount Vesuvius in AD 79. Researchers at Johannes Gutenberg University Mainz (JGU) have now reconstructed the city's water supply system based on carbonate deposits – particularly the transition from wells to an aqueduct. The results were published yesterday in the journal PNAS. "The baths were originally supplied by deep wells with water-lifting devices, and the hygienic conditions in them were far from ideal," says Dr. Gül Sürmelihindi from the Institute of Geosciences at JGU, first author of the publication. "Over time, however, the water-lifting devices were upgraded through technological developments before being replaced by an aqueduct in the first century AD, which provided more water and allowed more frequent refreshment of water for bathing."

To reconstruct the ancient water supply, Sürmelihindi and her colleague Professor Cees Passchier used isotope analysis to examine carbonate deposits that had formed in various components of the city's water infrastructure – such as the aqueduct, water towers, well shafts, and the pools of the public baths. "We found completely different patterns of stable isotopes and trace elements in the carbonates from the aqueduct and in those from the wells," says Sürmelihindi. Based on these different geochemical characteristics, the team was able to determine the origin of the bathing water and draw conclusions about Pompeii's water management system and quality changes in provided water. They discovered that the wells tapped into highly mineralized groundwater from volcanic deposits, which was not ideal for drinking purposes. This agrees well with what was previously known: during the reign of Augustus, the aqueduct was built in Pompeii, significantly increasing the amount of available water for bathing and providing drinking water.

"In the so-called Republican Baths – the oldest public bathing facilities in the city, dating back to pre-Roman times around 130 BC – we were able to prove through isotope analysis that the bath water was provided from wells, and not renewed regularly. Therefore, the hygienic condition did not meet the high hygienic standards usually attributed to the Romans," explains Sürmelihindi. Probably, the water was only changed once daily, which, according to Sürmelihindi, would not be surprising: "After all, the baths were supplied by a water-lifting machine, powered by slaves via a kind of treadwheel."

The researchers also found lead, zinc, and copper peaks in the anthropogenic carbonate deposits, indicating heavy-metal contamination of the bath water. This suggests that boilers and water pipes were replaced, which increased the heavy metal concentrations. An increase in stable oxygen isotopes also shows that the pools in the Republican Baths provided warmer water after the renovation.

The researchers also found peculiar, cyclic patterns in the carbon isotope ratio of carbonate from the wells. According to Passchier, a possible cause could lie in the fluctuating amount of volcanic carbon dioxide in the groundwater – this cyclicity may provide information on the activity of Mount Vesuvius long before the AD 79 eruption.

Journal Reference: G. Sürmelihindi et al., Seeing Roman life through water: Exploring Pompeii's public baths via carbonate deposits, PNAS, 12 January 2026. DOI: 10.1073/pnas.2517276122


Original Submission

posted by janrinok on Saturday January 24, @08:42AM   Printer-friendly

I came across a very interesting social media post by John Carlos Baez about a paper published a few weeks ago that showed you can build a universal computation machine using a single billiard ball on a carefully crafted table. According to one of the paper's authors (Eva Miranda):

With Isaac Ramos, we show that 2D billiard systems are Turing complete, implying the existence of undecidable trajectories in physically natural models from hard-sphere gases to celestial mechanics.
Determinism ≠ predictability.

From Baez:

More precisely: you can create a computer that can run any program, using just a single point moving frictionlessly in a region of the plane and bouncing off the walls elastically.

Since the halting problem is undecidable, this means there are some yes-or-no questions about the eventual future behavior of this point that cannot be settled in a finite time by any computer program.

This is true even though the point's motion is computable to arbitrary accuracy for any given finite time. In fact, since the methodology here does *not* exploit the chaos that can occur for billiards on certain shaped tables, it's not even one of those cases where the point's motion is computable in principle but your knowledge of the initial conditions needs to be absurdly precise.
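Schematically (my gloss, not the paper's notation), the Turing-completeness construction turns halting into reachability: for a Turing machine M one can build a table T_M and pick an initial condition (x_0, v_0) such that

\[
M \text{ halts} \iff \exists\, t \ge 0 :\; \varphi^{T_M}_t(x_0, v_0) \in R_{\mathrm{halt}},
\]

where $\varphi^{T_M}_t$ is the billiard flow on $T_M$ and $R_{\mathrm{halt}}$ is a designated target region. Since halting is undecidable, no algorithm can decide the right-hand side for all inputs, even though the trajectory itself can be computed to any desired accuracy over any fixed time span.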

Achieving Turing completeness using billiards goes back to the early 80s, with a paper by Fredkin and Toffoli that established the idea of "Conservative Logic" (also mentioned by Richard Feynman in his Feynman Lectures on Computation). That system, however, used the interactions of multiple billiard balls, whereas this paper shows you only need one (if you carefully lay out the edges of your table).

The Baez link has some very interesting comments, including from Eva Miranda.


Original Submission

posted by janrinok on Saturday January 24, @04:01AM   Printer-friendly
from the cloudflop-again dept.

Arthur T Knackerbracket has processed the following story:

On January 8, 2026, a seemingly innocuous code change at Cloudflare triggered a cascade of DNS resolution failures across the internet, affecting millions of users worldwide. The culprit wasn't a cyberattack, server outage, or configuration error — it was something far more subtle: the order in which DNS records appeared in responses from 1.1.1.1, one of the world's most popular public DNS resolvers.

[...] The story begins on December 2, 2025, when Cloudflare engineers introduced what appeared to be a routine optimization to their DNS caching system. The change was designed to reduce memory usage — a worthy goal for infrastructure serving millions of queries per second. After testing in their development environment for over a month, the change began its global rollout on January 7, 2026.

By January 8 at 17:40 UTC, the update had reached 90% of Cloudflare's DNS servers. Within 39 minutes, the company had declared an incident as reports of DNS resolution failures poured in from around the world. The rollback began immediately, but it took another hour and a half to fully restore service.

The affected timeframe was relatively short — less than two hours from incident declaration to resolution — but the impact was significant. Users across multiple platforms and operating systems found themselves unable to access websites and services that relied on CNAME records, a fundamental building block of modern DNS infrastructure.

To understand what went wrong, it's essential to grasp how DNS CNAME (Canonical Name) records work. When you visit a website like www.example.com, your request might follow a chain of aliases before reaching the final destination, for example:

  www.example.com    CNAME    cdn.example.com
  cdn.example.com    A        198.51.100.1

Each step in this chain has its own Time-To-Live (TTL) value, indicating how long the record can be cached. When some records in the chain expire while others remain valid, DNS resolvers like 1.1.1.1 can optimize by only resolving the expired portions and combining them with cached data. This optimization is where the trouble began.

The problematic change was deceptively simple. Previously, when merging cached CNAME records with newly resolved data, Cloudflare's code created a new list and placed CNAME records first:

let mut answer_rrs = Vec::with_capacity(entry.answer.len() + self.records.len());
answer_rrs.extend_from_slice(&self.records); // CNAMEs first
answer_rrs.extend_from_slice(&entry.answer); // Then A/AAAA records

To save memory allocations, engineers changed this to append CNAMEs to the existing answer list. This seemingly minor optimization had a profound consequence: CNAME records now sometimes appeared after the final resolved answers instead of before them.
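The post-change code isn't reproduced here, but the effect it describes is easy to demonstrate. The following small, self-contained Rust toy (illustrative names only, not Cloudflare's actual code) shows how reusing the existing answer list flips the ordering:

// Toy illustration of the two merge strategies described above.
// The record strings are placeholders, not real resolver data structures.
fn main() {
    let cnames = vec!["www.example.com CNAME cdn.example.com"];
    let answers = vec!["cdn.example.com A 198.51.100.1"];

    // Before the change: allocate a fresh Vec and place the CNAMEs first.
    let mut before = Vec::with_capacity(cnames.len() + answers.len());
    before.extend_from_slice(&cnames);
    before.extend_from_slice(&answers);

    // After the change: reuse the existing answer list and append the CNAMEs,
    // saving an allocation but pushing the CNAMEs to the end.
    let mut after = answers.clone();
    after.extend_from_slice(&cnames);

    println!("before: {:?}", before); // CNAME first, then the A record
    println!("after:  {:?}", after);  // A record first, then the CNAME
}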

The reason this change caused widespread failures lies in how many DNS client implementations process responses. Some clients, including the widely-used getaddrinfo function in glibc (the GNU C Library used by most Linux systems), parse DNS responses sequentially while tracking the expected record name.

When processing a response in the correct order:

  • Find records for www.example.com
  • Encounter www.example.com CNAME cdn.example.com
  • Update expected name to cdn.example.com
  • Find cdn.example.com A 198.51.100.1
  • Success!

But when CNAMEs appear after A records:

  • Find records for www.example.com
  • Ignore cdn.example.com A 198.51.100.1 (doesn't match expected name)
  • Encounter www.example.com CNAME cdn.example.com
  • Update expected name to cdn.example.com
  • No more records found — resolution fails

This sequential parsing approach, while seemingly fragile, made sense when it was implemented. It's efficient, requires minimal memory, and worked reliably for decades because most DNS implementations naturally placed CNAME records first.
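A minimal sketch of such a single-pass parser (written in Rust for illustration; glibc's getaddrinfo is C and considerably more involved) makes the ordering dependence concrete:

// Hypothetical single-pass walk over the answer section, tracking the name
// expected next. Records that don't match the expected name when first seen
// are skipped and never revisited.
#[derive(Clone, Copy)]
struct Rr<'a> { name: &'a str, rtype: &'a str, data: &'a str }

fn resolve<'a>(answers: &[Rr<'a>], query: &str) -> Option<&'a str> {
    let mut expected = query.to_string();
    for rr in answers {
        if rr.name != expected.as_str() {
            continue;
        }
        match rr.rtype {
            "CNAME" => expected = rr.data.to_string(), // follow the alias
            "A" | "AAAA" => return Some(rr.data),      // final address found
            _ => {}
        }
    }
    None // ran out of records without finding an address
}

fn main() {
    let cname = Rr { name: "www.example.com", rtype: "CNAME", data: "cdn.example.com" };
    let addr  = Rr { name: "cdn.example.com", rtype: "A",     data: "198.51.100.1" };
    // CNAME first: the alias is followed, then the address matches.
    assert_eq!(resolve(&[cname, addr], "www.example.com"), Some("198.51.100.1"));
    // Address first: it is skipped before the alias is known, so resolution fails.
    assert_eq!(resolve(&[addr, cname], "www.example.com"), None);
}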

The impact of this change was far-reaching but unevenly distributed. The primary victims were systems using glibc's getaddrinfo function, which includes most traditional Linux distributions that don't use systemd-resolved as an intermediary caching layer.

Perhaps most dramatically affected were certain Cisco ethernet switches. Three specific models experienced spontaneous reboot loops when they received responses with reordered CNAMEs from 1.1.1.1. Cisco has since published a service document describing the issue, highlighting how deeply this problem penetrated into network infrastructure.

Interestingly, many modern systems were unaffected. Windows, macOS, iOS, and Android all use different DNS resolution libraries that handle record ordering more flexibly. Even on Linux, distributions using systemd-resolved were protected because the local caching resolver reconstructed responses according to its own ordering logic.

At the heart of this incident lies a fundamental ambiguity in RFC 1034, the 1987 specification that defines much of DNS behavior.

The phrase "possibly preface" suggests that CNAME records should appear before other records, but the language isn't normative. RFC 1034 predates RFC 2119 (published in 1997), which standardized the use of keywords like "MUST" and "SHOULD" to indicate requirements versus suggestions.

Further complicating matters, RFC 1034 also states that "the difference in ordering of the RRs in the answer section is not significant," though this comment appears in the context of a specific example comparing two A records, not different record types.

This ambiguity has persisted for nearly four decades, with different implementers reaching different conclusions about what the specification requires.

One of the most puzzling aspects of this incident is how it survived testing for over a month without detection. The answer reveals the complexity of modern internet infrastructure and the challenges of comprehensive testing.

Cloudflare's testing environment likely used systems that weren't affected by the change. Most modern operating systems handle DNS record ordering gracefully, and many Linux systems use systemd-resolved, which masks the underlying issue. The specific combination of factors needed to trigger the problem — direct use of glibc's resolver with CNAME chains from 1.1.1.1 — may not have been present in their test scenarios.

This highlights a broader challenge in infrastructure testing: the internet's diversity means that edge cases can have mainstream impact. What works in a controlled testing environment may fail when exposed to the full complexity of real-world deployments.

The DNS community's response to this incident has been swift and constructive. Cloudflare has committed to maintaining CNAME-first ordering in their responses and has authored an Internet-Draft proposing to clarify the ambiguous language in the original RFC.

The proposed specification would explicitly require CNAME records to appear before other record types in DNS responses, codifying what has been common practice for decades. If adopted, this would prevent similar incidents in the future by removing the ambiguity that allowed different interpretations.

The incident also sparked broader discussions about DNS implementation robustness. While Cloudflare's change exposed fragility in some client implementations, it also highlighted the importance of defensive programming in critical infrastructure components.
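As an example of what a more defensive client could do, here is a hedged sketch (reusing the illustrative Rr type from the earlier snippet; not taken from any real resolver) that indexes the answer section by owner name before following the chain, so record order no longer matters:

use std::collections::HashMap;

// Order-insensitive variant: group records by owner name first, then follow
// the CNAME chain from the queried name until an address is found.
fn resolve_robust<'a>(answers: &[Rr<'a>], query: &str) -> Option<&'a str> {
    let mut by_name: HashMap<&str, Vec<&Rr<'a>>> = HashMap::new();
    for rr in answers {
        by_name.entry(rr.name).or_default().push(rr);
    }
    let mut name = query;
    for _ in 0..=answers.len() {                 // bound the walk to avoid CNAME loops
        let records = by_name.get(name)?;
        if let Some(addr) = records.iter().find(|rr| rr.rtype == "A" || rr.rtype == "AAAA") {
            return Some(addr.data);              // address found, wherever it appeared
        }
        match records.iter().find(|rr| rr.rtype == "CNAME") {
            Some(alias) => name = alias.data,    // follow the alias and keep looking
            None => return None,
        }
    }
    None
}

With this approach, both orderings from the earlier example resolve to the same address.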

[...] The incident revealed an even deeper complexity: even when CNAME records appear first, their internal ordering can cause problems.

[...] For the broader DNS community, this incident serves as a reminder of the importance of specification clarity and comprehensive testing. As internet infrastructure continues to evolve, identifying and resolving these legacy ambiguities becomes increasingly important.

The incident also highlights the value of diverse DNS resolver implementations. The fact that different resolvers handle record ordering differently provided natural resilience — when one approach failed, others continued working.

The January 8, 2026 DNS incident demonstrates how seemingly minor changes to critical infrastructure can have far-reaching consequences. A memory optimization that moved CNAME records from the beginning to the end of DNS responses triggered failures across multiple platforms and caused network equipment to reboot.

At its core, this was a story about assumptions — assumptions built into 40-year-old specifications, assumptions made by implementers over decades, and assumptions about how systems would behave under different conditions. When those assumptions collided with reality, the result was a brief but significant disruption to internet connectivity.

[...] As Cloudflare's engineers learned, sometimes the order of things matters more than we realize. In the complex world of internet infrastructure, even the smallest details can have the largest consequences.


Original Submission