

posted by jelizondo on Sunday January 25, @10:36PM   Printer-friendly

[Source]: Microsoft Gave FBI a Set of BitLocker Encryption Keys to Unlock Suspects' Laptops

Microsoft provided the FBI with the recovery keys to unlock encrypted data on the hard drives of three laptops as part of a federal investigation, Forbes reported on Friday.

Many modern Windows computers rely on full-disk encryption, called BitLocker, which is enabled by default. This type of technology should prevent anyone except the device owner from accessing the data if the computer is locked and powered off.

But, by default, BitLocker recovery keys are uploaded to Microsoft's cloud, allowing the tech giant — and by extension law enforcement — to access them and use them to decrypt BitLocker-encrypted drives, as in the case reported by Forbes.

[...] Microsoft told Forbes that the company sometimes provides BitLocker recovery keys to authorities, having received an average of 20 such requests per year.

[Also Covered By]: TechCrunch


Original Submission

posted by jelizondo on Sunday January 25, @05:53PM   Printer-friendly
from the alfred-hitchcock-lover dept.

https://arstechnica.com/features/2026/01/this-may-be-the-grossest-eye-pic-ever-but-the-cause-is-whats-truly-horrifying/

A generally healthy 63-year-old man in the New England area went to the hospital with a fever, cough, and vision problems in his right eye. His doctors eventually determined that a dreaded hypervirulent bacteria—which is rising globally—was ravaging several of his organs, including his brain.
[...]
At the hospital, doctors took X-rays and computed tomography (CT) scans of his chest and abdomen. The images revealed over 15 nodules and masses in his lungs. But that's not all they found. The imaging also revealed a mass in his liver that was 8.6 cm in diameter (about 3.4 inches). Lab work pointed toward an infection, so doctors admitted him to the hospital
[...]
On his third day, he woke up with vision loss in his right eye, which was so swollen he couldn't open it. Magnetic resonance imaging (MRI) revealed another surprise: There were multiple lesions in his brain.
[...]
In a case report in this week's issue of the New England Journal of Medicine, doctors explained how they solved the case and treated the man.
[...]
There was one explanation that fit the condition perfectly: hypervirulent Klebsiella pneumoniae or hvKP.
[...]
An infection with hvKP—even in otherwise healthy people—is marked by metastatic infection. That is, the bacteria spreads throughout the body, usually starting with the liver, where it creates a pus-filled abscess. It then goes on a trip through the bloodstream, invading the lungs, brain, soft tissue, skin, and the eye (endogenous endophthalmitis). Putting it all together, the man had a completely typical clinical case of an hvKP infection.

Still, definitively identifying hvKP is tricky. Mucus from the man's respiratory tract grew a species of Klebsiella, but there's not yet a solid diagnostic test to differentiate hvKP from the classical variety.
[...]
it was too late for the man's eye. By his eighth day in the hospital, the swelling had gotten extremely severe
[...]
Given the worsening situation—which was despite the effective antibiotics—doctors removed his eye.


Original Submission

posted by jelizondo on Sunday January 25, @01:02PM   Printer-friendly

OpenAI has decided to incorporate advertisements into its ChatGPT service for free users and those on the lower-tier Go plan, a shift announced just days ago:

The company plans to begin testing these ads in the United States by the end of January 2026, placing them at the bottom of responses where they match the context of the conversation. Officials insist the ads will be clearly marked, optional to personalize, and kept away from sensitive subjects. Higher-paying subscribers on Plus, Pro, Business, and Enterprise levels will remain ad-free, preserving a premium experience for those willing to pay.

This development comes as OpenAI grapples with enormous operational costs, including a staggering $1.4 trillion infrastructure expansion to keep pace with demand. Annualized revenue reached $20 billion in 2025, a tenfold increase from two years prior, yet the burn rate on computing power and development continues to outstrip income from subscriptions alone. Analysts like Mark Mahaney from Evercore ISI project that if executed properly, ads could bring in $25 billion annually by 2030, providing a vital lifeline for sustainability.

[...] The timing of OpenAI's announcement reveals underlying pressures in the industry. As one observer put it, "OpenAI Moves First on Ads While Google Waits. The Timing Tells You Everything." With ChatGPT boasting 800 million weekly users compared to Gemini's 650 million monthly active ones, OpenAI can't afford to lag in revenue generation. Delaying could jeopardize the company's future, according to tech analyst Ben Thompson, who warned that postponing ads "risks the entire company."

[...] From a broader view, this reflects how Big Tech giants are reshaping technology to serve their bottom lines, often at the expense of individual freedoms. If ads become the norm in AI chatbots, it might accelerate a divide between those who can afford untainted access and those stuck with sponsored content. Critics argue this model echoes past controversies, like Meta's data scandals, fueling distrust in how personal interactions are commodified.

Also discussed by Bruce Schneier.

Related: Google Confirms AI Search Will Have Ads, but They May Look Different


Original Submission

posted by jelizondo on Sunday January 25, @08:30AM   Printer-friendly
from the as-the-years-go-by-I-am-sinking dept.

Human-driven land sinking now outpaces sea-level rise in many of the world's major delta systems, threatening more than 236 million people:

A study published on Jan. 14 in Nature shows that many of the world's major river deltas are sinking faster than sea levels are rising, potentially affecting hundreds of millions of people in these regions.

The major causes are groundwater withdrawal, reduced river sediment supply, and urban expansion.

[...] The findings show that in nearly every river delta examined, at least some portion is sinking faster than the sea is rising. Sinking land, or subsidence, already exceeds local sea-level rise in 18 of the 40 deltas, heightening near-term flood risk for more than 236 million people.

[...] Deltas experiencing concerning rates of elevation loss include the Mekong, Nile, Chao Phraya, Ganges–Brahmaputra, Mississippi, and Yellow River systems.

"In many places, groundwater extraction, sediment starvation, and rapid urbanization are causing land to sink much faster than previously recognized," Ohenhen said.

Some regions are sinking at more than twice the current global rate of sea-level rise.

"Our results show that subsidence isn't a distant future problem — it is happening now, at scales that exceed climate-driven sea-level rise in many deltas," said Shirzaei, co-author and director of Virginia Tech's Earth Observation and Innovation Lab.

Groundwater depletion emerged as the strongest overall predictor of delta sinking, though the dominant driver varies regionally.

"When groundwater is over-pumped or sediments fail to reach the coast, the land surface drops," said Werth, who co-led the groundwater analysis. "These processes are directly linked to human decisions, which means the solutions also lie within our control."

Journal Reference: Ohenhen, L.O., Shirzaei, M., Davis, J.L. et al. Global subsidence of river deltas. Nature (2026). https://doi.org/10.1038/s41586-025-09928-6


Original Submission

posted by jelizondo on Sunday January 25, @03:38AM   Printer-friendly

https://phys.org/news/2026-01-greenwashing-false-stability-companies.html

Companies engaging in 'greenwashing' to appear more favorable to investors don't achieve durable financial stability in the long term, according to a new Murdoch University study.

The paper, "False Stability? How Greenwashing Shapes Firm Risk in the Short and Long Run," is published in the Journal of Risk and Financial Management.

Globally, there has been a rise in Environmental Social Governance (ESG) investing, where lenders prioritize a firm's sustainability performance when allocating capital. As a result, ESG scores have become an important measure for investors when assessing risk.

"However, ESG scores do not always reflect a firm's true environmental performance," said Tanvir Bhuiyan, associate lecturer in finance at the Murdoch Business School.

Greenwashing refers to the gap between what firms claim about their environmental performance and how they actually perform.

"In simple terms, it is when companies talk green but do not act green," Dr. Bhuiyan said. "Firms do this to gain reputational benefits, attract investors, and appear lower-risk and more responsible without necessarily reducing their carbon footprint."

The study examined Australian companies from 2014 to 2023 to understand how greenwashing affects financial risk and stability. To determine whether companies were exaggerating their sustainability performance, the researchers created a quantitative framework that directly compares ESG scores with carbon emissions, allowing them to identify when sustainability claims were inflated.

They then analyzed how greenwashing affected a company's stability, by looking at its volatility in the stock market.

According to Dr. Bhuiyan, the key finding from the research was that greenwashing enhances firms' stability in the short term, but that effect fades away over time.

"In the short term, firms that exaggerate their ESG credentials appear less risky in the market, as investors interpret strong ESG signals as a sign of safety," he said.

"However, this benefit fades over time. When discrepancies between ESG claims and actual emissions become clearer, the market corrects its earlier optimism, and the stabilizing effect of greenwashing weakens."

Dr. Ariful Hoque, senior lecturer in finance at the Murdoch Business School, who also worked on the study, said they also found that greenwashing was a persistent trend for Australian firms from 2014–2022.

"On average, firms consistently reported ESG scores that were higher than what their actual carbon emissions would justify," Dr. Hoque said.

However, in 2023, he said there was a noticeable decline in greenwashing, "likely reflecting stronger ASIC enforcement, mandatory climate-risk disclosures policy starting from 2025, and greater investor scrutiny."

"For regulators, our results support the push for tighter ESG disclosure standards and stronger anti-greenwashing enforcement, as misleading sustainability claims distort risk pricing," he said.

"For investors, the findings highlight the importance of looking beyond headline ESG scores and examining whether firms' environmental claims match their actual emissions.

"For companies, this research indicates that greenwashing may buy short-term credibility, but genuine emissions reduction and transparent reporting are far more effective for managing long-term risk."

More information:

Rahma Mirza et al, False Stability? How Greenwashing Shapes Firm Risk in the Short and Long Run, Journal of Risk and Financial Management (2025). DOI: 10.3390/jrfm18120691


Original Submission

posted by janrinok on Saturday January 24, @10:54PM   Printer-friendly
from the written-in-stone dept.

A stunning discovery in a Moroccan cave is forcing scientists to reconsider the narrative of human origins. Unearthed from a site in Casablanca, 773,000-year-old fossils display a perplexing blend of ancient and modern features, suggesting that key traits of our species emerged far earlier and across a wider geographic area than previously believed:

The remains, found in the Grotte à Hominidés cave, include lower jawbones from two adults and a toddler, along with teeth, a thigh bone and vertebrae. The thigh bone bears hyena bite marks, indicating the individual may have been prey. The fossils present a mosaic: the face is relatively flat and gracile, resembling later Homo sapiens, while other features like the brow ridge and overall skull shape remain archaic, akin to earlier Homo species.

This mix of characteristics places the population at a critical evolutionary juncture. Paleoanthropologist Jean-Jacques Hublin, lead author of the study, stated, "I would be cautious about labeling them as 'the last common ancestor,' but they are plausibly close to the populations from which later African H. sapiens and Eurasian Neanderthal and Denisovan lineages ultimately emerged."

[...] The find directly challenges the traditional "out-of-Africa" model, which holds that anatomically modern humans evolved in Africa around 200,000 years ago before migrating and replacing other hominin species. Instead, it supports a more complex picture where early human populations left Africa well before fully modern traits had evolved, with differentiation happening across continents.

"The fossils show a mosaic of primitive and derived traits, consistent with evolutionary differentiation already underway during this period, while reinforcing a deep African ancestry for the H. sapiens lineage," Hublin added.

Detailed analysis reveals the nuanced transition. One jaw shows a long, low shape similar to H. erectus, but its teeth and internal features resemble both modern humans and Neanderthals. The right canine is slender and small, akin to modern humans, while some incisor roots are longer, closer to Neanderthals. The molars present a unique blend, sharing traits with North African teeth, the Spanish species H. antecessor and archaic African H. erectus.



Original Submission

posted by janrinok on Saturday January 24, @06:11PM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

[...] In an unexpected turn of events, Micron announced plans to buy Powerchip Semiconductor Manufacturing Corporation's (PSMC) P5 fabrication site in Tongluo, Miaoli County, Taiwan, for a total cash consideration of $1.8 billion. To a large degree, the transaction would evolve Micron's long-term 'technology-for-capacity' strategy, which it has used for decades. This also signals that DRAM fabs are now so capital-intensive that it is no longer viable for companies like PSMC to build them and get process technologies from companies like Micron. The purchase is also set against the backdrop of the ongoing DRAM supply squeeze, as data centers are set to consume 70% of all memory chips made in 2026.

"This strategic acquisition of an existing cleanroom complements our current Taiwan operations and will enable Micron to increase production and better serve our customers in a market where demand continues to outpace supply," said Manish Bhatia, executive vice president of global operations at Micron Technology. "The Tongluo fab's close proximity to Micron's Taichung site will enable synergies across our Taiwan operations."

The deal between Micron and PSMC includes 300,000 square feet of existing 300mm cleanroom space, which will greatly expand Micron's production footprint in Taiwan. By today's standards, a 300,000 square foot cleanroom is a relatively large one, but it will be dwarfed by Micron's next-generation DRAM campus in New York, which will feature four cleanrooms of 600,000 square feet each. However, the first of those fabs will only come online in the late 2020s or in the early 2030s.

The transaction is expected to close by Q2 2026, pending receipt of all necessary approvals. After closing, Micron will gradually equip and ramp the site for DRAM production, with meaningful wafer output starting in the second half of 2027.

The agreement also establishes a long-term strategic partnership under which PSMC will support Micron with assembly services, while Micron will assist PSMC's legacy DRAM portfolio.

While the P5 site in Tongluo isn't producing memory in high volumes today, the change of ownership and inevitable upgrade of the fab itself will have an impact on global DRAM supply, which is good news for a segment that is experiencing unprecedented demand. It is important that Micron is set to buy a production facility in Taiwan, but it is even more important that the transaction marks an end to its technology-for-capacity approach to making memory on the island. In the past, instead of building large numbers of new greenfield fabs in Taiwan, Micron partnered with local foundries (most notably PSMC, but also with Inotera and Nanya) and provided advanced DRAM process technology in exchange for wafer capacity, manufacturing services, or fab access.

This approach allowed Micron to expand output faster and with less capital risk, leveraged Taiwan's mature 300mm manufacturing ecosystem, and avoided duplicating the front-end infrastructure, which was already in place.

However, it looks like the traditional technology-for-capacity model — which worked well in the 90nm – 20nm-class node era — no longer works. It worked well when DRAM fabs cost a few billion dollars, when process ramps were straightforward, and when partners could justify their capital risks in exchange for technologies (which cost billions in R&D investments) and stable wafer demand.

Today’s advanced DRAM fabs require $15 – $25 billion or more of upfront investment. This would go into equipment like pricey EUV scanners, as well as longer and riskier yield ramps. In that environment, a partner running someone else's IP absorbs massive CapEx and execution risk while getting limited advantages, which makes the economics increasingly unattractive: after all, if you can invest over $20 billion in a fab, you can certainly invest $2 billion in R&D.

In recent years, Micron's behavior has reflected this shift in thinking. Early technology-for-capacity deals helped it scale quickly, but once fabs crossed a certain cost and complexity threshold, Micron had to move on and own fabs instead of renting capacity. This is reflected in moves like its Elpida acquisition in 2013, where Micron purchased a bankrupt memory maker to secure its capacity. This was followed up in 2016 with the Inotera acquisition, and now with PSMC.

[...] Now, the American company will own the site and invest in its transition to its latest process technologies.


Original Submission

posted by janrinok on Saturday January 24, @01:25PM   Printer-friendly
from the splish-splash-I-was-taking-a-bath dept.

Limescale deposits in wells, pipes, and bathing facilities provide information about Pompeii's ancient water supply:

The city of Pompeii was buried by the eruption of Mount Vesuvius in AD 79. Researchers at Johannes Gutenberg University Mainz (JGU) have now reconstructed the city's water supply system based on carbonate deposits – particularly the transition from wells to an aqueduct. The results were published yesterday in the journal PNAS. "The baths were originally supplied by deep wells with water-lifting devices, and the hygienic conditions in them were far from ideal," says Dr. Gül Sürmelihindi from the Institute of Geosciences at JGU, first author of the publication. "Over time, however, the water-lifting devices were upgraded through technological developments before being replaced by an aqueduct in the first century AD, which provided more water and allowed more frequent refreshment of water for bathing."

To reconstruct the ancient water supply, Sürmelihindi and her colleague Professor Cees Passchier used isotope analysis to examine carbonate deposits that had formed in various components of the city's water infrastructure – such as the aqueduct, water towers, well shafts, and the pools of the public baths. "We found completely different patterns of stable isotopes and trace elements in the carbonates from the aqueduct and in those from the wells," says Sürmelihindi. Based on these different geochemical characteristics, the team was able to determine the origin of the bathing water and draw conclusions about Pompeii's water management system and quality changes in provided water. They discovered that the wells tapped into highly mineralized groundwater from volcanic deposits, which was not ideal for drinking purposes. This agrees well with what was previously known: during the reign of Augustus, the aqueduct was built in Pompeii, significantly increasing the amount of available water for bathing and providing drinking water.

"In the so-called Republican Baths – the oldest public bathing facilities in the city, dating back to pre-Roman times around 130 BC – we were able to prove through isotope analysis that the bath water was provided from wells, and not renewed regularly. Therefore, the hygienic condition did not meet the high hygienic standards usually attributed to the Romans," explains Sürmelihindi. Probably, the water was only changed once daily, which, according to Sürmelihindi, would not be surprising: "After all, the baths were supplied by a water-lifting machine, powered by slaves via a kind of treadwheel."

The researchers also found lead, zinc, and copper peaks in the anthropogenic carbonate deposits, which indicate heavy-metal contamination of the bath water. This suggests that boilers and water pipes were replaced, which increased the heavy-metal concentrations. An increase in stable oxygen isotopes also shows that the pools in the Republican Baths provided warmer water after the renovation.

The researchers also found peculiar, cyclic patterns in the carbon isotope ratio of carbonate from the wells. According to Passchier, a possible cause could lie in the fluctuating amount of volcanic carbon dioxide in the groundwater – this cyclicity may provide information on the activity of Mount Vesuvius long before the AD 79 eruption.

Journal Reference: G. Sürmelihindi et al., Seeing Roman life through water: Exploring Pompeii's public baths via carbonate deposits, PNAS, 12 January 2026,
DOI: 10.1073/pnas.2517276122


Original Submission

posted by janrinok on Saturday January 24, @08:42AM   Printer-friendly

I came across a very interesting social media post by John Carlos Baez about a paper published a few weeks ago that showed you can build a universal computation machine using a single billiard ball on a carefully crafted table. According to one of the paper's authors (Eva Miranda):

With Isaac Ramos, we show that 2D billiard systems are Turing complete, implying the existence of undecidable trajectories in physically natural models from hard-sphere gases to celestial mechanics.
Determinism ≠ predictability.

From Baez:

More precisely: you can create a computer that can run any program, using just a single point moving frictionlessly in a region of the plane and bouncing off the walls elastically.

Since the halting problem is undecidable, this means there are some yes-or-no questions about the eventual future behavior of this point that cannot be settled in a finite time by any computer program.

This is true even though the point's motion is computable to arbitrary accuracy for any given finite time. In fact, since the methodology here does *not* exploit the chaos that can occur for billiards on certain shaped tables, it's not even one of those cases where the point's motion is computable in principle but your knowledge of the initial conditions needs to be absurdly precise.

Achieving Turing completeness using billiards goes back to the early 1980s and a paper by Fredkin and Toffoli that established the idea of "Conservative Logic," which was also mentioned by Richard Feynman in his Feynman Lectures on Computation. That system, however, used the interactions of multiple billiard balls, whereas this paper shows you only need one (if you carefully lay out the edges of your table).

The Baez link has some very interesting comments, including from Eva Miranda.


Original Submission

posted by janrinok on Saturday January 24, @04:01AM   Printer-friendly
from the cloudflop-again dept.

Arthur T Knackerbracket has processed the following story:

On January 8, 2026, a seemingly innocuous code change at Cloudflare triggered a cascade of DNS resolution failures across the internet, affecting millions of users worldwide. The culprit wasn't a cyberattack, server outage, or configuration error — it was something far more subtle: the order in which DNS records appeared in responses from 1.1.1.1, one of the world's most popular public DNS resolvers.

[...] The story begins on December 2, 2025, when Cloudflare engineers introduced what appeared to be a routine optimization to their DNS caching system. The change was designed to reduce memory usage — a worthy goal for infrastructure serving millions of queries per second. After testing in their development environment for over a month, the change began its global rollout on January 7, 2026.

By January 8 at 17:40 UTC, the update had reached 90% of Cloudflare's DNS servers. Within 39 minutes, the company had declared an incident as reports of DNS resolution failures poured in from around the world. The rollback began immediately, but it took another hour and a half to fully restore service.

The affected timeframe was relatively short — less than two hours from incident declaration to resolution — but the impact was significant. Users across multiple platforms and operating systems found themselves unable to access websites and services that relied on CNAME records, a fundamental building block of modern DNS infrastructure.

To understand what went wrong, it's essential to grasp how DNS CNAME (Canonical Name) records work. When you visit a website like www.example.com, your request might follow a chain of aliases before reaching the final destination:
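
For example (names here are illustrative, matching the walkthrough further below):

  www.example.com    CNAME    cdn.example.com
  cdn.example.com    A        198.51.100.1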

Each step in this chain has its own Time-To-Live (TTL) value, indicating how long the record can be cached. When some records in the chain expire while others remain valid, DNS resolvers like 1.1.1.1 can optimize by only resolving the expired portions and combining them with cached data. This optimization is where the trouble began.

The problematic change was deceptively simple. Previously, when merging cached CNAME records with newly resolved data, Cloudflare's code created a new list and placed CNAME records first:

let mut answer_rrs = Vec::with_capacity(entry.answer.len() + self.records.len());
answer_rrs.extend_from_slice(&self.records); // CNAMEs first
answer_rrs.extend_from_slice(&entry.answer); // Then A/AAAA records

To save memory allocations, engineers changed this to append CNAMEs to the existing answer list. This seemingly minor optimization had a profound consequence: CNAME records now sometimes appeared after the final resolved answers instead of before them.
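
Based on that description, the changed version presumably looked something like the following sketch (illustrative only, not Cloudflare's actual code): the cached answer vector is reused and the CNAMEs are pushed onto its end.

// Sketch of the appended-order variant described above (illustrative, not
// Cloudflare's actual code): reusing the cached answer list avoids a fresh
// allocation, but the CNAMEs now end up after the A/AAAA records.
let mut answer_rrs = entry.answer;             // cached A/AAAA records stay first
answer_rrs.extend_from_slice(&self.records);   // ...and the CNAMEs are appended after them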

The reason this change caused widespread failures lies in how many DNS client implementations process responses. Some clients, including the widely-used getaddrinfo function in glibc (the GNU C Library used by most Linux systems), parse DNS responses sequentially while tracking the expected record name.

When processing a response in the correct order:

  • Find records for www.example.com
  • Encounter www.example.com CNAME cdn.example.com
  • Update expected name to cdn.example.com
  • Find cdn.example.com A 198.51.100.1
  • Success!

But when CNAMEs appear after A records:

  • Find records for www.example.com
  • Ignore cdn.example.com A 198.51.100.1 (doesn't match expected name)
  • Encounter www.example.com CNAME cdn.example.com
  • Update expected name to cdn.example.com
  • No more records found — resolution fails

This sequential parsing approach, while seemingly fragile, made sense when it was implemented. It's efficient, requires minimal memory, and worked reliably for decades because most DNS implementations naturally placed CNAME records first.
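
To make that concrete, here is a minimal simulation of such a sequential, name-tracking client, written in the same style as the snippet above (illustrative only, not glibc's actual getaddrinfo code):

// Minimal simulation of a sequential, name-tracking DNS client.
#[derive(Clone, Copy)]
struct Record { name: &'static str, rtype: &'static str, data: &'static str }

fn resolve(answers: &[Record], query: &str) -> Option<&'static str> {
    let mut expected = query.to_string();
    let mut address = None;
    for rr in answers {
        if rr.name != expected.as_str() { continue; }   // wrong name: skip it
        match rr.rtype {
            "CNAME" => expected = rr.data.to_string(),  // follow the alias
            "A" => address = Some(rr.data),             // accept the address
            _ => {}
        }
    }
    address
}

fn main() {
    let cname = Record { name: "www.example.com", rtype: "CNAME", data: "cdn.example.com" };
    let a = Record { name: "cdn.example.com", rtype: "A", data: "198.51.100.1" };

    // CNAME first: the alias is followed before the A record is seen.
    println!("{:?}", resolve(&[cname, a], "www.example.com"));   // Some("198.51.100.1")

    // A record first: it is skipped as "wrong name", and no records remain
    // once the CNAME finally updates the expected name.
    println!("{:?}", resolve(&[a, cname], "www.example.com"));   // None
}

With the records in CNAME-first order, the single pass returns the address; with the order reversed, the same pass returns nothing, which is exactly the failure mode described above.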

The impact of this change was far-reaching but unevenly distributed. The primary victims were systems using glibc's getaddrinfo function, which includes most traditional Linux distributions that don't use systemd-resolved as an intermediary caching layer.

Perhaps most dramatically affected were certain Cisco ethernet switches. Three specific models experienced spontaneous reboot loops when they received responses with reordered CNAMEs from 1.1.1.1. Cisco has since published a service document describing the issue, highlighting how deeply this problem penetrated into network infrastructure.

Interestingly, many modern systems were unaffected. Windows, macOS, iOS, and Android all use different DNS resolution libraries that handle record ordering more flexibly. Even on Linux, distributions using systemd-resolved were protected because the local caching resolver reconstructed responses according to its own ordering logic.

At the heart of this incident lies a fundamental ambiguity in RFC 1034, the 1987 specification that defines much of DNS behavior.

The phrase "possibly preface" suggests that CNAME records should appear before other records, but the language isn't normative. RFC 1034 predates RFC 2119 (published in 1997), which standardized the use of keywords like "MUST" and "SHOULD" to indicate requirements versus suggestions.

Further complicating matters, RFC 1034 also states that "the difference in ordering of the RRs in the answer section is not significant," though this comment appears in the context of a specific example comparing two A records, not different record types.

This ambiguity has persisted for nearly four decades, with different implementers reaching different conclusions about what the specification requires.

One of the most puzzling aspects of this incident is how it survived testing for over a month without detection. The answer reveals the complexity of modern internet infrastructure and the challenges of comprehensive testing.

Cloudflare's testing environment likely used systems that weren't affected by the change. Most modern operating systems handle DNS record ordering gracefully, and many Linux systems use systemd-resolved, which masks the underlying issue. The specific combination of factors needed to trigger the problem — direct use of glibc's resolver with CNAME chains from 1.1.1.1 — may not have been present in their test scenarios.

This highlights a broader challenge in infrastructure testing: the internet's diversity means that edge cases can have mainstream impact. What works in a controlled testing environment may fail when exposed to the full complexity of real-world deployments.

The DNS community's response to this incident has been swift and constructive. Cloudflare has committed to maintaining CNAME-first ordering in their responses and has authored an Internet-Draft proposing to clarify the ambiguous language in the original RFC.

The proposed specification would explicitly require CNAME records to appear before other record types in DNS responses, codifying what has been common practice for decades. If adopted, this would prevent similar incidents in the future by removing the ambiguity that allowed different interpretations.

The incident also sparked broader discussions about DNS implementation robustness. While Cloudflare's change exposed fragility in some client implementations, it also highlighted the importance of defensive programming in critical infrastructure components.

[...] The incident revealed an even deeper complexity: even when CNAME records appear first, their internal ordering can cause problems.

[...] For the broader DNS community, this incident serves as a reminder of the importance of specification clarity and comprehensive testing. As internet infrastructure continues to evolve, identifying and resolving these legacy ambiguities becomes increasingly important.

The incident also highlights the value of diverse DNS resolver implementations. The fact that different resolvers handle record ordering differently provided natural resilience — when one approach failed, others continued working.

The January 8, 2026 DNS incident demonstrates how seemingly minor changes to critical infrastructure can have far-reaching consequences. A memory optimization that moved CNAME records from the beginning to the end of DNS responses triggered failures across multiple platforms and caused network equipment to reboot.

At its core, this was a story about assumptions — assumptions built into 40-year-old specifications, assumptions made by implementers over decades, and assumptions about how systems would behave under different conditions. When those assumptions collided with reality, the result was a brief but significant disruption to internet connectivity.

[...] As Cloudflare's engineers learned, sometimes the order of things matters more than we realize. In the complex world of internet infrastructure, even the smallest details can have the largest consequences.


Original Submission

posted by janrinok on Friday January 23, @11:16PM   Printer-friendly

Caltech-led Team Finds New Superconducting State:

Superconductivity is a quantum physical state in which a metal is able to conduct electricity perfectly without any resistance. In its most familiar application, it enables powerful magnets in MRI machines to create the magnetic fields that allow doctors to see inside our bodies. Thus far, materials can only achieve superconductivity at extremely low temperatures, near absolute zero (a few tens of Kelvin or colder). But physicists dream of superconductive materials that might one day operate at room temperature. Such materials could open entirely new possibilities in areas such as quantum computing, the energy sector, and medical technologies.

"Understanding the mechanisms leading to the formation of superconductivity and discovering exotic new superconducting phases is not only one of the most stimulating pursuits in the fundamental study of quantum materials but is also driven by this ultimate dream of achieving room-temperature superconductivity," says Stevan Nadj-Perge, professor of applied physics and materials science at Caltech.

Now a team led by Nadj-Perge that includes Lingyuan Kong, AWS quantum postdoctoral scholar research associate, and other colleagues at Caltech has discovered a new superconducting state—a finding that provides a new piece of the puzzle behind this mysterious but powerful phenomenon.

[...] In normal metals, individual electrons collide with ions as they move across the metal's lattice structure made up of oppositely charged ions. Each collision causes electrons to lose energy, increasing electrical resistance. In superconductors, on the other hand, electrons are weakly attracted to each other and can bind, forming duos called Cooper pairs. As long as the electrons stay within a certain relatively small range of energy levels known as the energy gap, the electrons remain paired and do not lose energy through collisions. Therefore, it is within that relatively small energy gap that superconductivity occurs.

Typically, a superconductor's energy gap is the same at all locations within the material. For example, in a superconducting crystal without impurities, all pieces of the crystal would have the same energy gap.

But beginning in the 1960s, scientists began theorizing that the energy gap in some superconducting materials could modulate in space, meaning the gap could be stronger in some areas and weaker in others. Later, in the 2000s, the idea was further developed with the proposal of what is called the pair density wave (PDW) state, which suggests that a superconducting state could arise in which the energy gap modulates with a long wavelength, where the gap fluctuates between a larger and smaller measurement.

Over the past decade, this concept has garnered significant experimental interest with numerous materials, including iron-based superconductors being explored as potential hosts of a PDW state.

Now, working with extremely thin flakes of an iron-based superconductor, FeTe0.55Se0.45, Nadj-Perge and his colleagues have discovered a modulation of the superconducting gap with the smallest wavelength possible, matching the spacing of atoms in a crystal. They have named it the Cooper-pair density modulation (PDM) state.

"The observed gap modulation, reaching up to 40 percent, represents the strongest reported so far, leading to the clearest experimental evidence to date that gap modulation can exist even at the atomic scale," says Kong, lead author of the new paper.

This unexpected discovery was made possible by the first successful realization of scanning tunneling microscopy experiments of an iron-based superconductor on a specialized device for studying such thin flakes. Such experiments had been hampered for nearly two decades by the presence of severe surface contamination, but the Caltech team, working in the Kavli Nanoscience Institute (KNI), developed a new experimental approach that enabled a sufficiently clean surface for microscopic probes.

Journal Reference:
Kong, Lingyuan, Papaj, Michał, Kim, Hyunjin, et al. Cooper-pair density modulation state in an iron-based superconductor, Nature (DOI: 10.1038/s41586-025-08703-x)


Original Submission

posted by janrinok on Friday January 23, @06:32PM   Printer-friendly

Starlink in Iran

An interesting technical article about satellite communications and Iran

In Iran, it is not only the mobile and fixed-line networks that are being jammed, but Starlink as well. We explain how this is likely achieved despite the thousands of satellites involved.

Reliable information is hard to come by, as practically the entire country has been offline since the evening of January 8; the content delivery network Cloudflare is registering almost no data traffic from Iran, and the internet observation group Netblocks likewise reports a complete communications blackout.

One of the few digital ways out currently runs via satellite, through SpaceX's global Starlink network. Although its use is forbidden in Iran, terminals are smuggled into the country and SpaceX tolerates their use; since January 13, service has even been free of charge. However, activists report that Starlink is also working increasingly poorly in Iran and that users are being actively tracked. But how can a system of thousands of satellites be jammed from the ground, and how does the regime find users of the devices without access to customer data or the network?

The US organization Holistic Resilience, which helps Iranians secure their internet access, speaks of around 50,000 users in the country. In this article, we will explore how Starlink works, why it functions in Iran, and how the Iranian government is likely jamming the network. While neither the regime nor SpaceX likes to reveal their cards, hackers and journalists are not deterred by this, and the laws of physics apply to everyone.


Original Submission

posted by janrinok on Friday January 23, @01:45PM   Printer-friendly

Physics of Foam Strangely Resembles AI Training:

Foams are everywhere: soap suds, shaving cream, whipped toppings and food emulsions like mayonnaise. For decades, scientists believed that foams behave like glass, their microscopic components trapped in static, disordered configurations.

Now, engineers at the University of Pennsylvania have found that foams actually flow ceaselessly inside while holding their external shape. More strangely, from a mathematical perspective, this internal motion resembles the process of deep learning, the method typically used to train modern AI systems.

The discovery could hint that learning, in a broad mathematical sense, may be a common organizing principle across physical, biological and computational systems, and provide a conceptual foundation for future efforts to design adaptive materials. The insight could also shed new light on biological structures that continuously rearrange themselves, like the scaffolding in living cells.

In a paper in Proceedings of the National Academy of Sciences, the team describes using computer simulations to track the movement of bubbles in a wet foam. Rather than eventually staying put, the bubbles continued to meander through possible configurations. Mathematically speaking, the process mirrors how deep learning involves continually adjusting an AI system's parameters — the information that encodes what an AI "knows" — during training.

"Foams constantly reorganize themselves," says John C. Crocker, Professor in Chemical and Biomolecular Engineering (CBE) and the paper's co-senior author. "It's striking that foams and modern AI systems appear to follow the same mathematical principles. Understanding why that happens is still an open question, but it could reshape how we think about adaptive materials and even living systems."

In some ways, foams behave mechanically like solids: they more or less hold their shape and can rebound when pressed. At a microscopic level, however, foams are "two-phase" materials, made up of bubbles suspended in a liquid or solid. Because foams are relatively easy to create and observe yet exhibit complex mechanical behavior, they have long served as model systems for studying other crowded, dynamic materials, including living cells.

[...] During training, modern AI systems continually adjust their parameters — the numerical values that encode what they "know." Much like bubbles in foams were once thought to descend into metaphorical valleys, searching for the positions that require the least energy to maintain, early approaches to AI training aimed to optimize systems as tightly as possible to their training data.

Deep learning accomplishes this using optimization algorithms related to the mathematical technique "gradient descent," which involves repeatedly nudging a system in the direction that most improves its performance. If an AI's internal representation of its training data were a landscape, the optimizers guide the system downhill, step by step, toward configurations that reduce error — those that best match the examples it has seen before.
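
As a toy illustration of that "nudging" (a minimal sketch of plain one-dimensional gradient descent, not the actual optimizers used in deep learning frameworks or in the foam simulations):

// Toy gradient descent on the loss f(x) = (x - 3)^2, whose gradient is 2(x - 3).
fn main() {
    let mut x = 0.0_f64;          // the single "parameter" being adjusted
    let learning_rate = 0.1;      // how far to nudge per step
    for step in 0..40 {
        let grad = 2.0 * (x - 3.0);    // slope of the loss at the current x
        x -= learning_rate * grad;     // step downhill, against the gradient
        if step % 10 == 0 {
            println!("step {:2}: x = {:.4}, loss = {:.6}", step, x, (x - 3.0).powi(2));
        }
    }
}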

Over time, researchers realized that forcing systems into the deepest possible valleys was counterproductive. Models that optimized too precisely became brittle, unable to generalize beyond the data they had already seen. "The key insight was realizing that you don't actually want to push the system into the deepest possible valley," says Robert Riggleman, Professor in CBE and co-senior author of the new paper. "Keeping it in flatter parts of the landscape, where lots of solutions perform similarly well, turns out to be what allows these models to generalize."

When the Penn researchers looked again at their foam data through this lens, the parallel was hard to miss. Rather than settling into "deep" positions in this metaphorical landscape, bubbles in foams also remained in motion, much like the parameters in modern AI systems, continuously reorganizing within broad, flat regions with similar characteristics. The same mathematics that explains why deep learning works turned out to describe what foams had been doing all along.

[...] "Why the mathematics of deep learning accurately characterizes foams is a fascinating question," says Crocker. "It hints that these tools may be useful far outside of their original context, opening the door to entirely new lines of inquiry."

Journal Reference: Amruthesh Thirumalaiswamy et al, Slow relaxation and landscape-driven dynamics in viscous ripening foams, PNAS (2025). https://doi.org/10.1073/pnas.2518994122 [arXiv preprint: https://dx.doi.org/10.48550/arxiv.2301.13400]


Original Submission

posted by hubie on Friday January 23, @08:59AM   Printer-friendly
from the dystopia-is-now! dept.

https://arstechnica.com/ai/2026/01/new-ai-plugin-uses-wikipedias-ai-writing-detection-rules-to-help-it-sound-human/

On Saturday, tech entrepreneur Siqi Chen released an open source plugin for Anthropic's Claude Code AI assistant that instructs the AI model to stop writing like an AI model. Called "Humanizer," the simple prompt plugin feeds Claude a list of 24 language and formatting patterns that Wikipedia editors have listed as chatbot giveaways. Chen published the plugin on GitHub, where it has picked up over 1,600 stars as of Monday.

"It's really handy that Wikipedia went and collated a detailed list of 'signs of AI writing,'" Chen wrote on X. "So much so that you can just tell your LLM to... not do that."
[...]
Chen's tool is a "skill file" for Claude Code, Anthropic's terminal-based coding assistant, which involves a Markdown-formatted file that adds a list of written instructions (you can see them here) appended to the prompt fed into the large language model (LLM) that powers the assistant. Unlike a normal system prompt, for example, the skill information is formatted in a standardized way that Claude models are fine-tuned to interpret with more precision than a plain system prompt. (Custom skills require a paid Claude subscription with code execution turned on.)
[...]
So what does AI writing look like? The Wikipedia guide is specific with many examples, but we'll give you just one here for brevity's sake.

Some chatbots love to pump up their subjects with phrases like "marking a pivotal moment" or "stands as a testament to," according to the guide. They write like tourism brochures, calling views "breathtaking" and describing towns as "nestled within" scenic regions. They tack "-ing" phrases onto the end of sentences to sound analytical: "symbolizing the region's commitment to innovation."

To work around those rules, the Humanizer skill tells Claude to replace inflated language with plain facts and offers this example transformation:

Before: "The Statistical Institute of Catalonia was officially established in 1989, marking a pivotal moment in the evolution of regional statistics in Spain."

After: "The Statistical Institute of Catalonia was established in 1989 to collect and publish regional statistics."
[...]
even though most AI language models tend toward certain types of language, they can also be prompted to avoid them, as with the Humanizer skill. (Although sometimes it's very difficult, as OpenAI found in its yearslong struggle against the em dash.)

Also, humans can write in chatbot-like ways. For example, this article likely contains some "AI-written traits" that trigger AI detectors even though it was written by a professional writer—especially if we use even a single em dash—because most LLMs picked up writing techniques from examples of professional writing scraped from the web.

[My initial reaction was, nice a way to filter out the AI slop! And there's a plugin! When in reality, it's a plugin to help the Claude LLM sound less like an AI. So, the deep dark bad path, got it. Too much optimism, I guess.]


Original Submission

posted by hubie on Friday January 23, @04:12AM   Printer-friendly
from the yarrrvidia dept.

https://torrentfreak.com/nvidia-contacted-annas-archive-to-secure-access-to-millions-of-pirated-books/

'NVIDIA Contacted Anna's Archive to Secure Access to Millions of Pirated Books'

The new complaint alleges that "competitive pressures drove NVIDIA to piracy", which allegedly included collaborating with the controversial Anna's Archive library.

According to the amended complaint, a member of Nvidia's data strategy team reached out to Anna's Archive to find out what the pirate library could offer the trillion-dollar company:

"Within a week of contacting Anna's Archive, and days after being warned by Anna's Archive of the illegal nature of their collections, NVIDIA management gave 'the green light' to proceed with the piracy. Anna's Archive offered NVIDIA millions of pirated copyrighted books."

Busted? Well at least they asked. Meta blamed it on porno. Will there be any fallout?


Original Submission