
posted by janrinok on Saturday January 24, @10:54PM   Printer-friendly
from the written-in-stone dept.

A stunning discovery in a Moroccan cave is forcing scientists to reconsider the narrative of human origins. Unearthed from a site in Casablanca, 773,000-year-old fossils display a perplexing blend of ancient and modern features, suggesting that key traits of our species emerged far earlier and across a wider geographic area than previously believed:

The remains, found in the Grotte à Hominidés cave, include lower jawbones from two adults and a toddler, along with teeth, a thigh bone and vertebrae. The thigh bone bears hyena bite marks, indicating the individual may have been prey. The fossils present a mosaic: the face is relatively flat and gracile, resembling later Homo sapiens, while other features like the brow ridge and overall skull shape remain archaic, akin to earlier Homo species.

This mix of characteristics places the population at a critical evolutionary juncture. Paleoanthropologist Jean-Jacques Hublin, lead author of the study, stated, "I would be cautious about labeling them as 'the last common ancestor,' but they are plausibly close to the populations from which later African H. sapiens and Eurasian Neanderthal and Denisovan lineages ultimately emerged."

[...] The find directly challenges the traditional "out-of-Africa" model, which holds that anatomically modern humans evolved in Africa around 200,000 years ago before migrating and replacing other hominin species. Instead, it supports a more complex picture where early human populations left Africa well before fully modern traits had evolved, with differentiation happening across continents.

"The fossils show a mosaic of primitive and derived traits, consistent with evolutionary differentiation already underway during this period, while reinforcing a deep African ancestry for the H. sapiens lineage," Hublin added.

Detailed analysis reveals the nuanced transition. One jaw shows a long, low shape similar to H. erectus, but its teeth and internal features resemble both modern humans and Neanderthals. The right canine is slender and small, akin to modern humans, while some incisor roots are longer, closer to Neanderthals. The molars present a unique blend, sharing traits with North African teeth, the Spanish species H. antecessor and archaic African H. erectus.


Original Submission

posted by janrinok on Saturday January 24, @06:11PM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

[...] In an unexpected turn of events, Micron announced plans to buy Powerchip Semiconductor Manufacturing Corporation's (PSMC) P5 fabrication site in Tongluo, Miaoli County, Taiwan, for a total cash consideration of $1.8 billion. To a large degree, the transaction marks an evolution of Micron's long-standing 'technology-for-capacity' strategy, which it has used for decades. It also signals that DRAM fabs are now so capital-intensive that it is no longer viable for companies like PSMC to build them and obtain process technologies from companies like Micron. The purchase is also set against the backdrop of the ongoing DRAM supply squeeze, as data centers are set to consume 70% of all memory chips made in 2026.

"This strategic acquisition of an existing cleanroom complements our current Taiwan operations and will enable Micron to increase production and better serve our customers in a market where demand continues to outpace supply," said Manish Bhatia, executive vice president of global operations at Micron Technology. "The Tongluo fab's close proximity to Micron's Taichung site will enable synergies across our Taiwan operations."

The deal between Micron and PSMC includes 300,000 square feet of existing 300mm cleanroom space, which will greatly expand Micron's production footprint in Taiwan. By today's standards, a 300,000 square foot cleanroom is a relatively large one, but it will be dwarfed by Micron's next-generation DRAM campus in New York, which will feature four cleanrooms of 600,000 square feet each. However, the first of those fabs will only come online in the late 2020s or in the early 2030s.

The transaction is expected to close by Q2 2026, pending receipt of all necessary approvals. After closing, Micron will gradually equip and ramp the site for DRAM production, with meaningful wafer output starting in the second half of 2027.

The agreement also establishes a long-term strategic partnership under which PSMC will support Micron with assembly services, while Micron will assist PSMC's legacy DRAM portfolio.

While the P5 site in Tongluo isn't producing memory in high volumes today, the change of ownership and inevitable upgrade of the fab itself will have an impact on global DRAM supply, which is good news for a segment that is experiencing unprecedented demand. While it is important that Micron is set to buy a production facility in Taiwan, it is even more important that the transaction marks an end to its technology-for-capacity approach to making memory on the island. In the past, instead of building large numbers of new greenfield fabs in Taiwan, Micron partnered with local foundries (most notably PSMC, but also Inotera and Nanya) and provided advanced DRAM process technology in exchange for wafer capacity, manufacturing services, or fab access.

This approach allowed Micron to expand output faster and with less capital risk, leveraged Taiwan's mature 300mm manufacturing ecosystem, and avoided duplicating the front-end infrastructure, which was already in place.

However, it looks like the traditional technology-for-capacity model — which worked well in the 90nm – 20nm-class node era — no longer works. It worked well when DRAM fabs cost a few billion dollars, when process ramps were straightforward, and when partners could justify their capital risks in exchange for technologies (which cost billions in R&D investments) and stable wafer demand.

Today’s advanced DRAM fabs require $15 – $25 billion or more of upfront investment. This would go into equipment like pricey EUV scanners, as well as longer and riskier yield ramps. In that environment, a partner running someone else's IP absorbs massive CapEx and execution risk while getting limited advantages, which makes the economics increasingly unattractive: after all, if you can invest over $20 billion in a fab, you can certainly invest $2 billion in R&D.

In recent years, Micron's behavior has reflected this shift in thinking. Early technology-for-capacity deals helped it scale quickly, but once fabs crossed a certain cost and complexity threshold, Micron had to move on and own fabs instead of renting capacity. This is reflected in moves like its Elpida acquisition in 2013, when Micron purchased the bankrupt memory maker to secure its capacity, followed in 2016 by the Inotera acquisition, and now by PSMC.

[...] Now, the American company will own the site and invest in its transition to its latest process technologies.


Original Submission

posted by janrinok on Saturday January 24, @01:25PM   Printer-friendly
from the splish-splash-I-was-taking-a-bath dept.

Limescale deposits in wells, pipes, and bathing facilities provide information about Pompeii's ancient water supply:

The city of Pompeii was buried by the eruption of Mount Vesuvius in AD 79. Researchers at Johannes Gutenberg University Mainz (JGU) have now reconstructed the city's water supply system based on carbonate deposits – particularly the transition from wells to an aqueduct. The results were published yesterday in the journal PNAS. "The baths were originally supplied by deep wells with water-lifting devices, and the hygienic conditions in them were far from ideal," says Dr. Gül Sürmelihindi from the Institute of Geosciences at JGU, first author of the publication. "Over time, however, the water-lifting devices were upgraded through technological developments before being replaced by an aqueduct in the first century AD, which provided more water and allowed more frequent refreshment of water for bathing."

To reconstruct the ancient water supply, Sürmelihindi and her colleague Professor Cees Passchier used isotope analysis to examine carbonate deposits that had formed in various components of the city's water infrastructure – such as the aqueduct, water towers, well shafts, and the pools of the public baths. "We found completely different patterns of stable isotopes and trace elements in the carbonates from the aqueduct and in those from the wells," says Sürmelihindi. Based on these different geochemical characteristics, the team was able to determine the origin of the bathing water and draw conclusions about Pompeii's water management system and quality changes in provided water. They discovered that the wells tapped into highly mineralized groundwater from volcanic deposits, which was not ideal for drinking purposes. This agrees well with what was previously known: during the reign of Augustus, the aqueduct was built in Pompeii, significantly increasing the amount of available water for bathing and providing drinking water.

"In the so-called Republican Baths – the oldest public bathing facilities in the city, dating back to pre-Roman times around 130 BC – we were able to prove through isotope analysis that the bath water was provided from wells, and not renewed regularly. Therefore, the hygienic condition did not meet the high hygienic standards usually attributed to the Romans," explains Sürmelihindi. Probably, the water was only changed once daily, which, according to Sürmelihindi, would not be surprising: "After all, the baths were supplied by a water-lifting machine, powered by slaves via a kind of treadwheel."

The researchers also found lead, zinc, and copper peaks in the anthropogenic carbonate deposits, which indicate heavy-metal contamination of the bath water. This suggests that boilers and water pipes were replaced, which increased the heavy metal concentrations. An increase in stable oxygen isotopes also shows that the pools in the Republican Baths provided warmer water after the renovation.

The researchers also found peculiar, cyclic patterns in the carbon isotope ratio of carbonate from the wells. According to Passchier, a possible cause could lie in the fluctuating amount of volcanic carbon dioxide in the groundwater – this cyclicity may provide information on the activity of Mount Vesuvius long before the AD 79 eruption.

Journal Reference: G. Sürmelihindi et al., Seeing Roman life through water: Exploring Pompeii's public baths via carbonate deposits, PNAS, 12 January 2026,
DOI: 10.1073/pnas.2517276122


Original Submission

posted by janrinok on Saturday January 24, @08:42AM   Printer-friendly

I came across a very interesting social media post by John Carlos Baez about a paper published a few weeks ago that showed you can build a universal computation machine using a single billiard ball on a carefully crafted table. According to one of the paper's authors (Eva Miranda):

With Isaac Ramos, we show that 2D billiard systems are Turing complete, implying the existence of undecidable trajectories in physically natural models from hard-sphere gases to celestial mechanics.
Determinism ≠ predictability.

From Baez:

More precisely: you can create a computer that can run any program, using just a single point moving frictionlessly in a region of the plane and bouncing off the walls elastically.

Since the halting problem is undecidable, this means there are some yes-or-no questions about the eventual future behavior of this point that cannot be settled in a finite time by any computer program.

This is true even though the point's motion is computable to arbitrary accuracy for any given finite time. In fact, since the methodology here does *not* exploit the chaos that can occur for billiards on certain shaped tables, it's not even one of those cases where the point's motion is computable in principle but your knowledge of the initial conditions needs to be absurdly precise.

Achieving Turing completeness using billiards goes back to the early 1980s, with a paper by Fredkin and Toffoli that established the idea of "Conservative Logic" (also discussed by Richard Feynman in his Feynman Lectures on Computation). That system, however, used the interactions of multiple billiard balls, whereas this paper shows you only need one, provided you carefully lay out the edges of your table.

The Baez link has some very interesting comments, including from Eva Miranda.


Original Submission

posted by janrinok on Saturday January 24, @04:01AM   Printer-friendly
from the cloudflop-again dept.

Arthur T Knackerbracket has processed the following story:

On January 8, 2026, a seemingly innocuous code change at Cloudflare triggered a cascade of DNS resolution failures across the internet, affecting millions of users worldwide. The culprit wasn't a cyberattack, server outage, or configuration error — it was something far more subtle: the order in which DNS records appeared in responses from 1.1.1.1, one of the world's most popular public DNS resolvers.

[...] The story begins on December 2, 2025, when Cloudflare engineers introduced what appeared to be a routine optimization to their DNS caching system. The change was designed to reduce memory usage — a worthy goal for infrastructure serving millions of queries per second. After testing in their development environment for over a month, the change began its global rollout on January 7, 2026.

By January 8 at 17:40 UTC, the update had reached 90% of Cloudflare's DNS servers. Within 39 minutes, the company had declared an incident as reports of DNS resolution failures poured in from around the world. The rollback began immediately, but it took another hour and a half to fully restore service.

The affected timeframe was relatively short — less than two hours from incident declaration to resolution — but the impact was significant. Users across multiple platforms and operating systems found themselves unable to access websites and services that relied on CNAME records, a fundamental building block of modern DNS infrastructure.

To understand what went wrong, it's essential to grasp how DNS CNAME (Canonical Name) records work. When you visit a website like www.example.com, your request might follow a chain of aliases before reaching the final destination:

www.example.com   CNAME   cdn.example.com
cdn.example.com   A       198.51.100.1

Each step in this chain has its own Time-To-Live (TTL) value, indicating how long the record can be cached. When some records in the chain expire while others remain valid, DNS resolvers like 1.1.1.1 can optimize by only resolving the expired portions and combining them with cached data. This optimization is where the trouble began.

The problematic change was deceptively simple. Previously, when merging cached CNAME records with newly resolved data, Cloudflare's code created a new list and placed CNAME records first:

let mut answer_rrs = Vec::with_capacity(entry.answer.len() + self.records.len());
answer_rrs.extend_from_slice(&self.records); // CNAMEs first
answer_rrs.extend_from_slice(&entry.answer); // Then A/AAAA records

To save memory allocations, engineers changed this to append CNAMEs to the existing answer list. This seemingly minor optimization had a profound consequence: CNAME records now sometimes appeared after the final resolved answers instead of before them.
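The two merge strategies can be contrasted with a small sketch. This is a hypothetical reconstruction based on the description above, not Cloudflare's actual code, with records reduced to plain strings:

```rust
// Old path: build a fresh vector with CNAMEs first (one extra allocation).
fn merge_old(cnames: &[String], cached: &[String]) -> Vec<String> {
    let mut answer = Vec::with_capacity(cnames.len() + cached.len());
    answer.extend_from_slice(cnames); // CNAMEs first
    answer.extend_from_slice(cached); // then A/AAAA records
    answer
}

// Optimized path: reuse the cached answer vector and append the CNAMEs.
// This saves an allocation, but the CNAMEs now land *after* the addresses.
fn merge_optimized(cnames: &[String], mut cached: Vec<String>) -> Vec<String> {
    cached.extend_from_slice(cnames); // CNAMEs last
    cached
}
```

Both functions return the same set of records; only the ordering differs, which is exactly the property some clients turned out to depend on.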

The reason this change caused widespread failures lies in how many DNS client implementations process responses. Some clients, including the widely-used getaddrinfo function in glibc (the GNU C Library used by most Linux systems), parse DNS responses sequentially while tracking the expected record name.

When processing a response in the correct order:

  • Find records for www.example.com
  • Encounter www.example.com CNAME cdn.example.com
  • Update expected name to cdn.example.com
  • Find cdn.example.com A 198.51.100.1
  • Success!

But when CNAMEs appear after A records:

  • Find records for www.example.com
  • Ignore cdn.example.com A 198.51.100.1 (doesn't match expected name)
  • Encounter www.example.com CNAME cdn.example.com
  • Update expected name to cdn.example.com
  • No more records found — resolution fails

This sequential parsing approach, while seemingly fragile, made sense when it was implemented. It's efficient, requires minimal memory, and worked reliably for decades because most DNS implementations naturally placed CNAME records first.
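The two walkthroughs above can be captured in a toy order-sensitive parser. This is a deliberate simplification, not glibc's actual implementation, but it reproduces the same failure mode:

```rust
// Minimal record types for the sketch.
enum Rdata {
    Cname(String), // alias target
    A(String),     // IPv4 address
}

struct Rr {
    name: String,
    rdata: Rdata,
}

/// Scan the answer section strictly in order, tracking the expected owner
/// name, the way a glibc-style sequential parser does. Records whose owner
/// doesn't match the current expected name are skipped and never revisited.
fn resolve(query: &str, answers: &[Rr]) -> Option<String> {
    let mut expected = query.to_string();
    for rr in answers {
        if rr.name != expected {
            continue; // an out-of-order A record is lost forever here
        }
        match &rr.rdata {
            Rdata::Cname(target) => expected = target.clone(),
            Rdata::A(addr) => return Some(addr.clone()),
        }
    }
    None // ran out of records before finding an address
}
```

With the CNAME first, `resolve` follows the chain and returns the address; with the A record first, the single forward pass skips it, then updates the expected name too late, and resolution fails.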

The impact of this change was far-reaching but unevenly distributed. The primary victims were systems using glibc's getaddrinfo function, which includes most traditional Linux distributions that don't use systemd-resolved as an intermediary caching layer.

Perhaps most dramatically affected were certain Cisco Ethernet switches. Three specific models experienced spontaneous reboot loops when they received responses with reordered CNAMEs from 1.1.1.1. Cisco has since published a service document describing the issue, highlighting how deeply this problem penetrated into network infrastructure.

Interestingly, many modern systems were unaffected. Windows, macOS, iOS, and Android all use different DNS resolution libraries that handle record ordering more flexibly. Even on Linux, distributions using systemd-resolved were protected because the local caching resolver reconstructed responses according to its own ordering logic.

At the heart of this incident lies a fundamental ambiguity in RFC 1034, the 1987 specification that defines much of DNS behavior.

The phrase "possibly preface" suggests that CNAME records should appear before other records, but the language isn't normative. RFC 1034 predates RFC 2119 (published in 1997), which standardized the use of keywords like "MUST" and "SHOULD" to indicate requirements versus suggestions.

Further complicating matters, RFC 1034 also states that "the difference in ordering of the RRs in the answer section is not significant," though this comment appears in the context of a specific example comparing two A records, not different record types.

This ambiguity has persisted for nearly four decades, with different implementers reaching different conclusions about what the specification requires.

One of the most puzzling aspects of this incident is how it survived testing for over a month without detection. The answer reveals the complexity of modern internet infrastructure and the challenges of comprehensive testing.

Cloudflare's testing environment likely used systems that weren't affected by the change. Most modern operating systems handle DNS record ordering gracefully, and many Linux systems use systemd-resolved, which masks the underlying issue. The specific combination of factors needed to trigger the problem — direct use of glibc's resolver with CNAME chains from 1.1.1.1 — may not have been present in their test scenarios.

This highlights a broader challenge in infrastructure testing: the internet's diversity means that edge cases can have mainstream impact. What works in a controlled testing environment may fail when exposed to the full complexity of real-world deployments.

The DNS community's response to this incident has been swift and constructive. Cloudflare has committed to maintaining CNAME-first ordering in their responses and has authored an Internet-Draft proposing to clarify the ambiguous language in the original RFC.

The proposed specification would explicitly require CNAME records to appear before other record types in DNS responses, codifying what has been common practice for decades. If adopted, this would prevent similar incidents in the future by removing the ambiguity that allowed different interpretations.

The incident also sparked broader discussions about DNS implementation robustness. While Cloudflare's change exposed fragility in some client implementations, it also highlighted the importance of defensive programming in critical infrastructure components.
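One defensive option along those lines (a sketch, not taken from any real resolver) is for an order-sensitive client to normalize the answer section before parsing it, restoring the CNAME-first ordering it assumes:

```rust
// Stably sort the answer section so CNAME records precede address records.
// The (name, is_cname) tuple is a toy stand-in for a full resource record.
fn cnames_first(records: &mut [(String, bool)]) {
    // sort_by_key is a stable sort, so relative order within each group is
    // preserved; `!is_cname` makes CNAMEs (true) sort before non-CNAMEs.
    records.sort_by_key(|&(_, is_cname)| !is_cname);
}
```

Because the sort is stable, a response that already follows the convention passes through unchanged, while a reordered one is repaired instead of rejected.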

[...] The incident revealed an even deeper complexity: even when CNAME records appear first, their internal ordering can cause problems.

[...] For the broader DNS community, this incident serves as a reminder of the importance of specification clarity and comprehensive testing. As internet infrastructure continues to evolve, identifying and resolving these legacy ambiguities becomes increasingly important.

The incident also highlights the value of diverse DNS resolver implementations. The fact that different resolvers handle record ordering differently provided natural resilience — when one approach failed, others continued working.

The January 8, 2026 DNS incident demonstrates how seemingly minor changes to critical infrastructure can have far-reaching consequences. A memory optimization that moved CNAME records from the beginning to the end of DNS responses triggered failures across multiple platforms and caused network equipment to reboot.

At its core, this was a story about assumptions — assumptions built into 40-year-old specifications, assumptions made by implementers over decades, and assumptions about how systems would behave under different conditions. When those assumptions collided with reality, the result was a brief but significant disruption to internet connectivity.

[...] As Cloudflare's engineers learned, sometimes the order of things matters more than we realize. In the complex world of internet infrastructure, even the smallest details can have the largest consequences.


Original Submission

posted by janrinok on Friday January 23, @11:16PM   Printer-friendly

Caltech-led Team Finds New Superconducting State:

Superconductivity is a quantum physical state in which a metal is able to conduct electricity perfectly without any resistance. In its most familiar application, it enables powerful magnets in MRI machines to create the magnetic fields that allow doctors to see inside our bodies. Thus far, materials can only achieve superconductivity at extremely low temperatures, near absolute zero (a few tens of Kelvin or colder). But physicists dream of superconductive materials that might one day operate at room temperature. Such materials could open entirely new possibilities in areas such as quantum computing, the energy sector, and medical technologies.

"Understanding the mechanisms leading to the formation of superconductivity and discovering exotic new superconducting phases is not only one of the most stimulating pursuits in the fundamental study of quantum materials but is also driven by this ultimate dream of achieving room-temperature superconductivity," says Stevan Nadj-Perge, professor of applied physics and materials science at Caltech.

Now a team led by Nadj-Perge that includes Lingyuan Kong, AWS quantum postdoctoral scholar research associate, and other colleagues at Caltech has discovered a new superconducting state—a finding that provides a new piece of the puzzle behind this mysterious but powerful phenomenon.

[...] In normal metals, individual electrons collide with ions as they move across the metal's lattice structure made up of oppositely charged ions. Each collision causes electrons to lose energy, increasing electrical resistance. In superconductors, on the other hand, electrons are weakly attracted to each other and can bind, forming duos called Cooper pairs. As long as the electrons stay within a certain relatively small range of energy levels known as the energy gap, the electrons remain paired and do not lose energy through collisions. Therefore, it is within that relatively small energy gap that superconductivity occurs.

Typically, a superconductor's energy gap is the same at all locations within the material. For example, in a superconducting crystal without impurities, all pieces of the crystal would have the same energy gap.

But beginning in the 1960s, scientists began theorizing that the energy gap in some superconducting materials could modulate in space, meaning the gap could be stronger in some areas and weaker in others. Later, in the 2000s, the idea was further developed with the proposal of the pair density wave (PDW) state, in which the energy gap modulates with a long wavelength, fluctuating between larger and smaller values.

Over the past decade, this concept has garnered significant experimental interest with numerous materials, including iron-based superconductors being explored as potential hosts of a PDW state.

Now, working with extremely thin flakes of an iron-based superconductor, FeTe0.55Se0.45, Nadj-Perge and his colleagues have discovered a modulation of the superconducting gap with the smallest wavelength possible, matching the spacing of atoms in a crystal. They have named it the Cooper-pair density modulation (PDM) state.

"The observed gap modulation, reaching up to 40 percent, represents the strongest reported so far, leading to the clearest experimental evidence to date that gap modulation can exist even at the atomic scale," says Kong, lead author of the new paper.

This unexpected discovery was made possible by the first successful realization of scanning tunneling microscopy experiments of an iron-based superconductor on a specialized device for studying such thin flakes. Such experiments had been hampered for nearly two decades by the presence of severe surface contamination, but the Caltech team, working in the Kavli Nanoscience Institute (KNI), developed a new experimental approach that enabled a sufficiently clean surface for microscopic probes.

Journal Reference:
Kong, Lingyuan, Papaj, Michał, Kim, Hyunjin, et al. Cooper-pair density modulation state in an iron-based superconductor, Nature (DOI: 10.1038/s41586-025-08703-x)


Original Submission

posted by janrinok on Friday January 23, @06:32PM   Printer-friendly

Starlink in Iran

An interesting technical article about satellite communications and Iran

In Iran, not only are mobile and fixed networks jammed, but so is Starlink. We explain how this is likely achieved despite thousands of satellites.

Reliable information is challenging to come by, as practically the entire country has been offline since the evening of January 8; the content delivery network Cloudflare registers almost no more data traffic from Iran, and the internet observation group Netblocks also speaks of a complete communication blockade.

One of the few digital ways out currently leads via satellite through SpaceX's global Starlink network. Although usage is forbidden in Iran, terminals are smuggled into the country, and SpaceX tolerates their use; since January 13, it has even been free of charge. However, activists report that Starlink, too, is functioning increasingly poorly in Iran, and that users are being actively tracked. But how can a system of thousands of satellites be jammed from the ground, and how does the regime find users of the devices without access to customer data or the network?

The US organization Holistic Resilience, which helps Iranians secure their internet access, speaks of around 50,000 users in the country. In this article, we will explore how Starlink works, why it functions in Iran, and how the Iranian government is likely jamming the network. While neither the regime nor SpaceX likes to reveal their cards, hackers and journalists are not deterred by this, and the laws of physics apply to everyone.


Original Submission

posted by janrinok on Friday January 23, @01:45PM   Printer-friendly

Physics of Foam Strangely Resembles AI Training:

Foams are everywhere: soap suds, shaving cream, whipped toppings and food emulsions like mayonnaise. For decades, scientists believed that foams behave like glass, their microscopic components trapped in static, disordered configurations.

Now, engineers at the University of Pennsylvania have found that foams actually flow ceaselessly inside while holding their external shape. More strangely, from a mathematical perspective, this internal motion resembles the process of deep learning, the method typically used to train modern AI systems.

The discovery could hint that learning, in a broad mathematical sense, may be a common organizing principle across physical, biological and computational systems, and provide a conceptual foundation for future efforts to design adaptive materials. The insight could also shed new light on biological structures that continuously rearrange themselves, like the scaffolding in living cells.

In a paper in Proceedings of the National Academy of Sciences, the team describes using computer simulations to track the movement of bubbles in a wet foam. Rather than eventually staying put, the bubbles continued to meander through possible configurations. Mathematically speaking, the process mirrors how deep learning involves continually adjusting an AI system's parameters — the information that encodes what an AI "knows" — during training.

"Foams constantly reorganize themselves," says John C. Crocker, Professor in Chemical and Biomolecular Engineering (CBE) and the paper's co-senior author. "It's striking that foams and modern AI systems appear to follow the same mathematical principles. Understanding why that happens is still an open question, but it could reshape how we think about adaptive materials and even living systems."

In some ways, foams behave mechanically like solids: they more or less hold their shape and can rebound when pressed. At a microscopic level, however, foams are "two-phase" materials, made up of bubbles suspended in a liquid or solid. Because foams are relatively easy to create and observe yet exhibit complex mechanical behavior, they have long served as model systems for studying other crowded, dynamic materials, including living cells.

[...] During training, modern AI systems continually adjust their parameters — the numerical values that encode what they "know." Much like bubbles in foams were once thought to descend into metaphorical valleys, searching for the positions that require the least energy to maintain, early approaches to AI training aimed to optimize systems as tightly as possible to their training data.

Deep learning accomplishes this using optimization algorithms related to the mathematical technique "gradient descent," which involves repeatedly nudging a system in the direction that most improves its performance. If an AI's internal representation of its training data were a landscape, the optimizers guide the system downhill, step by step, toward configurations that reduce error — those that best match the examples it has seen before.
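The "downhill nudging" described above can be illustrated with a one-dimensional toy. The loss function, learning rate, and step count here are illustrative choices, not taken from the paper:

```rust
// Toy gradient descent on the loss f(x) = (x - 3)^2, whose single
// "valley" sits at x = 3. Each step moves x against the gradient,
// i.e. downhill toward lower error.
fn gradient_descent(mut x: f64, learning_rate: f64, steps: usize) -> f64 {
    for _ in 0..steps {
        let grad = 2.0 * (x - 3.0); // d/dx of (x - 3)^2
        x -= learning_rate * grad;  // step in the direction that reduces f
    }
    x
}
```

Starting from any initial guess, repeated steps converge toward the minimum at x = 3; real deep-learning optimizers apply the same idea to millions of parameters at once.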

Over time, researchers realized that forcing systems into the deepest possible valleys was counterproductive. Models that optimized too precisely became brittle, unable to generalize beyond the data they had already seen. "The key insight was realizing that you don't actually want to push the system into the deepest possible valley," says Robert Riggleman, Professor in CBE and co-senior author of the new paper. "Keeping it in flatter parts of the landscape, where lots of solutions perform similarly well, turns out to be what allows these models to generalize."

When the Penn researchers looked again at their foam data through this lens, the parallel was hard to miss. Rather than settling into "deep" positions in this metaphorical landscape, bubbles in foams also remained in motion, much like the parameters in modern AI systems, continuously reorganizing within broad, flat regions with similar characteristics. The same mathematics that explains why deep learning works turned out to describe what foams had been doing all along.

[...] "Why the mathematics of deep learning accurately characterizes foams is a fascinating question," says Crocker. "It hints that these tools may be useful far outside of their original context, opening the door to entirely new lines of inquiry."

Journal Reference: Amruthesh Thirumalaiswamy et al., Slow relaxation and landscape-driven dynamics in viscous ripening foams, PNAS (2025). https://doi.org/10.1073/pnas.2518994122 https://dx.doi.org/10.48550/arxiv.2301.13400 [arXiv]



Original Submission

posted by hubie on Friday January 23, @08:59AM   Printer-friendly
from the dystopia-is-now! dept.

https://arstechnica.com/ai/2026/01/new-ai-plugin-uses-wikipedias-ai-writing-detection-rules-to-help-it-sound-human/

On Saturday, tech entrepreneur Siqi Chen released an open source plugin for Anthropic's Claude Code AI assistant that instructs the AI model to stop writing like an AI model. Called "Humanizer," the simple prompt plugin feeds Claude a list of 24 language and formatting patterns that Wikipedia editors have listed as chatbot giveaways. Chen published the plugin on GitHub, where it has picked up over 1,600 stars as of Monday.

"It's really handy that Wikipedia went and collated a detailed list of 'signs of AI writing,'" Chen wrote on X. "So much so that you can just tell your LLM to... not do that."
[...]
Chen's tool is a "skill file" for Claude Code, Anthropic's terminal-based coding assistant: a Markdown-formatted file whose written instructions are appended to the prompt fed into the large language model (LLM) that powers the assistant. Unlike a plain system prompt, skill information is formatted in a standardized way that Claude models are fine-tuned to interpret with greater precision. (Custom skills require a paid Claude subscription with code execution turned on.)
[...]
So what does AI writing look like? The Wikipedia guide is specific with many examples, but we'll give you just one here for brevity's sake.

Some chatbots love to pump up their subjects with phrases like "marking a pivotal moment" or "stands as a testament to," according to the guide. They write like tourism brochures, calling views "breathtaking" and describing towns as "nestled within" scenic regions. They tack "-ing" phrases onto the end of sentences to sound analytical: "symbolizing the region's commitment to innovation."
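Flagging these giveaways mechanically is straightforward once they're written down as a list. Here is a hypothetical sketch of such a check; the phrase list and function name are illustrative, not the actual Wikipedia list or the Humanizer skill.

```python
# Hypothetical sketch: flag puffery phrases of the kind Wikipedia's
# guide lists as chatbot giveaways. The phrase list is illustrative.

PUFFERY_PHRASES = [
    "marking a pivotal moment",
    "stands as a testament to",
    "breathtaking",
    "nestled within",
]

def flag_puffery(text):
    """Return the giveaway phrases found in the text (case-insensitive)."""
    lowered = text.lower()
    return [phrase for phrase in PUFFERY_PHRASES if phrase in lowered]

sample = ("The institute was officially established in 1989, "
          "marking a pivotal moment in regional statistics.")
print(flag_puffery(sample))  # ['marking a pivotal moment']
```

The Humanizer skill works in the opposite direction: rather than detecting these patterns after the fact, it tells the model up front not to produce them.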

To work around those rules, the Humanizer skill tells Claude to replace inflated language with plain facts and offers this example transformation:

Before: "The Statistical Institute of Catalonia was officially established in 1989, marking a pivotal moment in the evolution of regional statistics in Spain."

After: "The Statistical Institute of Catalonia was established in 1989 to collect and publish regional statistics."
[...]
even though most AI language models tend toward certain types of language, they can also be prompted to avoid them, as with the Humanizer skill. (Although sometimes it's very difficult, as OpenAI found in its yearslong struggle against the em dash.)

Also, humans can write in chatbot-like ways. For example, this article likely contains some "AI-written traits" that trigger AI detectors even though it was written by a professional writer—especially if we use even a single em dash—because most LLMs picked up writing techniques from examples of professional writing scraped from the web.

[My initial reaction was, nice, a way to filter out the AI slop! And there's a plugin! When in reality, it's a plugin to help the Claude LLM sound less like an AI. So, the deep dark bad path, got it. Too much optimism, I guess.]


Original Submission

posted by hubie on Friday January 23, @04:12AM   Printer-friendly
from the yarrrvidia dept.

https://torrentfreak.com/nvidia-contacted-annas-archive-to-secure-access-to-millions-of-pirated-books/

'NVIDIA Contacted Anna's Archive to Secure Access to Millions of Pirated Books'

The new complaint alleges that "competitive pressures drove NVIDIA to piracy", which allegedly included collaborating with the controversial Anna's Archive library.

According to the amended complaint, a member of Nvidia's data strategy team reached out to Anna's Archive to find out what the pirate library could offer the trillion-dollar company:

"Within a week of contacting Anna's Archive, and days after being warned by Anna's Archive of the illegal nature of their collections, NVIDIA management gave 'the green light' to proceed with the piracy. Anna's Archive offered NVIDIA millions of pirated copyrighted books."

Busted? Well, at least they asked. Meta blamed it on porno. Will there be any fallout?


Original Submission

posted by hubie on Thursday January 22, @11:28PM   Printer-friendly
from the folically-challenged dept.

Scientists have discovered that human hair grows not by being pushed out of the follicle, but by being actively pulled upward by coordinated cellular movements deep within the tissue:

Scientists have discovered that human hair does not emerge because it is pushed upward from the root. Instead, it is pulled along by forces generated by a previously unseen network of moving cells. This finding overturns long held ideas in biology and may change how scientists approach hair loss and tissue regeneration.

Researchers from L'Oréal Research & Innovation and Queen Mary University of London used advanced 3D live imaging to observe individual cells inside human hair follicles that were kept alive in laboratory culture. Their study, published in Nature Communications, revealed that cells in the outer root sheath (the layer that surrounds the hair shaft) move in a downward spiral within the same region that produces the upward pulling force responsible for hair growth.

Dr Inês Sequeira, Reader in Oral and Skin Biology at Queen Mary and one of the lead authors, said, "Our results reveal a fascinating choreography inside the hair follicle. For decades, it was assumed that hair was pushed out by the dividing cells in the hair bulb. We found instead that it's actively being pulled upwards by surrounding tissue acting almost like a tiny motor."

[...] Dr Thomas Bornschlögl, the other lead author, from the same L'Oréal team, adds: "This reveals that hair growth is not driven only by cell division – instead, the outer root sheath actively pulls the hair upwards." This new view of follicle mechanics opens fresh opportunities for studying hair disorders, testing drugs, and advancing tissue engineering and regenerative medicine.

While the research was carried out on human follicles in lab culture, it offers new clues for hair science and regenerative medicine. The team believes that understanding these mechanical forces could help design treatments that target the follicle's physical as well as biochemical environment. Furthermore, the imaging technique developed will allow live testing of different drugs and treatments.

The study also highlights the growing role of biophysics in biology, showing how mechanical forces at the microscopic scale shape the organs we see every day.

Reference: "Mapping cell dynamics in human ex vivo hair follicles suggests pulling mechanism of hair growth" by Nicolas Tissot, Gaianne Genty, Roberto Santoprete, et al., 21 November 2025, Nature Communications. DOI: 10.1038/s41467-025-65143-x


Original Submission

posted by jelizondo on Thursday January 22, @06:40PM   Printer-friendly
from the got.milk dept.

Humans use tools; it's one of the things that makes us great. Some of the other smarter monkeys also use tools. Next up, we have cows. Cow tool users. Beware the bovine master race ... also lactose tolerant.

Veronika, a cow living in an idyllic mountain village in the Austrian countryside, has spent years perfecting the art of scratching herself with sticks, rakes, and deck brushes. Now that scientists have discovered her, she has the distinction of being the first cow known to use tools.

She picks up objects with her tongue, grips them tight with her mouth, and directs their ends to where she wants them most. When she's wielding a deck brush, she will use the bristled end to scratch her thick-skinned back, but switches to the smooth handle when scratching her soft, sensitive belly.

[...] The brown cow's know-how came to the attention of scientists last year after Alice Auersperg, a cognitive biologist at the University of Veterinary Medicine in Vienna, published a book on tool use in animals. Shortly after, her inbox was flooded with messages from people claiming to have seen their pets use tools. "I got all of those emails from people saying things like 'my cat is using the Amazon box as a tool. It's her new house,'" she says. Among these mundane reports was something truly new: a video of a cow picking up a rake and scratching her backside with it.

"It seemed really interesting," she recalls. "We had to take a closer look." Not long after, Auersperg and her colleague Antonio Osuna-Mascaró, a post-doctoral researcher at the same university, drove to Veronika's home.

To say Veronika was living her best life would be an understatement. Her owner, a soft-hearted baker named Witgar Wiegele, had kept Veronika and her mother as pets. She'd spent her life roaming around a picturesque pasture surrounded by forests and snow-covered mountains. Veronika, now 13 years old, has had many years to mess around with the many sticks and landscaping tools that line her enclosure.

The only downside to her idyllic lifestyle is that each summer, horse flies plague Wiegele's property. According to the researchers, the desire to shoo these flies away and scratch their bites likely drove Veronika to develop her self-scratching skills.

https://www.nationalgeographic.com/animals/article/cow-using-tools


Original Submission

posted by jelizondo on Thursday January 22, @01:46PM   Printer-friendly

Schools across the U.S. are rolling out AI-powered surveillance technology, including drones, facial recognition and even bathroom listening devices. But there's not much data to prove they keep kids safe:

Inside a white stucco building in Southern California, video cameras compare faces of passersby against a facial recognition database. Behavioral analysis AI reviews the footage for signs of violent behavior. Behind a bathroom door, a smoke detector-shaped device captures audio, listening for sounds of distress. Outside, drones stand ready to be deployed and provide intel from above, and license plate readers from $8.5 billion surveillance behemoth Flock Safety ensure the cars entering and exiting the parking lot aren't driven by criminals.

This isn't a high-security government facility. It's Beverly Hills High School.

District superintendent Alex Cherniss says the striking array of surveillance tools is a necessity, and one that ensures the safety of his students. "We are in the hub of an urban setting of Los Angeles, in one of the most recognizable cities on the planet. So we are always a target and that means our kids are a target and our staff are a target," he said. In the 2024-2025 fiscal year, the district spent $4.8 million on security, including staff. The surveillance system spots multiple threats per day, the district said.

Beverly Hills' apparatus might seem extreme, but it's not an outlier. Across the U.S., schools are rolling out similar surveillance systems they hope will keep them free of the horrific and unceasing tide of mass shootings. There have been 49 deaths from gunfire on school property this year. In 2024, there were 59, and in 2023 there were 45, per Everytown for Gun Safety. Between 2000 and 2022, 131 people were killed and 197 wounded at schools in the U.S., most of them children. Given those appalling metrics, allocating a portion of your budget to state-of-the-art AI-powered safety and surveillance tools is a relatively easy decision.

[...] Skeptics, however, said there's little proof AI technologies are going to bring those numbers down significantly, and they ruin trust with students. A 2023 American Civil Liberties Union report found that eight of the 10 largest school shootings in America since Columbine occurred on campuses with surveillance systems. Chad Marlow, a senior policy counsel at the ACLU who authored the report, said that even with the advent of AI-powered tools, there's a dearth of independent research verifying it's any better at preventing tragedies. "It's very peculiar to make the claim that this will keep your kids safe," he said.

The report also found that the surveillance fostered an atmosphere of distrust: 32% of 14 to 18-year-old students surveyed said they felt like they were always being watched. In focus groups run by the ACLU, students said they felt less comfortable alerting educators to mental health issues and physical abuse. Marlow argues that's a lousy tradeoff. "Because kids don't trust people they view as spying on them, it ruptures trust and actually makes things less safe," he said.

Originally spotted on Schneier on Security.


Original Submission

posted by jelizondo on Thursday January 22, @09:07AM   Printer-friendly

France records more deaths than births for first time since end of second world war:

A public consultation last year found the financial cost of raising children was a barrier to parenthood for 28% of French adults.

For the first time since the end of the second world war, France has recorded more deaths than births, suggesting that the country's long-held demographic advantage over other EU countries is slipping away.

Across the country in 2025, there were 651,000 deaths and 645,000 births, according to newly released figures from the national statistics institute Insee.

France had long been an exception across Europe, with birthrates that topped many of its neighbours'. In 2023 – the most recent year for which comparable data is available – the fertility rate in France of 1.65 children per woman was the second-highest in the EU, trailing only Bulgaria's 1.81.

This week's data, however, suggests that the country is not immune to the demographic crunch sweeping the continent as populations age and birthrates tumble.

On Tuesday, Insee said the fertility rate in France had dropped to 1.56 in 2025. This was the lowest rate since the end of the first world war.

It was also a 24% drop compared with the 2.01 rate registered 15 years ago, the institute's Sylvie Le Minez said. "Since 2010, births have been declining year after year in France."

A public consultation carried out by the national assembly late last year gave insight into why this may be happening [article in French]. Of the more than 30,000 respondents, 28% cited the financial costs of raising and caring for children as the principal obstacle to having them, while 18% cited worries about the future of society and 15% pointed to the difficulties in balancing the needs of a family with work and personal life.

The data suggests that France is poised to join the many other EU countries facing the prospect of a shrinking labour force as ageing populations increase the cost of pensions and elderly care.

Life expectancy in France reached record highs last year, at 85.9 years for women and 80.3 for men, while the share of people aged 65 or older climbed to 22%, hovering around the same proportion as those under the age of 20.

"This is not a first for European countries," said Le Minez, highlighting that 20 of the EU's 27 countries had registered more deaths than births in 2024. "But this time, this is also the case for France."

Even so, France's population grew slightly last year to 69.1 million, due to net migration which was estimated to be about 176,000. As anti-immigration sentiment, led by France's National Rally, steadily makes inroads in the country, projections have suggested that the rise of the far right could speed up population decline.

Without immigration, France's population could drop to as low as 59 million by 2100, according to recent forecasts by Eurostat, the EU's official statistics agency.


Original Submission

posted by jelizondo on Thursday January 22, @04:16AM   Printer-friendly
from the it's-all-Greek-to-me dept.

A growing number of college professors are sounding the alarm over a quiet but accelerating crisis on American campuses, as Gen Z arrives at college unable to read:

According to a report by Fortune, professors across the country say students are struggling to process written sentences, complete assigned reading, or engage meaningfully with texts that were once foundational to higher education.

The problem is not confined to remedial courses or underperforming schools.

Faculty say it is widespread, structural, and getting worse.

[...] Timothy O'Malley of the University of Notre Dame said students often have no idea how to approach traditional reading assignments and instead turn to artificial intelligence tools for summaries.

"Today, if you assign that amount of reading, they often don't know what to do," O'Malley told Fortune.

[...] Professors say it is the predictable outcome of a K–12 system that no longer ensures basic competence.

Standards were lowered, accountability eroded, and reading increasingly treated as optional.

The result is a generation arriving at adulthood unprepared for rigorous work, real expectations, and the responsibilities that come with them, and universities now face the consequences.

Has AI become the modern equivalent of CliffsNotes?


Original Submission