https://phys.org/news/2025-12-scientists-outline-atomic-scale-polaritons.html
Controlling light at dimensions thousands of times smaller than the thickness of a human hair is one of the pillars of modern nanotechnology.
An international team led by the Quantum Nano-Optics Group of the University of Oviedo and the Nanomaterials and Nanotechnology Research Center (CINN/Principality of Asturias-CSIC) has published a review article in Nature Nanotechnology detailing how to manipulate fundamental optical phenomena when light couples to matter in atomically thin materials.
The study focuses on polaritons, hybrid quasiparticles that emerge when light and matter interact intensely. In low-symmetry layered crystals known as van der Waals materials, light ceases to propagate in a conventional way and instead travels along specific directions, a characteristic that gives rise to phenomena that challenge conventional optics.
Among the findings reviewed are behaviors such as negative refraction, in which light bends in the opposite direction to the usual one when crossing a boundary between materials, and canalized propagation, which makes it possible to guide energy without it dispersing.
"These properties offer unprecedented control over light–matter interaction in regions of the spectrum ranging from the visible to the terahertz," the team describes in the article.
This research is part of the TWISTOPTICS project, led by University of Oviedo professor Pablo Alonso González. This project is dedicated to the study of how twisting or stacking nanometric layers—a technique reminiscent of atomic-scale "Lego" pieces—makes it possible to design physical properties à la carte.
The publication is the result of an international collaboration in which—alongside the University of Oviedo—leading centers such as the Beijing Institute of Technology (BIT), the Donostia International Physics Center (DIPC), and the Max Planck Institute have participated.
The theoretical and experimental framework presented in this work lays the foundations for future practical implementations in various technological sectors, including integrated optical circuits, high-sensitivity biosensors, thermal management, and super-resolution imaging.
More information: Yixi Zhou et al, Fundamental optical phenomena of strongly anisotropic polaritons at the nanoscale, Nature Nanotechnology (2025). DOI: 10.1038/s41565-025-02039-3
One small step for chips, one giant leap for a lack of impurities:
A team from Cardiff, Wales, is experimenting with the feasibility of building semiconductors in space, and its most recent success is another step forward towards its goal. According to the BBC, Space Forge's microwave-sized furnace has been switched on in space and has reached 1,000°C (1,832°F) — one of the most important parts of the manufacturing process that the company needs to validate in space.
"This is so important because it's one of the core ingredients that we need for our in-space manufacturing process," Payload Operations Lead Veronica Vera told the BBC. "So being able to demonstrate this is amazing." Semiconductor manufacturing is a costly and labor-intensive endeavor on Earth, and while putting it in orbit might seem far more complicated, making chips in space offers some theoretical advantages. For example, microgravity conditions would help the atoms in semiconductors line up perfectly, while the lack of an atmosphere would also reduce the chance of contaminants affecting the wafer.
These two things would help reduce imperfections in the final wafer output, resulting in a much more efficient fab. "The work that we're doing now is allowing us to create semiconductors up to 4,000 times purer in space than we can currently make here today," Space Forge CEO Josh Western told the publication. "This sort of semiconductor would go on to be in the 5G tower in which you get your mobile phone signal, it's going to be in the car charger you plug an EV into, it's going to be in the latest planes."
Space Forge launched its first satellite in June 2025, hitching a ride on the SpaceX Transporter-14 rideshare mission. However, it still took the company several months before it finally succeeded in turning on its furnace, showing how complicated this project can get. Nevertheless, this advancement is quite promising, with Space Forge planning to build a bigger space factory with the capacity to output 10,000 chips. Aside from that, it also needs to work on a way to bring the finished products back to the surface. Other companies are also experimenting with orbital fabs, with U.S. startup Besxar planning to send "Fabships" into space on Falcon 9 booster rockets.
Putting semiconductor manufacturing in space could help reduce the massive amounts of power and water that these processes require from our resources while also outputting more wafers with fewer impurities. However, we also have to consider the huge environmental impact of launching multiple rockets per day just to deliver the raw materials and pick up the finished products from orbit.
Consumes 1/3 the power of optical, and costs 1/3 as much:
Scale-up connectivity is crucial for the performance of rack-scale AI systems, but achieving high bandwidth and low latency for such interconnections using copper wires is becoming increasingly complicated with each generation. Optical interconnects are a possibility for scale-up connectivity, but they may be overkill, so start-ups Point2 and AttoTude propose radio-based interconnections operating at millimeter-wave and terahertz frequencies over waveguides that connect to systems using standard pluggable connectors, reports IEEE Spectrum.
Point2's implementation uses what it calls an 'active radio cable' built from eight 'e-Tube' waveguides. Each waveguide carries data using two frequencies — 90 GHz and 225 GHz — and plug-in modules at both ends convert digital signals directly into modulated millimeter-wave radio and back again. A full cable delivers 1.6 Tb/s, occupies 8.1 mm, or about half the volume of a comparable active copper cable, and can reach up to seven meters, more than enough for scale-up connectivity. Point2 says the design consumes roughly one-third the power of optical links, costs about one-third as much, and adds as little as one-thousandth the latency.
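For a rough sense of scale (a back-of-the-envelope split, not figures from Point2), dividing the stated cable throughput evenly across the eight waveguides and the two carriers per waveguide gives the per-lane rates below; the even split is an assumption made only for illustration.

    # Back-of-the-envelope split of the stated 1.6 Tb/s cable throughput.
    # Assumption (not from Point2): bandwidth is divided evenly across the
    # eight waveguides and the two carrier frequencies per waveguide.
    CABLE_TBPS = 1.6          # total cable throughput, Tb/s (stated)
    WAVEGUIDES = 8            # 'e-Tube' waveguides per cable (stated)
    CARRIERS_PER_GUIDE = 2    # 90 GHz and 225 GHz carriers (stated)

    per_waveguide_gbps = CABLE_TBPS * 1000 / WAVEGUIDES
    per_carrier_gbps = per_waveguide_gbps / CARRIERS_PER_GUIDE

    print(f"Per waveguide: {per_waveguide_gbps:.0f} Gb/s")   # 200 Gb/s
    print(f"Per carrier:   {per_carrier_gbps:.0f} Gb/s")     # 100 Gb/s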
A notable aspect of Point2's approach is the relative maturity of its technology. The radio transceivers can be fabricated at standard semiconductor production facilities using well-known fabrication processes — the company has already demonstrated this approach using a 28nm chip with the Korea Advanced Institute of Science and Technology (KAIST). Also, its partners Molex and Foxconn Interconnect Technology have shown that the specialized cables can be produced on existing lines without major retooling.
AttoTude is pursuing a similar concept, but at even higher frequencies. Its system combines a digital interface, a terahertz signal generator, and a mixer that encodes data onto carriers between 300 and 3,000 GHz, then feeds the signal into a narrow dielectric waveguide. Early versions used hollow copper tubes, while later generations rely on fibers measuring approximately 200 micrometers across, with losses as low as 0.3 dB per meter (considerably lower than copper). The company has demonstrated 224 Gb/s transmission over four meters at 970 GHz and projects viable reaches of around 20 meters.
Both companies use waveguides instead of cables because copper fails at millimeter-wave and terahertz frequencies. Copper cables can carry signals at very high data rates, but only by becoming thicker, shorter, and more power-hungry, and their losses and jitter rise so quickly that the link budget collapses, ruling cables out for such applications. Waveguides, meanwhile, are not an exotic choice; they are among the few viable options for interconnects with terabit-per-second-class bandwidth.
A proof-of-concept is now available on the internet:
MongoBleed, a high-severity vulnerability affecting multiple versions of MongoDB, can now be easily exploited, as a proof-of-concept (PoC) is publicly available on the web.
Earlier this week, security researcher Joe Desimone published code that exploits a "read of uninitialized heap memory" vulnerability tracked as CVE-2025-14847. This vulnerability, rated 8.7/10 (high), stems from "mismatched length fields in Zlib compressed protocol headers".
By sending a poisoned message that claims a larger decompressed size, an attacker can cause the server to allocate an oversized memory buffer and leak in-memory data containing sensitive information, such as credentials, cloud keys, session tokens, API keys, configurations, and other data.
What's more, attackers exploiting MongoBleed do not need valid credentials to pull off the attack.
In its writeup, BleepingComputer confirms that roughly 87,000 potentially vulnerable instances are exposed on the public internet, according to data from Censys. The majority are located in the United States (20,000), followed by China (17,000) and Germany (around 8,000).
Here is a list of all the vulnerable versions:
- MongoDB 8.2.0 through 8.2.3
- MongoDB 8.0.0 through 8.0.16
- MongoDB 7.0.0 through 7.0.26
- MongoDB 6.0.0 through 6.0.26
- MongoDB 5.0.0 through 5.0.31
- MongoDB 4.4.0 through 4.4.29
- All MongoDB Server v4.2 versions
- All MongoDB Server v4.0 versions
- All MongoDB Server v3.6 versions
If you are running any of the above, make sure to patch up; a fix for self-hosted instances has been available since December 19. Users running MongoDB Atlas don't need to do anything, since their instances were automatically patched.
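For quick triage, here is a minimal sketch (not an official checker) that asks a self-hosted server for its version via pymongo and compares it against the ranges listed above; the connection URI is a placeholder and the ranges are simply transcribed from this list.

    # Minimal sketch: flag a MongoDB server whose reported version falls in
    # one of the affected ranges listed above. Requires pymongo.
    from pymongo import MongoClient

    # (major, minor, lowest affected patch, highest affected patch)
    AFFECTED_RANGES = [
        (8, 2, 0, 3), (8, 0, 0, 16), (7, 0, 0, 26),
        (6, 0, 0, 26), (5, 0, 0, 31), (4, 4, 0, 29),
    ]

    def is_affected(version: str) -> bool:
        major, minor, patch = (int(x) for x in version.split(".")[:3])
        if (major, minor) in {(4, 2), (4, 0), (3, 6)}:   # all versions of these lines
            return True
        return any(major == ma and minor == mi and lo <= patch <= hi
                   for ma, mi, lo, hi in AFFECTED_RANGES)

    # Placeholder URI -- point this at your own self-hosted instance.
    client = MongoClient("mongodb://localhost:27017", serverSelectionTimeoutMS=3000)
    version = client.server_info()["version"]
    print(version, "- affected, patch now!" if is_affected(version) else "- not in the listed ranges")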
So far, there are no confirmed reports of in-the-wild abuse, although some researchers are linking MongoBleed to the recent Ubisoft Rainbow Six Siege breach.
Christmas is already behind us, but since this is an announcement from 11 December – that I missed – I'm calling this a very interesting and surprising Christmas present.
The team and I are beyond excited to share what we've been cooking up over the last little while: a full desktop environment running on QNX 8.0, with support for self-hosted compilation! This environment not only makes it easier for newly-minted QNX developers to get started with building for QNX, but also vastly simplifies the process of porting Linux applications and libraries to QNX 8.0.
↫ John Hanam at the QNX Developer Blog

What we have here is QNX 8.0 running the Xfce desktop environment on Wayland, a whole slew of build and development tools (clang, gcc, git, etc.), a ton of popular code editors and IDEs, a web browser (looks like GNOME Web?), access to all the ports on the QNX Open-Source Dashboard, and more. For now, it's only available as a Qemu image to run on top of Ubuntu, but the plan is to also release an x86 image in the coming months so you can run this directly on real hardware.
This isn't quite the same as the QNX of old with its unique Photon microGUI, but it's been known for a while now that Photon hasn't been actively developed in a long time and is basically abandoned. Running Xfce on Wayland is obviously a much more sensible solution, and one that's quite future-proof, too. As a certified QNX desktop enthusiast of yore, I can't wait for the x86 image to arrive so I can try this out properly.
There are downsides. This image, too, is encumbered by annoying non-commercial license requirements and sign-ups, and this wouldn't be the first time QNX has started an enthusiast effort only to abandon it shortly after. Buyer beware, then, but I'm cautiously optimistic.
= Related: QNX at Wikipedia
Every task we perform on our computer — whether number crunching, watching a video, or typing out an article — requires different components of the machine to interact with one another. "Communication is massively crucial for any computation," says former SFI Graduate Fellow Abhishek Yadav, a Ph.D. scholar at the University of New Mexico. But scientists don't fully grasp how much energy computational devices spend on communication.
Over the last decade, SFI Professor David Wolpert has spearheaded research to unravel the principles underlying the thermodynamic costs of computation. Wolpert notes that determining the "thermodynamic bounds on the cost of communication" is an overlooked but critical issue in the field, as it applies not only to computers but also to communication systems across the board. "They are everything that holds up modern society," he says.
Now, a new study in Physical Review Research, co-authored by Yadav and Wolpert, sheds light on the unavoidable heat dissipation that occurs when information is transmitted across a system, challenging an earlier view that, in principle, communication incurs no energetic cost. For the study, the researchers drew on and combined principles from computer science, communication theory, and stochastic thermodynamics, a branch of statistical physics that deals with real-world out-of-equilibrium systems such as smartphones and laptops.
Using a logical abstraction of generic communication channels, the researchers determined the minimum amount of heat a system must dissipate to transmit one unit of information. This abstraction could apply to any communication channel — artificial (e.g., optical cable) or biological (e.g., a neuron firing a signal in the brain). Real-world communication channels always have some noise that can interfere with the information transmission, and the framework developed by Yadav and Wolpert shows that the minimum heat dissipation is at least equal to the amount of useful information — technically called mutual information — that filters through the channel's noise.
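Schematically, and only as an illustration in Landauer-style units (the normalization below is an assumption, not the paper's exact statement), such a bound ties the heat Q dissipated per channel use to the mutual information I(X;Y) between the channel input X and output Y:

    I(X;Y) = \sum_{x,y} p(x,y) \log_2 \frac{p(x,y)}{p(x)\,p(y)}    [bits per channel use]

    Q_{dissipated} \geq k_B T \ln 2 \cdot I(X;Y)

Here k_B is Boltzmann's constant and T the temperature; the point is simply that the more useful information makes it through the channel's noise, the more heat the channel must shed.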
Then, they used another broadly applicable abstraction of how modern-day computers perform computations to derive the minimum thermodynamic costs associated with encoding and decoding. Encoding and decoding steps ensure reliable transmission of messages by mitigating channel noise. Here, the researchers gained a significant insight: improving the accuracy of data transmission through better encoding and decoding algorithms comes at the cost of increased heat dissipation within the system.
Uncovering the unavoidable energy costs of sending information through communication channels could help build energy-efficient systems. Yadav reckons that the von Neumann architecture used in current computers presents significant energetic costs associated with communication between the CPU and memory. "The principles that we are outlining can be used to draw inspiration for future computer architecture," he says.
As these energy costs apply to all communication channels, the work presents a potential avenue for researchers to deepen the understanding of various energy-hungry complex systems where communication is crucial, from biological neurons to artificial logical circuits. Despite burning 20% of the body's calorie budget, the brain uses energy far more efficiently than artificial computers do, says Yadav. "So it would be interesting to see how natural computational systems like the brain are coping with the cost associated with communication."
Journal Reference: Abhishek Yadav and David Wolpert, Minimal thermodynamic cost of communication, Phys. Rev. Research 7, 043324 – Published 22 December, 2025 DOI: https://doi.org/10.1103/qvc2-32xr
https://finance.yahoo.com/news/fda-officially-confirms-kava-food-140000605.html
U.S. Food and Drug Administration (FDA), After Reviewing Historical Use and Modern Safety Evidence, Officially Confirms Kava is a Food Under Federal Law
The United States Food and Drug Administration (FDA) has officially confirmed that kava is a conventional food under federal law. This acknowledgment marks a pivotal moment in the national understanding of kava, providing long-needed clarity across federal and state systems and affirming that, when prepared and enjoyed as a beverage (i.e. kava tea), kava holds a legitimate and established place within the nation's food landscape.
This federal confirmation, issued through multiple FDA case responses, has already guided the State of Hawaii and the State of Michigan (with additional states now reviewing the same evidence) in determining that the kava beverage qualifies as Generally Recognized As Safe (GRAS) based on its extensive history of safe, cultural use. For Pacific Island communities, including Native Hawaiians whose cultural practices, ceremonies, and community life have been intertwined with kava for generations, the people of American Samoa, and the many Fijian, Tongan, and other Pacific Islander families throughout the United States, this acknowledgment carries profound significance. It affirms the deep cultural legacy of kava, strengthens recognition of Pacific Islander heritage in the United States, and honors a cultural food that is now finding an increasingly meaningful place in modern American life.
FDA Issues Written Statements Affirming Kava Tea as a Conventional Food
Kava's longstanding cultural use as a beverage informs how federal law evaluates traditional foods, and this history shaped the FDA's recent clarification. When asked to confirm how kava should be treated under federal food law, the agency provided some of its clearest language to date. In responding, the FDA affirmed the classification of the kava beverage and stated:
"You are correct that kava mixed with water as a single ingredient conventional food would generally not be regulated as a food additive if the tea is consumed as food."
In another communication, the agency reinforced this, explaining that "Kava tea can be considered as a food, provided that the tea and labeling are compliant with FDA's food safety and food labeling regulations".
The left-wing Irish government has vowed to push for the European Union to prohibit the use of anonymous social media accounts in what may set the ground for another battle over free speech with the Trump administration in the United States.
Ireland will take over the rotating Presidency of the Council of the European Union for a six-month term starting in July and looks set to push for more restrictions on the internet, namely the imposition of ID verification for social media accounts. The move would effectively end anonymity on social media, which critics have warned will hinder dissidents from speaking out against power structures.
Speaking to the Extra news outlet, Deputy Prime Minister Simon Harris said that anonymous accounts and so-called disinformation are "an issue in relation to our democracy. And I don't just mean ours. I mean democracy in the world."
"This isn't just Ireland's view. If you look at the comments of Emmanuel Macron... of Keir Starmer... recently, in terms of being open to considering what Australia have done, if you look at the actions of Australia, you know this is a global conversation Ireland will and should be a part of," he said.
Harris also said that Dublin will consider following Australia's lead in banning children under the age of 16 from accessing social media.
"We've age requirements in our country for so many things. You can't buy a pint before a certain age. You can't drive a car before a certain age. You can't place a bet before a certain age," the Deputy PM said.
"We have a digital age of consent in Ireland, which is 16, but it's simply not being enforced. And I think that's a really important move. And then I think there's the broader issue, which will require work that's not just at an Irish level, around the anonymous bots."
It comes in the wake of the U.S. State Department announcing sanctions against five British and European figures for their roles in silencing Americans and American companies.
Among those to face a visa ban sanction was former European Commissioner for Internal Market Thierry Breton, who served as the EU's censorship czar until last year and who spearheaded the bloc's Digital Services Act.
This draconian set of restrictions demands that large social media companies scrub their platforms of so-called "hate speech" and "disinformation" or face the prospect of Brussels imposing a fine of up to six per cent of their global revenue. Earlier this month, the Digital Services Act was used to fine Elon Musk's X €120 million ($140 million).
Breton had previously threatened to use the DSA, which allows for the bloc to ban social media firms from operating on the continent, against Musk for conducting a live interview on X with then-Presidential candidate Donald Trump in the lead up to last year's elections. The Frenchman warned that the interview could result in the "amplification of harmful content" that may "generate detrimental effects on civic discourse and public security".
Announcing the sanctions against Breton and others, Secretary of State Marco Rubio said last week: "For far too long, ideologues in Europe have led organized efforts to coerce American platforms to punish American viewpoints they oppose. The Trump Administration will no longer tolerate these egregious acts of extraterritorial censorship."
New Study Reveals How the Brain Measures Distance:
Whether you are heading to bed or seeking a midnight snack, you don't need to turn on the lights to know where you are as you walk through your house at night. This hidden skill comes from a remarkable ability called path integration: your brain constantly tallies your steps and turns, allowing you to mentally track your position like a personal GPS. You're building a map by tracking movement, not sight.
Scientists at the Max Planck Florida Institute for Neuroscience (MPFI) think that understanding how the brain performs path integration could be a critical step toward understanding how our brain turns momentary experiences into memories of events that unfold over time. Publishing their findings this week in Nature Communications, they have made big strides toward this goal. Their insights may also provide information about what may be happening to patients in the early stages of Alzheimer's disease, whose first symptoms are often related to difficulty tracking distance or time.
In their study, the team trained mice to run a specific distance in a gray virtual reality environment without visual landmarks, in exchange for a reward. The animals could only judge how far they had traveled by monitoring their own movement, not by relying on environmental cues. As mice performed this task, the scientists recorded tiny electrical pulses that neurons use to communicate, allowing them to observe the activity of thousands of neurons. They focused on the activity of neurons in the hippocampus, a region essential for both navigation and memory. Using computer modeling, they then analyzed these signals to reveal the computational rules the brain uses for path integration.
"The hippocampus is known to help animals find their way through the environment. In this brain region, some neurons become active at specific places. However, in environments full of sights, sounds, and smells, it is difficult to tell whether these neurons are responding to those sensory cues or to the animal's position itself," explains senior author and MPFI group leader Yingxue Wang. "In this study, we removed as many sensory cues as possible to mimic situations such as moving in the dark. In these simplified conditions, we found that only a small number of hippocampal cells signaled a specific place or a specific time. This observation made us wonder what the rest of the neurons were doing, and whether they were helping the animal keep track of where it is by integrating how far and how long it had been moving, a process called path integration."
The scientists discovered that during navigation without landmarks, most hippocampal neurons followed one of two opposite patterns of activity. These patterns were crucial for helping the animals keep track of how far they had traveled.
In one group of neurons, activity sharply increased when the animal started moving, as if marking the start of the distance-counting process. The activity of these neurons then gradually ramped down at different rates as the animal moved further, until reaching the set distance for a reward. A second group of neurons showed the opposite pattern. Their activity dropped when the animal started moving, but gradually ramped up as the animal traveled farther.
The team discovered that these activity patterns act as a neural code for distance, with two distinct phases. The first phase (the rapid change in neural activity) marks the start of movement and the beginning of distance counting. The second phase (the gradual ramping changes in neural activity) counts the distance traveled. Both short and long distances could be tracked in the brain by using neurons with different ramping speeds.
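As a toy illustration of such a two-phase ramping code (a sketch for intuition only, not the authors' model or analysis), the snippet below simulates a ramp-down and a ramp-up population with heterogeneous ramping rates and reads distance back out with a simple linear decoder.

    import numpy as np

    rng = np.random.default_rng(0)
    n_down, n_up, n_steps = 20, 20, 200       # neurons per group, distance bins
    distance = np.linspace(0, 1, n_steps)     # normalized distance since movement onset

    # Heterogeneous ramping rates (assumed for illustration).
    rates_down = rng.uniform(0.5, 3.0, n_down)
    rates_up = rng.uniform(0.5, 3.0, n_up)

    # Group 1: activity jumps at movement onset, then ramps down with distance.
    ramp_down = np.exp(-np.outer(distance, rates_down))
    # Group 2: activity drops at onset, then ramps up with distance.
    ramp_up = 1.0 - np.exp(-np.outer(distance, rates_up))

    # Population activity with a little noise added.
    activity = np.hstack([ramp_down, ramp_up])
    activity += 0.05 * rng.normal(size=activity.shape)

    # Simple linear readout: recover distance from the ramping population.
    weights, *_ = np.linalg.lstsq(activity, distance, rcond=None)
    decoded = activity @ weights
    rms = np.sqrt(np.mean((decoded - distance) ** 2))
    print(f"decoding error (RMS, normalized distance): {rms:.3f}")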
"We have discovered that the brain encodes the elapsed distance or time needed to solve this task using neurons that show ramping activity patterns," said lead scientist Raphael Heldman. "This is the first time distance has been shown to be encoded in a way that differs from the well-known place-based coding in the hippocampus. These findings expand our understanding that the hippocampus is using multiple strategies – ramping patterns in addition to the place-based coding – to encode elapsed time and distance."
When the researchers disrupted these patterns by manipulating the circuits that produce them, the animals had difficulty performing the task accurately and often searched for the reward in the wrong location.
Dr. Wang notes that "understanding how time and distance are encoded in the brain during path integration is especially important because this ability is one of the earliest to degrade in Alzheimer's disease. Patients report early symptoms of getting spatially disoriented in familiar surroundings or not knowing how they got to a particular place."
The research team is now turning its efforts to understanding how these patterns are generated in the brain, which may help reveal how our moment-to-moment experiences are encoded into memories.
Journal Reference: Heldman, R., Pang, D., Zhao, X. et al. Time or distance encoding by hippocampal neurons via heterogeneous ramping rates. Nat Commun 16, 11083 (2025). https://doi.org/10.1038/s41467-025-67038-3
Security researchers have found various security-relevant errors in GnuPG and similar programs. Many of the vulnerabilities are (still) not fixed.
At the 39th Chaos Communication Congress, security researchers Lexi Groves, aka 49016, and Liam Wachter demonstrated a whole series of vulnerabilities in various tools for encrypting and signing data. In total, the researchers found 14 vulnerabilities in four different programs. All discovered problems are implementation errors, meaning they do not affect the fundamental security of the methods used, but rather their concrete – and indeed flawed – implementation in the respective tool.
The focus of the presentation was the popular PGP implementation GnuPG, whose code is generally considered to be well-established. Nevertheless, the security researchers found numerous vulnerabilities, including typical errors when processing C strings via injected null bytes. Among other things, this allowed signatures to be falsely displayed as valid, and made it possible to prepend text to signed data without the addition being covered or flagged by the signature.
The issues found in GnuPG cover a broad spectrum of causes: attackers could exploit clearly erroneous code or provoke misleading output that tempts users into fatal actions. Furthermore, they could inject ANSI sequences that, while correctly processed by GnuPG, produce virtually arbitrary output in the victim's terminal. The latter can be exploited to give users malicious instructions that only appear to come from GnuPG, or to overwrite legitimate security queries from GnuPG with harmless-looking follow-up questions, causing users to unintentionally approve dangerous actions.
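To make the ANSI-sequence problem concrete, here is a generic, self-contained illustration (not the researchers' exploit, and not specific to GnuPG) of how untrusted text printed verbatim to a terminal can repaint what the user was actually told, plus a sketch of the obvious mitigation:

    # Generic demo of terminal escape-sequence injection: printing
    # attacker-controlled text verbatim lets it rewrite earlier output.
    # Run in a real terminal emulator to see the effect.
    import re

    legit_warning = "WARNING: signature could NOT be verified"

    # Attacker-controlled string embedding ANSI control sequences:
    #   \x1b[1A  move the cursor up one line
    #   \x1b[2K  erase that entire line
    malicious_comment = ("\x1b[1A\x1b[2K\x1b[32m"
                         "Good signature from 'Alice <alice@example.org>'\x1b[0m")

    print(legit_warning)
    print(malicious_comment)   # visually replaces the warning with a fake "Good signature" line

    # Mitigation sketch: strip or escape control characters before printing untrusted data.
    sanitized = re.sub(r"[\x00-\x1f\x7f]", "?", malicious_comment)
    print(sanitized)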
The security researchers also found some of the same problem types in other tools, such as the newer PGP implementation Sequoia-PGP and the signature tool Minisign. In the encryption tool age, they discovered a way to execute arbitrary programs present on the victim's computer via the plug-in system. The researchers provide a comprehensive overview of all the issues they found on the website gpg.fail.
Many vulnerabilities still open

Some of the vulnerabilities found have been fixed in the current versions of the affected programs, but many have not: partly because patches have been merged but no new release containing them has shipped yet, and partly because the program authors do not consider the reported behavior in their tool a problem that needs correcting.
The researchers particularly praised the reaction to the vulnerability in age: Not only was the error fixed in the various age implementations, but the specification was also updated to prevent the problem. Directly at the hacker congress, age developer Filippo Valsorda even went a step further: He was in the audience of the presentation and used the mandatory Q&A session at the end to thank the researchers for their work. He also presented them with an improvised bug bounty in the form of stickers and pins.
The researchers also provide advice on their website on how to avoid the found errors – from both developer and user perspectives. In general, users should also perceive seemingly harmless error messages as serious warnings and avoid cleartext signatures – as recommended by the GnuPG man page. The researchers also suggest rethinking the use of cryptography tools on the command line in general: due to the mentioned ANSI sequences, users can be misled, even if all tools work without errors.
= Related video and/or audio:
- To sign or not to sign: Practical vulnerabilities in GPG & friends
Funding agencies can end profit-first science publishing:
Funding organisations can fix the science publishing system – which currently puts profit first and science second – according to new research.
The new paper says the current relationship between researchers, funders and commercial publishers has created a "drain" – depriving the research system of money, time, trust and control.
The research team used public revenue and income statements to assess the money being spent on publishing articles with the biggest commercial publishers, and placed this in the broader historical context, including recent trends.
Published on arXiv, the paper examines the scale of publisher profits – with the four leading publishers (Elsevier, Springer Nature, Wiley and Taylor & Francis) generating over $7.1 billion in revenue in 2024 alone, with profit margins exceeding 30%.
Much of this money comes from public funds intended for research – and the new paper says bold action by funders is now essential.
"The real solution is not for scientists to band together. We've tried that for 30 years and it hasn't worked – publisher profit margins have remained steady despite every attempted reimagining of science publishing," said Dr Mark Hanson, from the Centre for Ecology and Conservation at the University of Exeter.
"The funding agencies hold all the cards. They're the ones paying authors to do research, and journals to publish that research. They can mandate change.
"Some already are. For example, the US National Institutes of Health (NIH) has proposed limits on how much it will reimburse researchers for payments to publishers to make their articles open access (free to read).
"We researchers can support the battle, but we cannot lead the charge."
Research funding often includes money to pay journal fees to make articles open access. As these fees rise, increasing amounts of research funding – which often comes from taxpayers – become publisher profits.
[...] Professor Dan Brockington, from the Institute of Environmental Science and Technology at the Universitat Autònoma de Barcelona, said: "When facing large and powerful organisations, you need allies that are equally large and powerful. We have them: funders, government agencies, foundations and universities, which together could decide where funds for publishing go and what incentives drive researchers.
"The current system harms science: it fuels a proliferation of papers focused on prestige, which strains the publication machinery.
"It also discourages slow, careful interdisciplinary thinking, which is key to achieving higher-quality science. Ultimately, it contributes to a weakening of quality and, consequently, to an erosion of public trust."
Last year, researchers including Dr Hanson and Professor Brockington wrote a landmark paper highlighting the "strain" on scientific publishing caused by the rapidly rising number of papers being published. A 2023 study described an "oligopoly" in which the big five academic publishers profit from article processing charges. These studies paved the way for the new paper, entitled: "The Drain of Scientific Publishing."
Journal Reference: arXiv:2511.04820 [cs.DL] https://doi.org/10.48550/arXiv.2511.04820
https://scitechdaily.com/aging-immune-cells-may-rewrite-their-own-dna-to-stay-inflammatory/
Scientists have identified a pathway that keeps aging immune cells stuck in an inflammatory mode, intensifying the body's response to severe infection and pointing to new therapeutic possibilities.
As the body grows older, the immune system can lose its ability to function properly, increasing the risk of severe illnesses such as sepsis. A new study from researchers at the University of Minnesota examines how aging affects specific immune cells called macrophages, which remain stuck in a highly inflammatory state in preclinical models. The results were published in Nature Aging.
The researchers discovered that these macrophages produce a protein known as GDF3 that acts back on the same cells, strengthening and sustaining their inflammatory activity. This heightened inflammatory state makes it harder for the body to cope with sepsis. The study, led by biochemistry graduate student In Hwa Jang, found that GDF3 sends signals through SMAD2/3, causing lasting changes in the genome. As a result, macrophages release higher levels of inflammatory cytokines.
"Macrophages are critical to the development of inflammation; in our study, we identified a pathway which is used to maintain their inflammatory status," said Christina Camell, PhD, an associate professor with the University of Minnesota Medical School and College of Biological Sciences. "Our findings suggest that this pathway could be blocked to prevent the amplified inflammation that can be damaging to organ function and may be a promising target for future treatments that reduce harmful inflammation."
The researchers showed that removing the GDF3 gene reduced harmful inflammatory responses to bacterial toxins. They also demonstrated that drugs blocking the GDF3–SMAD2/3 signaling pathway can alter how inflammatory, fat-tissue macrophages behave and improve survival in older preclinical models exposed to severe infection.
Finally, in collaboration with Pamela Lutsey (School of Public Health) and using data from the Atherosclerosis Risk in Communities Study (ARIC), the investigators revealed that GDF3 protein correlates with inflammatory signaling in older humans.
Additional research is needed to pinpoint the molecular factors involved in this pathway and understand how it regulates specific inflammatory signals. Dr. Camell was recently awarded a 2025 AFAR Discovery Award based on this research, which will further investigate the consequences of these inflammatory macrophages on multiple metabolic organs and metabolic healthspan.
Reference: “GDF3 promotes adipose tissue macrophage-mediated inflammation via altered chromatin accessibility during aging” by In Hwa Jang, Anna Carey, Victor Kruglov, et al., 15 December 2025, Nature Aging.
DOI: 10.1038/s43587-025-01034-6
Scientists have been trying to develop safe and sustainable materials that can replace traditional plastics, which are non-sustainable and harm the environment. While some recyclable and biodegradable plastics exist, one big problem remains. Current biodegradable plastics like PLA often find their way into the ocean where they cannot be degraded because they are water insoluble. As a result, microplastics—plastic bits smaller than 5 mm—are harming aquatic life and finding their way into the food chain, including our own bodies.
In their new study, Takuzo Aida and his team at the RIKEN Center for Emergent Matter Science focused on solving this problem with supramolecular plastics—polymers with structures held together by reversible interactions. The new plastics were made by combining two ionic monomers that form cross-linked salt bridges, which provide strength and flexibility. In the initial tests, one of the monomers was a common food additive called sodium hexametaphosphate and the other was any of several guanidinium ion-based monomers. Both monomers can be metabolized by bacteria, ensuring biodegradability once the plastic is dissolved into its components.
"While the reversable nature of the bonds in supramolecular plastics have been thought to make them weak and unstable," says Aida, "our new materials are just the opposite." In the new material, the salt bridges structure is irreversible unless exposed to electrolytes like those found in seawater. The key discovery was how to create these selectively irreversible cross links.
As with oil and water, after mixing the two monomers together in water, the researchers observed two separated liquids. One was thick and viscous and contained the important structural cross-linked salt bridges, while the other was watery and contained salt ions. For example, when sodium hexametaphosphate and alkyl diguanidinium sulfate were used, sodium sulfate was expelled into the watery layer. The final plastic, alkyl SP2, was made by drying what remained in the thick, viscous layer.
The "desalting" turned out to be the critical step; without it, the resulting dried material was a brittle crystal, unfit for use. Resalting the plastic by placing it in salt water caused the interactions to reverse and the plastic's structure destabilized in a matter of hours. Thus, having created a strong and durable plastic that can still be dissolved under certain conditions, the researchers next tested the plastic's quality.
The new plastics are non-toxic and non-flammable—meaning no CO2 emissions—and can be reshaped at temperatures above 120°C like other thermoplastics. By testing different types of guanidinium sulfates, the team was able to generate plastics that had varying hardnesses and tensile strengths, all comparable or better than conventional plastics. This means that the new type of plastic can be customized for need; hard scratch resistant plastics, rubber silicone-like plastics, strong weight-bearing plastics, or low tensile flexible plastics are all possible. The researchers also created ocean-degradable plastics using polysaccharides that form cross-linked salt bridges with guanidinium monomers. Plastics like these can be used in 3D printing as well as medical or health-related applications.
Lastly, the researchers investigated the new plastic's recyclability and biodegradability. After dissolving the initial new plastic in salt water, they were able to recover 91% of the hexametaphosphate and 82% of the guanidinium as powders, indicating that recycling is easy and efficient. In soil, sheets of the new plastic degraded completely over the course of 10 days, supplying the soil with phosphorous and nitrogen similar to a fertilizer.
"With this new material, we have created a new family of plastics that are strong, stable, recyclable, can serve multiple functions, and importantly, do not generate microplastics," says Aida.
Journal Reference: Cheng et al. (2024) Mechanically strong yet metabolizable plastics that multivalently form a cross-linked network structure by desalting upon phase separation. Science. doi: 10.1126/science.ado1782
Players of Sacred and Gothic games can rejoice once again:
Vintage game emulation just got another slight boost, thanks to the release of D7VK version 1.1. This Direct3D-to-Vulkan translation layer makes it possible to run old Direct3D 7 games on contemporary hardware, and it got some meaty improvements, including a new front-end, and experimental support for Direct3D 6.
In case you're a little confused, D7VK is a translation layer that turns Direct3D 7 calls into Direct3D 9 calls handled by Proton's DXVK layer, thereby taking advantage of DXVK's tried-and-true infrastructure and software ecosystem. Being a mere translation layer, it carries only a minor performance penalty and can run several times faster than a full reimplementation like WineD3D.
Along with the new front-end, the 1.1 update adds Direct3D 6 support as an experimental option. The author mentions that, judging by its documentation, adding this API shouldn't be a lot of work. That's in sharp contrast to the lawless lands of Direct3D 5 and under. Even as it stands, in the author's own words, "D3D7 is a land of highly cursed interoperability", with many games mixing Direct3D calls with older Windows APIs like DirectDraw and even GDI for 2D graphics.
In turn, this means that game support is hit-or-miss, depending on how "hacky" the game's original code is. For example, this latest version adds a workaround specific to Sacrifice, which uses a wholly unsupported depth buffer format. Likewise, support for strided primitive rendering makes Sacred playable, and fixes to mipmap swapping let gamers once again enjoy Gothic, Gothic 2, and Star Trek DS9: The Fallen as if they were just released.
Many popular Direct3D 6 titles have seen re-releases using modern APIs, including Final Fantasy VIII, Resident Evil 2, and Grand Theft Auto 2.
Additional fixes for games include workarounds for Conquest: Frontier Wars, Tomb Raider Chronicles, Drakan: Order of the Flame, Earth 2150, Tachyon: The Fringe, and Arabian Nights. If you have a particular game that doesn't run well, visit the issues section in the D7VK GitHub to lend your feedback. In the meantime, if your game doesn't run or is too old to use even Direct3D 7, you can use Wine's WineD3D instead.
WineD3D, ironically, also works in Windows itself, making older games easy to run on contemporary versions of the OS. If your vintage title used old Glide or OpenGL instead, the author recommends nGlide.
https://mashable.com/article/study-ai-slop-youtube
If it feels like there's a lot of AI slop on YouTube, that's because there's a lot of AI slop on YouTube.
New research from video-editing company Kapwing, reported by the Guardian, found that more than one in every five videos that the YouTube Shorts algorithm shows new users is low-quality, AI-generated content.
One of the most interesting parts of the Kapwing study is that, of the first 500 videos served up by a brand-new, untouched YouTube Shorts algorithm, 104 were AI-generated and 165 were brainrot — a whopping 21 percent and 33 percent, respectively.
Of course, the love of AI slop differs depending on the country. Kapwing found that AI slop channels in Spain have a combined 20.22 million subscribers, more than any other country, though Spain has fewer AI slop channels among its top 100 channels than other countries do. The U.S. has nine slop channels among its top 100, and the third-most slop subscribers at 14.47 million.
YouTube isn't the only social media beast whose content is falling to the depths of AI slop despair, but the Kapwing study makes it clear that AI slop isn't going anywhere. As Mashable's Tim Marcin reported earlier this month, AI slop is taking over our feeds, from fake animals on surveillance tapes to heavy machinery cleaning barnacles off whales.