Join our Folding@Home team:
Main F@H site
Our team page
Support us: Subscribe Here
and buy SoylentNews Swag
We always have a place for talented people, visit the Get Involved section on the wiki to see how you can make SoylentNews better.
https://phys.org/news/2025-09-insect-cultivated-proteins-healthier-greener.html
Reducing industrial animal use can help to shrink our carbon footprint and boost health—but doing so means we need nutritious meat alternatives that are also tasty and affordable.
The researchers say that by using combinations of different proteins from plants, fungi, insects, microbial fermentation, and cultivated meat, we could create tasty, nutritious, and sustainable alternatives to animal products.
As well as tackling environmental concerns, hybrids could also help to address the health and ethical impact of livestock farming such as animal welfare, zoonotic disease, and antimicrobial resistance.
"Hybrid foods could give us a delicious taste and texture without breaking the bank or the planet," said first author Prof David L. Kaplan from Tufts University in the US. "Using protein alternatives needn't necessarily come with financial, taste, or nutritional costs."
For example, by drawing on the fibrous texture of mycelium, the sensory and nutritional qualities of cultivated meat, the nutrition and sustainability of insects, the proteins, pigments, enzymes, and flavors from microbial fermentation, and the abundance and low cost of plants, hybrids could combine the best of each protein source, say the authors.
But to make this happen, the researchers call for regulatory review and academic and industry cooperation to overcome hurdles and find the best possible protein combinations for our health, sensory, environmental, and cost needs.
"To succeed, we need research and cooperation across science, industry, and regulators to improve quality, scale production, and earn consumer trust," added Prof Kaplan.
The researchers investigated different protein sources: plants (for example, soy products like tofu), insects (processed into flours and blended into foods), mycelium-based products (such as vegan commercial meat analogs), cultivated meat grown in bioreactors, and microbial fermentation products (such as proteins, pigments, enzymes, and flavors).
They assessed the strengths and weaknesses of each protein source and considered how to harness the best qualities of each—both with and without animal meat. For example, while plant proteins are cheap and scalable, they often lack the flavor and texture of meat. Meanwhile, cultivated meat more closely mimics animal meat but is expensive and hard to scale. Mycelium can add natural texture, while insects offer high nutrition with a low environmental footprint.
The researchers reviewed various combinations to compare their sensory and nutritional profiles, consumer acceptance, affordability, and scalability.
They found that while every protein source has drawbacks, combining them can overcome many of these limitations. In the short term, plant–mycelium hybrids appear most economically viable because they are scalable, nutritious, and already used in commercial products.
In the longer term, plant–cultivated meat hybrids may become more desirable, as even small amounts of cultivated cells can improve taste, texture, and nutrition once production costs fall and capacity expands.
They also point to early studies which found that substantial fractions of meat in burgers or sausages can be replaced with plant proteins without reducing consumer acceptance, and even small additions of cultivated meat or mycelium can improve the taste, texture, and nutrition of plant-based products.
"No single alternative protein source is perfect, but hybrid products give us the opportunity to overcome those hurdles, creating products that are more than the sum of their parts," said senior author Prof David Julian McClements from the University of Massachusetts Amherst, US.
As well as benefits, each protein source presents its own limitations which must be addressed before their resulting hybrids can become mainstream meat alternatives, according to the researchers.
The processing necessary for cultivating meat or combining proteins brings high costs and difficulties with scaling up production. Some protein sources need more consistent, less fragmented regulation, and others, like insect protein, face high consumer skepticism.
Many edible insects are highly nutritious and environmentally friendlier to raise than animals, and over two billion people worldwide already regularly eat insects—but consumers in developed countries are often less willing to do so.
Another concern is that many current plant-based meat alternatives require numerous ingredients and extensive processing, and are therefore classified as ultra-processed foods (UPFs), which consumers may view as unhealthy.
Observational studies show correlations between high UPF consumption and adverse health outcomes, though causation has not been established. However, the authors note that hybrids—by drawing on the natural benefits of each source—could help reduce our reliance on additives and heavy processing.
The researchers are therefore working to ensure these products are healthy as well as acceptable to consumers. Future research, they say, should focus on optimizing protein sources, developing scalable production methods, conducting environmental and economic analyses, and using AI to identify new hybrid combinations and processing methods.
More information: Hybrid alternative protein-based foods: designing a healthier and more sustainable food supply, Frontiers in Science (2025). DOI: 10.3389/fsci.2025.1599300
Complex knots can actually be easier to untie than simple ones:
Why is untangling two small knots more difficult than unravelling one big one? Surprisingly, mathematicians have found that larger and seemingly more complex knots created by joining two simpler ones together can sometimes be easier to undo, invalidating a conjecture posed almost 90 years ago.
"We were looking for a counterexample without really having an expectation of finding one, because this conjecture had been around so long," says Mark Brittenham at the University of Nebraska at Lincoln. "In the back of our heads, we were thinking that the conjecture was likely to be true. It was very unexpected and very surprising. "
Mathematicians like Brittenham study knots by treating them as tangled loops with joined ends. One of the most important concepts in knot theory is that each knot has an unknotting number, which is the number of times you would have to sever the string, move another piece of the loop through the gap and then re-join the ends before you reached a circle with no crossings at all – known as the "unknot".
Calculating unknotting numbers can be a very computationally intensive task, and there are still knots with as few as 10 crossings that have no solution. Because of this, it can be helpful to break knots down into two or more simpler knots to analyse them, with those that can't be split any further known as prime knots, analogous to prime numbers.
But a long-standing mystery is whether the unknotting numbers of the two knots added together would give you the unknotting number of the larger knot. Intuitively, it might make sense that a combined knot would be at least as hard to undo as the sum of its constituent parts, and in 1937, it was conjectured that undoing the combined knot could never be easier.
Now, Brittenham and Susan Hermiller, also at the University of Nebraska at Lincoln, have shown that there are cases when this isn't true. "The conjecture's been around for 88 years and as people continue not to find anything wrong with it, people get more hopeful that it's true," says Hermiller. "First, we found one, and then quickly we found infinitely many pairs of knots for whom the connected sum had unknotting numbers that were strictly less than the sum of the unknotting numbers of the two pieces."
"We've shown that we don't understand unknotting numbers nearly as well as we thought we did," says Brittenham. "There could be – even for knots that aren't connected sums – more efficient ways than we ever imagined for unknotting them. Our hope is that this has really opened up a new door for researchers to start exploring."
While finding and checking the counterexamples involved a combination of existing knowledge, intuition and computing power, the final stage of checking the proof was done in a decidedly more simple and practical manner: tying the knot with a piece of rope and physically untangling it to show that the researchers' predicted unknotting number was correct.
Andras Juhasz at the University of Oxford, who previously worked with AI company DeepMind to prove a different conjecture in knot theory, says that he and the company had tried unsuccessfully to crack this latest problem about additive sets in the same way, but with no luck.
"We spent at least a year or two trying to find a counterexample and without success, so we gave up," says Juhasz. "It is possible that for finding counterexamples that are like a needle in a haystack, AI is maybe not the best tool. This was a hard-to-find counterexample, I believe, because we searched pretty hard."
Despite there being many practical applications for knot theory, from cryptography to molecular biology, Nicholas Jackson at the University of Warwick, UK, is hesitant to suggest that this new result can be put to good use. "I guess we now understand a little bit more about how circles work in three dimensions than we did before," he says. "A thing that we didn't understand quite so well a couple of months ago is now understood slightly better."
Reference:
Journal Reference:
Brittenham, Mark, Hermiller, Susan. Unknotting number is not additive under connected sum, (DOI: 10.48550/arXiv.2506.24088)
See also:
Huawei's Ternary Logic Breakthrough: A Game-Changer or Just Hype?:
[Editor's Comment: The source reads as though it could have been created by AI, nevertheless it is an interesting topic and worth a discussion.--JR]
Huawei's recent patent for 'ternary logic' represents a potential breakthrough in chip technology by utilizing a three-state logic system consisting of -1, 0, and 1, rather than the traditional binary system of 0 and 1. This innovative approach could substantially reduce the number of transistors required on a chip, leading to lower energy consumption, particularly in power-hungry AI applications. The ternary logic patent, filed in September 2023 and recently disclosed, may offer significant advantages in processing efficiency and hardware design, addressing some of the physical limits faced by current chip technologies.
Ternary computing is not a novel concept; the first ternary computer was developed in 1958 at Moscow State University, indicating the feasibility of utilizing more than two states in computational logic (source). However, binary logic became the industry standard due to its simplicity and the development of compatible technologies. Huawei's pursuit of ternary logic comes amidst US sanctions, which have pressured the company to explore alternative technological paths. By reducing reliance on traditional chip designs, Huawei aims to innovate in a constrained environment and potentially gain a competitive edge in the AI and semiconductor sectors.
The commercial viability of Huawei's ternary logic chip presents an intriguing yet complex scenario for the tech industry. Ternary logic, which utilizes three states instead of the usual binary system's two, promises significant advancements in chip technology. By potentially reducing the number of transistors required, it could lead to decreases in both manufacturing costs and energy consumption, particularly in power-hungry AI applications. However, the road to commercial viability is laden with challenges. The tech industry must grapple with the transition from a binary-dominated ecosystem to one that could incorporate ternary systems. This includes revamping software and programming approaches to leverage the benefits of the new computing structure. Furthermore, the ability to mass-produce these chips economically and reliably remains unproven, leaving questions about whether the technology can achieve a cost-effective scale .
If successful, Huawei's ternary logic patent could disrupt current computing paradigms and lead to reduced energy consumption in AI technology, aligning with broader trends towards sustainability. The potential of such technology to alter chip design and improve AI efficiency could have far-reaching implications not only for Huawei but for the tech industry at large. Moreover, by possibly circumventing some effects of international sanctions, Huawei's efforts symbolize a form of technological resilience and ingenuity amid geopolitical challenges
Ternary logic represents a novel approach in computing, differentiating itself from the conventional binary logic by utilizing three distinct values: -1, 0, and 1. This trinary system aims to encode information more efficiently, offering potential reductions in the complexity and power usage of AI chips. Unlike the binary system that utilizes two states (0 and 1) to process tasks, ternary logic could lead to less energy-intensive data processing, ultimately decreasing the overall power consumption of AI-driven technologies. This innovation heralds a shift from traditional methodologies, potentially streamlining both hardware requirements and computational resource demands.
The implementation of ternary logic in modern computing could signify a breakthrough in addressing the issues associated with current chip designs. With chip manufacturing approaching its physical limits, the trinary system provides an alternative pathway to enhance processing capabilities without exponentially increasing transistor counts. Huawei's recent patent reflects this innovative direction, aiming to solve power consumption dilemmas while navigating international sanctions and fostering technological advancements in AI. Embedded in this development is Huawei's strategic response to geopolitical challenges, exemplified by their proactive patent applications that emphasize reducing dependence on traditional binary constraints.
Experts Alarmed That AI Is Now Producing Functional Viruses:
In real world experiments, a team of Stanford researchers demonstrated that a virus with AI-written DNA could target and kill specific bacteria, they announced in a study last week. It opened up a world of possibilities where artificial viruses could be used to cure diseases and fight infections.
But experts say it also opened a Pandora's box. Bad actors could just as easily use AI to crank out novel bioweapons, keeping doctors and governments on the backfoot with the outrageous pace at which these viruses can be designed, warn Tal Feldman, a Yale Law School student who formerly built AI models for the federal government, and Jonathan Feldman, a computer science and biology researcher at Georgia Tech (no word on whether the two are related).
"There is no sugarcoating the risks," the pair warned in a piecefor the Washington Post. "We're nowhere near ready for a world in which artificial intelligence can create a working virus, but we need to be — because that's the world we're now living in."
In the study, the Stanford researchers used an AI model called Evo to invent DNA for a bacteriophage, a virus that infects bacteria. Unlike a general purpose large language model like ChatGPT, which is trained on written language, Evo was exclusively trained on millions of bacteriophage genomes.
They focused on an extensively studied phage called phiX174, which is known to infect strains of the bacteria E. coli. Using the EVO AI model, the team came up with 302 candidate genomes based on phiX174 and put them to the test by using the designs to chemically assemble new viruses.
Sixteen of them worked, infecting and killing the E. coli strains. Some of them were even deadlier than the natural form of the virus.
But "while the Stanford team played it safe, what's to stop others from using open data on human pathogens to build their own models?" the two Feldmans warned. "If AI collapses the timeline for designing biological weapons, the United States will have to reduce the timeline for responding to them. We can't stop novel AI-generated threats. The real challenge is to outpace them."
That means using the same AI tech to design antibodies, antivirals, and vaccines. This work is already being done to some extent, but the vast amounts of data needed to accelerate such pioneering research "is siloed in private labs, locked up in proprietary datasets or missing entirely."
"The federal government should make building these high-quality datasets a priority," the duo opined.
From there, the federal government would need to build the necessary infrastructure to manufacture these AI-designed medicines, since the "private sector cannot justify the expense of building that capacity for emergencies that may never arrive," they argue.
Finally, the Food and Drug Administration's sluggish and creaking regulatory framework would need an overhaul. (Perhaps in a monkey's paw of such an overhaul, the FDA said it's using AI to speed-run the approval of medications.)
"Needed are new fast-tracking authorities that allow provisional deployment of AI-generated countermeasures and clinical trials, coupled with rigorous monitoring and safety measures," they said.
The serious risks posed by AI virus generation shouldn't be taken lightly. Yet, it's worth noting that the study in question hasn't made it out of peer review yet and we still don't have a full picture of how readily someone could replicate the work the scientists did.
But with agencies like the Centers for Disease Control and Prevention being gutted, and vaccines and other medical interventions being attacked by a health-crank riddled administration, there's no denying that the country's medical policy and infrastructure is in a bad place. That said, when you consider that the administration is finding any excuse to rapidly deploy AI in every corner of the government, it's worth treading lightly when we ask for more.
More on synthetic biology:Scientists Debate Whether to Halt Type of Research That Could Destroy All Life on Earth
AI Creates Bacteria-Killing Viruses: 'Extreme Caution' Warns Genome Pioneer:
A California outfit has used artificial intelligence to design viral genomes before they were then built and tested in a laboratory. Following this, bacteria was then successfully infected with a number of these AI-created viruses, proving that generative models can create functional genetics.
"The first generative design of complete genomes."
That's what researchers at Stanford University and the Arc Institute in Palo Alto called the results of these experiments. A biologist at NYU Langone Health, Jef Boeke, celebrated the experiment as a substantial step towards AI-designed lifeforms, according to MIT Technology Review.
"They saw viruses with new genes, with truncated genes, and even different gene orders and arrangements," Boeke said.
They team created 302 full genomes, outlined by their AI, Evo - a LLM similar to that of ChatGPT - and introduced them to E. coli test systems. 16 of these designs created successful bacteriophages which were able to replicate and kill the bacteria.
Brian Hie, who leads the Arc Institute lab, reflected on the moment the plates revealed clearings where bacteria had died. "That was pretty striking, just actually seeing, like, this AI-generated sphere," said Hie.
The team targeted bacteriophage phiX174, a minimal DNA phage with approximately 5,000 bases across 11 genes. Around 2 million bacteriophage were used to train the AI model, allowing it to understand the patterns in their makeup and gene order. It then proposed new, complete genomes.
J. Craig Venter helped create the cells with these synthetic genomes. He saw the approach as being "just a faster version of trial-and-error experiments."
"We did the manual AI version - combing through the literature, taking what was known," he explained.
Speed is the appeal here. The prediction from the AI on the protein structure could certainly speed up the processes within drug and biotechnical development. The results could then be used to fight bacterial infections in, for example, farming or even gene therapy.
Samuel King, a student who led the project, said: "There is definitely a lot of potential for this technology."
The team excluded human-infecting viruses from the AI's training, but testing in this area could still be dangerous, warns Venter.
"One area where I urge extreme caution is any viral enhancement research,, especially when it's random so you don't know what you are getting.
"If someone did this with smallpox or anthrax, I would have grave concerns."
There are other issues with this idea. Moving to a 'simple' phage to something more complex such as bacteria - something that AI simply won't be able to do at this point.
"The complexity would rocket from staggering to ... way way more than the number of subatomic particles in the universe," Boeke said.
Despite the challenges surrounding this test, it is an extremely impressive result - and something that could influence the future of genetic engineering.
For many beer lovers, a nice thick head of foam is one of life's pure pleasures, and the longer that foam lasts, the better the beer-drinking experience. A team of Swiss researchers spent seven years studying why some beer foams last longer than others and found that the degree of fermentation—i.e., whether a given beer has been singly, doubly, or triply fermented—is crucial, according to a new paper published in the journal Physics of Fluids.
[...] Individual bubbles typically form a sphere because that's the shape with the minimum surface area for any volume and hence is the most energy-efficient. One reason for the minimizing principle when it comes to a bubble's shape is that many bubbles can then tightly pack together to form a foam. But bubbles "coarsen" over time, the result of gravity pulling down on the liquid and thinning out the walls. Eventually, they start to look more like soccer balls (polyhedrons). In a coarsening foam, smaller bubbles are gradually absorbed by larger ones. There is less and less liquid to separate the individual bubbles, so they press together to fill the space.
This "jamming" is why foams are typically far more rigid than their gas (95 percent) and liquid (5 percent) components. The more tightly the bubbles jam together, the less they can move around and the greater the pressure inside them becomes, giving them properties of a solid.
Various factors can affect foam stability. For instance, in 2019, Japanese researchers investigated a phenomenon known as "collective bubble collapse," or CBC, in which breaking one bubble at the edge of a foam results in a cascading effect as the breakage spreads to other bubbles in the foam. They identified two distinct mechanisms for the resulting CBCs: a so-called "propagating mode," in which a broken bubble is absorbed into the liquid film, and a "penetrating mode," in which the breakage of a bubble causes droplets to shoot off and hit other bubbles, causing them to break in turn.
Higher levels of liquid in the foam slowed the spread of the collapse, and changing the viscosity of the fluid had no significant impact on how many bubbles broke in the CBC. Many industrial strategies for stabilizing foams rely on altering the viscosity; this shows those methods are ineffective. The researchers suggest focusing instead on using several different surfactants in the mixture. This would strengthen the resulting film to make it more resistant to breakage when hit by flying droplets.
However, as the authors of this latest paper note, "Most beers are not detergent solutions (though arguably some may taste that way)." They were inspired by a Belgian brewer's answer when they asked how he controlled fermentation: "By watching the foam." A stable foam is considered to be a sign of successful fermentation. The authors decided to investigate precisely how the various factors governing foam stability might be influenced by the fermentation process.
[...] Single-fermented lager beers had the least stable foam, with triple-fermented beers boasting the most stable foam; the foam stability of double-fermented beers fell in the middle of the range. The team also discovered that the most important factor for foam stability isn't fixed but largely depends on the type of beer. It all comes down to surface viscosity for single-fermented lagers.
But surface viscosity is not a major factor for stable foams in double- or triple-fermented beers. Instead, stability arises from differences in surface tension, i.e., Marangoni stresses—the same phenomenon behind so-called "wine tears" and the "coffee ring effect." Similarly, when a drop of watercolor paint dries, the pigment particles of color break outward toward the rim of the drop. In the case of beer foam, the persistent currents that form as a result of those differences in surface tension lend stability to the foam.
The researchers also analyzed the protein content of the beers and found that one in particular—lipid transfer protein 1 (LPT1)—was a significant factor in stabilizing beer foams, and their form depended on the degree of fermentation. In single-fermented beers, for example, the proteins are small, round particles on the surface of the bubbles. The more proteins there are, the more the foam will be stable because those proteins form a more viscous film around the bubbles.
Those LPT1 proteins become slightly denatured during a second fermentation, forming more of a net-like structure that improves foam stability. That denaturation continues during a third fermentation, when the proteins break down into fragments with hydrophobic and hydrophilic ends, reducing surface tensions. They essentially become surfactants to make the bubbles in the foam much more stable.
That said, the team was a bit surprised to find that increasing the viscosity with additional surfactants can actually make the foam more unstable because it slows down the Marangoni effects too strongly. "The stability of the foam does not depend on individual factors linearly," said co-author Jan Vermant, also of ETH Zurich. "You can't just change 'something' and get it 'right.' The key is to work on one mechanism at a time–and not on several at once. Beer obviously does this well by nature. We now know the mechanism exactly and are able to help [breweries] improve the foam of their beers."
The findings likely have broader applications as well. "This is an inspiration for other types of materials design, where we can start thinking about the most material-efficient ways [of creating stable foams]," said Vermant. "If we can't use classical surfactants, can we mimic the 2D networks that double-fermented beers have?" The group is now investigating how to prevent lubricants used in electric vehicles from foaming; developing sustainable surfactants that don't contain fluoride or silicon; and finding ways to use proteins to stabilize milk foam, among other projects.
Journal Reference: DOI: Physics of Fluids, 2025. 10.1063/5.0274943
How the von Neumann bottleneck is impeding AI computing:
Most computers are based on the von Neumann architecture, which separates compute and memory. This arrangement has been perfect for conventional computing, but it creates a data traffic jam in AI computing.
AI computing has a reputation for consuming epic quantities of energy. This is partly because of the sheer volume of data being handled. Training often requires billions or trillions of pieces of information to create a model with billions of parameters. But that's not the whole reason — it also comes down to how most computer chips are built.
Modern computer processors are quite efficient at performing the discrete computations they're usually tasked with. Though their efficiency nosedives when they must wait for data to move back and forth between memory and compute, they're designed to quickly switch over to work on some unrelated task. But for AI computing, almost all the tasks are interrelated, so there often isn't much other work that can be done when the processor gets stuck waiting, said IBM Research scientist Geoffrey Burr.
In that scenario, processors hit what is called the von Neumann bottleneck, the lag that happens when data moves slower than computation. It's the result of von Neumann architecture, found in almost every processor over the last six decades, wherein a processor's memory and computing units are separate, connected by a bus. This setup has advantages, including flexibility, adaptability to varying workloads, and the ability to easily scale systems and upgrade components. That makes this architecture great for conventional computing, and it won't be going away any time soon.
But for AI computing, whose operations are simple, numerous, and highly predictable, a conventional processor ends up working below its full capacity while it waits for model weights to be shuttled back and forth from memory. Scientists and engineers at IBM Research are working on new processors, like the AIU family, which use various strategies to break down the von Neumann bottleneck and supercharge AI computing.
The von Neumann bottleneck is named for mathematician and physicist John von Neumann, who first circulated a draft of his idea for a stored-program computer in 1945. In that paper, he described a computer with a processing unit, a control unit, memory that stored data and instructions, external storage, and input/output mechanisms. His description didn't name any specific hardware — likely to avoid security clearance issues with the US Army, for whom he was consulting. Almost no scientific discovery is made by one individual, though, and von Neumann architecture is no exception. Von Neumann's work was based on the work of J. Presper Eckert and John Mauchly, who invented the Electronic Numerical Integrator and Computer (ENIAC), the world's first digital computer. In the time since that paper was written, von Neumann architecture has become the norm.
"The von Neumann architecture is quite flexible, that's the main benefit," said IBM Research scientist Manuel Le Gallo-Bourdeau. "That's why it was first adopted, and that's why it's still the prominent architecture today."
[...] For AI computing, the von Neumann bottleneck creates a twofold efficiency problem: the number of model parameters (or weights) to move, and how far they need to move. More model weights mean larger storage, which usually means more distant storage, said IBM Research scientist Hsinyu (Sidney) Tsai. "Because the quantity of model weights is very large, you can't afford to hold them for very long, so you need to keep discarding and reloading," she said.
The main energy expenditure during AI runtime is spent on data transfers — bringing model weights back and forth from memory to compute. By comparison, the energy spent doing computations is low. In deep learning models, for example, the operations are almost all relatively simple matrix vector multiplication problems. Compute energy is still around 10% of modern AI workloads, so it isn't negligible, said Tsai. "It is just found to be no longer dominating energy consumption and latency, unlike in conventional workloads," she added.
About a decade ago, the von Neumann bottleneck wasn't a significant issue because processors and memory weren't so efficient, at least compared to the energy that was spent to transfer data, said Le Gallo-Bourdeau. But data transfer efficiency hasn't improved as much as processing and memory have over the years, so now processors can complete their computations much more quickly, leaving them sitting idle while data moves across the von Neumann bottleneck.
[...] Aside from eliminating the von Neumann bottleneck, one solution includes closing that distance. "The entire industry is working to try to improve data localization," Tsai said. IBM Research scientists recently announced such an approach: a polymer optical waveguide for co-packaged optics. This module brings the speed and bandwidth density of fiber optics to the edge of chips, supercharging their connectivity and hugely reducing model training time and energy costs.
With currently available hardware, though, the result of all these data transfers is that training an LLM can easily take months, consuming more energy than a typical US home does in that time. And AI doesn't stop needing energy after model training. Inferencing has similar computational requirements, meaning that the von Neumann bottleneck slows it down in a similar fashion.
[...] While von Neumann architecture creates a bottleneck for AI computing, for other applications, it's perfectly suited. Sure, it causes issues in model training and inference, but von Neumann architecture is perfect for processing computer graphics or other compute-heavy processes. And when 32- or 64-bit floating point precision is called for, the low precision of in-memory computing isn't up to the task.
"For general purpose computing, there's really nothing more powerful than the von Neumann architecture," said Burr. Under these circumstances, bytes are either operations or operands that are moving on a bus from a memory to a processor. "Just like an all-purpose deli where somebody might order some salami or pepperoni or this or that, but you're able to switch between them because you have the right ingredients on hand, and you can easily make six sandwiches in a row." Special-purpose computing, on the other hand, may involve 5,000 tuna sandwiches for one order — like AI computing as it shuttles static model weights.
This black hole flipped its magnetic field:
The magnetic field swirling around an enormous black hole, located about 55 million light-years from Earth, has unexpectedly switched directions. This dramatic reversal challenges theories of black hole physics and provides scientists with new clues about the dynamic nature of these shadowy giants.
The supermassive black hole, nestled in the heart of the M87 galaxy, was first imaged in 2017. Those images revealed, for the first time, a glowing ring of plasma — an accretion disk — encircling the black hole, dubbed M87*. At the time, the disk's properties, including those of the magnetic field embedded in the plasma, matched theoretical predictions.
But observations of the accretion disk in the years that followed show that its magnetic field is not as stable as it first seemed, researchers report in a paper to appear in Astronomy & Astrophysics. In 2018, the magnetic field shifted and nearly disappeared. By 2021, the field had completely flipped direction.
"No theoretical models we have today can explain this switch," says study coauthor Chi-kwan Chan, an astronomer at Steward Observatory in Tucson. The magnetic field configuration, he says, was expected to be stable due to the black hole's large mass — roughly 6 billion times as massive as the sun, making it over a thousand times as hefty as the supermassive black hole at the center of the Milky Way.
In the new study, astronomers analyzed images of the accretion disk around M87* compiled by the Event Horizon Telescope, a global network of radio telescopes. The scientists focused on a specific component that's sensitive to magnetic field orientation called polarized light, which consists of light waves all oscillating in a particular direction.
By comparing the polarization patterns over the years, the astronomers saw that the magnetic field reversed direction. Magnetic fields around black holes are thought to funnel in material from their surrounding disks. With the new findings, astronomers will have to rethink their understanding of this process.
While researchers don't yet know what caused the flip in this disk's magnetic field, they think it could have been a combination of dynamics within the black hole and external influences.
"I was very surprised to see evidence for such a significant change in M87's magnetic field over a few years," says astrophysicist Jess McIver of the University of British Columbia in Vancouver, who was not involved with the research. "This changes my thinking about the stability of supermassive black holes and their environments."
Expert calls security advice "unfairly outsourcing the problem to Anthropic's users"
On Tuesday [September 9, 2025], Anthropic launched a new file creation feature for its Claude AI assistant that enables users to generate Excel spreadsheets, PowerPoint presentations, and other documents directly within conversations on the web interface and in the Claude desktop app. While the feature may be handy for Claude users, the company's support documentation also warns that it "may put your data at risk" and details how the AI assistant can be manipulated to transmit user data to external servers.
The feature, awkwardly named "Upgraded file creation and analysis," is basically Anthropic's version of ChatGPT's Code Interpreter and an upgraded version of Anthropic's "analysis" tool. It's currently available as a preview for Max, Team, and Enterprise plan users, with Pro users scheduled to receive access "in the coming weeks," according to the announcement.
The security issue comes from the fact that the new feature gives Claude access to a sandbox computing environment, which enables it to download packages and run code to create files. "This feature gives Claude Internet access to create and analyze files, which may put your data at risk," Anthropic writes in its blog announcement. "Monitor chats closely when using this feature."
According to Anthropic's documentation, "a bad actor" manipulating this feature could potentially "inconspicuously add instructions via external files or websites" that manipulate Claude into "reading sensitive data from a claude.ai connected knowledge source" and "using the sandbox environment to make an external network request to leak the data."
This describes a prompt injection attack, where hidden instructions embedded in seemingly innocent content can manipulate the AI model's behavior—a vulnerability that security researchers first documented in 2022. These attacks represent a pernicious, unsolved security flaw of AI language models, since both data and instructions in how to process it are fed through as part of the "context window" to the model in the same format, making it difficult for the AI to distinguish between legitimate instructions and malicious commands hidden in user-provided content.
[...] Anthropic is not completely ignoring the problem, however. The company has implemented several security measures for the file creation feature. For Pro and Max users, Anthropic disabled public sharing of conversations that use the file creation feature. For Enterprise users, the company implemented sandbox isolation so that environments are never shared between users. The company also limited task duration and container runtime "to avoid loops of malicious activity."
[...] Anthropic's documentation states the company has "a continuous process for ongoing security testing and red-teaming of this feature." The company encourages organizations to "evaluate these protections against their specific security requirements when deciding whether to enable this feature."
[...] That kind of "ship first, secure it later" philosophy has caused frustrations among some AI experts like Willison, who has extensively documented prompt injection vulnerabilities (and coined the term). He recently described the current state of AI security as "horrifying" on his blog, noting that these prompt injection vulnerabilities remain widespread "almost three years after we first started talking about them."
In a prescient warning from September 2022, Willison wrote that "there may be systems that should not be built at all until we have a robust solution." His recent assessment in the present? "It looks like we built them anyway!"
https://joel.drapper.me/p/rubygems-takeover/
Ruby Central recently took over a collection of open source projects from their maintainers without their consent. News of the takeover was first broken by Ellen on 19 September.
I have spoken to about a dozen people directly involved in the events, and seen a recording of a key meeting between Ruby Gems maintainers and Ruby Central, to uncover what went on.
https://narrativ.es/@janl/115258495596221725
Okay so this was a hostile takeover. The Ruby community needs to get their house in order.
And one more note, I assume it is implied in the write up, but you might now know: DHH is on the board of directors of Shopify. He exerts tremendous organisational and financial power.
It's hilarious he's threatened by three devs with a hobby project and is willing to burn his community's reputation over it.
They finally came up with a word for it. "Workslop". To much AI usage among (co-)workers is leading to "workslop". Where there is to much AI production that doesn't turn out to be very valuable or productive. It looks fine at a first glance but has produced nothing of value or solved any problems. It just looks fine. All shiny surface, but nothing actual.
workslop is "AI generated work content that masquerades as good work, but lacks the substance to meaningfully advance a given task."
AI promised to revolutionize productivity. Instead, 'workslop' is a giant time suck and the scourge of the 21st century office, Stanford warns
A benefits manager said of one AI-sourced document a colleague sent her, "It was annoying and frustrating to waste time trying to sort out something that should have been very straightforward."
So while companies may be spending hundreds of millions on AI software to create efficiencies and boost productivity, and encouraging employees to use it liberally, they may also be injecting friction into their operations.
The researchers say that "lazy" AI-generated work is not only slowing people down, it's also leading to employees losing respect for each other. After receiving workslop, staffers said they saw the peers behind it as less creative and less trustworthy.
"The insidious effect of workslop is that it shifts the burden of the work downstream, requiring the receiver to interpret, correct, or redo the work," they write.
So shit literally flows downwards then?
https://hbr.org/2025/09/ai-generated-workslop-is-destroying-productivity
https://techcrunch.com/2025/09/27/beware-coworkers-who-produce-ai-generated-workslop/
https://fortune.com/2025/09/23/ai-workslop-workshop-workplace-communication/
https://edition.cnn.com/2025/09/26/business/ai-workslop-nightcap
https://phys.org/news/2025-09-inequality-agri-food-chains-global.html
In the global agri-food system, most agricultural goods are produced in the Global South but value is captured by countries of the Global North through growth and control of the post-farmgate sectors. This is shown by a study from the Institute of Environmental Science and Technology at the Universitat Autònoma de Barcelona (ICTA-UAB), which reveals that between 1995 and 2020, non-agricultural sectors absorbed much of the value added in global agri-food systems. These sectors are disproportionately dominated by countries of the Global North.
The research, published in the journal Global Food Security and led by ICTA-UAB researcher Meghna Goyal together with Jason Hickel, also from ICTA-UAB, and Praveen Jha from Jawaharlal Nehru University, India, analyzes for the first time on a global scale the distribution of economic value in agri-food chains over a 25-year period.
The results show that, although the Global South has increased its share of agricultural production, countries of the North continue to capture a disproportionate share of income from higher-value sectors such as processing, logistics, finance, and services.
The study also notes that a substantial portion of revenue is recorded in low-tax jurisdictions with little agricultural production, suggesting that value-addition is recorded according to profit-maximizing strategies, rather than according to actual production or employment.
This demonstrates that value chains in agri-food systems reinforce structural inequalities through the international division of labor. Countries such as Singapore and Hong Kong capture up to 60 and 27 times more from the global agri-food system than the value of their agricultural production.
Researchers warn of the urgent need for economic sovereignty for the Global South to address structural unequal exchange in the global agri-food system.
"Value capture strategies reshape supply chains. Our findings alert us to its potentially negative consequences for development and equity for farming and the Global South economies," says Meghna Goyal, main author of the study.
ICTA-UAB researcher and co-author Jason Hickel states that "this is the first study to measure the global distribution of value in the agri-food system, and the results are damning. The people who do most of the agricultural production which sustains global civilization do not get a fair share of food-system incomes."
More information: Meghna Goyal et al, Increasing inequality in agri-food value chains: global trends from 1995-2020, Global Food Security (2025). DOI: 10.1016/j.gfs.2025.100883
Get your VPNs ready! Australia, having already nominated an age limit for social media coming this December (after they work out how it will be implemented), will progress to requiring Australians to verify their identity by logging in to a Microsoft or Google account to access adult material starting with search engines. Stop laughing. No, really. They will. Soon. Ok, two minute laugh session. Moving on. While this change in law is for 'good intentions' and Australian politicians high five themselves for 'protecting children', Professor Lisa Given of the RMIS Information Sciences department was quoted as saying that the changes "will definitely create more headaches for the everyday consumer and how they log in and use search services." Meanwhile, in England where similar laws have been enacted, VPN use has skyrocketed.
As stated in the law passed late last year, platforms also cannot rely solely on using government-issued ID for age verification, even though the government-backed technology study found this to be the most effective screening method.
Instead, the guidelines will direct platforms to take a "layered" approach to assessing age with multiple methods and to "minimise friction" for their users — such as by using AI-driven models that assess age with facial scans or by tracking user behaviour.
Ms Wells has previously highlighted those models as examples of cutting-edge technology, although the experts have raised questions about their effectiveness.
Reddit and X are among the companies approached by the eSafety commissioner, Julie Inman Grant, right, about the requirement to prevent under 16s from holding social media accounts. Composite: Guardian AustraliaView image in fullscreenReddit and X are among the companies approached by the eSafety commissioner, Julie Inman Grant, right, about the requirement to prevent under 16s from holding social media accounts. Composite: Guardian AustraliaAustralia's under 16s social media ban could extend to Reddit, Twitch, Roblox and even dating apps
Lego Play and Steam among the unexpected additions to the list that includes Facebook, Instagram, TikTok, YouTube and X
Twitch, Roblox, Steam, Lego Play, X and Reddit are among the companies eSafety has approached about whether the under 16s social media ban applies to them from December.
Companies approached by the eSafety commissioner this month about the requirement to prevent under 16s from holding social media accounts from 10 December have conducted a self-assessment that the commissioner will use to decide if they need to comply with the ban.
eSafety will not be formally declaring which service meets the criteria but companies that eSafety believes meet the criteria will be expected to comply.
The eSafety commissioner's office initially declined to release the list of companies contacted earlier this month but on Wednesday named the companies.
The full list of companies initially approached by eSafety to ask to assess if they need to comply with the ban included:
Meta – Facebook, Instagram , WhatsApp
Snap
Tiktok
YouTube
X
Roblox
Discord
Lego Play
Kick
GitHub
HubApp
Match
Steam
Twitch
Gaming platforms such as Roblox, Lego Play and Steam were unexpected additions to the list that was widely anticipated to include Facebook, Instagram, TikTok, YouTube and X. Platforms that have the sole or primary purpose of enabling users to play online games with other users are exempt from the ban.
"Any platform eSafety believes to be age-restricted will be expected to comply and eSafety will make this clear to the relevant platforms in due course," a spokesperson for the eSafety commissioner said.
[...] The eSafety commissioner, Julie Inman Grant, has previously expressed concerns about Roblox's communications features being used to groom children.
"We know that when it comes to platforms that are popular with children, they also become popular with adult predators seeking to prey on them," Inman Grant said earlier this month. "Roblox is no exception and has become a popular target for paedophiles seeking to groom children."
Earlier this month, Roblox committed to implementing age assurance by the end of this year, making accounts for users under 16 private by default and introducing tools to prevent adult users contacting under 16s without parental consent.
Direct chat will also be switched off by default until a user has gone through age estimation.
The Free Software Foundation (FSF) turns forty on October 4, 2025. The Free Software Foundation will have then been defending the rights of all software users for the past 40 years. The long term goal is for all users have the freedom to run, edit, contribute to, and share software.
There will be an online event, with an in-person option for those that can get to Boston. In November there will also be a hackathon.
"This is a giant project," Nvidia CEO said of new 10-gigawatt AI infrastructure deal:
On Monday, OpenAI and Nvidia jointly announced a letter of intent for a strategic partnership to deploy at least 10 gigawatts of Nvidia systems for OpenAI's AI infrastructure, with Nvidia planning to invest up to $100 billion as the systems roll out. The companies said the first gigawatt of Nvidia systems will come online in the second half of 2026 using Nvidia's Vera Rubin platform.
"Everything starts with compute," said Sam Altman, CEO of OpenAI, in the announcement. "Compute infrastructure will be the basis for the economy of the future, and we will utilize what we're building with NVIDIA to both create new AI breakthroughs and empower people and businesses with them at scale."
The 10-gigawatt project represents an astoundingly ambitious and as-yet-unproven scale for AI infrastructure. Nvidia CEO Jensen Huang told CNBC that the planned 10 gigawatts equals the power consumption of between 4 million and 5 million graphics processing units, which matches the company's total GPU shipments for this year and doubles last year's volume. "This is a giant project," Huang said in an interview alongside Altman and OpenAI President Greg Brockman.
To put that power demand in perspective, 10 gigawatts equals the output of roughly 10 nuclear reactors, which typically output about 1 gigawatt per facility. Current data center energy consumption ranges from 10 megawatts to 1 gigawatt, with most large facilities consuming between 50 and 100 megawatts. OpenAI's planned infrastructure would dwarf existing installations, requiring as much electricity as multiple major cities.
[...] Bryn Talkington, managing partner at Requisite Capital Management, noted the circular nature of the investment structure to CNBC. "Nvidia invests $100 billion in OpenAI, which then OpenAI turns back and gives it back to Nvidia," Talkington told CNBC. "I feel like this is going to be very virtuous for Jensen."
[...] Other massive AI infrastructure projects are emerging across the US. In July, officials in Cheyenne, Wyoming, announced plans for an AI data center that would eventually scale to 10 gigawatts—consuming more electricity than all homes in the state combined, even in its earliest 1.8 gigawatt phase. Whether it's connected to OpenAI's plans remains unclear.
[...] The planned infrastructure buildout would significantly increase global energy consumption, which also raises environmental concerns. The International Energy Agency estimates that global data centers already consumed roughly 1.5 percent of global electricity in 2024. OpenAI's project also faces practical constraints. Existing power grid connections represent bottlenecks in power-constrained markets, with utilities struggling to keep pace with rapid AI expansion that could push global data center electricity demand to 945 terawatt hours by 2030, according to the International Energy Agency.
The companies said they expect to finalize details in the coming weeks. Huang told CNBC the $100 billion investment comes on top of all Nvidia's existing commitments and was not included in the company's recent financial forecasts to investors.
https://gist.github.com/probonopd/9feb7c20257af5dd915e3a9f2d1f2277
Wayland breaks everything! It is binary incompatible, provides no clear transition path with 1:1 replacements for everything in X11, and is even philosophically incompatible with X11. Hence, if you are interested in existing applications to "just work" without the need for adjustments, then you may be better off avoiding Wayland.
Wayland solves no issues I have but breaks almost everything I need. Even the most basic, most simple things (like xkill) - in this case with no obvious replacement. And usually it stays broken, because the Wayland folks mostly seem to care about Automotive, Gnome, maybe KDE - and alienating everyone else (e.g., people using just an X11 window manager or something like GNUstep) in the process.
What follows is a very well written "Feature comparison" between Xorg and Wayland.