Stories
Slash Boxes
Comments

SoylentNews is people

Log In

Log In

Create Account  |  Retrieve Password


Site News

Join our Folding@Home team:
Main F@H site
Our team page


Funding Goal
For 6-month period:
2022-07-01 to 2022-12-31
(All amounts are estimated)
Base Goal:
$3500.00

Currently:
$438.92

12.5%

Covers transactions:
2022-07-02 10:17:28 ..
2022-10-05 12:33:58 UTC
(SPIDs: [1838..1866])
Last Update:
2022-10-05 14:04:11 UTC --fnord666

Support us: Subscribe Here
and buy SoylentNews Swag


We always have a place for talented people, visit the Get Involved section on the wiki to see how you can make SoylentNews better.

When transferring multiple 100+ MB files between computers or devices, I typically use:

  • USB memory stick, SD card, or similar
  • External hard drive
  • Optical media (CD/DVD/Blu-ray)
  • Network app (rsync, scp, etc.)
  • Network file system (nfs, samba, etc.)
  • The "cloud" (Dropbox, Cloud, Google Drive, etc.)
  • Email
  • Other (specify in comments)

[ Results | Polls ]
Comments:164 | Votes:266

posted by janrinok on Friday October 03, @11:18PM   Printer-friendly

NASA boss says US should have 'village' on Moon in a decade:

IAC 2025 If the USA's space strategy succeeds, it will run a "village" on the moon in a decade, NASA administrator Sean Duffy told the International Aeronautical Congress (IAC) in Sydney today.

Duffy appeared in a session featuring the heads of space agencies from the USA, China, Japan, India, Europe, and Canada. Readers will likely have noted the absence of Russia, a longtime space player, from that list.

The NASA boss seemingly hinted at one reason Russia's space boss is not at the Congress when he said the USA "comes in peace" to space. "We have not been in the business of taking people's land," Duffy said.

Asked what success looks like for NASA in a decade, Duffy said "We are going to have sustained human life on the moon. Not just an outpost, but a village." And a nuclear-powered village at that after NASA recently issued an RFI seeking commercial help to build a nuclear reactor on Luna

Duffy also predicted that a decade from now NASA will also have "made leaps and bounds on our mission to get to Mars" and "be on the cusp of putting human boots on Mars."

The theme for this year's IAC is "Sustainable Space: Resilient Earth". Duffy's take on that is how to sustain human life in space, an objective he said is NASA's prerogative because it alone among US government agencies has a remit for exploration. He pointed out that other US government agencies have the job of considering terrestrial stability and earthly resilience, and that NASA must focus on exploration,

The other space agency heads at the event took a different view.

When European Space Agency (ESA) director general Josef Ashbacher had his turn on stage, he offered a very different vision of sustainability by pointing out that the agency he leads freely shares data from its earth observation satellites. "I am glad that we at ESA are working for the betterment of the planet," he said.

V Narayanan, the chair of India's Space Research Organization (ISRO) said ensuring food and water resources is his agency's top goal in space. Lisa Campbell, the President of the Canadian Space Agency said that when her country first orbited an earth observation satellite it had to pay private sector organizations to use its data. "Now they want it," she said, before announcing CA$5 million ($3.6 million) to fund studies on biodiversity from space. The president of the Japan Aerospace Exploration Agency (JAXA), Dr Hiroshi Yamakawa, reminded attendees that Japan recently launched its third greenhouse gas observation satellite.

The deputy administrator of China's National Space Agency (CNSA) Zhigang Bian said his country has launched 500 earth observation satellites. He sent a little murmur around the auditorium when he said China participates in a constellation of such sats shared by members of the BRICS bloc – a loose alliance that recently expanded its membership beyond Brazil, Russia, India, China and South Africa to include Egypt, Ethiopia, Indonesia, Iran, and the United Arab Emirates. Diplomatic types see BRICS expansion as China developing institutions that rival existing blocs – on Earth and in space.

Zhigang also said China is working to make space sustainable with new measures to track orbiting debris, manage traffic in space, and provide alerts to warn if spacecraft are at risk. He said China believes those measures are necessary because the growing mega-constellations of broadband satellites increases risks for all users of space.

"China is currently researching active removal of space debris," he said.

JAXA's Dr Yamakawa said Japan's private sector outfit Astroscale is probably three years away from capturing and de-orbiting a satellite, but that doing so won't solve the space junk problem.

"We think the debris issue is one we must cope with," he said. "There is not enough time to solve for this."

Dr Yamakawa also suggested collaboration between spacefaring nations makes extraterrestrial exploration more sustainable, and pointed to the forthcoming JAXA/ISRO collaboration on the Lunar Polar Exploration (LUPEX) mission that will see a Japanese H3 rocket carry an Indian lander and a Japanese rover.

ISRO's V Narayanan said the mission will supersize India's previous Chandrayaan moon missions, by sending a 6,800kg lander and 300kg rover, up from the 600kgs and 25kgs sent by 2023's Chandrayaan-3 mission.


Original Submission

posted by janrinok on Friday October 03, @06:34PM   Printer-friendly

https://www.bleepingcomputer.com/news/security/cisa-warns-of-critical-linux-sudo-flaw-exploited-in-attacks/

Hackers are actively exploiting a critical vulnerability (CVE-2025-32463) in the sudo package that enables the execution of commands with root-level privileges on Linux operating systems.

The U.S. Cybersecurity and Infrastructure Security Agency (CISA) has added this vulnerability to its Known Exploited Vulnerabilities (KEV) catalog, describing it as "an inclusion of functionality from untrusted control sphere."

CISA has given federal agencies until October 20 to apply the official mitigations or discontinue the use of sudo.

A local attacker can exploit this flaw to escalate privileges by using the -R (--chroot) option, even if they are not included in the sudoers list, a configuration file that specifies which users or groups are authorized to execute commands with elevated permissions.

Sudo ("superuser do") allows system administrators to delegate their authority to certain unprivileged users while logging the executed commands and their arguments.

Officially disclosed on June 30, CVE-2025-32463 affects sudo versions 1.9.14 through 1.9.17 and has received a critical severity score of 9.3 out of 10.

"An attacker can leverage sudo's -R (--chroot) option to run arbitrary commands as root, even if they are not listed in the sudoers file," explains the security advisory.

Rich Mirch, a researcher at cybersecurity services company Stratascale who discovered CVE-2025-32463, noted that the issue impacts the default sudo configuration and can be exploited without any predefined rules for the user.

On July 4, Mirch released a proof-of-concept exploit for the CVE-2025-32463 flaw, which has existed since June 2023 with the release of version 1.9.14.

However, additional exploits have circulated publicly since July 1, likely derived from the technical write-up.

CISA has warned that the CVE-2025-32463 vulnerability in sudo is being exploited in real-world attacks, although the agency has not specified the types of incidents in which it has been leveraged.

Organizations worldwide are advised to use CISA's Known Exploited Vulnerabilities catalog as a reference for prioritizing patching and implementing other security mitigations.


Original Submission

posted by janrinok on Friday October 03, @01:51PM   Printer-friendly

https://phys.org/news/2025-09-photodiode-germanium-key-chip.html

Programmable photonics devices, which use light to perform complex computations, are emerging as a key area in integrated photonics research. Unlike conventional electronics that transmit signals with electrons, these systems use photons, offering faster processing speeds, higher bandwidths, and greater energy efficiency. These advantages make programmable photonics well-suited for demanding tasks like real-time deep learning and data-intensive computing.

A major challenge, however, lies in the use of power monitors. These sensors must constantly track the optical signal's strength and provide the necessary feedback for tuning the chip's components as required. However, existing on-chip photodetectors designed for this purpose face a fundamental tradeoff. They either have to absorb a significant amount of the optical signal to achieve a strong reading, which degrades the signal's quality, or they lack the sensitivity to operate at the low power levels required without needing additional amplifiers.

As reported in Advanced Photonics, Yue Niu and Andrew W. Poon from The Hong Kong University of Science and Technology have addressed this challenge by developing a germanium-implanted silicon waveguide photodiode. Their approach overcomes the tradeoffs that have hindered existing on-chip power monitoring technologies.

A waveguide photodiode is a small light detector that can be integrated directly into an optical waveguide, which confines and transports light. Its purpose is to convert a small portion of the light traveling through the waveguide into an electrical signal that can be measured via more conventional electronics. One way to enhance this conversion is through ion implantation, a process that introduces controlled defects into the photodiode's silicon structure by bombarding it with ions.

If executed properly, these defects can absorb photons with energies too low for pure silicon, enabling the photodiode to detect light across a broader range of wavelengths.

Previous attempts to build such detectors used boron, phosphorus, or argon ions. While these approaches improved performance in some respects, they also introduced many free carriers into the silicon lattice, which in turn degraded optical performance. In contrast, the team implanted germanium ions. Germanium, a Group IV element like silicon, can replace silicon atoms in the crystal structure without introducing significant numbers of free carriers. This substitution allows the device to extend its sensitivity without compromising signal quality.

The researchers conducted various comparative experiments to test the new device under relevant conditions. The germanium-implanted photodiode showed high responsivity at both 1,310 nanometers (O-band) and 1,550 nanometers (C-band), two critical wavelengths used in telecommunications. It also demonstrated an extremely low dark current, meaning little unwanted output when no light was present, as well as very low optical absorption loss. This combination makes the device suitable for integration into photonic circuits without disturbing the primary signal flow.

"We benchmarked our results with other reported on-chip linear photodetector platforms and showed that our devices are competitive across various performance metrics for power monitoring applications in self-calibrating programmable photonics," remarks Poon.Overall, this study represents a major step toward practical, large-scale programmable photonic systems. By providing a photodetector that can meet the stringent demands of on-chip monitoring, the researchers have brought the transformative potential of light-based computing closer to reality.

Beyond its immediate use in programmable photonics, the proposed device's unique characteristics also open doors to other promising applications.

"The combination of an extremely low dark current with a low bias voltage positions our device as an ideal candidate for energy-efficient, ultra-sensitive biosensing platforms, where low-noise detection of weak optical signals is paramount," explains Poon. "This would enable direct integration with microfluidics for lab-on-chip systems."

The germanium-implanted photodiode may help advance programmable photonics by improving on-chip light monitoring and could also support future applications in biosensing and lab-on-chip technologies.

More information: Yue Niu et al, Broadband sub-bandgap linear photodetection in Ge+-implanted silicon waveguide photodiode monitors, Advanced Photonics (2025). DOI: 10.1117/1.ap.7.6.066005


Original Submission

posted by janrinok on Friday October 03, @09:06AM   Printer-friendly

404MEDIA

ICE to Buy Tool that Tracks Locations of Hundreds of Millions of Phones Every Day

Documents show that ICE has gone back on its decision to not use location data remotely harvested from peoples' phones. The database is updated every day with billions of pieces of location data.

Immigration and Customs Enforcement (ICE) has bought access to a surveillance tool that is updated every day with billions of pieces of location data from hundreds of millions of mobile phones, according to ICE documents reviewed by 404 Media.

The documents explicitly show that ICE is choosing this product over others offered by the contractor's competitors because it gives ICE essentially an "all-in-one" tool for searching both masses of location data and information taken from social media. The documents also show that ICE is planning to once again use location data remotely harvested from peoples' smartphones after previously saying it had stopped the practice.

Surveillance contractors around the world create massive datasets of phones', and by extension people's movements, and then sell access to the data to government agencies. In turn, U.S. agencies have used these tools without a warrant or court order.

"The Biden Administration shut down DHS's location data purchases after an inspector general found that DHS had broken the law. Every American should be concerned that [the current administration's] hand-picked security force is once again buying and using location data without a warrant," Senator Ron Wyden told 404 Media in a statement.


Original Submission

posted by hubie on Friday October 03, @04:21AM   Printer-friendly

The AI industry has made major promises about its tech boosting the productivity of developers, allowing them to generate copious amounts of code with simple text prompts.

But those claims appear to be massively overblown, as The Register reports, with researchers finding that productivity gains are modest at best — and at worst, that AI can actually slow down human developers.

In a new report, management consultants Bain & Company found that despite being "one of the first areas to deploy generative AI," the "savings have been unremarkable" in programming.

"Generative AI arrived on the scene with sky-high expectations, and many companies rushed into pilot projects," the report reads. "Yet the results haven't lived up to the hype."

First off, "developer adoption is low" even among the companies that rolled out AI tools, the management consultancy found.

Worse yet, while some assistants saw "ten to 15 percent productivity boosts," the savings most of the time "don't translate into positive returns."

It's yet another warning shot, highlighting concerns that even in one of the most promising areas, the AI industry is struggling to live up to its enormous hype. That's despite companies pouring untold billions of dollars into its development, with analysts openly fretting about an enormous AI bubble getting closer to popping.

futurism.com


Original Submission

posted by janrinok on Thursday October 02, @11:35PM   Printer-friendly

https://phys.org/news/2025-09-insect-cultivated-proteins-healthier-greener.html

Reducing industrial animal use can help to shrink our carbon footprint and boost health—but doing so means we need nutritious meat alternatives that are also tasty and affordable.

The researchers say that by using combinations of different proteins from plants, fungi, insects, microbial fermentation, and cultivated meat, we could create tasty, nutritious, and sustainable alternatives to animal products.

As well as tackling environmental concerns, hybrids could also help to address the health and ethical impact of livestock farming such as animal welfare, zoonotic disease, and antimicrobial resistance.

"Hybrid foods could give us a delicious taste and texture without breaking the bank or the planet," said first author Prof David L. Kaplan from Tufts University in the US. "Using protein alternatives needn't necessarily come with financial, taste, or nutritional costs."

For example, by drawing on the fibrous texture of mycelium, the sensory and nutritional qualities of cultivated meat, the nutrition and sustainability of insects, the proteins, pigments, enzymes, and flavors from microbial fermentation, and the abundance and low cost of plants, hybrids could combine the best of each protein source, say the authors.

But to make this happen, the researchers call for regulatory review and academic and industry cooperation to overcome hurdles and find the best possible protein combinations for our health, sensory, environmental, and cost needs.

"To succeed, we need research and cooperation across science, industry, and regulators to improve quality, scale production, and earn consumer trust," added Prof Kaplan.

The researchers investigated different protein sources: plants (for example, soy products like tofu), insects (processed into flours and blended into foods), mycelium-based products (such as vegan commercial meat analogs), cultivated meat grown in bioreactors, and microbial fermentation products (such as proteins, pigments, enzymes, and flavors).

They assessed the strengths and weaknesses of each protein source and considered how to harness the best qualities of each—both with and without animal meat. For example, while plant proteins are cheap and scalable, they often lack the flavor and texture of meat. Meanwhile, cultivated meat more closely mimics animal meat but is expensive and hard to scale. Mycelium can add natural texture, while insects offer high nutrition with a low environmental footprint.

The researchers reviewed various combinations to compare their sensory and nutritional profiles, consumer acceptance, affordability, and scalability.

They found that while every protein source has drawbacks, combining them can overcome many of these limitations. In the short term, plant–mycelium hybrids appear most economically viable because they are scalable, nutritious, and already used in commercial products.

In the longer term, plant–cultivated meat hybrids may become more desirable, as even small amounts of cultivated cells can improve taste, texture, and nutrition once production costs fall and capacity expands.

They also point to early studies which found that substantial fractions of meat in burgers or sausages can be replaced with plant proteins without reducing consumer acceptance, and even small additions of cultivated meat or mycelium can improve the taste, texture, and nutrition of plant-based products.

"No single alternative protein source is perfect, but hybrid products give us the opportunity to overcome those hurdles, creating products that are more than the sum of their parts," said senior author Prof David Julian McClements from the University of Massachusetts Amherst, US.

As well as benefits, each protein source presents its own limitations which must be addressed before their resulting hybrids can become mainstream meat alternatives, according to the researchers.

The processing necessary for cultivating meat or combining proteins brings high costs and difficulties with scaling up production. Some protein sources need more consistent, less fragmented regulation, and others, like insect protein, face high consumer skepticism.

Many edible insects are highly nutritious and environmentally friendlier to raise than animals, and over two billion people worldwide already regularly eat insects—but consumers in developed countries are often less willing to do so.

Another concern is that many current plant-based meat alternatives require numerous ingredients and extensive processing, and are therefore classified as ultra-processed foods (UPFs), which consumers may view as unhealthy.

Observational studies show correlations between high UPF consumption and adverse health outcomes, though causation has not been established. However, the authors note that hybrids—by drawing on the natural benefits of each source—could help reduce our reliance on additives and heavy processing.

The researchers are therefore working to ensure these products are healthy as well as acceptable to consumers. Future research, they say, should focus on optimizing protein sources, developing scalable production methods, conducting environmental and economic analyses, and using AI to identify new hybrid combinations and processing methods.

More information: Hybrid alternative protein-based foods: designing a healthier and more sustainable food supply, Frontiers in Science (2025). DOI: 10.3389/fsci.2025.1599300


Original Submission

posted by janrinok on Thursday October 02, @06:46PM   Printer-friendly

Complex knots can actually be easier to untie than simple ones:

Why is untangling two small knots more difficult than unravelling one big one? Surprisingly, mathematicians have found that larger and seemingly more complex knots created by joining two simpler ones together can sometimes be easier to undo, invalidating a conjecture posed almost 90 years ago.

"We were looking for a counterexample without really having an expectation of finding one, because this conjecture had been around so long," says Mark Brittenham at the University of Nebraska at Lincoln. "In the back of our heads, we were thinking that the conjecture was likely to be true. It was very unexpected and very surprising. "

Mathematicians like Brittenham study knots by treating them as tangled loops with joined ends. One of the most important concepts in knot theory is that each knot has an unknotting number, which is the number of times you would have to sever the string, move another piece of the loop through the gap and then re-join the ends before you reached a circle with no crossings at all – known as the "unknot".

Calculating unknotting numbers can be a very computationally intensive task, and there are still knots with as few as 10 crossings that have no solution. Because of this, it can be helpful to break knots down into two or more simpler knots to analyse them, with those that can't be split any further known as prime knots, analogous to prime numbers.

But a long-standing mystery is whether the unknotting numbers of the two knots added together would give you the unknotting number of the larger knot. Intuitively, it might make sense that a combined knot would be at least as hard to undo as the sum of its constituent parts, and in 1937, it was conjectured that undoing the combined knot could never be easier.

Now, Brittenham and Susan Hermiller, also at the University of Nebraska at Lincoln, have shown that there are cases when this isn't true. "The conjecture's been around for 88 years and as people continue not to find anything wrong with it, people get more hopeful that it's true," says Hermiller. "First, we found one, and then quickly we found infinitely many pairs of knots for whom the connected sum had unknotting numbers that were strictly less than the sum of the unknotting numbers of the two pieces."

"We've shown that we don't understand unknotting numbers nearly as well as we thought we did," says Brittenham. "There could be – even for knots that aren't connected sums – more efficient ways than we ever imagined for unknotting them. Our hope is that this has really opened up a new door for researchers to start exploring."

While finding and checking the counterexamples involved a combination of existing knowledge, intuition and computing power, the final stage of checking the proof was done in a decidedly more simple and practical manner: tying the knot with a piece of rope and physically untangling it to show that the researchers' predicted unknotting number was correct.

Andras Juhasz at the University of Oxford, who previously worked with AI company DeepMind to prove a different conjecture in knot theory, says that he and the company had tried unsuccessfully to crack this latest problem about additive sets in the same way, but with no luck.

"We spent at least a year or two trying to find a counterexample and without success, so we gave up," says Juhasz. "It is possible that for finding counterexamples that are like a needle in a haystack, AI is maybe not the best tool. This was a hard-to-find counterexample, I believe, because we searched pretty hard."

Despite there being many practical applications for knot theory, from cryptography to molecular biology, Nicholas Jackson at the University of Warwick, UK, is hesitant to suggest that this new result can be put to good use. "I guess we now understand a little bit more about how circles work in three dimensions than we did before," he says. "A thing that we didn't understand quite so well a couple of months ago is now understood slightly better."

Reference:

Journal Reference:
Brittenham, Mark, Hermiller, Susan. Unknotting number is not additive under connected sum, (DOI: 10.48550/arXiv.2506.24088)

See also:


Original Submission

posted by janrinok on Thursday October 02, @02:03PM   Printer-friendly

Huawei's Ternary Logic Breakthrough: A Game-Changer or Just Hype?:

[Editor's Comment: The source reads as though it could have been created by AI, nevertheless it is an interesting topic and worth a discussion.--JR]

Huawei's recent patent for 'ternary logic' represents a potential breakthrough in chip technology by utilizing a three-state logic system consisting of -1, 0, and 1, rather than the traditional binary system of 0 and 1. This innovative approach could substantially reduce the number of transistors required on a chip, leading to lower energy consumption, particularly in power-hungry AI applications. The ternary logic patent, filed in September 2023 and recently disclosed, may offer significant advantages in processing efficiency and hardware design, addressing some of the physical limits faced by current chip technologies.

Ternary computing is not a novel concept; the first ternary computer was developed in 1958 at Moscow State University, indicating the feasibility of utilizing more than two states in computational logic (source). However, binary logic became the industry standard due to its simplicity and the development of compatible technologies. Huawei's pursuit of ternary logic comes amidst US sanctions, which have pressured the company to explore alternative technological paths. By reducing reliance on traditional chip designs, Huawei aims to innovate in a constrained environment and potentially gain a competitive edge in the AI and semiconductor sectors.

The commercial viability of Huawei's ternary logic chip presents an intriguing yet complex scenario for the tech industry. Ternary logic, which utilizes three states instead of the usual binary system's two, promises significant advancements in chip technology. By potentially reducing the number of transistors required, it could lead to decreases in both manufacturing costs and energy consumption, particularly in power-hungry AI applications. However, the road to commercial viability is laden with challenges. The tech industry must grapple with the transition from a binary-dominated ecosystem to one that could incorporate ternary systems. This includes revamping software and programming approaches to leverage the benefits of the new computing structure. Furthermore, the ability to mass-produce these chips economically and reliably remains unproven, leaving questions about whether the technology can achieve a cost-effective scale .

If successful, Huawei's ternary logic patent could disrupt current computing paradigms and lead to reduced energy consumption in AI technology, aligning with broader trends towards sustainability. The potential of such technology to alter chip design and improve AI efficiency could have far-reaching implications not only for Huawei but for the tech industry at large. Moreover, by possibly circumventing some effects of international sanctions, Huawei's efforts symbolize a form of technological resilience and ingenuity amid geopolitical challenges

Ternary logic represents a novel approach in computing, differentiating itself from the conventional binary logic by utilizing three distinct values: -1, 0, and 1. This trinary system aims to encode information more efficiently, offering potential reductions in the complexity and power usage of AI chips. Unlike the binary system that utilizes two states (0 and 1) to process tasks, ternary logic could lead to less energy-intensive data processing, ultimately decreasing the overall power consumption of AI-driven technologies. This innovation heralds a shift from traditional methodologies, potentially streamlining both hardware requirements and computational resource demands.

The implementation of ternary logic in modern computing could signify a breakthrough in addressing the issues associated with current chip designs. With chip manufacturing approaching its physical limits, the trinary system provides an alternative pathway to enhance processing capabilities without exponentially increasing transistor counts. Huawei's recent patent reflects this innovative direction, aiming to solve power consumption dilemmas while navigating international sanctions and fostering technological advancements in AI. Embedded in this development is Huawei's strategic response to geopolitical challenges, exemplified by their proactive patent applications that emphasize reducing dependence on traditional binary constraints.


Original Submission

posted by janrinok on Thursday October 02, @09:16AM   Printer-friendly

Experts Alarmed That AI Is Now Producing Functional Viruses:

In real world experiments, a team of Stanford researchers demonstrated that a virus with AI-written DNA could target and kill specific bacteria, they announced in a study last week. It opened up a world of possibilities where artificial viruses could be used to cure diseases and fight infections.

But experts say it also opened a Pandora's box. Bad actors could just as easily use AI to crank out novel bioweapons, keeping doctors and governments on the backfoot with the outrageous pace at which these viruses can be designed, warn Tal Feldman, a Yale Law School student who formerly built AI models for the federal government, and Jonathan Feldman, a computer science and biology researcher at Georgia Tech (no word on whether the two are related).

"There is no sugarcoating the risks," the pair warned in a piecefor the Washington Post. "We're nowhere near ready for a world in which artificial intelligence can create a working virus, but we need to be — because that's the world we're now living in."

In the study, the Stanford researchers used an AI model called Evo to invent DNA for a bacteriophage, a virus that infects bacteria. Unlike a general purpose large language model like ChatGPT, which is trained on written language, Evo was exclusively trained on millions of bacteriophage genomes.

They focused on an extensively studied phage called phiX174, which is known to infect strains of the bacteria E. coli. Using the EVO AI model, the team came up with 302 candidate genomes based on phiX174 and put them to the test by using the designs to chemically assemble new viruses.

Sixteen of them worked, infecting and killing the E. coli strains. Some of them were even deadlier than the natural form of the virus.

But "while the Stanford team played it safe, what's to stop others from using open data on human pathogens to build their own models?" the two Feldmans warned. "If AI collapses the timeline for designing biological weapons, the United States will have to reduce the timeline for responding to them. We can't stop novel AI-generated threats. The real challenge is to outpace them."

That means using the same AI tech to design antibodies, antivirals, and vaccines. This work is already being done to some extent, but the vast amounts of data needed to accelerate such pioneering research "is siloed in private labs, locked up in proprietary datasets or missing entirely."

"The federal government should make building these high-quality datasets a priority," the duo opined.

From there, the federal government would need to build the necessary infrastructure to manufacture these AI-designed medicines, since the "private sector cannot justify the expense of building that capacity for emergencies that may never arrive," they argue.

Finally, the Food and Drug Administration's sluggish and creaking regulatory framework would need an overhaul. (Perhaps in a monkey's paw of such an overhaul, the FDA said it's using AI to speed-run the approval of medications.)

"Needed are new fast-tracking authorities that allow provisional deployment of AI-generated countermeasures and clinical trials, coupled with rigorous monitoring and safety measures," they said.

The serious risks posed by AI virus generation shouldn't be taken lightly. Yet, it's worth noting that the study in question hasn't made it out of peer review yet and we still don't have a full picture of how readily someone could replicate the work the scientists did.

But with agencies like the Centers for Disease Control and Prevention being gutted, and vaccines and other medical interventions being attacked by a health-crank riddled administration, there's no denying that the country's medical policy and infrastructure is in a bad place. That said, when you consider that the administration is finding any excuse to rapidly deploy AI in every corner of the government, it's worth treading lightly when we ask for more.

More on synthetic biology:Scientists Debate Whether to Halt Type of Research That Could Destroy All Life on Earth

AI Creates Bacteria-Killing Viruses: 'Extreme Caution' Warns Genome Pioneer:

A California outfit has used artificial intelligence to design viral genomes before they were then built and tested in a laboratory. Following this, bacteria was then successfully infected with a number of these AI-created viruses, proving that generative models can create functional genetics.

"The first generative design of complete genomes."

That's what researchers at Stanford University and the Arc Institute in Palo Alto called the results of these experiments. A biologist at NYU Langone Health, Jef Boeke, celebrated the experiment as a substantial step towards AI-designed lifeforms, according to MIT Technology Review.

"They saw viruses with new genes, with truncated genes, and even different gene orders and arrangements," Boeke said.

They team created 302 full genomes, outlined by their AI, Evo - a LLM similar to that of ChatGPT - and introduced them to E. coli test systems. 16 of these designs created successful bacteriophages which were able to replicate and kill the bacteria.

Brian Hie, who leads the Arc Institute lab, reflected on the moment the plates revealed clearings where bacteria had died. "That was pretty striking, just actually seeing, like, this AI-generated sphere," said Hie.

The team targeted bacteriophage phiX174, a minimal DNA phage with approximately 5,000 bases across 11 genes. Around 2 million bacteriophage were used to train the AI model, allowing it to understand the patterns in their makeup and gene order. It then proposed new, complete genomes.

J. Craig Venter helped create the cells with these synthetic genomes. He saw the approach as being "just a faster version of trial-and-error experiments."

"We did the manual AI version - combing through the literature, taking what was known," he explained.

Speed is the appeal here. The prediction from the AI on the protein structure could certainly speed up the processes within drug and biotechnical development. The results could then be used to fight bacterial infections in, for example, farming or even gene therapy.

Samuel King, a student who led the project, said: "There is definitely a lot of potential for this technology."

The team excluded human-infecting viruses from the AI's training, but testing in this area could still be dangerous, warns Venter.

"One area where I urge extreme caution is any viral enhancement research,, especially when it's random so you don't know what you are getting.

"If someone did this with smallpox or anthrax, I would have grave concerns."

There are other issues with this idea. Moving to a 'simple' phage to something more complex such as bacteria - something that AI simply won't be able to do at this point.

"The complexity would rocket from staggering to ... way way more than the number of subatomic particles in the universe," Boeke said.

Despite the challenges surrounding this test, it is an extremely impressive result - and something that could influence the future of genetic engineering.


Original Submission

posted by jelizondo on Thursday October 02, @04:31AM   Printer-friendly
from the I-wish-I-had-been-on-the-testing-team dept.

Triple-fermented Belgian beers have the longest-lasting foam; single-fermented lagers have the shortest:

For many beer lovers, a nice thick head of foam is one of life's pure pleasures, and the longer that foam lasts, the better the beer-drinking experience. A team of Swiss researchers spent seven years studying why some beer foams last longer than others and found that the degree of fermentation—i.e., whether a given beer has been singly, doubly, or triply fermented—is crucial, according to a new paper published in the journal Physics of Fluids.

[...] Individual bubbles typically form a sphere because that's the shape with the minimum surface area for any volume and hence is the most energy-efficient. One reason for the minimizing principle when it comes to a bubble's shape is that many bubbles can then tightly pack together to form a foam. But bubbles "coarsen" over time, the result of gravity pulling down on the liquid and thinning out the walls. Eventually, they start to look more like soccer balls (polyhedrons). In a coarsening foam, smaller bubbles are gradually absorbed by larger ones. There is less and less liquid to separate the individual bubbles, so they press together to fill the space.

This "jamming" is why foams are typically far more rigid than their gas (95 percent) and liquid (5 percent) components. The more tightly the bubbles jam together, the less they can move around and the greater the pressure inside them becomes, giving them properties of a solid.

Various factors can affect foam stability. For instance, in 2019, Japanese researchers investigated a phenomenon known as "collective bubble collapse," or CBC, in which breaking one bubble at the edge of a foam results in a cascading effect as the breakage spreads to other bubbles in the foam. They identified two distinct mechanisms for the resulting CBCs: a so-called "propagating mode," in which a broken bubble is absorbed into the liquid film, and a "penetrating mode," in which the breakage of a bubble causes droplets to shoot off and hit other bubbles, causing them to break in turn.

Higher levels of liquid in the foam slowed the spread of the collapse, and changing the viscosity of the fluid had no significant impact on how many bubbles broke in the CBC. Many industrial strategies for stabilizing foams rely on altering the viscosity; this shows those methods are ineffective. The researchers suggest focusing instead on using several different surfactants in the mixture. This would strengthen the resulting film to make it more resistant to breakage when hit by flying droplets.

However, as the authors of this latest paper note, "Most beers are not detergent solutions (though arguably some may taste that way)." They were inspired by a Belgian brewer's answer when they asked how he controlled fermentation: "By watching the foam." A stable foam is considered to be a sign of successful fermentation. The authors decided to investigate precisely how the various factors governing foam stability might be influenced by the fermentation process.

[...] Single-fermented lager beers had the least stable foam, with triple-fermented beers boasting the most stable foam; the foam stability of double-fermented beers fell in the middle of the range. The team also discovered that the most important factor for foam stability isn't fixed but largely depends on the type of beer. It all comes down to surface viscosity for single-fermented lagers.

But surface viscosity is not a major factor for stable foams in double- or triple-fermented beers. Instead, stability arises from differences in surface tension, i.e., Marangoni stresses—the same phenomenon behind so-called "wine tears" and the "coffee ring effect." Similarly, when a drop of watercolor paint dries, the pigment particles of color break outward toward the rim of the drop. In the case of beer foam, the persistent currents that form as a result of those differences in surface tension lend stability to the foam.

The researchers also analyzed the protein content of the beers and found that one in particular—lipid transfer protein 1 (LPT1)—was a significant factor in stabilizing beer foams, and their form depended on the degree of fermentation. In single-fermented beers, for example, the proteins are small, round particles on the surface of the bubbles. The more proteins there are, the more the foam will be stable because those proteins form a more viscous film around the bubbles.

Those LPT1 proteins become slightly denatured during a second fermentation, forming more of a net-like structure that improves foam stability. That denaturation continues during a third fermentation, when the proteins break down into fragments with hydrophobic and hydrophilic ends, reducing surface tensions. They essentially become surfactants to make the bubbles in the foam much more stable.

That said, the team was a bit surprised to find that increasing the viscosity with additional surfactants can actually make the foam more unstable because it slows down the Marangoni effects too strongly. "The stability of the foam does not depend on individual factors linearly," said co-author Jan Vermant, also of ETH Zurich. "You can't just change 'something' and get it 'right.' The key is to work on one mechanism at a time–and not on several at once. Beer obviously does this well by nature. We now know the mechanism exactly and are able to help [breweries] improve the foam of their beers."

The findings likely have broader applications as well. "This is an inspiration for other types of materials design, where we can start thinking about the most material-efficient ways [of creating stable foams]," said Vermant. "If we can't use classical surfactants, can we mimic the 2D networks that double-fermented beers have?" The group is now investigating how to prevent lubricants used in electric vehicles from foaming; developing sustainable surfactants that don't contain fluoride or silicon; and finding ways to use proteins to stabilize milk foam, among other projects.

Journal Reference: DOI: Physics of Fluids, 2025. 10.1063/5.0274943


Original Submission

posted by jelizondo on Wednesday October 01, @11:47PM   Printer-friendly

How the von Neumann bottleneck is impeding AI computing:

Most computers are based on the von Neumann architecture, which separates compute and memory. This arrangement has been perfect for conventional computing, but it creates a data traffic jam in AI computing.

AI computing has a reputation for consuming epic quantities of energy. This is partly because of the sheer volume of data being handled. Training often requires billions or trillions of pieces of information to create a model with billions of parameters. But that's not the whole reason — it also comes down to how most computer chips are built.

Modern computer processors are quite efficient at performing the discrete computations they're usually tasked with. Though their efficiency nosedives when they must wait for data to move back and forth between memory and compute, they're designed to quickly switch over to work on some unrelated task. But for AI computing, almost all the tasks are interrelated, so there often isn't much other work that can be done when the processor gets stuck waiting, said IBM Research scientist Geoffrey Burr.

In that scenario, processors hit what is called the von Neumann bottleneck, the lag that happens when data moves slower than computation. It's the result of von Neumann architecture, found in almost every processor over the last six decades, wherein a processor's memory and computing units are separate, connected by a bus. This setup has advantages, including flexibility, adaptability to varying workloads, and the ability to easily scale systems and upgrade components. That makes this architecture great for conventional computing, and it won't be going away any time soon.

But for AI computing, whose operations are simple, numerous, and highly predictable, a conventional processor ends up working below its full capacity while it waits for model weights to be shuttled back and forth from memory. Scientists and engineers at IBM Research are working on new processors, like the AIU family, which use various strategies to break down the von Neumann bottleneck and supercharge AI computing.

The von Neumann bottleneck is named for mathematician and physicist John von Neumann, who first circulated a draft of his idea for a stored-program computer in 1945. In that paper, he described a computer with a processing unit, a control unit, memory that stored data and instructions, external storage, and input/output mechanisms. His description didn't name any specific hardware — likely to avoid security clearance issues with the US Army, for whom he was consulting. Almost no scientific discovery is made by one individual, though, and von Neumann architecture is no exception. Von Neumann's work was based on the work of J. Presper Eckert and John Mauchly, who invented the Electronic Numerical Integrator and Computer (ENIAC), the world's first digital computer. In the time since that paper was written, von Neumann architecture has become the norm.

"The von Neumann architecture is quite flexible, that's the main benefit," said IBM Research scientist Manuel Le Gallo-Bourdeau. "That's why it was first adopted, and that's why it's still the prominent architecture today."

[...] For AI computing, the von Neumann bottleneck creates a twofold efficiency problem: the number of model parameters (or weights) to move, and how far they need to move. More model weights mean larger storage, which usually means more distant storage, said IBM Research scientist Hsinyu (Sidney) Tsai. "Because the quantity of model weights is very large, you can't afford to hold them for very long, so you need to keep discarding and reloading," she said.

The main energy expenditure during AI runtime is spent on data transfers — bringing model weights back and forth from memory to compute. By comparison, the energy spent doing computations is low. In deep learning models, for example, the operations are almost all relatively simple matrix vector multiplication problems. Compute energy is still around 10% of modern AI workloads, so it isn't negligible, said Tsai. "It is just found to be no longer dominating energy consumption and latency, unlike in conventional workloads," she added.

About a decade ago, the von Neumann bottleneck wasn't a significant issue because processors and memory weren't so efficient, at least compared to the energy that was spent to transfer data, said Le Gallo-Bourdeau. But data transfer efficiency hasn't improved as much as processing and memory have over the years, so now processors can complete their computations much more quickly, leaving them sitting idle while data moves across the von Neumann bottleneck.

[...] Aside from eliminating the von Neumann bottleneck, one solution includes closing that distance. "The entire industry is working to try to improve data localization," Tsai said. IBM Research scientists recently announced such an approach: a polymer optical waveguide for co-packaged optics. This module brings the speed and bandwidth density of fiber optics to the edge of chips, supercharging their connectivity and hugely reducing model training time and energy costs.

With currently available hardware, though, the result of all these data transfers is that training an LLM can easily take months, consuming more energy than a typical US home does in that time. And AI doesn't stop needing energy after model training. Inferencing has similar computational requirements, meaning that the von Neumann bottleneck slows it down in a similar fashion.

[...] While von Neumann architecture creates a bottleneck for AI computing, for other applications, it's perfectly suited. Sure, it causes issues in model training and inference, but von Neumann architecture is perfect for processing computer graphics or other compute-heavy processes. And when 32- or 64-bit floating point precision is called for, the low precision of in-memory computing isn't up to the task.

"For general purpose computing, there's really nothing more powerful than the von Neumann architecture," said Burr. Under these circumstances, bytes are either operations or operands that are moving on a bus from a memory to a processor. "Just like an all-purpose deli where somebody might order some salami or pepperoni or this or that, but you're able to switch between them because you have the right ingredients on hand, and you can easily make six sandwiches in a row." Special-purpose computing, on the other hand, may involve 5,000 tuna sandwiches for one order — like AI computing as it shuttles static model weights.


Original Submission

posted by jelizondo on Wednesday October 01, @07:02PM   Printer-friendly

This black hole flipped its magnetic field:

The magnetic field swirling around an enormous black hole, located about 55 million light-years from Earth, has unexpectedly switched directions. This dramatic reversal challenges theories of black hole physics and provides scientists with new clues about the dynamic nature of these shadowy giants.

The supermassive black hole, nestled in the heart of the M87 galaxy, was first imaged in 2017. Those images revealed, for the first time, a glowing ring of plasma­ — an accretion disk — encircling the black hole, dubbed M87*. At the time, the disk's properties, including those of the magnetic field embedded in the plasma, matched theoretical predictions.

But observations of the accretion disk in the years that followed show that its magnetic field is not as stable as it first seemed, researchers report in a paper to appear in Astronomy & Astrophysics. In 2018, the magnetic field shifted and nearly disappeared. By 2021, the field had completely flipped direction.

"No theoretical models we have today can explain this switch," says study coauthor Chi-kwan Chan, an astronomer at Steward Observatory in Tucson. The magnetic field configuration, he says, was expected to be stable due to the black hole's large mass — roughly 6 billion times as massive as the sun, making it over a thousand times as hefty as the supermassive black hole at the center of the Milky Way.

In the new study, astronomers analyzed images of the accretion disk around M87* compiled by the Event Horizon Telescope, a global network of radio telescopes. The scientists focused on a specific component that's sensitive to magnetic field orientation called polarized light, which consists of light waves all oscillating in a particular direction.

By comparing the polarization patterns over the years, the astronomers saw that the magnetic field reversed direction. Magnetic fields around black holes are thought to funnel in material from their surrounding disks. With the new findings, astronomers will have to rethink their understanding of this process.

While researchers don't yet know what caused the flip in this disk's magnetic field, they think it could have been a combination of dynamics within the black hole and external influences.

"I was very surprised to see evidence for such a significant change in M87's magnetic field over a few years," says astrophysicist Jess McIver of the University of British Columbia in Vancouver, who was not involved with the research. "This changes my thinking about the stability of supermassive black holes and their environments."


Original Submission

posted by jelizondo on Wednesday October 01, @02:15PM   Printer-friendly

Expert calls security advice "unfairly outsourcing the problem to Anthropic's users"

On Tuesday [September 9, 2025], Anthropic launched a new file creation feature for its Claude AI assistant that enables users to generate Excel spreadsheets, PowerPoint presentations, and other documents directly within conversations on the web interface and in the Claude desktop app. While the feature may be handy for Claude users, the company's support documentation also warns that it "may put your data at risk" and details how the AI assistant can be manipulated to transmit user data to external servers.

The feature, awkwardly named "Upgraded file creation and analysis," is basically Anthropic's version of ChatGPT's Code Interpreter and an upgraded version of Anthropic's "analysis" tool. It's currently available as a preview for Max, Team, and Enterprise plan users, with Pro users scheduled to receive access "in the coming weeks," according to the announcement.

The security issue comes from the fact that the new feature gives Claude access to a sandbox computing environment, which enables it to download packages and run code to create files. "This feature gives Claude Internet access to create and analyze files, which may put your data at risk," Anthropic writes in its blog announcement. "Monitor chats closely when using this feature."

According to Anthropic's documentation, "a bad actor" manipulating this feature could potentially "inconspicuously add instructions via external files or websites" that manipulate Claude into "reading sensitive data from a claude.ai connected knowledge source" and "using the sandbox environment to make an external network request to leak the data."

This describes a prompt injection attack, where hidden instructions embedded in seemingly innocent content can manipulate the AI model's behavior—a vulnerability that security researchers first documented in 2022. These attacks represent a pernicious, unsolved security flaw of AI language models, since both data and instructions in how to process it are fed through as part of the "context window" to the model in the same format, making it difficult for the AI to distinguish between legitimate instructions and malicious commands hidden in user-provided content.

[...] Anthropic is not completely ignoring the problem, however. The company has implemented several security measures for the file creation feature. For Pro and Max users, Anthropic disabled public sharing of conversations that use the file creation feature. For Enterprise users, the company implemented sandbox isolation so that environments are never shared between users. The company also limited task duration and container runtime "to avoid loops of malicious activity."

[...] Anthropic's documentation states the company has "a continuous process for ongoing security testing and red-teaming of this feature." The company encourages organizations to "evaluate these protections against their specific security requirements when deciding whether to enable this feature."

[...] That kind of "ship first, secure it later" philosophy has caused frustrations among some AI experts like Willison, who has extensively documented prompt injection vulnerabilities (and coined the term). He recently described the current state of AI security as "horrifying" on his blog, noting that these prompt injection vulnerabilities remain widespread "almost three years after we first started talking about them."

In a prescient warning from September 2022, Willison wrote that "there may be systems that should not be built at all until we have a robust solution." His recent assessment in the present? "It looks like we built them anyway!"


Original Submission

posted by hubie on Wednesday October 01, @09:32AM   Printer-friendly

https://joel.drapper.me/p/rubygems-takeover/

Ruby Central recently took over a collection of open source projects from their maintainers without their consent. News of the takeover was first broken by Ellen on 19 September.

I have spoken to about a dozen people directly involved in the events, and seen a recording of a key meeting between Ruby Gems maintainers and Ruby Central, to uncover what went on.

https://narrativ.es/@janl/115258495596221725

Okay so this was a hostile takeover. The Ruby community needs to get their house in order.

And one more note, I assume it is implied in the write up, but you might now know: DHH is on the board of directors of Shopify. He exerts tremendous organisational and financial power.

It's hilarious he's threatened by three devs with a hobby project and is willing to burn his community's reputation over it.


Original Submission

posted by hubie on Wednesday October 01, @04:49AM   Printer-friendly
from the slop-for-you-slop-for-me-slop-for-everyone dept.

They finally came up with a word for it. "Workslop". To much AI usage among (co-)workers is leading to "workslop". Where there is to much AI production that doesn't turn out to be very valuable or productive. It looks fine at a first glance but has produced nothing of value or solved any problems. It just looks fine. All shiny surface, but nothing actual.

workslop is "AI generated work content that masquerades as good work, but lacks the substance to meaningfully advance a given task."

AI promised to revolutionize productivity. Instead, 'workslop' is a giant time suck and the scourge of the 21st century office, Stanford warns

A benefits manager said of one AI-sourced document a colleague sent her, "It was annoying and frustrating to waste time trying to sort out something that should have been very straightforward."

So while companies may be spending hundreds of millions on AI software to create efficiencies and boost productivity, and encouraging employees to use it liberally, they may also be injecting friction into their operations.

The researchers say that "lazy" AI-generated work is not only slowing people down, it's also leading to employees losing respect for each other. After receiving workslop, staffers said they saw the peers behind it as less creative and less trustworthy.

"The insidious effect of workslop is that it shifts the burden of the work downstream, requiring the receiver to interpret, correct, or redo the work," they write.

So shit literally flows downwards then?

https://hbr.org/2025/09/ai-generated-workslop-is-destroying-productivity
https://techcrunch.com/2025/09/27/beware-coworkers-who-produce-ai-generated-workslop/
https://fortune.com/2025/09/23/ai-workslop-workshop-workplace-communication/
https://edition.cnn.com/2025/09/26/business/ai-workslop-nightcap


Original Submission