
When transferring multiple 100+ MB files between computers or devices, I typically use:

  • USB memory stick, SD card, or similar
  • External hard drive
  • Optical media (CD/DVD/Blu-ray)
  • Network app (rsync, scp, etc.)
  • Network file system (nfs, samba, etc.)
  • The "cloud" (Dropbox, Cloud, Google Drive, etc.)
  • Email
  • Other (specify in comments)


posted by hubie on Saturday October 04, @11:00PM   Printer-friendly
from the Nvidia-Inside dept.

Duo could dominate in the same way Microsoft and Intel ruled PCs for decades:

Opinion: The OpenAI and Nvidia $100 billion partnership sure sounds impressive. $100 billion isn't chicken feed, even as more and more tech companies cross the trillion-dollar mark. But what does it really mean?

As two of my Register colleagues noted, "The announcement has enough wiggle room to drive an AI-powered self-driving semi through." True, but it may be the start of something huge that will define the AI movement for the foreseeable future.

Let's step into the Wayback Machine with Mr. Peabody and Sherman to the early 1980s, when PCs from companies most of you have never heard of, such as Osborne, Kaypro, and Sinclair Research, landed on desktops.

IBM decided to get into the personal computer business, and the company needed chips. So Big Blue teamed with a relatively obscure CPU company called Intel.

That took care of the hardware, but IBM needed an operating system urgently. Initially, like everyone else, except for those guys named Steve with some company called Apple, IBM wanted to use CP/M from Digital Research. That didn't work out. So, IBM called Microsoft, and Bill Gates and crew acquired the Quick and Dirty Operating System (QDOS) from Seattle Computer Products, and slapped the names MS-DOS and IBM PC-DOS on it. Microsoft also, and this is the critical bit, kept the right to sell MS-DOS to other companies.

Intel, of course, had always retained the right to sell its chips to anyone. It quickly became clear that IBM was onto something. So other new companies, Compaq specifically, sprang up to develop their own PC clones, starting with the Compaq Portable in 1983. It, and all the many other clones from companies like Dell, HP, and Packard Bell, were, of course, powered by Intel chips and ran Microsoft operating systems.

The two companies started working hand-in-glove with each other. By the late '80s, their pairing, WinTel, would rule the PC world. Decades later, while not nearly as dominant as they once were, chances are the computer in front of you is WinTel.

What does that have to do with OpenNvidia? Everything. This deal promises to create the world's largest AI infrastructure project to date. It gives OpenAI access to millions of Nvidia GPUs and the capital needed for a massive wave of next-generation data centers.

[...] An Nvidia spokesman told Reuters, "Our investments will not change our focus or impact supply to our other customers - we will continue to make every customer a top priority, with or without any equity stake." But what else are they going to say? Sucks to be you, Anthropic? Bite me, Oracle?

Now, where have I seen this combination of chips and software before? Oh, right. WinTel. It worked pretty well for them, didn't it? As for their rivals back in the early days, I recall them because I was already in the tech industry then. If you're under 40, have you even heard of North Star Computers, Cromemco, or Vector Graphics? Yeah, I didn't think so.

[...] True, the deal's details are still messy. As Scott Raynovich, Founder and Chief Technology Analyst of the technology analysis firm Futuriom, noted in a LinkedIn comment, "All of these deals are the same... to me they read like... 'I promise to spend a bunch of money with you if you kick a bunch back to me... but there is no guarantee... and it's all contingent on things going exactly as they are going right now, but we could always bail.'"

Far be it from me to disagree. This deal could go sideways. After all, I'm one of those who won't be surprised if AI goes bust. But, if it doesn't, Nvidia is the one AI company I see surviving. Any business that's aligned closely with Nvidia may do quite well. After all, just like with the dot-com crash, after all the crying, the internet grew and grew. I expect the same will happen with AI, no matter what happens to it in the short term. So, yes, in the long run, I can see OpenNvidia dominating AI in the 2040s the way Wintel did in the 2000s.


Original Submission

posted by Fnord666 on Saturday October 04, @06:17PM   Printer-friendly
from the pop-culture dept.

The comic strip Peanuts turns 75 this year. The New Statesman covers the background of the strip and how Peanuts reflected US society.

Peanuts was published in newspapers for the first time on 2 October 1950. By the mid-Sixties, it had tens of millions of daily readers, becoming the most widely read comic strip in the world, translated into more than 20 languages, reaching some 355 million readers in 75 countries. In Japan, Peanuts was taken so seriously that the official translator of the strip was also a Nobel Prize front-runner. Schulz was the first modern cartoonist to be given a retrospective in the Louvre.

[...] Schulz worked on Peanuts for nearly 50 years, single-handedly writing, drawing and lettering 17,897 strips. His existence and Peanuts were so intrinsically linked that their endings were only a few hours apart. Schulz died of a heart attack on 12 February 2000. The next day, his last comic strip announcing his retirement appeared. "No, I think he's writing..." Charlie Brown says down the phone to an unknown caller.

Previously:
(2020) Snoopy Celebrates 20 Years of Humans on Space Station on New NASA Posters


Original Submission

posted by jelizondo on Saturday October 04, @01:33PM   Printer-friendly

Scientists catch a shark threesome on camera:

It's a rare occurrence for scientists to witness sharks mating in the wild. It's even rarer to catch three leopard sharks—two males and one female—engaging in what amounts to a threesome in the wild on camera, particularly since they are considered an endangered species. But that's just what one enterprising marine biology team achieved, describing the mating sequence in careful, clinical detail in a paper published in the Journal of Ethology.

It's not like scientists don't know anything about leopard shark mating behavior; rather, most of that knowledge comes from studying the sharks in captivity. Whether the behavior is identical in the wild is an open question because there hadn't been any documented observations of leopard shark mating practices in the wild—until now.

Hugo Lassauce, a postdoctoral researcher at the University of the Sunshine Coast (UniSC) in Australia, was working with the Aquarium des Lagons in Nouméa, New Caledonia, to monitor sharks off the coast of that South Pacific territory. Lassauce has been snorkeling daily with sharks for a year as part of that program—always with an accompanying boat for safety purposes—and had seen bits of the leopard shark mating behavior before, but never the entire sequence. Then he spotted a female shark on the sand below with two males hanging onto her pectoral fins—classic pre-copulation (courtship) behavior observed in captive leopard sharks.

"I told my colleague to take the boat away to avoid disturbance, and I started waiting on the surface, looking down at the sharks almost motionless on the sea floor," said Lassauce. "I waited an hour, freezing in the water, but finally they started swimming up. It was over quickly for both males, one after the other. The first took 63 seconds, the other 47. Then the males lost all their energy and lay immobile on the bottom while the female swam away actively." (Add your own salacious jokes here. You know you're thinking them.)

Lassauce had two GoPro Hero 5 cameras ready at hand, albeit with questionable battery life. That's why the video footage has two interruptions to the action: once when he had to switch cameras after getting a "low battery" signal, and a second time when he voluntarily stopped filming to conserve the second camera's battery. Not much happened for 55 minutes, after all, and he wanted to be sure to capture the pivotal moments in the sequence. Lassauce succeeded and was rewarded with triumphant cheers from his fellow marine biologists on the boat, who knew full well the rarity of what had just been documented for posterity.

The lengthy pre-copulation stage involved all three sharks motionless on the seafloor for nearly an hour, after which the female started swimming with one male shark biting onto each of her pectoral fins. A few minutes later, the first male made his move, "penetrating the female's cloaca with his left clasper." Claspers are modified pelvic fins capable of transferring sperm. After the first male shark finished, he lay motionless while the second male held onto the female's other fin. Then the other shark moved in, did his business, went motionless, and the female shark swam away. The males also swam away soon afterward.

Apart from the scientific first, documenting the sequence is a good indicator that this particular area is a critical mating habitat for leopard sharks, and could lead to better conservation strategies, as well as artificial insemination efforts to "rewild" leopard sharks in Australia and several other countries. "It's surprising and fascinating that two males were involved sequentially on this occasion," said co-author Christine Dudgeon, also of UniSC, adding, "From a genetic diversity perspective, we want to find out how many fathers contribute to the batches of eggs laid each year by females."

Journal Reference:
Lassauce, Hugo, Gossuin, Hugues, Dudgeon, Christine L., et al. Observation of group courtship/copulating behavior for free-living Indo-Pacific Leopard sharks, Stegostoma tigrinum, Journal of Ethology (DOI: 10.1007/s10164-025-00866-4)


Original Submission

posted by jelizondo on Saturday October 04, @08:07AM   Printer-friendly

https://phys.org/news/2025-09-side-moon-colder-lunar.html

The interior of the mysterious far side of the moon may be colder than the side constantly facing Earth, suggests a new analysis of rock samples co-led by a UCL (University College London) and Peking University researcher.

The study, published in the journal Nature Geoscience, looked at fragments of rock and soil scooped up by China's Chang'e 6 spacecraft in 2024 from a vast crater on the far side of the moon.

The research team confirmed previous findings that the rock sample was about 2.8 billion years old, and analyzed the chemical make-up of its minerals to estimate that it formed from lava deep within the moon's interior at a temperature of about 1,100 degrees C—about 100 degrees C cooler than existing samples from the near side.

Co-author Professor Yang Li, based at UCL's Department of Earth Sciences and Peking University, said, "The near side and far side of the moon are very different at the surface and potentially in the interior. It is one of the great mysteries of the moon. We call it the two-faced moon. A dramatic difference in temperature between the near and far side of the mantle has long been hypothesized, but our study provides the first evidence using real samples."

Co-author Mr. Xuelin Zhu, a Ph.D. student at Peking University, said, "These findings take us a step closer to understanding the two faces of the moon. They show us that the differences between the near and far side are not only at the surface but go deep into the interior."

The far side has a thicker crust, is more mountainous and cratered, and appears to have been less volcanic, with fewer dark patches of basalt formed from ancient lava.

In their paper, the researchers noted that the interior of the far side may have been cooler due to having fewer heat-producing elements—elements such as uranium, thorium and potassium, which release heat through radioactive decay.

Previous studies have suggested that this uneven distribution of heat-producing elements might have occurred after a massive asteroid or planetary body smashed into the far side, shaking up the moon's interior and pushing denser materials containing more heat-producing elements across to the near side.

Other theories are that the moon might have collided with a second, smaller moon early in its history, with near-side and far-side samples originating from two thermally different moonlets, or that the near side might be hotter due to the tug of Earth's gravity.

For the new study, the research team analyzed 300 g of lunar soil allocated to the Beijing Research Institute of Uranium Geology. Sheng He, first author from the institute, explained, "The sample collected by the Chang'e 6 mission is the first ever from the far side of the moon."

The team mapped selected parts of the sample, made up largely of grains of basalt, with an electron probe, to determine its composition.

The researchers measured tiny variations in lead isotopes using an ion probe to date the rock as 2.8 billion years old (a technique relying on the fact that uranium decays into lead at a steady rate). The data were processed using a method refined by Professor Pieter Vermeesch of UCL Earth Sciences.
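
To make the dating principle concrete, here is a minimal sketch of the simplest single-decay form of uranium-lead dating. The numbers are hypothetical placeholders for illustration only; the study itself relied on ion-probe lead-isotope measurements and more refined processing, not this toy calculation.

    import math

    # Decay constant of U-238 (half-life about 4.47 billion years).
    LAMBDA_U238 = 1.55125e-10  # per year

    def age_years(radiogenic_pb, remaining_u):
        # Age follows from daughter/parent = e^(lambda*t) - 1,
        # assuming no initial radiogenic lead and a closed system.
        return math.log(1 + radiogenic_pb / remaining_u) / LAMBDA_U238

    # A hypothetical daughter/parent atom ratio of ~0.544 corresponds to roughly 2.8 billion years:
    print(age_years(0.544, 1.0) / 1e9)  # ~2.8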

They then used several techniques to estimate the temperature of the sample while at different stages of its past when it was deep in the moon's interior.

The first was to analyze the composition of minerals and compare these to computer simulations to estimate how hot the rock was when it formed (crystallized). This was compared to similar estimates for near-side rocks, with a difference of 100 degrees C.

The second approach was to go back further in the sample's history, inferring from its chemical make-up how hot its "parent rock" would have been (i.e., before the parent rock melted into magma and later solidified again into the rock collected by Chang'e 6), comparing this to estimates for near-side samples collected by the Apollo missions. They again found about a 100 degrees C difference.

As returned samples are limited, they worked with a team from Shandong University to estimate parent rock temperatures using satellite data of the Chang'e landing site on the far side, comparing this with equivalent satellite data from the near side, again finding a difference (this time of 70 degrees C).

On the moon, heat-producing elements such as uranium, thorium and potassium tend to occur together alongside phosphorus and rare earth elements in material known as "KREEP"-rich (the acronym derives from potassium having the chemical symbol K, rare-earth elements (REE), and P for phosphorus).

The leading theory of the moon's origin is that it formed out of debris created from a massive collision between Earth and a Mars-sized protoplanet, and began wholly or mostly made of molten rock (lava or magma). This magma solidified as it cooled, but KREEP elements were incompatible with the crystals that formed and thus stayed for longer in the magma.

Scientists would expect the KREEP material to be evenly spread across the moon. Instead, it is thought to be bunched up in the near side mantle. The distribution of these elements may be why the near side has been more volcanically active.

Although the present temperature of the far and near side of the moon's mantle is not known from this study, any imbalance in temperature between the two sides will likely persist for a very long time, with the moon cooling down very slowly from the moment it formed from a catastrophic impact. However, the research team are currently working on getting a definitive answer to this question.

More information: A relatively cool lunar farside mantle inferred from Chang'e-6 basalts and remote sensing, Nature Geoscience (2025). DOI: 10.1038/s41561-025-01815-z.


Original Submission

posted by jelizondo on Saturday October 04, @04:01AM   Printer-friendly
from the not-down-under dept.

The armed forces of Austria have been moving towards open standards via free and open source software, specifically from Microsoft Office (MSO) to LibreOffice. Given the politics that have arisen in similar cases around the world, the move has been carefully planned since 2020.

"It was very important for us to show that we are doing this primarily (...) to strengthen our digital sovereignty, to maintain our independence in terms of ICT infrastructure and (...) to ensure that data is only processed in-house," emphasizes Michael Hillebrand from the Austrian Armed Forces' Directorate 6 ICT and Cyber.

"We are not doing this to save money," Hillebrand emphasized to ORF, "We are doing this so that the Armed Forces as an organization, which is there to function when everything else is down, can continue to have products that work within our sphere of influence." At the beginning of September, he and his colleague Nikolaus Stocker recounted the conversion process at the LibreOffice Conference 2025.

Some of the contributions which Austria has made back to the FOSS community around LibreOffice include:

  • A notes pane
  • Improved paste format
  • Deleting metadata more easily
  • Rotating graphics with a click
  • ... and much more

Previously:
(2025) Microsoft Bans LibreOffice Developer's Account Without Warning, Rejects Appeal
(2025) German Government Moves Closer to Ditching Microsoft: "We're Done With Teams!"
(2025) LibreOffice Adds Voice To 'Ditch Windows For Linux' Campaign
(2019) Microsoft Office 365 Banned in German Schools after Privacy Violations
(2016) OpenOffice Could be Nearing Retirement
(2015) City Council of Bern Demands Transition to FOSS Before 2019
(2015) Italian Military is Switching to LibreOffice and ODF
(2014) Another German Town Says It Has Completed Its Switch To FOSS


Original Submission

posted by janrinok on Friday October 03, @11:18PM   Printer-friendly

NASA boss says US should have 'village' on Moon in a decade:

IAC 2025: If the USA's space strategy succeeds, it will run a "village" on the moon in a decade, NASA administrator Sean Duffy told the International Astronautical Congress (IAC) in Sydney today.

Duffy appeared in a session featuring the heads of space agencies from the USA, China, Japan, India, Europe, and Canada. Readers will likely have noted the absence of Russia, a longtime space player, from that list.

The NASA boss seemingly hinted at one reason Russia's space boss is not at the Congress when he said the USA "comes in peace" to space. "We have not been in the business of taking people's land," Duffy said.

Asked what success looks like for NASA in a decade, Duffy said, "We are going to have sustained human life on the moon. Not just an outpost, but a village." And a nuclear-powered village at that, after NASA recently issued an RFI seeking commercial help to build a nuclear reactor on Luna.

Duffy also predicted that a decade from now NASA will also have "made leaps and bounds on our mission to get to Mars" and "be on the cusp of putting human boots on Mars."

The theme for this year's IAC is "Sustainable Space: Resilient Earth". Duffy's take on that is how to sustain human life in space, an objective he said is NASA's prerogative because it alone among US government agencies has a remit for exploration. He pointed out that other US government agencies have the job of considering terrestrial stability and earthly resilience, and that NASA must focus on exploration.

The other space agency heads at the event took a different view.

When European Space Agency (ESA) director general Josef Aschbacher had his turn on stage, he offered a very different vision of sustainability by pointing out that the agency he leads freely shares data from its earth observation satellites. "I am glad that we at ESA are working for the betterment of the planet," he said.

V Narayanan, the chair of India's Space Research Organization (ISRO), said ensuring food and water resources is his agency's top goal in space. Lisa Campbell, the President of the Canadian Space Agency, said that when her country first orbited an earth observation satellite it had to pay private sector organizations to use its data. "Now they want it," she said, before announcing CA$5 million ($3.6 million) to fund studies on biodiversity from space. The president of the Japan Aerospace Exploration Agency (JAXA), Dr Hiroshi Yamakawa, reminded attendees that Japan recently launched its third greenhouse gas observation satellite.

The deputy administrator of China's National Space Agency (CNSA) Zhigang Bian said his country has launched 500 earth observation satellites. He sent a little murmur around the auditorium when he said China participates in a constellation of such sats shared by members of the BRICS bloc – a loose alliance that recently expanded its membership beyond Brazil, Russia, India, China and South Africa to include Egypt, Ethiopia, Indonesia, Iran, and the United Arab Emirates. Diplomatic types see BRICS expansion as China developing institutions that rival existing blocs – on Earth and in space.

Zhigang also said China is working to make space sustainable with new measures to track orbiting debris, manage traffic in space, and provide alerts to warn if spacecraft are at risk. He said China believes those measures are necessary because the growing mega-constellations of broadband satellites increase risks for all users of space.

"China is currently researching active removal of space debris," he said.

JAXA's Dr Yamakawa said Japan's private sector outfit Astroscale is probably three years away from capturing and de-orbiting a satellite, but that doing so won't solve the space junk problem.

"We think the debris issue is one we must cope with," he said. "There is not enough time to solve for this."

Dr Yamakawa also suggested collaboration between spacefaring nations makes extraterrestrial exploration more sustainable, and pointed to the forthcoming JAXA/ISRO collaboration on the Lunar Polar Exploration (LUPEX) mission that will see a Japanese H3 rocket carry an Indian lander and a Japanese rover.

ISRO's V Narayanan said the mission will supersize India's previous Chandrayaan moon missions, by sending a 6,800 kg lander and 300 kg rover, up from the 600 kg and 25 kg sent by 2023's Chandrayaan-3 mission.


Original Submission

posted by janrinok on Friday October 03, @06:34PM   Printer-friendly

https://www.bleepingcomputer.com/news/security/cisa-warns-of-critical-linux-sudo-flaw-exploited-in-attacks/

Hackers are actively exploiting a critical vulnerability (CVE-2025-32463) in the sudo package that enables the execution of commands with root-level privileges on Linux operating systems.

The U.S. Cybersecurity and Infrastructure Security Agency (CISA) has added this vulnerability to its Known Exploited Vulnerabilities (KEV) catalog, describing it as "an inclusion of functionality from untrusted control sphere."

CISA has given federal agencies until October 20 to apply the official mitigations or discontinue the use of sudo.

A local attacker can exploit this flaw to escalate privileges by using the -R (--chroot) option, even if they are not included in the sudoers list, a configuration file that specifies which users or groups are authorized to execute commands with elevated permissions.

Sudo ("superuser do") allows system administrators to delegate their authority to certain unprivileged users while logging the executed commands and their arguments.

Officially disclosed on June 30, CVE-2025-32463 affects sudo versions 1.9.14 through 1.9.17 and has received a critical severity score of 9.3 out of 10.
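
As a quick triage aid, a short script along the following lines can compare the locally installed sudo against that affected range (a sketch assuming a Unix-like host where `sudo --version` is available; it is not an official CISA tool, and it ignores patch suffixes such as 1.9.17pN, so a match only means "check your distribution's advisory").

    import re
    import subprocess

    def sudo_version():
        # Parse the "Sudo version X.Y.Z..." line printed by `sudo --version`.
        out = subprocess.run(["sudo", "--version"], capture_output=True, text=True).stdout
        match = re.search(r"version (\d+)\.(\d+)\.(\d+)", out)
        return tuple(map(int, match.groups())) if match else None

    version = sudo_version()
    if version and (1, 9, 14) <= version <= (1, 9, 17):
        # Patch suffixes (e.g. a fixed 1.9.17pN build) are not considered here,
        # so treat a match as "investigate further", not a final verdict.
        print("sudo %d.%d.%d is within the reported affected range" % version)
    else:
        print("sudo version appears to be outside the 1.9.14-1.9.17 range")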

"An attacker can leverage sudo's -R (--chroot) option to run arbitrary commands as root, even if they are not listed in the sudoers file," explains the security advisory.

Rich Mirch, a researcher at cybersecurity services company Stratascale who discovered CVE-2025-32463, noted that the issue impacts the default sudo configuration and can be exploited without any predefined rules for the user.

On July 4, Mirch released a proof-of-concept exploit for the CVE-2025-32463 flaw, which has existed since June 2023 with the release of version 1.9.14.

However, additional exploits have circulated publicly since July 1, likely derived from the technical write-up.

CISA has warned that the CVE-2025-32463 vulnerability in sudo is being exploited in real-world attacks, although the agency has not specified the types of incidents in which it has been leveraged.

Organizations worldwide are advised to use CISA's Known Exploited Vulnerabilities catalog as a reference for prioritizing patching and implementing other security mitigations.


Original Submission

posted by janrinok on Friday October 03, @01:51PM   Printer-friendly

https://phys.org/news/2025-09-photodiode-germanium-key-chip.html

Programmable photonics devices, which use light to perform complex computations, are emerging as a key area in integrated photonics research. Unlike conventional electronics that transmit signals with electrons, these systems use photons, offering faster processing speeds, higher bandwidths, and greater energy efficiency. These advantages make programmable photonics well-suited for demanding tasks like real-time deep learning and data-intensive computing.

A major challenge, however, lies in the use of power monitors. These sensors must constantly track the optical signal's strength and provide the necessary feedback for tuning the chip's components as required. However, existing on-chip photodetectors designed for this purpose face a fundamental tradeoff. They either have to absorb a significant amount of the optical signal to achieve a strong reading, which degrades the signal's quality, or they lack the sensitivity to operate at the low power levels required without needing additional amplifiers.

As reported in Advanced Photonics, Yue Niu and Andrew W. Poon from The Hong Kong University of Science and Technology have addressed this challenge by developing a germanium-implanted silicon waveguide photodiode. Their approach overcomes the tradeoffs that have hindered existing on-chip power monitoring technologies.

A waveguide photodiode is a small light detector that can be integrated directly into an optical waveguide, which confines and transports light. Its purpose is to convert a small portion of the light traveling through the waveguide into an electrical signal that can be measured via more conventional electronics. One way to enhance this conversion is through ion implantation, a process that introduces controlled defects into the photodiode's silicon structure by bombarding it with ions.

If executed properly, these defects can absorb photons with energies too low for pure silicon, enabling the photodiode to detect light across a broader range of wavelengths.

Previous attempts to build such detectors used boron, phosphorus, or argon ions. While these approaches improved performance in some respects, they also introduced many free carriers into the silicon lattice, which in turn degraded optical performance. In contrast, the team implanted germanium ions. Germanium, a Group IV element like silicon, can replace silicon atoms in the crystal structure without introducing significant numbers of free carriers. This substitution allows the device to extend its sensitivity without compromising signal quality.

The researchers conducted various comparative experiments to test the new device under relevant conditions. The germanium-implanted photodiode showed high responsivity at both 1,310 nanometers (O-band) and 1,550 nanometers (C-band), two critical wavelengths used in telecommunications. It also demonstrated an extremely low dark current, meaning little unwanted output when no light was present, as well as very low optical absorption loss. This combination makes the device suitable for integration into photonic circuits without disturbing the primary signal flow.
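
To illustrate what an on-chip power monitor does with such readings, the sketch below converts a photocurrent into an estimate of the optical power in the waveguide using a detector's responsivity and dark current. The specific values are hypothetical, not figures from the paper, and the single "detected fraction" parameter is a simplification of the device's actual weak-absorption behavior.

    def monitored_power_mw(photocurrent_a, dark_current_a, responsivity_a_per_w, detected_fraction):
        # Subtract the dark current, convert current to detected optical power via the
        # responsivity (A/W), then scale up by the fraction of the waveguide light the
        # monitor detects to estimate the power in the main signal path, in milliwatts.
        detected_w = (photocurrent_a - dark_current_a) / responsivity_a_per_w
        return 1e3 * detected_w / detected_fraction

    # Hypothetical example: 80 nA photocurrent, 1 nA dark current, 0.05 A/W, 1% of the light detected.
    print(monitored_power_mw(80e-9, 1e-9, 0.05, 0.01))  # ~0.16 mW in the waveguide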

"We benchmarked our results with other reported on-chip linear photodetector platforms and showed that our devices are competitive across various performance metrics for power monitoring applications in self-calibrating programmable photonics," remarks Poon.Overall, this study represents a major step toward practical, large-scale programmable photonic systems. By providing a photodetector that can meet the stringent demands of on-chip monitoring, the researchers have brought the transformative potential of light-based computing closer to reality.

Beyond its immediate use in programmable photonics, the proposed device's unique characteristics also open doors to other promising applications.

"The combination of an extremely low dark current with a low bias voltage positions our device as an ideal candidate for energy-efficient, ultra-sensitive biosensing platforms, where low-noise detection of weak optical signals is paramount," explains Poon. "This would enable direct integration with microfluidics for lab-on-chip systems."

The germanium-implanted photodiode may help advance programmable photonics by improving on-chip light monitoring and could also support future applications in biosensing and lab-on-chip technologies.

More information: Yue Niu et al, Broadband sub-bandgap linear photodetection in Ge+-implanted silicon waveguide photodiode monitors, Advanced Photonics (2025). DOI: 10.1117/1.ap.7.6.066005


Original Submission

posted by janrinok on Friday October 03, @09:06AM   Printer-friendly

404MEDIA

ICE to Buy Tool that Tracks Locations of Hundreds of Millions of Phones Every Day

Documents show that ICE has gone back on its decision to not use location data remotely harvested from peoples' phones. The database is updated every day with billions of pieces of location data.

Immigration and Customs Enforcement (ICE) has bought access to a surveillance tool that is updated every day with billions of pieces of location data from hundreds of millions of mobile phones, according to ICE documents reviewed by 404 Media.

The documents explicitly show that ICE is choosing this product over others offered by the contractor's competitors because it gives ICE essentially an "all-in-one" tool for searching both masses of location data and information taken from social media. The documents also show that ICE is planning to once again use location data remotely harvested from peoples' smartphones after previously saying it had stopped the practice.

Surveillance contractors around the world create massive datasets of phones' (and, by extension, people's) movements, and then sell access to the data to government agencies. In turn, U.S. agencies have used these tools without a warrant or court order.

"The Biden Administration shut down DHS's location data purchases after an inspector general found that DHS had broken the law. Every American should be concerned that [the current administration's] hand-picked security force is once again buying and using location data without a warrant," Senator Ron Wyden told 404 Media in a statement.


Original Submission

posted by hubie on Friday October 03, @04:21AM   Printer-friendly

The AI industry has made major promises about its tech boosting the productivity of developers, allowing them to generate copious amounts of code with simple text prompts.

But those claims appear to be massively overblown, as The Register reports, with researchers finding that productivity gains are modest at best — and at worst, that AI can actually slow down human developers.

In a new report, management consultants Bain & Company found that despite being "one of the first areas to deploy generative AI," the "savings have been unremarkable" in programming.

"Generative AI arrived on the scene with sky-high expectations, and many companies rushed into pilot projects," the report reads. "Yet the results haven't lived up to the hype."

First off, "developer adoption is low" even among the companies that rolled out AI tools, the management consultancy found.

Worse yet, while some assistants saw "ten to 15 percent productivity boosts," the savings most of the time "don't translate into positive returns."

It's yet another warning shot, highlighting concerns that even in one of the most promising areas, the AI industry is struggling to live up to its enormous hype. That's despite companies pouring untold billions of dollars into its development, with analysts openly fretting about an enormous AI bubble getting closer to popping.

futurism.com


Original Submission

posted by janrinok on Thursday October 02, @11:35PM   Printer-friendly

https://phys.org/news/2025-09-insect-cultivated-proteins-healthier-greener.html

Reducing industrial animal use can help to shrink our carbon footprint and boost health—but doing so means we need nutritious meat alternatives that are also tasty and affordable.

The researchers say that by using combinations of different proteins from plants, fungi, insects, microbial fermentation, and cultivated meat, we could create tasty, nutritious, and sustainable alternatives to animal products.

As well as tackling environmental concerns, hybrids could also help to address the health and ethical impact of livestock farming such as animal welfare, zoonotic disease, and antimicrobial resistance.

"Hybrid foods could give us a delicious taste and texture without breaking the bank or the planet," said first author Prof David L. Kaplan from Tufts University in the US. "Using protein alternatives needn't necessarily come with financial, taste, or nutritional costs."

For example, by drawing on the fibrous texture of mycelium, the sensory and nutritional qualities of cultivated meat, the nutrition and sustainability of insects, the proteins, pigments, enzymes, and flavors from microbial fermentation, and the abundance and low cost of plants, hybrids could combine the best of each protein source, say the authors.

But to make this happen, the researchers call for regulatory review and academic and industry cooperation to overcome hurdles and find the best possible protein combinations for our health, sensory, environmental, and cost needs.

"To succeed, we need research and cooperation across science, industry, and regulators to improve quality, scale production, and earn consumer trust," added Prof Kaplan.

The researchers investigated different protein sources: plants (for example, soy products like tofu), insects (processed into flours and blended into foods), mycelium-based products (such as vegan commercial meat analogs), cultivated meat grown in bioreactors, and microbial fermentation products (such as proteins, pigments, enzymes, and flavors).

They assessed the strengths and weaknesses of each protein source and considered how to harness the best qualities of each—both with and without animal meat. For example, while plant proteins are cheap and scalable, they often lack the flavor and texture of meat. Meanwhile, cultivated meat more closely mimics animal meat but is expensive and hard to scale. Mycelium can add natural texture, while insects offer high nutrition with a low environmental footprint.

The researchers reviewed various combinations to compare their sensory and nutritional profiles, consumer acceptance, affordability, and scalability.

They found that while every protein source has drawbacks, combining them can overcome many of these limitations. In the short term, plant–mycelium hybrids appear most economically viable because they are scalable, nutritious, and already used in commercial products.

In the longer term, plant–cultivated meat hybrids may become more desirable, as even small amounts of cultivated cells can improve taste, texture, and nutrition once production costs fall and capacity expands.

They also point to early studies which found that substantial fractions of meat in burgers or sausages can be replaced with plant proteins without reducing consumer acceptance, and even small additions of cultivated meat or mycelium can improve the taste, texture, and nutrition of plant-based products.

"No single alternative protein source is perfect, but hybrid products give us the opportunity to overcome those hurdles, creating products that are more than the sum of their parts," said senior author Prof David Julian McClements from the University of Massachusetts Amherst, US.

As well as benefits, each protein source presents its own limitations which must be addressed before their resulting hybrids can become mainstream meat alternatives, according to the researchers.

The processing necessary for cultivating meat or combining proteins brings high costs and difficulties with scaling up production. Some protein sources need more consistent, less fragmented regulation, and others, like insect protein, face high consumer skepticism.

Many edible insects are highly nutritious and environmentally friendlier to raise than animals, and over two billion people worldwide already regularly eat insects—but consumers in developed countries are often less willing to do so.

Another concern is that many current plant-based meat alternatives require numerous ingredients and extensive processing, and are therefore classified as ultra-processed foods (UPFs), which consumers may view as unhealthy.

Observational studies show correlations between high UPF consumption and adverse health outcomes, though causation has not been established. However, the authors note that hybrids—by drawing on the natural benefits of each source—could help reduce our reliance on additives and heavy processing.

The researchers are therefore working to ensure these products are healthy as well as acceptable to consumers. Future research, they say, should focus on optimizing protein sources, developing scalable production methods, conducting environmental and economic analyses, and using AI to identify new hybrid combinations and processing methods.

More information: Hybrid alternative protein-based foods: designing a healthier and more sustainable food supply, Frontiers in Science (2025). DOI: 10.3389/fsci.2025.1599300


Original Submission

posted by janrinok on Thursday October 02, @06:46PM   Printer-friendly

Complex knots can actually be easier to untie than simple ones:

Why is untangling two small knots more difficult than unravelling one big one? Surprisingly, mathematicians have found that larger and seemingly more complex knots created by joining two simpler ones together can sometimes be easier to undo, invalidating a conjecture posed almost 90 years ago.

"We were looking for a counterexample without really having an expectation of finding one, because this conjecture had been around so long," says Mark Brittenham at the University of Nebraska at Lincoln. "In the back of our heads, we were thinking that the conjecture was likely to be true. It was very unexpected and very surprising. "

Mathematicians like Brittenham study knots by treating them as tangled loops with joined ends. One of the most important concepts in knot theory is that each knot has an unknotting number, which is the number of times you would have to sever the string, move another piece of the loop through the gap and then re-join the ends before you reached a circle with no crossings at all – known as the "unknot".

Calculating unknotting numbers can be a very computationally intensive task, and there are still knots with as few as 10 crossings that have no solution. Because of this, it can be helpful to break knots down into two or more simpler knots to analyse them, with those that can't be split any further known as prime knots, analogous to prime numbers.

But a long-standing mystery is whether the unknotting numbers of the two knots added together would give you the unknotting number of the larger knot. Intuitively, it might make sense that a combined knot would be at least as hard to undo as the sum of its constituent parts, and in 1937, it was conjectured that undoing the combined knot could never be easier.
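
Written symbolically, with u(K) for the unknotting number of a knot K and K_1 \# K_2 for the connected sum of two knots, the picture is roughly as follows (the first line is a standard fact rather than something stated explicitly in the article: unknotting each summand separately always suffices):

    u(K_1 \# K_2) \le u(K_1) + u(K_2)    % always holds
    u(K_1 \# K_2)  =  u(K_1) + u(K_2)    % the 1937 conjecture (additivity)
    u(K_1 \# K_2)  <  u(K_1) + u(K_2)    % the new counterexamples, for infinitely many pairs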

Now, Brittenham and Susan Hermiller, also at the University of Nebraska at Lincoln, have shown that there are cases when this isn't true. "The conjecture's been around for 88 years and as people continue not to find anything wrong with it, people get more hopeful that it's true," says Hermiller. "First, we found one, and then quickly we found infinitely many pairs of knots for whom the connected sum had unknotting numbers that were strictly less than the sum of the unknotting numbers of the two pieces."

"We've shown that we don't understand unknotting numbers nearly as well as we thought we did," says Brittenham. "There could be – even for knots that aren't connected sums – more efficient ways than we ever imagined for unknotting them. Our hope is that this has really opened up a new door for researchers to start exploring."

While finding and checking the counterexamples involved a combination of existing knowledge, intuition and computing power, the final stage of checking the proof was done in a decidedly more simple and practical manner: tying the knot with a piece of rope and physically untangling it to show that the researchers' predicted unknotting number was correct.

Andras Juhasz at the University of Oxford, who previously worked with AI company DeepMind to prove a different conjecture in knot theory, says that he and the company had tried to crack this latest problem about additivity in the same way, but with no luck.

"We spent at least a year or two trying to find a counterexample and without success, so we gave up," says Juhasz. "It is possible that for finding counterexamples that are like a needle in a haystack, AI is maybe not the best tool. This was a hard-to-find counterexample, I believe, because we searched pretty hard."

Despite there being many practical applications for knot theory, from cryptography to molecular biology, Nicholas Jackson at the University of Warwick, UK, is hesitant to suggest that this new result can be put to good use. "I guess we now understand a little bit more about how circles work in three dimensions than we did before," he says. "A thing that we didn't understand quite so well a couple of months ago is now understood slightly better."

Journal Reference:
Brittenham, Mark, Hermiller, Susan. Unknotting number is not additive under connected sum, (DOI: 10.48550/arXiv.2506.24088)


Original Submission

posted by janrinok on Thursday October 02, @02:03PM   Printer-friendly

Huawei's Ternary Logic Breakthrough: A Game-Changer or Just Hype?:

[Editor's Comment: The source reads as though it could have been created by AI, nevertheless it is an interesting topic and worth a discussion.--JR]

Huawei's recent patent for 'ternary logic' represents a potential breakthrough in chip technology by utilizing a three-state logic system consisting of -1, 0, and 1, rather than the traditional binary system of 0 and 1. This innovative approach could substantially reduce the number of transistors required on a chip, leading to lower energy consumption, particularly in power-hungry AI applications. The ternary logic patent, filed in September 2023 and recently disclosed, may offer significant advantages in processing efficiency and hardware design, addressing some of the physical limits faced by current chip technologies.

Ternary computing is not a novel concept; the first ternary computer was developed in 1958 at Moscow State University, indicating the feasibility of utilizing more than two states in computational logic. However, binary logic became the industry standard due to its simplicity and the development of compatible technologies. Huawei's pursuit of ternary logic comes amidst US sanctions, which have pressured the company to explore alternative technological paths. By reducing reliance on traditional chip designs, Huawei aims to innovate in a constrained environment and potentially gain a competitive edge in the AI and semiconductor sectors.

The commercial viability of Huawei's ternary logic chip presents an intriguing yet complex scenario for the tech industry. Ternary logic, which utilizes three states instead of the usual binary system's two, promises significant advancements in chip technology. By potentially reducing the number of transistors required, it could lead to decreases in both manufacturing costs and energy consumption, particularly in power-hungry AI applications. However, the road to commercial viability is laden with challenges. The tech industry must grapple with the transition from a binary-dominated ecosystem to one that could incorporate ternary systems. This includes revamping software and programming approaches to leverage the benefits of the new computing structure. Furthermore, the ability to mass-produce these chips economically and reliably remains unproven, leaving questions about whether the technology can achieve a cost-effective scale.

If successful, Huawei's ternary logic patent could disrupt current computing paradigms and lead to reduced energy consumption in AI technology, aligning with broader trends towards sustainability. The potential of such technology to alter chip design and improve AI efficiency could have far-reaching implications not only for Huawei but for the tech industry at large. Moreover, by possibly circumventing some effects of international sanctions, Huawei's efforts symbolize a form of technological resilience and ingenuity amid geopolitical challenges.

Ternary logic represents a novel approach in computing, differentiating itself from the conventional binary logic by utilizing three distinct values: -1, 0, and 1. This trinary system aims to encode information more efficiently, offering potential reductions in the complexity and power usage of AI chips. Unlike the binary system that utilizes two states (0 and 1) to process tasks, ternary logic could lead to less energy-intensive data processing, ultimately decreasing the overall power consumption of AI-driven technologies. This innovation heralds a shift from traditional methodologies, potentially streamlining both hardware requirements and computational resource demands.
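
As a concrete illustration of the -1/0/1 encoding (commonly called balanced ternary), here is a minimal sketch that converts an integer to balanced-ternary digits and compares the digit count with binary. It only illustrates the number representation; it says nothing about Huawei's actual circuit design, which the article does not detail.

    def to_balanced_ternary(n):
        # Balanced-ternary digits (-1, 0, 1) of a non-negative integer, most significant first.
        if n == 0:
            return [0]
        digits = []
        while n != 0:
            r = n % 3
            if r == 2:              # represent 2 as -1 with a carry into the next position
                digits.append(-1)
                n = (n + 1) // 3
            else:                   # digit 0 or 1, no carry
                digits.append(r)
                n //= 3
        return digits[::-1]

    n = 1000
    trits = to_balanced_ternary(n)
    print(trits)                                             # [1, 1, 0, 1, 0, 0, 1] = 729 + 243 + 27 + 1
    print(len(trits), "trits vs", n.bit_length(), "bits")    # 7 trits vs 10 bits for the same value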

The implementation of ternary logic in modern computing could signify a breakthrough in addressing the issues associated with current chip designs. With chip manufacturing approaching its physical limits, the trinary system provides an alternative pathway to enhance processing capabilities without exponentially increasing transistor counts. Huawei's recent patent reflects this innovative direction, aiming to solve power consumption dilemmas while navigating international sanctions and fostering technological advancements in AI. Embedded in this development is Huawei's strategic response to geopolitical challenges, exemplified by their proactive patent applications that emphasize reducing dependence on traditional binary constraints.


Original Submission

posted by janrinok on Thursday October 02, @09:16AM   Printer-friendly

Experts Alarmed That AI Is Now Producing Functional Viruses:

In real world experiments, a team of Stanford researchers demonstrated that a virus with AI-written DNA could target and kill specific bacteria, they announced in a study last week. It opened up a world of possibilities where artificial viruses could be used to cure diseases and fight infections.

But experts say it also opened a Pandora's box. Bad actors could just as easily use AI to crank out novel bioweapons, keeping doctors and governments on the backfoot with the outrageous pace at which these viruses can be designed, warn Tal Feldman, a Yale Law School student who formerly built AI models for the federal government, and Jonathan Feldman, a computer science and biology researcher at Georgia Tech (no word on whether the two are related).

"There is no sugarcoating the risks," the pair warned in a piecefor the Washington Post. "We're nowhere near ready for a world in which artificial intelligence can create a working virus, but we need to be — because that's the world we're now living in."

In the study, the Stanford researchers used an AI model called Evo to invent DNA for a bacteriophage, a virus that infects bacteria. Unlike a general purpose large language model like ChatGPT, which is trained on written language, Evo was exclusively trained on millions of bacteriophage genomes.

They focused on an extensively studied phage called phiX174, which is known to infect strains of the bacterium E. coli. Using the Evo AI model, the team came up with 302 candidate genomes based on phiX174 and put them to the test by using the designs to chemically assemble new viruses.

Sixteen of them worked, infecting and killing the E. coli strains. Some of them were even deadlier than the natural form of the virus.

But "while the Stanford team played it safe, what's to stop others from using open data on human pathogens to build their own models?" the two Feldmans warned. "If AI collapses the timeline for designing biological weapons, the United States will have to reduce the timeline for responding to them. We can't stop novel AI-generated threats. The real challenge is to outpace them."

That means using the same AI tech to design antibodies, antivirals, and vaccines. This work is already being done to some extent, but the vast amounts of data needed to accelerate such pioneering research "is siloed in private labs, locked up in proprietary datasets or missing entirely."

"The federal government should make building these high-quality datasets a priority," the duo opined.

From there, the federal government would need to build the necessary infrastructure to manufacture these AI-designed medicines, since the "private sector cannot justify the expense of building that capacity for emergencies that may never arrive," they argue.

Finally, the Food and Drug Administration's sluggish and creaking regulatory framework would need an overhaul. (Perhaps in a monkey's paw of such an overhaul, the FDA said it's using AI to speed-run the approval of medications.)

"Needed are new fast-tracking authorities that allow provisional deployment of AI-generated countermeasures and clinical trials, coupled with rigorous monitoring and safety measures," they said.

The serious risks posed by AI virus generation shouldn't be taken lightly. Yet, it's worth noting that the study in question hasn't made it out of peer review yet and we still don't have a full picture of how readily someone could replicate the work the scientists did.

But with agencies like the Centers for Disease Control and Prevention being gutted, and vaccines and other medical interventions being attacked by a health-crank riddled administration, there's no denying that the country's medical policy and infrastructure is in a bad place. That said, when you consider that the administration is finding any excuse to rapidly deploy AI in every corner of the government, it's worth treading lightly when we ask for more.

More on synthetic biology: Scientists Debate Whether to Halt Type of Research That Could Destroy All Life on Earth

AI Creates Bacteria-Killing Viruses: 'Extreme Caution' Warns Genome Pioneer:

A California outfit has used artificial intelligence to design viral genomes, which were then built and tested in a laboratory. Following this, bacteria were successfully infected with a number of these AI-created viruses, proving that generative models can create functional genomes.

"The first generative design of complete genomes."

That's what researchers at Stanford University and the Arc Institute in Palo Alto called the results of these experiments. A biologist at NYU Langone Health, Jef Boeke, celebrated the experiment as a substantial step towards AI-designed lifeforms, according to MIT Technology Review.

"They saw viruses with new genes, with truncated genes, and even different gene orders and arrangements," Boeke said.

The team created 302 full genomes designed by their AI, Evo, an LLM similar to ChatGPT, and introduced them to E. coli test systems. Sixteen of these designs produced successful bacteriophages, which were able to replicate and kill the bacteria.

Brian Hie, who leads the Arc Institute lab, reflected on the moment the plates revealed clearings where bacteria had died. "That was pretty striking, just actually seeing, like, this AI-generated sphere," said Hie.

The team targeted bacteriophage phiX174, a minimal DNA phage with approximately 5,000 bases across 11 genes. Around 2 million bacteriophage genomes were used to train the AI model, allowing it to learn the patterns in their makeup and gene order. It then proposed new, complete genomes.

J. Craig Venter helped create the cells with these synthetic genomes. He saw the approach as being "just a faster version of trial-and-error experiments."

"We did the manual AI version - combing through the literature, taking what was known," he explained.

Speed is the appeal here. AI prediction of protein structure could certainly speed up processes in drug and biotechnology development. The results could then be used to fight bacterial infections in, for example, farming, or even in gene therapy.

Samuel King, a student who led the project, said: "There is definitely a lot of potential for this technology."

The team excluded human-infecting viruses from the AI's training, but testing in this area could still be dangerous, warns Venter.

"One area where I urge extreme caution is any viral enhancement research,, especially when it's random so you don't know what you are getting.

"If someone did this with smallpox or anthrax, I would have grave concerns."

There are other issues with this idea, such as moving from a 'simple' phage to something more complex such as bacteria - something that AI simply won't be able to do at this point.

"The complexity would rocket from staggering to ... way way more than the number of subatomic particles in the universe," Boeke said.

Despite the challenges surrounding this test, it is an extremely impressive result - and something that could influence the future of genetic engineering.


Original Submission

posted by jelizondo on Thursday October 02, @04:31AM   Printer-friendly
from the I-wish-I-had-been-on-the-testing-team dept.

Triple-fermented Belgian beers have the longest-lasting foam; single-fermented lagers have the shortest:

For many beer lovers, a nice thick head of foam is one of life's pure pleasures, and the longer that foam lasts, the better the beer-drinking experience. A team of Swiss researchers spent seven years studying why some beer foams last longer than others and found that the degree of fermentation—i.e., whether a given beer has been singly, doubly, or triply fermented—is crucial, according to a new paper published in the journal Physics of Fluids.

[...] Individual bubbles typically form a sphere because that's the shape with the minimum surface area for any volume and hence is the most energy-efficient. One reason for the minimizing principle when it comes to a bubble's shape is that many bubbles can then tightly pack together to form a foam. But bubbles "coarsen" over time, the result of gravity pulling down on the liquid and thinning out the walls. Eventually, they start to look more like soccer balls (polyhedrons). In a coarsening foam, smaller bubbles are gradually absorbed by larger ones. There is less and less liquid to separate the individual bubbles, so they press together to fill the space.

This "jamming" is why foams are typically far more rigid than their gas (95 percent) and liquid (5 percent) components. The more tightly the bubbles jam together, the less they can move around and the greater the pressure inside them becomes, giving them properties of a solid.

Various factors can affect foam stability. For instance, in 2019, Japanese researchers investigated a phenomenon known as "collective bubble collapse," or CBC, in which breaking one bubble at the edge of a foam results in a cascading effect as the breakage spreads to other bubbles in the foam. They identified two distinct mechanisms for the resulting CBCs: a so-called "propagating mode," in which a broken bubble is absorbed into the liquid film, and a "penetrating mode," in which the breakage of a bubble causes droplets to shoot off and hit other bubbles, causing them to break in turn.

Higher levels of liquid in the foam slowed the spread of the collapse, and changing the viscosity of the fluid had no significant impact on how many bubbles broke in the CBC. Many industrial strategies for stabilizing foams rely on altering the viscosity; this shows those methods are ineffective. The researchers suggest focusing instead on using several different surfactants in the mixture. This would strengthen the resulting film to make it more resistant to breakage when hit by flying droplets.

However, as the authors of this latest paper note, "Most beers are not detergent solutions (though arguably some may taste that way)." They were inspired by a Belgian brewer's answer when they asked how he controlled fermentation: "By watching the foam." A stable foam is considered to be a sign of successful fermentation. The authors decided to investigate precisely how the various factors governing foam stability might be influenced by the fermentation process.

[...] Single-fermented lager beers had the least stable foam, with triple-fermented beers boasting the most stable foam; the foam stability of double-fermented beers fell in the middle of the range. The team also discovered that the most important factor for foam stability isn't fixed but largely depends on the type of beer. It all comes down to surface viscosity for single-fermented lagers.

But surface viscosity is not a major factor for stable foams in double- or triple-fermented beers. Instead, stability arises from differences in surface tension, i.e., Marangoni stresses—the same phenomenon behind so-called "wine tears" and the "coffee ring effect." Similarly, when a drop of watercolor paint dries, the pigment particles migrate outward toward the rim of the drop. In the case of beer foam, the persistent currents that form as a result of those differences in surface tension lend stability to the foam.

The researchers also analyzed the protein content of the beers and found that one in particular—lipid transfer protein 1 (LTP1)—was a significant factor in stabilizing beer foams, and its form depended on the degree of fermentation. In single-fermented beers, for example, the proteins are small, round particles on the surface of the bubbles. The more proteins there are, the more stable the foam will be, because those proteins form a more viscous film around the bubbles.

Those LTP1 proteins become slightly denatured during a second fermentation, forming more of a net-like structure that improves foam stability. That denaturation continues during a third fermentation, when the proteins break down into fragments with hydrophobic and hydrophilic ends, reducing surface tension. They essentially become surfactants, making the bubbles in the foam much more stable.

That said, the team was a bit surprised to find that increasing the viscosity with additional surfactants can actually make the foam more unstable because it slows down the Marangoni effects too strongly. "The stability of the foam does not depend on individual factors linearly," said co-author Jan Vermant, also of ETH Zurich. "You can't just change 'something' and get it 'right.' The key is to work on one mechanism at a time–and not on several at once. Beer obviously does this well by nature. We now know the mechanism exactly and are able to help [breweries] improve the foam of their beers."

The findings likely have broader applications as well. "This is an inspiration for other types of materials design, where we can start thinking about the most material-efficient ways [of creating stable foams]," said Vermant. "If we can't use classical surfactants, can we mimic the 2D networks that double-fermented beers have?" The group is now investigating how to prevent lubricants used in electric vehicles from foaming; developing sustainable surfactants that don't contain fluoride or silicon; and finding ways to use proteins to stabilize milk foam, among other projects.

Journal Reference: Physics of Fluids, 2025. DOI: 10.1063/5.0274943


Original Submission