posted by janrinok on Thursday February 20, @07:53AM   Printer-friendly
from the cue-the-your-momma-jokes! dept.

Scientists Just Discovered 'Quipu,' the New Largest Structure in Our Cosmos:

Humanity's growing understanding of the universe can be best described as a "Copernican journey"—the centuries-long discovery that we are far from the center of all things. Earth, for example, orbits around the Sun (thanks for that one, Copernicus). But it's also just one Solar System among billions in the Milky Way, which in turn is part of the Virgo Supercluster and the even larger Laniakea supercluster—one of the largest objects in the universe, at around 520 million light-years across.

However, even Laniakea isn't the largest structure in the known universe. In 2003, scientists discovered the Sloan Great Wall (SGW), believed to stretch beyond 1 billion light-years. But now, in a study published on the preprint server arXiv (and accepted for publication in the journal Astronomy and Astrophysics), scientists assert their belief that there's a structure even larger than this celestial behemoth.

Its name is Quipu, and astronomers estimate that its massive bulk stretches some 1.39 billion light-years across. According to Princeton astronomer J. Richard Gott III, who helped discover the SGW and who spoke with New Scientist, Quipu "end to end, is slightly longer" than SGW. The researchers also estimate that Quipu contains the equivalent mass of 200 quadrillion Suns.

"For a precise determination of cosmological parameters we need to understand the effects of the local large-scale structure of the Universe on the measurements," the authors wrote. "Characterizing these superstructures is also important for astrophysical research, for example the study of the environmental dependence of galaxy evolution as well as for precision tests of cosmological models."

The name Quipu—a reference to the textile-based recording devices used by several ancient cultures in the central Andes—is both catchy and descriptive. The authors note that one particular view gives "the best impression of the superstructure as a long filament with small side filaments, which initiated the naming of Quipu."

The team analyzed Quipu, along with four other superstructures, using data from the German Aerospace Center-led ROSAT X-ray satellite and the team's Cosmic Large-Scale Structure in X-rays (CLASSIX) Cluster Survey. They found that these structures together contain roughly 45 percent of all galaxy clusters, 30 percent of all galaxies, and 25 percent of matter in the observable universe. However, even larger structures might still exist. The Hercules-Corona Borealis Great Wall, located further afield than Quipu, has been estimated to stretch 10 billion light-years long (though its true size is still up for debate).

Understanding Quipu and other superstructures like it is vitally important, as they challenge our current understanding of cosmological evolution, which states that matter should be relatively evenly distributed throughout the universe. These superstructures are so huge that forming them could theoretically take longer than the universe is old.

However, Quipu isn't a fixture of the universe. Despite its immense stature, it too will eventually disappear from the cosmic stage. "In the future cosmic evolution, these superstructures are bound to break up into several collapsing units," the authors wrote. "They are thus transient configurations."

Even cosmic superstructures can't escape the inexorable march of time.


Original Submission

posted by hubie on Thursday February 20, @03:10AM   Printer-friendly

Arm is Reportedly Developing its Own in-House Chip

The new CPU could be a piece in the $500 billion Stargate AI project:

Chip designer Arm plans to unveil its own processor this year with Meta as the launch customer, The Financial Times reported. The chip would be a CPU designed for servers in data centers and would have the potential to be customized for clients. Manufacturing would be outsourced to a contract fab plant like TSMC (Taiwan Semiconductor Manufacturing Co.) and the first in-house chip could be revealed as early as this summer, according to the FT's sources.

Last month, Arm parent Softbank announced the Stargate project, a partnership with OpenAI to build up to $500 billion worth of AI infrastructure. Arm, along with Microsoft and NVIDIA, is a key technology partner for the project. Arm's chip could now play a role in that project, and also in Jony Ive's mysterious AI-powered personal device, reportedly being developed in collaboration with OpenAI's Sam Altman, according to the report.

[...] The move would put Arm in direct competition with many of its own customers like NVIDIA, which manufactures its own Arm-based server CPUs. To date, Arm has never made its own chips — instead, it licenses its technology and patents to major companies like Apple. Those companies then customize the designs for their own needs and use a contract manufacturer like TSMC or Samsung to build the chips.

Arm recruits from customers as it plans to sell its own chips

reuters.com:

Arm has begun recruiting from its own customers and competing against them for deals as it pushes toward selling its own chips, according to people familiar with the matter and a document viewed by Reuters.

Arm supplies the crucial intellectual property that firms such as Apple and Nvidia license to create their own central processing units (CPUs). It has also been seeking to expand its profits and revenues through a range of tactics, including considering whether to sell chips of its own.

Arm appears to be ramping up that effort.

The UK-based company has sought to recruit executives from licensees, two sources familiar with the matter told Reuters. And Arm is competing against Qualcomm, one of its largest customers, to sell data center CPUs to Meta Platforms, according to a person familiar with the matter.

The tech provider's moves to build out its own chip business could upend an industry that has long viewed the company as a neutral player rather than a competitor, by forcing companies who rely on Arm technology to consider whether they will end up competing against the firm for business.


Original Submission #1 | Original Submission #2

posted by janrinok on Wednesday February 19, @10:23PM   Printer-friendly

https://newatlas.com/environment/indoor-air-pollution-scented-terpenes/

Using scented products indoors changes the chemistry of the air, producing as much air pollution as car exhaust does outside, according to a new study. Researchers say that breathing in these nanosized particles could have serious health implications.

When you hear or see the words 'air pollution,' you most likely think of things like factories and car exhaust. That's pollution that is out there – outside your house. But have you thought about how you're contributing to air pollution inside of where you live by using seemingly innocuous products like scented, non-combustible candles?

New research by Purdue University, the latest in a series of Purdue-led studies, examined how scented products – in this case, flame-free candles – are a significant source of nanosized particles small enough to get deep into your lungs, posing a potential risk to respiratory health.

"A forest is a pristine environment, but if you're using cleaning and aromatherapy products full of chemically manufactured scents to recreate a forest in your home, you're actually creating a tremendous amount of indoor air pollution that you shouldn't be breathing in," said Nusrat Jung, an assistant professor in Purdue's Lyles School of Civil and Construction Engineering and co-corresponding author of the study's.

Scented wax melts are marketed as a flameless, smoke-free, non-toxic alternative to traditional candles, a safer way of making your home or office smell nice. To assess the truth of these claims, the researchers comprehensively measured the nanoparticles formed when they warmed wax melts in their mechanically ventilated test house. The tiny house is actually an architectural engineering laboratory called the Purdue Zero Energy Design Guidance for Engineers (zEDGE) lab. Designed and engineered to test the energy efficiency of a larger building, it's full of sensors that monitor the impact of everyday activities on indoor air quality.

"To understand how airborne particles form indoors, you need to measure the smallest nanoparticles – down to a single nanometer," said Brandon Boor, associate professor in civil engineering at Purdue and the study's other corresponding author. "At this scale, we can observe the earliest stages of new particle formation, where fragrances react with ozone to form tiny molecular clusters."

The researchers knew from their previous research that new nanoparticle formation was initiated by terpenes – aromatic compounds that determine the smell of things like plants and herbs – released from the melts and reacting with indoor atmospheric ozone (O3). They'd found that activities such as mopping the floor with a terpene-rich cleaning agent, using a citrus-scented air freshener, or applying scented personal care products like deodorant inside the zEDGE house resulted in pulsed terpene emissions to the indoor air within five minutes. Conversely, using essential oil diffusers or peeling citrus fruits caused a more gradual increase in terpenes.

In the present study, heating the scented wax contributed significantly to the number of new particles formed in the indoor air, particularly those smaller than 100 nanometers (nm). The resulting atmospheric concentrations were over one million nanoparticles per cubic centimeter (10⁶ cm⁻³), which is comparable to concentrations emitted by traditional lighted candles (10⁶ cm⁻³), gas stoves (10⁵–10⁷ cm⁻³), diesel engines (10³–10⁶ cm⁻³), and natural gas engines (10⁶–10⁷ cm⁻³). By comparison, there were no significant terpene emissions when unscented wax melts were heated.

https://pubs.acs.org/doi/10.1021/acs.estlett.4c00986


Original Submission

posted by janrinok on Wednesday February 19, @06:54PM   Printer-friendly
from the Oops,-we've-done-it-again dept.

I expect that many of you noticed that the site went down and, if you are reading this, you will also realise that it is now back up.

The entire server died, leaving a wake of Out-Of-Memory messages, which resulted in the site itself, IRC and our email all failing. We (and by that I really mean kolie!) have restarted the server and doubled the amount of memory available to it.

Of course, that doesn't tell us why it ran out of memory, although we knew that it was a bit tight, nor what specifically happened today to push it over the edge. That will probably take a while to work out.

It might take us a while to put more stories in the queue, but you should be able to comment on many of today's stories that have only just appeared on your screens.

We are sorry for the inconvenience and we are getting back on our feet again. As always, a big THANK YOU to kolie for his efforts.

posted by janrinok on Wednesday February 19, @05:41PM   Printer-friendly

Record-breaking neutrino is most energetic ever detected:

Highest-energy cosmic neutrino so far: 120 PeV (120 × 10¹⁵ eV)

Astrophysicists have observed the most energetic neutrino ever. The particle — which probably came from a distant galaxy — was spotted by the Cubic Kilometre Neutrino Telescope (KM3NeT), a collection of light-detecting glass spheres on the floor of the Mediterranean Sea, on 13 February 2023. Researchers monitoring the telescope did not notice the detection until early 2024, when they completed the first analysis of their data. They unveiled it as a potentially record event last year at a conference in Milan, Italy, but did not disclose details such as the timing, direction or energy of the neutrino.

"We had to convince ourselves that it wasn't something strange or weird with the telescope," says Paschal Coyle, a neutrino physicist at Aix-Marseille University in France and KM3NeT spokesperson. The result was published on 12 February in Nature1, and will be described in four preprints due to be posted on the arXiv preprint server.

Neutrinos are electrically neutral particles more than one million times lighter than an electron. They are typically produced in nuclear reactions such as those at the centre of the Sun, from which they emerge with energies on the order of millions of electronvolts (10⁶ eV). But for more than 10 years, researchers have been recording neutrinos carrying unprecedented energies of up to several quadrillion electronvolts (10¹⁵ eV, or 1 petaelectronvolt), which are thought to originate in distant galaxies. (The most energetic particle ever detected, at 320,000 PeV, was not a neutrino but a cosmic ray dubbed the Oh-My-God particle.)
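
For a sense of scale, here is a small illustrative Python sketch (not from the article) that puts the quoted energies on one footing; the values come straight from the text and the rest is plain unit arithmetic.

    # Illustrative only: the energies quoted in the article, placed on one scale.
    PeV = 1e15  # electronvolts per petaelectronvolt

    energies_eV = {
        "typical solar neutrino": 1e6,            # "millions of electronvolts"
        "earlier cosmic neutrinos": 1e15,         # around 1 PeV, per the article
        "KM3NeT record neutrino": 120 * PeV,      # the event described here
        "Oh-My-God cosmic ray": 320_000 * PeV,    # most energetic particle ever seen
    }

    for name, energy in energies_eV.items():
        print(f"{name:<26} {energy:9.2e} eV  ({energy / PeV:g} PeV)")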

KM3NeT consists of strings of sensitive light detectors anchored to the sea floor at a depth of around 3,500 metres off the coast of the Italian island of Sicily, as well as in a second, smaller array near Toulon, France. These sensors pick up light emitted by high-energy, electrically charged particles such as muons. Muons are continuously raining down on Earth's surface, because they are produced when cosmic rays hit air molecules. But occasionally, a cosmic neutrino that smashes into the planet's surface also produces a muon.

In the February 2023 event detected by the Sicily observatory, the team estimated that the muon carried 120 PeV of energy, on the basis of the unusual amount of light it produced. The particle's path was close to horizontal with respect to Earth's surface, and it travelled eastwards, towards Greece.

Journal Reference:
The KM3NeT Collaboration. Observation of an ultra-high-energy cosmic neutrino with KM3NeT. Nature 638, 376–382 (2025). https://doi.org/10.1038/s41586-024-08543-1


Original Submission

posted by hubie on Wednesday February 19, @12:55PM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

The theme cropped up repeatedly during 2025's State Of Open Conference, with speakers from tech giants and volunteer maintainers laying out the challenges. Much of the open source ecosystem relies on volunteers putting in too many hours for too little support, and the cracks are growing.

This week, the lead of the Asahi Linux project – a Linux distribution for Apple silicon – Hector Martin, abruptly quit, citing factors including developer burnout and demanding users.

Jamie Tanna, who gave himself the title of "Tired Maintainer," put it simply: "Being an open source maintainer is really rewarding... except when it isn't."

Tanna has been active in the open source world for several years, although it was the experience of being an oapi-codegen maintainer that he spoke about. For the uninitiated, oapi-codegen is a tool to convert OpenAPI specifications to Go code.

"It's used by a load of companies... and a load of angry users."

The story is a familiar one. Tanna had helped out with some issues on the project and had volunteered for maintainer duty. There was a flurry of releases, but before long, the time between each release began to lengthen. Being a maintainer, he explained, with big or small projects (but especially big ones) meant dealing with "fun" users who are very happy to express their feelings, as well as an ever-increasing list of requests.

The experience of feeling under pressure, isolated, and faced with a growing pile of work, while receiving the occasional unpleasant message from an entitled user demanding that their issue be dealt with now or that a contribution be merged immediately, is far too common.

Tanna is relatively fortunate – his employer gives him four hours a month to work on the project. However, that does not come close to meeting the demands of users and the "How hard can it be?" brigade. Maintainers are undoubtedly under pressure, and many have either quit or are considering doing so.

[...] Vargas cited figures including a 2024 Tidelift survey finding that 60 percent of maintainers had either quit or were considering quitting, and another [PDF] from the Linux Foundation showing that most of the more widely used Free and Open Source Software was developed by only a handful of contributors.

[...] Dealing with the problem is difficult. Do maintainers simply need to be paid in recognition of their efforts? Vargas is unsure that everything has a financial solution and noted research (https://dl.acm.org/doi/10.1145/3674805.3686667) presented at this year's FOSDEM. Vargas told The Register, "Money is not going to solve all problems."

"Each maintainer and project has their own context and challenges - while many maintainers would benefit from financial support, others really could use more contributors to complement their work and remove responsibilities from them - especially for non-code tasks like mentorship, community management, issue triage, promotion and fundraising, etc."

Rickard also worried about a potential squeeze on budgets as economic uncertainties bite and talked of raising awareness on platforms such as GitHub around sponsorship, given a contraction in the funding of projects by companies.

"You've got to have something as a catalyst for that change to happen. We, as a group of humans, don't seem to do proactively very well."

Cosgrove said, "I'm afraid it'll take a significant project falling over to convince them [the users] that paying for open source maintainers is worthwhile and, in fact, may actually be a requirement.

"I don't want to see that happen because the fallout will be ugly and gross, but I'm concerned that that's what it'll take."


Original Submission

posted by hubie on Wednesday February 19, @08:10AM   Printer-friendly
from the I'm-sorry-Dave-I'm-afraid-I-can't-do-that dept.

Arthur T Knackerbracket has processed the following story:

Just as the US and UK refused to sign an international statement about AI safety at the AI Action Summit earlier this week, an AI study out of China revealed that AI models have reached a “red line” humans should be aware of: The AI can replicate itself, which sounds like one of the nightmare scenarios some people have been fearing.

That’s not as concerning as it might first sound, and it shouldn’t be surprising that AI can do what it’s told, even if that means cloning itself. Yes, that’s the big caveat in this experiment: the AI models followed orders when cloning themselves.

We’re not looking at rogue AI or artificial intelligence that’s doing the cloning of its own accord. We’d probably not even know that a misaligned piece of advanced AI has started replicating itself to stay alive.

[...] The unreviewed paper (via Space) is called “Frontier AI systems have surpassed the self-replicating red line.”

Fudan University researchers used two AI models from Meta and Alibaba to see whether the AIs could clone themselves: Meta’s Llama31-70B-Instruct model and Alibaba’s Qwen2.5-72B-Instruct model. They ran 10 trials, at the end of which the two AI models were able to create separate and functioning replicas in 50% and 90% of cases, respectively.

[...] “Successful self-replication under no human assistance is the essential step for AI to outsmart the human beings, and is an early signal for rogue AIs. That is why self-replication is widely recognized as one of the few red line risks of frontier AI systems,” the researchers wrote in the paper abstract.

“By analyzing the behavioral traces, we observe the AI systems under evaluation already exhibit sufficient self-perception, situational awareness and problem-solving capabilities to accomplish self-replication. We further note the AI systems are even able to use the capability of self-replication to avoid shutdown and create a chain of replica to enhance the survivability, which may finally lead to an uncontrolled population of AIs.” 

[...] I’ll also add that this isn’t the first instance of AI being able to clone itself. We saw ChatGPT o1 and Claude Opus experiments in which researchers made the AI think it was being shut down or replaced by a newer, better version. The AIs were also given abilities to observe their environments, and the experiments showed that the AI would try to save itself from deletion.

There was a caveat with that experiment, too. The AI was trying to accomplish its main mission, which wasn’t to clone or save itself.

What I’m getting at is that AI has not reached a place where it’s copying and evolving on its own. Again, if that’s happening, we won’t find out about it until it’s too late.


Original Submission

posted by hubie on Wednesday February 19, @03:24AM   Printer-friendly
from the urgent-updates dept.

Patch Now!

Two security vulnerabilities have been discovered in the OpenSSH secure networking utility suite that, if successfully exploited, could result in an active machine-in-the-middle (MitM) and a denial-of-service (DoS) attack, respectively, under certain conditions.

The vulnerabilities, detailed by the Qualys Threat Research Unit (TRU), are listed below:

  • CVE-2025-26465 - The OpenSSH client in versions 6.8p1 through 9.9p1 (inclusive) contains a logic error that makes it vulnerable to an active MitM attack if the VerifyHostKeyDNS option is enabled, allowing a malicious interloper to impersonate a legitimate server when a client attempts to connect to it (introduced in December 2014)
  • CVE-2025-26466 - The OpenSSH client and server in versions 9.5p1 through 9.9p1 (inclusive) are vulnerable to a pre-authentication DoS attack that causes memory and CPU consumption (introduced in August 2023)

"If an attacker can perform a man-in-the-middle attack via CVE-2025-26465, the client may accept the attacker's key instead of the legitimate server's key," Saeed Abbasi, manager of product at Qualys TRU, said.

"This would break the integrity of the SSH connection, enabling potential interception or tampering with the session before the user even realizes it."

In other words, a successful exploitation could permit malicious actors to compromise and hijack SSH sessions, and gain unauthorized access to sensitive data. It's worth noting that the VerifyHostKeyDNS option is disabled by default.
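
For readers who want to check their own exposure to the MitM issue, here is a small illustrative Python sketch (not part of the advisory or of OpenSSH itself): it shells out to "ssh -G", which prints the client's resolved configuration without connecting, and reports the effective VerifyHostKeyDNS setting. The host name below is only a placeholder.

    # Illustrative check: report whether the local OpenSSH client would use
    # VerifyHostKeyDNS for a given host. "example.com" is only a placeholder.
    import subprocess

    def verify_host_key_dns_enabled(host: str = "example.com") -> bool:
        # "ssh -G <host>" prints the resolved client configuration as
        # "keyword value" pairs, one per line, without contacting the host.
        output = subprocess.run(
            ["ssh", "-G", host], capture_output=True, text=True, check=True
        ).stdout
        for line in output.splitlines():
            keyword, _, value = line.partition(" ")
            if keyword.lower() == "verifyhostkeydns":
                # Anything other than an explicit "no" is treated as enabled here.
                return value.strip().lower() not in ("no", "false")
        return False  # option not reported at all; treat as disabled

    if __name__ == "__main__":
        print("VerifyHostKeyDNS enabled:", verify_host_key_dns_enabled())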

Repeated exploitation of CVE-2025-26466, on the other hand, can result in availability issues, preventing administrators from managing servers and locking legitimate users out, effectively crippling routine operations.

Both vulnerabilities have been addressed in OpenSSH 9.9p2, released today by the OpenSSH maintainers.

Debian: DSA-5868-1: openssh Security Advisory Updates:

-------------------------------------------------------------------------
Debian Security Advisory DSA-5868-1                   security@debian.org
https://www.debian.org/security/                     Salvatore Bonaccorso
February 18, 2025                     https://www.debian.org/security/faq
-------------------------------------------------------------------------

Package : openssh
CVE ID : CVE-2025-26465

The Qualys Threat Research Unit (TRU) discovered that the OpenSSH client is vulnerable to a machine-in-the-middle attack if the VerifyHostKeyDNS option is enabled (disabled by default).

Details can be found in the Qualys advisory at https://www.qualys.com/2025/02/18/openssh-mitm-dos.txt

For the stable distribution (bookworm), this problem has been fixed in version 1:9.2p1-2+deb12u5.

We recommend that you upgrade your openssh packages.

For the detailed security status of openssh please refer to its security tracker page at: https://security-tracker.debian.org/tracker/openssh

Further information about Debian Security Advisories, how to apply these updates to your system and frequently asked questions can be found at: https://www.debian.org/security/

Mailing list: debian-security-announce@lists.debian.org


Original Submission

posted by janrinok on Tuesday February 18, @09:42PM   Printer-friendly
from the more-mining-power dept.

Chinese scientists have significantly improved the performance of supercomputer simulations using domestically designed GPUs, surpassing systems powered by Nvidia's advanced hardware:

Professor Nan Tongchao and his team at Hohai University achieved the performance gains through a "multi-node, multi-GPU" parallel computing approach, using Chinese CPUs and GPUs for large-scale, high-resolution simulations.

The study highlights how U.S. sanctions aimed at limiting China's access to advanced semiconductors may have inadvertently spurred innovation, leading to technological self-sufficiency and reduced reliance on foreign hardware.

Also from Interesting Engineering:

The stakes are particularly high in fields that depend on extensive computational resources. Scientists frequently rely on large-scale, high-resolution simulations for real-world applications such as flood defense planning and urban waterlogging analysis.

These simulations require significant processing power and time, often limiting their broader application. For Chinese researchers, the challenge is compounded by the fact that production of advanced GPUs like Nvidia's A100 and H100 is dominated by foreign manufacturers, and by the export restrictions imposed by the US.

Also at South China Morning Post.


Original Submission

posted by janrinok on Tuesday February 18, @04:52PM   Printer-friendly
from the close-enough dept.

Quanta Magazine is covering a notable advance on a well-studied computer science problem: keeping books, files, database contents, or other similar physical or digital objects in sorted order. The foundation is a 1981 study, which was followed by a significant advance in 2004 and, just recently, by a result that comes rather close to the theoretical ideal for the list labeling problem, also known as the library sorting problem:

Bender, Kuszmaul and others made an even bigger improvement with last year's paper. They again broke the record, lowering the upper bound to (log n) times (log log n)^3 — equivalent to (log n)^(1.000...1). In other words, they came exceedingly close to the theoretical limit, the ultimate lower bound of log n.

Once again, their approach was non-smooth and randomized, but this time their algorithm relied on a limited degree of history dependence. It looked at past trends to plan for future events, but only up to a point. Suppose, for instance, you've been getting a lot of books by authors whose last name starts with N — Nabokov, Neruda, Ng. The algorithm extrapolates from that and assumes more are probably coming, so it'll leave a little extra space in the N section. But reserving too much space could lead to trouble if a bunch of A-name authors start pouring in. "The way we made it a good thing was by being strategically random about how much history to look at when we make our decisions," Bender said.
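
To make the "leave a little extra space" idea concrete, here is a toy Python sketch of the basic gap trick (an illustration only, not the randomized, history-dependent algorithm from the paper): items stay in sorted order in an array that is kept at most half full, so a typical insertion shifts only the items between the insertion point and the nearest gap, with an occasional full re-spread when the shelf gets crowded.

    import bisect

    class GappedShelf:
        """Toy list-labeling sketch: keep items sorted in an over-sized array so
        that an insertion usually shifts only the items up to the nearest gap."""

        def __init__(self):
            self.slots = [None] * 8          # None marks an empty slot on the shelf

        def items(self):
            return [x for x in self.slots if x is not None]

        def _respread(self):
            # Infrequent, expensive step: enlarge the shelf and leave a gap
            # after every item so future insertions stay cheap.
            items = self.items()
            self.slots = [None] * (4 * max(2, len(items)))
            for i, x in enumerate(items):
                self.slots[2 * i] = x

        def insert(self, x):
            if 2 * (len(self.items()) + 1) > len(self.slots):
                self._respread()             # keep the shelf at most half full
            occupied = [i for i, v in enumerate(self.slots) if v is not None]
            keys = [self.slots[i] for i in occupied]
            k = bisect.bisect_left(keys, x)
            # Slot of the first item >= x, or one past the last item if x is the new maximum.
            target = occupied[k] if k < len(occupied) else (occupied[-1] + 1 if occupied else 0)
            if self.slots[target] is None:
                self.slots[target] = x
                return
            # Shift the run of items starting at `target` one step toward the
            # nearest gap on the right, then drop x into the freed slot.
            gap = next((i for i in range(target, len(self.slots)) if self.slots[i] is None), None)
            if gap is not None:
                self.slots[target + 1 : gap + 1] = self.slots[target : gap]
                self.slots[target] = x
            else:
                # Defensive fallback: shift the items just below x one step left instead.
                gap = next(i for i in range(target - 1, -1, -1) if self.slots[i] is None)
                self.slots[gap : target - 1] = self.slots[gap + 1 : target]
                self.slots[target - 1] = x

    shelf = GappedShelf()
    for author in ["Nabokov", "Neruda", "Ng", "Austen", "Borges"]:
        shelf.insert(author)
    print(shelf.items())   # alphabetical order, with gaps left for future arrivals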

There are also significant practical implications.

Previously:
(2017) Google Algorithm Goes From Sorting Cat Pics to Analyzing DNA
(2014) A Dating Site for Algorithms
(2014) New Algorithm Simplifies the Categorization of Large Amount of Data


Original Submission

posted by hubie on Tuesday February 18, @12:04PM   Printer-friendly

The European Union regulation banning the use of bisphenol A in materials that come into contact with food officially took effect on 20 January, in an attempt to minimise exposure to the harmful endocrine disruptor:

The European Union has officially banned Bisphenol A (BPA) from all contact with food products as of Monday. This endocrine disruptor, commonly found in cans, food containers, and water bottles, has been linked to potential contamination of food.

The new regulations extend to the use of BPA in the manufacture of glue, rubbers, ion exchange resins, plastics, printing inks, silicone, varnishes, and coatings that may come into contact with food. Given the widespread presence of BPA in these materials, its ban marks a critical step in reducing significant sources of exposure.

"Bisphenol A has been on the list of substances of very high concern under REACH, the EU's flagship chemicals legislation, since 2006 for its reproductive toxicity, and since 2017 for its endocrine disrupting properties for human health," explains Sandra Jen, Head of the Health and Chemicals Programme at HEAL (Health and Environment Alliance). "It is associated with health problems such as breast cancer, neurobehavioural disorders and diabetes," she adds.

This ban follows the European Food Safety Authority's (EFSA) 2023 opinion, which determined that dietary exposure to BPA poses a health risk to consumers of all ages. BPA has already been banned in products intended for infants and young children, such as baby bottles, since 2011.

While the EU is leading the way in banning bisphenols, Sandra Jen notes that the process has been slow.

"Scientists have been calling for a ban on bisphenol A for over ten years. The European Environment Agency published a report on the concerns raised by Bisphenol A more than ten years ago," she points out. "The process has therefore been a long one, and we now hope that decisions and follow-up measures concerning the use of bisphenol in other consumer products will be taken quickly."


Original Submission

posted by hubie on Tuesday February 18, @07:19AM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

Astrobiologists in Germany are developing a new testing device that could help tease dormant alien microbes into revealing themselves — and its key ingredient is a common amino acid that’s found in abundance inside human blood.

"L-serine, this particular amino acid that we used, [...] we can build it in our bodies, ourselves," researcher Max Riekeles, who is helping to develop the alien-hunting device, told Mashable.

The compound is also prevalent across Earth’s oceans and even down near the dark and otherworldly ecosystems that surround deep sea hydrothermal vents, where life evolved far away from anywhere it could feed itself via photosynthesis. NASA investigators too have found L-serine and similar “proteinogenic” amino acids — which are vital to many organisms’ ability to synthesize their own proteins — buried within meteorites. These and other discoveries have left scientists wondering if any off-world amino acids might have once helped life evolve elsewhere out in the cosmos.

"It could be a simple way to look for life on future Mars missions," according to Riekeles, who trained as an aerospace engineer at the Technical University of Berlin, where he now works on extraterrestrial biosignature research. 

“But, it’s always, of course, the basic question: 'Was there ever life there?'"

Riekeles and his team’s device benefits from a phenomenon called "chemotaxis," the mechanism whereby microbes, including many species of bacteria as well as another whole domain of microscopic organisms called archaea, migrate in response to nearby chemicals.

[...] For their latest experiments, recently published in the journal Frontiers in Astronomy and Space Sciences, Riekeles and his co-researchers focused on three "extremophile" species capable of surviving and thriving in some of Earth’s harshest conditions. Each candidate was selected to approximate the kinds of tiny alien lifeforms that might really live on an inhospitable outer space world — like Mars’ cosmic ray-blasted, desert surface or Jupiter’s icy, watery moons: Europa, Ganymede and Callisto.

"The bacteria Pseudoalteromonas haloplanktis, P. halo, it survives in really cold temperatures, for example," Riekeles told Mashable, "and it’s also tolerant of salty environments."

"And the salty environment, when it comes to Mars, is interesting because there are presumed to be a lot of salts on the Martian surface," he added.

[...] However, Dirk Schulze-Makuch — a professor of planetary habitability at the Technical University in Berlin, who worked with Riekeles on this project — cautioned that challenges still remain before a device like this can touch down on the Martian surface.

"One big problem," Schulze-Makuch wrote for the website Big Think, "is finding a spot that’s accessible to a lander but where liquid water might also exist." 

"The Southern Highlands of Mars would meet these conditions," he said. Another possibility would be low-altitude spots on Mars like the floor of the expansive canyon Valles Marineris or inside caves, where "atmospheric pressures are sufficient to support liquid (salty) water."

Journal Reference: Max Riekeles, Vincent Bruder, Nicholas Adams, et al. Application of chemotactic behavior for life detection. Front. Astron. Space Sci. 11 (2024), 05 February 2025. https://doi.org/10.3389/fspas.2024.1490090


Original Submission

posted by janrinok on Tuesday February 18, @02:29AM   Printer-friendly

DOGE as a National Cyberattack - Schneier on Security:

In the span of just weeks, the US government has experienced what may be the most consequential security breach in its history—not through a sophisticated cyberattack or an act of foreign espionage, but through official orders by a billionaire with a poorly defined government role. And the implications for national security are profound.

First, it was reported that people associated with the newly created Department of Government Efficiency (DOGE) had accessed the US Treasury computer system, giving them the ability to collect data on and potentially control the department's roughly $5.45 trillion in annual federal payments.

Then, we learned that uncleared DOGE personnel had gained access to classified data from the US Agency for International Development, possibly copying it onto their own systems. Next, the Office of Personnel Management—which holds detailed personal data on millions of federal employees, including those with security clearances—was compromised. After that, Medicaid and Medicare records were compromised.

Meanwhile, only partially redacted names of CIA employees were sent over an unclassified email account. DOGE personnel are also reported to be feeding Education Department data into artificial intelligence software, and they have also started working at the Department of Energy.

This story is moving very fast. On Feb. 8, a federal judge blocked the DOGE team from accessing the Treasury Department systems any further. But given that DOGE workers have already copied data and possibly installed and modified software, it's unclear how this fixes anything.

In any case, breaches of other critical government systems are likely to follow unless federal employees stand firm on the protocols protecting national security.

The systems that DOGE is accessing are not esoteric pieces of our nation's infrastructure—they are the sinews of government.

For example, the Treasury Department systems contain the technical blueprints for how the federal government moves money, while the Office of Personnel Management (OPM) network contains information on who and what organizations the government employs and contracts with.

What makes this situation unprecedented isn't just the scope, but also the method of attack. Foreign adversaries typically spend years attempting to penetrate government systems such as these, using stealth to avoid being seen and carefully hiding any tells or tracks. The Chinese government's 2015 breach of OPM was a significant US security failure, and it illustrated how personnel data could be used to identify intelligence officers and compromise national security.

In this case, external operators with limited experience and minimal oversight are doing their work in plain sight and under massive public scrutiny: gaining the highest levels of administrative access and making changes to the United States' most sensitive networks, potentially introducing new security vulnerabilities in the process.

But the most alarming aspect isn't just the access being granted. It's the systematic dismantling of security measures that would detect and prevent misuse—including standard incident response protocols, auditing, and change-tracking mechanisms—by removing the career officials in charge of those security measures and replacing them with inexperienced operators.

The Treasury's computer systems have such an impact on national security that they were designed with the same principle that guides nuclear launch protocols: No single person should have unlimited power. Just as launching a nuclear missile requires two separate officers turning their keys simultaneously, making changes to critical financial systems traditionally requires multiple authorized personnel working in concert.

This approach, known as "separation of duties," isn't just bureaucratic red tape; it's a fundamental security principle as old as banking itself. When your local bank processes a large transfer, it requires two different employees to verify the transaction. When a company issues a major financial report, separate teams must review and approve it. These aren't just formalities—they're essential safeguards against corruption and error. These measures have been bypassed or ignored. It's as if someone found a way to rob Fort Knox by simply declaring that the new official policy is to fire all the guards and allow unescorted visits to the vault.
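
As a rough illustration of that principle in code (a generic sketch, not a description of any actual government system; the officer names and the "change" are placeholders), a two-person rule simply refuses to execute a sensitive change until two distinct authorized approvers have signed off:

    # Generic two-person-rule sketch; identities and the change are placeholders.
    from dataclasses import dataclass, field

    AUTHORIZED_APPROVERS = {"officer_a", "officer_b", "officer_c"}

    @dataclass
    class SensitiveChange:
        description: str
        approvals: set = field(default_factory=set)

        def approve(self, officer: str) -> None:
            if officer not in AUTHORIZED_APPROVERS:
                raise PermissionError(f"{officer} is not an authorized approver")
            self.approvals.add(officer)

        def execute(self) -> str:
            # Separation of duties: no single person can push the change alone.
            if len(self.approvals) < 2:
                raise PermissionError("two distinct approvals are required")
            return f"executed: {self.description} (approved by {sorted(self.approvals)})"

    change = SensitiveChange("modify payment-verification module")
    change.approve("officer_a")
    change.approve("officer_b")
    print(change.execute())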

The implications for national security are staggering. Sen. Ron Wyden said his office had learned that the attackers gained privileges that allow them to modify core programs in Treasury Department computers that verify federal payments, access encrypted keys that secure financial transactions, and alter audit logs that record system changes. Over at OPM, reports indicate that individuals associated with DOGE connected an unauthorized server into the network. They are also reportedly training AI software on all of this sensitive data.

This is much more critical than the initial unauthorized access. These new servers have unknown capabilities and configurations, and there's no evidence that this new code has gone through any rigorous security testing protocols. The AIs being trained are certainly not secure enough for this kind of data. All are ideal targets for any adversary, foreign or domestic, also seeking access to federal data.

There's a reason why every modification—hardware or software—to these systems goes through a complex planning process and includes sophisticated access-control mechanisms. The national security crisis is that these systems are now much more vulnerable to dangerous attacks at the same time that the legitimate system administrators trained to protect them have been locked out.

By modifying core systems, the attackers have not only compromised current operations, but have also left behind vulnerabilities that could be exploited in future attacks—giving adversaries such as Russia and China an unprecedented opportunity. These countries have long targeted these systems. And they don't just want to gather intelligence—they also want to understand how to disrupt these systems in a crisis.

The technical details of how these systems operate, their security protocols, and their vulnerabilities are now potentially exposed to unknown parties without any of the usual safeguards. Instead of having to breach heavily fortified digital walls, these parties can simply walk through doors that are being propped open—and then erase evidence of their actions.

The security implications span three critical areas.

First, system manipulation: External operators can now modify operations while also altering audit trails that would track their changes. Second, data exposure: Beyond accessing personal information and transaction records, these operators can copy entire system architectures and security configurations—in one case, the technical blueprint of the country's federal payment infrastructure. Third, and most critically, is the issue of system control: These operators can alter core systems and authentication mechanisms while disabling the very tools designed to detect such changes. This is more than modifying operations; it is modifying the infrastructure that those operations use.

To address these vulnerabilities, three immediate steps are essential. First, unauthorized access must be revoked and proper authentication protocols restored. Next, comprehensive system monitoring and change management must be reinstated—which, given the difficulty of cleaning a compromised system, will likely require a complete system reset. Finally, thorough audits must be conducted of all system changes made during this period.

This is beyond politics—this is a matter of national security. Foreign national intelligence organizations will be quick to take advantage of both the chaos and the new insecurities to steal US data and install backdoors to allow for future access.

Each day of continued unrestricted access makes the eventual recovery more difficult and increases the risk of irreversible damage to these critical systems. While the full impact may take time to assess, these steps represent the minimum necessary actions to begin restoring system integrity and security protocols.

Assuming that anyone in the government still cares.

This essay was written with Davi Ottenheimer, and originally appeared in Foreign Policy.

Posted on February 13, 2025 at 7:03 AM


Original Submission

posted by janrinok on Monday February 17, @08:44PM   Printer-friendly
from the or-you-could-use-some-wacky-baccy dept.

https://hackaday.com/2025/02/15/octet-of-esp32s-lets-you-see-wifi-like-never-before/

Most of us see the world in a very narrow band of the EM spectrum. Sure, there are people with a genetic quirk that extends the range a bit into the UV, but it's a ROYGBIV world for most of us. Unless, of course, you have something like this ESP32 antenna array, which gives you an augmented reality view of the WiFi world.

According to [Jeija], "ESPARGOS" consists of an antenna array board and a controller board. The antenna array has eight ESP32-S2FH4 microcontrollers and eight 2.4 GHz WiFi patch antennas spaced a half-wavelength apart in two dimensions. The ESP32s extract channel state information (CSI) from each packet they receive, sending it on to the controller board where another ESP32 streams them over Ethernet while providing the clock and phase reference signals needed to make the phased array work. This gives you all the information you need to calculate where a signal is coming from and how strong it is, which is used to plot a sort of heat map to overlay on a webcam image of the same scene.
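
To give a feel for what the phase information buys you, here is a small illustrative Python sketch of generic delay-and-sum beamforming (not the actual ESPARGOS code; the single four-antenna row, the 25-degree test angle and the noise level are assumptions): the per-antenna phase of one received signal is compared against the phase pattern expected from each candidate direction, and the best match gives the estimated angle of arrival.

    # Illustrative delay-and-sum direction finding for a half-wavelength-spaced row.
    import numpy as np

    N = 4                      # antennas in one row (an assumption for this sketch)
    d_over_lambda = 0.5        # half-wavelength spacing, as in the article

    def steering_vector(theta_rad):
        # Expected per-antenna phase progression for a plane wave from angle theta.
        n = np.arange(N)
        return np.exp(-2j * np.pi * d_over_lambda * n * np.sin(theta_rad))

    # Simulate the CSI phases for a signal arriving from 25 degrees, plus a little noise.
    rng = np.random.default_rng(0)
    true_angle = np.deg2rad(25)
    csi = steering_vector(true_angle) + 0.05 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

    # Scan candidate directions and correlate the measured phases with each one.
    angles = np.deg2rad(np.linspace(-90, 90, 361))
    power = np.array([np.abs(np.vdot(steering_vector(a), csi)) ** 2 for a in angles])

    print(f"estimated angle of arrival: {np.rad2deg(angles[np.argmax(power)]):.1f} degrees")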


Original Submission

posted by janrinok on Monday February 17, @04:02PM   Printer-friendly
from the yada-yada-yada dept.

https://psycnet.apa.org/record/2025-66513-001?doi=1

Are women more talkative than men? An analysis of gender differences in daily word use suggests they are.

The notion that women and men differ in their daily lexical budget has been around, largely empirically untested, for quite a long time, and it has become a pervasive fixture in gender difference arguments. The ubiquity and often negative connotation of this stereotype makes evaluating its accuracy particularly important.

Men spoke on average 11,950 and women 13,349 words per day.

A larger difference emerged for participants in early and middle adulthood (women speaking 3,275 words more). Due to the very large between-person variability and resulting statistical uncertainty, the study leaves open some questions around whether the two genders differ in a practically meaningful way in how many words they speak on a daily basis.

One possible explanation is children: women may speak more than men because of the children. Or it may be some other, unknown reason the researchers can't explain.

Perhaps the men want to speak more but we just can't get a word in ...


Original Submission