
posted by hubie on Thursday February 20, @10:12PM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

It has been nearly a decade since famed cryptographer and privacy expert Bruce Schneier released the book Data and Goliath: The Hidden Battles to Collect Your Data and Control Your World - an examination of how government agencies and tech giants exploit personal data. Today, his predictions feel eerily accurate.

At stake, he argued then, was a possibly irreversible loss of privacy, and the archiving of everything. As he wrote, science fiction author Charlie Stross described the situation as the "end of prehistory," in that every facet of our lives would be on a computer somewhere and available to anyone who knew how to find them.

Since the book was published, we've seen data harvesting continue, particularly for training AI models. The battle to keep even the most basic facts about us private seems all but lost.

We sat down with Bruce Schneier for an update on his work, and what we can expect in the future.

The Register: Data and Goliath came out nearly two years after Snowden's leaks and just months before Congress finally made a few moves on the surveillance issue with the USA Freedom Act. Ten years on, how do you feel things have changed, if at all?

Schneier: At the same time, the information environment has gotten worse. More of our data is in the cloud, where companies have easier access to it. We have more Internet-of-Things devices around ourselves, which keep us under constant surveillance. And every one of us carries an incredibly sophisticated surveillance device around with us wherever we go: our smartphones. Everywhere you turn, privacy is losing.

[...]

The Register: If the mass privatization of the government that's looking likely happens, what are the implications of all that data being leased out to the private sector?

Schneier: And by security, I mean two things. Obviously, there's the possibility that the data will be stolen and used by foreign governments and corporations. And there is the high probability that it will end up in the hands of data brokers, and then bought and sold and combined with other data.

Surveillance in the US is largely a corporate business; this will just make it worse.


Original Submission

posted by hubie on Thursday February 20, @05:27PM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

Data storage has always depended on systems that toggle between "on" and "off" states. However, the physical size of the components storing these binary states has traditionally limited how much information can be packed into a device.

Now, researchers at the University of Chicago's Pritzker School of Molecular Engineering have developed a way to overcome this constraint. They've successfully demonstrated how missing atoms within a crystal structure can be used to store terabytes of data in a space no larger than a millimeter across.

"We found a way to integrate solid-state physics applied to radiation dosimetry with a research group that works strongly in quantum, although our work is not exactly quantum," said first author Leonardo França, a postdoctoral researcher in Zhong's lab.

Their study, published in Nanophotonics, explores how atomic-scale crystal defects can function as individual memory cells, merging quantum methodologies with classical computing principles.

Led by assistant professor Tian Zhong, the research team developed this novel storage method by introducing rare-earth ions into a crystal. Specifically, they incorporated praseodymium ions into a yttrium oxide crystal, though they suggest the approach could extend to other materials due to rare-earth elements' versatile optical properties.

The memory system is activated by a simple ultraviolet laser, which energizes the rare-earth ions, causing them to release electrons. These electrons then become trapped in the crystal's natural defects. By controlling the charge state of these gaps, the researchers effectively created a binary system, where a charged defect represents a "one" and an uncharged defect represents a "zero."
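
As a rough illustration of the encoding described above, here is a toy model in Python in which each optically addressable defect site stores one bit via its charge state. The class and method names are invented for illustration and do not reflect the group's actual control software.

    # Toy model of the defect-based storage scheme described above:
    # a charged trap encodes a 1, an uncharged trap encodes a 0.
    # Purely illustrative; names and structure are hypothetical.

    class DefectArray:
        def __init__(self, n_defects):
            # Every trap starts uncharged, i.e. all zeros.
            self.charged = [False] * n_defects

        def write_bits(self, bits):
            """'Charge' the defects whose corresponding bit is 1."""
            for i, b in enumerate(bits):
                self.charged[i] = (b == "1")

        def read_bits(self):
            """Read the stored pattern back as a bit string."""
            return "".join("1" if c else "0" for c in self.charged)

    cell = DefectArray(8)
    cell.write_bits("01101001")
    assert cell.read_bits() == "01101001"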

[...] The researchers believe this breakthrough could redefine data storage limits, paving the way for ultra-compact, high-capacity storage solutions in classical computing.

Journal Reference: França, Leonardo V. S., Doshi, Shaan, Zhang, Haitao and Zhong, Tian. "All-optical control of charge-trapping defects in rare-earth doped oxides" Nanophotonics, 2025. https://doi.org/10.1515/nanoph-2024-0635


Original Submission

posted by hubie on Thursday February 20, @12:45PM   Printer-friendly
from the can-you-hear-me-now? dept.

Are noise-cancelling headphones to blame for young people's hearing problems? They are not going deaf, but their brains are having difficulty processing sounds: constant headphone use filters out the everyday auditory environment, so the brain never gets trained to sort meaningful sounds from background noise.

... auditory processing disorder (APD), a neurological condition where the brain finds it difficult to understand sounds and spoken words.

Her audiologist and others in England are now calling for more research into whether the condition is linked to overuse of noise-cancelling headphones.

Five NHS audiology departments have told the BBC that there has been an increase in the number of young people referred to them by GPs with hearing issues - only to find that their hearing is normal when tested, and that it is their ability to process sound that is impaired.

Noise-cancelling headphones do have their benefits, particularly for long-term ear health where their soundproofing feature can prevent high frequency and loud noise from reaching and damaging the ear - even while listening to music.

... by blocking everyday sounds such as cars beeping, there is a possibility the brain can "forget" to filter out the noise.

"You have almost created this false environment by wearing those headphones of only listening to what you want to listen to. You are not having to work at it,"

https://www.bbc.com/news/articles/cgkjvr7x5x6o


Original Submission

posted by janrinok on Thursday February 20, @07:53AM   Printer-friendly
from the cue-the-your-momma-jokes! dept.

Scientists Just Discovered 'Quipu,' the New Largest Structure in Our Cosmos:

Humanity's growing understanding of the universe can be best described as a "Copernican journey"—the centuries-long discovery that we are far from the center of all things. Earth, for example, orbits around the Sun (thanks for that one, Copernicus). But ours is just one Solar System among billions in the Milky Way, which in turn is part of the Virgo Supercluster and the even larger Laniakea supercluster—one of the largest objects in the universe, at around 520 million light-years across.

However, even Laniakea isn't the largest structure in the known universe. In 2003, scientists discovered the Sloan Great Wall (SGW), believed to stretch beyond 1 billion light-years. But now, in a study published on the preprint server arXiv (and accepted for publication in the journal Astronomy and Astrophysics), scientists assert their belief that there's a structure even larger than this celestial behemoth.

Its name is Quipu, and astronomers estimate that its massive bulk stretches some 1.39 billion light-years across. According to Princeton astronomer J. Richard Gott III, who helped discover the SGW and who spoke with New Scientist, Quipu "end to end, is slightly longer" than SGW. The researchers also estimate that Quipu contains the equivalent mass of 200 quadrillion Suns.

"For a precise determination of cosmological parameters we need to understand the effects of the local large-scale structure of the Universe on the measurements," the authors wrote. "Characterizing these superstructures is also important for astrophysical research, for example the study of the environmental dependence of galaxy evolution as well as for precision tests of cosmological models."

The name Quipu—a reference to the textile-based recording devices used by several ancient cultures in the central Andes—is both catchy and descriptive. The authors note that one particular view gives "the best impression of the superstructure as a long filament with small side filaments, which initiated the naming of Quipu."

The team analyzed Quipu, along with four other superstructures, using data from the German Aerospace Center-led ROSAT X-ray satellite and the team's Cosmic Large-Scale Structure in X-rays (CLASSIX) Cluster Survey. They found that these structures together contain roughly 45 percent of all galaxy clusters, 30 percent of all galaxies, and 25 percent of matter in the observable universe. However, even larger structures might still exist. The Hercules-Corona Borealis Great Wall, located further afield than Quipu, has been estimated to stretch 10 billion light-years long (though its true size is still up for debate).

Understanding Quipu and other superstructures like it is vitally important, as they challenge our current understanding of cosmological evolution, which states that matter should be relatively evenly distributed throughout the universe. These superstructures are so huge that forming them could theoretically take longer than the universe is old.

However, Quipu isn't a fixture of the universe. Despite its immense stature, it too will eventually disappear from the cosmic stage. "In the future cosmic evolution, these superstructures are bound to break up into several collapsing units," the authors wrote. "They are thus transient configurations."

Even cosmic superstructures can't escape the inexorable march of time.


Original Submission


posted by hubie on Thursday February 20, @03:10AM   Printer-friendly

Arm is Reportedly Developing its Own in-House Chip

The new CPU could be a piece in the $500 billion Stargate AI project:

Chip designer Arm plans to unveil its own processor this year with Meta as the launch customer, The Financial Times reported. The chip would be a CPU designed for servers in data centers and would have the potential to be customized for clients. Manufacturing would be outsourced to a contract fab plant like TSMC (Taiwan Semiconductor Manufacturing Co.) and the first in-house chip could be revealed as early as this summer, according to the FT's sources.

Last month, Arm parent SoftBank announced the Stargate project, a partnership with OpenAI to build up to $500 billion worth of AI infrastructure. Arm, along with Microsoft and NVIDIA, is a key technology partner for the project. Arm's chip could now play a role in that project, and also in Jony Ive's mysterious AI-powered personal device, reportedly being developed in collaboration with OpenAI's Sam Altman, according to the report.

[...] The move would put Arm in direct competition with many of its own customers like NVIDIA, which manufactures its own Arm-based server CPUs. To date, Arm has never made its own chips — instead, it licenses its technology and patents to major companies like Apple. Those companies then customize the designs for their own needs and use a contract manufacturer like TSMC or Samsung to build the chips.

Arm recruits from customers as it plans to sell its own chips

reuters.com:

Arm has begun recruiting from its own customers and competing against them for deals as it pushes toward selling its own chips, according to people familiar with the matter and a document viewed by Reuters.

Arm supplies the crucial intellectual property that firms such as Apple and Nvidia license to create their own central processing units (CPUs). It has also been seeking to expand its profits and revenues through a range of tactics, including considering whether to sell chips of its own.

Arm appears to be ramping up that effort.

The UK-based company has sought to recruit executives from licensees, two sources familiar with the matter told Reuters. And Arm is competing against Qualcomm, one of its largest customers, to sell data center CPUs to Meta Platforms, according to a person familiar with the matter.

The tech provider's moves to build out its own chip business could upend an industry that has long viewed the company as a neutral player rather than a competitor, by forcing companies who rely on Arm technology to consider whether they will end up competing against the firm for business.


Original Submission #1 | Original Submission #2

posted by janrinok on Wednesday February 19, @10:23PM   Printer-friendly

https://newatlas.com/environment/indoor-air-pollution-scented-terpenes/

Using scented products indoors changes the chemistry of the air, producing as much air pollution as car exhaust does outside, according to a new study. Researchers say that breathing in these nanosized particles could have serious health implications.

When you hear or see the words 'air pollution,' you most likely think of things like factories and car exhaust. That's pollution that is out there – outside your house. But have you thought about how you're contributing to air pollution inside of where you live by using seemingly innocuous products like scented, non-combustible candles?

New research by Purdue University, the latest in a series of Purdue-led studies, examined how scented products – in this case, flame-free candles – are a significant source of nanosized particles small enough to get deep into your lungs, posing a potential risk to respiratory health.

"A forest is a pristine environment, but if you're using cleaning and aromatherapy products full of chemically manufactured scents to recreate a forest in your home, you're actually creating a tremendous amount of indoor air pollution that you shouldn't be breathing in," said Nusrat Jung, an assistant professor in Purdue's Lyles School of Civil and Construction Engineering and co-corresponding author of the study.

Scented wax melts are marketed as a flameless, smoke-free, non-toxic alternative to traditional candles, a safer way of making your home or office smell nice. To assess the truth of these claims, the researchers comprehensively measured the nanoparticles formed when they warmed wax melts in their mechanically ventilated test house. The tiny house is actually an architectural engineering laboratory called the Purdue Zero Energy Design Guidance for Engineers (zEDGE) lab. Designed and engineered to test the energy efficiency of a larger building, it's full of sensors that monitor the impact of everyday activities on indoor air quality.

"To understand how airborne particles form indoors, you need to measure the smallest nanoparticles – down to a single nanometer," said Brandon Boor, associate professor in civil engineering at Purdue and the study's other corresponding author. "At this scale, we can observe the earliest stages of new particle formation, where fragrances react with ozone to form tiny molecular clusters."

The researchers knew from their previous research that new nanoparticle formation was initiated by terpenes – aromatic compounds that determine the smell of things like plants and herbs – released from the melts and reacting with indoor atmospheric ozone (O3). They'd found that activities such as mopping the floor with a terpene-rich cleaning agent, using a citrus-scented air freshener, or applying scented personal care products like deodorant inside the zEDGE house resulted in pulsed terpene emissions to the indoor air within five minutes. Conversely, using essential oil diffusers or peeling citrus fruits caused a more gradual increase in terpenes.

In the present study, heating the scented wax contributed significantly to the number of new particles formed in the indoor air, particularly those smaller than 100 nanometers (nm). The resulting atmospheric concentrations were over one million nanoparticles per cubic centimeter (10^6 cm^-3), which is comparable to concentrations emitted by traditional lighted candles (10^6 cm^-3), gas stoves (10^5 – 10^7 cm^-3), diesel engines (10^3 – 10^6 cm^-3), and natural gas engines (10^6 – 10^7 cm^-3). By comparison, there were no significant terpene emissions when unscented wax melts were heated.

https://pubs.acs.org/doi/10.1021/acs.estlett.4c00986


Original Submission

posted by janrinok on Wednesday February 19, @06:54PM   Printer-friendly
from the Oops,-we've-done-it-again dept.

I expect that many noticed that the site went down and, if you are reading this, you will also realise that it is now back up.

The entire server died leaving a wake of Out-Of-Memory messages, which resulted in the site itself, IRC and our email all failing. We (and by that I really mean kolie!) have restarted the server and doubled the amount of memory available to it.

Of course, that doesn't tell us why it ran out of memory, although we knew that it was a bit tight, nor what specifically happened today to push it over the edge. That will probably take a while to work out.

It might take us a while to put more stories in the queue but you should be able to comment on many of today's stories that have only just appeared on your screens.

We are sorry for the inconvenience and we are getting back on our feet again. As always, a big THANK YOU to kolie for his efforts.

posted by janrinok on Wednesday February 19, @05:41PM   Printer-friendly

Record-breaking neutrino is most energetic ever detected:

Highest-energy cosmic neutrino so far: 120 PeV (120×10^15 eV)

Astrophysicists have observed the most energetic neutrino ever. The particle — which probably came from a distant galaxy — was spotted by the Cubic Kilometre Neutrino Telescope (KM3NeT), a collection of light-detecting glass spheres on the floor of the Mediterranean Sea, on 13 February 2023. Researchers monitoring the telescope did not notice the detection until early 2024, when they completed the first analysis of their data. They unveiled it as a potentially record event last year at a conference in Milan, Italy, but did not disclose details such as the timing, direction or energy of the neutrino.

"We had to convince ourselves that it wasn't something strange or weird with the telescope," says Paschal Coyle, a neutrino physicist at Aix-Marseille University in France and KM3NeT spokesperson. The result was published on 12 February in Nature, and will be described in four preprints due to be posted on the arXiv preprint server.

Neutrinos are electrically neutral particles more than one million times lighter than an electron. They are typically produced in nuclear reactions such as those at the centre of the Sun, from which they emerge with energies on the order of millions of electronvolts (10^6 eV). But for more than 10 years, researchers have been recording neutrinos carrying unprecedented energies of up to several quadrillion electronvolts (10^15 eV, or 1 petaelectronvolt), which are thought to originate in distant galaxies. (The most energetic particle ever detected, at 320,000 PeV, was not a neutrino but a cosmic ray dubbed the Oh-My-God particle.)
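
For a rough sense of scale, here is a back-of-the-envelope conversion of the quoted energies into joules (a sketch added for illustration, not taken from the paper):

    # Back-of-the-envelope conversion of the quoted energies to joules.
    # 1 eV is about 1.602e-19 J.
    EV_TO_J = 1.602176634e-19

    solar_scale   = 1e6 * EV_TO_J          # ~MeV, typical solar neutrino
    km3net_event  = 120e15 * EV_TO_J       # the 120 PeV KM3NeT muon
    oh_my_god_ray = 320_000e15 * EV_TO_J   # the 320,000 PeV cosmic ray

    print(f"Solar-neutrino scale: {solar_scale:.1e} J")
    print(f"KM3NeT event:         {km3net_event:.1e} J")   # roughly 0.02 J
    print(f"Oh-My-God particle:   {oh_my_god_ray:.1e} J")  # roughly 50 J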

KM3NeT consists of strings of sensitive light detectors anchored to the sea floor at a depth of around 3,500 metres off the coast of the Italian island of Sicily, as well as in a second, smaller array near Toulon, France. These sensors pick up light emitted by high-energy, electrically charged particles such as muons. Muons are continuously raining down on Earth's surface, because they are produced when cosmic rays hit air molecules. But occasionally, a cosmic neutrino that smashes into the planet's surface also produces a muon.

In the February 2023 event detected by the Sicily observatory, the team estimated that the muon carried 120 PeV of energy, on the basis of the unusual amount of light it produced. The particle's path was close to horizontal with respect to Earth's surface and travelled eastwards, towards Greece.

Journal Reference:
The KM3NeT Collaboration. Observation of an ultra-high-energy cosmic neutrino with KM3NeT. Nature 638, 376–382 (2025). https://doi.org/10.1038/s41586-024-08543-1


Original Submission

posted by hubie on Wednesday February 19, @12:55PM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

The theme cropped up repeatedly during 2025's State Of Open Conference, with speakers from tech giants and volunteer maintainers laying out the challenges. Much of the open source ecosystem relies on volunteers putting in too many hours for too little support, and the cracks are growing.

This week, the lead of the Asahi Linux project – a Linux distribution for Apple silicon – Hector Martin, abruptly quit, citing factors including developer burnout and demanding users.

Jamie Tanna, who gave himself the title of "Tired Maintainer," put it simply: "Being an open source maintainer is really rewarding... except when it isn't."

Tanna has been active in the open source world for several years, although it was the experience of being an oapi-codegen maintainer that he spoke about. For the uninitiated, oapi-codegen is a tool to convert OpenAPI specifications to Go code.

"It's used by a load of companies... and a load of angry users."

The story is a familiar one. Tanna had helped out with some issues on the project and volunteered for maintainer duty. There was a flurry of releases, but before long, the time between each release began to lengthen. Being a maintainer, he explained, with big or small projects (but especially big ones) meant dealing with "fun" users who are very happy to express their feelings, as well as an ever-increasing list of requests.

The experience of feeling under pressure, isolated, and faced with a growing pile of work while receiving the occasional unpleasant message from an entitled user demanding that their issue be dealt with now or that a contribution be merged immediately is far too common.

Tanna is relatively fortunate – his employer gives him four hours a month to work on the project. However, that does not come close to meeting the demands of users and the "How hard can it be?" brigade. Maintainers are undoubtedly under pressure, and many have either quit or are considering doing so.

[...] Vargas used figures including a 2024 Tidelift survey that put a figure of 60 percent on maintainers who had either quit or were considering quitting, and another [PDF] from the Linux Foundation showing that most of the more widely used Free Open Source Software was developed by only a handful of contributors.

[...] Dealing with the problem is difficult. Do maintainers simply need to be paid in recognition of their efforts? Vargas is unsure that everything has a financial solution and noted research (https://dl.acm.org/doi/10.1145/3674805.3686667) presented at this year's FOSDEM. Vargas told The Register, "Money is not going to solve all problems."

"Each maintainer and project has their own context and challenges - while many maintainers would benefit from financial support, others really could use more contributors to complement their work and remove responsibilities from them - especially for non-code tasks like mentorship, community management, issue triage, promotion and fundraising, etc."

Rickard also worried about a potential squeeze on budgets as economic uncertainties bite and talked of raising awareness on platforms such as GitHub around sponsorship, given a contraction in the funding of projects by companies.

"You've got to have something as a catalyst for that change to happen. We, as a group of humans, don't seem to do proactively very well."

Cosgrove said, "I'm afraid it'll take a significant project falling over to convince them [the users] that paying for open source maintainers is worthwhile and, in fact, may actually be a requirement.

"I don't want to see that happen because the fallout will be ugly and gross, but I'm concerned that that's what it'll take."


Original Submission

posted by hubie on Wednesday February 19, @08:10AM   Printer-friendly
from the I'm-sorry-Dave-I'm-afraid-I-can't-do-that dept.

Arthur T Knackerbracket has processed the following story:

Just as the US and UK refused to sign an international statement about AI safety at the AI Action Summit earlier this week, an AI study out of China revealed that AI models have reached a “red line” humans should be aware of: The AI can replicate itself, which sounds like one of the nightmare scenarios some people have been fearing.

That’s not as concerning as it might first sound, and it shouldn’t be surprising that AI can do what it’s told, even if that means cloning itself. Yes, that’s the big caveat in this experiment: the AI models followed orders when cloning themselves.

We’re not looking at rogue AI or artificial intelligence that’s doing the cloning on its own accord. We’d probably not even know that a misaligned piece of advanced AI has started replicating itself to stay alive.

[...] The unreviewed paper (via Space) is called "Frontier AI systems have surpassed the self-replicating red line."

Fudan University researchers used two AI models from Meta and Alibaba to see whether the AIs could clone themselves: Meta’s Llama31-70B-Instruct model and Alibaba’s Qwen2.5-72B-Instruct model. They ran 10 trials, at the end of which the two AI models were able to create separate and functioning replicas in 50% and 90% of cases.

[...] “Successful self-replication under no human assistance is the essential step for AI to outsmart the human beings, and is an early signal for rogue AIs. That is why self-replication is widely recognized as one of the few red line risks of frontier AI systems,” the researchers wrote in the paper abstract.

“By analyzing the behavioral traces, we observe the AI systems under evaluation already exhibit sufficient self-perception, situational awareness and problem-solving capabilities to accomplish self-replication. We further note the AI systems are even able to use the capability of self-replication to avoid shutdown and create a chain of replica to enhance the survivability, which may finally lead to an uncontrolled population of AIs.” 

[...] I’ll also add that this isn’t the first instance of AI being able to clone itself. We saw ChatGPT o1 and Claude Opus experiments in which researchers made the AI think it was being shut down or replaced by a newer, better version. The AIs were also given abilities to observe their environments, and the experiments showed that the AI would try to save itself from deletion.

There was a caveat with that experiment, too. The AI was trying to accomplish its main mission, which wasn’t to clone or save itself.

What I’m getting at is that AI has not reached a place where it’s copying and evolving on its own. Again, if that’s happening, we won’t find out about it until it’s too late.


Original Submission

posted by hubie on Wednesday February 19, @03:24AM   Printer-friendly
from the urgent-updates dept.

Patch Now!

Two security vulnerabilities have been discovered in the OpenSSH secure networking utility suite that, if successfully exploited, could result in an active machine-in-the-middle (MitM) and a denial-of-service (DoS) attack, respectively, under certain conditions.

The vulnerabilities, detailed by the Qualys Threat Research Unit (TRU), are listed below -

  • CVE-2025-26465 - The OpenSSH client contains a logic error in versions 6.8p1 through 9.9p1 (inclusive) that makes it vulnerable to an active MitM attack if the VerifyHostKeyDNS option is enabled, allowing a malicious interloper to impersonate a legitimate server when a client attempts to connect to it (Introduced in December 2014)
  • CVE-2025-26466 - The OpenSSH client and server are vulnerable to a pre-authentication DoS attack in versions 9.5p1 through 9.9p1 (inclusive) that causes memory and CPU consumption (Introduced in August 2023)

"If an attacker can perform a man-in-the-middle attack via CVE-2025-26465, the client may accept the attacker's key instead of the legitimate server's key," Saeed Abbasi, manager of product at Qualys TRU, said.

"This would break the integrity of the SSH connection, enabling potential interception or tampering with the session before the user even realizes it."

In other words, a successful exploitation could permit malicious actors to compromise and hijack SSH sessions, and gain unauthorized access to sensitive data. It's worth noting that the VerifyHostKeyDNS option is disabled by default.

Repeated exploitation of CVE-2025-26466, on the other hand, can result in availability issues, preventing administrators from managing servers and locking legitimate users out, effectively crippling routine operations.

Both vulnerabilities have been addressed in OpenSSH 9.9p2, released today by the OpenSSH maintainers.
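
As a quick triage aid, here is a minimal sketch in Python (an illustration added here, not an official detection tool or part of the advisory) that compares the locally installed OpenSSH client version, as reported by ssh -V, against the version ranges quoted above.

    # Rough check of the local OpenSSH client version against the ranges
    # quoted above (6.8p1-9.9p1 for CVE-2025-26465, 9.5p1-9.9p1 for
    # CVE-2025-26466). Illustrative only: it ignores the "pN" patch level
    # (so the fixed 9.9p2 would still be flagged) and distro backports,
    # so defer to your distribution's security tracker.
    import re
    import subprocess

    def openssh_version():
        # "ssh -V" prints e.g. "OpenSSH_9.6p1 ..." (usually to stderr).
        out = subprocess.run(["ssh", "-V"], capture_output=True, text=True)
        m = re.search(r"OpenSSH_(\d+)\.(\d+)", out.stderr + out.stdout)
        return (int(m.group(1)), int(m.group(2))) if m else None

    ver = openssh_version()
    if ver is None:
        print("Could not determine the OpenSSH client version")
    else:
        print(f"OpenSSH {ver[0]}.{ver[1]} detected")
        if (6, 8) <= ver <= (9, 9):
            print("In the range quoted for CVE-2025-26465 "
                  "(exploitable only if VerifyHostKeyDNS is enabled)")
        if (9, 5) <= ver <= (9, 9):
            print("In the range quoted for CVE-2025-26466 (pre-auth DoS)")

Whether VerifyHostKeyDNS is actually in effect for a given host can be confirmed from the client's resolved configuration (ssh -G <hostname>), and distribution packages may carry backported fixes, so the distribution's security tracker remains the authoritative source.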

Debian: DSA-5868-1: openssh Security Advisory Updates:

-------------------------------------------------------------------------
Debian Security Advisory DSA-5868-1 security@debian.org
https://www.debian.org/security/ Salvatore Bonaccorso
February 18, 2025 https://www.debian.org/security/faq
-------------------------------------------------------------------------

Package : openssh
CVE ID : CVE-2025-26465

The Qualys Threat Research Unit (TRU) discovered that the OpenSSH client is vulnerable to a machine-in-the-middle attack if the VerifyHostKeyDNS option is enabled (disabled by default).

Details can be found in the Qualys advisory at https://www.qualys.com/2025/02/18/openssh-mitm-dos.txt

For the stable distribution (bookworm), this problem has been fixed in version 1:9.2p1-2+deb12u5.

We recommend that you upgrade your openssh packages.

For the detailed security status of openssh please refer to its security tracker page at: https://security-tracker.debian.org/tracker/openssh

Further information about Debian Security Advisories, how to apply these updates to your system and frequently asked questions can be found at: https://www.debian.org/security/

Mailing list: debian-security-announce@lists.debian.org


Original Submission

posted by janrinok on Tuesday February 18, @09:42PM   Printer-friendly
from the more-mining-power dept.

Chinese scientists have significantly improved the performance of supercomputer simulations using domestically designed GPUs, surpassing systems powered by Nvidia's advanced hardware:

Professor Nan Tongchao and his team at Hohai University achieved the performance gains through a "multi-node, multi-GPU" parallel computing approach, using Chinese CPUs and GPUs for large-scale, high-resolution simulations.

The study highlights how U.S. sanctions aimed at limiting China's access to advanced semiconductors may have inadvertently spurred innovation, leading to technological self-sufficiency and reduced reliance on foreign hardware.

Also from Interesting Engineering:

The stakes are particularly high in fields that depend on extensive computational resources. Scientists frequently rely on large-scale, high-resolution simulations for real-world applications such as flood defense planning and urban waterlogging analysis.

These simulations require significant processing power and time, often limiting their broader application. For Chinese researchers, the challenge is compounded by the fact that production of advanced GPUs like Nvidia's A100 and H100 is dominated by foreign manufacturers, and by the export restrictions imposed by the US.

Also at South China Morning Post.


Original Submission

posted by janrinok on Tuesday February 18, @04:52PM   Printer-friendly
from the close-enough dept.

Quanta Magazine is covering a notable advance on a well-studied computer science problem: keeping books, files, database records, or other similar physical or digital objects in sorted order. The foundation is a 1981 study, which was followed by a significant advance in 2004 and, just recently, by work that comes remarkably close to the theoretical ideal for the list labeling problem, also known as the library sorting problem:

Bender, Kuszmaul and others made an even bigger improvement with last year's paper. They again broke the record, lowering the upper bound to (log n) times (log log n)^3 — equivalent to (log n)^1.000...1. In other words, they came exceedingly close to the theoretical limit, the ultimate lower bound of log n.

Once again, their approach was non-smooth and randomized, but this time their algorithm relied on a limited degree of history dependence. It looked at past trends to plan for future events, but only up to a point. Suppose, for instance, you've been getting a lot of books by authors whose last name starts with N — Nabokov, Neruda, Ng. The algorithm extrapolates from that and assumes more are probably coming, so it'll leave a little extra space in the N section. But reserving too much space could lead to trouble if a bunch of A-name authors start pouring in. "The way we made it a good thing was by being strategically random about how much history to look at when we make our decisions," Bender said.
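
For readers unfamiliar with the terminology: in the list labeling (library sorting) problem, n items are kept in sorted order in an array with more slots than items, and the cost is how many items must be shifted per insertion. The minimal Python sketch below illustrates only the classic baseline idea of leaving gaps and shifting items to the nearest one; it is not the randomized, history-dependent algorithm from Bender and Kuszmaul's paper, and the class and method names are invented for illustration.

    # Minimal sketch of the list-labeling / library-sorting setting:
    # keep items sorted in an array that is larger than the number of
    # items, and on each insertion shift items only as far as the
    # nearest gap. This is the classic baseline idea, not the new
    # randomized, history-dependent algorithm described in the article.

    class GappedShelf:
        def __init__(self, capacity):
            self.slots = [None] * capacity      # None marks an empty slot

        def insert(self, item):
            n = len(self.slots)
            # Index of the first stored value greater than the new item.
            pos = next((i for i, x in enumerate(self.slots)
                        if x is not None and x > item), n)
            # Prefer a gap at or after pos: shift that block right by one.
            gap = next((j for j in range(pos, n)
                        if self.slots[j] is None), None)
            if gap is not None:
                self.slots[pos + 1:gap + 1] = self.slots[pos:gap]
                self.slots[pos] = item
                return
            # Otherwise use the nearest gap before pos: shift left by one.
            gap = next((j for j in range(pos - 1, -1, -1)
                        if self.slots[j] is None), None)
            if gap is None:
                raise OverflowError("shelf is full")
            self.slots[gap:pos - 1] = self.slots[gap + 1:pos]
            self.slots[pos - 1] = item

        def items(self):
            return [x for x in self.slots if x is not None]

    shelf = GappedShelf(8)
    for book in [40, 10, 30, 20, 25]:
        shelf.insert(book)
    print(shelf.items())   # [10, 20, 25, 30, 40]

The research question is how to place and replenish the gaps so that, even against adversarial insertion sequences, the worst-case number of shifted items stays near the log n lower bound; that is where the randomization and the limited look-back at insertion history come in.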

There are also significant implications for practical applications.

Previously:
(2017) Google Algorithm Goes From Sorting Cat Pics to Analyzing DNA
(2014) A Dating Site for Algorithms
(2014) New Algorithm Simplifies the Categorization of Large Amount of Data


Original Submission

posted by hubie on Tuesday February 18, @12:04PM   Printer-friendly

The European Union regulation banning the use of bisphenol A in materials that come into contact with food officially took effect on 20 January, in an attempt to minimise exposure to the harmful endocrine disruptor:

The European Union has officially banned Bisphenol A (BPA) from all contact with food products as of Monday. This endocrine disruptor, commonly found in cans, food containers, and water bottles, has been linked to potential contamination of food.

The new regulations extend to the use of BPA in the manufacture of glue, rubbers, ion exchange resins, plastics, printing inks, silicone, varnishes, and coatings that may come into contact with food. Given the widespread presence of BPA in these materials, its ban marks a critical step in reducing significant sources of exposure.

"Bisphenol A has been on the list of substances of very high concern under REACH, the EU's flagship chemicals legislation, since 2006 for its reproductive toxicity, and since 2017 for its endocrine disrupting properties for human health," explains Sandra Jen, Head of the Health and Chemicals Programme at HEAL (Health and Environment Alliance). "It is associated with health problems such as breast cancer, neurobehavioural disorders and diabetes," she adds.

This ban follows the European Food Safety Authority's (EFSA) 2023 opinion, which determined that dietary exposure to BPA poses a health risk to consumers of all ages. BPA has already been banned in products intended for infants and young children, such as baby bottles, since 2011.

While the EU is leading the way in banning bisphenols, Sandra Jen notes that the process has been slow.

"Scientists have been calling for a ban on bisphenol A for over ten years. The European Environment Agency published a report on the concerns raised by Bisphenol A more than ten years ago," she points out. "The process has therefore been a long one, and we now hope that decisions and follow-up measures concerning the use of bisphenol in other consumer products will be taken quickly."


Original Submission

posted by hubie on Tuesday February 18, @07:19AM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

Astrobiologists in Germany are developing a new testing device that could help tease dormant alien microbes into revealing themselves — and its key ingredient is a common amino acid that’s found in abundance inside human blood.

"L-serine, this particular amino acid that we used, [...] we can build it in our bodies, ourselves," researcher Max Riekeles, who is helping to develop the alien-hunting device, told Mashable.

The compound is also prevalent across Earth’s oceans and even down near the dark and otherworldly ecosystems that surround deep sea hydrothermal vents, where life evolved far away from anywhere it could feed itself via photosynthesis. NASA investigators too have found L-serine and similar “proteinogenic” amino acids — which are vital to many organisms’ ability to synthesize their own proteins — buried within meteorites. These and other discoveries have left scientists wondering if any off-world amino acids might have once helped life evolve elsewhere out in the cosmos.

"It could be a simple way to look for life on future Mars missions," according to Riekeles, who trained as an aerospace engineer at the Technical University of Berlin, where he now works on extraterrestrial biosignature research. 

“But, it’s always, of course, the basic question: 'Was there ever life there?'"

Riekeles and his team's device benefits from a phenomenon called "chemotaxis," the mechanism whereby microbes, including many species of bacteria as well as another whole domain of microscopic organisms called archaea, migrate in response to nearby chemicals.

[...] For their latest experiments, recently published in the journal Frontiers in Astronomy and Space Sciences, Riekeles and his co-researchers focused on three "extremophile" species capable of surviving and thriving in some of Earth’s harshest conditions. Each candidate was selected to approximate the kinds of tiny alien lifeforms that might really live on an inhospitable outer space world — like Mars’ cosmic ray-blasted, desert surface or Jupiter’s icy, watery moons: Europa, Ganymede and Callisto.

"The bacteria Pseudoalteromonas haloplanktis, P. halo, it survives in really cold temperatures, for example," Riekeles told Mashable, "and it’s also tolerant of salty environments."

"And the salty environment, when it comes to Mars, is interesting because there are presumed to be a lot of salts on the Martian surface," he added.

[...] However, Dirk Schulze-Makuch — a professor of planetary habitability at the Technical University in Berlin, who worked with Riekeles on this project — cautioned that challenges still remain before a device like this can touch down on the Martian surface.

"One big problem," Schulze-Makuch wrote for the website Big Think, "is finding a spot that’s accessible to a lander but where liquid water might also exist." 

"The Southern Highlands of Mars would meet these conditions," he said. Another possibility would be low-altitude spots on Mars like the floor of the expansive canyon Valles Marineris or inside caves, where "atmospheric pressures are sufficient to support liquid (salty) water."

Journal Reference: Max Riekeles, Vincent Bruder, Nicholas Adams, et al. Application of chemotactic behavior for life detection, Frontiers in Astronomy and Space Sciences, Volume 11, 05 February 2025. https://doi.org/10.3389/fspas.2024.1490090


Original Submission