SoylentNews is people

What is your favorite keyboard trait?

  • QWERTY
  • AZERTY
  • Silent (sounds)
  • Clicky sounds
  • Thocky sounds
  • The pretty colored lights
  • I use Braille you insensitive clod
  • Other (please specify in comments)


posted by hubie on Wednesday July 24, @11:41PM   Printer-friendly

Inorganic production of oxygen in the deep ocean

https://www.sciencealert.com/mysterious-dark-oxygen-discovered-at-bottom-of-ocean-stuns-scientists

Chugging quietly away in the dark depths of Earth's ocean floors, a spontaneous chemical reaction is unobtrusively creating oxygen, all without the involvement of life.

"The discovery of oxygen production by a non-photosynthetic process requires us to rethink how the evolution of complex life on the planet might have originated," says SAMS marine scientist Nicholas Owens.

Scatterings of polymetallic nodules carpet vast areas of the ocean's bottom. We prize these very metals for their use in batteries, and it turns out that is exactly how the rocks may be spontaneously acting on the ocean floor: as batteries. Single nodules produced voltages of up to 0.95 V, so when clustered together, like batteries wired in series, they can easily reach the 1.5 V required to split oxygen from water in an electrolysis reaction.
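
As a rough check on that arithmetic, here is a minimal sketch in Python, using only the 0.95 V and 1.5 V figures quoted above and treating a cluster of nodules as cells wired in series (an assumption made purely for illustration):

    # Illustrative sketch of the series-voltage arithmetic quoted above; the 0.95 V and
    # 1.5 V figures come from the article, everything else is an assumption.
    SINGLE_NODULE_MAX_V = 0.95      # highest voltage measured on a single nodule
    ELECTROLYSIS_THRESHOLD_V = 1.5  # voltage the article cites for splitting seawater

    def cluster_voltage(n_nodules: int, per_nodule_v: float = SINGLE_NODULE_MAX_V) -> float:
        """Total voltage if n nodules happen to behave like cells wired in series."""
        return n_nodules * per_nodule_v

    for n in range(1, 4):
        v = cluster_voltage(n)
        print(f"{n} nodule(s): {v:.2f} V, enough to split water: {v >= ELECTROLYSIS_THRESHOLD_V}")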

This discovery offers a possible explanation for the mysterious stubborn persistence of ocean 'dead zones' decades after deep sea mining has ceased.

"In 2016 and 2017, marine biologists visited sites that were mined in the 1980s and found not even bacteria had recovered in mined areas. In unmined regions, however, marine life flourished," explains Geiger.

"Why such 'dead zones' persist for decades is still unknown. However, this puts a major asterisk onto strategies for sea-floor mining as ocean-floor faunal diversity in nodule-rich areas is higher than in the most diverse tropical rainforests."

As well as these massive implications for deep-sea mining, 'dark oxygen' also sparks a cascade of new questions around the origins of oxygen-breathing life on Earth.

Deep-Ocean Floor Produces its Own Oxygen:

The surprising discovery challenges long-held assumptions that only photosynthetic organisms, such as plants and algae, generate Earth's oxygen. But the new finding shows there might be another way. It appears oxygen also can be produced at the seafloor -- where no light can penetrate -- to support the oxygen-breathing (aerobic) sea life living in complete darkness.

Andrew Sweetman, of the Scottish Association for Marine Science (SAMS), made the "dark oxygen" discovery while conducting ship-based fieldwork in the Pacific Ocean. Northwestern's Franz Geiger led the electrochemistry experiments, which potentially explain the finding.

"For aerobic life to begin on the planet, there had to be oxygen, and our understanding has been that Earth's oxygen supply began with photosynthetic organisms," said Sweetman, who leads the Seafloor Ecology and Biogeochemistry research group at SAMS. "But we now know that there is oxygen produced in the deep sea, where there is no light. I think we, therefore, need to revisit questions like: Where could aerobic life have begun?"

Polymetallic nodules -- natural mineral deposits that form on the ocean floor -- sit at the heart of the discovery. A mix of various minerals, the nodules measure anywhere between tiny particles and an average potato in size.

"The polymetallic nodules that produce this oxygen contain metals such as cobalt, nickel, copper, lithium and manganese -- which are all critical elements used in batteries," said Geiger, who co-authored the study. "Several large-scale mining companies now aim to extract these precious elements from the seafloor at depths of 10,000 to 20,000 feet below the surface. We need to rethink how to mine these materials, so that we do not deplete the oxygen source for deep-sea life."

[...] Sweetman made the discovery while sampling the seabed of the Clarion-Clipperton Zone, a mountainous submarine ridge along the seafloor that extends nearly 4,500 miles along the north-east quadrant of the Pacific Ocean. When his team initially detected oxygen, he assumed the equipment must be broken.

"When we first got this data, we thought the sensors were faulty because every study ever done in the deep sea has only seen oxygen being consumed rather than produced," Sweetman said. "We would come home and recalibrate the sensors, but, over the course of 10 years, these strange oxygen readings kept showing up.

"We decided to take a back-up method that worked differently to the optode sensors we were using. When both methods came back with the same result, we knew we were onto something ground-breaking and unthought-of."

Mysterious 'Dark Oxygen' Is Being Produced On The Ocean Floor

Arthur T Knackerbracket has processed the following story:

A new form of oxygen production has been detected on the ocean floor, raising concerns about the impact of deep-sea mining on this vital ecosystem.

Researchers have discovered large amounts of oxygen being produced deep in the Pacific Ocean – and the source appears to be lumps of metal.

The researchers made the discovery in a region of the ocean 4,000 metres down, where a large amount of “polymetallic nodules” cover the ocean floor. The scientists believe that these nodules are producing this “dark oxygen”.

The team said the discovery is fascinating, as it suggests there is another source of oxygen production other than photosynthesis. It is believed that these metal nodules are acting as “geo-batteries”.

These nodules are believed to play a role in the dark oxygen production (DOP) by catalysing the splitting of water molecules. The researchers say further investigation needs to be done after this discovery to see how this process could be impacted by deep-sea mining.

[...] Sweetman said that researchers should map the areas where oxygen production is occurring before deep-sea mining occurs, due to the potential impact it could have on ecosystems.

“If there’s oxygen being produced in large amounts, it’s possibly going to be important for the animals that are living there,” he said.

Sweetman, A. K. et al. Nature Geosci. https://doi.org/10.1038/s41561-024-01480-8 (2024)


Original Submission #1 | Original Submission #2 | Original Submission #3

posted by hubie on Wednesday July 24, @06:58PM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

After years of indecision on the issue of third-party cookies, Google has finally made a decision: on Monday, the company revealed that it would no longer pursue its plan to cut off support for third-party cookies in Chrome. Instead, Google played up other options that would hand more control of privacy and tracking to Chrome users.

As one alternative solution, Google touted its Privacy Sandbox, a set of tools in Chrome designed to help you manage third-party cookies that track you and deliver targeted ads. Google said that the performance of this tool's APIs would improve over time following greater industry adoption. That transition is likely to require a lot of effort by publishers, advertisers, and other participants, so Google has something else up its sleeve.

"In light of this, we are proposing an updated approach that elevates user choice," Google said in a Monday blog post. "Instead of deprecating third-party cookies, we would introduce a new experience in Chrome that lets people make an informed choice that applies across their web browsing, and they'd be able to adjust that choice at any time."

[...] Third-party cookies have proven to be a contentious issue in the browsing world.

Users see them as a privacy violation, as advertisers use such cookies to track their activities across the internet to serve targeted ads. Regulators worry about flaws in the privacy tools available to users. Meanwhile, websites and advertisers view these cookies as a revenue source, as they provide insight into users' habits and interests. With all these parties weighing in on Google's plans, it's no wonder the company was kicking the can down the road.

[...] In an email to ZDNET, Longacre said: "If you ask me, the decision means Google is finally admitting the alternatives to third-party cookies are worse for targeting and no better for consumer privacy. That said, it was ultimately combined pressure from three groups -- regulators, advertisers, and publishers -- that influenced Google to make this decision, in my opinion."

Other browser makers have been able to cut off support for third-party cookies without issue.

[...] Google's mention of a new option in Chrome for managing third-party cookies seems hazy. The browser already offers users a way to stop third-party cookies. The process is as simple as going to Settings, selecting "Privacy and security," clicking "Third-party cookies," and then turning on the switch to block them. What more could Google add to the browser without making the process too confusing?

"I imagine this change simply means you will get an annoying pop-up like this on every new website you visit -- kind of what happens currently in the EU," Longacre said. "So yes, expect more annoying EU-style pop-ups on every site you visit. This will be bad for UX [user experience], but will keep the regulators happy on both sides of the Atlantic."

Ultimately, the entire process has been largely driven by regulators, according to Longacre, as people are upset over how their personal information is handled online. Users feel that cookies and other digital advertising tools that collect their data are intrusive, and they don't trust the tech world, he added.

"Privacy is now regarded as a fundamental right, and organizations are moving swiftly to safeguard consumer PII (personally identifiable information), with limited or no movement of consumer data and capturing of consent," Longacre said. "Google's announcement today will neither slow down nor reverse this process."


Original Submission

posted by martyb on Wednesday July 24, @02:13PM   Printer-friendly

An innovative membrane that captures carbon dioxide from the air using humidity differences has been developed. This energy-efficient method could help meet climate goals by offering a sustainable carbon dioxide source for various applications.

Direct air capture was identified as one of the ‘Seven chemical separations to change the world’. This is because although carbon dioxide is the main contributor to climate change (we release ~40 billion tons into the atmosphere every year), separating carbon dioxide from air is very challenging due to its dilute concentration (~0.04%).

Prof Ian Metcalfe, Royal Academy of Engineering Chair in Emerging Technologies in the School of Engineering, Newcastle University, UK, and lead investigator states, “Dilute separation processes are the most challenging separations to perform for two key reasons. First, due to the low concentration, the kinetics (speed) of chemical reactions targeting the removal of the dilute component are very slow. Second, concentrating the dilute component requires a lot of energy.”

These are the two challenges that the Newcastle researchers (with colleagues at the Victoria University of Wellington, New Zealand, Imperial College London, UK, Oxford University, UK, Strathclyde University, UK, and UCL, UK) set out to address with their new membrane process. By using naturally occurring humidity differences as a driving force for pumping carbon dioxide out of air, the team overcame the energy challenge. The presence of water also accelerated the transport of carbon dioxide through the membrane, tackling the kinetic challenge.
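
For a sense of the scale of the energy challenge, the thermodynamic minimum work to concentrate a dilute ideal gas follows W_min = RT ln(x_out/x_in). The sketch below assumes room temperature and an illustrative target of pure CO2; the values are chosen for illustration and are not taken from the paper:

    import math

    R = 8.314       # J/(mol*K), gas constant
    T = 298.0       # K, assumed ambient temperature
    x_in = 0.0004   # CO2 mole fraction in air (~0.04%, as quoted above)
    x_out = 1.0     # target: pure CO2 (an illustrative upper bound)

    # Reversible, isothermal minimum work to lift one mole of CO2 from x_in to x_out.
    w_min = R * T * math.log(x_out / x_in)      # J per mol of CO2
    print(f"Minimum separation work: about {w_min / 1000:.0f} kJ per mol of CO2")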

The work is published in Nature Energy and Dr. Greg A. Mutch, Royal Academy of Engineering Fellow in the School of Engineering, Newcastle University, UK explains, “Direct air capture will be a key component of the energy system of the future. It will be needed to capture the emissions from mobile, distributed sources of carbon dioxide that cannot easily be decarbonized in other ways.”

“In our work, we demonstrate the first synthetic membrane capable of capturing carbon dioxide from air and increasing its concentration without a traditional energy input like heat or pressure. I think a helpful analogy might be a water wheel on a flour mill. Whereas a mill uses the downhill transport of water to drive milling, we use it to pump carbon dioxide out of the air.”

Separation processes underpin most aspects of modern life. From the food we eat, to the medicines we take, and the fuels or batteries in our car, most products we use have been through several separation processes. Moreover, separation processes are important for minimizing waste and the need for environmental remediation, such as direct air capture of carbon dioxide.

However, in a world moving towards a circular economy, separation processes will become even more critical. Here, direct air capture might be used to provide carbon dioxide as a feedstock for making many of the hydrocarbon products we use today, but in a carbon-neutral, or even carbon-negative, cycle.

Most importantly, alongside transitioning to renewable energy and traditional carbon capture from point sources like power plants, direct air capture is necessary for realizing climate targets, such as the 1.5 °C goal set by the Paris Agreement.

Dr. Evangelos Papaioannou, Senior Lecturer in the School of Engineering, Newcastle University, UK explains, “In a departure from typical membrane operation, and as described in the research paper, the team tested a new carbon dioxide-permeable membrane with a variety of humidity differences applied across it. When the humidity was higher on the output side of the membrane, the membrane spontaneously pumped carbon dioxide into that output stream.”

Using X-ray micro-computed tomography with collaborators at UCL and the University of Oxford, the team was able to precisely characterize the structure of the membrane. This enabled them to provide robust performance comparisons with other state-of-the-art membranes.

A key aspect of the work was modeling the processes occurring in the membrane at the molecular scale. Using density-functional-theory calculations with a collaborator affiliated to both Victoria University of Wellington and Imperial College London, the team identified ‘carriers’ within the membrane. The carrier uniquely transports both carbon dioxide and water but nothing else. Water is required to release carbon dioxide from the membrane, and carbon dioxide is required to release water. Because of this, the energy from a humidity difference can be used to drive carbon dioxide through the membrane from a low concentration to a higher concentration.
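
As a back-of-the-envelope comparison, each mole of water moving from the wetter side to the drier side releases roughly RT ln(RH_wet/RH_dry) of free energy, which is what pays for pumping carbon dioxide uphill. The humidities and CO2 enrichment below are assumed for illustration and are not the ratios measured in the paper:

    import math

    R, T = 8.314, 298.0               # J/(mol*K) and an assumed ambient temperature in K
    rh_wet, rh_dry = 0.9, 0.3         # assumed relative humidities on the two sides
    co2_in, co2_out = 0.0004, 0.05    # assumed CO2 mole fractions: air in, enriched stream out

    g_water = R * T * math.log(rh_wet / rh_dry)   # free energy released per mol of water, J
    w_co2 = R * T * math.log(co2_out / co2_in)    # work needed per mol of CO2 pumped uphill, J

    print(f"Water gradient supplies ~{g_water:.0f} J/mol; CO2 lift needs ~{w_co2:.0f} J/mol")
    print(f"So at least {w_co2 / g_water:.1f} mol of water per mol of CO2 in this toy case")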

Prof Metcalfe adds, “This was a real team effort over several years. We are very grateful for the contributions from our collaborators, and for the support from the Royal Academy of Engineering and the Engineering & Physical Sciences Research Council.”

I.S. Metcalfe, G.A. Mutch, E.I. Papaioannou, et al. "Separation and concentration of carbon dioxide from air using a humidity-driven molten-carbonate membrane", Nature Energy, 19 July 2024. (DOI: 10.1038/s41560-024-01588-6)


Original Submission

posted by janrinok on Wednesday July 24, @09:38AM   Printer-friendly
from the when-will-we-break-1nm? dept.

Arthur T Knackerbracket has processed the following story:

Last week, Applied Materials pulled back the curtain on its latest materials engineering solutions designed to enable copper wiring to scale down to 2nm dimensions and below while also reducing electrical resistance and strengthening chips for 3D stacking.

The company's Black Diamond low-k dielectric material has been offered since the early 2000s. It surrounds copper wires with a special film engineered to reduce the buildup of electrical charges that increase power consumption and cause interference between electrical signals.

Applied Materials has now come up with an enhanced version of Black Diamond, which reduces the minimum k-value even further, enabling copper wiring scaling to the 2nm node while also increasing mechanical strength – a critical property as chipmakers look to stack multiple logic and memory dies vertically.

But scaling the copper wiring itself as dimensions shrink is another enormous challenge. Today's most cutting-edge logic chips can pack over 60 miles of copper wires that are fashioned by first etching trenches into the dielectric material and then depositing an ultra-thin barrier layer to prevent copper migration. A liner layer goes down next to aid copper adhesion before the final copper deposition fills the remaining space.

The problem is that at 2nm dimensions and below, the barrier and liner layers consume an increasingly large percentage of the available trench volume, leaving little room for sufficient copper fill and risking high resistance and reliability issues. Applied Materials has solved this predicament with this brand-new materials concoction.

Their latest Integrated Materials Solution (IMS) combines six different core technologies into one high-vacuum system, including an industry-first pairing of ruthenium and cobalt to form an ultra-thin 2nm binary metal liner. This allows a 33% reduction in liner thickness compared to previous generations while also improving surface properties for seamless, void-free copper adhesion and reflow. The end result is up to 25% lower electrical resistance in chip wiring to boost performance and reduce power leakage.
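
To see why a few nanometres of liner matter, treat the wire as a copper core whose resistance follows R = ρL/A, with the barrier and liner eating into the trench cross-section. The geometry below is invented for illustration; the article gives only the 33% liner reduction and the up-to-25% resistance figure, which also reflects the new liner materials rather than geometry alone:

    RHO_CU = 1.7e-8     # ohm*m, bulk copper resistivity (real resistivity at these scales is higher)
    LENGTH = 1e-6       # m, a 1-micrometre wire segment (assumed)
    TRENCH_W = 20e-9    # m, assumed trench width
    TRENCH_H = 40e-9    # m, assumed trench height
    BARRIER = 2e-9      # m, assumed barrier thickness per surface

    def wire_resistance(liner_thickness: float) -> float:
        """Resistance of the copper core left after barrier + liner coat the trench walls and floor."""
        cu_w = TRENCH_W - 2 * (BARRIER + liner_thickness)   # both sidewalls
        cu_h = TRENCH_H - (BARRIER + liner_thickness)       # trench floor only
        return RHO_CU * LENGTH / (cu_w * cu_h)

    r_old = wire_resistance(3e-9)   # assumed previous-generation liner, 3 nm
    r_new = wire_resistance(2e-9)   # roughly 33% thinner liner, 2 nm
    print(f"Resistance drops by about {100 * (1 - r_new / r_old):.0f}% in this toy geometry")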

Applied Materials claims that all leading logic chipmakers have already adopted its new copper barrier seed IMS with ruthenium CVD technology for 3nm chip production, with 2nm nodes expected to follow.


Original Submission

posted by janrinok on Wednesday July 24, @04:53AM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

UK communications regulator Ofcom has banned mid-contract price rises linked to inflation.

The change, which comes into effect from January 2025, means that price rises must be clearly written into contracts. Ofcom noted that BT and Vodafone had already changed their pricing practices accordingly.

Cristina Luna-Esteban, Ofcom Telecoms Policy Director, criticized the practice of vendors tying customers into contracts where the price could change based on inflation. Future inflation is difficult to predict, after all.

Luna-Esteban said. "We're stepping in on behalf of phone, broadband and pay TV customers to stamp out this practice, so people can be certain of the price they will pay, compare deals more easily and take advantage of the competitive market we have in the UK."

Ofcom proposed the ban in 2023 after UK inflation soared during the previous years, making it impossible for customers to predict what they might be paying during a contract's term. The imposition of early termination fees for customers seeking to escape what they saw as an unexpected rise added to the pain.

In theory, a customer could exit a contract without penalty if they weren't made aware of potential rises when signing the contract. However, providers were able to get around this by simply saying prices would rise by whatever the consumer price index was at the time, plus a certain percentage.

Therefore, the customer was made aware of a rise – but didn't know what it would be.
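
A worked example of the kind of clause Ofcom is banning, with invented figures, shows why a customer signing such a contract could not know the eventual price:

    # Hypothetical 'CPI + X%' clause of the sort described above; every figure is invented.
    monthly_price = 30.00        # GBP per month at signing
    contracted_uplift = 0.039    # the fixed "plus 3.9%" written into the contract
    cpi_at_review = 0.111        # CPI published at the annual review, unknowable at signing

    new_price = monthly_price * (1 + cpi_at_review + contracted_uplift)
    print(f"New monthly price: GBP {new_price:.2f}")   # about GBP 34.50 in this example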

Ofcom's solution is to require the provider to clearly disclose the rises to avoid a situation in which customers do not know how much they will be expected to pay during their contract term.

[...] "Finally, broadband and mobile customers will know ahead of time exactly what they will pay for the duration of a contract, making it easier for them to properly manage their finances."


Original Submission

posted by janrinok on Tuesday July 23, @11:15PM   Printer-friendly

Botanists vote to remove racist reference from plants' scientific names:

[ Editor's Comment: caffra means 'infidel' in Arabic, and it was used as a racial slur against black (non-arabic) people, predominantly in South Africa. ]

Scientists have voted to eliminate the names of certain plants that are deemed to be racially offensive. The decision to remove a label that contains such a slur was taken last week after a gruelling six-day session attended by more than 100 researchers, as part of the International Botanical Congress, which officially opens on Sunday in Madrid.

The effect of the vote will be that all plants, fungi and algae names that contain the word caffra, which originates in insults made against Black people, will be replaced by the word affra to denote their African origins. More than 200 species will be affected, including the coast coral tree, which will be known as Erythrina affra instead of Erythrina caffra.

The scientists attending the nomenclature session also agreed to create a special committee which would rule on names given to newly discovered plants, fungi and algae. These are usually named by those who first describe them in the scientific literature. However, the names could now be overruled by the committee if they are deemed to be derogatory to a group or race.

A more general move to rule on other controversial historical labels was not agreed by botanists. Nevertheless, the changes agreed last week are the first alterations that taxonomists have officially agreed to make to the rules for naming species, and they were welcomed by the botanist Sandy Knapp of the Natural History Museum in London, who presided over the six-day nomenclature session.

"This is an absolutely monumental first step in addressing an issue that has become a real problem in botany and also in other biological sciences," she told the Observer. "It is a very important start."

The change to remove the word caffra from species names was proposed by the plant taxonomist Prof Gideon Smith of Nelson Mandela University in South Africa, and his colleague Prof Estrela Figueiredo. They have campaigned for years for changes to be made to the international system for giving scientific names to plants and animals in order to permit the deletion and substitution of past names deemed objectionable.

"We are very pleased with the retroactive and permanent eradication of a racial slur from botanical nomenclature," Smith told the Observer. "It is most encouraging that more than 60% of our international colleagues supported this proposal."

And the Australian plant taxonomist Kevin Thiele – who had originally pressed for historical past names to be subject to changes as well as future names – told Nature that last week's moves were "at least a sliver of recognition of the issue".

Plant names are only a part of the taxonomic controversy, however. Naming animals after racists, fascists and other controversial figures causes just as many headaches as those posed by plants, say scientists. Examples include a brown, eyeless beetle which has been named after Adolf Hitler. Nor is Anophthalmus hitleri alone. Many other species' names recall individuals that offend, such as the moth Hypopta mussolinii.

The International Commission on Zoological Nomenclature (ICZN) has so far refused to consider changing its rules to allow the removal of racist or fascist references. Renaming would be disruptive, while replacement names could one day be seen as offensive "as attitudes change in the future", it announced in the Zoological Journal of the Linnean Society last year. Nevertheless, many researchers have acknowledged that some changes will have to be made to zoological nomenclature rules in the near future.


Original Submission

posted by janrinok on Tuesday July 23, @05:31PM   Printer-friendly
from the fingers-crossed dept.

Academic journals are a lucrative scam – and we're determined to change that:

'It's never been more evident that for-profit publishing simply does not align with the aims of scholarly inquiry.' Photograph: agefotostock/Alamy

Giant publishers are bleeding universities dry, with profit margins that rival Google's. So we decided to start our own

If you've ever read an academic article, the chances are that you were unwittingly paying tribute to a vast profit-generating machine that exploits the free labour of researchers and siphons off public funds.

The annual revenues of the "big five" commercial publishers – Elsevier, Wiley, Taylor & Francis, Springer Nature, and SAGE – are each in the billions, and some have staggering profit margins approaching 40%, surpassing even the likes of Google. Meanwhile, academics do almost all of the substantive work to produce these articles free of charge: we do the research, write the articles, vet them for quality and edit the journals.

Not only do these publishers not pay us for our work; they then sell access to these journals to the very same universities and institutions that fund the research and editorial labour in the first place. Universities need access to journals because these are where most cutting-edge research is disseminated. But the cost of subscribing to these journals has become so exorbitantly expensive that some universities are struggling to afford them. Consequently, many researchers (not to mention the general public) remain blocked by paywalls, unable to access the information they need. If your university or library doesn't subscribe to the main journals, downloading a single paywalled article on philosophy or politics can cost between £30 and £40.

The commercial stranglehold on academic publishing is doing considerable damage to our intellectual and scientific culture. As disinformation and propaganda spread freely online, genuine research and scholarship remains gated and prohibitively expensive. For the past couple of years, I worked as an editor of Philosophy & Public Affairs, one of the leading journals in political philosophy. It was founded in 1972, and it has published research from renowned philosophers such as John Rawls, Judith Jarvis Thomson and Peter Singer. Many of the most influential ideas in our field, on topics from abortion and democracy to famine and colonialism, started out in the pages of this journal. But earlier this year, my co-editors and I and our editorial board decided we'd had enough, and resigned en masse.

We were sick of the academic publishing racket and had decided to try something different. We wanted to launch a journal that would be truly open access, ensuring anyone could read our articles. This will be published by the Open Library of Humanities, a not-for-profit publisher funded by a consortium of libraries and other institutions. When academic publishing is run on a not-for-profit basis, it works reasonably well. These publishers provide a real service and typically sell the final product at a reasonable price to their own community. So why aren't there more of them?

To answer this, we have to go back a few decades, when commercial publishers began buying up journals from university presses. Exploiting their monopoly position, they then sharply raised prices. Today, a library subscription to a single journal in the humanities or social sciences typically costs more than £1,000 a year. Worse still, publishers often "bundle" journals together, forcing libraries to buy ones they don't want in order to have access to ones they do. Between 2010 and 2019, UK universities paid more than £1bn in journal subscriptions and other publishing charges. More than 90% of these fees went to the big five commercial publishers (UCL and Manchester shelled out over £4m each). It's worth remembering that the universities funded this research, paid the salaries of the academics who produced it and then had to pay millions of pounds to commercial publishers in order to access the end product.

Even more astonishing is the fact these publishers often charge authors for the privilege of publishing in their journals. In recent years, large publishers have begun offering so-called "open access" articles that are free to read. On the surface, this might sound like a welcome improvement. But for-profit publishers provide open access to readers only by charging authors, often thousands of pounds, to publish their own articles. Who ends up paying these substantial author fees? Once again, universities. In 2022 alone, UK institutions of higher education paid more than £112m to the big five to secure open-access publication for their authors.

This trend is having an insidious impact on knowledge production. Commercial publishers are incentivised to try to publish as many articles and journals as possible, because each additional article brings in more profit. This has led to a proliferation of junk journals that publish fake research, and has increased the pressure on rigorous journals to weaken their quality controls. It's never been more evident that for-profit publishing simply does not align with the aims of scholarly inquiry.

There is an obvious alternative: universities, libraries, and academic funding agencies can cut out the intermediary and directly fund journals themselves, at a far lower cost. This would remove commercial pressures from the editorial process, preserve editorial integrity and make research accessible to all. The term for this is "diamond" open access, which means the publishers charge neither authors, editors, nor readers (this is how our new journal will operate). Librarians have been urging this for years. So why haven't academics already migrated to diamond journals?

The reason is that such journals require alternative funding sources, and even if such funding were in place, academics still face a massive collective action problem: we want a new arrangement but each of us, individually, is strongly incentivised to stick with the status quo. Career advancement depends heavily on publishing in journals with established name recognition and prestige, and these journals are often owned by commercial publishers. Many academics – particularly early-career researchers trying to secure long-term employment in an extremely difficult job market – cannot afford to take a chance on new, untested journals on their own.


Original Submission

posted by hubie on Tuesday July 23, @11:45AM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

New research led by scientists at the University of Michigan reveals that the Arctic has lost approximately 25% of its cooling ability since 1980 due to diminishing sea ice and reduced reflectivity. Additionally, this phenomenon has contributed to a global loss of up to 15% in cooling power.

Using satellite measurements of cloud cover and the solar radiation reflected by sea ice between 1980 and 2023, the researchers found that the percent decrease in sea ice’s cooling power is about twice as high as the percent decrease in annual average sea ice area in both the Arctic and Antarctic. The added warming impact from this change to sea ice cooling power is toward the higher end of climate model estimates.

“When we use climate simulations to quantify how melting sea ice affects climate, we typically simulate a full century before we have an answer,” said Mark Flanner, professor of climate and space sciences and engineering and the corresponding author of the study published in Geophysical Research Letters.

“We’re now reaching the point where we have a long enough record of satellite data to estimate the sea ice climate feedback with measurements.”

[...] The Arctic has seen the largest and most steady declines in sea ice cooling power since 1980, but until recently, the South Pole had appeared more resilient to the changing climate. Its sea ice cover had remained relatively stable from 2007 into the 2010s, and the cooling power of the Antarctic’s sea ice was actually trending up at that time.

That view abruptly changed in 2016, when an area larger than Texas melted on one of the continent’s largest ice shelves. The Antarctic lost sea ice then too, and its cooling power hasn’t recovered, according to the new study. As a result, 2016 and the following seven years have had the weakest global sea ice cooling effect since the early 1980s.

Beyond disappearing ice cover, the remaining ice is also growing less reflective as warming temperatures and increased rainfall create thinner, wetter ice and more melt ponds that reflect less solar radiation. This effect has been most pronounced in the Arctic, where sea ice has become less reflective in the sunniest parts of the year, and the new study raises the possibility that it could be an important factor in the Antarctic, too—in addition to lost sea ice cover.
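
The 'twice as high' result quoted earlier follows qualitatively once both effects are combined: less ice area and darker remaining ice. The toy calculation below uses made-up numbers; the study's actual figures come from satellite radiation measurements rather than a simple product like this:

    # Toy model: cooling effect ~ ice area x (ice albedo - ocean albedo) x incoming sunlight.
    SOLAR_IN = 200.0      # W/m^2, assumed average insolation over polar ocean
    OCEAN_ALBEDO = 0.06   # typical open-ocean albedo

    def cooling_power(area_mkm2: float, ice_albedo: float) -> float:
        """Extra reflected shortwave relative to ice-free ocean (arbitrary units)."""
        return area_mkm2 * (ice_albedo - OCEAN_ALBEDO) * SOLAR_IN

    before = cooling_power(area_mkm2=10.0, ice_albedo=0.60)   # assumed 1980-style ice
    after = cooling_power(area_mkm2=8.5, ice_albedo=0.50)     # 15% less area and darker ice (assumed)

    area_drop = 1 - 8.5 / 10.0
    cooling_drop = 1 - after / before
    print(f"Area down {area_drop:.0%}, cooling power down {cooling_drop:.0%}")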

[...] The research team hopes to provide their updated estimates of sea ice’s cooling power and climate feedback from less reflective ice to the climate science community via a website that is updated whenever new satellite data is available.

Reference: “Earth’s Sea Ice Radiative Effect From 1980 to 2023” by A. Duspayev, M. G. Flanner and A. Riihelä, 17 July 2024, Geophysical Research Letters.
  DOI: 10.1029/2024GL109608


Original Submission

posted by hubie on Tuesday July 23, @06:10AM   Printer-friendly

https://pldb.io/blog/JohnOusterhout.html

Dr. John Ousterhout is a computer science luminary who has made significant contributions to the field, particularly in operating systems and file systems. He is the creator of the Tcl scripting language and has also worked on several major software projects, including the log-structured file system and the Sprite operating system. Ousterhout's creation of Tcl has had a lasting impact on the technology industry, transforming the way developers think about scripting and automation.


Original Submission

posted by hubie on Tuesday July 23, @01:26AM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

Two new studies suggest that antibodies that attack people's own tissues might cause the ongoing neurological issues that afflict millions of people with long COVID.

When scientists transferred these antibodies from people with long COVID into healthy mice, certain symptoms, including pain, transferred to the animals too, researchers reported May 31 on bioRxiv.org and June 19 on medRxiv.org. 

Though scientists have previously implicated such antibodies, known as autoantibodies, as suspects in long COVID, the new studies are the first to offer direct evidence that they can do harm. “This is a big deal,” says Manali Mukherjee, a translational immunologist at McMaster University in Hamilton, Canada, who was not involved in the work. The papers make a good case for therapies that target autoantibodies, she says.

The work could also offer “peace of mind to some of the long-haulers,” Mukherjee says. As someone who has endured long COVID herself, she understands that when patients don’t know the cause of their suffering, it can add to their anxiety. They wonder, “What the hell is going wrong with me?” she says.

[...] Scientists have proposed many hypotheses for what causes long COVID, including SARS-CoV-2 virus lingering in the tissues and the reawakening of dormant herpes viruses (SN: 3/4/24). Those elements may still play a role in some people’s long COVID symptoms, but for pain, at least, rogue antibodies seem to be enough to kick-start the symptom all on their own. It’s not an out-of-the-blue role for autoantibodies; scientists suspect they may also be involved in other conditions that cause people pain, including fibromyalgia and myalgic encephalomyelitis/chronic fatigue syndrome.

But if doctors could identify which long COVID patients have pain-linked autoantibodies, they could try to reduce the amount circulating in the blood, says Iwasaki, who is also a Howard Hughes Medical Institute investigator. “I think that would really be a game changer for this particular set of patients.” 

The work represents a “very strong level of evidence” that autoantibodies could cause harm in people with long COVID, says Ignacio Sanz, an immunologist at Emory University in Atlanta. Both he and Mukherjee would like to see the findings validated in larger sets of participants. And the real clincher, Sanz says, would come from longer-term studies. If scientists could show that patients’ symptoms ease as these rogue antibodies disappear over time, that’d be an even surer sign of their guilt. 

References:
    • K. S. Guedes de Sa et al. A causal link between autoantibodies and neurological symptoms in long COVID. medRxiv.org. Posted June 19, 2024. doi: 10.1101/2024.06.18.24309100.
    • H-J Chen et al. Transfer of IgG from long COVID patients induces symptomology in mice. bioRxiv.org. Posted May 31, 2024. doi: 10.1101/2024.05.30.596590.


Original Submission

posted by hubie on Monday July 22, @08:38PM   Printer-friendly
from the private-sector-always-does-it-cheaper dept.

Arthur T Knackerbracket has processed the following story:

Europe’s largest local authority faces a $15.58 million (£12 million) bill for manually auditing accounts that should have been supported by an Oracle ERP system installed in April 2022.

The £3.2 billion ($4.1 billion) budget authority has become infamous for its ERP project disaster, which saw it switch from legacy SAP software to cloud-based Oracle Fusion, a customer win that Oracle co-founder and CTO Larry Ellison once flaunted to investors.

The delayed project left the council without auditable accounts, and without security features, along with costs climbing from around £20 million to as much as £131 million. The IT problems contributed to the Birmingham City Council becoming effectively bankrupt in September last year.

A report from external auditors stated the council will not have a fully functioning cash system until April next year, three years after it went live on an Oracle ERP, and will have to wait until September 2025 for a fully functioning finance system.

Yesterday, Mark Stocks, head of public sector practice at external auditors Grant Thornton, told councillors that officials had told him the new "out-of-the-box" accounting system might not be ready until March 2026, nearly four years after the failing customized system first went live.

The lack of a functioning accounting system was making it costly and time consuming to produce a full audit, the auditors concluded after exploratory work.

[...] Problems with the customized ERP system were multiple, but cash management, bank reconciliation and accounts receivable were of particular concern. The council has bought third-party software — CivicaPay/Civica Income Management — as the replacement for the banking system.

Stocks said officials had been working hard to improve the current Oracle system, and said he did not want to "lose that message."

Nonetheless, serious issues continue. “You're not going to have a fully functioning finance system and cash system [until] April next year. The actual financial ledger could be April 2026. That's really difficult from a finance officer point of view [and] it's particularly difficult from an external audit point of view to draw a conclusion on your accounts,” he said.


Original Submission

posted by hubie on Monday July 22, @03:52PM   Printer-friendly

Editor's note: Due to the extensive use of buzzwords, the submitter questions whether this was written by a human or not, but perhaps those who are knowledgeable in network architecture can comment on whether this idea is as revolutionary as TFA suggests.

Arthur T Knackerbracket has processed the following story:

A research team has proposed a revolutionary polymorphic network environment (PNE) in their study, which seeks to achieve global scalability while addressing the diverse needs of evolving network services. Their framework challenges traditional network designs by creating a versatile “network of networks” that overcomes the limitations of current systems, paving the way for scalable and adaptable network architectures.

A recent paper published in Engineering by scientists Wu Jiangxing and his research team introduces a theoretical framework that promises to transform network systems and architectures. The study tackles a critical issue in network design: how to achieve global scalability while meeting the varied demands of evolving services.

For decades, the quest for an ideal network capable of seamlessly scaling across various dimensions has remained elusive. The team, however, has identified a critical barrier known as the “impossible service-level agreement (S), multiplexity (M), and variousness (V) triangle” dilemma, which highlights the inherent limitations of traditional unimorphic network systems. These systems struggle to adapt to the growing complexity of services and application scenarios while maintaining global scalability throughout the network’s life cycle.

To overcome this challenge, the researchers propose a paradigm shift in network development—an approach they term the polymorphic network environment (PNE). At the core of this framework lies the separation of application network systems from the underlying infrastructure environment. By leveraging core technologies such as network elementization and dynamic resource aggregation, the PNE enables the creation of a versatile “network of networks” capable of accommodating diverse service requirements.

Through extensive theoretical analysis and environment testing, the team demonstrates the viability of the PNE model. Results indicate that the framework not only supports multiple application network modalities simultaneously but also aligns with technical and economic constraints, thus paving the way for scalable and adaptable network architectures.

Reference: “Theoretical Framework for a Polymorphic Network Environment” by Jiangxing Wu et al., 28 February 2024, Engineering. DOI: 10.1016/j.eng.2024.01.018


Original Submission

posted by hubie on Monday July 22, @11:06AM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

A new study published in Science Advances reveals a surprising twist in the evolutionary history of complex life. Researchers at Queen Mary University of London have discovered that a single-celled organism, a close relative of animals, harbors the remnants of ancient giant viruses woven into its own genetic code. This finding sheds light on how complex organisms may have acquired some of their genes and highlights the dynamic interplay between viruses and their hosts.

The study focused on a microbe called Amoebidium, a unicellular parasite found in freshwater environments. By analyzing Amoebidium's genome, the researchers led by Dr. Alex de Mendoza Soler, Senior Lecturer at Queen Mary's School of Biological and Behavioural Sciences, found a surprising abundance of genetic material originating from giant viruses—some of the largest viruses known to science. These viral sequences were heavily methylated, a chemical tag that often silences genes.

[...] "These findings challenge our understanding of the relationship between viruses and their hosts," says Dr. de Mendoza Soler. "Traditionally, viruses are seen as invaders, but this study suggests a more complex story. Viral insertions may have played a role in the evolution of complex organisms by providing them with new genes. And this is allowed by the chemical taming of these intruders DNA."

Furthermore, the findings in Amoebidium offer intriguing parallels to how our own genomes interact with viruses. Similar to Amoebidium, humans and other mammals have remnants of ancient viruses, called endogenous retroviruses, integrated into their DNA.

While these remnants were previously thought to be inactive "junk DNA," some might now be beneficial. However, unlike the giant viruses found in Amoebidium, endogenous retroviruses are much smaller, and the human genome is significantly larger. Future research can explore these similarities and differences to understand the complex interplay between viruses and complex life forms.

More information: Luke A. Sarre et al, DNA methylation enables recurrent endogenization of giant viruses in an animal relative, Science Advances (2024). DOI: 10.1126/sciadv.ado6406


Original Submission

posted by janrinok on Monday July 22, @06:23AM   Printer-friendly

CrowdStrike broke Debian and Rocky Linux months ago, but no one noticed:

A widespread Blue Screen of Death (BSOD) issue on Windows PCs disrupted operations across various sectors, notably impacting airlines, banks, and healthcare providers. The issue was caused by a problematic channel file delivered via an update from the popular cybersecurity service provider, CrowdStrike. CrowdStrike confirmed that this crash did not impact Mac or Linux PCs.

Although many may view this as an isolated incident, it turns out that similar problems had been occurring for months without much awareness. Users of Debian and Rocky Linux also experienced significant disruptions as a result of CrowdStrike updates, raising serious concerns about the company's software update and testing procedures. These occurrences highlight potential risks for customers who rely on its products daily.

In April, a CrowdStrike update caused all Debian Linux servers in a civic tech lab to crash simultaneously and refuse to boot. The update proved incompatible with the latest stable version of Debian, despite the specific Linux configuration being supposedly supported. The lab's IT team discovered that removing CrowdStrike allowed the machines to boot and reported the incident.

A team member involved in the incident expressed dissatisfaction with CrowdStrike's delayed response. It took them weeks to provide a root cause analysis after acknowledging the issue a day later. The analysis revealed that the Debian Linux configuration was not included in their test matrix.

"Crowdstrike's model seems to be 'we push software to your machines any time we want, whether or not it's urgent, without testing it'," lamented the team member.

This was not an isolated incident. CrowdStrike users also reported similar issues after upgrading to Rocky Linux 9.4, with their servers crashing due to a kernel bug. CrowdStrike support acknowledged the issue, highlighting a pattern of inadequate testing and insufficient attention to compatibility issues across different operating systems.

To avoid such issues in the future, CrowdStrike should prioritize rigorous testing across all supported configurations. Additionally, organizations should approach CrowdStrike updates with caution and have contingency plans in place to mitigate potential disruptions.

Source: Ycombinator, RockyLinux


Original Submission

posted by hubie on Monday July 22, @01:40AM   Printer-friendly
from the mmmm-mmmm-good dept.

Arthur T Knackerbracket has processed the following story:

At the end of a small country road in Denmark is the "Enorm" factory, an insect farm set up by a Danish woman who wants to revolutionize livestock feed.

Jane Lind Sam and her father, Carsten Lind Pedersen, swapped pigs for soldier flies and created a 22,000-square-metre (237,000 square feet) factory where they intend to produce more than 10,000 tonnes of insect meal and oil a year.

The factory, which opened in December 2023, is the largest of its kind in northern Europe, and its products will initially be used by farmers for animal feed and, perhaps in the future, for human consumption.

The two entrepreneurs are making products that will be "substituting other, maybe less climate-friendly products", Lind Sam, co-owner and chief operations officer, explained to AFP.

They hope to contribute to the evolution of agriculture in a country where the sector's climate impact is under scrutiny.

[...] Under turquoise fluorescent lights, millions of black flies buzzed inside some 500 plastic cages, where they lay hundreds of thousands of eggs every day.

Inside the facility, it was impossible to escape the roar of the insects, which lay eggs incessantly throughout their 10-day lifespan.

"The female fly lays its eggs in this piece of cardboard," Lind Sam explained as she pulled out a sheet with a honeycomb pattern at the bottom of one of the cages.

About 25 kilograms (55 pounds) of eggs are produced per day. A single gram corresponds to about 40,000 eggs.
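
Put those two figures together and the implied daily output is striking; the arithmetic uses only the numbers quoted above:

    eggs_per_gram = 40_000
    kilograms_per_day = 25
    eggs_per_day = eggs_per_gram * kilograms_per_day * 1_000   # 1,000 grams per kilogram
    print(f"{eggs_per_day:,} eggs per day")                    # about a billion eggs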

From these eggs come some of tomorrow's feeder flies, but also the future maggots which, once they have become pupae, will be transformed into meal and oil.

[...] "They are fascinating animals. And I think it's amazing that they can live on any organic matter," Lind Sam said.

Niels Thomas Eriksen, a biologist at Aalborg University, told AFP that "insects can eat materials that other animals probably won't so we can make better use" of agricultural byproducts and food waste.

Minimizing waste is one of Enorm's key aims and the manufacturer stressed that the rearing of insects facilitates "the recycling of nutrients".

It takes between 40 and 50 days to produce the finished product, which is mainly flour with a protein content of 55 percent.

It is then distributed across Europe, although Enorm remains discreet about the identity of its customers, and used as feed on pig, poultry, fish and pet farms.

See Also: Fly larvae: Costa Rica's sustainable protein for animal feed


Original Submission