
SoylentNews is people

SoylentNews is powered by your submissions, so send in your scoop. Only 16 submissions in the queue.



Site News

Join our Folding@Home team:
Main F@H site
Our team page


Funding Goal
For 6-month period: 2022-07-01 to 2022-12-31 (all amounts are estimated)
Base Goal: $3500.00
Currently: $438.92 (12.5%)
Covers transactions: 2022-07-02 10:17:28 .. 2022-10-05 12:33:58 UTC (SPIDs: [1838..1866])
Last Update: 2022-10-05 14:04:11 UTC --fnord666

Support us: Subscribe Here and buy SoylentNews Swag


We always have a place for talented people, visit the Get Involved section on the wiki to see how you can make SoylentNews better.

The shambling corpse of Steve Jobs lumbers forth, heeding not the end of October! How will you drive him away?

  • Flash running on an Android phone, in denial of his will
  • Zune, or another horror from darkest Redmond
  • Newton, HyperCard, or some other despised interim Apple product
  • BeOS, the abomination from across the sea
  • Macintosh II with expansion slots, in violation of his ancient decree
  • Tow his car for parking in a handicap space without a permit
  • Oncology textbook—without rounded corners
  • Some of us are still in mourning, you insensitive clod!

[ Results | Polls ]
Comments:33 | Votes:94

posted by janrinok on Thursday July 25, @11:19PM   Printer-friendly
from the when-you-see-a-fork-in-the-road-take-it dept.

Arthur T Knackerbracket has processed the following story:

If you're a developer familiar with MySQL, you've probably heard that MariaDB is the next generation of the database engine. MySQL has long been the traditional database in Linux, Apache, MySQL, and PHP (LAMP) environments; however, MariaDB has gained popularity as an alternative. MariaDB is a fork of the original MySQL codebase, created to ensure continuity and avoid the potential pitfalls of MySQL's acquisition by Oracle. Developers will find the syntax similar, but MariaDB introduces several notable differences.

Although MySQL remains embedded in several large technology businesses, MariaDB is often seen as a popular new-generation database for enterprises. MariaDB supports higher data transfer volumes and is supported by most cloud providers. Its similarity to MySQL, which was the dominant database in the early 2000s, has facilitated its adoption.

The key differences between MariaDB and MySQL form the foundation of MariaDB's performance. MariaDB offers several more storage engines and supports over 200,000 connections. MySQL's Enterprise edition includes proprietary code, while MariaDB is completely open source. These differences contribute to MariaDB's superior speed: in recent benchmark testing, MariaDB performed between 13% and 36% faster than MySQL.

Since MariaDB is a fork from MySQL, the syntax is similar, but MariaDB has several additional features. Basic SQL syntax remains the same, but MariaDB handles data storage and functions differently. Each new version of MariaDB includes added features and extensions.

One example of a feature in MariaDB not available in MySQL is the SEQUENCE feature. In MySQL, you use the AUTO_INCREMENT feature to add a unique incremented integer to each row created in a table. With SEQUENCE, you can create a custom sequence that starts at a specific value and increments by a custom value.

The following is an example of creating a SEQUENCE:

CREATE SEQUENCE s START WITH 10 INCREMENT BY 10;
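As a rough sketch of how that sequence might then be used (assuming MariaDB 10.3 or later; the table and column names below are hypothetical, not from the article), NEXTVAL() pulls values on demand, and a sequence can also supply default values for a column in place of AUTO_INCREMENT:

-- Pull values from the sequence created above on demand:
SELECT NEXTVAL(s);   -- 10 on the first call
SELECT NEXTVAL(s);   -- 20 on the next call (INCREMENT BY 10)

-- Use the sequence as a column default instead of AUTO_INCREMENT:
CREATE TABLE orders (
    id INT NOT NULL DEFAULT NEXTVAL(s),
    item VARCHAR(100),
    PRIMARY KEY (id)
);
INSERT INTO orders (item) VALUES ('widget');   -- id takes the next sequence value

-- Closest MySQL-style equivalent: the starting value can be set per table, but the
-- step size is a server/session variable rather than a per-table option:
CREATE TABLE orders_mysql (
    id INT NOT NULL AUTO_INCREMENT,
    item VARCHAR(100),
    PRIMARY KEY (id)
) AUTO_INCREMENT = 10;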

MySQL was introduced in 1995 and became the dominant database engine in the early 2000s. It's used by some of the world's largest companies such as Facebook, GitHub, Airbnb, and YouTube. It handles billions of records and integrates easily into Linux environments, including affordable web hosting providers.

Because MySQL is so popular, there are plenty of videos and tutorials available to learn how to set up the database and use its SQL syntax to create queries. MySQL is also suitable for personal projects and is free for individual use. It runs on both Windows and Linux, making it accessible to almost any developer. Many developers begin learning database programming and storage design with MySQL.

MariaDB is slightly more challenging because it's designed as an enterprise solution. It has more engines to work with and is available in the cloud. Most enterprise applications have many more features than consumer alternatives, making them more difficult to learn.

No one can predict the future, but MySQL is likely here to stay. More application developers might choose MariaDB over MySQL for enterprise applications, but MySQL still maintains a strong market presence. WordPress, which powers a significant percentage of websites, works natively with MySQL; however, it is also compatible with MariaDB, which can be used seamlessly without requiring significant changes.

In the future, MariaDB could power a large portion of web applications, but for now, it maintains a strong presence in the enterprise realm, especially in Linux environments. It's possible that MariaDB will become a more popular database for enterprise applications in a LAMP environment.

If you need to learn about databases or have a small pet project, MySQL may be the best option. MySQL offers a convenient desktop application that simplifies database management and configuration. The MySQL Workbench software uses a graphical user interface to guide you through the table creation process, and you can build your queries and functions with better feedback from the database service if you make mistakes.

In a large organization, MariaDB is the better option. It's also beneficial for smaller businesses that expect a large increase in concurrent users (there are some GUI applications here, too, if you need them). MariaDB scales easily as an application becomes more popular and more users access it. If you want to get experience working with cloud databases, MariaDB is a good choice for learning replication and management of data in the cloud.

As a fork of MySQL, MariaDB shares many similarities with its predecessor, making the determination of "which is better" subjective. Some developers prefer MariaDB because it's open-source and free, but MySQL is a stable, popular alternative that's good for small projects.

At the risk of starting a flamewar, do you agree with the views expressed here? If not, which is your preferred database and why?


Original Submission

posted by janrinok on Thursday July 25, @06:36PM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

Smart home defenses crumble when the NEO dog arrives.

The Department of Homeland Security (DHS) has announced that it has developed a four-legged robot designed to jam the wireless transmissions of smart home devices. The robot was revealed at the 2024 Border Security Expo and is called NEO. It is built using the Quadruped Unmanned Ground Vehicle (Q-UGV) and looks a lot like the Boston Dynamics Spot robot. 

According to the transcript of the speech by DHS Federal Law Enforcement Training Centers (FLETC) director Benjamine Huffman, acquired by 404 Media, NEO is equipped with an antenna array that is designed to overload home networks, thus disrupting devices that rely on Wi-Fi and other wireless communication protocols. It will thus likely be effective against a wide range of popular smart home devices that use wireless technologies for communications.

Aside from taking out smart devices, law enforcement can also use the robot to communicate with subjects in the target area, and to provide remote eyes and ears to officers on the ground. “NEO can enter a potentially dangerous environment to provide video and audio feedback to the officers before entry and allow them to communicate with those in that environment,” says Huffman. “NEO carries an onboard computer and antenna array that will allow officers the ability to create a ‘denial-of-service’ (DoS) event to disable ‘Internet of Things’ devices that could potentially cause harm while entry is made.”

This roaming robotic jammer was first contemplated after a child sexual abuse suspect used his doorbell camera to see FBI agents at his door serving a search warrant. The gunman opened fire on them from behind the closed door with an assault-style rifle, killing two veteran agents and injuring three more.

Aside from the NEO, the DHS also built the ‘FLETC Smart House’, which is designed to train law enforcement about smart home devices and how they could be used against them. Huffman explained, “A suspect who has been searched and is under the control of officers can cause these actions to happen with a simple voice command which can start a chain of events to occur within a house, such as turning off lights, locking doors, activating the HVAC system to introduce chemicals into the environment and cause a fire or explosion to take place.”

This development shows how law enforcement is catching up with technological advancements. Smart home devices started becoming common in the mid-to-late-2010s, with many users installing them to automate several aspects of their houses and bolster security. So, anyone with a little bit of technical know-how and ingenuity could potentially create a hostile environment using readily available wireless electronics. While NEO might not be able to affect hard-wired smart devices, it would still be able to disable the radio frequencies most wireless IoT devices use, thus reducing the risks for law enforcement officers.


Original Submission

posted by janrinok on Thursday July 25, @01:56PM   Printer-friendly

Today I'd like to revisit an often ignored and little-known method of tracking and hacking for SN discussion!

[Editor's Comment: Much of the discussion in the links originates from 2013-2016. That could mean one of two things: 1. it wasn't shown to be very effective, or 2. it is effective but very difficult to detect and counter. ]

Ultrasound Tracking Could Be Used to Deanonymize Tor Users
https://www.bleepingcomputer.com/news/security/ultrasound-tracking-could-be-used-to-deanonymize-tor-users/

Their research focuses on the science of ultrasound cross-device tracking (uXDT), a new technology that started being deployed in modern-day advertising platforms around 2014.

uXDT relies on advertisers hiding ultrasounds in their ads. When the ad plays on a TV or radio, or some ad code runs on a mobile or computer, it emits ultrasounds that get picked up by the microphone of nearby laptops, desktops, tablets or smartphones.

These second-stage devices, silently listening in the background, interpret the ultrasounds, which contain hidden instructions telling them to ping back to the advertiser's server with details about that device.

Advertisers use uXDT in order to link different devices to the same person and create better advertising profiles, so as to deliver better-targeted ads in the future.

Ultrasound Cross Device Tracking techniques could be used to launch deanonymization attacks against some users: https://gitlab.torproject.org/legacy/trac/-/issues/20214

Your home's online gadgets could be hacked by ultrasound: https://www.newscientist.com/article/2110762-your-homes-online-gadgets-could-be-hacked-by-ultrasound/

Beware of ads that use inaudible sound to link your phone, TV, tablet, and PC: https://arstechnica.com/tech-policy/2015/11/beware-of-ads-that-use-inaudible-sound-to-link-your-phone-tv-tablet-and-pc/

Meet "badBIOS," the mysterious Mac and PC malware that jumps airgaps: https://arstechnica.com/information-technology/2013/10/meet-badbios-the-mysterious-mac-and-pc-malware-that-jumps-airgaps/

Scientist-developed malware prototype covertly jumps air gaps using inaudible sound: https://arstechnica.com/information-technology/2013/12/scientist-developed-malware-covertly-jumps-air-gaps-using-inaudible-sound/

Using Ultrasonic Beacons to Track Users: https://www.schneier.com/blog/archives/2017/05/using_ultrasoni.html

Ads Surreptitiously Using Sound to Communicate Across Devices: https://www.schneier.com/blog/archives/2015/11/ads_surreptitio.html

235 apps attempt to secretly track users with ultrasonic audio: https://boingboing.net/2017/05/04/235-apps-attempt-to-secretly-t.html

Leaking Data By Ultrasound: https://hackaday.com/2020/12/06/leaking-data-by-ultrasound/


Original Submission

posted by janrinok on Thursday July 25, @09:14AM   Printer-friendly

https://arxiv.org/abs/2407.13924

Fermilab is a major US national lab, focused on particle physics, with a budget of several hundred million dollars per year. All is not well at the lab, however, following project delays and huge cost overruns for the flagship DUNE project. The organisation that operates Fermilab, led by the University of Chicago, has had its contract withdrawn and the lab director Lia Merminga has been laid off. Now a pair of senior and well-respected scientists have put their oar in as well, blasting the management of the lab over the past decade that has led to the current situation, in a paper posted to the arXiv preprint server. The pair point to many problems, rooted in a toxic working environment, giving anecdotal examples supported by indicators such as a fourfold increase in sick leave over the past decade.

The PDF is available via the arXiv link above.

It's a fun read!

[Ed. note: It appears Lia Merminga has not been laid off]


Original Submission

posted by hubie on Thursday July 25, @04:28AM   Printer-friendly

You're not going crazy — you may actually be paying higher prices than other people | CNN Business:

It's hard not to get fired up by how much more everything costs compared to just a few years ago. But people making the same exact purchases as you aren't necessarily paying the same exact prices as you.

This became apparent to me a few weeks ago when a friend texted me that Starbucks was running a buy one, get one free drink promotion. But when I logged in to the app, the offer was nowhere to be found.

Why was my friend getting special treatment?

It's likely that Starbucks used artificial intelligence to determine that my friend, if offered a promotion, would make a purchase they wouldn't otherwise have, whereas I would make a purchase regardless, said Shikha Jain, a lead partner in the North American consumer and retail division at the consultancy firm Simon-Kucher.

The system nailed it for me — just opening the app to check if I had the promo got me to order, and I paid full price.

[...] The Seattle-based coffee chain declined to share what feeds into its AI model, dubbed Deep Brew. A spokesperson did, however, confirm that AI is powering the individualized offers it sends customers.

This personalized promotion strategy is not unique to Starbucks. Companies are increasingly leveraging customer data, often derived from loyalty programs, in coordination with machine-learning models to customize prices of goods and services based on an individual's willingness to pay.

[...] On Tuesday, the Federal Trade Commission sent orders to eight companies — Mastercard, Revionics, Bloomreach, JPMorgan Chase, Task Software, PROS, Accenture and McKinsey & Co — seeking information on how they allegedly offer surveillance pricing and services "that incorporate data about consumers' characteristics and behavior."

The orders seek to understand how technologies like AI along with consumers' personal information could be used "to categorize individuals and set a targeted price for a product or service," according to an announcement the FTC published Tuesday morning.

"Firms that harvest Americans' personal data can put people's privacy at risk. Now firms could be exploiting this vast trove of personal information to charge people higher prices," FTC Chair Lina Khan said in a statement.

[...] Using AI, companies can now answer questions like, "What is this person going to buy next? What do we think they're going to be willing to pay? Where are they going to buy from? When are they going to buy it?" said Jain.

Matt Pavich, senior director of strategy and innovation at Revionics, an AI company that specializes in helping retailers set prices, said its goal is not to tell retailers exactly how much to charge individual customers. Rather, its bread and butter is to provide companies with "all of the analytics and predictive scenarios" to figure out prices themselves.

Instead of waiting for customers to respond in real time to price changes, Revionics' clients get a toolbox to test out prices in advance. Then, by predicting how much consumers will buy at different price points, Revionics helps retailers manage their inventories.

[...] Mary Winn Pilkington, senior vice president of investor relations and public relations at Tractor Supply Co., told CNN it recently partnered with Revionics because it wanted to more successfully adjust prices to "the ever-changing market" to "attract and retain customers."

The aim of partnering with Revionics wasn't to see how high they can raise their prices without turning away too many customers, she said.

She noted that Tractor Supply Co. does use machine learning "to curate specific offers individualized for customers," although Revionics is not involved in that aspect. This, she said, "often leads to lower prices and better value on the products and services our customers need."

Of course, like my Starbucks experience, it could also very well lead to identifying customers who don't require promotions at all.


Original Submission

posted by hubie on Wednesday July 24, @11:41PM   Printer-friendly

Inorganic production of oxygen in the deep ocean

https://www.sciencealert.com/mysterious-dark-oxygen-discovered-at-bottom-of-ocean-stuns-scientists

Chugging quietly away in the dark depths of Earth's ocean floors, a spontaneous chemical reaction is unobtrusively creating oxygen, all without the involvement of life.

"The discovery of oxygen production by a non-photosynthetic process requires us to rethink how the evolution of complex life on the planet might have originated," says SAMS marine scientist Nicholas Owens.

Scatterings of polymetallic nodules carpet vast areas of the ocean's bottom. We value the metals in these nodules for their use in batteries, and it turns out that's exactly how the rocks may be spontaneously acting on the ocean floor. Single nodules produced voltages of up to 0.95 V, so when clustered together, like batteries in series, they can easily reach the 1.5 V required to split water in an electrolysis reaction, releasing oxygen.
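As a back-of-the-envelope check on those figures (an illustration, not a calculation from the paper): water electrolysis needs at least about 1.23 V in theory, with the 1.5 V quoted above allowing for practical losses, so just two nodules in series would clear the threshold:

0.95 V + 0.95 V = 1.90 V > 1.5 V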

This discovery offers a possible explanation for the mysterious stubborn persistence of ocean 'dead zones' decades after deep sea mining has ceased.

"In 2016 and 2017, marine biologists visited sites that were mined in the 1980s and found not even bacteria had recovered in mined areas. In unmined regions, however, marine life flourished," explains Geiger.

"Why such 'dead zones' persist for decades is still unknown. However, this puts a major asterisk onto strategies for sea-floor mining as ocean-floor faunal diversity in nodule-rich areas is higher than in the most diverse tropical rainforests."

As well as these massive implications for deep-sea mining, 'dark oxygen' also sparks a cascade of new questions around the origins of oxygen-breathing life on Earth.

Deep-Ocean Floor Produces its Own Oxygen

Deep-ocean floor produces its own oxygen:

The surprising discovery challenges long-held assumptions that only photosynthetic organisms, such as plants and algae, generate Earth's oxygen. But the new finding shows there might be another way. It appears oxygen also can be produced at the seafloor -- where no light can penetrate -- to support the oxygen-breathing (aerobic) sea life living in complete darkness.

Andrew Sweetman, of the Scottish Association for Marine Science (SAMS), made the "dark oxygen" discovery while conducting ship-based fieldwork in the Pacific Ocean. Northwestern's Franz Geiger led the electrochemistry experiments, which potentially explain the finding.

"For aerobic life to begin on the planet, there had to be oxygen, and our understanding has been that Earth's oxygen supply began with photosynthetic organisms," said Sweetman, who leads the Seafloor Ecology and Biogeochemistry research group at SAMS. "But we now know that there is oxygen produced in the deep sea, where there is no light. I think we, therefore, need to revisit questions like: Where could aerobic life have begun?"

Polymetallic nodules -- natural mineral deposits that form on the ocean floor -- sit at the heart of the discovery. A mix of various minerals, the nodules range in size from tiny particles to an average potato.

"The polymetallic nodules that produce this oxygen contain metals such as cobalt, nickel, copper, lithium and manganese -- which are all critical elements used in batteries," said Geiger, who co-authored the study. "Several large-scale mining companies now aim to extract these precious elements from the seafloor at depths of 10,000 to 20,000 feet below the surface. We need to rethink how to mine these materials, so that we do not deplete the oxygen source for deep-sea life."

[...] Sweetman made the discovery while sampling the seabed of the Clarion-Clipperton Zone, a mountainous submarine ridge along the seafloor that extends nearly 4,500 miles along the north-east quadrant of the Pacific Ocean. When his team initially detected oxygen, he assumed the equipment must be broken.

"When we first got this data, we thought the sensors were faulty because every study ever done in the deep sea has only seen oxygen being consumed rather than produced," Sweetman said. "We would come home and recalibrate the sensors, but, over the course of 10 years, these strange oxygen readings kept showing up.

"We decided to take a back-up method that worked differently to the optode sensors we were using. When both methods came back with the same result, we knew we were onto something ground-breaking and unthought-of."

Mysterious 'Dark Oxygen' Is Being Produced On The Ocean Floor

Arthur T Knackerbracket has processed the following story:

A new form of oxygen production has been detected on the ocean floor, raising concerns about the impact of deep-sea mining to this vital ecosystem.

Researchers have discovered large amounts of oxygen being produced deep in the Pacific Ocean – and the source appears to be lumps of metal.

The researchers made the discovery in a region of the ocean 4,000 metres down, where a large amount of “polymetallic nodules” cover the ocean floor. The scientists believe that these nodules are producing this “dark oxygen”.

The team said the discovery is fascinating, as it suggests there is another source of oxygen production other than photosynthesis. It is believed that these metal nodules are acting as “geo-batteries”.

These nodules are believed to play a role in the dark oxygen production (DOP) by catalysing the splitting of water molecules. The researchers say further investigation needs to be done after this discovery to see how this process could be impacted by deep-sea mining.

[...] Sweetman said that researchers should map the areas where oxygen production is occurring before deep-sea mining occurs, due to the potential impact it could have on ecosystems.

“If there’s oxygen being produced in large amounts, it’s possibly going to be important for the animals that are living there,” he said.

Sweetman, A. K. et al. Nature Geosci. https://doi.org/10.1038/s41561-024-01480-8 (2024)


Original Submission #1 | Original Submission #2 | Original Submission #3

posted by hubie on Wednesday July 24, @06:58PM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

After years of indecision on the issue of third-party cookies, Google has finally made a decision: on Monday, the company revealed that it would no longer pursue its plan to cut off support for third-party cookies in Chrome. Instead, Google played up other options that would hand more control of privacy and tracking to Chrome users.

As one alternative solution, Google touted its Privacy Sandbox, a set of tools in Chrome designed to help you manage third-party cookies that track you and deliver targeted ads. Google said that the performance of this tool's APIs would improve over time following greater industry adoption. That transition is likely to require a lot of effort by publishers, advertisers, and other participants, so Google has something else up its sleeve.

"In light of this, we are proposing an updated approach that elevates user choice," Google said in a Monday blog post. "Instead of deprecating third-party cookies, we would introduce a new experience in Chrome that lets people make an informed choice that applies across their web browsing, and they'd be able to adjust that choice at any time."

[...] Third-party cookies have proven to be a contentious issue in the browsing world.

Users see them as a privacy violation, as advertisers use such cookies to track their activities across the internet to serve targeted ads. Regulators worry about flaws in the privacy tools available to users. Meanwhile, websites and advertisers view these cookies as a revenue source, as they provide insight into users' habits and interests. With all these parties weighing in on Google's plans, it's no wonder the company was kicking the can down the road.

[...] In an email to ZDNET, Longacre said: "If you ask me, the decision means Google is finally admitting the alternatives to third-party cookies are worse for targeting and no better for consumer privacy. That said, it was ultimately combined pressure from three groups -- regulators, advertisers, and publishers – that influenced Google to make this decision, in my opinion."

Other browser makers have been able to cut off support for third-party cookies without issue.

[...] Google's mention of a new option in Chrome for managing third-party cookies seems hazy. The browser already offers users a way to stop third-party cookies. The process is as simple as going to Settings, selecting "Privacy and security," clicking "Third-party cookies," and then turning on the switch to block them. What more could Google add to the browser without making the process too confusing?

"I imagine this change simply means you will get an annoying pop-up like this on every new website you visit -- kind of what happens currently in the EU," Longacre said. "So yes, expect more annoying EU-style pop-ups on every site you visit. This will be bad for UX [user experience], but will keep the regulators happy on both sides of the Atlantic."

Ultimately, the entire process has been largely driven by regulators, according to Longacre, as people are upset over how their personal information is handled online. Users feel that cookies and other digital advertising tools that collect their data are intrusive, and they don't trust the tech world, he added.

"Privacy is now regarded as a fundamental right, and organizations are moving swiftly to safeguard consumer PII (personally identifiable information), with limited or no movement of consumer data and capturing of consent," Longacre said. "Google's announcement today will neither slow down nor reverse this process."


Original Submission

posted by martyb on Wednesday July 24, @02:13PM   Printer-friendly

[Image caption: An innovative membrane that captures carbon dioxide from the air using humidity differences has been developed. This energy-efficient method could help meet climate goals by offering a sustainable carbon dioxide source for various applications. (Artist's concept.) Credit: SciTechDaily.com]

Direct air capture was identified as one of the ‘Seven chemical separations to change the world’. This is because although carbon dioxide is the main contributor to climate change (we release ~40 billion tons into the atmosphere every year), separating carbon dioxide from air is very challenging due to its dilute concentration (~0.04%).

Prof Ian Metcalfe, Royal Academy of Engineering Chair in Emerging Technologies in the School of Engineering, Newcastle University, UK, and lead investigator states, “Dilute separation processes are the most challenging separations to perform for two key reasons. First, due to the low concentration, the kinetics (speed) of chemical reactions targeting the removal of the dilute component are very slow. Second, concentrating the dilute component requires a lot of energy.”

These are the two challenges that the Newcastle researchers (with colleagues at the Victoria University of Wellington, New Zealand, Imperial College London, UK, Oxford University, UK, Strathclyde University, UK, and UCL, UK) set out to address with their new membrane process. By using naturally occurring humidity differences as a driving force for pumping carbon dioxide out of air, the team overcame the energy challenge. The presence of water also accelerated the transport of carbon dioxide through the membrane, tackling the kinetic challenge.

The work is published in Nature Energy and Dr. Greg A. Mutch, Royal Academy of Engineering Fellow in the School of Engineering, Newcastle University, UK explains, “Direct air capture will be a key component of the energy system of the future. It will be needed to capture the emissions from mobile, distributed sources of carbon dioxide that cannot easily be decarbonized in other ways.”

“In our work, we demonstrate the first synthetic membrane capable of capturing carbon dioxide from air and increasing its concentration without a traditional energy input like heat or pressure. I think a helpful analogy might be a water wheel on a flour mill. Whereas a mill uses the downhill transport of water to drive milling, we use it to pump carbon dioxide out of the air.”

Separation processes underpin most aspects of modern life. From the food we eat, to the medicines we take, and the fuels or batteries in our car, most products we use have been through several separation processes. Moreover, separation processes are important for minimizing waste and the need for environmental remediation, such as direct air capture of carbon dioxide.

However, in a world moving towards a circular economy, separation processes will become even more critical. Here, direct air capture might be used to provide carbon dioxide as a feedstock for making many of the hydrocarbon products we use today, but in a carbon-neutral, or even carbon-negative, cycle.

Most importantly, alongside transitioning to renewable energy and traditional carbon capture from point sources like power plants, direct air capture is necessary for realizing climate targets, such as the 1.5 °C goal set by the Paris Agreement.

Dr. Evangelos Papaioannou, Senior Lecturer in the School of Engineering, Newcastle University, UK explains, “In a departure from typical membrane operation, and as described in the research paper, the team tested a new carbon dioxide-permeable membrane with a variety of humidity differences applied across it. When the humidity was higher on the output side of the membrane, the membrane spontaneously pumped carbon dioxide into that output stream.”

Using X-ray micro-computed tomography with collaborators at UCL and the University of Oxford, the team was able to precisely characterize the structure of the membrane. This enabled them to provide robust performance comparisons with other state-of-the-art membranes.

A key aspect of the work was modeling the processes occurring in the membrane at the molecular scale. Using density-functional-theory calculations with a collaborator affiliated to both Victoria University of Wellington and Imperial College London, the team identified ‘carriers’ within the membrane. The carrier uniquely transports both carbon dioxide and water but nothing else. Water is required to release carbon dioxide from the membrane, and carbon dioxide is required to release water. Because of this, the energy from a humidity difference can be used to drive carbon dioxide through the membrane from a low concentration to a higher concentration.

Prof Metcalfe adds, “This was a real team effort over several years. We are very grateful for the contributions from our collaborators, and for the support from the Royal Academy of Engineering and the Engineering & Physical Sciences Research Council.”

I.S. Metcalfe, G.A. Mutch, E.I. Papaioannou, et al. "Separation and concentration of carbon dioxide from air using a humidity-driven molten-carbonate membrane," Nature Energy, 19 July 2024. (DOI: 10.1038/s41560-024-01588-6)


Original Submission

posted by janrinok on Wednesday July 24, @09:38AM   Printer-friendly
from the when-will-we-break-1nm? dept.

Arthur T Knackerbracket has processed the following story:

Last week, Applied Materials pulled back the curtain on its latest materials engineering solutions designed to enable copper wiring to scale down to 2nm dimensions and below while also reducing electrical resistance and strengthening chips for 3D stacking.

The company's Black Diamond low-k dielectric material has been offered since the early 2000s. It surrounds copper wires with a special film engineered to reduce the buildup of electrical charges that increase power consumption and cause interference between electrical signals.

Applied Materials has now come up with an enhanced version of Black Diamond, which reduces the minimum k-value even further, enabling copper wiring scaling to the 2nm node while also increasing mechanical strength – a critical property as chipmakers look to stack multiple logic and memory dies vertically.

But scaling the copper wiring itself as dimensions shrink is another enormous challenge. Today's most cutting-edge logic chips can pack over 60 miles of copper wires that are fashioned by first etching trenches into the dielectric material and then depositing an ultra-thin barrier layer to prevent copper migration. A liner layer goes down next to aid copper adhesion before the final copper deposition fills the remaining space.

The problem is that at 2nm dimensions and below, the barrier and liner layers consume an increasingly large percentage of the available trench volume, leaving little room for sufficient copper fill and risking high resistance and reliability issues. Applied Materials has solved this predicament with this brand-new materials concoction.

Their latest Integrated Materials Solution (IMS) combines six different core technologies into one high-vacuum system, including an industry-first pairing of ruthenium and cobalt to form an ultra-thin 2nm binary metal liner. This allows a 33% reduction in liner thickness compared to previous generations while also improving surface properties for seamless, void-free copper adhesion and reflow. The end result is up to 25% lower electrical resistance in chip wiring to boost performance and reduce power leakage.

Applied Materials claims that all leading logic chipmakers have already adopted its new copper barrier seed IMS with ruthenium CVD technology for 3nm chip production, with 2nm nodes expected to follow.


Original Submission

posted by janrinok on Wednesday July 24, @04:53AM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

UK communications regulator Ofcom has banned mid-contract price rises linked to inflation.

The change, which comes into effect from January 2025, means that price rises must be clearly written into contracts. Ofcom noted that BT and Vodafone had already changed their pricing practices accordingly.

Cristina Luna-Esteban, Ofcom Telecoms Policy Director, criticized the practice of vendors tying customers into contracts where the price could change based on inflation. Future inflation is difficult to predict, after all.

Luna-Esteban said: "We're stepping in on behalf of phone, broadband and pay TV customers to stamp out this practice, so people can be certain of the price they will pay, compare deals more easily and take advantage of the competitive market we have in the UK."

Ofcom proposed the ban in 2023 after UK inflation soared during the previous years, making it impossible for customers to predict what they might be paying during a contract's term. The imposition of early termination fees for customers seeking to escape what they saw as an unexpected rise added to the pain.

In theory, a customer could exit a contract without penalty if they weren't made aware of potential rises when signing the contract. However, providers were able to get around this by simply saying prices would rise by whatever the consumer price index was at the time, plus a certain percentage.

Therefore, the customer was made aware of a rise – but didn't know what it would be.
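As a worked illustration (the specific numbers here are hypothetical, though clauses of the form "CPI plus a fixed percentage" were common among UK providers): with a "CPI + 3.9%" term and CPI running at 10.5% at the review date, a £30-per-month contract would rise mid-term to

£30 × (1 + 0.105 + 0.039) = £30 × 1.144 ≈ £34.32 per month,

an increase the customer could not have known when signing.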

Ofcom's solution is to require the provider to clearly disclose the rises to avoid a situation in which customers do not know how much they will be expected to pay during their contract term.

[...] "Finally, broadband and mobile customers will know ahead of time exactly what they will pay for the duration of a contract, making it easier for them to properly manage their finances."


Original Submission

posted by janrinok on Tuesday July 23, @11:15PM   Printer-friendly

Botanists vote to remove racist reference from plants' scientific names:

[ Editor's Comment: caffra means 'infidel' in Arabic, and it was used as a racial slur against black (non-Arab) people, predominantly in South Africa. ]

Scientists have voted to eliminate the names of certain plants that are deemed to be racially offensive. The decision to remove a label that contains such a slur was taken last week after a gruelling six-day session attended by more than 100 researchers, as part of the International Botanical Congress, which officially opens on Sunday in Madrid.

The effect of the vote will be that all plants, fungi and algae names that contain the word caffra, which originates in insults made against Black people, will be replaced by the word affra to denote their African origins. More than 200 species will be affected, including the coast coral tree, which will be known as Erythrina affra instead of Erythrina caffra.

The scientists attending the nomenclature session also agreed to create a special committee which would rule on names given to newly discovered plants, fungi and algae. These are usually named by those who first describe them in the scientific literature. However, the names could now be overruled by the committee if they are deemed to be derogatory to a group or race.

A more general move to rule on other controversial historical labels was not agreed by botanists. Nevertheless, the changes agreed last week are the first rule alterations to the naming of species that taxonomists have officially agreed, and they were welcomed by the botanist Sandy Knapp of the Natural History Museum in London, who presided over the six-day nomenclature session.

"This is an absolutely monumental first step in addressing an issue that has become a real problem in botany and also in other biological sciences," she told the Observer. "It is a very important start."

The change to remove the word caffra from species names was proposed by the plant taxonomist Prof Gideon Smith of Nelson Mandela University in South Africa, and his colleague Prof Estrela Figueiredo. They have campaigned for years for changes to be made to the international system for giving scientific names to plants and animals in order to permit the deletion and substitution of past names deemed objectionable.

"We are very pleased with the retroactive and permanent eradication of a racial slur from botanical nomenclature," Smith told the Observer. "It is most encouraging that more than 60% of our international colleagues supported this proposal."

And the Australian plant taxonomist Kevin Thiele – who had originally pressed for historical past names to be subject to changes as well as future names – told Nature that last week's moves were "at least a sliver of recognition of the issue".

Plant names are only a part of the taxonomic controversy, however. Naming animals after racists, fascists and other controversial figures causes just as many headaches as those posed by plants, say scientists. Examples include a brown, eyeless beetle which has been named after Adolf Hitler. Nor is Anophthalmus hitleri alone. Many other species' names recall individuals that offend, such as the moth Hypopta mussolinii.

The International Commission on Zoological Nomenclature (ICZN) has so far refused to consider changing its rules to allow the removal of racist or fascist references. Renaming would be disruptive, while replacement names could one day be seen as offensive "as attitudes change in the future", it announced in the Zoological Journal of the Linnean Society last year. Nevertheless, many researchers have acknowledged that some changes will have to be made to zoological nomenclature rules in the near future.


Original Submission

posted by janrinok on Tuesday July 23, @05:31PM   Printer-friendly
from the fingers-crossed dept.

Academic journals are a lucrative scam – and we're determined to change that:


Giant publishers are bleeding universities dry, with profit margins that rival Google's. So we decided to start our own

If you've ever read an academic article, the chances are that you were unwittingly paying tribute to a vast profit-generating machine that exploits the free labour of researchers and siphons off public funds.

The annual revenues of the "big five" commercial publishers – Elsevier, Wiley, Taylor & Francis, Springer Nature, and SAGE – are each in the billions, and some have staggering profit margins approaching 40%, surpassing even the likes of Google. Meanwhile, academics do almost all of the substantive work to produce these articles free of charge: we do the research, write the articles, vet them for quality and edit the journals.

Not only do these publishers not pay us for our work; they then sell access to these journals to the very same universities and institutions that fund the research and editorial labour in the first place. Universities need access to journals because these are where most cutting-edge research is disseminated. But the cost of subscribing to these journals has become so exorbitantly expensive that some universities are struggling to afford them. Consequently, many researchers (not to mention the general public) remain blocked by paywalls, unable to access the information they need. If your university or library doesn't subscribe to the main journals, downloading a single paywalled article on philosophy or politics can cost between £30 and £40.

The commercial stranglehold on academic publishing is doing considerable damage to our intellectual and scientific culture. As disinformation and propaganda spread freely online, genuine research and scholarship remains gated and prohibitively expensive. For the past couple of years, I worked as an editor of Philosophy & Public Affairs, one of the leading journals in political philosophy. It was founded in 1972, and it has published research from renowned philosophers such as John Rawls, Judith Jarvis Thomson and Peter Singer. Many of the most influential ideas in our field, on topics from abortion and democracy to famine and colonialism, started out in the pages of this journal. But earlier this year, my co-editors and I and our editorial board decided we'd had enough, and resigned en masse.

We were sick of the academic publishing racket and had decided to try something different. We wanted to launch a journal that would be truly open access, ensuring anyone could read our articles. This will be published by the Open Library of Humanities, a not-for-profit publisher funded by a consortium of libraries and other institutions. When academic publishing is run on a not-for-profit basis, it works reasonably well. These publishers provide a real service and typically sell the final product at a reasonable price to their own community. So why aren't there more of them?

To answer this, we have to go back a few decades, when commercial publishers began buying up journals from university presses. Exploiting their monopoly position, they then sharply raised prices. Today, a library subscription to a single journal in the humanities or social sciences typically costs more than £1,000 a year. Worse still, publishers often "bundle" journals together, forcing libraries to buy ones they don't want in order to have access to ones they do. Between 2010 and 2019, UK universities paid more than £1bn in journal subscriptions and other publishing charges. More than 90% of these fees went to the big five commercial publishers (UCL and Manchester shelled out over £4m each). It's worth remembering that the universities funded this research, paid the salaries of the academics who produced it and then had to pay millions of pounds to commercial publishers in order to access the end product.

Even more astonishing is the fact these publishers often charge authors for the privilege of publishing in their journals. In recent years, large publishers have begun offering so-called "open access" articles that are free to read. On the surface, this might sound like a welcome improvement. But for-profit publishers provide open access to readers only by charging authors, often thousands of pounds, to publish their own articles. Who ends up paying these substantial author fees? Once again, universities. In 2022 alone, UK institutions of higher education paid more than £112m to the big five to secure open-access publication for their authors.

This trend is having an insidious impact on knowledge production. Commercial publishers are incentivised to try to publish as many articles and journals as possible, because each additional article brings in more profit. This has led to a proliferation of junk journals that publish fake research, and has increased the pressure on rigorous journals to weaken their quality controls. It's never been more evident that for-profit publishing simply does not align with the aims of scholarly inquiry.

There is an obvious alternative: universities, libraries, and academic funding agencies can cut out the intermediary and directly fund journals themselves, at a far lower cost. This would remove commercial pressures from the editorial process, preserve editorial integrity and make research accessible to all. The term for this is "diamond" open access, which means the publishers charge neither authors, editors, nor readers (this is how our new journal will operate). Librarians have been urging this for years. So why haven't academics already migrated to diamond journals?

The reason is that such journals require alternative funding sources, and even if such funding were in place, academics still face a massive collective action problem: we want a new arrangement but each of us, individually, is strongly incentivised to stick with the status quo. Career advancement depends heavily on publishing in journals with established name recognition and prestige, and these journals are often owned by commercial publishers. Many academics – particularly early-career researchers trying to secure long-term employment in an extremely difficult job market – cannot afford to take a chance on new, untested journals on their own.


Original Submission

posted by hubie on Tuesday July 23, @11:45AM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

New research led by scientists at the University of Michigan reveals that the Arctic has lost approximately 25% of its cooling ability since 1980 due to diminishing sea ice and reduced reflectivity. Additionally, this phenomenon has contributed to a global loss of up to 15% in cooling power.

Using satellite measurements of cloud cover and the solar radiation reflected by sea ice between 1980 and 2023, the researchers found that the percent decrease in sea ice’s cooling power is about twice as high as the percent decrease in annual average sea ice area in both the Arctic and Antarctic. The added warming impact from this change to sea ice cooling power is toward the higher end of climate model estimates.
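To make that ratio concrete (simple arithmetic on the figures quoted above, not a number taken from the paper): if the Arctic's cooling power has fallen by roughly 25% and the percentage drop in cooling power is about twice the percentage drop in annual average sea ice area, the implied decline in annual average Arctic sea ice area since 1980 is on the order of

25% ÷ 2 ≈ 12.5%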

“When we use climate simulations to quantify how melting sea ice affects climate, we typically simulate a full century before we have an answer,” said Mark Flanner, professor of climate and space sciences and engineering and the corresponding author of the study published in Geophysical Research Letters.

“We’re now reaching the point where we have a long enough record of satellite data to estimate the sea ice climate feedback with measurements.”

[...] The Arctic has seen the largest and most steady declines in sea ice cooling power since 1980, but until recently, the South Pole had appeared more resilient to the changing climate. Its sea ice cover had remained relatively stable from 2007 into the 2010s, and the cooling power of the Antarctic’s sea ice was actually trending up at that time.

That view abruptly changed in 2016, when an area larger than Texas melted on one of the continent’s largest ice shelves. The Antarctic lost sea ice then too, and its cooling power hasn’t recovered, according to the new study. As a result, 2016 and the following seven years have had the weakest global sea ice cooling effect since the early 1980s.

Beyond disappearing ice cover, the remaining ice is also growing less reflective as warming temperatures and increased rainfall create thinner, wetter ice and more melt ponds that reflect less solar radiation. This effect has been most pronounced in the Arctic, where sea ice has become less reflective in the sunniest parts of the year, and the new study raises the possibility that it could be an important factor in the Antarctic, too—in addition to lost sea ice cover.

[...] The research team hopes to provide their updated estimates of sea ice’s cooling power and climate feedback from less reflective ice to the climate science community via a website that is updated whenever new satellite data is available.

Reference: “Earth’s Sea Ice Radiative Effect From 1980 to 2023” by A. Duspayev, M. G. Flanner and A. Riihelä, 17 July 2024, Geophysical Research Letters.
  DOI: 10.1029/2024GL109608


Original Submission

posted by hubie on Tuesday July 23, @06:10AM   Printer-friendly

https://pldb.io/blog/JohnOusterhout.html

Dr. John Ousterhout is a computer science luminary who has made significant contributions to the field, particularly in the areas of operating systems and file systems. He is the creator of the Tcl scripting language and has also worked on several major software projects, including the log-structured file system and the Sprite operating system. Ousterhout's creation of Tcl has had a lasting impact on the technology industry, transforming the way developers think about scripting and automation.


Original Submission

posted by hubie on Tuesday July 23, @01:26AM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

Two new studies suggest that antibodies that attack people’s own tissues might cause ongoing neurological issues that afflict millions of people with the disease. 

When scientists transferred these antibodies from people with long COVID into healthy mice, certain symptoms, including pain, transferred to the animals too, researchers reported May 31 on bioRxiv.org and June 19 on medRxiv.org. 

Though scientists have previously implicated such antibodies, known as autoantibodies, as suspects in long COVID, the new studies are the first to offer direct evidence that they can do harm. “This is a big deal,” says Manali Mukherjee, a translational immunologist at McMaster University in Hamilton, Canada, who was not involved in the work. The papers make a good case for therapies that target autoantibodies, she says.

The work could also offer “peace of mind to some of the long-haulers,” Mukherjee says. As someone who has endured long COVID herself, she understands that when patients don’t know the cause of their suffering, it can add to their anxiety. They wonder, “What the hell is going wrong with me?” she says.

[...] Scientists have proposed many hypotheses for what causes long COVID, including SARS-CoV-2 virus lingering in the tissues and the reawakening of dormant herpes viruses (SN: 3/4/24). Those elements may still play a role in some people’s long COVID symptoms, but for pain, at least, rogue antibodies seem to be enough to kick-start the symptom all on their own. It’s not an out-of-the-blue role for autoantibodies; scientists suspect they may also be involved in other conditions that cause people pain, including fibromyalgia and myalgic encephalomyelitis/chronic fatigue syndrome.

But if doctors could identify which long COVID patients have pain-linked autoantibodies, they could try to reduce the amount circulating in the blood, says Iwasaki, who is also a Howard Hughes Medical Institute investigator. “I think that would really be a game changer for this particular set of patients.” 

The work represents a “very strong level of evidence” that autoantibodies could cause harm in people with long COVID, says Ignacio Sanz, an immunologist at Emory University in Atlanta. Both he and Mukherjee would like to see the findings validated in larger sets of participants. And the real clincher, Sanz says, would come from longer-term studies. If scientists could show that patients’ symptoms ease as these rogue antibodies disappear over time, that’d be an even surer sign of their guilt. 

References:
    • K. S. Guedes de Sa et al. A causal link between autoantibodies and neurological symptoms in long COVID. medRxiv.org. Posted June 19, 2024. doi: 10.1101/2024.06.18.24309100.
    • H-J Chen et al. Transfer of IgG from long COVID patients induces symptomology in mice. bioRxiv.org. Posted May 31, 2024. doi: 10.1101/2024.05.30.596590.


Original Submission