

posted by jelizondo on Saturday August 23, @11:00AM   Printer-friendly
from the blame-the-wife dept.

https://phys.org/news/2025-08-styling-hair-products-billions-nanoparticles.html

A Purdue research team led by Nusrat Jung, an assistant professor in the Lyles School of Civil and Construction Engineering, and her Ph.D. student Jianghui Liu, found that a 10–20-minute heat-based hair care routine exposes a person to upward of 10 billion nanoparticles that are directly deposited into their lungs. These particles can lead to serious health risks such as respiratory stress, lung inflammation and cognitive decline.

The team's findings are published in Environmental Science & Technology.

"This is really quite concerning," Jung said. "The number of nanoparticles inhaled from using typical, store-bought hair-care products was far greater than we ever anticipated."

Until this study, Jung said, no real-time measurements on nanoparticle formation during heat-based hair styling had been conducted in full-scale residential settings. Their research addresses this gap by examining temporal changes in indoor nanoparticle number concentrations and size distributions during realistic heat-based hair styling routines.

"By providing a detailed characterization of indoor nanoparticle emissions during these personal care routines, our research lays the groundwork for future investigations into their impact on indoor atmospheric chemistry and inhalation toxicity," Jung said. "Studies of this kind have not been done before, so until now, the public has had little understanding of the potential health risks posed by their everyday hair care routines."

What makes these hair care products so harmful, Liu said, is their combination with large amounts of heat from styling appliances such as curling irons and straighteners. When heated above 300 degrees Fahrenheit, the chemicals are not only rapidly released into the air but also drive the formation of substantial numbers of new airborne nanoparticles.

"Atmospheric nanoparticle formation was especially responsive to these heat applications," Liu said. "Heat is the main driver—cyclic siloxanes and other low-volatility ingredients volatilize, nucleate and grow into new nanoparticles, most of them smaller than 100 nanometers."

In a study Jung published in 2023, her team found that heat significantly increased emissions of volatile chemicals such as decamethylcyclopentasiloxane (aka D5 siloxane) from hair care routines. D5 siloxane in particular was identified as a compound of concern when inhaled.

"When we first studied the emissions from hair care products during heat surges, we focused on the volatile chemicals that were released, and what we found was already quite concerning," Jung said. "But when we took an even closer look with aerosol instrumentation typically used to measure tailpipe exhaust, we discovered that these chemicals were generating bursts of anywhere from 10,000 to 100,000 nanoparticles per cubic centimeter."

Jung said that D5 siloxane is an organosilicon compound and is often listed first or second in the ingredient lists of many hair care products, indicating it can be among the most abundant ingredients. It has become a common ingredient over the past few decades in many personal care products due to its low surface tension, inertness, high thermal stability and smooth texture.

According to the European Chemicals Agency, D5 siloxane is classified as "very persistent, very bioaccumulative." And while the test results on laboratory animals are already concerning, Jung said, there is little information on its human impact. For this reason, the European Union has already restricted the chemical in wash-off cosmetic products.

"D5 siloxane has been found to lead to adverse effects on the respiratory tract, liver and nervous system of laboratory animals," Jung said previously. However, under high heat, cyclic siloxanes and other hair care product ingredients can volatilize and contribute to the formation of large numbers of airborne nanoparticles that deposit efficiently throughout the respiratory system. These secondary emissions and exposures remain far less characterized than the primary chemical emissions.

"And now it appears that the airborne hazards of these products—particularly 'leave-on' formulations designed to be heat-resistant, such as hair sprays, creams and gels—are even greater than we expected," Liu said.

According to the report, respiratory tract deposition modeling indicated that more than 10 billion nanoparticles could deposit in the respiratory system during a single hair styling session, with the highest dose occurring in the pulmonary region—the deepest part of the lungs. Their findings identified heat-based hair styling as a significant indoor source of airborne nanoparticles and highlight previously underestimated inhalation exposure risks.
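
The order of magnitude is easy to sanity-check. The sketch below is a back-of-envelope estimate, not the paper's dosimetry model: the burst concentration comes from the figures quoted above, while the breathing rate, session length, and deposition fraction are assumed round numbers.

```typescript
// Back-of-envelope check on the deposited-dose figure, not the paper's
// dosimetry model. The concentration comes from the burst values quoted
// above; the other three parameters are assumed round numbers.
const concentration = 1e5;      // particles per cm^3 (upper burst value quoted above)
const breathingRate = 1e4;      // cm^3 inhaled per minute (~10 L/min, light activity)
const sessionMinutes = 20;      // upper end of the 10-20 minute routine
const depositionFraction = 0.5; // assumed fraction of sub-100 nm particles retained

const deposited = concentration * breathingRate * sessionMinutes * depositionFraction;
console.log(`~${deposited.toExponential(1)} particles deposited`); // ~1.0e+10
```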

As for how to avoid putting oneself at risk of inhaling mixtures of airborne nanoparticles and volatile chemicals, Jung and Liu said the best course of action is simply to avoid using such products—particularly in combination with heating devices. If that is not possible, Jung recommends reducing exposure by using bathroom exhaust fans for better room ventilation.

"If you must use hair care products, limit their use and ensure the space is well ventilated," Liu said. "Even without heating appliances, better ventilation can reduce exposure to volatile chemicals, such as D5 siloxane, in these products."

To more fully capture the complete nanoparticle formation and growth process, Jung said future studies should integrate nano-mobility particle sizing instruments capable of detecting particles down to a single nanometer. The chemical composition of these particles should also be evaluated.

"By addressing these research gaps, future studies can provide a more holistic understanding of the emissions and exposures associated with heat-based hair styling, contributing to improved indoor air pollution assessments and mitigation strategies," Jung said.

Jung and Liu's experimental research was conducted in a residential architectural engineering laboratory that Jung designed: the Purdue zero Energy Design Guidance for Engineers (zEDGE) tiny house.

The zEDGE lab is a mechanically ventilated, single-zone residential building with a conditioned interior. A state-of-the-art high-resolution electrical low-pressure impactor (HR-ELPI+) from Jung's laboratory was used to measure airborne nanoparticles in indoor air in real time, second by second. In parallel, a proton transfer reaction time-of-flight mass spectrometer (PTR-TOF-MS) was used to monitor volatile chemicals in real time.

The hair care routine emission experiments were conducted during a measurement campaign in zEDGE over a period of several months, covering three experiment types:

  • realistic hair care experiments, which replicate actual hair care routines in the home environment;
  • hot plate emission experiments, which explore the relationship between the temperature of the hair care tools and nanoparticle formation; and
  • surface area emission experiments, which investigate how hair surface area impacts nanoparticle emissions during hair care events.

For the realistic hair care routine emission experiments, participants were asked to bring their own hair care products and hair styling tools to replicate their routines in zEDGE. Prior to each experiment, the participants were instructed to separate their hair into four sections. The hair length of each participant was categorized as long hair (below the shoulder) or short hair (above the shoulder). The sequence of each experiment consisted of four periods, to replicate a real-life routine.

After hair styling, the participants had two minutes to collect the tools and leave zEDGE; this was followed by a 60-minute concentration decay period in which zEDGE was unoccupied, and the HR-ELPI+ monitored the decay in indoor nanoparticle concentrations. The experiments and subsequent analysis focused on the formation of nanoparticles and resulting exposure during and after active hair care routine periods.
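
For the curious, a decay period like this is typically analyzed with a first-order model, C(t) = C0 * exp(-k*t), where k lumps together air exchange and surface deposition; two readings suffice to estimate k. A minimal sketch with invented readings, not the study's data:

```typescript
// Sketch of the standard analysis for a concentration decay period:
// after the source stops, indoor levels roughly follow first-order
// decay, so the loss rate falls out of two concentration readings.
// The numbers below are invented examples.
function lossRate(c0: number, ct: number, minutes: number): number {
  return Math.log(c0 / ct) / minutes; // per-minute decay constant k
}

const k = lossRate(50_000, 5_000, 60); // e.g. 50,000 -> 5,000 particles/cm^3 in 60 min
console.log(`k = ${k.toFixed(3)} per minute, half-life ${(Math.LN2 / k).toFixed(1)} min`);
```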

More information: Jianghui Liu et al, Indoor Nanoparticle Emissions and Exposures during Heat-Based Hair Styling Activities, Environmental Science & Technology (2025). DOI: 10.1021/acs.est.4c14384


Original Submission

posted by hubie on Saturday August 23, @06:10AM   Printer-friendly

New research ferments the perfect recipe for fine chocolate flavour - University of Nottingham:

Researchers have identified key factors that influence the flavour of chocolate during the cocoa bean fermentation process, a discovery that could offer chocolate producers a powerful tool to craft consistently high-quality, flavour-rich chocolate.

Scientists from the University of Nottingham's School of Biosciences examined how cacao bean temperature, pH, and microbial communities interact during fermentation and how these factors shape chocolate flavour. The team identified key microbial species and metabolic traits associated with fine-flavour chocolate and found that both abiotic factors (such as temperature and pH) and biotic factors (the microbial communities) are strong, consistent indicators of flavour development. The study has been published today in Nature Microbiology.

The quality and flavour of chocolate begin with the cacao bean, which is profoundly influenced by both pre- and post-harvest factors. Among these, fermentation is the first, and one of the most critical steps after harvest. It lays the foundation for aroma development, flavour complexity, and the reduction of bitterness in the final chocolate product.

Dr David Gopaulchan, the first author of the paper, from the School of Biosciences explains: "Fermentation is a natural, microbe-driven process that typically takes place directly on cocoa farms, where harvested beans are piled in boxes, heaps, or baskets. In these settings, naturally occurring bacteria and fungi from the surrounding environment break down the beans, producing key chemical compounds that underpin chocolate's final taste and aroma. However, this spontaneous fermentation is largely uncontrolled. Farmers have little influence over which microbes dominate or how the fermentation process unfolds. As a result, fermentation, and thus the flavour and quality of the beans, varies widely between harvests, farms, regions, and countries."

The researchers wanted to find out whether this unstable, natural process could be replicated and controlled in the lab. Working with Colombian farmers during the fermentation process, they identified the factors that influence flavour. They were then able to use this knowledge to create a lab fermentation process and develop a defined microbial community, a curated mix of bacteria and fungi, capable of replicating the key chemical and sensory outcomes of traditional fermentations. This synthetic community successfully mimicked the dynamics of on-farm fermentations and produced chocolate with the same fine-flavour characteristics.

Dr David Gopaulchan adds: "The discoveries we have made are really important for helping chocolate producers to be able to consistently maximise their cocoa crops as we have shown they can rely on measurable markers such as specific pH, temperature, and microbial dynamics, to reliably predict and achieve consistent flavour outcomes.

"This research signals a shift from spontaneous, uncontrolled fermentations to a standardized, science-driven process. Just as starter cultures revolutionized beer and cheese production, cocoa fermentation is poised for its own transformation, powered by microbes, guided by data, and tailored for flavour excellence. By effectively domesticating the fermentation process, this work lays the foundation for a new era in chocolate production, where defined starter cultures can standardise fermentation, unlock novel flavour possibilities, and elevate chocolate quality on a global scale."

Journal Reference: Gopaulchan, D., Moore, C., Ali, N. et al. A defined microbial community reproduces attributes of fine flavour chocolate fermentation. Nat Microbiol (2025). https://doi.org/10.1038/s41564-025-02077-6


Original Submission

posted by hubie on Saturday August 23, @01:25AM   Printer-friendly

https://arstechnica.com/tech-policy/2025/08/t-mobile-claimed-selling-location-data-without-consent-is-legal-judges-disagree/
https://archive.ph/LBtay

A federal appeals court rejected T-Mobile's attempt to overturn $92 million in fines for selling customer location information to third-party firms.

The Federal Communications Commission last year fined T-Mobile, AT&T, and Verizon, saying the carriers illegally shared access to customers' location information without consent and did not take reasonable measures to protect that sensitive data against unauthorized disclosure. The fines relate to sharing of real-time location data that was revealed in 2018, but it took years for the FCC to finalize the penalties.

The three carriers appealed the rulings in three different courts, and the first major decision was handed down Friday. A three-judge panel at the US Court of Appeals for the District of Columbia Circuit ruled unanimously against T-Mobile and its subsidiary Sprint.

"Every cell phone is a tracking device," the ruling begins. "To receive service, a cell phone must periodically connect with the nearest tower in a wireless carrier's network. Each time it does, it sends the carrier a record of the phone's location and, by extension, the location of the customer who owns it. Over time, this information becomes an exhaustive history of a customer's whereabouts and 'provides an intimate window into [that] person's life.'"

Until 2019, T-Mobile and Sprint sold customer location information (CLI) to location information aggregators LocationSmart and Zumigo.

The carriers did not verify whether buyers obtained customer consent, the ruling said. "Several bad actors abused Sprint and T-Mobile's programs to illicitly access CLI without the customers' knowledge, let alone consent. And even after Sprint and T-Mobile became aware of those abuses, they continued to sell CLI for some time without adopting new safeguards," judges wrote.

The carriers claimed that selling the data didn't violate the law. Instead of denying the allegations, they argued that the FCC overstepped its authority. But the appeals court panel decided that the FCC acted properly:

[...] T-Mobile told Ars today that it is "currently reviewing the court's action" but did not provide further comment. The carrier could seek an en banc review in front of all of the appeals court's judges, or ask the Supreme Court to review the case. Meanwhile, AT&T is challenging its fine in the 5th Circuit appeals court while Verizon is challenging in the 2nd Circuit.

[...] The carriers also argued that the device-location information, which is "passively generated when a mobile device pings cell towers to support both voice and data services," does not qualify as Customer Proprietary Network Information (CPNI) under the law. The carriers said the law "covers information relating to the 'location... of use' of a telecommunications service," and claimed that only call location information fits that description.

Judges faulted T-Mobile and Sprint for relying on "strained interpretations" of the statute. "We begin with the text. The Communications Act refers to the 'location... of use' of a telecommunications service, not the location of a voice call... Recall that cell phones connect periodically to cell towers, and that is what enables the devices to send and receive calls at any moment," the ruling said. In the judges' view, "a customer 'uses' a telecommunications service whenever his or her device connects to the carrier's network for the purpose of being able to send and receive calls. And the Carriers' reading therefore does not narrow 'location... of use' to times when the customer is actively on a voice call."

Judges also weren't persuaded by the argument that the fines were too large. "The Carriers note that the Commission previously had imposed such large fines only in cases involving fraud or intentional efforts to mislead consumers, and they are guilty of neither form of misconduct," the ruling said. "The Commission reasonably explained, however, that the Carriers' conduct was 'egregious': Even after the Securus breach exposed Sprint and T-Mobile's safeguards as inadequate, both carriers continued to sell access to CLI under a broken system."


Original Submission

posted by hubie on Friday August 22, @08:39PM   Printer-friendly

Tree species face unprecedented climate shifts across their ranges:

Trees may seem like quiet bystanders, but they're vital to the health of ecosystems, the stability of our climate, and even our own survival. As the planet heats up, these ancient organisms are being pushed to their limits – and the impacts reach far beyond the forest.

A new study has shed light on the future of trees under climate change. The research team modeled how more than 32,000 tree species might respond to changing temperatures.

The findings are stark. If greenhouse gas emissions remain high, most trees on Earth could find themselves living in climates they have never known.

The study reveals a major upheaval for the world's trees. According to study lead author Dr. Coline Boonman from Wageningen University, the team found that nearly 70% of tree species will see significant climate shifts in at least part of their range by the end of this century.

"For some species, over half of their habitat could be affected under an extreme 4°C warming scenario," said Dr. Boonman.

This means trees in these regions will face temperatures, rainfall, and seasonal patterns outside their historical experience. For organisms adapted to specific conditions, these shifts may prove overwhelming. Some species may fail to reproduce; others may suffer from drought, disease, or increased competition.

Trees cannot move quickly. They grow slowly, with some species taking decades to mature. As the climate shifts fast, many may be left behind.
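
Conceptually, the exposure metric behind such numbers can be sketched in a few lines: compare projected conditions in each part of a species' range against its historical envelope and count the fraction that falls outside. The toy below uses invented structures and values; the actual study works with many climate variables across more than 32,000 species.

```typescript
// Toy climate-envelope exposure check: a species counts as exposed in
// the cells of its range where the projected climate falls outside
// everything in the historical record. All data here is invented.
interface RangeCell {
  historical: number[]; // past mean annual temperatures for this grid cell (deg C)
  projected: number;    // projected end-of-century temperature (deg C)
}

function exposedFraction(range: RangeCell[]): number {
  const exposed = range.filter(({ historical, projected }) => {
    const lo = Math.min(...historical);
    const hi = Math.max(...historical);
    return projected < lo || projected > hi; // novel conditions for this cell
  });
  return exposed.length / range.length;
}

// One cell stays inside its envelope, the other sees novel warmth:
const toyRange: RangeCell[] = [
  { historical: [14.1, 14.9, 15.3], projected: 15.0 },
  { historical: [14.1, 14.9, 15.3], projected: 17.8 },
];
console.log(exposedFraction(toyRange)); // 0.5
```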

The study does not just warn of future risks. It shows where those risks are most concentrated. The researchers mapped "exposure hotspots" – regions where tree diversity is most likely to face serious disruption.

These include large parts of Eurasia, northwestern North America, northern Chile, and the Amazon Delta. In these areas, the changes in climate could be so extreme that native species may not survive without human intervention.

"This research provides a global map of where trees are most vulnerable to climate change," explained Dr. Boonman. "It's a crucial tool for conservation planning and ecosystem resilience."

[...] Importantly, this study only looked at climate exposure. It did not include other threats such as illegal logging, invasive pests, or pollution. This means that the risk to tree species is likely even greater than reported.

A tree struggling with drought may also be weakened against disease. A forest hit by fire may face development before it can recover. Climate stress is just one part of a complex web of pressures.

Not all news from the study was grim. The researchers also identified what they call climate refugia. These are areas expected to remain relatively stable despite the overall global warming trend.

In such zones, trees could persist with less stress, acting as sanctuaries for biodiversity if protected. These stable environments offer a rare opportunity. If shielded from human interference, they may become the lifeboats of future forests.

In a rapidly warming world, these refuges could harbor tree species long enough for the climate to stabilize or for new conservation tools to be deployed. Their preservation is vital for long-term efforts to protect forest ecosystems.

Journal Reference: Coline C. F. Boonman, Selwyn Hoeks, Josep M. Serra-Diaz, et al. High tree diversity exposed to unprecedented macroclimatic conditions even under minimal anthropogenic climate change, PNAS, 122 (26) e2420059122 https://doi.org/10.1073/pnas.2420059122


Original Submission

posted by hubie on Friday August 22, @03:56PM   Printer-friendly

https://arstechnica.com/security/2025/08/adult-sites-use-malicious-svg-files-to-rack-up-likes-on-facebook/
https://archive.ph/lzs6b

Running JavaScript from inside an image? What could possibly go wrong?

Dozens of porn sites are turning to a familiar source to generate likes on Facebook—malware that causes browsers to surreptitiously endorse the sites. This time, the sites are using a newer vehicle for sowing this malware—.svg image files.

The Scalable Vector Graphics format is an open standard for rendering two-dimensional graphics. Unlike more common formats such as .jpg or .png, .svg uses XML-based text to specify how the image should appear, allowing files to be resized without losing quality due to pixelation. But therein lies the rub: The text in these files can incorporate HTML and JavaScript, and that, in turn, opens the risk of them being abused for a range of attacks, including cross-site scripting, HTML injection, and denial of service.
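
To make the risk concrete, here is a minimal, benign sketch of a scriptable SVG; opened directly as a document, the embedded script runs, and this is the slot where attackers hide obfuscated payloads.

```xml
<!-- A benign sketch of a scriptable SVG. Opened directly as a document,
     the embedded <script> executes. JSFuck-style obfuscation rebuilds
     JavaScript from just the characters []()!+ (for example, +!![]
     evaluates to the number 1), which is why real payloads look like
     walls of punctuation. -->
<svg xmlns="http://www.w3.org/2000/svg" width="100" height="100">
  <circle cx="50" cy="50" r="40" fill="tomato" />
  <script type="text/javascript">
    console.log("script inside an image just executed");
  </script>
</svg>
```

Note that browsers do not execute scripts in SVGs referenced via plain <img> tags; the dangerous path is the file being opened or embedded as a document.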

Security firm Malwarebytes on Friday said it recently discovered that porn sites have been seeding booby-trapped .svg files to select visitors. When one of these people clicks on the image, it causes browsers to surreptitiously register a like for Facebook posts promoting the site.

Unpacking the attack took work because much of the JavaScript in the .svg images was heavily obfuscated using a custom version of "JSFuck," a technique that uses only a handful of character types to encode JavaScript into a camouflaged wall of text.

Once decoded, the script causes the browser to download a chain of additional obfuscated JavaScript. The final payload, a known malicious script called Trojan.JS.Likejack, induces the browser to like a specified Facebook post as long as a user has their account open.

"This Trojan, also written in Javascript, silently clicks a 'Like' button for a Facebook page without the user's knowledge or consent, in this case the adult posts we found above," Malwarebytes researcher Pieter Arntz wrote. "The user will have to be logged in on Facebook for this to work, but we know many people keep Facebook open for easy access."

Related: Trojans Embedded in .svg Files


Original Submission

posted by hubie on Friday August 22, @11:11AM   Printer-friendly

Teenagers are choosing to study STEM subjects:

A-level results in 2025 show the increasing popularity of STEM (science, technology, engineering and math) among students. For students taking three A-levels—the majority—the most popular combination of subjects was biology, chemistry and math.

The subject with the greatest rise in entries from 2024 is further math, followed by economics, math, physics and chemistry. Math remains the most popular subject, with entries making up 12.7% of all A-level entries.

Conversely, subjects such as French, drama, history and English literature are falling in exam entry numbers.

There is considerable incentive for young people who may be looking beyond school and university to the job market to study STEM. Research has found that STEM undergraduate degrees bring higher financial benefits to people and to the public purse than non-STEM subjects.

Many of the world's fastest-growing jobs need STEM skills. These include data analysts, AI specialists, renewable energy engineers, app developers, cybersecurity experts and financial technology experts.

[...] What's more, math, engineering and the sciences are now vital parts of careers that might have once seemed unrelated. It was once the case that the division between arts and science was seen as unbridgeable: you were firmly on one side or the other. Today this is far less evident.

Artists, in their many manifestations, are almost by default material scientists. Architects, photographers, musicians, video-makers, sound and lighting technicians are (arguably) technical engineers. Landscape gardeners are environmentalists, chefs are food scientists.

[...] One important factor here is imbuing students with a positive STEM identity. When young people think they are good at STEM subjects and are able to be successful, they are much more likely to choose a STEM career.

The upshot here is that, as the world changes, and changes quickly, so does the realization that STEM is an essential and invaluable dimension of life and that career prospects are varied and available at many, many levels. It seems little wonder that students have come to see this and are enrolling in study and employment in greater numbers than before.


Original Submission

posted by jelizondo on Friday August 22, @06:22AM   Printer-friendly
from the you'll-own-nothing-and-like-it dept.

VW introduces monthly subscription to increase car power:

German car making giant Volkswagen (VW) has introduced a subscription for UK customers wanting to increase the power of some of its electric cars.

Those who buy an eligible car in its ID.3 range can choose to pay extra if they want to unlock the full power of the electric motor inside the vehicle.

VW says the "optional power upgrade" will cost £16.50 per month or £165 annually - or people can choose to pay £649 for a lifetime subscription.
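
Taking the quoted prices at face value, a quick comparison shows where the lifetime option breaks even; a rough sketch:

```typescript
// Rough break-even comparison of the three quoted prices.
const monthly = 16.50; // GBP per month
const annual = 165;    // GBP per year
const lifetime = 649;  // GBP one-off, stays with the car

console.log((lifetime / annual).toFixed(1));         // ~3.9 years vs annual billing
console.log((lifetime / (monthly * 12)).toFixed(1)); // ~3.3 years vs monthly billing
```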

The firm said it was "offering customers choice" with the feature.

Auto Express, which first reported the story, said a lifetime subscription would be tied to the car rather than the individual - meaning the upgrade would remain if the car were sold on.

A VW spokesperson told the BBC they believed giving people the option to purchase more power for their car was "nothing new".

"Historically many petrol and diesel vehicles have been offered with engines of the same size, but with the possibility of choosing one with more potency," they said.

They added that the power upgrades would allow customers to opt for a "sportier" driving experience at any time, "rather than committing from the outset with a higher initial purchase price".

Such offers have proved controversial in the past, with some customers displeased that they may have to pay to access features which - in some cases - are already present inside the car they own.

Other vehicle manufacturers such as BMW have introduced similar subscription-based add-ons in the past, such as for heated seats and steering wheels.

And Mercedes introduced an online subscription service in the US in 2022 which allowed customers to pay to make its electric cars accelerate faster.

According to a survey from S&P Global, some customers may be put off by the cost of in-car subscriptions for features such as connectivity, or by basic functions being split into paid tiers.

It said the number of respondents who said they would pay for connected services had fallen from 86% in 2024 to 68% in 2025.

This is despite a wider embrace of subscriptions in general, with market research firm Juniper Research estimating in 2024 the global subscription economy would reach nearly $1tn (£740bn) in value by 2028.


Original Submission

posted by jelizondo on Friday August 22, @01:35AM   Printer-friendly

I read this whole article and found it interesting, thought others might too...
https://pluralistic.net/2025/08/20/billionaireism/#surveillance-infantalism

From one part:
"...the ultra-rich (and the states they have suborned) have a fundamental understanding that the more unfair a society is, the less stable it is. The more unstable a state is, the more its ruling class have to expend on private security. No captain of industry wants to arise from his sarcophagus of a morning, only to discover a mob of hoi polloi building a guillotine on his lawn.

As Thomas Piketty argues, there comes a point where it's cheaper to make society more fair – say, by building hospitals and schools – than it is to pay for all the gaiter-wearing gun-thugs you'll need to weed out the guillotine-building projects that spontaneously erupt under conditions of gross unfairness:

https://memex.craphound.com/2014/06/24/thomas-pikettys-capital-in-the-21st-century/

Mass surveillance shifts the guillotine equilibrium in favor of being greedier, by making it cheaper to identify and neutralize incipient guillotine-builders, which means that you can raise the greediness floor without seeing a concomitant rise in your guard labor bill."

From another part of the article:
"...But there's another way in which surveillance abets rampant billionaireism: when companies spy on us, they can change the rules of their services to increase how much we pay them, and decrease how much they pay us. When companies do this to their customers, they call it 'personalized pricing' – but everyone else calls it what it is, surveillance pricing:

When a company charges you more than someone else for the same service (say, Uber jacking up the price of a ride because your phone battery is about to die, or an airline charging you extra because they know you have a funeral to attend), they're effectively re-valuing the dollars in your bank account. The fact that the cab-ride costs you $20 and costs someone else $15 means that your dollar is only worth $0.75."

https://pluralistic.net/2025/06/24/price-discrimination/#algorithmic-pricing


Original Submission

posted by janrinok on Thursday August 21, @08:49PM   Printer-friendly

Uncovering the fraudsters and their schemes responsible for polluting the scientific literature:

The extent of fraudulent papers in the scientific literature is growing exponentially and goes far beyond isolated events, new research has revealed. 'You can see a scenario in a decade or less where you could have more than half of [studies being published] each year being fraudulent,' says Reese Richardson, one of the study's key researchers at Northwestern University, US.

Scientific integrity and honesty are key pillars of science and research, yet in recent years large organisations – known as paper mills – have been threatening these ideals by facilitating systemic scientific fraud.

Each year, paper mills produce and sell thousands of often poor-quality or fake scientific studies, sometimes using entirely made-up or doctored data and images. Growing pressure for researchers to 'publish or perish' has contributed to an increasingly competitive scientific community, leading some to turn to these businesses to pad their publication record. 'It becomes sort of a snowball situation where the optimum strategy is you have to start to cheat in order to win out,' says Richardson.

Previous studies have exposed paper mills using specific cases, rather than looking at the issue systematically. To gain a sense of the scale of the problem, the team, led by Luís Nunes Amaral, analysed a collection of paper-milled manuscripts discovered by science sleuths and found that the problem is even worse and more widespread than thought. 'There are networks of individuals and entities that are producing scientific fraud at scale,' says Amaral.

Analysis of 2213 articles flagged for image duplication – a hallmark of fraudulent science – revealed that such articles are published in large batches. The researchers also suggest that paper mills cooperate with brokers, a small number of dishonest editors who control some of the publishing decisions at select journals. Even a small number of brokers can lead to the publication of huge numbers of counterfeit articles, with publisher Frontiers recently announcing that they are retracting a batch of 122 articles.

Despite efforts to curb paper mills, Amaral and his team found that suspected paper-milled articles are growing exponentially, doubling every 1.5 years. In comparison, the total number of all publications is only doubling every 15 years.
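
Those two doubling times are what drive Richardson's "more than half" scenario quoted earlier: fraud's share of the literature grows by a factor of 2^(t/1.5 - t/15). The sketch below runs that forward from an assumed (not reported) 1% starting share.

```typescript
// If fraudulent output doubles every 1.5 years while the literature
// overall doubles every 15, fraud's share grows by 2^(t/1.5 - t/15),
// i.e. 2^(0.6 t). The 1% starting share is an assumed figure, and the
// model is deliberately crude (it treats the two curves as independent).
function fraudShare(initialShare: number, years: number): number {
  const growth = Math.pow(2, years / 1.5 - years / 15);
  return Math.min(initialShare * growth, 1); // a share cannot exceed 100%
}

console.log(fraudShare(0.01, 10)); // 0.64 -> ~64% of papers after a decade
```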

'I'm not sure that [the researchers] are being clear enough why this is happening. It's about money and the need for publications. [But] in scientific research those shouldn't be the key factors,' says Jana Christopher, an expert in image integrity who works for the publisher FEBS Press.

The team's analysis of articles published by Plos One – a peer-reviewed open-access mega-journal that discloses the handling editor – identified 45 editors who accepted articles that were more likely to be retracted or receive post-publication comments on PubPeer than chance would predict. These individuals comprised just 0.25% of the journal's editors, handling 1.3% of Plos One articles published in 2024, yet they oversaw 30.2% of the journal's retracted articles.
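
Put another way, the quoted percentages imply these editors' articles were retracted at roughly 23 times the rate their share of output would predict. A one-line check:

```typescript
// Over-representation factor implied by the quoted percentages.
const articleShare = 1.3;     // % of 2024 PLOS ONE articles these editors handled
const retractionShare = 30.2; // % of the journal's retractions they oversaw

console.log((retractionShare / articleShare).toFixed(1)); // ~23x their expected share
```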

Network analysis revealed that these editors send most of their submissions to one another over other editors. The team found the same trend when analysing data from 10 Hindawi journals - another open-access publisher - and the Institute of Electrical and Electronics Engineers' (IEEE) conference proceedings.

These paper mills continue to churn out fraudulent papers and few fraudsters are ever caught and shut down. As a result, the scientific community has become increasingly concerned by paper mills, leading to efforts to curb their influence. Consequently, paper mills must now adapt to guarantee publication for their clients, 'hopping' between journals as they become de-indexed or fall out of favour with customers.

Amaral and his team found evidence of a fraudulent publishing service – known as the Academic Research and Development Association (ARDA) – that demonstrates this behaviour. The researchers found that the list of the association's journals evolves to continue guaranteed publication for customers, with it changing in direct response to de-indexing by academic databases such as Scopus or Web of Science. While this is just one case, the team notes that the rapid rise in the number of papers some journals publish is consistent with 'hopping', highlighting that this is a widespread practice.

Fraudulent scientific activity is not consistent within each scientific field, with the researchers highlighting that fraudsters preferentially select certain sub-fields over others. The team found that within six sub-fields of RNA biology the retraction rate – a sign of the number of fraudulent articles – was inconsistent. 'Some subfields of RNA [biology] are now so polluted by fraudulent research that it essentially becomes impossible for legitimate researchers to even enter the field,' says Amaral.

The solution to this issue remains divisive. Current strategies involve using artificial intelligence and tools to identify non-standard phrases and duplicate or doctored images, post-publication peer review and retracting fraudulent articles after publication. Christopher notes that 'it's going to be increasingly difficult to distinguish between genuine research and low quality or made up content', especially as the number of fraudulent articles increases.

Amaral believes that the situation needs collective action from institutions, such as the learned societies and national academies. 'You cannot have a system where you are trying to detect fraud after it's created. You actually have to prevent people from putting these things into the system,' he says.

He adds that restricting researchers engaging with paper mills and fraudulent papers would be beneficial and create a fairer scientific community. However, Richardson believes that penalising the clients of paper mills is not the answer and calls for systemic change. 'We need to make the scientific [community] much less competitive, fairer and more equal. Inequality, locally and globally, has led to this problem,' he says.

Journal Reference: The entities enabling scientific fraud at scale are large, resilient, and growing rapidly, (DOI: 10.1073/pnas.2420092122)


Original Submission

posted by janrinok on Thursday August 21, @04:09PM   Printer-friendly

Physics of badminton's new killer spin serve:

Serious badminton players are constantly exploring different techniques to give them an edge over opponents. One of the latest innovations is the spin serve, a devastatingly effective method in which a player adds a pre-spin just before the racket contacts the shuttlecock (aka the birdie). It's so effective—some have called it "impossible to return" [YouTube 4:15 --JE] —that the Badminton World Federation (BWF) banned the spin serve in 2023, at least until after the 2024 Paralympic Games in Paris.

The sanction wasn't meant to quash innovation but to address players' concerns about the possible unfair advantages the spin serve conferred. The BWF thought that international tournaments shouldn't become the test bed for the technique, which is markedly similar to the previously banned "Sidek serve." The BWF permanently banned the spin serve earlier this year. Chinese physicists have now teased out the complex fundamental physics of the spin serve, publishing their findings in the journal Physics of Fluids.

Shuttlecocks are unique among the various projectiles used in different sports due to their open conical shape. Sixteen overlapping feathers protrude from a rounded cork base that is usually covered in thin leather. The birdies one uses for leisurely backyard play might be synthetic nylon, but serious players prefer actual feathers.

Those overlapping feathers give rise to quite a bit of drag, such that the shuttlecock rapidly decelerates as it travels, and its trajectory falls at a steeper angle than it rises. The extra drag also means that players must exert quite a bit of force to hit a shuttlecock the full length of a badminton court. Still, shuttlecocks can achieve top speeds of more than 300 mph. The feathers also give the birdie a slight natural spin around its axis, which can affect different strokes. For instance, slicing from right to left, rather than vice versa, will produce a better tumbling net shot.
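
That asymmetry (descent steeper than ascent) falls out of any trajectory integration with velocity-squared drag. The toy simulation below uses an assumed drag constant rather than measured shuttlecock data, purely to show the lopsided flight path:

```typescript
// Toy 2D trajectory with velocity-squared drag, showing why a high-drag
// projectile falls more steeply than it rises. The drag constant and
// launch values are assumed round numbers, not shuttlecock measurements.
const g = 9.81;     // gravity, m/s^2
const kDrag = 0.02; // lumped drag constant, 1/m (assumed)
const dt = 0.001;   // integration time step, s

let x = 0, y = 0;
let vx = 40 * Math.cos(Math.PI / 4); // launch at 40 m/s, 45 degrees
let vy = 40 * Math.sin(Math.PI / 4);

let peakX = 0, peakY = 0;
while (y >= 0) {
  const v = Math.hypot(vx, vy);  // drag scales with speed squared,
  vx -= kDrag * v * vx * dt;     // directed against the velocity
  vy -= (g + kDrag * v * vy) * dt;
  x += vx * dt;
  y += vy * dt;
  if (y > peakY) { peakY = y; peakX = x; }
}

// With drag, the apex sits past the midpoint: the descent is steeper.
console.log(`apex at ${peakX.toFixed(1)} m of ${x.toFixed(1)} m total range`);
```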

The cork base makes the birdie aerodynamically stable: No matter how one orients the birdie, once airborne, it will turn so that it is traveling cork-first and will maintain that orientation throughout its trajectory. A 2015 study examined the physics of this trademark flip, recording flips with high-speed video and conducting free-fall experiments in a water tank to study how its geometry affects the behavior. The latter confirmed that shuttlecock feather geometry hits a sweet spot in terms of an opening inclination angle that is neither too small nor too large. And they found that feather shuttlecocks are indeed better than synthetic ones, deforming more when hit to produce a more triangular trajectory.

While many studies have extensively examined the physics of the shuttlecock's trajectory, the Chinese authors of this latest paper realized that nobody had yet investigated the effects of the spin serve on that trajectory. "We were interested in the underlying aerodynamics," said co-author Zhicheng Zhang of Hong Kong University of Science and Technology. "Moreover, revealing the effects of pre-spin on the trajectory and aerodynamics of a shuttlecock can help players learn the art of delivering a spin serve, and perhaps help players on the other side of the net to return the serve."

So the authors created a digital shuttlecock model based on the commercially available Li-Ning D8 feather shuttlecock, treating it as a smooth, rigid object and ignoring surface roughness and feather porosity as variables. They then ran 3D fluid dynamics simulations under three conditions: without pre-spin, with a pre-spin in the direction of the birdie's natural spin, and with a pre-spin in the opposite direction.

Zhang et al. were able to identify three distinct phases of the shuttlecock's trajectory: the "turnover" phase (when the birdie flips to its preferred orientation), the oscillation phase, and the stabilization phase. If a player uses a pre-spin in the opposite direction of the natural spin, this prolongs the oscillation phase, producing a "dip and sway" pattern. The authors attribute this to a high-pressure region that forms on the side facing the flight direction, which produces a larger decay in the birdie's velocity in the horizontal direction. The oscillation also produces a significant variation in pressure on the shuttlecock's feathers.

The authors acknowledge that different shuttlecock shapes could alter the trajectory and orientation results and plan to study different configurations in the future. They also hope to conduct motion capture studies of various badminton serves, including the spin serve, that they hope will help badminton players further refine their serving skills.

Journal Reference:
Shuttlecock trajectory during spin serves, (DOI: 10.1063/5.0275494)


Original Submission

posted by janrinok on Thursday August 21, @11:24AM   Printer-friendly

https://www.bleepingcomputer.com/news/legal/mozilla-warns-germany-could-soon-declare-ad-blockers-illegal/

A recent ruling from Germany's Federal Supreme Court (BGH) has revived a legal battle over whether browser-based ad blockers infringe copyright, raising fears about a potential ban of the tools in the country.

The case stems from online media company Axel Springer's lawsuit against Eyeo - the maker of the popular Adblock Plus browser extension.

Axel Springer says that ad blockers threaten its revenue generation model and frames the way websites are executed inside web browsers as a copyright issue.

This claim is grounded in the assertion that a website's HTML/CSS constitutes a protected computer program, and that an ad blocker intervenes in its in-memory execution structures (the DOM, CSSOM, and rendering tree), which would constitute unlawful reproduction and modification.
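
For the technically minded, the "intervention" at issue is mundane from a programming standpoint: a content script edits the browser's in-memory copy of the page. A minimal sketch, with an invented selector:

```typescript
// Minimal sketch of the kind of DOM intervention at issue: an extension
// content script that removes nodes matching a filter-list selector.
// The ".ad-banner" selector is invented for illustration. Only the
// browser's in-memory copy of the page changes; the HTML the server
// sent is untouched.
function hideAds(selector: string): number {
  const matches = document.querySelectorAll(selector);
  matches.forEach(el => el.remove()); // detach nodes from the live DOM tree
  return matches.length;
}

// Cosmetic filtering can instead go through the CSSOM, injecting a rule
// rather than touching individual nodes:
const style = document.createElement("style");
style.textContent = ".ad-banner { display: none !important; }";
document.head.appendChild(style);

console.log(`removed ${hideAds(".ad-banner")} elements`);
```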

Previously, this claim was rejected by a lower-level court in Hamburg, but a new ruling by the BGH found the earlier dismissal flawed and overturned part of the appeal, sending the case back for examination.

Mozilla's Senior IP & Product Counsel, Daniel Nazer, delivered a warning last week, noting that due to the underlying technical background of the legal dispute, the ban could also impact other browser extensions and hinder users' choices.

"There are many reasons, in addition to ad blocking, that users might want their browser or a browser extension to alter a webpage," Nazer says, explaining that some causes could stem from the need "to improve accessibility, to evaluate accessibility, or to protect privacy."

As per BGH's ruling, Springer's argument needs to be re-examined to determine if DOM, CSS, and bytecode count as a protected computer program and whether the ad blocker's modifications are lawful.

"It cannot be excluded that the bytecode, or the code generated from it, is protected as a computer program, and that the ad blocker, through modification or modifying reproduction, infringed the exclusive right thereto," reads BGH's statement (automated translation).

While ad blockers haven't been outlawed, Springer's case has been revived now, and there's a real possibility that things may take a different turn this time.

Mozilla noted that the new proceedings could take up to a couple of years to reach a final conclusion. As long as the core issue remains unsettled, there is a risk that extension developers could be held liable for financial losses.

Mozilla explains that, in the meantime, the situation could cause a chilling effect on browser users' freedom, with browser developers locking down their apps further, and extension developers limiting the functionality of their tools to avoid legal troubles.


Original Submission

posted by jelizondo on Thursday August 21, @06:38AM   Printer-friendly

If AI takes most of our jobs, money as we know it will be over. What then?:

[Disclosure statement: Ben Spies-Butcher is co-director of the Australian Basic Income Lab, a research collaboration between Macquarie University, University of Sydney and Australian National University.]

It's the defining technology of an era. But just how artificial intelligence (AI) will end up shaping our future remains a controversial question.

For techno-optimists, who see the technology improving our lives, it heralds a future of material abundance.

That outcome is far from guaranteed. But even if AI's technical promise is realised – and with it, once intractable problems are solved – how will that abundance be used?

We can already see this tension on a smaller scale in Australia's food economy. According to the Australian government, we collectively waste around 7.6 million tonnes of food a year. That's about 312 kilograms per person.

At the same time, as many as one in eight Australians are food-insecure, mostly because they do not have enough money to pay for the food they need.

What does that say about our ability to fairly distribute the promised abundance from the AI revolution?

As economist Lionel Robbins articulated when he was establishing the foundations of modern market economics, economics is the study of a relationship between ends (what we want) and scarce means (what we have) which have alternative uses.

Markets are understood to work by rationing scarce resources towards endless wants. Scarcity affects prices – what people are willing to pay for goods and services. And the need to pay for life's necessities requires (most of) us to work to earn money and produce more goods and services.

The promise of AI bringing abundance and solving complex medical, engineering and social problems sits uncomfortably against this market logic.

It is also directly connected to concerns that technology will make millions of workers redundant. And without paid work, how do people earn money or markets function?

It is not only technology, though, that causes unemployment. A distinctive feature of market economies is their ability to produce mass want, through unemployment or low wages, amid apparent plenty.

As economist John Maynard Keynes revealed, recessions and depressions can be the result of the market system itself, leaving many in poverty even as raw materials, factories and workers lay idle.

In Australia, our most recent experience of economic downturn wasn't caused by a market failure. It stemmed from the public health crisis of the pandemic. Yet it still revealed a potential solution to the economic challenge of technology-fuelled abundance.

Changes to government benefits – to increase payments, remove activity tests and ease means-testing – radically reduced poverty and food insecurity [PDF], even as the productive capacity of the economy declined.

Similar policies were enacted globally [PDF], with cash payments introduced in more than 200 countries. This experience of the pandemic reinforced growing calls to combine technological advances with a "universal basic income".

This is a research focus of the Australian Basic Income Lab, a collaboration between Macquarie University, the University of Sydney and the Australian National University.

If everyone had a guaranteed income high enough to cover necessities, then market economies might be able to manage the transition, and the promises of technology might be broadly shared.

When we talk about universal basic income, we have to be clear about what we mean. Some versions of the idea would still leave huge wealth inequalities.

My Australian Basic Income Lab colleague, Elise Klein, along with Stanford Professor James Ferguson, has called instead for a universal basic income designed not as welfare, but as a "rightful share".

They argue the wealth created through technological advances and social cooperation is the collective work of humanity and should be enjoyed equally by all, as a basic human right, just as we think of a country's natural resources as the collective property of its people.

These debates over universal basic income are much older than the current questions raised by AI. A similar upsurge of interest in the concept occurred in early 20th-century Britain, when industrialisation and automation boosted growth without abolishing poverty, instead threatening jobs.

Even earlier, Luddites sought to smash new machines used to drive down wages. Market competition might produce incentives to innovate, but it also spreads the risks and rewards of technological change very unevenly.

Rather than resisting AI, another solution is to change the social and economic system that distributes its gains. UK author Aaron Bastani offers a radical vision of "fully automated luxury communism".

He welcomes technological advances, believing this should allow more leisure alongside rising living standards. It is a radical version of the more modest ambitions outlined by the Labor government's new favourite book – Abundance.

Bastani's preferred solution is not a universal basic income. Rather, he favours universal basic services.

Instead of giving people money to buy what they need, why not provide necessities directly – as free health, care, transport, education, energy and so on?

Of course, this would mean changing how AI and other technologies are applied – effectively socialising their use to ensure they meet collective needs.

Proposals for universal basic income or services highlight that, even on optimistic readings, by itself AI is unlikely to bring about utopia.

Instead, as Peter Frase outlines, the combination of technological advance and ecological collapse can create very different futures, not only in how much we collectively can produce, but in how we politically determine who gets what and on what terms.

The enormous power of tech companies run by billionaires may suggest something closer to what former Greek finance minister Yanis Varoufakis calls "technofeudalism", where control of technology and online platforms replaces markets and democracy with a new authoritarianism.

Waiting for a technological "nirvana" misses the real possibilities of today. We already have enough food for everyone. We already know how to end poverty. We don't need AI to tell us.

Journal Reference:
Marguerit, David. Augmenting or Automating Labor? The Effect of AI Development on New Work, Employment, and Wages, (DOI: 10.2139/ssrn.5169611)
The future of work for young people – early occupational pathways and the risk of automation in Australia, (DOI: 10.1080/13676261.2022.2112161)


Original Submission

posted by hubie on Thursday August 21, @01:52AM   Printer-friendly

Designer, anthropologist, and developer Maggie Appleton has written a treatise on how chatbot sycophancy and passivity undermine the Enlightenment's original values of active intellectual engagement, skeptical inquiry, and challenging received wisdom.

As an expert on the Enlightenment, he's clearly been roped into developing an opinion on whether we're in an AI-fuelled "second Enlightenment."

Remember the first Enlightenment? That ~150 year period between 1650-1800 that we retroactively constructed and labelled as a unified historical event? The age of reason. Post-scientific revolution. The main characters are a bunch of moody philosophers like Locke, Descartes, Hume, Kant, Montesquieu, Rousseau, Diderot, and Voltaire. The vibe is reading pamphlets by candlelight, penning treatises, sporting powdered wigs and silk waistcoats, circulating ideas in Parisian salons and London coffee houses, sipping laudanum, and retreating to the seaside when you contracted tuberculosis. Everyone is big on ditching tradition, questioning political and religious authority, embracing scepticism, and educating the masses.

Anyway, Professor Bell's thesis is that our current AI chatbots contradict and undermine the original Enlightenment values. Values that are implicitly sacred in our modern culture: active intellectual engagement, sceptical inquiry, and challenging received wisdom.

Previously:
(2025) Book Documents the Rise and Fall of the Concept of the Private Life


Original Submission

posted by hubie on Wednesday August 20, @09:08PM   Printer-friendly
from the where-are-my-backups? dept.

https://www.bleepingcomputer.com/news/microsoft/microsoft-august-security-updates-break-windows-recovery-reset/

Microsoft has confirmed that the August 2025 Windows security updates are breaking reset and recovery operations on systems running Windows 10 and older versions of Windows 11.

"After installing the August 2025 Windows security update [..] on any of the client versions mentioned below in the 'Affected platforms' section, attempts to reset or recover the device might fail," the company said in a new Windows release health update.

Installing this month's security updates will cause issues for users who want to reinstall their system while keeping their files using the Reset my PC feature, or to reinstall it while keeping their files, apps, and settings using the Fix problems using Windows Update tool.

The known issue may also impact users who want to remotely reset devices using the RemoteWipe configuration service provider (RemoteWipe CSP).

According to Redmond, the bug only impacts client platforms after installing the following updates:

  • Windows 11 23H2 and Windows 11 22H2 (KB5063875),
  • Windows 10 22H2, Windows 10 Enterprise LTSC 2021, Windows 10 IoT Enterprise LTSC 2021 (KB5063709),
  • Windows 10 Enterprise LTSC 2019, Windows 10 IoT Enterprise LTSC 2019 (KB5063877).

The company is currently working on a fix for this known issue, which will be delivered via out-of-band updates for all impacted platforms over the coming days.


Original Submission

posted by hubie on Wednesday August 20, @04:22PM   Printer-friendly
from the slooooooooooowtv dept.