posted by jelizondo on Friday September 26, @08:20PM   Printer-friendly

American Kids Can't Read or Do Math Anymore:

Teenagers are dumb, regardless of generation. I was dumb as a teenager. You were dumb as a teenager. Now that we've established that baseline of stupidity, the latest data from the National Assessment of Educational Progress (NAEP) [DOCX --JE], nicknamed "the Nation's Report Card," shows that today's American high school seniors are even less educated than we suspected. Their math and reading skills in 2024 have plunged to historic lows.

The NAEP, a federally run assessment that regularly quizzes America's youth on their ability to do basic math and understand words, just released its first post-pandemic results. They're bleak. If the children are our future, we're f**ked.

12th graders scored the lowest in math since 2005 and the lowest in reading since the assessments began in 1992. That is a dramatic way of saying reading performance is the weakest ever recorded.

Nearly half of all seniors performed below the "Basic" level in math, and almost a third couldn't hit that low bar in reading. Meanwhile, eighth-graders are flailing in science, with their worst scores since 2009.

So, what the f**k happened? You could point to the pandemic. Remote learning, while keeping the kids and their families alive and healthy, didn't spark academic excellence.

Studies have shown it also torpedoed kids' mental health and educational development. The growing use of generative AI probably isn't helping either, since it's more than likely doing their homework and writing their papers for them rather than helping them learn to do the work themselves.

That's a recipe for churning out a lot of poorly educated teenagers who are about to enter the real world without any of the tools necessary to survive in it.

But there's a twist: blaming COVID and pandemic lockdowns alone doesn't explain why those numbers were already trending downward before the pandemic. According to Matthew Soldner, acting commissioner at the National Center for Education Statistics, the decline has been brewing for years, especially among the lowest-performing students. COVID just kicked the crumbling foundation out from under the whole system.

As the kids fail, the adults in charge of ensuring that we prep the future generations for success are failing even more miserably. The Department of Education is hemorrhaging staff and funding due to Trump-led efforts to dismantle the department entirely.

The Department of Education, currently headed by Linda McMahon, the wife of WWE's Vince McMahon [...], gutted funding for the Institute of Education Sciences, the subunit of the department that monitors the state of US education and funds research to improve academic outcomes.

In other words, our children are going to get dumber. And for at least the next 3 ½-ish years, we aren't going to be gathering enough data to figure out how to fix any of it. It's all going to get much worse before it gets better.


Original Submission

posted by jelizondo on Friday September 26, @03:35PM   Printer-friendly

https://phys.org/news/2025-09-world-coastal-settlements-retreating-seas.html

Human settlements around the world are moving inland and relocating away from coastlines as sea levels rise and coastal hazards grow more severe, but a new international study shows the poorest regions are being forced to stay put or even move closer to danger.

The study, published in Nature Climate Change, analyzed decades (1992–2019) of satellite nighttime light data across 1,071 coastal regions in 155 countries.

It found that human settlements in 56% of the regions analyzed relocated further inland, 28% stayed put, and 16% moved closer to the coast.

Low-income groups were more likely to move closer to the coast, driven largely by the growth of informal settlements and the search for better livelihoods. Human settlements shifted most towards coastlines in South America (up to 17.7%) and Asia (17.4%), followed by Europe (14.8%), Oceania (13.8%), Africa (12.4%) and North America (8.8%).

Lead author Xiaoming Wang, an adjunct professor based at the Monash Department of Civil and Environmental Engineering, said relocation was largely driven by vulnerability and the capacity to respond.

"For the first time, we've mapped how human settlements are relocating from coasts around the world. It's clear that moving inland is happening, but only where people have the means to do so.

"In poorer regions, people may have to be forced to stay exposed to climate risks, either for living or no capacity to move. These communities can face increasingly severe risk in a changing climate," said Wang.

Oceania had some of the closest settlements to the coast globally, reflecting the region's reliance on coastal economies.

"In Oceania, we see a common reality where wealthier and poorer communities are both likely to relocate towards coastlines in addition to moving inland," adjunct professor Wang said.

"On one hand, the movement closer to coastlines can expose vulnerable populations to the impacts of storms, erosion, and sea-level rise. On the other hand, it can expose those wealthy communities to the growing coastal hazards."

The study also highlights concerns that overconfidence in protective infrastructure encouraged risky development close to the coast.

"It is interesting to note that high-income groups also had a relatively higher likelihood to remain on coastlines, such as in Europe and North America. This can be due to their capacity and wealth accumulated in coastal areas," adjunct professor Wang said.

The study warns that relocation inland may become unavoidable as sea levels rise and climate change intensifies.

"Relocating away from the coast must be part of a long-term climate strategy, and the rationale for policy and planning to relocate people requires meticulous consideration of both economic and social implications across individuals, communities and regions," adjunct professor Wang said.

"Alongside climate change mitigation, it needs to be combined with efforts to reduce coastal hazard exposure and vulnerability, improve informal settlements, balance coastal risks with livelihoods and maintain sustainable lifestyles in the long-term. Without this, coastal adaptation gaps will continue to be widened and leave the world's poorest behind."

The study was an international collaboration on climate adaptation research between adjunct professor Wang, the Institute for Disaster Management and Reconstruction at Sichuan University, and researchers from Denmark and Indonesia.

The collaboration aims to understand how communities cope with recurring coastal hazards and highlights gaps in adaptation that need urgent attention.

More information: Lilai Xu et al, Global coastal human settlement retreat driven by vulnerability to coastal climate hazards, Nature Climate Change (2025). DOI: 10.1038/s41558-025-02435-6


Original Submission

posted by janrinok on Friday September 26, @10:53AM   Printer-friendly

Huntington's disease successfully treated for first time:

One of the cruellest and most devastating diseases – Huntington's – has been successfully treated for the first time, say doctors.

The disease runs through families, relentlessly kills brain cells and resembles a combination of dementia, Parkinson's and motor neurone disease.

Members of the research team became tearful as they described data showing the disease's progression was slowed by 75% in treated patients.

It means the decline you would normally expect in one year would take four years after treatment, giving patients decades of "good quality life", Prof Sarah Tabrizi told BBC News.
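
To see where the four-year figure comes from: a 75% slowing leaves the disease progressing at a quarter of its usual rate, so one untreated year's worth of decline takes four years to accumulate. A back-of-the-envelope check (ours, not from the article):

```python
slowing = 0.75                 # reported 75% slowing of progression
residual_rate = 1.0 - slowing  # disease now advances at 25% of its usual speed
print(1.0 / residual_rate)     # years per untreated-year of decline -> 4.0
```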

The new treatment is a type of gene therapy given during 12 to 18 hours of delicate brain surgery.

The first symptoms of Huntington's disease tend to appear in a person's 30s or 40s, and the disease is normally fatal within two decades. The results open the possibility that treating carriers earlier could prevent symptoms from ever emerging.

Prof Tabrizi, director of the University College London Huntington's Disease Centre, described the results as "spectacular".

"We never in our wildest dreams would have expected a 75% slowing of clinical progression," she said.

None of the patients who have been treated are being identified, but one was medically retired and has returned to work. Others in the trial are still walking despite being expected to need a wheelchair.

Treatment is likely to be very expensive. However, this is a moment of real hope in a disease that hits people in their prime and devastates families.

Huntington's runs through Jack May-Davis' family. He has the faulty gene that causes the disease, as did his dad, Fred, and his grandmother, Joyce.

[...] This mutation turns a normal protein needed in the brain – called the huntingtin protein – into a killer of neurons.

The goal of the treatment is to reduce levels of this toxic protein permanently, in a single dose.

The therapy uses cutting edge genetic medicine combining gene therapy and gene silencing technologies.

It starts with a safe virus that has been altered to contain a specially designed sequence of DNA.

This is infused deep into the brain using real-time MRI scanning to guide a microcatheter to two brain regions – the caudate nucleus and the putamen. This takes 12 to 18 hours of neurosurgery.

The virus then acts like a microscopic postman – delivering the new piece of DNA inside brain cells, where it becomes active.

This turns the neurons into factories producing the therapy that averts their own death.

The cells produce a small fragment of genetic material (called microRNA) that is designed to intercept and disable the instructions (called messenger RNA) being sent from the cells' DNA for building mutant huntingtin.

This results in lower levels of mutant huntingtin in the brain.

Results from the trial - which involved 29 patients - have been released in a statement by the company uniQure, but have not yet been published in full for review by other specialists.

The data showed that three years after surgery there was an average 75% slowing of the disease based on a measure which combines cognition, motor function and the ability to manage in daily life.

The data also show the treatment is saving brain cells. Levels of neurofilaments in spinal fluid – a clear sign of brain cells dying – should have increased by a third if the disease had continued to progress, but were actually lower than at the start of the trial.

"This is the result we've been waiting for," said Prof Ed Wild, consultant neurologist at the National Hospital for Neurology and Neurosurgery at UCLH. "There was every chance that we would never see a result like this, so to be living in a world where we know this is not only possible, but the actual magnitude of the effect is breathtaking, it's very difficult to fully encapsulate the emotion."

[...] The treatment was considered safe, although some patients did develop inflammation from the virus that caused headaches and confusion that either resolved or needed steroid treatment.

Prof Wild anticipates the therapy "should last for life" because brain cells are not replaced by the body in the same manner as blood, bone and skin are constantly renewed.

Approximately 75,000 people have Huntington's disease in the UK, US and Europe, with hundreds of thousands more carrying the mutation, meaning they will develop the disease.

UniQure says it will apply for a licence in the US in the first quarter of 2026 with the aim of launching the drug later that year. Conversations with authorities in the UK and Europe will start next year, but the initial focus is on the US. Dr Walid Abi-Saab, the chief medical officer at uniQure, said he was "incredibly excited" about what the results mean for families, and added that the treatment had "the potential to fundamentally transform" Huntington's disease.

However, the drug will not be available for everyone due to the highly complex surgery and the anticipated cost.

"It will be expensive for sure," says Prof Wild. There isn't an official price for the drug. Gene therapies are often pricey, but their long-term impact means they can still be affordable. In the UK, the NHS does pay for a £2.6m-per-patient gene therapy for haemophilia B.


Original Submission

posted by janrinok on Friday September 26, @06:11AM   Printer-friendly

https://phys.org/news/2025-09-facebook-reveal-devastating-real-world.html

Twenty-one years after Facebook's launch, Australia's top 25 news outlets now have a combined 27.6 million followers on the platform. They rely on Facebook's reach more than ever, posting far more stories there than in the past.

With access to Meta's Content Library (Meta is the owner of Facebook), our big data study analyzed more than three million posts from 25 Australian news publishers. We wanted to understand how content is distributed, how audiences engage with news topics, and the nature of misinformation spread.

The study enabled us to track de-identified Facebook comments and take a closer look at examples of how misinformation spreads. These included cases about election integrity, the environment (floods) and health misinformation such as hydroxychloroquine promotion during the COVID pandemic.

The data reveal misinformation's real-world impact: it isn't just a digital issue; it's linked to poor health outcomes, falling public trust, and significant societal harm.

Take the example of the false claims that antimalarial drug hydroxychloroquine was a viable COVID treatment.

In Australia, as in the United States, political figures and media played leading roles in the spread of this idea. Mining billionaire and then leader of the United Australia Party, Clive Palmer, actively promoted hydroxychloroquine as a COVID treatment. In March 2020 he announced he would fund trials, manufacture, and stockpile the drug.

He placed a two-page advertisement in The Australian. Federal Coalition MPs Craig Kelly and George Christensen also championed hydroxychloroquine, coauthoring an open letter advocating its use.

We examined 7,000 public comments responding to 100 hydroxychloroquine posts from the selected media outlets during the pandemic. Contrary to concerns that public debate is siloed in echo chambers, we found robust online exchanges about the drug's effectiveness in combating COVID.

Yet, despite fact-checking efforts, we find that facts alone fail to stop the spread of misinformation and conspiracy theories about hydroxychloroquine. This misinformation targeted not only the drug, but also the government, media and "big pharma."

To put the real-world harm in perspective, public health studies estimate hydroxychloroquine use was linked to at least 17,000 deaths worldwide, though the true toll is likely higher.

The topic modeling also highlighted the personal toll of this misinformation. One example is the secondary harm from the drug's unavailability (due to stockpiling) for legitimate treatment of non-COVID conditions such as rheumatoid arthritis and lupus, which led to distress, frustration and worsening symptoms.
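
The authors don't describe their pipeline, but topic modeling over comment text is commonly done with latent Dirichlet allocation. A minimal sketch of the general technique using scikit-learn, with toy stand-in comments:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy stand-ins for the de-identified Facebook comments in the study.
comments = [
    "hydroxychloroquine cured my neighbour, the media hides it",
    "trials show hydroxychloroquine does not treat covid",
    "big pharma and the government are suppressing the cure",
    "my lupus prescription is out of stock because of stockpiling",
    "rheumatoid arthritis patients cannot get their medication",
]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(comments)

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
terms = vec.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[-5:][::-1]]
    print(f"topic {k}: {', '.join(top)}")  # highest-weight words per topic
```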

In other instances, we saw how misinformation can hurt public trust in institutions and non-government organizations. Following the 2022 floods in Queensland and New South Wales, we again saw that despite fact-checking efforts, misinformation about the Red Cross charity flourished online and was amplified by political commentary.

Without repeating the falsehoods here, the misinformation led to changes in some public donation behavior, such as buying gift cards for flood victims rather than trusting the Red Cross to distribute much-needed funds. This highlights the significant harm misinformation can inflict on public trust and disaster response efforts.

The data also reveal the cyclical nature of misinformation. We call this misinformation's "stickiness," because it reappears at regular intervals such as elections. In one example, electoral administrators were targeted with false accusations that polling officials rigged the election outcome by rubbing out votes marked with pencils.

While this is an old conspiracy theory about voter fraud that predates social media and it is also not unique to Australia, the data show misinformation's persistence online during state and federal elections, including the 2023 Voice referendum.

Here, multiple debunking efforts from electoral commissioners, fact-checkers, media and social media appear to have drawn limited public engagement compared with a noisy minority. When we examined 60,000 sentences on electoral topics from the past decade, we detected just 418 sentences from informed or official sources.

Again, high-profile figures such as Palmer have played a central role in circulating this misinformation; a chart in the original article illustrates its stickiness.

Our study has lessons for public figures and institutions. They, especially politicians, must lead in curbing misinformation, as their misleading statements are quickly amplified by the public.

Social media and mainstream media also play an important role in limiting the circulation of misinformation. As Australians increasingly rely on social media for news, mainstream media can provide credible information and counter misinformation through their online story posts. Digital platforms can also curb algorithmic spread and remove dangerous content that leads to real-world harms.

The study offers evidence of a change over time in audiences' news consumption patterns. Whether this is due to news avoidance or changes in algorithmic promotion is unclear. But it is clear that from 2016 to 2024, online audiences increasingly engaged with arts, lifestyle and celebrity news over politics, leading media outlets to prioritize posting stories that entertain rather than inform. This shift may pose a challenge to mitigating misinformation with hard news facts.

Finally, the study shows that fact-checking, while valuable, is not a silver bullet. Combating misinformation requires a multi-pronged approach, including counter-messaging by trusted civic leaders, media and digital literacy campaigns, and public restraint in sharing unverified content.


Original Submission

posted by janrinok on Friday September 26, @01:27AM   Printer-friendly

China starts producing world-first non-binary AI chips for aviation, manufacturing:

China has started mass production of the world's first non-binary chips, adding this new technology to important industries like aviation and manufacturing.

Spearheaded by Professor Li Hongge and his team at Beihang University in Beijing, this project resolves key problems in older systems by blending binary logic with random or probability-based logic. In doing so, it has enabled unprecedented fault tolerance and power efficiency, while smoothly sidestepping US chip restrictions.

Today's chip technologies face two insurmountable challenges – the power wall and the architectural wall, according to Professor Li. They use too much power, and new chips struggle to work with older systems.

Having been on the search for a solution since 2022, his team came up with a new system called Hybrid Stochastic Number (HSN), which mixes regular binary numbers with probability-based numbers to improve performance.

Binary logic, used by all computers worldwide, represents variables as 0s and 1s to carry out arithmetic operations. However, large-scale binary computations demand substantial hardware resources.

In contrast, probabilistic (stochastic) computing represents a value by how often a signal is high over a set time window, rather than by a fixed bit pattern. This method uses less hardware and has already been used in areas such as image processing, neural networks, and deep learning. However, there's one drawback: because values are encoded as streams over time, it takes longer to process information.
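
The article doesn't detail Li's Hybrid Stochastic Number format, but classic stochastic computing shows the core trade-off: a value in [0, 1] becomes a random bitstream whose fraction of 1s equals the value, so a single AND gate multiplies, while accuracy demands long (slow) streams. A minimal sketch:

```python
import random

def to_stream(p, n, rng):
    """Encode p in [0, 1] as a bitstream with P(bit == 1) == p."""
    return [1 if rng.random() < p else 0 for _ in range(n)]

def from_stream(bits):
    """Decode a bitstream back to a value: the fraction of 1s."""
    return sum(bits) / len(bits)

rng = random.Random(42)
n = 10_000                               # longer stream: more precision, more time
a = to_stream(0.8, n, rng)
b = to_stream(0.5, n, rng)
product = [x & y for x, y in zip(a, b)]  # one AND gate per bit multiplies
print(from_stream(product))              # ~0.4, versus exact 0.8 * 0.5
```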

Based on probabilistic computation, Professor Li's team developed a new smart chip for touch and display in 2023 using leading Chinese chipmaker Semiconductor Manufacturing International Corporation's mature 110-nanometer process technology.

The project results were published in the IEEE Journal of Solid-State Circuits two years ago. Following that, the team came up with another chip for machine learning, fabricated using a standard 28 nm CMOS process.

Apart from HSN, it also features in-memory computing algorithms that reduce the need to move data constantly between the memory and processors. This helps save energy and makes the chip more efficient.

The chip also uses a system-on-chip (SoC) design that combines different computing units to handle multiple tasks simultaneously, unlike traditional chips that process one task at a time.

The chip is now being used in smart control systems, such as touch screens, where it filters background noise to detect weaker signals and improve how users interact with devices.

Professor Li also told Guangming Daily that his team is developing a special set of instructions and chip design tailored for hybrid probabilistic computing. They plan to use the chip in areas like speech and image processing, speeding up large AI models, and handling other complex tasks.

"The current chip already achieves on-chip computing latency at the microsecond level, striking a balance between high-performance hardware acceleration and flexible software programmability," Li said.

While China's move towards non-binary hybrid AI chips is certainly exciting and innovative, it's important not to overhype the breakthrough yet, as several hurdles still need to be crossed, such as compatibility limitations and long-term uncertainties related to the chip's usage.


Original Submission

posted by janrinok on Thursday September 25, @08:42PM   Printer-friendly

The Future Of Nuclear Reactors Is Making Its Way To The US:

America has a checkered history with nuclear power. Stemming from the Three Mile Island accident in 1979, safety fears surrounding nuclear energy have curtailed the country's development of commercial nuclear reactors. Ultimately, from 1977 until 2013, there were no new construction starts for nuclear power stations. Yet, the country remains the world's largest producer of nuclear power, generating close to 30% of the world's total nuclear output. It's also the third biggest method of power generation in the U.S., producing 18% of America's electricity; only natural gas and coal add more power to the grid. However, the slump in investment in nuclear power may be coming to an end.

Building on President Trump's four executive orders to revitalize the sector, Chris Wright, the U.S. Secretary of Energy, recently announced a "pathway" to streamline the development and deployment of advanced nuclear reactors. Speaking at the International Atomic Energy Agency's (IAEA) General Conference, Secretary Wright cited the growing demand for affordable power and the rise of high-power demand industries like AI as driving forces behind the strategy change. He said, "We established an expedited pathway to approve advanced reactors, set standards to evaluate new construction licenses within 18 months." The goal is to deploy Small Modular Reactors (SMRs) as part of President Trump's plan to add 300 gigawatts of nuclear capacity to the grid by 2050.

The key to that future lies in the aforementioned SMRs, a new generation of reactors that are designed to be smaller, safer, and faster to build.

For those of us who associate nuclear power stations with behemoths like Chernobyl or Japan's Kashiwazaki-Kariwa plant, the new generation being planned by the U.S. might come as a surprise. Rather than being large, static plants, the U.S. Government sees the future of nuclear energy as being smaller in scale. The President's executive order details the need for the U.S. to develop advanced Generation III+ reactors. These include small modular reactors (SMRs) and microreactors. The executive order also notes that these should be developed in both stationary and mobile formats to build greater resilience into critical electrical infrastructure.

One of the cornerstones of the executive order is the use of SMRs. As the name suggests, these are small reactors with a power capacity of up to 300 megawatts per module. Because of their small size and scalability, these can be installed in places where traditional reactors are unsuitable. The modular aspect of the design also means they can be pre-built at a factory and quickly installed on site. SMRs can also be quickly — and relatively easily — installed in rural areas with limited electrical infrastructure.
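
As a rough scale check (ours, not from the article): meeting the 300-gigawatt goal entirely with the largest SMR modules would take about a thousand of them.

```python
target_gw = 300   # planned new nuclear capacity by 2050
module_mw = 300   # upper bound for a single SMR module
print(target_gw * 1000 / module_mw)  # -> 1000.0 modules
```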

Microreactors are an SMR subclass: smaller reactors that typically generate a maximum of 10 megawatts. They share many of the advantages of larger SMRs, and they are also a cost-effective solution for isolated areas, where they can provide backup power or replace diesel generators. Incidentally, the U.S. Army is developing a microreactor.

For the U.S., the renewed focus on nuclear power isn't just about clean and reliable energy sources — it's also about jobs and security. In employment terms, the nuclear industry already employs close to 500,000 workers. Additionally, these are well-paid jobs, with salaries around 50% higher than comparable jobs in other energy-generation sectors. However, the development of SMRs is still a work in progress, and they are seen as critical to the future of the industry. President Trump further signaled America's commitment to their development when he announced a $900 million package, split across two tiers, to support the development of SMRs. The majority of the funding is intended to support new commercial projects. The remainder is to be used to ease deployments by smoothing out obstacles such as design and supply-chain issues.

Security is also a driving force behind the re-emergence of the U.S. nuclear power sector. Historically, both the U.S. and Europe were central in developing international safeguards designed to prevent nuclear proliferation, an influence that has waned in recent years. With the advanced technology being developed and the moves by the U.S. Government to support and encourage the sector, the aim is to restore U.S. influence across global energy markets.

Despite nuclear fusion records continuing to be broken, it's still considered a technology for the future. In the meantime, SMRs may just be the bridge that keeps our lights on as we move away from fossil fuels.


Original Submission

posted by janrinok on Thursday September 25, @03:54PM   Printer-friendly

https://phys.org/news/2025-09-magic-mushrooms-unique-biochemical-paths.html

A German-Austrian team led by Friedrich Schiller University Jena and Leibniz-HKI has been able to biochemically demonstrate for the first time that different types of mushrooms produce the same mind-altering active substance, psilocybin, in different ways.

Both Psilocybe mushrooms and fiber cap mushrooms of the genus Inocybe produce this substance, but use completely different enzymes and reaction sequences for this process. The results are published in Angewandte Chemie International Edition.

"This concerns the biosynthesis of a molecule that has a very long history with humans," explains Prof. Dirk Hoffmeister, head of the research group Pharmaceutical Microbiology at Friedrich Schiller University Jena and the Leibniz Institute for Natural Product Research and Infection Biology (Leibniz-HKI).

"We are referring to psilocybin, a substance found in so-called 'magic mushrooms,' which our body converts into psilocin—a compound that can profoundly alter consciousness. However, psilocybin not only triggers psychedelic experiences, but is also considered a promising active compound in the treatment of therapy-resistant depression," says Hoffmeister.

The study, which was conducted within the Cluster of Excellence "Balance of the Microverse," shows for the first time that fungi have developed the ability to produce psilocybin at least twice independently of each other. While Psilocybe species use a known enzyme toolkit for this purpose, fiber cap mushrooms employ a completely different biochemical arsenal—and yet arrive at the same molecule.

This finding is considered an example of convergent evolution: Different species have independently developed a similar trait, but the magic mushrooms have gone their own way in doing so.

Tim Schäfer, lead author of the study and doctoral researcher in Hoffmeister's team, explains, "It was like looking at two different workshops, but both ultimately delivering the same product. In the fiber caps, we found a unique set of enzymes that have nothing to do with those found in Psilocybe mushrooms. Nevertheless, they all catalyze the steps necessary to form psilocybin."

The researchers analyzed the enzymes in the laboratory. Protein models created by Innsbruck chemist Bernhard Rupp confirmed that the sequence of reactions differs significantly from that known in Psilocybe.

"Here, nature has actually invented the same active compound twice," says Schäfer.

However, why two such different groups of fungi produce the same active compound remains unclear. "The real answer is that we don't know," emphasizes Hoffmeister. "Nature does nothing without reason. So there must be an advantage to both fiber cap mushrooms in the forest and Psilocybe species on manure or wood mulch producing this molecule—we just don't know what it is yet."

"One possible reason could be that psilocybin is intended to deter predators. Even the smallest injuries cause Psilocybe mushrooms to turn blue through a chemical chain reaction, revealing the breakdown products of psilocybin. Perhaps the molecule is a type of chemical defense mechanism," says Hoffmeister.

Although it is still unclear why different fungi ultimately produce the same molecule, the discovery nevertheless has practical implications.

"Now that we know about additional enzymes, we have more tools in our toolbox for the biotechnological production of psilocybin," explains Hoffmeister.

Schäfer is also looking ahead, stating, "We hope that our results will contribute to the future production of psilocybin for pharmaceuticals in bioreactors without the need for complex chemical syntheses."

At the Leibniz-HKI in Jena, Hoffmeister's team is working closely with the Bio Pilot Plant, which is developing processes for producing natural products such as psilocybin on an industry-like scale.

At the same time, the study provides exciting insights into the diversity of chemical strategies used by fungi and their interactions with their environment.

More information: Dissimilar Reactions and Enzymes for Psilocybin Biosynthesis in Inocybe and Psilocybe Mushrooms, Angewandte Chemie International Edition (2025). DOI: 10.1002/anie.202512017


Original Submission

posted by hubie on Thursday September 25, @11:43AM   Printer-friendly

https://phys.org/news/2025-09-ganges-river-drying-unprecedented.html

The Ganges River is in crisis. This lifeline for around 600 million people in India and neighboring countries is experiencing its worst drying period in 1,300 years. Using a combination of historical data, paleoclimate records and hydrological models, researchers from IIT Gandhinagar and the University of Arizona discovered that human activity is the main cause. They also found that the current drying is more severe than any recorded drought in the river's history.

In their study, published in the Proceedings of the National Academy of Sciences, researchers first reconstructed the river's flow for the last 1,300 years (700 to 2012 C.E.) by analyzing tree rings from the Monsoon Asia Drought Atlas (MADA) dataset. Then they used powerful computer programs to combine this tree-ring data with modern records to create a timeline of the river's flow. To ensure its accuracy, they double-checked it against documented historical droughts and famines.
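
The study's actual reconstruction is more sophisticated, but the core calibrate-then-hindcast idea behind such proxy reconstructions can be sketched simply: fit the proxy against instrumental flow over the overlap period, then apply the fit to the full proxy record. A toy version with synthetic stand-in data:

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(700, 2013)
ring_index = rng.normal(0.0, 1.0, years.size)  # stand-in tree-ring drought index

# Synthetic "instrumental" streamflow for the modern overlap era.
overlap = years >= 1950
flow_obs = 400.0 + 120.0 * ring_index[overlap] + rng.normal(0.0, 20.0, overlap.sum())

# Calibrate proxy -> flow on the overlap, then hindcast the full 1,300 years.
slope, intercept = np.polyfit(ring_index[overlap], flow_obs, 1)
flow_reconstructed = intercept + slope * ring_index
print(round(slope), round(intercept))  # near 120 and 400 by construction
```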

The scientists found that the recent drying of the Ganges River from 1991 to 2020 is 76% worse than the previous worst recorded drought, which occurred during the 16th century. Not only is the river drier overall, but droughts are now more frequent and last longer. The main reason, according to the researchers, is human activity. While some natural climate patterns are at play, the primary driver is the weakening of the summer monsoon.

This weakening is linked to human-driven factors such as the warming of the Indian Ocean and air pollution from anthropogenic aerosols. These are liquid droplets and fine solid particles that come from factories, vehicles and power plants, among other sources, and they can suppress rainfall. The scientists also found that most climate models failed to spot the severe drying trend.

"The recent drying is well beyond the realm of last millennium climate variability, and most global climate models fail to capture it," the authors wrote in their paper. "Our findings underscore the urgent need to examine the interactions among the factors that control summer monsoon precipitation, including large-scale climate variability and anthropogenic forcings."

The researchers suggest two main courses of action. Given the mismatch between climate models and what they actually found, they are calling for better modeling to account for the regional impacts of human activity.

And because the Ganges is a vital source of water for drinking, agricultural production, industrial use and wildlife, the team also recommends implementing new adaptive water management strategies to mitigate potential water scarcity.

More information: Dipesh Singh Chuphal et al, Recent drying of the Ganga River is unprecedented in the last 1,300 years, Proceedings of the National Academy of Sciences (2025). DOI: 10.1073/pnas.2424613122


Original Submission

posted by hubie on Thursday September 25, @07:01AM   Printer-friendly

Invest in a major customer, then get money from a major customer?

Nvidia plans to invest up to $100 billion in OpenAI, marking an unprecedented financial and strategic alignment between the leading AI hardware provider and one of the best-known developers of artificial intelligence models. However, the deal raises major antitrust concerns among legal experts and policymakers over potential market imbalance, since the investment could affect the competitors of both companies, reports Reuters.

The planned investment raises questions about how the cash infusion could affect Nvidia's other customers and the overall AI hardware market. Nvidia already commands the lion's share of the market for AI training and inference hardware: virtually all AI companies use its Hopper and Blackwell GPUs and rely on access to them to scale their own models. Legal experts note that Nvidia's investment may create incentives to prioritize OpenAI over others, potentially offering better terms or faster access to a limited supply of leading-edge GPUs, such as the upcoming Rubin generation.

In response, a representative for Nvidia stated that the company's commitment to all of its clients remains unchanged. The spokesperson emphasized that having a financial interest in any one partner would not affect how the company serves others, assuring that every customer will continue to receive the same level of attention and service.

Additionally, if OpenAI prefers Nvidia hardware over hardware from Nvidia's rivals, such as AMD, or even over its own processor developed in collaboration with Broadcom, then Nvidia gains an unfair advantage. OpenAI is believed to have ordered $10 billion worth of custom-built AI processors from Broadcom, so it is unlikely to leave them undeployed; but with Nvidia providing OpenAI hardware worth tens of billions of dollars, the AI company will likely continue to do more of its work on Nvidia hardware rather than on competing processors.

OpenAI currently operates as a non-profit but is pursuing a transition to a for-profit public benefit corporation. This structural change is meant to facilitate investment while maintaining oversight by the original non-profit entity. The arrangement with Nvidia does not provide governance rights — only financial participation — and may depend on regulatory approvals in states like Delaware and California, where OpenAI is registered.

[...] U.S. regulators have previously flagged the risk of major technology firms leveraging their existing dominance to control emerging AI markets. Officials from the Department of Justice have emphasized the importance of averting exclusionary practices in the AI supply chain, including restricted access to processors and compute infrastructure.

The potential effects extend beyond hardware. Oracle recently disclosed that it had signed large-scale cloud contracts with OpenAI and other clients, boosting its valuation. With Nvidia's investment potentially strengthening OpenAI's financial position, Oracle's revenue projections may appear more credible, something that will address investor concerns about OpenAI's ability to fund such commitments, according to Reuters.


Original Submission

posted by hubie on Thursday September 25, @03:16AM   Printer-friendly

Consent fatigue and clickspamageddon to be addressed by European Commission amendments to its 2009 e-Privacy Directive:

The plague of cookie consent alerts, banners, and pop-ups that have added a sliver of sandpaper to web surfing since 2009 might be eradicated in December. The European Commission (EC) intends to revise a law called the e-Privacy Directive, reports Politico. Specifically, new guidelines from the European Data Protection Board (EDPB) aim to eliminate manipulative consent banners and reduce consent fatigue.

Cookies are a necessary part of the World Wide Web, which seasoned surfers will have first become aware of while troubleshooting – fixing issues by clearing cookies and so on. However, after the e-Privacy Directive came into force in the late noughties, cookies soon became a source of persistent irritation. The directive required website operators to get consent from visitors unless the cookies were strictly necessary.

Now, in 2025, if you reset your browser or buy a new computer or device, you'll face days and days of cookie clickspamageddon before returning to smooth surfing on your familiar sites. We know there are browser extensions designed to dismiss cookie banners, but they can have their own trade-offs with privacy, and/or compatibility wrinkles.

Politico shares a quote from Peter Craddock, a data lawyer with Keller and Heckman, which highlights the problem with the current state of cookie consent regulations. "Too much consent basically kills consent," remarked Craddock. "People are used to giving consent for everything, so they might stop reading things in as much detail, and if consent is the default for everything, it's no longer perceived in the same way by users."

[...] In practice, the changes we most look forward to include the hinted-at, unambiguous 'reject all' button, which must be as prominent as any 'accept all' option on all sites. Allowing browser-level consent preferences might be the biggest time saver of all, though. We'll see how browser makers tune and allow for granular control here.
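
For a flavor of what browser-level consent could look like server-side: Global Privacy Control already works this way for do-not-sell/share preferences, sending a Sec-GPC: 1 header. A minimal sketch, assuming a Flask app (an EU consent signal would presumably use a different header):

```python
from flask import Flask, request

app = Flask(__name__)

@app.route("/")
def index():
    # Browser has declared a privacy preference: no banner, no optional cookies.
    if request.headers.get("Sec-GPC") == "1":
        return "Welcome. No tracking, no consent banner."
    # Otherwise fall back to asking, with 'reject all' as prominent as 'accept all'.
    return "Welcome. [Accept all] [Reject all]"
```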


Original Submission

posted by hubie on Wednesday September 24, @10:36PM   Printer-friendly
from the ways-to-escape-webkit dept.

Supporting the future of the open web: Cloudflare is sponsoring Ladybird and Omarchy

At Cloudflare, we believe that helping build a better Internet means encouraging a healthy ecosystem of options for how people can connect safely and quickly to the resources they need. [....] sometimes that means we support and partner with fantastic open teams taking big bets on the next generation of tools.

To that end, today we are excited to announce our support of two independent, open source projects: Ladybird, an ambitious project to build a completely independent browser from the ground up, and Omarchy, an opinionated Arch Linux setup for developers.

[....]

Ladybird, a new and independent browser

[....] While the openness of how browsers work has led to an explosive growth of services on the Internet, browsers themselves have consolidated to a tiny handful of viable options. There's a high probability you're reading this on a Chromium-based browser, like Google's Chrome, along with about 65% of users on the Internet. However, that consolidation has also scared off new entrants in the space. If all browsers ship on the same operating systems, powered by the same underlying technology, we lose out on potential privacy, security and performance innovations that could benefit developers and everyday Internet users.

This is where Ladybird comes in: it's not Chromium based – everything is built from scratch. The Ladybird project has two main components: LibWeb, a brand-new rendering engine, and LibJS, a brand-new JavaScript engine with its own parser, interpreter, and bytecode execution engine.
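
To make "parser, interpreter, and bytecode execution engine" concrete, here is a toy version of the first two stages for arithmetic expressions only, borrowing Python's own parser for brevity. This sketches the general architecture, not LibJS's actual code:

```python
import ast
import operator

OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def evaluate(node):
    """Tree-walking interpreter over the parsed syntax tree."""
    if isinstance(node, ast.Expression):
        return evaluate(node.body)
    if isinstance(node, ast.BinOp):
        return OPS[type(node.op)](evaluate(node.left), evaluate(node.right))
    if isinstance(node, ast.Constant):
        return node.value
    raise ValueError(f"unsupported syntax: {node!r}")

tree = ast.parse("(1 + 2) * 3 - 4 / 2", mode="eval")  # parser: text -> tree
print(evaluate(tree))                                  # interpreter -> 7.0
```

A production engine like LibJS adds a third stage, compiling the tree to compact bytecode that a tight loop executes, which is faster than re-walking the tree for every evaluation.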

Building an engine that can correctly and securely render the modern web is a monumental task that requires deep technical expertise and navigating decades of specifications governed by standards bodies like the W3C and WHATWG. And because Ladybird implements these standards directly, it also stress-tests them in practice. Along the way, the project has found, reported, and sometimes fixed countless issues in the specifications themselves, contributions that strengthen the entire web platform for developers, browser vendors, and anyone who may attempt to build a browser in the future.

[...rest dele.........cough...cough..]

First came the Navigator. Then came the Explorer. Then came the Konqueror. When the Konqueror was in its seventh year, it begat Webkit.


Original Submission

posted by janrinok on Wednesday September 24, @05:48PM   Printer-friendly

How billions of hacked mosquitoes and a vaccine could beat the deadly dengue virus:

Last month, a parade of vehicles wound its way through three cities in Brazil, releasing clouds of mosquitoes into the air. The insects all carry a secret weapon — a bacterium called Wolbachia that lowers the odds that the mosquitoes can transmit the dreaded dengue virus to humans.

These infected mosquitoes are the latest weapon in Brazil's fight against dengue, which infects millions of people in the country each year and can be fatal. A biofactory that opened in the city of Curitiba in July can produce 100 million mosquito eggs per week — making it the largest such facility in the world. The company that runs it, Wolbito do Brasil, aims to protect about 14 million Brazilians per year through its Wolbachia-infected mosquitoes.

That will come as welcome news for the Brazilian health officials battling the rapidly growing threat of dengue. In 2024, the country experienced its worst outbreak yet, with 6.6 million probable cases and more than 6,300 related deaths. This year's outbreak, although less severe, is also one of the largest on record, with 1.6 million probable cases so far. And the problem is spreading. Argentina, Colombia and Peru also experienced record-breaking outbreaks in 2024 and have seen a sustained increase in cases in recent years. Across Latin America and the Caribbean, deaths from dengue last year totalled more than 8,400, and the global figure reached more than 12,000 — the highest ever recorded for this disease.

As outbreaks grow larger and the crisis becomes more urgent, the Wolbachia method isn't Brazil's only bet. A locally produced dengue vaccine is now awaiting approval by the country's drug-regulatory agency, and its health ministry expects to start administering tens of millions of doses by next year.

These twin advances offer some hope to other countries — in the region and beyond. Driven by forces such as climate change, mosquito adaptation, globalized trade and movements of people, dengue is becoming a crisis worldwide, with an estimated 3.9 billion people at risk of infection. As Brazil rolls out its armies of infected mosquitoes and a vaccine in the coming year, the rest of the world will be watching closely.

Currently, there is one main dengue vaccine in use around the world: Qdenga, licensed by the Japanese pharmaceutical company Takeda. The vaccine has been approved in many countries, including Brazil, which was the first nation to include it in its public-health system.

However, Qdenga's roll-out in Brazil is limited. The country bought nine million doses of the two-dose vaccine this year: enough to vaccinate 4.5 million of its population of more than 210 million. So far, Qdenga has been administered to children between the ages of 10 and 14, one of the groups most likely to end up in hospital after contracting dengue, together with older people. Its safety and efficacy have not yet been tested in adults aged over 60.

The main reasons for such a limited roll-out in Brazil are availability and cost. Even though Brazil secured Qdenga from Takeda at one of the cheapest prices in the world —around US$19 per dose — the cost is still high compared with other vaccines. And even in the most optimistic scenario, the maximum number of doses Takeda could provide by 2028 is 50 million — enough to vaccinate 25 million people. What's more, for people who have not had dengue before, clinical trials did not show Qdenga to be effective against all four variants — or serotypes — of the dengue virus.

Brazil is trying to address all of those limitations with its one-dose vaccine candidate, developed at the Butantan Institute, a public biomedical research centre in São Paulo. "Having local production capacity gives us independence on decisions — how many doses we need, and at what speed to vaccinate," says Esper Kallás, Butantan's director. "You can practise prices that are more suitable and absorbable by a public-health system such as Brazil's."

Butantan is also optimistic that its vaccine will be effective against all four forms of dengue. Severe disease usually occurs when a person is infected by a different serotype to their first infection. That means that a successful vaccine needs to generate antibodies for all four serotypes without triggering severe reactions, which makes it a difficult vaccine to develop. "It was indeed a challenge, as each serotype behaves differently," says Neuza Frazatti Gallina, manager of the viral vaccine development laboratory at Butantan.

The vaccine's development began at the US National Institutes of Health in the late 1990s, where scientists transformed dengue viruses they had isolated from patients into weakened vaccine strains that could trigger the production of protective antibodies without causing disease. In 2009, Butantan extended that research by working to solve the challenges of combining the four strains into a vaccine.

After testing 30 formulations, Butantan arrived at one that proved highly effective in preventing infections, according to the preliminary results of a phase III trial involving more than 16,000 volunteers in Brazil. The study reported that two years after vaccinations, the formulation was 89% effective in preventing infections in people who had previously been infected with dengue, and 74% effective in those with no previous exposure [1].
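
For context, trial efficacy figures like these are the relative reduction in attack rate between the vaccinated and placebo arms. A small illustration with made-up counts (not the trial's actual numbers):

```python
def vaccine_efficacy(cases_vax, n_vax, cases_placebo, n_placebo):
    """VE = 1 - (attack rate, vaccinated) / (attack rate, placebo)."""
    return 1 - (cases_vax / n_vax) / (cases_placebo / n_placebo)

# Hypothetical counts chosen to land near the reported 89% figure.
print(f"{vaccine_efficacy(11, 10_000, 100, 10_000):.0%}")  # -> 89%
```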

"It was a well-designed trial," says Annelies Wilder-Smith, who is team lead for vaccine development at the World Health Organization (WHO). But she says one limitation of the trial is that it was conducted in a single country, and therefore runs a risk that all four serotypes were not circulating at the time.

In fact, serotypes 3 and 4 were not prevalent during the data-collection period of the clinical trial, although they are now circulating in Brazil. Butantan researchers suggest that the vaccine will be effective against serotypes 3 and 4, pointing to data from a phase II trial [2] in 300 adults that showed participants produced neutralizing antibodies to each of the serotypes. That study evaluated safety and immunological response in the short term, rather than looking at the vaccine's long-term efficacy in preventing infections. The full results of the Brazilian phase III trial — which will provide data on long-term effectiveness — are not yet public and are undergoing peer review.

The vaccine is already moving through the country's regulatory process. And although there's still no certainty about when Anvisa, Brazil's regulatory agency, will approve the vaccine, the government is counting on it. In February, President Luiz Inácio Lula da Silva announced that, starting in 2026, the Ministry of Health would be buying 60 million doses annually.

To meet that demand, Butantan is now producing the vaccine at its São Paulo facility. On its lush campus, an entire building is dedicated to churning out doses.

Regarding the vaccine's approval, "We are very confident," says Kallás. "We also anticipate that there is a very prominent need to have this product in the arms of people. So we hit the road running and started producing vaccines late last year."

Although Butantan's production efforts will focus initially on meeting Brazil's need for millions of doses, Kallás expects that the vaccine could reach other countries. Butantan has been discussing with its development partner — the pharmaceutical giant Merck — and the Pan American Health Organization (PAHO) how to make the vaccine accessible to other countries. The logical first step, he says, would be to roll it out through PAHO to Latin America and the Caribbean, and then to other regions.

In the meantime, Merck is developing a potential vaccine for Asia with an almost identical formulation, which builds on the knowledge that Butantan has developed. In a statement, the drug firm said that Butantan is "sharing clinical data and other learnings". In June, Merck started enrolling participants for its own phase III trial. "All the data, experiences and insights they have collected with the Butantan vaccine will be helpful," says Wilder-Smith.

While Butantan awaits news about the vaccine's approval, the Wolbachia method to control dengue is gaining momentum. The World Mosquito Program (WMP) — a non-profit group of companies owned by Monash University in Melbourne, Australia, where the strategy was developed — has operations in 14 countries, including Vietnam, Indonesia, Mexico and Colombia, but Brazil leads the way in terms of the scale of its expansion.

The method's arrival in the Americas is tied to Brazilian researcher Luciano Moreira, now the chief executive of Wolbito do Brasil. Wolbachia is naturally present in around 50% of insects, but not in the mosquito species Aedes aegypti, which is the main transmitter of dengue and many other viruses.

Journal Reference:
[1] Live, Attenuated, Tetravalent Butantan–Dengue Vaccine in Children and Adults, New England Journal of Medicine (DOI: 10.1056/NEJMoa2301790)
[2] Safety and immunogenicity of the tetravalent, live-attenuated dengue vaccine Butantan-DV in adults in Brazil: a two-step, double-blind, randomised placebo-controlled phase 2 trial (DOI: )
[3] A Wolbachia Symbiont in Aedes aegypti Limits Infection with Dengue, Chikungunya, and Plasmodium, Cell (DOI: 10.1016/j.cell.2009.11.042)
[4] Sofia B. Pinto, Thais I. S. Riback, Gabriel Sylvestre, et al., Effectiveness of Wolbachia-infected mosquito deployments in reducing the incidence of dengue and other Aedes-borne diseases in Niterói, Brazil: A quasi-experimental study, PLOS Neglected Tropical Diseases (DOI: 10.1371/journal.pntd.0009556)
[5] Katherine L. Anders, Gabriel Sylvestre Ribeiro, Renato da Silva Lopes, et al., Long-term durability and public health impact of city-wide wMel Wolbachia mosquito releases in Niterói, Brazil during a dengue epidemic surge [$], medRxiv (DOI: 10.1101/2025.04.06.25325319)
[6] Efficacy of Wolbachia-Infected Mosquito Deployments for the Control of Dengue, New England Journal of Medicine (DOI: 10.1056/NEJMoa2030243)


Original Submission

posted by janrinok on Wednesday September 24, @01:02PM   Printer-friendly

The Guardian has a very interesting article about Human Computer Interactions and its implications beyond gamers:

Five years ago, on the verge of the first Covid lockdown, I wrote an article asking what seemed to be an extremely niche question: why do some people invert their controls when playing 3D games?

I thought a few hardcore gamers would be interested in the question. Instead, more than one million people read the article, and the ensuing debate caught the attention of Dr Jennifer Corbett (quoted in the original piece) and Dr Jaap Munneke, then based at the Visual Perception and Attention Lab at Brunel University London.

At the time, the two were conducting research into vision science and cognitive neuroscience, but when the country locked down, they were no longer able to test volunteers in their laboratory. The question of controller inversion provided the perfect opportunity to study the neuroscience of human-computer interactions using remote subjects. They put out a call for gamers willing to help research the reasons behind controller inversion and received many hundreds of replies.

And it wasn't just gamers who were interested. "Machinists, equipment operators, pilots, designers, surgeons – people from so many different backgrounds reached out," says Corbett. "Because there were so many different answers, we realised we had a lot of scientific literature to review to design the best possible study. Readers' responses turned this study into the first of its kind to try to figure out what actually are those factors that shape how users configure their controllers. Personal experiences, favourite games, different genres, age, consoles, which way you scroll with a mouse ... all of these things could potentially be involved."

This month the duo published their findings in a paper entitled "Why axis inversion? Optimising interactions between users, interfaces, and visual displays in 3D environments". And the reason why some people invert their controls? It's complicated.

The process started with participants completing a survey about their backgrounds and gaming experiences. "Many people told us that playing a flight simulator, using a certain type of console, or the first game they played were the reasons they preferred to invert or not," says Corbett. "Many also said they switched preferences over time. We added a whole new section to the study based on all this feedback."

What they discovered through the cognitive testing was that a lot of assumptions being made around controller preferences were wrong. "None of the reasons people gave us [for inverting controls] had anything to do with whether they actually inverted," says Corbett. "It turns out the most predictive out of all the factors we measured was how quickly gamers could mentally rotate things and overcome the Simon effect. The faster they were, the less likely they were to invert. People who said they sometimes inverted were by far the slowest on these tasks." So does this mean non-inverters are better gamers? No, says Corbett. "Though they tended to be faster, they didn't get the correct answer more than inverters who were actually slightly more accurate."
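
The Simon effect is the reaction-time cost incurred when a stimulus appears on the opposite side from the key you must press. A small illustration of how it is computed, using made-up trial data:

```python
from statistics import mean

trials = [
    # (stimulus side, response side, reaction time in ms)
    ("left",  "left",  420), ("right", "right", 430), ("left",  "left",  415),
    ("left",  "right", 470), ("right", "left",  485), ("right", "left",  460),
]

congruent   = [rt for stim, resp, rt in trials if stim == resp]
incongruent = [rt for stim, resp, rt in trials if stim != resp]
print(f"Simon effect: {mean(incongruent) - mean(congruent):.0f} ms")  # -> 50 ms
```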

In short, gamers think they are an inverter or a non-inverter because of how they were first exposed to game controls. Someone who played a lot of flight sims in the 1980s may have unconsciously taught themselves to invert, and now they consider that their innate preference; alternatively, a gamer who grew up in the 2000s, when non-inverted controls became prevalent, may think they are naturally a non-inverter. However, the cognitive tests suggest otherwise. It's much more likely that you invert or don't invert because of how your brain perceives objects in 3D space.

Consequently, Corbett suggests that trying the controller setup you don't currently use may make you a better gamer. "Non-inverters should give inversion a try – and inverters should give non-inversion another shot," she says. "You might even want to force yourself to stick with it for a few hours. People have learned one way. That doesn't mean they won't learn another way even better. A good example is being left-handed. Until the mid-20th century, left-handed children were forced to write with their right hand, causing some people to have lifelong handwriting difficulties and learning problems. Many older adults still don't realise they're naturally left-handed and could write/draw much better if they switched back."

Through this research, Corbett and Munneke have established that complex and often unconscious cognitive processes shape how individuals use controllers, and that this may have important ramifications not just for game hardware but for any human-computer interface, from aircraft controls to surgical devices. They designed a framework for assessing how best to configure controls for any given individual, which is now available via their research paper.

"This work opened our eyes to the huge potential that optimising inversion settings has for advancing human-machine teaming," says Corbett. "So many technologies are pairing humans with AI and other machines to augment what we can do alone. Understanding how a given individual best performs with a certain setup (controller configuration, screen placement, whether they are trying to hit a target or avoid an obstacle) can allow for much smoother interactions between humans and machines in lots of scenarios from partnering with an AI player to defeat a boss, to preventing damage to delicate internal tissue while performing a complicated laparoscopic surgery."

So what started as an idle, slightly nerdy question has become a published cognitive research paper. One scientific publication has already cited it, and interview requests are pouring in from podcasts and YouTubers. As for my takeaway? "The most surprising finding for gamers [who don't invert] is that they might perform better if they practised with an inverted control scheme," says Corbett. "Maybe not, but given our findings, it's definitely worth a shot because it could dramatically improve competitive game play!"

Additional Journal link: Why axis inversion? Optimizing interactions between users, interfaces, and visual displays in 3D environments


Original Submission

posted by janrinok on Wednesday September 24, @08:16AM   Printer-friendly

'A Black Hole of Energy Use': Meta's Massive AI Data Center Is Stressing Out a Louisiana Community:

A massive data center for Meta's AI will likely lead to rate hikes for Louisiana customers, but Meta wants to keep the details under wraps.

Holly Ridge is a rural community bisected by US Highway 80, gridded with farmland, with a big creek—it is literally named Big Creek—running through it. It is home to rice and grain mills and an elementary school and a few houses. Soon, it will also be home to Meta's massive, 4 million square foot AI data center hosting thousands of perpetually humming [4:01 --JE] servers that will draw billions of watts of power. And that energy-guzzling infrastructure will be partially paid for by Louisiana residents.

The plan is part of what Meta CEO Mark Zuckerberg said would be "a defining year for AI." On Threads, Zuckerberg boasted that his company was "building a 2GW+ datacenter that is so large it would cover a significant part of Manhattan," posting a map of Manhattan with the data center's footprint overlaid. Zuckerberg went on to say that over the coming years, AI "will drive our core products and business, unlock historic innovation, and extend American technology leadership. Let's go build! 💪"
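
For a sense of scale, a back-of-the-envelope calculation (my arithmetic, not a figure from Meta or Entergy) shows what "2GW+" implies if the facility drew full power around the clock; treat it as an upper bound, since real load varies:

    # Back-of-envelope only: annual energy if a 2 GW facility ran at
    # full draw continuously. Real data center load varies.
    capacity_gw = 2.0            # "2GW+" per Zuckerberg's post
    hours_per_year = 24 * 365    # 8760

    annual_twh = capacity_gw * hours_per_year / 1000  # GWh -> TWh
    print(f"{annual_twh:.1f} TWh/year")  # ~17.5 TWh/year

    # A typical US home uses roughly 10.5 MWh/year (EIA estimate), so
    # 17.5 TWh is on the order of 1.5 million homes' annual usage.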

What Zuckerberg did not mention is that "Let's go build" refers not only to the massive data center but also to three new Meta-subsidized gas power plants and a transmission line to fuel it, serviced by Entergy Louisiana, the region's energy monopoly.

Key details about Meta's investments in the data center remain vague, and Meta's contracts with Entergy are largely cloaked from public scrutiny. But what is known is that the $10 billion data center has been positioned as an enormous economic boon for the area—one that politicians bent over backward to facilitate—and Meta said it will invest $200 million in "local roads and water infrastructure."

A January report from NOLA.com said that the state had rewritten zoning laws, promised to change a law so that it no longer had to put state property up for public bidding, and turned what was supposed to be a tax incentive for broadband internet meant to bridge the digital divide into an incentive for data centers only, all with the goal of luring in Meta.

But Entergy Louisiana's residential customers, who live in one of the poorest regions of the state, will see their utility bills increase to pay for Meta's energy infrastructure, according to Entergy's application. Entergy estimates that amount will be small and will only cover a transmission line, but advocates for energy affordability say the costs could balloon depending on whether Meta agrees to finish paying for its three gas plants 15 years from now. The short-term rate increases will be debated in a public hearing before state regulators that has not yet been scheduled.

The Alliance for Affordable Energy called it a "black hole of energy use," and said "to give perspective on how much electricity the Meta project will use: Meta's energy needs are roughly 2.3x the power needs of Orleans Parish ... it's like building the power impact of a large city overnight in the middle of nowhere."

By 2030, Entergy's electricity prices are projected to increase 90 percent from where they were in 2018, although the company attributes much of that to hurricane damage to infrastructure. The state already has a high energy cost burden, in part because of storm damage to infrastructure and because balmy heat, made worse by climate change, drives air conditioner use. The state's homes are largely not energy efficient, with many porous older buildings that don't retain heat in the winter or remain cool in the summer.

"You don't just have high utility bills, you also have high repair costs, you have high insurance premiums, and it all contributes to housing insecurity," said Andreanecia Morris, a member of Housing Louisiana, which is opposed to Entergy's gas plant application. She believes Meta's data center will make it worse. And Louisiana residents have reasons to distrust Entergy when it comes to passing off costs of new infrastructure: in 2018, the company's New Orleans subsidiary was caught paying actors to testify on behalf of a new gas plant. "The fees for the gas plant have all been borne by the people of New Orleans," Morris said.

In its application to build new gas plants and in public testimony, Entergy says the cost of Meta's data center to customers will be minimal and has even suggested Meta's presence will make their bills go down. But Meta's commitments are temporary, many of Meta's assurances are not binding, and crucial details about its deal with Entergy are shielded from public view, a structural issue with state energy regulators across the country. 

[Editor's Note - The source is far too long to include here. I recommend reading the original source (in the link) for more of the details.--JR]


Original Submission

posted by janrinok on Wednesday September 24, @03:35AM   Printer-friendly

When cancer targets the young:

Cancer is usually a curse of time. In the United States, the vast majority of cancer diagnoses are in people over age 50. Our bodies' cells accumulate DNA damage over time, and older immune systems are not as good at making repairs. At the same time, decades of interaction with sunlight, tobacco products, alcohol, carcinogenic chemicals and other risk factors also take their toll.

But in recent years, cancer has been increasingly attacking younger adults. Global incidence rates of several types of cancer are rising in people in their 20s, 30s and 40s, many with no family history of the disease. Scientists don't know why diagnoses are soaring in people under age 50, and they are racing to find out. But as freelance journalist Fred Schwaller reports in this issue, identifying how risk factors like diet or environmental exposures could be at fault is notoriously difficult because there are so many potential influences at play.

For one, cancers in young adults may advance much more quickly than they do in older people, belying the assumption that healthy young bodies would excel at eradicating malignant cells.

What's more, cancer screening recommendations in many countries aren't currently designed to detect the disease in younger people. Young adult patients often say doctors dismissed their sense that something wasn't right, telling them they were "too young to have cancer," even when they repeatedly raised concerns. And that can lead to delayed diagnosis and treatment.

[...] Harsh treatments like radiation and chemotherapy can damage immature egg cells and cells that make sperm, making it impossible for some people who had cancer in childhood to have biological children. Teenage and adult patients may be able to freeze eggs or sperm, but children who haven't gone through puberty don't have those options. Senior writer Meghan Rosen reports on emerging research intended to help make that possible, including a conversation with the first childhood cancer survivor to have testicular stem cells transplanted back into his body.

Parents of children with cancer are increasingly considering these options for both boys and girls. And while scientists say the work is still in its infancy, they hope more childhood cancer survivors will one day have the option to thrive as parents.


Original Submission