https://phys.org/news/2025-12-ice-home-food-scientist-easy.html
When you splurge on a cocktail in a bar, the drink often comes with a slab of aesthetically pleasing, perfectly clear ice. The stuff looks much fancier than the slightly cloudy ice you get from your home freezer. How do they do this?
Clear ice is actually made from regular water—what's different is the freezing process.
With a little help from science, you can make clear ice at home, and it's not even that tricky. However, there are quite a few hacks on the internet that won't work. Let's dive into the physics and chemistry involved.
Homemade ice is often cloudy because it has a myriad of tiny bubbles and other impurities. In a typical ice cube tray, as freezing begins and ice starts to form inward from all directions, it traps whatever is floating in the water: mostly air bubbles, dissolved minerals and gases.
These get pushed toward the center of the ice as freezing progresses and end up caught in the middle of the cube with nowhere else to go.
That's why when making ice the usual way—just pouring water into a vessel and putting it in the freezer—the ice will always end up looking somewhat cloudy. Light scatters as it hits the finished ice cube, colliding with the concentrated core of trapped gases and minerals. This creates the cloudy appearance.
As well as looking nice, clear ice is denser and melts slower because it doesn't have those bubbles and impurities. This also means that it dilutes drinks more slowly than regular, cloudy ice.
Because it doesn't have impurities, the clear ice should also be free from any inadvertent flavors that could contaminate your drink.
Additionally, because it's less likely to crumble, clear ice can be easily cut and formed into different shapes to further dress up your cocktail.
If you've tried looking up how to make clear ice before, you've likely seen several suggestions. These include using distilled, boiled or filtered water, and a process called directional freezing. Here's the science on what works and what doesn't.
You might think that to get clear ice, you simply need to start out with really clean water. However, a recent study found this isn't the case.
- Using boiling water: Starting out with boiling water does mean the water will have less dissolved gases in it, but boiling doesn't remove all impurities. It also doesn't control the freezing process, so the ice will still become cloudy.
- Using distilled water: While distilling water removes more impurities than boiling, distilled water still freezes from the outside in, concentrating any remaining impurities or air bubbles in the center, again resulting in cloudy ice.
- Using filtered or tap water: Filtering the water or using tap water also doesn't stop the impurities from concentrating during the conventional freezing process.
As it turns out, it's not the water quality that guarantees clear ice. It's all about how you freeze it. The main technique for successfully making clear ice is called "directional freezing."
Directional freezing is simply the process of forcing water to freeze in a single direction instead of from all sides at once, like it does in a regular ice cube tray.
This way, the impurities and air will be forced to the opposite side from where the freezing starts, leaving the ice clear except for a small cloudy section.
In practice, this means insulating the sides of the ice container so that the water freezes in one direction, typically from the top down. This is because heat transfer and the phase transition from liquid to solid happen faster through the exposed top than through the insulated sides.
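To see why directional freezing pushes the cloudiness to one end, here is a toy one-dimensional model (not from the article, just an illustration with made-up numbers): a column of water freezes cell by cell from the top, and each new layer of ice rejects most of its dissolved impurities into the liquid remaining below.

```python
# Toy 1-D sketch of directional freezing. Assumptions: the liquid stays well
# mixed, and each freezing layer rejects a fixed fraction of its impurities.

N = 20                # number of cells; the top cell (index 0) freezes first
k_reject = 0.9        # fraction of impurities pushed ahead of the front (assumed)
liquid_conc = 1.0     # initial impurity concentration, arbitrary units
ice_conc = []

for cells_left in range(N, 0, -1):
    frozen = liquid_conc * (1 - k_reject)     # impurities trapped in this ice layer
    rejected = liquid_conc * k_reject         # impurities pushed into the liquid below
    ice_conc.append(frozen)
    if cells_left > 1:
        # spread the rejected impurities over the remaining liquid cells
        liquid_conc = liquid_conc + rejected / (cells_left - 1)
    else:
        ice_conc[-1] += rejected              # the last cell keeps whatever is left

print("impurity concentration, top -> bottom:")
print([round(c, 2) for c in ice_conc])
# The final few cells end up far more concentrated: that's the cloudy portion
# you pour off or cut away.
```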
The simplest way to have a go at directional freezing at home is to use an insulated container—you can use a really small cooler (that is, an "esky"), an insulated mug or even a commercially available insulated ice cube tray designed for making clear ice at home.
Fill the insulated container with water and place it in the freezer, then check on it periodically.
Once all the impurities and air bubbles are concentrated in a single cloudy area at the bottom, you can either pour away this water before the block is fully frozen through, or let it freeze solid, cut off the cloudy portion with a large serrated knife, and then cut the clear ice into cubes for your drinks.
If using a commercial clear ice tray, it will likely come with instructions on how to get rid of the cloudy portion so you can enjoy the sparkling clear ice.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
https://scitechdaily.com/scientists-say-ozempic-could-change-how-you-feel-after-drinking-alcohol/
Drugs used for diabetes and weight loss may also dampen alcohol's effects by slowing how fast it enters the bloodstream. Early research suggests this could help people feel less intoxicated and potentially drink less.
Evidence is growing that medications commonly prescribed for diabetes and weight loss, better known by brand names like Ozempic and Wegovy, may also help reduce alcohol consumption.
New findings from the Fralin Biomedical Research Institute at VTC, published this month in Scientific Reports, suggest that GLP-1 agonists slow the rate at which alcohol enters the bloodstream. As a result, alcohol's effects on the brain also appear to develop more slowly.
"People who drink know there's a difference between nursing a glass of wine and downing a shot of whiskey," said Alex DiFeliceantonio, assistant professor and interim co-director of the FBRI's Center for Health Behaviors Research.
Even though both drinks contain the same amount of alcohol, 0.6 ounces, a shot causes blood alcohol levels to rise much faster. That rapid increase changes how alcohol feels because of the way the body absorbs and processes it over time.
"Why would this matter? Faster-acting drugs have a higher abuse potential," DiFeliceantonio said. "They have a different impact on the brain. So if GLP-1s slow alcohol entering the bloodstream, they could reduce the effects of alcohol and help people drink less."
Alcohol use is widespread in the United States, with more than half of adults reporting that they drink. About one in ten people meets the criteria for alcohol use disorder. Long-term heavy drinking is linked to serious health problems, including high blood pressure, cancer, and heart and liver disease. In January, U.S. Surgeon General Vivek Murthy issued an advisory naming alcohol use as the third leading preventable cause of cancer, after tobacco use and obesity.
In the study, participants who took GLP-1 medications such as semaglutide, tirzepatide, or liraglutide showed a slower increase in breath alcohol concentration, even though they consumed the same amount of alcohol as others. Their alcohol levels rose more gradually, and they also reported feeling less intoxicated based on their own assessments.
The research was supported by funding from Virginia Tech's Fralin Biomedical Research Institute and focused on how alcohol moves through the body and how it feels subjectively for people taking GLP-1 drugs. Researchers say the results offer early guidance for designing larger and more detailed studies on whether these medications could be used to help reduce alcohol use.
The study included 20 adults with a BMI of 30 or higher. Half of the participants were taking a maintenance dose of a GLP-1 medication, while the other half were not taking any medication. All participants were recruited from Roanoke, Virginia, and nearby communities. They fasted before arriving and were given a snack bar to keep caloric intake and stomach contents consistent.
Researchers measured blood pressure, pulse, breath alcohol concentration, and blood glucose levels. Ninety minutes later, participants were given an alcoholic drink and asked to finish it within 10 minutes. Over the next hour, researchers repeatedly measured breath alcohol levels and asked participants about cravings, appetite, taste, and alcohol effects. One question asked participants to rate, on a scale from zero to 10, "How drunk do you feel right now?"
Participants taking GLP-1 medications consistently reported feeling less drunk.
Afterward, participants stayed in a recovery room while their bodies processed the alcohol. Breath alcohol levels were checked every 30 minutes, blood glucose was measured twice, and participants answered follow-up questions three hours later. After four hours, once breath alcohol levels dropped below 0.02 percent and a study physician approved, participants were allowed to leave.
"Other medications designed to help reduce alcohol intake" — naltrexone and acamprosate — "act on the central nervous system," said DiFeliceantonio, the study's corresponding author. "Our preliminary data suggest that GLP-1s suppress intake through a different mechanism."
GLP-1 drugs slow gastric emptying, which appears to delay the rise of alcohol levels in the blood.
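As a rough illustration of that mechanism (not the study's model), a simple one-compartment pharmacokinetic sketch shows how a slower absorption rate, the expected effect of delayed gastric emptying, lowers and delays the peak blood alcohol level even though the dose is unchanged. The rate constants below are invented for illustration.

```python
# Toy one-compartment model with first-order absorption and elimination.
# Smaller ka (slower gastric emptying) -> later, lower peak; same total exposure.
import math

def bac(t, ka, ke=0.5, dose=1.0):
    """Relative blood alcohol level at time t (hours) for absorption rate ka."""
    return dose * ka / (ka - ke) * (math.exp(-ke * t) - math.exp(-ka * t))

times = [i / 4 for i in range(0, 13)]            # 0 to 3 hours in 15-minute steps
fast = [bac(t, ka=3.0) for t in times]           # normal gastric emptying (assumed)
slow = [bac(t, ka=1.0) for t in times]           # slowed emptying on a GLP-1 (assumed)

print("peak (fast emptying):", round(max(fast), 2), "at t =", times[fast.index(max(fast))], "h")
print("peak (slow emptying):", round(max(slow), 2), "at t =", times[slow.index(max(slow))], "h")
# The slow-absorption curve peaks later and lower, while the area under both
# curves (total alcohol exposure) is identical in this model.
```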
The study began during a faculty retreat at the Fralin Biomedical Research Institute and was led by Warren Bickel, professor and director of the Addiction Recovery Research Center, who died in 2024.
The work built on an earlier analysis of social media posts on Reddit, where users described experiencing fewer alcohol cravings while taking medications for type 2 diabetes and obesity.
"His guidance shaped every stage of this research — from the initial idea to its final form — and his passion for scientific discovery continues to inspire me every day," said Fatima Quddos, a graduate researcher in Bickel's lab and first author on both studies.
"Bickel's work had long focused on what happens when you delay rewards, so we asked, 'What if GLP-1s affect how the body handles alcohol?'" DiFeliceantonio said. "Ending this project was bittersweet, because it was my last collaboration with him."
"He was always asking, 'How do we help people the fastest?' Using a drug that's already shown to be safe to help people reduce drinking could be a way to get people help fast," DiFeliceantonio said.
Although the study was small, researchers say the differences between groups were clear and provide early evidence to support larger clinical trials. Those future studies would test whether GLP-1 drugs could become a treatment option for people looking to cut back on alcohol.
"As a recent graduate, I'm deeply inspired by the potential this research holds — not only for advancing our scientific understanding, but also for paving the way toward future therapies," said Quddos, who earned her doctorate from Virginia Tech's Translational Biology, Medicine, and Health Graduate Program in May. "The possibility of offering new hope to individuals struggling with addiction is what makes this work so meaningful."
Reference: “A preliminary study of the physiological and perceptual effects of GLP-1 receptor agonists during alcohol consumption in people with obesity” by Fatima Quddos, Mary Fowler, Ana Carolina de Lima Bovo, Zacarya Elbash, Allison N. Tegge, Kirstin M. Gatchalian, Anita S. Kablinger, Alexandra G. DiFeliceantonio and Warren K. Bickel, 15 October 2025, Scientific Reports.
DOI: 10.1038/s41598-025-17927-w
A study from the critical technology tracker run by the Australian Strategic Policy Institute (ASPI), an independent think tank, indicates that China is leading research in nearly 90% of the crucial technologies that "significantly enhance, or pose risks to, a country's national interests."
The tracker measures not a country's overall strength in critical technologies but its research performance in them. It does so by focusing on high-impact research: the top 10 percent of most-cited research papers. A country's five-year performance between 2020 and 2024 is taken as a lead indicator of its future science and technology capability.
The 10 new technologies that have been added to it are key to strategic advantage, including advanced computing and communication, artificial intelligence, and emerging neurotechnologies relevant to human-machine integration. The dataset has also undergone a full refresh to ensure accuracy and comparability.
The updated picture is stark. China's exceptional gains in high-impact research are continuing, and the gap between it and the rest of the world is still widening. In eight of the 10 newly added technologies, China has a clear lead in its global share of high-impact research output. Four—cloud and edge computing, computer vision, generative AI and grid integration technologies—carry a high technology monopoly risk (TMR) rating, reflecting substantial concentration of expertise within Chinese institutions.
In total, China now leads in 66 of the 74 technologies tracked, with the United States leading in the remaining eight—an imbalance that underscores why trusted partners need to act together to leverage comparative advantages, reduce concentration risk and shape the trajectory of critical technologies together.
The historical data for these new technologies tells a familiar story: an early and often overwhelming US lead in research output in the opening decade of this millennium, eroded and then outmatched by persistent long-term Chinese investment in fundamental research.
The ASPI team based its analysis on a database of more than nine million publications from around the world. For each technology, it identified the top 10% most-cited papers published between 2020 and 2024, calculated each country's share of that high-impact output, and ranked nations accordingly.
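Here is a minimal sketch of that share-of-top-decile metric, using made-up records; the field layout and sample data are hypothetical, not ASPI's actual schema or corpus.

```python
# Compute each country's share of the top 10% most-cited papers in a technology.
from collections import Counter

papers = [
    # (technology, lead country, citations) -- invented example records
    ("generative AI", "China", 310), ("generative AI", "US", 290),
    ("generative AI", "China", 150), ("generative AI", "UK", 45),
    ("generative AI", "China", 30),  ("generative AI", "US", 12),
    # ... the real tracker draws on millions of 2020-2024 publications
]

def top_decile_shares(papers, tech):
    ranked = sorted((p for p in papers if p[0] == tech),
                    key=lambda p: p[2], reverse=True)
    cutoff = max(1, len(ranked) // 10)          # keep the top 10% most-cited papers
    top = ranked[:cutoff]
    counts = Counter(country for _, country, _ in top)
    return {country: n / len(top) for country, n in counts.items()}

print(top_decile_shares(papers, "generative AI"))
```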
Paywalled Nature article:
https://www.nature.com/articles/d41586-025-04048-7?WT.ec_id=NATURE-20260102
Original study:
https://www.aspistrategist.org.au/aspis-critical-technology-tracker-2025-updates-and-10-new-technologies/
Planet's oldest bee species and primary pollinators were under threat from deforestation and competition from 'killer bees'
Stingless bees from the Amazon have become the first insects to be granted legal rights anywhere in the world, in a breakthrough supporters hope will be a catalyst for similar moves to protect bees elsewhere.
It means that across a broad swathe of the Peruvian Amazon, the rainforest's long-overlooked native bees – which, unlike their cousins the European honeybees, have no sting – now have the right to exist and to flourish.
Cultivated by Indigenous peoples since pre-Columbian times, stingless bees are thought to be key rainforest pollinators, sustaining biodiversity and ecosystem health.
But they are faced with a deadly confluence of climate change, deforestation and pesticides, as well as competition from European bees, and scientists and campaigners have been racing against time to get stingless bees on international conservation red lists.
Constanza Prieto, Latin American director at the Earth Law Center, who was part of the campaign, said: "This ordinance marks a turning point in our relationship with nature: it makes stingless bees visible, recognises them as rights-bearing subjects, and affirms their essential role in preserving ecosystems."
The world-first ordinances, passed in two Peruvian regions in the past few months, follow a campaign of research and advocacy spearheaded by Rosa Vásquez Espinoza, founder of Amazon Research Internacional, who has spent the past few years travelling into the Amazon to work with Indigenous people to document the bees.
Espinoza, a chemical biologist, first started researching the bees in 2020, after a colleague asked her to conduct an analysis of their honey, which was being used during the pandemic in Indigenous communities where treatments for Covid were in short supply. She was stunned by the findings.
"I was seeing hundreds of medicinal molecules, like molecules that are known to have some sort of biological medicinal property," Espinoza recalled. "And the variety was also really wild – these molecules have been known to have antiinflammatory effects or antiviral, antibacterial, antioxidant, even anti-cancer."
Espinoza, who has written a book, The Spirit of the Rainforest, about her work in the Amazon, began leading expeditions to learn more about stingless bees, working with Indigenous people to document the traditional methods of finding and cultivating the insects, and harvesting their honey.
Found in tropical regions across the world, stingless bees, a class that encompasses a number of varieties, are the oldest bee species on the planet. About half of the world's 500 known species live in the Amazon, where they are responsible for pollinating more than 80% of the flora, including such crops as cacao, coffee and avocados.
They also hold deep cultural and spiritual meaning for the forest's Indigenous Asháninka and Kukama-Kukamiria peoples. "Within the stingless bee lives Indigenous traditional knowledge, passed down since the time of our grandparents," said Apu Cesar Ramos, president of EcoAshaninka of the Ashaninka Communal Reserve. "The stingless bee has existed since time immemorial and reflects our coexistence with the rainforest."
From the outset, Espinoza began hearing reports that the bees were becoming more difficult to find. "We were talking actively with the different community members and the first things they were saying, which they still do to this day, is: 'I cannot see my bees any more. It used to take me 30 minutes walking into the jungle to find them. And now it takes me hours.'"
Her chemical analysis had also turned up some concerning findings. Traces of pesticides were appearing in the stingless bees' honey – despite their being kept in areas far from industrial agriculture.
A lack of awareness about stingless bees made obtaining funding for research difficult, Espinoza said. So at the same time as beginning fieldwork, she and her colleagues began advocating for recognition of the insects, both in Peru and at the International Union for the Conservation of Nature (IUCN).
For years, the only kinds of bees to have official recognition in Peru have been European honeybees, brought to the continent by colonisers in the 1500s.
"It almost created a vicious cycle. I cannot give you the funding because you're not on the list, but you cannot even get on the list because you don't have the data. You don't have the funding to get it." In 2023, they formally began a project to map the extent and ecology of the bees, "because by that time we had already spoken with the IUCN team and some government people in Peru and understood that that data was critical."
The mapping revealed links between deforestation and the decline of stingless bees – research that helped contribute to the passing of a law in 2024 recognising stingless bees as the native bees of Peru. The law was a critical step, as Peruvian law requires the protection of native species.
Dr César Delgado, a researcher at the Institute of Investigation of the Peruvian Amazon, described stingless bees as "primary pollinators" in the Amazon, contributing not just to plant reproduction, but also to biodiversity, forest conservation and global food security.
But their research revealed something else too.
An experiment in 1950s Brazil to create a strain that would produce more honey in tropical conditions led to the creation of the Africanised honeybee – a variety that was also more aggressive, earning them the fearsome moniker "African killer bees". Now, Espinoza and her colleagues found, these Africanised bees have begun outcompeting the comparatively gentle stingless bees in their own habitats.
On an expedition in the Amazonian highlands of Junin, southern Peru, they met Elizabeth, an Asháninka elder, who told them of what Espinoza said was "the strongest example of [bee] species competition that I have ever seen".
Living a semi-nomadic lifestyle in a remote part of the Avireri Vraem Biosphere reserve, Elizabeth farmed and kept bees at a spot in the forest some distance from her home. But she described how her stingless bees had been displaced by Africanised bees, which attacked her violently whenever she visited.
"I felt so scared, to be honest," said Espinoza. "Because I have heard of that before, but not to that extent. She had horror in her eyes and she kept looking at me straight and asking: 'how do I get rid of them? I hate them. I want them gone'."
It is the municipality where Elizabeth lives, Satipo, that became the first to pass an ordinance granting legal rights to stingless bees in October. Across the Avireri Vraem reserve the bees will now have rights to exist and thrive, to maintain healthy populations, to a healthy habitat free from pollution and ecologically stable climatic conditions and, crucially, to be legally represented in cases of threat or harm. A second municipality, Nauta, in the Loreto region, approved a matching ordinance on Monday 22 December.
The ordinances are precedents with no equivalent worldwide. According to Prieto they will establish a mandate requiring policies for the bees' survival, "including habitat reforestation and restoration, strict regulation of pesticides and herbicides, mitigation of and adaptation to the impacts of climate change, the advancement of scientific research, and the adoption of the precautionary principle as a guiding framework for all decisions that may affect their survival."
Already, a global petition by Avaaz calling on Peru to make the law nationwide has reached more than 386,000 signatures, and there has also been strong interest from groups in Bolivia, the Netherlands and the US who want to follow the municipalities' examples as a basis to advocate for the rights of their own wild bees.
Ramos said: "The stingless bee provides us with food and medicine, and it must be made known so that more people will protect it. For this reason, this law that protects bees and their rights represents a major step forward for us, because it gives value to the lived experience of our Indigenous peoples and the rainforest."
[ Links in article ]
China's first homegrown 6nm GPUs are no longer a show-floor exclusive.
Earlier this year, Lisuan took the stage to announce its G100 series of GPUs, based on the in-house "TrueGPU" architecture and fabricated on TSMC's N6 process. It was the first time a Chinese company had the potential to directly rival AMD and Nvidia's duopoly in the discrete GPU market. Sampling for these cards was expected to begin in September, and now IT Home reports that initial deliveries have begun.
There are two GPUs in the G100 family: the gaming-oriented 7G106 and the enterprise-focused 7G105. It was the former that really made headlines by touting RTX 4060-level performance, even beating that card in early benchmark results. Specs-wise, we're looking at 192 texture units, 96 ROPs, and an FP32 throughput of up to 24 TFLOP/s.
The 7G106 has 12GB of GDDR6 VRAM on a 192-bit bus, which is doubled to 24GB on the workstation-class 7G105, with proper ECC support. These GPUs support modern APIs including DirectX 12, use the PCIe 4.0 interface, and even have a custom upscaling solution called NSRR, akin to Nvidia's DLSS or AMD's FSR. Unlike those two, however, Lisuan supports Microsoft's Windows-on-Arm initiative.
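The report doesn't state the memory data rate, so the bandwidth of that 192-bit GDDR6 configuration can only be estimated. The per-pin speeds below are common GDDR6 bins assumed for illustration, not Lisuan's published spec.

```python
# Rough memory bandwidth estimate for a 192-bit GDDR6 configuration.
bus_width_bits = 192
for data_rate_gbps in (14, 16, 18):              # assumed per-pin data rates
    bandwidth_gbs = bus_width_bits * data_rate_gbps / 8
    print(f"{data_rate_gbps} Gbps GDDR6 on a {bus_width_bits}-bit bus ~ {bandwidth_gbs:.0f} GB/s")
# 14-18 Gbps would put the 7G106 in the 336-432 GB/s range, broadly in line
# with other 192-bit GDDR6 cards.
```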
IT Home says the G100 series began production on September 15, 2025, in China, and now that customers have started to receive the first batch of orders, these graphics cards have successfully transitioned into commercialization. This is a big deal for the region, and Lisuan's TrueGPU architecture represents China's self-reliance ambitions in the boldest way possible — something that even local darling Moore Threads hasn't been able to achieve yet.
When these GPUs were first announced, we were impressed by the performance Lisuan was touting. Independent reviews are still pending, but if the numbers from the launch event hold up, Lisuan's efforts will have paid off massively, creating a legitimate homegrown alternative. Now that the G100 is finally shipping, we should start to see those claims validated sooner rather than later.
China is requiring chipmakers to use at least 50% domestically made equipment for adding new capacity, three people familiar with the matter said, as Beijing pushes to build a self-sufficient semiconductor supply chain.
The rule is not publicly documented, but chipmakers seeking state approval to build or expand their plants have been told by authorities in recent months that they must prove through procurement tenders that at least half their equipment will be Chinese-made, the people told Reuters.
The mandate is one of the most significant measures Beijing has introduced to wean itself off reliance on foreign technology, a push that gathered pace after the U.S. tightened technology export restrictions in 2023, banning sales of advanced AI chips and semiconductor equipment to China.
While those U.S. export restrictions blocked the sale of some of the most advanced tools, the 50% rule is leading Chinese manufacturers to choose domestic suppliers even in areas where foreign equipment from the U.S., Japan, South Korea and Europe remains available.
[...] "Authorities prefer if it is much higher than 50%," one source told Reuters. "Eventually they are aiming for the plants to use 100% domestic equipment."
[...] China's President Xi Jinping has been calling for a "whole nation" effort to build a fully self-sufficient domestic semiconductor supply chain that involves thousands of engineers and scientists at companies and research centers nationwide.
The effort is being made across the wide supply-chain spectrum. Reuters reported earlier this month that Chinese scientists are working on a prototype of a machine capable of producing cutting-edge chips, an outcome that Washington has spent years trying to prevent.
"Before, domestic fabs like SMIC would prefer U.S. equipment and would not really give Chinese firms a chance," a former employee at local equipment maker Naura Technology, said, referring to the Semiconductor Manufacturing International Corporation
"But that changed starting with the 2023 U.S export restrictions, when Chinese fabs had no choice but to work with domestic suppliers."
[...] To support the local chip supply chain, Beijing has also poured hundreds of billions of yuan into its semiconductor sector through the "Big Fund", which established a third phase in 2024 with 344 billion yuan ($49 billion) in capital.
The policy is already yielding results, including in areas such as etching, a critical chip manufacturing step that involves removing materials from silicon wafers to carve out intricate transistor patterns, sources said.
China's largest chip equipment group, Naura, is testing its etching tools on a cutting-edge 7nm (nanometre) production line of SMIC, two sources said. The early-stage milestone, which comes after Naura recently deployed etching tools on 14nm successfully, demonstrates how quickly domestic suppliers are advancing.
"Naura's etching results have been accelerated by the government requiring fabs to use at least 50% domestic equipment," one of the people told Reuters, adding that it was forcing the company to rapidly improve.
[...] Analysts estimate that China has now reached roughly 50% self-sufficiency in photoresist-removal and cleaning equipment, a market previously dominated by Japanese firms, but now locally led by Naura.
Major releases still coming out, and enthusiasts collecting discs:
20 years ago today, CES in Las Vegas was buzzing with talk of Blu-ray technology, players, and media. Several years in the making, Blu-ray arrived with considerable industry backing: "seven out of the eight major movie studios announced movie titles for the launch," reports Blu-ray.com. This successor to the DVD offered improved density and thus capacity vs earlier optical formats, largely thanks to the development of blue‑violet laser diodes – hence the name.
Blu-ray discs boosted single-layer media capacity to 25GB, vs 4.7GB for DVDs, using a new 405nm blue‑violet laser combined with more advanced materials. The shorter wavelength, focused through a higher-numerical-aperture lens, produced a smaller spot and so allowed more pits per square millimetre. This was complemented by a tighter track pitch and a thinner (but harder) protection layer, boosting single-layer capacity more than fivefold.
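A quick back-of-the-envelope check of those figures: the readout spot diameter scales roughly with wavelength divided by numerical aperture, so areal density scales roughly with (NA/wavelength) squared. This is a simplification, but it lands close to the published capacities.

```python
# Published optical parameters for single-layer DVD and Blu-ray; the scaling
# rule below is a simplification of the real channel/format differences.
dvd = {"wavelength_nm": 650, "na": 0.60, "capacity_gb": 4.7}
bd  = {"wavelength_nm": 405, "na": 0.85, "capacity_gb": 25.0}

density_ratio = (bd["na"] / bd["wavelength_nm"])**2 / (dvd["na"] / dvd["wavelength_nm"])**2
print(f"optical density gain: ~{density_ratio:.1f}x")                          # ~5.2x
print(f"actual capacity gain: {bd['capacity_gb'] / dvd['capacity_gb']:.1f}x")  # ~5.3x
# The remainder comes from tighter track pitch, more efficient modulation and
# error correction, and the thinner 0.1 mm cover layer.
```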
Moreover, Blu-ray's base speed was significantly boosted, with the older DVD standard offering 11 Mbps, but the new format raising the bar to 36 Mbps. Better quality video was also delivered thanks to Blu-ray's adoption of the AVC (H.264) codec. It retained MPEG-2 compatibility, but AVC facilitated more efficient HD video file playback at manageable bitrates.
Blu-ray's success wasn't inevitable, as a rival faction of electronics companies and movie studios would ignite a high‑profile format war. Much like the VHS vs Betamax videotape format war, there could only be one winner, and Sony was on the winning side this time, being one of the biggest backers of Blu-ray. Console gamers of the late noughties became well aware of this format war, as it would also divide the PlayStation and Xbox camps.
Blu-ray's superior capacity, default console integration, copy protection, and broader studio support would mean that this format war was quite brief, with Toshiba conceding in early 2008.
Since its introduction, Blu-ray has been iterated and improved with 4K Blu-ray packing HEVC, HDR and more features into the format starting about a decade ago.
Its bitrates are still considerably better than the best mainstream streaming quality available, so it remains a cherished format among home cinema enthusiasts. Thus, Blu-ray media still clings onto some relevance in 2026, with collectors and bandwidth‑limited regions keeping the format alive. It is also still available as the physical media distribution format for some modern consoles.
Its days look numbered, though, judging by industry trends. Console makers are pulling away from physical media, including Blu-ray distribution. We also saw news of Sony ending recordable Blu-ray production in 2025, and LG ending production of Blu-ray players in late 2024. Changes like this put several sturdy nails in this optical disc format's coffin.
It seems like an age since PCs last came with Blu-ray (or any optical) disc drives built in. That excludes Japan, for some reason, where we recently noticed a surge in optical drive demand (including Blu-ray-compatible models) coinciding with the end of support for Windows 10.
Strengthening asphalt roads with a unique green ingredient: Algae:
Snow and ice can damage paved surfaces, leading to frost heaves and potholes. These become potential hazards for drivers and pedestrians and are expensive to fix. Now, researchers propose in ACS Sustainable Chemistry & Engineering a figurative and literal green solution to improve the durability of roads and sidewalks: an algae-derived asphalt binder. For temperatures below freezing, results indicated that the algae binder reduced asphalt cracks when compared to a conventional, petroleum-based binder.
"Algae-derived compounds can improve moisture resistance, flexibility and self-healing behavior in asphalt, potentially extending pavement life and reducing maintenance costs," says research team lead Elham Fini. "In the long term, algae asphalt could help create more sustainable, resilient and environmentally responsive roadways."
Currently, asphalt is held together with bitumen: a thick, glue-like substance made from crude oil. Bitumen binds the sand and rocks that make up paved surfaces and allows the asphalt to expand and contract in hot and cold conditions, respectively. However, when the temperature rapidly drops below freezing, the binder becomes brittle and can crack, leading to roadway damage. To improve asphalt's flexibility and durability at subzero temperatures, Fini and colleagues developed a sustainable and rubbery binder from algae oil.
Fini's previous studies showed that oil extracted from algae can make a bitumen-like product that is particularly durable at low temperatures. Continuing this work, Fini and colleagues used computer models to evaluate oils from four algae species for their abilities to produce bitumen-like products that mixed well with asphalt solids and retained functionality in freezing temps. Of the four algal species, oil from the freshwater green microalga Haematococcus pluvialis appeared to impart the most resistance to permanent deformation under simulated traffic-induced stress, as well as enhanced resistance to moisture-induced damage.
In laboratory demonstrations that mimicked road traffic and freezing cycles, H. pluvialis algae-asphalt samples created by the researchers showed up to a 70% improvement in deformation recovery compared to pavement made with a crude oil-based binder. In addition to strengthening roads, the team estimates that substituting 1% of the petroleum-based binder with algae-based binder would cut net carbon emissions from asphalt by 4.5%. At around 22% algae-based binder, asphalt could potentially become carbon neutral. The researchers say this approach paves the way toward high-performance, cost-effective and sustainable pavement infrastructure.
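As a quick consistency check of those figures, assuming the emissions saving scales linearly with the substitution rate (our simplification, not a claim from the paper):

```python
# 1% algae binder is quoted as cutting net CO2 emissions from asphalt by 4.5%.
saving_per_percent = 4.5 / 1.0
neutral_point = 100 / saving_per_percent
print(f"carbon-neutral at roughly {neutral_point:.0f}% algae binder")  # ~22%, matching the article
```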
Journal Reference: Mohammadjavad Kazemi, Farideh Pahlavan, Andrew J. Schmidt, et al., ACS Sustainable Chemistry & Engineering 2025 13 (45), 19496-19510 https://doi.org/10.1021/acssuschemeng.5c03860
https://phys.org/news/2025-12-venus-cloud-highlights-combining-polarization.html
A research team from the Hefei Institutes of Physical Science of the Chinese Academy of Sciences has advanced the characterization of Venusian clouds and haze and the evaluation of how well their microphysical properties can be retrieved from observations.
The findings are published in Icarus.
Venus's thick cloud and haze layers play a vital role in regulating the planet's energy balance and atmospheric circulation. Accurately retrieving key microphysical parameters from intensity and polarization observations is essential for current and future Venus exploration missions.
In this study, the researchers built a layered atmospheric model of Venus's clouds and haze and evaluated the sensitivity of intensity and polarization observations to key aerosol parameters. They found that intensity measurements are more responsive to the lower cloud layers, while polarization observations—particularly at small phase angles—provide better constraints on the particle size and refractive index of the upper clouds and haze.
Using the Degrees of Freedom for Signal index, the team showed that combining intensity and polarization measurements significantly increases the amount of retrievable information. Adding near-infrared bands from the VenSpec-H instrument further enhances the retrieval of upper cloud parameters and refractive index.
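The Degrees of Freedom for Signal (DFS) metric mentioned here comes from optimal-estimation retrieval theory: it is the trace of the averaging kernel built from the measurement Jacobian and the prior and measurement-error covariances. The sketch below shows the general calculation on made-up toy matrices, not the paper's actual forward model or instrument noise.

```python
# DFS = trace of the averaging kernel A = (K^T Se^-1 K + Sa^-1)^-1 K^T Se^-1 K
import numpy as np

def dfs(K, S_a, S_e):
    """K: Jacobian (n_obs x n_state); S_a: prior covariance; S_e: measurement covariance."""
    S_e_inv = np.linalg.inv(S_e)
    S_a_inv = np.linalg.inv(S_a)
    gain_term = K.T @ S_e_inv @ K
    A = np.linalg.solve(gain_term + S_a_inv, gain_term)   # averaging kernel
    return np.trace(A)

rng = np.random.default_rng(0)
n_state, n_int, n_pol = 4, 6, 6                  # toy sizes (hypothetical)
K_int = rng.normal(size=(n_int, n_state))        # intensity-channel Jacobian (made up)
K_pol = rng.normal(size=(n_pol, n_state))        # polarization-channel Jacobian (made up)
S_a = np.eye(n_state)

print("intensity only   :", round(dfs(K_int, S_a, np.eye(n_int) * 0.01), 2))
print("intensity + pol. :", round(dfs(np.vstack([K_int, K_pol]), S_a,
                                      np.eye(n_int + n_pol) * 0.01), 2))
# Adding the polarization rows can only increase the trace of the averaging
# kernel, i.e. the amount of independent information about the state.
```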
The study also indicates that intensity and polarization channels should be properly arranged across the 650 nm–2.5 μm aerosol window bands, and that high measurement accuracy is crucial for reducing retrieval uncertainties.
This work provides new insights for studying atmospheric aerosols on Venus and other terrestrial planets, according to the team.
More information: Yiqi Li et al, Information content analysis of venus clouds and haze based on polarization bands of the SPICAV IR and Venspec-H instruments, Icarus (2026). DOI: 10.1016/j.icarus.2025.116905
New study shows that everyday conversations can delay eye movements, essential for safe driving:
Talking while driving is widely recognized as a major source of distraction, but the specific ways conversation interferes with the earliest stages of visual processing have remained largely unclear. While previous research has shown that cognitive distraction can slow braking or reduce situational awareness, the question of whether talking disrupts the foundational gaze processes that precede physical reactions has remained unanswered.
Now, researchers from Fujita Health University have demonstrated that talking imposes cognitive load strong enough to delay essential eye-movement responses, potentially affecting the fast visual assessments required for safe driving. A study led by Associate Professor Shintaro Uehara and the team, including Mr. Takuya Suzuki and Professor Takaji Suzuki, published online on October 6, 2025, in PLOS ONE, examined how talking alters the temporal dynamics of gaze behavior.
Gaze behavior is especially significant because approximately 90% of the information used for driving is acquired visually. Any delay in initiating or completing eye movements can cascade into slower recognition of hazards, reduced accuracy of visual scanning, and delayed motor responses. "We investigated whether the impact of talking-related cognitive load on gaze behavior varies depending on the direction of eye movement," explains Dr. Uehara.
[...] The authors note that their findings do not imply that talking is the sole or dominant cause of slowed physical reactions behind the wheel. Driving performance is influenced by multiple cognitive and perceptual factors, including inattentional blindness, divided attention, and the broader interference that occurs when the brain is forced to manage two demanding tasks at once. Even so, the study demonstrates that talking introduces delays at the earliest stage of visual processing before recognition, decision-making, or physical action, which means it may quietly undermine driving performance in ways that are not immediately obvious to drivers themselves. "These results indicate that the cognitive demands associated with talking interfere with the neural mechanisms responsible for initiating and controlling eye movements, which represent the critical first stage of visuomotor processing during driving," concludes Dr. Uehara.
These insights carry meaningful implications for public safety. By understanding that the cognitive effort involved in conversation can degrade gaze accuracy and timing, drivers may become more mindful about when and how they choose to talk while driving. Over time, this knowledge could support safer driving behaviors, inform driver-training frameworks, inspire improvements in vehicle interface design, and guide policymakers in shaping future recommendations around cognitive distraction.
Journal Reference: Suzuki T, Suzuki T, Uehara S (2025) Talking-associated cognitive loads degrade the quality of gaze behavior. PLoS One 20(10): e0333586. https://doi.org/10.1371/journal.pone.0333586
Colorectal cancer breaks the usual immune rules, with certain regulatory T cells linked to improved survival.
In many solid tumors, having a large number of regulatory T (Treg) cells is linked to worse outcomes. These cells can weaken the immune system's ability to recognize and attack cancer.
Colorectal cancer is an unusual exception. In this disease, tumors packed with Treg cells are actually tied to better survival, even though researchers have not fully understood the reason.
A new study from the Sloan Kettering Institute at Memorial Sloan Kettering Cancer Center (MSK) sheds light on this contradiction. The findings suggest a path toward improving immunotherapy for most people with colorectal cancer, and they may also apply to cancers that develop in tissues such as the skin and the lining of the stomach, mouth, and throat.
According to results published December 15 in Immunity, a leading immunology journal, the key factor is not simply how many Treg cells are present, but which kinds of Treg cells are in the tumor.
"Instead of the regulatory T cells promoting tumor growth, as they do in most cancers, in colorectal cancer we discovered there are actually two distinct subtypes of Treg cells that play opposing roles — one restrains tumor growth, while the other fuels it," says Alexander Rudensky, PhD, co-senior author of the study and chair of the Immunology Program at MSK. "It's these beneficial Treg cells that make the difference, and this underscores the need for selective approaches."
Colorectal cancer is the second leading cause of cancer death when numbers for men and women are combined, according to the American Cancer Society.
For this study, the researchers focused on a type of colorectal cancer that accounts for 80% to 85% of all colorectal cancers — microsatellite stable (MSS) with proficient mismatch repair (MMRp), meaning the tumors' DNA is relatively stable. These cancers are largely resistant to checkpoint inhibitor immunotherapies.
Here the team employed an MSK-developed mouse model that accurately recreates the common mutations, behaviors, and immune cell composition of human colorectal cancer. They found that the regulatory T cells associated with the cancer are split between two types: Cells that make a signaling molecule (cytokine) called interleukin-10 (IL-10) and cells that don't.
Through a series of sophisticated experiments that selectively eliminated each type of cell, the researchers discovered:
- IL-10-positive Tregs help hold tumor growth in check. They work by dampening the activity of a different type of T cell, called Th17 cells — these produce interleukin 17 (IL-17), which acts as a growth factor for the tumor. They're more abundant in healthy tissue adjacent to a tumor.
- When IL-10-positive cells were removed, tumor growth accelerated.
- IL-10-negative Tregs, on the other hand, suppress immune defenders — especially CD8+ T cells with strong anti-cancer capabilities. This subtype of Tregs is largely found within the tumor itself.
- When IL-10-negative cells were removed, the tumors shrank.
The researchers validated their laboratory findings using samples from people with colorectal tumors. Here, too, they found two distinct populations of IL-10-positive and IL-10-negative cells. And, in an analysis of outcomes in more than 100 colorectal cancer patients, those with more of the "good" IL-10-positive Tregs lived longer, while those with more "bad" IL-10-negative cells fared worse.
"This research shows how important these positive cells are," Dr. Huang says. "And it highlights the need to develop therapies that can selectively eliminate the harmful Tregs while preserving the helpful ones."
The research does point to a potential opportunity to improve outcomes for the majority of colorectal cancer patients, says Dr. Rudensky, who is also a Howard Hughes Medical Institute Investigator.
The IL-10-negative cells — the immunosuppressive ones primarily found in tumors — express high levels of a protein called CCR8, the team found.
Previous research from Dr. Rudensky's lab found high levels of CCR8 displayed by tumor Treg cells in breast cancer and many other types of human cancer. Those findings suggested that harmful Treg cells might be selectively targeted with antibodies — depleting them and opening the tumor up to attack by other immune cells, while sparing helpful Treg cells.
"This idea of using CCR8-depleting antibodies, which was pioneered at MSK, is the main target of global efforts to bring regulatory T cell–based immunotherapy to the clinic," Dr. Rudensky says.
Numerous clinical trials are underway at MSK and elsewhere to test the approach as a standalone treatment and in combination with other immunotherapies.
The new study adds evidence of the strategy's potential against colorectal cancer and perhaps other cancers as well.
Looking beyond colorectal cancer, the researchers searched for similar divisions between IL-10-positive and IL-10-negative cells in a large dataset of T cells spanning 16 different cancer types — and found them in several other cancers that affect the skin and the lining of the mouth, throat, and stomach.
"What these tissues have in common is that immune cells play a critical role in constantly defending and repairing them as they're exposed to microbes and environmental stresses," says Dr. Mitra, who led the complex data analysis. Dr. Mitra is co-mentored by Dr. Leslie and Dr. Rudensky.
Approaches that selectively target IL-10-negative cells in colorectal cancer might also be effective against these other barrier-tissue cancers, the researchers say.
Interestingly, a different pattern emerged when the team looked at colorectal cancer that had spread to the liver.
Here, IL-10-negative cells far outnumbered their positive, helpful counterparts. As a result, in contrast to primary tumors, removing all Treg cells led to shrinkage of metastasized tumors. This finding points to a need for tissue- and context-specific therapeutic strategies in colorectal cancer, the researchers say.
Reference: “Opposing functions of distinct regulatory T cell subsets in colorectal cancer” by Xiao Huang, Dan Feng, Sneha Mitra, Emma S. Andretta, Nima B. Hooshdaran, Aazam P. Ghelani, Eric Y. Wang, Joe N. Frost, Victoria R. Lawless, Aparna Vancheswaran, Qingwen Jiang, Cheryl Mai, Karuna Ganesh, Christina S. Leslie and Alexander Y. Rudensky, 15 December 2025, Immunity.
DOI: 10.1016/j.immuni.2025.11.014
Processors are built by multi-billion-dollar corporations using some of the most cutting-edge technologies known to man. But even with all their expertise, investment, and know-how, sometimes these CPU makers drop the ball. Some CPUs have just been poor performers for the money or their generation, while others easily overheated or drew too much power.
Some CPUs were so bad that they set their companies back generations, taking years to recover.
But years on from their release and the fallout, we no longer need to feel let down, disappointed, or ripped off by these lame-duck processors. We can enjoy them for the catastrophic failures they were, and hope the companies involved learned a valuable lesson.
Here are some of the worst CPUs ever made.
Note: Plenty of people will bring up the Pentium FDIV bug here, but the reason we didn't include it is simple: Despite being an enormous marketing failure for Intel and a considerable expense, the actual bug was tiny. It affected no one who wasn't already doing scientific computing, and, in technical terms, the scale and scope of the problem were never estimated to be much of anything. The incident is recalled today more for the disastrous way Intel handled it than for any overarching problem in the Pentium microarchitecture.
Intel Itanium
Intel's Itanium was a radical attempt to push hardware complexity into software optimizations. All the work to determine which instructions to execute in parallel was handled by the compiler before the CPU ran a byte of code.
Analysts predicted that Itanium would conquer the world. It didn't. Compilers were unable to extract necessary performance, and the chip was radically incompatible with everything that had come before it. Once expected to replace x86 entirely and change the world, Itanium limped along for years with a niche market and precious little else.
Itanium's failure was particularly damaging because it represented the death of Intel's entire 64-bit strategy (at the time). Intel had originally planned to move the entire market to IA64 rather than extend x86. AMD's x86-64 (AMD64) proved quite popular, partly because Intel had no luck bringing a competitive Itanium to market. Not many CPUs can claim to have failed so badly that they killed their manufacturer's plans for an entire instruction set.
Intel Pentium 4 (Prescott)
Prescott doubled down on the Pentium 4's already-long pipeline, extending it to nearly 40 stages, while Intel simultaneously shrank it down to a 90nm die. This was a mistake.
The new chip was crippled by pipeline stalls that even its new branch prediction unit couldn't prevent, and parasitic leakage drove high power consumption, preventing the chip from hitting the clocks it needed to be successful. Prescott and its dual-core sibling, Smithfield, are the weakest desktop products Intel ever fielded relative to its competition at the time. Intel set revenue records with the chip, but its reputation took a beating.
Its reputation for running rather toasty would be a recurring issue for Intel in the future, too.
AMD Bulldozer
AMD's Bulldozer was supposed to steal a march on Intel by cleverly sharing certain chip capabilities to improve efficiency and reduce die size. AMD wanted a smaller core with higher clocks to offset any penalties from the shared design. What it got was a disaster.
Bulldozer couldn't hit its target clocks, drew too much power, and its performance was a fraction of what it needed to be. It's rare that a CPU is so bad that it nearly kills the company that invented it. Bulldozer nearly did. AMD did penance for Bulldozer by continuing to use it. Despite the core's flaws, it formed the backbone of AMD's CPU family for the next six years.
Fortunately, during the intervening years, AMD went back to the drawing board, and in 2017, Ryzen was born. And the rest is history.
Cyrix 6x86
Cyrix was one of the x86 manufacturers that didn't survive the late 1990s. (VIA now holds its x86 license.) Chips like the 6x86 were a major part of the reason why.
Cyrix has the dubious distinction of being the reason why some games and applications carry compatibility warnings. The 6x86 was significantly faster than Intel's Pentium in integer code, but its FPU was abysmal, and its chips weren't particularly stable when paired with Socket 7 motherboards. If you were a gamer in the late 1990s, you wanted an Intel CPU but could settle for AMD. The 6x86 was one of the terrible "everybody else" chips you didn't want in your Christmas stocking.
The 6x86 failed because it couldn't differentiate itself from Intel or AMD in a way that made sense or gave Cyrix an effective niche of its own. The company tried to develop a unique product and wound up earning itself a second entry on this list instead.
Cyrix MediaGX
The Cyrix MediaGX was the first attempt to build an integrated SoC processor for the desktop, with graphics, CPU, PCI bus, and memory controller all on one die. Unfortunately, this happened in 1997, which means all those components were really terrible.
Motherboard compatibility was incredibly limited, the underlying CPU architecture (Cyrix 5x86) was equivalent to Intel's 80486, and the CPU couldn't connect to an off-die L2 cache (the only kind of L2 cache there was, back then). Chips like the Cyrix 6x86 could at least claim to compete with Intel in business applications. The MediaGX couldn't compete with a dead manatee.
The entry for the MediaGX on Wikipedia includes the sentence "Whether this processor belongs in the fourth or fifth generation of x86 processors can be considered a matter of debate." The 5th generation of x86 CPUs is the Pentium generation, while the 4th generation refers to 80486 CPUs. The MediaGX shipped in 1997 with a CPU core stuck somewhere between 1989 and 1992, at a time when people really did replace their PCs every 2-3 years if they wanted to stay on the cutting edge.
It also notes, "The graphics, sound, and PCI bus ran at the same speed as the processor clock also due to tight integration. This made the processor appear much slower than its actual rated speed." When your 486-class CPU is being choked by its own PCI bus, you know you've got a problem.
Texas Instruments TMS9900
The TMS9900 is a noteworthy failure for one enormous reason: When IBM was looking for a chip to power the original IBM PC, it had two basic choices to hit its own ship date: the TMS9900 and the Intel 8086/8088 (the Motorola 68K was under development but wasn't ready in time).
The TMS9900 only had 16 bits of address space, while the 8086 had 20. That made the difference between addressing just 64KB of RAM and a full 1MB. TI also neglected to develop a 16-bit peripheral chip, which left the CPU stuck with performance-crippling 8-bit peripherals. The TMS9900 also had no on-chip general-purpose registers; all 16 of its 16-bit registers were stored in main memory. TI had trouble securing partners for second-sourcing, and when IBM had to pick, it picked Intel.
Good choice.
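For a sense of the gap, here is the arithmetic behind those address spaces, plus the segment:offset trick the 8086 used to reach 20 bits with 16-bit registers (a small illustrative sketch, not tied to the article):

```python
# 16 address bits reach 64 KB; 20 bits reach 1 MB.
tms9900_bytes = 2 ** 16
i8086_bytes = 2 ** 20
print(f"TMS9900: {tms9900_bytes // 1024} KB, 8086: {i8086_bytes // (1024 * 1024)} MB")

def i8086_physical(segment, offset):
    # 8086 physical address = segment * 16 + offset, wrapped to 20 bits
    return ((segment << 4) + offset) & 0xFFFFF

print(hex(i8086_physical(0xF000, 0xFFF0)))   # 0xffff0, the 8086 reset vector
```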
Intel Core i9-14900K
It's rare to call a top chip of its generation a "bad" CPU, and even rarer to denigrate a company's current fastest gaming CPU, but the Intel Core i9-14900K deserves its place on this list. Although it is fantastically fast in gaming and some productivity workloads, and can compete with some of the best chips available at the end of 2025, it is still a bad CPU for a range of key reasons.
For starters, it barely moved the needle. The 14900K is basically an overclocked 13900K (or 13900KS if we're considering special editions), which wasn't much different from the 12900K that came before it. The 14900K was the poster child for Intel's lack of innovation, which is saying a lot considering how long Intel languished on its 14nm node.
The 14900K also pulled way too much power and got exceptionally hot. I had to underclock it when reviewing it just to get it to stop thermal throttling—and that was on a 360mm AIO cooler, too.
The 14th-generation lineup was plagued with bugs and microcode issues, too, causing crashes and stability problems that required repeated BIOS updates to fix.
The real problem was that cheaper chips in the lineup offered better value. The 14600K is almost as fast in gaming despite being far cheaper, easier to cool, easier to overclock, and less prone to crashes. The rest of the range wasn't too exciting, though the 14100 remains a stellar gaming CPU under $100 today.
The 14900K was the most stopgap of stopgap flagships. It was a capstone on years of Intel stagnation, and a weird pinnacle in performance at the same time. It's not as big a dud as the other chips on this list, but it did nothing to help Intel's modern reputation, and years later, it's still trying to course-correct.
Dishonorable Mention: Qualcomm Snapdragon 810
The Snapdragon 810 was Qualcomm's first attempt to build a big.LITTLE CPU and was based on TSMC's short-lived 20nm process. The SoC was easily Qualcomm's least-loved high-end chip in recent memory—Samsung skipped it altogether, and other companies ran into serious problems with the device.
Qualcomm claimed that the issues with the chip were caused by poor OEM power management, but whether the problem was related to TSMC's 20nm process, problems with Qualcomm's implementation, or OEM optimization, the result was the same: A hot-running chip that won precious few top-tier designs and is missed by no one.
Dishonorable Mention: IBM PowerPC G5
Apple's partnership with IBM on the PowerPC 970 (marketed by Apple as the G5) was supposed to be a turning point for the company. When it announced the first G5 products, Apple promised to launch a 3GHz chip within a year. But IBM failed to deliver components that could hit these clocks at reasonable power consumption, and the G5 was incapable of replacing the G4 in laptops due to high power draw.
Apple was forced to move to Intel and x86 in order to field competitive laptops and improve its desktop performance. The G5 wasn't a terrible CPU, but IBM wasn't able to evolve the chip to compete with Intel.
Ironically, years later it would be Intel's inability to compete with Arm that led Apple to build its own silicon in the M-series.
Dishonorable Mention: Pentium III 1.13GHz
The Coppermine Pentium III was a fine architecture. But during the race to 1GHz against AMD, Intel was desperate to maintain a performance lead, even as shipments of its own high-end parts slipped further and further out (at one point, AMD was estimated to have a 12:1 advantage over Intel when it came to actually shipping 1GHz systems).
In a final bid to regain the performance crown, Intel tried to push the 180nm Coppermine P3 up to 1.13GHz. It failed. The chips were fundamentally unstable, and Intel recalled the entire batch.
Dishonorable Mention: Cell Broadband Engine
We'll take some heat for this one, but we'd toss the Cell Broadband Engine on this pile as well. Cell is an excellent example of how a chip can be phenomenally good in theory, yet nearly impossible to leverage in practice.
Sony may have used it as the general processor for the PS3, but Cell was far better at multimedia and vector processing than it ever was at general-purpose workloads (its design dates to a time when Sony expected to handle both CPU and GPU workloads with the same processor architecture). It's quite difficult to multi-thread the CPU to take advantage of its SPEs (Synergistic Processing Elements), and it bears little resemblance to any other architecture.
It did end up as part of a linked-PS3 supercomputer built by the Department of Defense, which shows just how capable these chips could be. But that's hardly a daily-driver use case.
What's the Worst CPU Ever?
It's surprisingly difficult to pick an absolute worst CPU. Every chip on this list was bad in its own way, at its own moment in time. Some of them would have been amazing if they'd been released just a year earlier, or if other technologies had kept pace.
Some of them simply failed to meet overinflated expectations (Itanium). Others nearly killed the company that built them (Bulldozer). Do we judge Prescott on its heat and performance (bad, in both cases) or on the revenue records Intel smashed with it?
Evaluated in the broadest possible sense of "worst," I think one chip ultimately stands feet and ankles below the rest: the Cyrix MediaGX. Even then, it is impossible not to admire the forward-thinking ideas behind this CPU. Cyrix was the first company to build what we would now call an SoC, with PCI, audio, video, and the memory controller all on the same chip. More than 10 years before Intel or AMD would ship their own CPU+GPU configurations, Cyrix was out there, blazing a trail.
It's unfortunate that the trail led straight into what the locals affectionately call "Alligator Swamp."
Designed for the extreme budget market, the Cyrix MediaGX disappointed just about anyone who ever came in contact with it. Performance was poor: a Cyrix MediaGX 333 had 95% of the integer performance and 76% of the FPU performance of a Pentium 233 MMX, a CPU running at just 70% of its clock speed. The integrated graphics had no dedicated video memory at all, and there was no option to add an off-die L2 cache, either.
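To put those numbers in perspective, here is a rough back-of-the-envelope calculation (my own arithmetic, not from the article, and it assumes performance scales roughly linearly with clock speed): normalized per clock cycle, the MediaGX managed only about two-thirds of the Pentium's integer throughput and roughly half of its FPU throughput.

```python
# Back-of-the-envelope per-clock comparison of the figures quoted above.
# Assumption (mine, not the article's): performance scales linearly with clock.
mediagx_clock, pentium_clock = 333, 233      # MHz
int_ratio, fpu_ratio = 0.95, 0.76            # MediaGX 333 vs. Pentium 233 MMX

clock_ratio = pentium_clock / mediagx_clock  # ~0.70, as stated in the article
per_clock_int = int_ratio * clock_ratio      # ~0.66x the Pentium's integer work per cycle
per_clock_fpu = fpu_ratio * clock_ratio      # ~0.53x the Pentium's FPU work per cycle

print(f"Integer throughput per clock: {per_clock_int:.2f}x the Pentium 233 MMX")
print(f"FPU throughput per clock:     {per_clock_fpu:.2f}x the Pentium 233 MMX")
```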
If you found this under your tree, you cried. If you had to use this for work, you cried. If you needed to use a Cyrix MediaGX laptop to upload a program to sabotage the alien ship that was going to destroy all of humanity, you died.
All in all, not a great chip. Others were bad, sure, but none embodies the word "worst" quite like the Cyrix MediaGX.
You might not agree with these choices. If not, tell us your own favourite "worst" CPU and why you think it deserves a mention. Does anybody have information on the worst CPUs produced in Russia or China?
Parkinson's is the canary in the coal mine warning us that our environment is sick:
Parkinson's disease occurs worldwide, affects people of all ages and backgrounds, has an enormous societal impact, and is rising at an alarming rate. According to neurologist Bas Bloem, Parkinson's literally meets all the criteria of a pandemic, except that the disease is not infectious. In a recent publication in The Lancet Neurology, Bloem and a group of internationally recognised scientists place this development in historical perspective, beginning with James Parkinson, who first described the disease in 1817.
This historical view is needed, Bloem says, because the search for the causes of Parkinson's is anything but new. As early as the 1990s, researchers and pesticide manufacturers knew that the pesticide Paraquat was linked to Parkinson's—yet the substance is still used in parts of the world (for example, the United States). In the Netherlands, Paraquat has fortunately been banned since 2007. Two other environmental factors, dry-cleaning chemicals and air pollution, also occur on a large scale. This strengthens Bloem's conviction that this largely human-made disease can also be reduced through human intervention.
As a young medical student, Bloem found himself in the midst of groundbreaking research in California, where he worked at the age of 21. "I did not yet see the enormous impact of the research being carried out there," he recalls. One of the landmark studies of that era was conducted by J. William Langston in 1983. He investigated seven young drug users who suddenly developed symptoms of advanced Parkinson's after using a contaminated heroin variant.
It turned out that this so-called designer drug contained the substance MPTP, which in the body is converted into a compound that closely resembles the pesticide Paraquat. The study demonstrated that an external chemical substance could cause Parkinson's disease. Whereas the heroin users had received a high dose all at once, most people in daily life are exposed to small amounts over long periods, with ultimately a similar effect.
During the same period, and at the same Parkinson Institute in Sunnyvale, California, researcher Carlie Tanner also carried out key work. Bloem explains: "Her hypothesis was simple: if Parkinson's is hereditary, then identical twins who share the same DNA should develop it far more often than fraternal twins, as we see for conditions such as diabetes." But this was not the case.
[...] These insights became the starting point for new research into pesticides. "When researchers exposed laboratory animals to these substances, they developed Parkinson-like symptoms, and damage occurred precisely in the substantia nigra, the area of the brain affected in Parkinson's," Bloem says. He regards this as convincing evidence of a causal link.
A third important study comes from Canadian neurologist André Barbeau, who in 1987 investigated the role of environmental factors in the province of Quebec. If the disease were evenly distributed across the region, this would suggest a hereditary or random cause. But this was not the case: Parkinson's occurred in clear clusters.
These clusters were located precisely in areas where high concentrations of pesticides were found in groundwater, another strong indication that environmental factors play a causal role.
Discussions about pesticides evoke strong emotions, Bloem notes. "People are frightened, farmers feel attacked, and industry attempts to sow doubt. But farmers or horticulturalists are not the problem. They work with what they are permitted to use. The responsibility lies with the systems that allow such substances."
He advocates for policies based on the precautionary principle. "The burden of proof now lies with scientists and citizens, who must demonstrate that a substance is harmful. But doubt should benefit humans, not chemical products."
"The most hopeful message," Bloem says, "is that Parkinson's appears to be at least partly—perhaps even largely—preventable. That is revolutionary: a brain disease that we can prevent through better environmental policy." Yet hardly any funding goes into prevention. "In the US, only 2 percent of Parkinson's research focuses on prevention. Meanwhile, billions are spent on treatments instead of turning off the tap."
[...] His message is clear: "Parkinson's is not an unavoidable fate. It is the canary in the coal mine warning us that our environment is sick and that toxic substances are circulating. If we act now—by reducing toxins, improving air quality, and enforcing stricter regulations—we can reverse this pandemic. And in doing so, we will likely reduce other health risks such as dementia and cancer."
The U.S. Department of Commerce has issued a permit to Taiwan Semiconductor Manufacturing Company (TSMC) to import U.S.-made chip-making equipment into China for its Nanjing fab. According to Reuters, Samsung and SK hynix were also given import licenses to bring specialized equipment containing American-made components into their Chinese factories. These three chipmakers used to enjoy validated end-user status, meaning they could freely import restricted items into China without asking for individual licenses. However, that privilege expired at the end of 2025, meaning they now have to seek annual approval from Washington, D.C., to continue receiving advanced tools.
"The U.S. Department of Commerce has granted TSMC Nanjing an annual export license that allows U.S. export-controlled items to be supplied to TSMC Nanjing without the need for individual vendor licenses," the company said in a statement to Reuters. It also said that this "ensures uninterrupted fab operations and product deliveries." This move to require annual licenses for the Chinese factories of these chipmakers is a part of the White House's effort to keep advanced chipmaking tools out of China.
Beijing has been working hard to achieve "semiconductor sovereignty," just as the U.S. has been trying hard to prevent it from acquiring the latest chips. On top of that, ASML, the only manufacturer of cutting-edge EUV lithography tools, has been barred from exporting its most advanced machines to China and from servicing those already installed there. Because of this, we've seen reports that the country is covertly working on reverse-engineering EUV lithography tools, and that it has even come up with a "Frankenstein" EUV chipmaking tool, though it has yet to produce a single chip with it.
The U.S. does not allow EUV lithography machines containing U.S. technology to be exported to China, even to companies like TSMC and Samsung that have Chinese factories. This means these fabs are limited to mature nodes of 16nm and larger. The revocation of validated end-user status for these companies' China-based fabs shows that Washington is tightening its grip on chipmaking machines, even older DUV tools, to make it harder for Beijing to develop its own technology.
Despite this, the East Asian nation is pushing hard to develop its own equipment. The central government has even told its chipmakers to use homegrown tools for half of new capacity. And while the country is still years behind cutting-edge tech from ASML and other Western companies, it's slowly taking steps in the right direction.
When an associate of mine checked their personal email account on their work computer, they opened a message from a friend purporting to be an invitation to a holiday party, with a link that supposedly led to an RSVP page. In fact, the link pointed to a malicious MSI file hosted on Cloudflare's r2.dev service. Not knowing what an MSI file was, the associate ran it and installed an instance of ConnectWise's ScreenConnect software operated by an attacker. The attacker promptly took control of the associate's computer for a couple of minutes before the associate wisely powered it off. Sure, the obvious lessons are that people shouldn't click on suspicious links in emails they weren't expecting, even if they come from a friend or trusted colleague, and that they really shouldn't use work computers for personal tasks and vice versa. But this incident also raised troubling questions about how some large companies, like Cloudflare, apply double standards to security.
The friend's computer had been compromised by the same attacker, who accessed their Gmail account and apparently sent a single phishing email with the friend's entire contact list as Bcc recipients. That was probably a large number of contacts, and it really should have been automatically flagged by Google as potential spam. A reasonable approach might be to hold such an email until the sender confirms they really intended to Bcc that many people: the sender would get a notification on their phone asking whether they meant to send a mass email, which they could either confirm or reject. Google is keen to push multi-factor authentication and to require that users associate phone numbers with their accounts, so this seems like a rational safeguard for outbound emails that ought to be flagged as suspicious; a rough sketch of such a check follows.
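As a concrete illustration of that idea, here is a minimal sketch of an outbound-mail check. It is purely hypothetical: the Bcc threshold, the list of low-reputation link hosts, and the hold-for-confirmation flow are all assumptions of mine, not any real Gmail feature or API.

```python
# Hypothetical sketch of an outbound-mail "mass Bcc" check.
# The threshold, the low-reputation host list, and the hold/confirm flow are
# assumptions for illustration only -- not a real Gmail feature or API.
import re
from dataclasses import dataclass, field
from typing import List

BCC_THRESHOLD = 25                    # assumed cutoff for a "mass" Bcc
SUSPICIOUS_HOSTS = ("r2.dev",)        # assumed low-reputation link hosts

@dataclass
class OutboundMessage:
    sender: str
    bcc: List[str] = field(default_factory=list)
    body: str = ""

def should_hold_for_confirmation(msg: OutboundMessage) -> bool:
    """Return True if the message should be held until the sender confirms it."""
    mass_bcc = len(msg.bcc) >= BCC_THRESHOLD
    link_hosts = re.findall(r"https?://([^/\s]+)", msg.body)
    risky_link = any(host.endswith(SUSPICIOUS_HOSTS) for host in link_hosts)
    # Hold only when both signals fire: a mass Bcc *and* a link to a dubious host.
    return mass_bcc and risky_link

if __name__ == "__main__":
    msg = OutboundMessage(
        sender="friend@example.com",
        bcc=[f"contact{i}@example.com" for i in range(300)],
        body="Holiday party! RSVP here: https://pub-abc123.r2.dev/invite.msi",
    )
    print(should_hold_for_confirmation(msg))  # True -> prompt the sender to confirm
```

In practice a provider would weigh many more signals (sender history, link reputation, attachment types), but even a crude check like this would have held the email described above for confirmation.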
But I'm more frustrated with Cloudflare, which acts as a gatekeeper for many websites, arbitrarily blocking browsers and locking people out of sites, especially for the dastardly crime of using a non-Chromium browser like Palemoon. The malicious file was hosted on r2.dev, which is part of Cloudflare's cloud-based object storage service. Although the file itself might not trip malware scanners, because ScreenConnect has legitimate purposes, R2 storage buckets and Cloudflare's other hosting services are often used to host malware and phishing content, probably because Cloudflare has a free tier and is easy to use, which makes their services a convenient tool for attackers to abuse. The logical next step was to report the malicious content to Cloudflare so they would take it down. They encourage reporting abuse through an online form. The first time I accessed the abuse reporting form, it was blank. I reloaded the page, and Cloudflare informed me that I had been blocked from accessing their abuse reporting page. The irony is that Cloudflare arbitrarily blocked me, as if I were malicious, preventing me from reporting actual malicious content hosted on their platform.
The problem is that large companies like Google and Cloudflare have positioned themselves as gatekeepers of the internet, demanding that users conform to their security standards while not taking reasonable steps to prevent attacks originating from their own platforms. In Google's case, reCAPTCHA is mostly security theatre, making users jump through hoops to prove they're not malicious while harvesting data that can be used to track them through browser fingerprinting. As for Cloudflare, they block browsers with low market share, supposedly in the name of stopping malicious traffic. The hypocrisy is blatant when Cloudflare's arbitrary and opaque blocking prevents users from reporting actual malicious content hosted by Cloudflare itself. Unfortunately, this doesn't seem particularly uncommon.
It's becoming increasingly difficult not to see companies like Google and Cloudflare as bad actors. In Cloudflare's case, I finally sent complaints to their abuse@ and noc@ email addresses, but I expect little will be done to actually address the problem. How do we demand accountability from companies that act as gatekeepers of the internet and treat ordinary users like potential criminals while doing little to prevent their own platforms from becoming vectors for abuse? In this case, is the best option to complain to a government agency such as the state attorney general, stating that the malware may have caused harm and that Cloudflare has made it next to impossible to get the content taken down?