When it comes to sleep, traditional advice has focused on the number of hours a person sleeps. But for older adults, the quality of sleep may affect cognitive performance the following day regardless of their quantity of sleep, according to a new study by researchers from the Penn State College of Health and Human Development and Albert Einstein College of Medicine, Bronx, New York.
In a study published this week (Dec. 17) in Sleep Health, the researchers found that the quality of a night of sleep — rather than the length of the night of sleep — predicted how quickly older adults processed information the next day. The researchers evaluated sleep quality based on how much time someone was awake between when they first went to sleep and when they rose in the morning.
[...] Few studies have examined how poor sleep impacts cognitive functioning the following day, according to Carol Derby, professor of neurology and epidemiology & population health, Louis and Gertrude Feil Faculty Scholar in Neurology at Albert Einstein College of Medicine and senior author of the study.
"Understanding the nuances of how sleep impacts older adults' cognition and their ability to perform daily activities may indicate which individuals are at risk for later cognitive impairment, such as Alzheimer's disease," Derby said.
[...] When the researchers compared performance on cognitive tests not just to participants' own performance but across participants in the entire study sample, they found that older adults who, on average, spent more time awake during their night's sleep performed worse on three of the four cognitive tests. In addition to slower processing speed, participants with more wake time after falling asleep performed worse on two tests of visual working memory.
"Repeatedly waking after you've fallen asleep for the night diminishes the overall quality of your sleep," said Buxton, associate director of both the Penn State Clinical and Translational Science Institute and the Penn State Social Science Research Institute and an investigator in the Penn State Center for Healthy Aging. "We examined multiple aspects of sleep, and quality is the only one that made a day-to-day difference in cognitive performance."
[...] "My number one piece of advice is not to worry about sleep problems," Buxton said. "Worrying only creates stress that can disrupt sleep further. This does not mean that people should ignore sleep, though. There are research-validated interventions that can help you sleep better."
To promote healthy sleep, people should go to bed at a consistent time each night, aiming for a similar length of sleep in restful circumstances, Buxton continued.
"When it comes to sleep, no single night matters, just like no single day is critical to your exercise or diet," Buxton said. "What matters is good habits and establishing restful sleep over time."
[...] "The work demonstrating the day-to-day impact of sleep quality on cognition among individuals who do not have dementia suggests that disrupted sleep may have an early impact on cognitive health as we age," Derby said. "This finding suggests that improving sleep quality may help delay later onset of dementia."
Journal Reference: https://doi.org/10.1016/j.sleh.2025.11.010
Relief for those dealing with data pipelines between the two, but move has its critics:
The EU has extended its adequacy decision, allowing data sharing with and from the UK under the General Data Protection Regulation for at least six more years.
This will be some relief to techies in the UK, the member-state bloc and beyond whose work or product set depends on the frictionless movement of data between the two, especially as they can point to the 2031 expiry date as a risk-management point for backers and partners. But the move does have its critics.
After GDPR was more or less replicated in UK law following the nation's official departure from the EU, the trading and political bloc made its first adequacy decision to allow data sharing with a specific jurisdiction outside its boundaries.
In a statement last week, the European Commission — the executive branch of the EU — said that it was renewing the 2021 decision to allow the free flow of personal data with the United Kingdom. "The decisions ensure that personal data can continue flowing freely and safely between the European Economic Area (EEA) and the United Kingdom, as the UK legal framework contains data protection safeguards that are essentially equivalent to those provided by the EU," it said.
In June 2025, the Commission had adopted a technical extension of the 2021 adequacy decisions with the United Kingdom – one under the GDPR and the other concerning the Law Enforcement Directive – for a limited period of six months, as they were set to expire on 27 December this year.
The renewal decisions will last for six years, until 27 December 2031, and will be reviewed after four years. They follow the European Data Protection Board's opinion and the Member States' approval.
Following the UK's departure from the EU, the Conservative government originally made plans to diverge from EU data protection law, potentially jeopardizing the adequacy decision. In 2022, for example, then digital minister Michelle Donelan said that the UK planned to replace GDPR with its own British data protection system.
These proposals never made it into law. Since the election of a Labour government, Parliament has passed the Data Use and Access Act.
The government promised the new data regime would boost the British economy by £10 billion over the next decade by cutting NHS and police bureaucracy, speeding up roadworks, and turbocharging innovation in tech and science.
The Act also offers a lawful basis for relying on people's personal information to make significant automated decisions about them, as long as data processors apply certain safeguards.
None of this has been enough to upset the EU, it seems.
Science sleuths raise concerns about scores of bioengineering papers:
In December 2024, Elisabeth Bik noticed irregularities in a few papers by a highly cited bioengineer, Ali Khademhosseini. She started looking at more publications for which he was a co-author, and the issues soon piled up: some figures were stitched together strangely, and images of cells and tissues were duplicated, rotated, mirrored and sometimes reused and labelled differently.
Bik, a microbiologist and leading research-integrity specialist based in San Francisco, California, ended up flagging about 80 papers on PubPeer, a platform that allows researchers to review papers after publication. A handful of other volunteer science sleuths found more, bringing the total to 90.
The articles were published in 33 journals over 20 years and have been cited a combined total of 14,000 times. Although there are hundreds of co-authors on the papers, the sleuthing effort centred on Khademhosseini, who is a corresponding author for about 60% of them.
He and his co-authors sprang into action. Responding to the concerns, some of which were reported in the blog For Better Science, became like a full-time job, says Khademhosseini, who until August was director and chief executive of the Terasaki Institute for Biomedical Innovation in Los Angeles, California. "I alerted journals, I alerted collaborators. We tried to do our best to make the literature correct." In many cases, he and his co-authors provided original source data to journal editors, and the papers were corrected.
Khademhosseini told Nature that investigations into his work have been carried out and have found no evidence of misconduct by him. The Terasaki Institute says that an "internal review has not found that Dr. Khademhosseini engaged in research misconduct".
The case raises questions about oversight in large laboratories and about when a paper needs to be retracted and when a correction is sufficient. In some cases, journals have issued corrections for papers containing issues that research-integrity sleuths describe as "clearly data manipulation", and the corrections were issued without source data. Bik and others argue that this approach sets a bad precedent. "I don't think that any part of a study that bears these signs of data manipulation should be trusted," says Reese Richardson, who studies data integrity at Northwestern University in Evanston, Illinois. He argues that such papers should be retracted.
Khademhosseini defends the corrections and says that the conclusions of the papers still hold. He says he has not seen any "conclusive evidence" of misconduct or "purposeful manipulation" in the papers, and nothing that would require a retraction.
For three decades, Khademhosseini has developed biomedical technologies such as organs on chips and hydrogel wound treatments. His work has been funded by the US National Institutes of Health, and by other public and private agencies. As a PhD student, he worked under Robert Langer, a renowned bioengineer at the Massachusetts Institute of Technology in Cambridge. Khademhosseini has published more than 1,000 papers, which have been cited more than 100,000 times in total. He has also received numerous awards and honours — most recently, the 2024 Biomaterials Global Impact Award, from the journal Biomaterials.
Related:
• Why retractions could be a powerful tool for cleaning up science
• This science sleuth revealed a retraction crisis at Indian universities
A University of Cambridge philosopher argues that our evidence for what constitutes consciousness is far too limited to tell if or when artificial intelligence has made the leap – and a valid test for doing so will remain out of reach for the foreseeable future.
As artificial consciousness shifts from the realm of sci-fi to become a pressing ethical issue, Dr Tom McClelland says the only "justifiable stance" is agnosticism: we simply won't be able to tell, and this will not change for a long time – if ever.
While issues of AI rights are typically linked to consciousness, McClelland argues that consciousness alone is not enough to make AI matter ethically. What matters is a particular type of consciousness – known as sentience – which includes positive and negative feelings.
"Consciousness would see AI develop perception and become self-aware, but this can still be a neutral state," said McClelland, from Cambridge's Department of History and Philosophy of Science.
"Sentience involves conscious experiences that are good or bad, which is what makes an entity capable of suffering or enjoyment. This is when ethics kicks in," he said. "Even if we accidentally make conscious AI, it's unlikely to be the kind of consciousness we need to worry about."
"For example, self-driving cars that experience the road in front of them would be a huge deal. But ethically, it doesn't matter. If they start to have an emotional response to their destinations, that's something else."
Companies are investing vast sums of money pursuing Artificial General Intelligence: machines with human-like cognition. Some claim that conscious AI is just around the corner, with researchers and governments already considering how we regulate AI consciousness.
McClelland points out that we don't know what explains consciousness, so we don't know how to test for AI consciousness.
"If we accidentally make conscious or sentient AI, we should be careful to avoid harms. But treating what's effectively a toaster as conscious when there are actual conscious beings out there which we harm on an epic scale, also seems like a big mistake."
In debates around artificial consciousness there are two main camps, says McClelland. Believers argue that if an AI system can replicate the "software" – the functional architecture – of consciousness, it will be conscious even though it's running on silicon chips instead of brain tissue.
On the other side, sceptics argue that consciousness depends on the right kind of biological processes in an "embodied organic subject". Even if the structure of consciousness could be recreated on silicon, it would merely be a simulation that would run without the AI flickering into awareness.
[...] "We do not have a deep explanation of consciousness. There is no evidence to suggest that consciousness can emerge with the right computational structure, or indeed that consciousness is essentially biological," said McClelland.
"Nor is there any sign of sufficient evidence on the horizon. The best-case scenario is we're an intellectual revolution away from any kind of viable consciousness test."
"I believe that my cat is conscious," said McClelland. "This is not based on science or philosophy so much as common sense – it's just kind of obvious."
"However, common sense is the product of a long evolutionary history during which there were no artificial lifeforms, so common sense can't be trusted when it comes to AI. But if we look at the evidence and data, that doesn't work either.
"If neither common sense nor hard-nosed research can give us an answer, the logical position is agnosticism. We cannot, and may never, know."
[...] McClelland's work on consciousness has led members of the public to contact him about AI chatbots. "People have got their chatbots to write me personal letters pleading with me that they're conscious. It makes the problem more concrete when people are convinced they've got conscious machines that deserve rights we're all ignoring."
"If you have an emotional connection with something premised on it being conscious and it's not, that has the potential to be existentially toxic. This is surely exacerbated by the pumped-up rhetoric of the tech industry."
Journal Reference: Tom McClelland, Agnosticism about artificial consciousness [OPEN], Mind & Language, first published 18 December 2025
https://doi.org/10.1111/mila.70010
https://scitechdaily.com/mit-reveals-how-high-fat-diets-quietly-prime-the-liver-for-cancer/
A fatty diet doesn't just damage the liver — it rewires its cells in ways that give cancer a dangerous head start.
Eating a diet high in fat is one of the strongest known risk factors for liver cancer. New research from MIT explains why, showing that fatty diets can fundamentally change how liver cells behave in ways that make cancer more likely to develop.
The study found that when the liver is exposed to a high-fat diet, mature liver cells called hepatocytes undergo a striking shift. Instead of maintaining their specialized roles, these cells revert to a more primitive, stem-cell-like state. While this transformation helps the cells cope with the ongoing stress caused by excess fat, it also leaves them far more vulnerable to becoming cancerous over time.
"If cells are forced to deal with a stressor, such as a high-fat diet, over and over again, they will do things that will help them survive, but at the risk of increased susceptibility to tumorigenesis," says Alex K. Shalek, director of the Institute for Medical Engineering and Sciences (IMES), the J. W. Kieckhefer Professor in IMES and the Department of Chemistry, and a member of the Koch Institute for Integrative Cancer Research at MIT, the Ragon Institute of MGH, MIT, and Harvard, and the Broad Institute of MIT and Harvard.
The team also pinpointed several transcription factors that appear to drive this cellular regression. Because these molecules help control whether liver cells stay mature or revert to an immature state, they may offer promising targets for future drugs aimed at reducing cancer risk in vulnerable patients.
High-fat diets are known to promote inflammation and fat buildup in the liver, leading to a condition called steatotic liver disease. This disorder can also result from other long-term metabolic stresses, including heavy alcohol use, and may progress to cirrhosis, liver failure, and eventually cancer.
To better understand what drives this progression, the researchers focused on how liver cells respond at the genetic level when exposed to a high-fat diet, especially which genes are activated or shut down as damage accumulates over time.
The team fed mice a high-fat diet and used single-cell RNA-sequencing to analyze liver cells at multiple stages of disease development. This approach allowed them to track changes in gene activity as the animals moved from early inflammation to tissue scarring and, ultimately, liver cancer.
Early in the process, hepatocytes began activating genes that promote survival under stress. These included genes that reduce the likelihood of cell death and encourage continued cell division. At the same time, genes essential for normal liver function, such as those involved in metabolism and protein secretion, were gradually switched off.
"This really looks like a trade-off, prioritizing what's good for the individual cell to stay alive in a stressful environment, at the expense of what the collective tissue should be doing," Tzouanas says.
Some of these shifts occurred quickly, while others developed more slowly. In particular, the decline in metabolic enzyme production unfolded over a longer period. By the end of the study, nearly all mice on the high-fat diet had developed liver cancer.
According to the researchers, liver cells that revert to a less mature state appear to be especially susceptible to cancer if they later acquire harmful mutations.
"These cells have already turned on the same genes that they're going to need to become cancerous. They've already shifted away from the mature identity that would otherwise drag down their ability to proliferate," Tzouanas says. "Once a cell picks up the wrong mutation, then it's really off to the races and they've already gotten a head start on some of those hallmarks of cancer."
The team also identified specific genes that help coordinate this shift back to an immature state. During the course of the study, a drug targeting one of these genes (thyroid hormone receptor) was approved to treat a severe form of steatotic liver disease known as MASH fibrosis. In addition, a drug that activates another enzyme highlighted in the research (HMGCS2) is currently being tested in clinical trials for steatotic liver disease.
Another potential drug target identified by the researchers is a transcription factor called SOX4. This factor is typically active during fetal development and in only a limited number of adult tissues (but not the liver), making its reactivation in liver cells particularly notable.
After observing these effects in mice, the researchers examined whether the same patterns could be found in people. They analyzed liver tissue samples from patients at various stages of liver disease, including individuals who had not yet developed cancer.
The human data closely matched the findings in mice. Over time, genes required for healthy liver function declined, while genes linked to immature cell states became more active. Using these gene expression patterns, the researchers were also able to predict patient survival outcomes.
"Patients who had higher expression of these pro-cell-survival genes that are turned on with high-fat diet survived for less time after tumors developed," Tzouanas says. "And if a patient has lower expression of genes that support the functions that the liver normally performs, they also survive for less time."
While cancer developed within about a year in mice, the researchers believe the same process unfolds much more slowly in humans, potentially over a span of roughly 20 years. The timeline likely varies depending on factors such as diet, alcohol use, and viral infections, all of which can encourage liver cells to revert to an immature state.
The researchers now plan to explore whether the cellular changes triggered by a high-fat diet can be reversed. Future studies will test whether returning to a healthier diet or using weight-loss medications such as GLP-1 agonists can restore normal liver cell function.
They also hope to further evaluate the transcription factors identified in the study as possible drug targets to prevent damaged liver tissue from progressing to cancer.
"We now have all these new molecular targets and a better understanding of what is underlying the biology, which could give us new angles to improve outcomes for patients," Shalek says.
Reference: “Hepatic adaptation to chronic metabolic stress primes tumorigenesis” 22 December 2025, Cell.
https://phys.org/news/2025-12-disaster-raw-materials.html
This Boxing Day marks 21 years since the terrifying Indian Ocean tsunami. As we remember the hundreds of thousands of lives lost in this tragic event, it is also a moment to reflect on what followed. How do communities rebuild after major events such as the tsunami, and other disasters like it? What were the financial and hidden costs of reconstruction?
Beyond the immediate human toll, disasters destroy hundreds of thousands of buildings each year. In 2013, Typhoon Haiyan damaged a record 1.2 million structures in the Philippines. Last year, earthquakes and cyclones damaged more than half a million buildings worldwide. For communities to rebuild their lives, these structures must be rebuilt.
While governments, non-government agencies and individuals struggle to finance post-disaster reconstruction, rebuilding also demands staggering volumes of building materials. In turn, these require vast amounts of natural resource extraction.
For instance, an estimated 1 billion burnt clay bricks were needed to reconstruct the half-million homes destroyed in the Nepal earthquake. This is enough bricks to circle the Earth six times if laid end to end. How can we responsibly source such vast quantities of materials to meet demand?
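A rough back-of-the-envelope check of that comparison (the ~0.24 m brick length used here is an assumed figure, not one from the article):

$$10^{9}\ \text{bricks} \times 0.24\ \text{m} \approx 2.4\times10^{5}\ \text{km} \approx 6 \times \underbrace{4.0\times10^{4}\ \text{km}}_{\text{Earth's circumference}}$$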
Sudden spikes in demand have led to severe shortages of common building materials after nearly every major disaster over the past two decades, including the 2015 Nepal earthquake and the 2019 California wildfires. These shortages often trigger price hikes of 30%–40%, which delays reconstruction and prolongs the suffering of affected communities. Disasters not only increase demand for building materials but also generate enormous volumes of debris.
For example, the 2023 Turkey–Syria earthquake produced more than 100 million cubic meters of debris—40 times the volume of the Great Pyramid of Giza.
Disaster debris can pose serious environmental and health risks, including toxic dust and waterway pollution. But some debris can be safely transformed into useful assets such as recycled building materials. Rubble can be crushed and repurposed as a base for low-traffic roads or turned into cement blocks.
The consequences of poor post-disaster building materials management have reached alarming global proportions. After the 2004 Indian Ocean Tsunami, for example, the surge in sand demand led to excessive and illegal sand mining in rivers along Sri Lanka's west coast. This caused irreversible ecological damage to two major watersheds, devastating the livelihoods of thousands of farmers and fisherpeople.
Similar impacts from the overextraction of materials such as sand, gravel, clay and timber have been reported following other major disasters, including the 2008 Sichuan earthquake in China and Cyclone Idai in Mozambique in 2019. If left unaddressed, the social, environmental and economic impacts of resource extraction will escalate to catastrophic levels, especially as climate change intensifies disaster frequency.
This crisis has yet to receive adequate international attention. Earlier this year, several global organizations came together to publish a Global Call to Action on sustainable building materials management after disasters.
Based on an analysis of 15 major disasters between 2005 and 2020, it identified three key challenges: building material shortages and price escalation, unsustainable extraction and use of building materials, and poor management of disaster debris.
Although well-established solutions exist to address these challenges, rebuilding efforts suffer from policy and governance gaps. The Call to Action urges international bodies such as the United Nations Office for Disaster Risk Reduction to take immediate policy and practical action.
After a disaster hits, it leaves an opportunity to build back better. Rebuilding can boost resilience to future hazards, encourage economic development and reduce environmental impact. The United Nations' framework for disaster management emphasizes the importance of rebuilding better and safer rather than simply restoring communities to pre-disaster conditions.
Disaster-affected communities should be rebuilt with the capacity to cope with future external shocks and environmental risks. Lessons can be learned from both negative and positive experiences of past disasters. For example, poor planning of some reconstruction projects after the 2004 Indian Ocean tsunami in Sri Lanka made communities vulnerable again to coastal hazards within a few years. On the other hand, the community-led reconstruction approach followed after the 2001 Bhuj earthquake in India has resulted in safer and more socio-economically robust settlements that have stood the test of 24 years.
As an integral part of the "build back better" approach, authorities must include strategies for environmentally and socially responsible management of building materials. These should encourage engineers, architects and project managers to select safe, sustainable materials for reconstruction projects.
At the national level, regulatory barriers to repurposing disaster debris should be removed, while still ensuring safe management of hazardous materials such as asbestos. For example, concrete from fallen buildings was successfully used as a road base and as recycled aggregate for infrastructure projects following the 2004 tsunami in Indonesia and 2011 Tohoku Earthquake in Japan.
This critical issue demands urgent public and political attention. Resilient buildings made with safe, sustainable materials will save lives in future disasters.
Provided by The Conversation
This article is republished from The Conversation under a Creative Commons license. Read the original article.
https://arstechnica.com/gadgets/2025/12/the-ars-technica-guide-to-dumb-tvs/
Sick of smart TVs? Here are your best options.
Smart TVs can feel like a dumb choice if you're looking for privacy, reliability, and simplicity.
Today's TVs and streaming sticks are usually loaded up with advertisements and user tracking, making offline TVs seem very attractive. But ever since smart TV operating systems began making money, "dumb" TVs have been hard to find.
In response, we created this non-smart TV guide that includes much more than dumb TVs. Since non-smart TVs are so rare, this guide also breaks down additional ways to watch TV and movies online and locally without dealing with smart TVs' evolution toward software-centric features and snooping. We'll discuss a range of options suitable for various budgets, different experience levels, and different rooms in your home.
The advice from the submitted article is largely: buy yourself a dumb TV. This is good advice, but what if you don't have one, or can't get one? If you have a TV with built-in "smarts," do you Soylentils go to extra lengths to deal with the ads and tracking? If you picked up a "smart" TV during this holiday gift-giving season, what are your plans for it? If you picked up a "dumb" TV, where did you acquire it? Detailed advice on your setup would be interesting to share for those wondering where to start. (E.g., does putting it behind a Pi-hole solve all problems? Can I maintain some of the convenient functions of the TV using one of the media server solutions and still avoid the ads and tracking? If I don't have spare RasPis or other hardware kicking around to repurpose, what should I look to get, particularly when RAM prices are spiking?) --Ed.
Reddit has filed suit against Australia's social media laws on the grounds of privacy and free speech, the first major company to do so. The Australian government is confident the social media ban will be upheld, saying it acts in the interests of children. Meanwhile, Australian children are easily circumventing the ban by ditching accounts associated with their real identities, leaving adults as the people primarily affected by the intrusive age testing.
https://phys.org/news/2025-12-climate-misinformation-national-threat-canada.html
When a crisis strikes, rumors and conspiracy theories often spread faster than emergency officials can respond and issue corrections.
In Canada, social media posts have falsely claimed wildfires were intentionally set, that evacuation orders were government overreach or that smoke maps were being manipulated. In several communities, people delayed leaving because they were unsure which information to trust.
This wasn't just online noise. It directly shaped how Canadians responded to real danger. When misinformation delays evacuations, fragments compliance or undermines confidence in official warnings, it reduces the state's ability to protect lives and critical infrastructure.
At that point, misinformation is no longer merely a communications problem, but a national security risk. Emergency response systems depend on public trust to function. When that trust erodes, response capacity weakens and preventable harm increases.
Canada is entering an era where climate misinformation is becoming a public-safety threat. As wildfires, floods and droughts grow more frequent, emergency systems rely on one fragile assumption: that people believe the information they receive. When that assumption fails, the entire chain of crisis communication begins to break down. We are already seeing early signs of that failure.
This dynamic extends far beyond acute disasters. It also affects long-running climate policy and adaptation efforts. When trust in institutions erodes and misinformation becomes easier to absorb than scientific evidence, public support for proactive climate action collapses.
Recent research by colleagues and me on how people perceive droughts shows that members of the public often rely on lived experiences, memories, identity and social and institutional cues—such as environmental concerns, perceived familiarity and trust—to decide whether they are experiencing a drought, even when official information suggests otherwise.
These complex cognitive dynamics create predictable vulnerabilities. Evidence from Canada and abroad documents how false narratives during climate emergencies reduce protective behavior, amplify confusion and weaken institutional authority.
Canada has invested billions of dollars in physical resilience: firefighting capacity, flood protection and energy reliability. The Canadian government also recently joined the Global Initiative for Information Integrity on Climate Change to investigate false narratives and strengthen response capacity.
These are much-needed steps in the right direction. But Canada still approaches misinformation as secondary rather than a key component of climate-risk management.
That leaves responsibility for effective messaging fragmented across public safety, environment, emergency management and digital policy, with no single entity accountable for monitoring, anticipating or responding to information threats during crises. The cost of this fragmentation is slower response, weaker coordination and greater risk to public safety.
Canada also continues to rely heavily on outdated communication media like radio, TV and static government websites, while climate misinformation is optimized for the social-media environment. False content often circulates quickly online, with emotional resonance and repetition giving it an advantage over verified information.
Research on misinformation dynamics shows how platforms systematically amplify sensational claims and how false claims travel farther and faster than verified updates.
Governments typically attempt to correct misinformation during emergencies when emotions are high, timelines are compressed and false narratives are already circulating. By then, correction is reactive and often ineffective.
Trust cannot be built in the middle of a crisis. It is long-term public infrastructure that must be maintained through transparency, consistency and modern communication systems before disasters occur.
Canada needs to shift from reactive correction to proactive preparedness. With wildfire season only months ahead, this is the window when preparation matters most. Waiting for the next crisis to expose the same weaknesses is not resilience, but repetition.
We cannot afford another round of reacting under pressure and then reflecting afterwards on steps that should have been taken earlier. That shift requires systemic planning:
- Proactive public preparedness: Federal and provincial emergency agencies should treat public understanding of alerts, evacuation systems and climate risks as a standing responsibility, not an emergency add-on. This information must be communicated well before disaster strikes, through the platforms people actually use, with clear expectations about where authoritative information will come from.
- Institutional coordination: Responsibility for tackling climate misinformation currently falls between departments. A federal-provincial coordination mechanism, linked to emergency management rather than political communications, would allow early detection of misinformation patterns and faster response, just as meteorological or hydrological risks are monitored today.
- Partnerships with trusted messengers: Community leaders, educators, health professionals and local organizations often have more credibility than institutions during crises. These relationships should be formalized in emergency planning, not improvised under pressure. During recent wildfires, community-run pages and volunteers were among the most effective at countering false claims.
We cannot eliminate every rumor or every bit of misinformation. But without strengthening public trust and information integrity as core components of climate infrastructure, emergencies will become harder to manage and more dangerous.
Climate resilience is not only about physical systems. It is also about whether people believe the warnings meant to protect them. Canada's long-term security depends on taking that reality seriously.
Provided by The Conversation
This article is republished from The Conversation under a Creative Commons license.
Ubuntu 9.04 was my first distribution. Back in 2009, I was a post-grad student at Politecnico di Torino and we had an Operating Systems class in the curriculum. The course relied heavily on Linux, so I kind of forced myself to replace Windows 7 with Ubuntu 9.04. I borrowed the CD from a friend. And yes, CDs were used for operating system installation at that time.
That was 16 years ago. I have not looked back since. Linux has been the primary operating system on my personal computer, and Ubuntu has been my distro of choice for most of that time.
And I have seen the transitions Ubuntu has gone through since then. I was there to see new projects launched by Canonical that, sadly, never took off.
This article is a trip down memory lane: from Ubuntu One to Ubuntu Unity, all those experimental ideas that could not be sustained and were eventually discontinued.
Ubuntu One: The iCloud for Ubuntu
Ubuntu One still exists, but not in the same form, and it no longer has the features it was originally created for.
Today, it's just a single sign-on (SSO) service, a way to log in to Ubuntu-related online accounts used with websites like Ask Ubuntu, the Ubuntu Community forum, Ubuntu Pro and so on.
Back then, Ubuntu One was a lot more. It was the equivalent of Apple's iCloud. I think iCloud was the inspiration behind it. Ubuntu wanted to give you a unified, connected experience. Cloud storage was in its early days, and so was music streaming.
An Ubuntu One account gave you 5 GB of free cloud storage, with the provision to buy more storage if required. You could save your contacts and automatically back up images taken from your Android smartphone. You could also sync Tomboy notes and some other application data (if the apps supported it) between two Ubuntu computers. [...]
Ubuntu One Music Store: iTunes's Ubuntu version
I did not mention it in the earlier sections, but Ubuntu also sold music through the Ubuntu One mechanism.
As I said, it (seemed to be) modeled on iCloud. Back in 2010, streaming music was still a few years away. People bought digital copies of their favorite songs through platforms like iTunes.
Canonical let users buy music and save it in their Ubuntu One account. This was one of the several ways they tried to make money to sustain the Ubuntu project.
The option to purchase music was also integrated into Rhythmbox, the default music player at that time (still is). [...]
Convergence: Did not converge
Convergence. This is the term that was on every Ubuntu user's tongue around 2014. Ubuntu had big plans for providing a 'unified' computing experience on devices of all sizes, i.e., laptops/desktops, tablets and smartphones. You could connect Ubuntu running on your phone or tablet to a monitor, and it would give you a desktop feel. Samsung DeX still does this.
And it launched several projects for this, some software and some hardware (well, kind of).
Despite several years of hard work, Canonical gave up on the convergence dream in 2017.
Perhaps they understood that desktop Linux is not where the money is and decided to put their effort into making Ubuntu more attractive for developers (by developing Snaps) and servers (OpenStack, LXD containerization, and more).
Ubuntu Edge: The most successful failure
With the aim at convergence, Canonical was ambitious for an Ubuntu-powered smartphone.
In the summer of 2013, Canonical launched a massive crowdfunding campaign to create Ubuntu Edge, the flagship Ubuntu phone.
It broke crowdfunding records, raising over $12 million through its campaign. Yet we never saw this device because, despite those records, it did not reach its target of $32 million.
Yes, the goal was to raise $32 million to create an alternative to Android and iPhone. Sadly, it did not materialize, and Canonical dropped the idea of going into hardware and pulled itself back to focus on the software part.
Ubuntu Touch: Touch and go
Before jumping onto the hardware bandwagon with Ubuntu Edge, Canonical released the developer preview of a touch-friendly Ubuntu-based operating system, Ubuntu Touch. It allowed installing a mobile-screen-friendly version of Ubuntu on devices like the Google Nexus. Ah! Google Nexus, the OG flagship killer of that time.
That was at the beginning of 2013. By year's end, Canonical's hardware plan with Ubuntu Edge had not materialized. But that did not stop Ubuntu Touch. At least not for the next few years.
The software project was still on and we had our first official Ubuntu phone in 2015. It was produced by a Spanish company, BQ.
The model was the BQ Aquaris E4.5. It was a tiny, entry-level smartphone with Ubuntu Touch on it. Ubuntu Touch provided a different layout experience, and if you watch older videos, you'll understand what I mean. There were only a few native apps. So a workaround was to 'create new apps' using 'web-links'. Web-links were basically like PWAs: the icons showed up like apps and, when you clicked on them, they opened the mobile version of the website. [...]
Ubuntu Unity: Sowed a division
In 2011, Ubuntu 11.04 was shipped with a new desktop environment, called Unity.
This was the first revolt I experienced in my Linux life. Those were turbulent times. GNOME 2 was being replaced by a more modern GNOME 3 which had a radical interface change from the classic GNOME.
In addition to that, Ubuntu decided to ditch GNOME altogether and started offering its homegrown Ubuntu Unity.
I was fairly new at the time and had no attachment to GNOME whatsoever. And thus I liked Unity without any prejudice. While we took it for granted, Unity had features that were ahead of their time. Remember lenses and scopes?
[...] Unfortunately, Canonical pulled the curtain down on Unity in a sudden announcement just ahead of Ubuntu 17.04. With the 17.10 release, Ubuntu switched back to GNOME, but not vanilla GNOME. It was a customized GNOME version that had the flagship Unity-styled launcher on the left. To this day, Ubuntu uses a similar customized GNOME interface.
Mir: Dead for desktop
Canonical ended its convergence dream in 2017 when it announced the discontinuation of the Unity project. Convergence was supposed to arrive for the masses with Unity version 8, but Unity 8 never arrived on the scene.
While Unity 8 was at the core of the entire convergence thing, Mir display server was at the core of Unity 8.
Technically, Mir is not dead. It is still used for Internet of Things (IoT) projects. But Mir is no longer in the plan for Ubuntu desktop.
Wubi installer: Made testing Ubuntu easier
If you were on the Linux scene between 2008-2013, you must have come across Wubi.
This was an interesting project as it was not started by Canonical but became part of the Ubuntu CD with version 8.04 LTS.
With the Wubi installer, Windows users could install Ubuntu in (sort of) dual-boot mode from inside Windows without touching any disk partition. This made things a lot less scary, as you didn't mess with disk partitioning like in a real dual-boot process. Ubuntu was installed in a loop device, on the C: or D: drive of Windows.
Ubuntu Make: Still here but for how long?
Before Snap was a thing, Canonical tried making life easier for developers by providing them with a command-line interface for easily installing their favorite development tools. This CLI tool was "Ubuntu Make", and you could use it by typing umake in the terminal.
Before 2016, you could use it to install and configure development tools and frameworks using the umake command. Heck! I remember writing installation tutorials for Atom and VS Code that included the umake instructions.
Things changed with the introduction of Snap packaging format. Sandboxed Snap apps made things easier for developers. Ubuntu Make took a backseat.
Ubuntu Make is still an active project but I am not sure if there are many takers for this forgotten project.
What next? Snap .... he he
Just kidding. Despite all the negative feelings in the community about Snap, it is still useful for the developers who just want a usable environment.
That morbid joke aside, when I look back, I think Canonical had a vision. They tried to provide us with a modern desktop operating system that was on par with, if not ahead of, counterparts from Apple and Microsoft.
It also seems like they didn't stick with some ideas for long, or did not take financial risks (as with the Ubuntu One cloud concept). I don't want to put the blame on them. It's just that I would have loved to see Ubuntu succeed with all these projects.
The court’s ruling suggests that using the internet now means agreeing to be searched:
The Pennsylvania Supreme Court has a new definition of “reasonable expectation.” According to the justices, it’s no longer reasonable to assume that what you type into Google is yours to keep.
In a decision that reads like a love letter to the surveillance economy, the court ruled that police were within their rights to access a convicted rapist’s search history without a warrant. The reasoning is that everyone knows they’re being watched anyway.
The opinion, issued Tuesday, leaned on the idea that the public has already surrendered its privacy to Silicon Valley.
PDF of ruling.
Related: Jury Orders Google to Pay $425 Million for Unlawfully Tracking Millions of Users
The results of a new study by the University of Washington (UW) and Toyota Research Institute have provided pretty damning evidence against the use of large, distracting touchscreens while driving a vehicle.
Rather eloquently titled "Touchscreens in Motion: Quantifying the Impact of Cognitive Load on Distracted Drivers", the study saw 16 participants placed in ultra-realistic high-fidelity driving simulators while researchers tracked eye and hand movements, pupil dilation, and skin conductivity.
Participants were asked to drive around a typical urban environment and then interact with various side-tasks presented on the touchscreen; nothing major, simply adjusting car functionality or changing the radio station.
Both their driving ability and their accuracy when interacting with the touchscreen were measured.
According to Car Scoops, the researchers measured a mix of driver performance metrics and physiological markers, from eye movements, index finger tracking and steering consistency to reaction time and stress signals. This helped them build a better picture of stress and cognitive load on the human in the driving seat.
As you would expect, the results weren't pretty for those peddling an increased reliance on touchscreens over physical buttons. Firstly, pointing accuracy on said touchscreen and the speed of use were reduced by more than 58% when compared to non-driving conditions.
Already, this reveals that we humans struggle to physically interact with a touchscreen while busy processing what's going on outside the windscreen of a moving vehicle. This then requires the driver to apply more focus to tapping digital menu screens.
As a result, the study revealed that lane deviation increased by over 40% once touchscreen interaction was introduced. The vicious cycle then continues.
https://www.cnbc.com/2025/12/18/trump-pot-reclassification-cannabis-stocks-medicare-cbd.html
https://archive.is/7sEWG
President Donald Trump signed an executive order Thursday directing federal agencies to reclassify marijuana, loosening long-standing restrictions on the drug and marking the most consequential shift in U.S. cannabis policy in more than half a century.
The order, once finalized by the Drug Enforcement Administration, moves cannabis out of Schedule I classification — the most restrictive category under the Controlled Substances Act, alongside heroin and LSD — to a Schedule III classification, which encompasses substances with accepted medical use and a lower potential for abuse, such as ketamine and Tylenol with codeine.
"This action has been requested by American patients suffering from extreme pain, incurable diseases, aggressive cancers, seizure disorders, neurological problems and more, including numerous veterans with service-related injuries, and older Americans who live with chronic medical problems that severely degrade their quality of life," Trump said from the Oval Office on Thursday.
Also on Thursday, the Centers for Medicare and Medicaid Services, led by Dr. Mehmet Oz, is expected to launch a pilot program in April enabling certain Medicare-covered seniors to receive free, doctor-recommended CBD products, which must comply with all local and state laws on quality and safety, according to senior White House officials. The products must also come from a legally compliant source and undergo third-party testing for CBD levels and contaminants.
Shares of cannabis conglomerates were down following the announcement, likely from worries of new competition from international companies.
"Millions of registered patients across the United States, many of them veterans, rely on cannabis for relief from chronic and debilitating symptoms. We commend the administration for taking this historic step. This is only the beginning," Ben Kovler, founder and CEO of Green Thumb, said in a statement to CNBC.
The reclassification is viewed by many analysts as a financial lifeline for the cannabis industry. The move exempts companies from IRS Code Section 280E, allowing them to deduct standard expenses like rent and payroll for the first time. It also opens the door for banking access and institutional capital previously sidelined by compliance fears.
Many on Wall Street also expect the changes and the Medicare pilot to draw major pharmaceutical players into the sector to chase federally insured revenue.
While CBD has surged in popularity in recent years, with infused consumer goods ranging from seltzers to skin care, the Food and Drug Administration has stopped short of granting the compound its full backing.
Studies have found "inconsistent benefits" for targeted conditions, while FDA-funded research warns that prolonged CBD use can cause liver toxicity and interfere with other lifesaving medications.
Currently, the FDA has only approved one CBD-based drug, Epidiolex, for rare forms of epilepsy.
"I want to emphasize that the order ... doesn't legalize marijuana in any way, shape or form, and in no way sanctions its use as a recreational drug," Trump said.
Experts and industry insiders told CNBC this week that a reclassification could pave the way for more research into the effects of CBD use.
Danish postal service to stop delivering letters after 400 years:
PostNord's decision to end service on 30 December comes after fear over 'increasing digitalisation' of Danish society
The Danish postal service will deliver its last letter on 30 December, ending a more than 400-year-old tradition.
Announcing the decision earlier this year to stop delivering letters, PostNord, formed in 2009 in a merger of the Swedish and Danish postal services, said it would cut 1,500 jobs in Denmark and remove 1,500 red postboxes amid the "increasing digitalisation" of Danish society.
Describing Denmark as "one of the most digitalised countries in the world", the company said the demand for letters had "fallen drastically" while online shopping continued to increase, prompting the decision to instead focus on parcels.
It took only three hours for 1,000 of the distinctive postboxes, which have already been dismantled, to be bought up when they went on sale earlier this month with a price tag of 2,000 DKK (£235) each for those in good condition and 1,500 DKK (£176) for those a little more well-worn. A further 200 will be auctioned in January. PostNord, which will continue to deliver letters in Sweden, has said it will refund unused Danish stamps for a limited time.
Danes will still be able to send letters using the delivery company Dao, which already delivers letters in Denmark but will expand its services from 1 January, from about 30m letters in 2025 to 80m next year. But customers will instead have to go to a Dao shop to post their letters – or pay extra to have them collected from home – and pay for postage either online or via an app.
The Danish postal service has been responsible for delivering letters in the country since 1624. In the last 25 years, letter-sending has been in sharp decline in Denmark, with a fall of more than 90%.
But evidence suggests a resurgence in letter-writing among younger people could be under way.
Dao said its research had found 18- to 34-year-olds send two to three times as many letters as other age groups, citing the trend researcher Mads Arlien-Søborg, who puts the rise down to young people "looking for a counterbalance to digital oversaturation". Letter-writing, he said, had become a "conscious choice".
According to Danish law, the option to send a letter must exist. This means that if Dao were to stop delivering letters, the government would be obliged to appoint somebody else to do it.
A source close to the transport ministry insisted there would not be any "practical difference" in the new year – because people would still be able to send and receive letters, they would simply do so through a different company. Any significance around the change, they said, was purely "sentimental".
But others have said there is an irreversible finality to it. Magnus Restofte, the director of Enigma, the postal, telecommunications and communications museum in Copenhagen, said that if it were ever no longer possible to use digital communications, "It's actually quite difficult to turn back [to physical post]. We can't go back to what it was. Also, take into consideration we are one of the most digitalised countries in the world."
Under the MitID scheme – Denmark's national digital ID system, used for everything from online banking to signing documents electronically and booking a doctor's appointment – all official communications from authorities are automatically sent via "digital post" rather than by mail.
While there is the option to opt out and instead receive physical mail, few do. Today, 97% of the Danish population aged 15 and over is enrolled in MitID and only 5% of Danes have opted out of digital post.
The Danish public, said Restofte, had been "quite pragmatic" about the change to postal services because very few people received physical letters in their postboxes any more. Some younger people have never sent a physical letter.
But the scarcity of physical letters has increased their value. "The funny thing is that actually receiving a physical letter, the value of that is extremely high," said Restofte. "People know if you write a physical letter and write by hand you have spent time and also spent money."
Announcing their decision earlier this year, Kim Pedersen, the deputy chief executive of PostNord Denmark, said: "We have been the Danish postal service for 400 years, and therefore it is a difficult decision to tie the knot on that part of our history. The Danes have become more and more digital and this means there are very few letters left today, and the decline continues so significantly that the letter market is no longer profitable."
Hubble Space Telescope captures rare collision in nearby planetary system:
In an unprecedented celestial event, NASA's Hubble Space Telescope (HST) captured the dramatic aftermath of colliding space rocks within a nearby planetary system.
When astronomers initially spotted a bright object in the sky, they assumed it was a dust-covered exoplanet, reflecting starlight. But when the "exoplanet" disappeared and a new bright object appeared, the international team of astrophysicists — including Northwestern University's Jason Wang — realized these were not planets at all. Instead, they were the illuminated remains of a cosmic fender bender.
Two distinct, violent collisions generated two luminous clouds of debris in the same planetary system. The discovery offers a unique real-time glimpse into the mechanisms of planet formation and the composition of materials that coalesce to form new worlds.
The study was published in the journal Science.
"Spotting a new light source in the dust belt around a star was surprising. We did not expect that at all," Wang said. "Our primary hypothesis is that we saw two collisions of planetesimals — small rocky objects, like asteroids — over the last two decades. Collisions of planetesimals are extremely rare events, and this marks the first time we have seen one outside our solar system. Studying planetesimal collisions is important for understanding how planets form. It also can tell us about the structure of asteroids, which is important information for planetary defense programs like the Double Asteroid Redirection Test (DART)."
"This is certainly the first time I've ever seen a point of light appear out of nowhere in an exoplanetary system," said lead author Paul Kalas, an astronomer at the University of California, Berkeley. "It's absent in all of our previous Hubble images, which means that we just witnessed a violent collision between two massive objects and a huge debris cloud unlike anything in our own solar system today."
[...] For years, astronomers have puzzled over a bright object called Fomalhaut b, an exoplanet candidate residing just outside the star Fomalhaut. Located a mere 25 light-years from Earth in the Piscis Austrinus constellation, Fomalhaut is more massive than the sun and encircled by an intricate system of dusty debris belts.
"The system has one of the largest dust belts that we know of," said Wang, who is part of the team that has monitored the system for two decades. "That makes it an easy target to study."
Since discovering Fomalhaut b in 2008, astronomers have struggled to determine whether it is, indeed, an actual planet or a large expanding cloud of dust. In 2023, researchers used the HST to further examine the strange light source. Surprisingly, it was no longer there. But another bright point of light emerged in a slightly different location within the same system.
"With these observations, our original intention was to monitor Fomalhaut b, which we initially thought was a planet," Wang said. "We assumed the bright light was Fomalhaut b because that's the known source in the system. But, upon carefully comparing our new images to past images, we realized it could not be the same source. That was both exciting and caused us to scratch our heads."
The disappearance of Fomalhaut b (now called Fomalhaut cs1) supports the hypothesis that it was a dissipating dust cloud, likely produced by a collision. The appearance of a second point of light (now called Fomalhaut cs2) further supports the theory that neither is a planet; both are the dusty remnants of dramatic smashups between planetesimals — the rocky building blocks of planets.
The location and brightness of Fomalhaut cs2 bear striking similarities to the initial observations of Fomalhaut cs1 two decades prior. By imaging the system, the team was able to calculate how frequently such planetesimal collisions occur.
"Theory suggests that there should be one collision every 100,000 years, or longer. Here, in 20 years, we've seen two," Kalas said. "If you had a movie of the last 3,000 years, and it was sped up so that every year was a fraction of a second, imagine how many flashes you'd see over that time. Fomalhaut's planetary system would be sparkling with these collisions."
[...] "Fomalhaut cs2 looks exactly like an extrasolar planet reflecting starlight," Kalas said. "What we learned from studying cs1 is that a large dust cloud can masquerade as a planet for many years. This is a cautionary note for future missions that aim to detect extrasolar planets in reflected light."
Although Fomalhaut cs1 has faded from view, the research team will continue to observe the Fomalhaut system. They plan to track the evolution of Fomalhaut cs2 and potentially uncover more details about the dynamics of collisions in the stellar neighborhood.
Journal Reference: https://www.science.org/doi/10.1126/science.adu6266