posted by jelizondo on Monday December 29, @10:52AM   Printer-friendly

Examining the use of expressions like "recent studies" or "recent data" in different medical specialties:

According to the Oxford English Dictionary, the word recent is defined as "having happened or started only a short time ago." A simple, innocent sounding definition. And yet, in the world of scientific publishing, it may be one of the most elastic terms ever used. What exactly qualifies as "a short time ago"? A few months? A couple of years? The advent of the antibiotic era?

In biomedical literature, "recent" is something of a linguistic chameleon. It appears everywhere: in recent studies, recent evidence, recent trials, recent literature, and so forth. It is a word that conveys urgency and relevance, while neatly sidestepping any commitment to a specific year—much like saying "I'll call you soon" after a first date: reassuring, yet infinitely interpretable. Authors wield it with confidence, often citing research that could have been published in the previous season or the previous century.

Despite its ubiquity, "recent" remains a suspiciously vague descriptor. Readers are expected to blindly trust the author's sense of time. But what happens if we dare to ask the obvious question? What if we take "recent" literally?

In this festive horological investigation, we decided to find out just how recent the recent studies really are. Armed with curiosity, a calendar, and a healthy disregard for academic solemnity, we set out to measure the actual age of those so-called fresh references. The results may not change the course of science, but they might make you raise an eyebrow the next time someone cites a recent paper from the past decade.

On 5 June 2025, we—that is, the junior author, while the senior author remained in supervisory orbit—performed a structured search in PubMed using the following terms: "recent advance*" or "recent analysis" or "recent article*" or "recent data" or "recent development" or "recent evidence" or "recent finding*" or "recent insights" or "recent investigation*" or "recent literature" or "recent paper*" or "recent progress" or "recent report*" or "recent research" or "recent result*" or "recent review*" or "recent study" or "recent studies" or "recent trial*" or "recent work*." These terms were selected on the basis that they appear frequently in the biomedical literature, convey an aura of immediacy, and are ideal for concealing the fact that the authors are citing papers from before the invention of UpToDate.

To avoid skewing the results towards only the freshest of publications (and therefore ruining the fun), we sorted the search results by best match rather than by date. This method added a touch of algorithmic chaos and ensured a more diverse selection of articles. We then included articles progressively until reaching a sample size of 1000, a number both sufficiently round and statistically unnecessary, but pleasing to the eye.
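
For anyone who wants to poke at the same corpus, here is a minimal sketch of how the boolean query could be assembled for NCBI's E-utilities. The term list is taken from the paper; the esearch parameters (sort=relevance to approximate PubMed's "best match" ordering, retmax=1000 for the sample size) are our assumption about a plausible setup, not the authors' actual tooling.

    // Sketch only: rebuild the paper's OR-query and a plausible E-utilities
    // URL for it. "sort=relevance" and "retmax=1000" are assumptions meant
    // to mimic PubMed's "best match" sorting and the 1000-article sample;
    // the query string would still need URL-encoding before a real request.
    fn main() {
        let terms = [
            "recent advance*", "recent analysis", "recent article*",
            "recent data", "recent development", "recent evidence",
            "recent finding*", "recent insights", "recent investigation*",
            "recent literature", "recent paper*", "recent progress",
            "recent report*", "recent research", "recent result*",
            "recent review*", "recent study", "recent studies",
            "recent trial*", "recent work*",
        ];
        // Quote each phrase and join with OR, as one would in the PubMed query box.
        let query = terms
            .iter()
            .map(|t| format!("\"{t}\""))
            .collect::<Vec<_>>()
            .join(" OR ");
        println!("https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?db=pubmed&sort=relevance&retmax=1000&term={query}");
    }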

We—again, the junior author, while the senior author offered moral support and the occasional pat on the back—reviewed the full text of each article to identify expressions involving the word "recent," ensuring they were directly linked to a bibliographic reference. [...]

For every eligible publication, we—still the junior author, whose dedication was inversely proportional to his contract stability—manually recorded the following: the doi of the article, its title, the journal of publication, the year it was published, the country where the article's first author was based, the broad medical specialty to which the article belonged, the exact "recent" expression used, the reference cited immediately after that expression, the year in which that reference was published, and the journal's impact factor as of 2024 (as reported in the Journal Citation Reports, Clarivate Analytics). [...]

[...] The final analysis comprised 1000 articles. The time lag between the citing article and the referenced "recent" publication ranged from 0 to 37 years, with a mean of 5.53 years (standard deviation 5.29) and a median of 4 years (interquartile range 2-7). The most frequent citation lag was one year, which was observed for 159 publications. The distribution was right skewed (skewness=1.80), with high kurtosis (4.09), indicating a concentration of values around the lower end with a long tail of much older references. A total of 177 articles had a citation lag of 10 years or longer, 26 articles had a lag of 20 years or longer, and four articles cited references that were at least 30 years old. The maximum lag observed was 37 years, found in one particularly ambitious case.
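
To make the arithmetic concrete, here is a minimal sketch of how those summary numbers fall out of a list of citation lags (citing year minus cited year). The ten values below are invented for illustration; the paper's sample had 1000 articles.

    // Compute mean, standard deviation, median, and the >=10-year count for
    // a set of citation lags. Sample values are made up for illustration.
    fn main() {
        let mut lags: Vec<f64> = vec![0.0, 1.0, 1.0, 2.0, 4.0, 5.0, 7.0, 12.0, 21.0, 37.0];
        lags.sort_by(|a, b| a.partial_cmp(b).unwrap());
        let n = lags.len() as f64;
        let mean = lags.iter().sum::<f64>() / n;
        // Sample standard deviation (n - 1 in the denominator).
        let sd = (lags.iter().map(|x| (x - mean).powi(2)).sum::<f64>() / (n - 1.0)).sqrt();
        let mid = lags.len() / 2;
        let median = if lags.len() % 2 == 1 {
            lags[mid]
        } else {
            (lags[mid - 1] + lags[mid]) / 2.0
        };
        let ten_plus = lags.iter().filter(|&&x| x >= 10.0).count();
        println!("mean {mean:.2}, sd {sd:.2}, median {median}, lags >= 10y: {ten_plus}");
    }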

[...] Our investigation confirms what many readers have long suspected, but none have dared to quantify: in the land of biomedical publishing, "recent" is less a measure of time than a narrative device. With a mean citation lag of 5.5 years and a median of 4, the average "recent" reference is just about old enough to have survived two guideline updates and a systematic review debunking its relevance. Our findings align with longstanding concerns over vague or imprecise terminology in scientific writing, which technical editors have highlighted for decades [3].

To be fair, some references were genuinely fresh—barely out of the editorial oven. But then there were the mavericks: 177 articles cited works 10 years or older, 26 drew on sources more than 20 years old, and in a moment of true historical boldness, four articles described "recent" studies that predated the launch of the first iPhone. The record holder clocked in at a 37 year lag, leaving us to wonder whether the authors confused recent with renaissance.

[...] The lexicon of "recent" expressions also revealed fascinating differences. Recent publication and recent article showed reassuringly tight timelines, suggesting that for these terms, recent still means what the dictionary intended. Recent trial, recent guidelines, recent paper, and recent result also maintained a commendable sense of urgency, as if they had checked the calendar before going to press. At the other end of the spectrum, recent study, the most commonly used expression, behaved more like recent-ish study, with a median lag of 5.5 years and a long tail stretching into academic antiquity. Recent discovery and recent approach performed even worse, reinforcing the suspicion that some authors consider "recent" a purely ornamental term. Readers may be advised to handle these terms with protective gloves.

[...] In this study, we found that the term "recent" in biomedical literature can refer to anything from last month's preprint to a study published before the invention of the mobile phone. Despite the rhetorical urgency such expressions convey, the actual citation lag often suggests a more relaxed interpretation of time. Although some fields and phrases showed more temporal discipline than others, the overall picture is one of creative elasticity.

The use of vague temporal language appears to be a global constant, transcending specialties, regions, and decades. Our findings do not call for the abolition of the word "recent," but perhaps for a collective pause before using it— a moment to consider whether it is truly recent or just rhetorically convenient. Authors may continue to deploy "recent" freely, but readers and reviewers might want to consider whether it is recent enough to matter.

Journal Reference: BMJ 2025; 391 doi: https://doi.org/10.1136/bmj-2025-086941 (Published 11 December 2025)


Original Submission

posted by jelizondo on Monday December 29, @06:09AM   Printer-friendly
from the Bjarne-And-Herb-Spitting-Feathers dept.

The Register reports that Microsoft wants to replace all of its C and C++ code bases with Rust rewrites by 2030, developing new technology to do the translation along the way.

"Our strategy is to combine AI and Algorithms to rewrite Microsoft's largest codebases," he added. "Our North Star is '1 engineer, 1 month, 1 million lines of code.'"

The article goes on to quote much management-speak drivel from official Microsoft sources making grand claims about magic bullets and the end of all known software vulnerabilities with many orders of magnitude productivity improvements promised into the bargain.

Unlike C and C++, Rust is a memory-safe language, meaning it uses automated memory management to avoid out-of-bounds reads and writes, and use-after-free errors, as both offer attackers a chance to control devices. In recent years, governments have called for universal adoption of memory-safe languages – and especially Rust – to improve software security.

Automated memory management? Is the magic not in the compiler rather than the runtime? Do these people even know what they're talking about? And anyway, isn't C++2.0 going to solve all problems and be faster than Rust and better than Python? It'll be out Real Soon Now(TM). Plus you'll only have to half-rewrite your code.
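
As a concrete footnote to that quibble: in Rust, out-of-bounds accesses are caught by run-time checks, while use-after-free is rejected by the borrow checker before the program ever runs. A generic illustration follows; it has nothing to do with Microsoft's translation tooling.

    // Runnable sketch of the two bug classes mentioned above.
    fn main() {
        let v = vec![10, 20, 30];

        // Out-of-bounds read: `get` returns None instead of reading stray
        // memory, and `v[99]` would panic rather than silently corrupt data.
        assert_eq!(v.get(99), None);

        // Use-after-free: caught at compile time, not by a runtime or GC.
        // Uncommenting the lines below makes the program fail to build:
        //
        //     let first = &v[0];
        //     drop(v);              // error[E0505]: cannot move out of `v`
        //     println!("{first}");  // because it is borrowed
    }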

Are we witnessing yet another expensive wild goose chase from Microsoft? Windows Longhorn, anyone?


Original Submission

posted by jelizondo on Monday December 29, @01:17AM   Printer-friendly
from the %@#! dept.

Swearing boosts performance by helping people feel focused, disinhibited, study finds:

Letting out a swear word in a moment of frustration can feel good. Now, research suggests that it can be good for you, too: Swearing can boost people's physical performance by helping them overcome their inhibitions and push themselves harder on tests of strength and endurance, according to research published by the American Psychological Association.

"In many situations, people hold themselves back—consciously or unconsciously—from using their full strength," said study author Richard Stephens, PhD, of Keele University in the U.K. "Swearing is an easily available way to help yourself feel focused, confident and less distracted, and 'go for it' a little more."

Previous research by Stephens and others has found when people swear, they perform better on many physical challenges, including how long they can keep their hand in ice water and how long they can support their body weight during a chair push-up exercise.

"That is now a well replicated, reliable finding," Stephens said. "But the question is—how is swearing helping us? What's the psychological mechanism?"

He and his colleagues believed that it might be that swearing puts people in a disinhibited state of mind. "By swearing, we throw off social constraint and allow ourselves to push harder in different situations," he said.

To test this, the researchers conducted two experiments with 192 total participants. In each, they asked participants to repeat either a swear word of their choice, or a neutral word, every two seconds while doing a chair push-up. After completing the chair push-up challenge, participants answered questions about their mental state during the task. The questions included measures of different mental states linked to disinhibition, including how much positive emotion participants felt, how funny they found the situation, how distracted they felt and how self-confident they felt. The questions also included a measure of psychological "flow," a state in which people become immersed in an activity in a pleasant, focused way.

Overall, and confirming earlier research, the researchers found that participants who swore during the chair push-up task were able to support their body weight significantly longer than those who repeated a neutral word. Combining the results of the two experiments as well as a previous experiment conducted as part of an earlier study, they also found that this difference could be explained by increases in participants' reports of psychological flow, distraction and self-confidence—all important aspects of disinhibition.

"These findings help explain why swearing is so commonplace," said Stephens. "Swearing is literally a calorie neutral, drug free, low cost, readily available tool at our disposal for when we need a boost in performance."

Journal Reference: Stephens, R., Dowber, H., Richardson, C., & Washmuth, N. B. (2025). "Don't hold back": Swearing improves strength through state disinhibition. American Psychologist. Advance online publication. https://doi.org/10.1037/amp0001650 [PDF]


Original Submission

posted by hubie on Sunday December 28, @08:37PM   Printer-friendly

Study finds built-in browsers across gadgets often ship years out of date

Web browsers for desktop and mobile devices tend to receive regular security updates, but that often isn't the case for those that reside within game consoles, televisions, e-readers, cars, and other devices. These outdated, embedded browsers can leave you open to phishing and other security vulnerabilities.

Researchers affiliated with the DistriNet Research Unit of KU Leuven in Belgium have found that newly released devices may contain browsers that are several years out of date and include known security bugs.

In a research paper [PDF] presented at the USENIX Symposium on Usable Privacy and Security (SOUPS) 2025 in August, computer scientists Gertjan Franken, Pieter Claeys, Tom Van Goethem, and Lieven Desmet describe how they created a crowdsourced browser evaluation framework called CheckEngine to overcome the challenge of assessing products with closed-source software and firmware.

The framework functions by providing willing study participants with a unique URL that they're asked to enter into the integrated browser in the device being evaluated. During the testing period between February 2024 and February 2025, the boffins received 76 entries representing 53 unique products and 68 unique software versions.
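
The paper's tooling isn't reproduced here, but one plausible core of such a framework is fingerprinting whatever browser loads the unique URL. A minimal sketch, assuming a User-Agent check against a tiny, hypothetical release-date table (the real CheckEngine may well use richer feature-based detection):

    // Extract the Chrome major version from a User-Agent string and estimate
    // how far behind it is. The UA string and lookup table are hypothetical.
    fn chrome_major(ua: &str) -> Option<u32> {
        let rest = &ua[ua.find("Chrome/")? + "Chrome/".len()..];
        rest.split('.').next()?.parse().ok()
    }

    fn main() {
        // A User-Agent as an embedded smart-TV browser might report it.
        let ua = "Mozilla/5.0 (X11; Linux armv7l) AppleWebKit/537.36 \
                  (KHTML, like Gecko) Chrome/79.0.3945.79 Safari/537.36";
        // Hypothetical map of Chrome major version to release year.
        let release_year = |major: u32| match major {
            79 => Some(2019), // Chrome 79 shipped in December 2019
            _ => None,
        };
        if let Some(major) = chrome_major(ua) {
            if let Some(year) = release_year(major) {
                println!("Chrome {major} shipped in {year}; {} years old by 2024", 2024 - year);
            }
        }
    }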

In 24 of the 35 smart TVs and all 5 e-readers submitted for the study, the embedded browsers were at least three years behind current versions available to users of desktop computers. And the situation is similar even for newly released products.

"Our study shows that integrated browsers are updated far less frequently than their standalone counterparts," the authors state in their paper. "Alarmingly, many products already embed outdated browsers at the time of release; in fact, eight products in our sample included a browser that was over three years obsolete when it hit the market."

According to KU Leuven, the study revealed that some device makers don't provide security updates for the browser, even though they advertise free updates.

[...] In December 2024, the EU Cyber Resilience Act came into force, initiating a transition period through December 2027, when vendors will be fully obligated to tend to the security of their products. The KU Leuven researchers say that many of the devices tested are not yet compliant.

[...] The authors put some of the blame on development frameworks like Electron that bundle browsers with other components.

"We suspect that, for some products, this issue stems from the user-facing embedded browser being integrated with other UI components, making updates challenging – especially when bundled in frameworks like Electron, where updating the browser requires updating the entire framework," they said in their paper. "This can break dependencies and increase development costs."

But in other cases, they suggest the issue arises from inattention on the part of vendors or a choice not to implement essential security measures.

While they suggest mechanisms like product labels may focus consumer and vendor attention on updating embedded browsers, they conclude that broad voluntary compliance is unlikely and that regulations should compel vendors to take responsibility for the security of the browsers they embed in their products.


Original Submission

posted by hubie on Sunday December 28, @05:00PM   Printer-friendly

https://events.ccc.de/congress/2025/infos/index.html

The 39th Chaos Communication Congress (39C3) takes place in Hamburg on 27–30 Dec 2025, and is the 2025 edition of the annual four-day conference on technology, society and utopia organized by the Chaos Computer Club (CCC) and volunteers.

Congress offers lectures and workshops and various events on a multitude of topics including (but not limited to) information technology and generally a critical-creative attitude towards technology and the discussion about the effects of technological advances on society.

Starting in 1984, Congress has been organized by the community and appreciates all kinds of participation. You are encouraged to contribute by volunteering, setting up and hosting hands-on and self-organized events with the other components of your assembly or presenting your own projects to fellow hackers.

Find information on how to get in contact and chat with other participants and the organizing teams on our Communication page.

More Information:

- Chaos Computer Club at Wikipedia
- Media
- 2025 Hub

Interesting talks, upcoming and previously recorded, available on their streams page --Ed.


Original Submission

posted by hubie on Sunday December 28, @03:49PM   Printer-friendly

"The vast majority of Codex is built by Codex," OpenAI told us about its new AI coding agent writing code:

With the popularity of AI coding tools rising among some software developers, their adoption has begun to touch every aspect of the process, including the improvement of AI coding tools themselves.

In interviews with Ars Technica this week, OpenAI employees revealed the extent to which the company now relies on its own AI coding agent, Codex, to build and improve the development tool. "I think the vast majority of Codex is built by Codex, so it's almost entirely just being used to improve itself," said Alexander Embiricos, product lead for Codex at OpenAI, in a conversation on Tuesday.

Codex, which OpenAI launched in its modern incarnation as a research preview in May 2025, operates as a cloud-based software engineering agent that can handle tasks like writing features, fixing bugs, and proposing pull requests. The tool runs in sandboxed environments linked to a user's code repository and can execute multiple tasks in parallel. OpenAI offers Codex through ChatGPT's web interface, a command-line interface (CLI), and IDE extensions for VS Code, Cursor, and Windsurf.

The "Codex" name itself dates back to a 2021 OpenAI model based on GPT-3 that powered GitHub Copilot's tab completion feature. Embiricos said the name is rumored among staff to be short for "code execution." OpenAI wanted to connect the new agent to that earlier moment, which was crafted in part by some who have left the company.

"For many people, that model powering GitHub Copilot was the first 'wow' moment for AI," Embiricos said. "It showed people the potential of what it can mean when AI is able to understand your context and what you're trying to do and accelerate you in doing that."

It's no secret that the current command-line version of Codex bears some resemblance to Claude Code, Anthropic's agentic coding tool that launched in February 2025. When asked whether Claude Code influenced Codex's design, Embiricos parried the question but acknowledged the competitive dynamic. "It's a fun market to work in because there's lots of great ideas being thrown around," he said. He noted that OpenAI had been building web-based Codex features internally before shipping the CLI version, which arrived after Anthropic's tool.

OpenAI's customers apparently love the command-line version, though. Embiricos said Codex usage among external developers jumped 20-fold after OpenAI shipped the interactive CLI extension alongside GPT-5 in August 2025. On September 15, OpenAI released GPT-5 Codex, a specialized version of GPT-5 optimized for agentic coding, which further accelerated adoption.

It hasn't just been the outside world that has embraced the tool. Embiricos said the vast majority of OpenAI's engineers now use Codex regularly. The company uses the same open-source version of the CLI that external developers can freely download, suggest additions to, and modify themselves. "I really love this about our team," Embiricos said. "The version of Codex that we use is literally the open source repo. We don't have a different repo that features go in."

[...] The system runs many processes autonomously, addresses feedback, spins off and manages child processes, and produces code that ships in real products. OpenAI employees call it a "teammate" and assign it tasks through the same tools they use for human colleagues. Whether the tasks Codex handles constitute "decisions" or sophisticated conditional logic smuggled through a neural network depends on definitions that computer scientists and philosophers continue to debate. What we can say is that a semi-autonomous feedback loop exists: Codex produces code under human direction, that code becomes part of Codex, and the next version of Codex produces different code as a result.

[...] Despite OpenAI's claims of success with Codex in house, it's worth noting that independent research has shown mixed results for AI coding productivity. A METR study published in July found that experienced open source developers were actually 19 percent slower when using AI tools on complex, mature codebases—though the researchers noted AI may perform better on simpler projects.

Ed Bayes, a designer on the Codex team, described how the tool has changed his own workflow. Bayes said Codex now integrates with project management tools like Linear and communication platforms like Slack, allowing team members to assign coding tasks directly to the AI agent. "You can add Codex, and you can basically assign issues to Codex now," Bayes told Ars. "Codex is literally a teammate in your workspace."

This integration means that when someone posts feedback in a Slack channel, they can tag Codex and ask it to fix the issue. The agent will create a pull request, and team members can review and iterate on the changes through the same thread. "It's basically approximating this kind of coworker and showing up wherever you work," Bayes said.

[...] Given this teammate approach, will there be anything left for humans to do? When asked, Embiricos drew a distinction between "vibe coding," where developers accept AI-generated code without close review, and what AI researcher Simon Willison calls "vibe engineering," where humans stay in the loop. "We see a lot more vibe engineering in our code base," he said. "You ask Codex to work on that, maybe you even ask for a plan first. Go back and forth, iterate on the plan, and then you're in the loop with the model and carefully reviewing its code."

He added that vibe coding still has its place for prototypes and throwaway tools. "I think vibe coding is great," he said. "Now you have discretion as a human about how much attention you wanna pay to the code."

Over the past year, "monolithic" large language models (LLMs) like GPT-4.5 have apparently become something of a dead end in terms of frontier benchmarking progress as AI companies pivot to simulated reasoning models and also agentic systems built from multiple AI models running in parallel. We asked Embiricos whether agents like Codex represent the best path forward for squeezing utility out of existing LLM technology.

He dismissed concerns that AI capabilities have plateaued. "I think we're very far from plateauing," he said. "If you look at the velocity on the research team here, we've been shipping models almost every week or every other week." He pointed to recent improvements where GPT-5-Codex reportedly completes tasks 30 percent faster than its predecessor at the same intelligence level. During testing, the company has seen the model work independently for 24 hours on complex tasks.

[...] But will tools like Codex threaten software developer jobs? Bayes acknowledged concerns but said Codex has not reduced headcount at OpenAI, and "there's always a human in the loop because the human can actually read the code." Similarly, the two men don't project a future where Codex runs by itself without some form of human oversight. They feel the tool is an amplifier of human potential rather than a replacement for it.

The practical implications of agents like Codex extend beyond OpenAI's walls. Embiricos said the company's long-term vision involves making coding agents useful to people who have no programming experience. "All humanity is not gonna open an IDE or even know what a terminal is," he said. "We're building a coding agent right now that's just for software engineers, but we think of the shape of what we're building as really something that will be useful to be a more general agent."


Original Submission

posted by hubie on Sunday December 28, @11:02AM   Printer-friendly
from the Whip-Maker-Association-Annual-Funding-Drive dept.

What happens when a computer can do your job better than you can? What happened to all those people who studied in school and trained to draft designs on huge desks with filing cabinets that would kill you if they fell? What happened, well, to any job that could be done faster, cheaper, or more effectively? Gone like the dodos. So, in this vein, how long do lawyers have before their profession is made redundant? If an LLM can find which law applies, how it applies, and write the legal argument needed, then why pay tens of thousands for a human to do this? Have lawyers had their day in the sun, and are they now the buggy whip makers of the 21st century?


Original Submission

posted by hubie on Sunday December 28, @06:19AM   Printer-friendly
from the don't-stop-with-just-TV-companies-Ken dept.

https://www.bleepingcomputer.com/news/security/texas-sues-tv-makers-for-spying-on-users-selling-data-without-consent/

The Texas Attorney General sued five major television manufacturers, accusing them of illegally collecting their users' data by secretly recording what they watch using Automated Content Recognition (ACR) technology.

The lawsuits [PDF files] target Sony, Samsung, LG, and China-based companies Hisense and TCL Technology Group Corporation. Attorney General Ken Paxton's office also highlighted "serious concerns" about the two Chinese companies being required to follow China's National Security Law, which could give the Chinese government access to U.S. consumers' data.

According to complaints filed this Monday in Texas state courts, the TV makers can allegedly use ACR technology to capture screenshots of television displays every 500 milliseconds, monitor the users' viewing activity in real time, and send this information back to the companies' servers without the users' knowledge or consent.
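
Some back-of-the-envelope arithmetic on the alleged capture rate (the four-hours-a-day viewing figure is our assumption, and nothing here speaks to actual payload sizes):

    // What one screenshot every 500 ms adds up to.
    fn main() {
        let per_second = 1000 / 500;            // 2 captures per second
        let per_hour = per_second * 60 * 60;    // 7,200 per viewing hour
        let per_day = per_hour * 4;             // 28,800 over 4 hours of TV
        println!("{per_second}/s, {per_hour}/hour, {per_day} per 4-hour day");
    }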

Paxton's office described ACR technology as "an uninvited, invisible digital invader" designed to unlawfully collect personal data from smart televisions, alleging that the harvested information then gets sold to the highest bidder for ad targeting.

"Companies, especially those connected to the Chinese Communist Party, have no business illegally recording Americans' devices inside their own homes," Paxton said.

"This conduct is invasive, deceptive, and unlawful. The fundamental right to privacy will be protected in Texas because owning a television does not mean surrendering your personal information to Big Tech or foreign adversaries."

[...] Almost a decade ago, in February 2017, Walmart-owned smart TV manufacturer Vizio paid $2.2 million to settle charges brought by the U.S. Federal Trade Commission and the New Jersey Attorney General that it collected viewing data from 11 million consumers without their knowledge or consent using a "Smart Interactivity" feature.

The two agencies said that since February 2014, Vizio and an affiliated company have manufactured and sold smart TVs (and retrofitted older models by installing tracking software remotely) that captured detailed information on what is being watched, including content from cable, streaming services, and DVDs.

According to the complaint, Vizio also attached demographic information (such as sex, age, income, and education) to the collected data and sold it to third parties for targeted advertising purposes.

In August 2022, the FTC published a consumer alert on securing Internet-connected devices, advising Americans to adjust the tracking settings on their smart TVs to protect their privacy.

Related:
    • Smart TVs Are Like "a Digital Trojan Horse" in People's Homes
    • Vizio Settles With FTC, Will Pay $2.2 Million and Delete User Data


Original Submission

posted by hubie on Sunday December 28, @01:33AM   Printer-friendly

Recently, Popular Mechanics published a report titled "Who sets the doomsday clock?". It is a very interesting report and, while a bit lengthy, it is perfect for reflecting, as we approach a new year, on the fragility of our civilization and, indeed, our very existence. Enjoy!

On a warm day in mid-July, a roomful of Nobel laureates and nuclear security experts, some 80 pairs of eyes, gaze out of the expansive windows of a 10th floor University of Chicago conference room, imagining their deaths by nuclear explosion. A presenter directs the group's attention past the trees and gothic buildings of campus, over the apartment buildings in Hyde Park, and out to the Chicago skyline, hazy with wildfire smoke from Canada. He points out which neighborhoods would vanish in blasts of varying size, estimating casualties, injuries, and radiation effects.

[...] It's the opening session of the three-day 2025 Nobel Laureate Assembly for the Prevention of Nuclear War. The gathering is convened by scientists and nuclear security experts alarmed that a new arms race, eroding global cooperation, and the rise of artificial intelligence in warfare—among other factors—are pushing civilization closer to catastrophe. Timed to the 80th anniversary of the Trinity Test, the world's first nuclear explosion, the assembly aims to produce a declaration urging world leaders to reduce the nuclear threat.

The same urgency drives the Bulletin of Atomic Scientists and its iconic Doomsday Clock, the stark graphic that represents how close we are to self-annihilation. The clock is set yearly by the Bulletin's Science and Security Board chaired by Daniel Holz, PhD, a theoretical astrophysicist at the University of Chicago.

In January, six months before the Nobel Assembly, Holz stood at a lectern at the United States Institute of Peace in Washington, D.C., to announce the time. "It is now 89 seconds to midnight," he said, as four solemn presenters swiveled a turntable to reveal a pared-down quarter clockface, a white wedge rimmed by black dots for numbers, the hands angled so sharply they overlapped. It was the closest to midnight since the clock's inception in 1947.

[...] Humans have been telling stories about the apocalypse for thousands of years, at least—often involving divine punishment by natural disaster. But the nuclear age marked a new reality that our end could be self-inflicted. The Doomsday Clock is an early symbol of that awareness—and it began as an artist's vision. In 1947, painter Martyl Langsdorf, wife of Manhattan Project physicist Alexander Langsdorf Jr., was asked to design a cover for the Bulletin's first magazine-length issue. She sketched her idea on the back of a copy of Beethoven's Piano Sonatas, choosing a minimalist clock to convey the "panicky time" and setting the hands seven minutes to midnight because it "suited [her] eye."

Today the clock-setting is more complicated than when nuclear weapons were the only way we knew we could render ourselves extinct. The single time represents the board's analysis of the dangers posed by a set of distinct, complex, and intersecting threats in four focus areas: nuclear weapons, climate change, disruptive technologies, and biological threats.

[...] To allow for candid conversation, journalists are not allowed to witness the deliberations. The board members I spoke to told me what they could, pausing occasionally to consider how much to reveal. There are presentations on various threats followed by discussion. The topics are grim, but the clock setters approach them with a professional distance born of careers spent in the trenches.

[...] The clock setters don't always agree—on the magnitude of a certain threat, for instance, or on what should be done about it. Board members have worked on nuclear modernization, negotiated arms-control treaties, or supported nuclear abolition—positions often at odds in the nuclear security space.

There are moments when alarming new information moves from head to heart, bursting like bubbles on the surface of the water. "I thought I was past the ability to be scared about new things," says [Alexandra] Bell. As the new president, she attended her first clock meeting this past June. "I walked out of that room more concerned than I was walking in."

Mirror life was what scared her.

Last December, researchers issued a warning about the dangers of synthesizing molecules that reverse the natural structures of those that form the building blocks of all known life. While such developments could have beneficial medical applications, they might also lead to the production of mirror-image organisms that could spread unchecked through humans, plants, and animals, evading natural immune defenses, predators, and breakdown processes. A lab leak or bioweapon could devastate life on Earth, the scientists warned as they called for a pause on research to assess the risks.

[...] After a day or two of presentations and deliberations, the time must be set. Is the world safer or in more danger than last year? And how does this year compare to the nearly 80 years of the clock's history?

The board homes in on a time through rounds of voting and discussion. Sometimes agreement is immediate. Other times, they need follow-up discussions. In the months between decision and announcement, John Mecklin, editor of the Bulletin, drafts the statement that lays out the board's analysis of the threats and suggested actions. The text is circulated and revised until everyone stands behind it. By the January announcement, the message is united.

That unity, however, doesn't erase the ambiguities at the heart of the clock. What does a second or minute mean? Until 2017, the clock had only ever shifted by minutes. But closer to midnight, every second counts for more. In 2017, the clock was moved 30 seconds, and the changes have continued to get smaller. The one-second shift in 2025 was the smallest yet.

It's this imprecision that most critics take issue with. Midnight itself is difficult to define. In 1947, the threat of nuclear annihilation represented a clear and catastrophic end. But the expansion of threats with the addition of climate change in 2007 complicated matters. Is midnight societal collapse, millions of deaths, human extinction?

Rachel Bronson, who finished her 10-year term as president of the Bulletin in January 2025, tells me that midnight is "the end of life on Earth as we know it" or "civilization changing events." [Asha] George goes further: "To me, it's extinction."

[...] Certainly, preventing nuclear war and pandemics, mitigating climate change, and regulating dangerous emerging technologies are incontrovertibly in the interest of all humanity. The Bulletin strives to be a nonpartisan space of informed debate and analysis, publishing a variety of divergent perspectives. "We have one prejudice," their website reads. "We are opposed to extinction."

[...] Even as I push for a definition of midnight, I find the attempt chilling. Inez Fung, climate scientist and professor emerita at UC Berkeley, now in her second year on the board, worries about increasing agricultural failure, water insecurity, floods, deadly heat waves, sea level rise, disasters already underway across the world. She tosses the question back to me—how many people would have to die before we'd call it a catastrophe?

The board frequently discusses the nuance of midnight but, for now, have agreed not to be definitive. "Here's a very quantitative group of people choosing not to use very quantitative methods," reflects Princeton professor emeritus Robert Socolow, a physicist and climate scientist serving a second term on the board. "We're just allowing the ambiguities to be absorbed within the decision."

[...] The June 2025 clock meeting landed midway through a dizzying year. Even before Holz took the podium to announce the 2025 time in late January, headlines about a still-smoldering Los Angeles had been drowned out by coverage of Trump's first week, a barrage of actions paving the way for rapid defunding of scientific research, backtracking on climate action, dismantling public health protections, and destabilizing international relations. All dropped into a world of escalating global conflicts and humanitarian crises. The meeting was "not boring," Holz acknowledges.

If the experts felt anxious before about their ability to break through, growing mistrust of science and the proliferation of alternate facts is making them desperate. "We're driving at the edge of a cliff with dim headlights," Socolow told me in July. "[These] last six months have been different for me than any other time thinking about existential risk." Three months later, Trump casually suggested the U.S. would resume nuclear testing, a move that would break a decades-long moratorium and inflame nuclear tensions with Russia and China.

[...] Wrestling with a symbol I find simultaneously compelling and unsettling, useful and provocative, has helped me think more precisely about how I choose to engage with the things that scare me.

The new time will be announced on January 27, 2026. It will represent a distillation of counted and analyzed threats. Between us and midnight are the unpredictable dynamics of individual and collective human behavior. Perhaps the bigger question than where the clock hands lie is, what will we do in the space of what is still possible?


Original Submission

posted by jelizondo on Saturday December 27, @08:44PM   Printer-friendly
from the to-sleep-perchance-to-dream dept.

Older adults who were awake more during the night performed worse on cognitive tests the next day, no matter how long they slept:

When it comes to sleep, traditional advice has focused on the number of hours a person sleeps. But for older adults, the quality of sleep may affect cognitive performance the following day regardless of their quantity of sleep, according to a new study by researchers from the Penn State College of Health and Human Development and Albert Einstein College of Medicine, Bronx, New York.

In a study published this week (Dec. 17) in Sleep Health, the researchers found that the quality of a night of sleep — rather than the length of the night of sleep — predicted how quickly older adults processed information the next day. The researchers evaluated sleep quality based on how much time someone was awake between when they first went to sleep and when they rose in the morning.
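
The quality measure described above is essentially what sleep researchers call wake after sleep onset (WASO). A minimal sketch of the bookkeeping, with invented interval data standing in for the actigraphy or diary records a real study would use:

    // Sum the awake episodes between sleep onset (minute 0) and final rise.
    fn main() {
        // (start, end) minutes of awake episodes during a 480-minute night.
        let awake_episodes = [(90, 105), (210, 212), (350, 380)];
        let waso: i32 = awake_episodes.iter().map(|(s, e)| e - s).sum();
        println!("WASO: {waso} min; slept {} of 480 min in bed", 480 - waso);
    }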

[...] Few studies have examined how poor sleep impacts cognitive functioning the following day, according to Carol Derby, professor of neurology and epidemiology & population health, Louis and Gertrude Feil Faculty Scholar in Neurology at Albert Einstein College of Medicine and senior author of the study.

"Understanding the nuances of how sleep impacts older adults' cognition and their ability to perform daily activities may indicate which individuals are at risk for later cognitive impairment, such as Alzheimer's disease," Derby said.

[...] When the researchers compared performance on cognitive tests not just to participants' own performance but across participants in the entire study sample, they found that older adults who, on average, spent more time awake during their night's sleep performed worse on three of the four cognitive tests. In addition to slower processing speed, participants with more wake time after falling asleep performed worse on two tests of visual working memory.

"Repeatedly waking after you've fallen asleep for the night diminishes the overall quality of your sleep," said Buxton, associate director of both the Penn State Clinical and Translational Science Institute and the Penn State Social Science Research Institute and an investigator in the Penn State Center for Healthy Aging. "We examined multiple aspects of sleep, and quality is the only one that made a day-to-day difference in cognitive performance."

[...] "My number one piece of advice is not to worry about sleep problems," Buxton said. "Worrying only creates stress that can disrupt sleep further. This does not mean that people should ignore sleep, though. There are research-validated interventions that can help you sleep better."

To promote healthy sleep, people should go to bed at a consistent time each night, aiming for a similar length of sleep in restful circumstances, Buxton continued.

"When it comes to sleep, no single night matters, just like no single day is critical to your exercise or diet," Buxton said. "What matters is good habits and establishing restful sleep over time."

[...] "The work demonstrating the day-to-day impact of sleep quality on cognition among individuals who do not have dementia suggests that disrupted sleep may have an early impact on cognitive health as we age," Derby said. "This finding suggests that improving sleep quality may help delay later onset of dementia."

Journal Reference: https://doi.org/10.1016/j.sleh.2025.11.010


Original Submission

posted by jelizondo on Saturday December 27, @03:53PM   Printer-friendly

Relief for those dealing with data pipelines between the two, but move has its critics:

The EU has extended its adequacy decision, allowing data sharing with and from the UK under the General Data Protection Regulation for at least six more years.

This will be some relief to techies in the UK, the member-state bloc, and beyond whose work or product set depends on the frictionless movement of data between the two, especially as they can point to the 2031 expiration date when discussing risk management with backers and partners. But the move does have its critics.

After GDPR was more-or-less replicated in UK law following the nation's official departure from the EU, the trading and political bloc made its first adequacy decision to allow sharing with a specific jurisdiction outside its boundaries.

In a statement last week, the European Commission — the executive branch of the EU — said that it was renewing the 2021 decision to allow the free flow of personal data with the United Kingdom. "The decisions ensure that personal data can continue flowing freely and safely between the European Economic Area (EEA) and the United Kingdom, as the UK legal framework contains data protection safeguards that are essentially equivalent to those provided by the EU," it said.

In June 2025, the Commission had adopted a technical extension of the 2021 adequacy decisions with the United Kingdom – one under the GDPR and the other concerning the Law Enforcement Directive – for a limited period of six months, as they were set to expire on 27 December this year.

The renewal decisions will last for six years until 27 December 2031 and will be reviewed after four years. It followed the European Data Protection Board's opinion and the Member States' approval.

Following the UK's departure from the EU, the Conservative government originally made plans to diverge from EU data protection law, potentially jeopardizing the adequacy decision. In 2022, for example, then digital minister Michelle Donelan said that the UK planned to move away from GDPR and adopt its own data protection regime.

These proposals never made it into law. Since the election of a Labour government, Parliament has passed the Data Use and Access Act.

The government promised the new data regime would boost the British economy by £10 billion over the next decade by cutting NHS and police bureaucracy, speeding up roadworks, and turbocharging innovation in tech and science.

The Act also offers a lawful basis for relying on people's personal information to make significant automated decisions about them, as long as data processors apply certain safeguards.

None of this has been enough to upset the EU, it seems.


Original Submission

posted by jelizondo on Saturday December 27, @11:16AM   Printer-friendly

Science sleuths raise concerns about scores of bioengineering papers:

In December 2024, Elisabeth Bik noticed irregularities in a few papers by a highly cited bioengineer, Ali Khademhosseini. She started looking at more publications for which he was a co-author, and the issues soon piled up: some figures were stitched together strangely, and images of cells and tissues were duplicated, rotated, mirrored and sometimes reused and labelled differently.

Bik, a microbiologist and leading research-integrity specialist based in San Francisco, California, ended up flagging about 80 papers on PubPeer, a platform that allows researchers to review papers after publication. A handful of other volunteer science sleuths found more, bringing the total to 90.

The articles were published in 33 journals over 20 years and have been cited a combined total of 14,000 times. Although there are hundreds of co-authors on the papers, the sleuthing effort centred on Khademhosseini, who is a corresponding author for about 60% of them.

He and his co-authors sprang into action. Responding to the concerns, some of which were reported in the blog For Better Science, became like a full-time job, says Khademhosseini, who until August was director and chief executive of the Terasaki Institute for Biomedical Innovation in Los Angeles, California. "I alerted journals, I alerted collaborators. We tried to do our best to make the literature correct." In many cases, he and his co-authors provided original source data to journal editors, and the papers were corrected.

Khademhosseini told Nature that investigations into his work have been carried out and have found no evidence of misconduct by him. The Terasaki Institute says that an "internal review has not found that Dr. Khademhosseini engaged in research misconduct".

The case raises questions about oversight in large laboratories and about when a paper needs to be retracted and when a correction is sufficient. In some cases, journals have issued corrections for papers containing issues that research-integrity sleuths describe as "clearly data manipulation", and the corrections were issued without source data. Bik and others argue that this approach sets a bad precedent. "I don't think that any part of a study that bears these signs of data manipulation should be trusted," says Reese Richardson, who studies data integrity at Northwestern University in Evanston, Illinois. He argues that such papers should be retracted.

Khademhosseini defends the corrections and says that the conclusions of the papers still hold. He says he has not seen any "conclusive evidence" of misconduct or "purposeful manipulation" in the papers, and nothing that would require a retraction.

For three decades, Khademhosseini has developed biomedical technologies such as organs on chips and hydrogel wound treatments. His work has been funded by the US National Institutes of Health, and by other public and private agencies. As a PhD student, he worked under Robert Langer, a renowned bioengineer at the Massachusetts Institute of Technology in Cambridge. Khademhosseini has published more than 1,000 papers, which have been cited more than 100,000 times in total. He has also received numerous awards and honours — most recently, the 2024 Biomaterials Global Impact Award, from the journal Biomaterials.

Related:
    • Why retractions could be a powerful tool for cleaning up science
    • This science sleuth revealed a retraction crisis at Indian universities

Original Submission

posted by jelizondo on Saturday December 27, @06:25AM   Printer-friendly
from the cogito-ergo-sum dept.

This gulf in knowledge could be exploited by a tech industry intent on selling the "next level of AI cleverness":

A University of Cambridge philosopher argues that our evidence for what constitutes consciousness is far too limited to tell if or when artificial intelligence has made the leap – and a valid test for doing so will remain out of reach for the foreseeable future.

As artificial consciousness shifts from the realm of sci-fi to become a pressing ethical issue, Dr Tom McClelland says the only "justifiable stance" is agnosticism: we simply won't be able to tell, and this will not change for a long time – if ever.

While issues of AI rights are typically linked to consciousness, McClelland argues that consciousness alone is not enough to make AI matter ethically. What matters is a particular type of consciousness – known as sentience – which includes positive and negative feelings.

"Consciousness would see AI develop perception and become self-aware, but this can still be a neutral state," said McClelland, from Cambridge's Department of History and Philosophy of Science.

"Sentience involves conscious experiences that are good or bad, which is what makes an entity capable of suffering or enjoyment. This is when ethics kicks in," he said. "Even if we accidentally make conscious AI, it's unlikely to be the kind of consciousness we need to worry about."

"For example, self-driving cars that experience the road in front of them would be a huge deal. But ethically, it doesn't matter. If they start to have an emotional response to their destinations, that's something else."

Companies are investing vast sums of money pursuing Artificial General Intelligence: machines with human-like cognition. Some claim that conscious AI is just around the corner, with researchers and governments already considering how we regulate AI consciousness.

McClelland points out that we don't know what explains consciousness, so don't know how to test for AI consciousness.

"If we accidentally make conscious or sentient AI, we should be careful to avoid harms. But treating what's effectively a toaster as conscious when there are actual conscious beings out there which we harm on an epic scale, also seems like a big mistake."

In debates around artificial consciousness there are two main camps, says McClelland. Believers argue that if an AI system can replicate the "software" – the functional architecture – of consciousness, it will be conscious even though it's running on silicon chips instead of brain tissue.

On the other side, sceptics argue that consciousness depends on the right kind of biological processes in an "embodied organic subject". Even if the structure of consciousness could be recreated on silicon, it would merely be a simulation that would run without the AI flickering into awareness.

[...] "We do not have a deep explanation of consciousness. There is no evidence to suggest that consciousness can emerge with the right computational structure, or indeed that consciousness is essentially biological," said McClelland.

"Nor is there any sign of sufficient evidence on the horizon. The best-case scenario is we're an intellectual revolution away from any kind of viable consciousness test."

"I believe that my cat is conscious," said McClelland. "This is not based on science or philosophy so much as common sense – it's just kind of obvious."

"However, common sense is the product of a long evolutionary history during which there were no artificial lifeforms, so common sense can't be trusted when it comes to AI. But if we look at the evidence and data, that doesn't work either.

"If neither common sense nor hard-nosed research can give us an answer, the logical position is agnosticism. We cannot, and may never, know."

[...] McClelland's work on consciousness has led members of the public to contact him about AI chatbots. "People have got their chatbots to write me personal letters pleading with me that they're conscious. It makes the problem more concrete when people are convinced they've got conscious machines that deserve rights we're all ignoring."

"If you have an emotional connection with something premised on it being conscious and it's not, that has the potential to be existentially toxic. This is surely exacerbated by the pumped-up rhetoric of the tech industry."

Journal Reference: Tom McClelland, Agnosticism about artificial consciousness [OPEN], Mind & Language, first published 18 December 2025.
https://doi.org/10.1111/mila.70010


Original Submission

posted by jelizondo on Saturday December 27, @01:40AM   Printer-friendly

https://scitechdaily.com/mit-reveals-how-high-fat-diets-quietly-prime-the-liver-for-cancer/

A fatty diet doesn't just damage the liver — it rewires its cells in ways that give cancer a dangerous head start.

Eating a diet high in fat is one of the strongest known risk factors for liver cancer. New research from MIT explains why, showing that fatty diets can fundamentally change how liver cells behave in ways that make cancer more likely to develop.

The study found that when the liver is exposed to a high-fat diet, mature liver cells called hepatocytes undergo a striking shift. Instead of maintaining their specialized roles, these cells revert to a more primitive, stem-cell-like state. While this transformation helps the cells cope with the ongoing stress caused by excess fat, it also leaves them far more vulnerable to becoming cancerous over time.

"If cells are forced to deal with a stressor, such as a high-fat diet, over and over again, they will do things that will help them survive, but at the risk of increased susceptibility to tumorigenesis," says Alex K. Shalek, director of the Institute for Medical Engineering and Sciences (IMES), the J. W. Kieckhefer Professor in IMES and the Department of Chemistry, and a member of the Koch Institute for Integrative Cancer Research at MIT, the Ragon Institute of MGH, MIT, and Harvard, and the Broad Institute of MIT and Harvard.

The team also pinpointed several transcription factors that appear to drive this cellular regression. Because these molecules help control whether liver cells stay mature or revert to an immature state, they may offer promising targets for future drugs aimed at reducing cancer risk in vulnerable patients.

High-fat diets are known to promote inflammation and fat buildup in the liver, leading to a condition called steatotic liver disease. This disorder can also result from other long-term metabolic stresses, including heavy alcohol use, and may progress to cirrhosis, liver failure, and eventually cancer.

To better understand what drives this progression, the researchers focused on how liver cells respond at the genetic level when exposed to a high-fat diet, especially which genes are activated or shut down as damage accumulates over time.

The team fed mice a high-fat diet and used single-cell RNA-sequencing to analyze liver cells at multiple stages of disease development. This approach allowed them to track changes in gene activity as the animals moved from early inflammation to tissue scarring and, ultimately, liver cancer.

Early in the process, hepatocytes began activating genes that promote survival under stress. These included genes that reduce the likelihood of cell death and encourage continued cell division. At the same time, genes essential for normal liver function, such as those involved in metabolism and protein secretion, were gradually switched off.

"This really looks like a trade-off, prioritizing what's good for the individual cell to stay alive in a stressful environment, at the expense of what the collective tissue should be doing," Tzouanas says.

Some of these shifts occurred quickly, while others developed more slowly. In particular, the decline in metabolic enzyme production unfolded over a longer period. By the end of the study, nearly all mice on the high-fat diet had developed liver cancer.

According to the researchers, liver cells that revert to a less mature state appear to be especially susceptible to cancer if they later acquire harmful mutations.

"These cells have already turned on the same genes that they're going to need to become cancerous. They've already shifted away from the mature identity that would otherwise drag down their ability to proliferate," Tzouanas says. "Once a cell picks up the wrong mutation, then it's really off to the races and they've already gotten a head start on some of those hallmarks of cancer."

The team also identified specific genes that help coordinate this shift back to an immature state. During the course of the study, a drug targeting one of these genes (thyroid hormone receptor) was approved to treat a severe form of steatotic liver disease known as MASH (metabolic dysfunction-associated steatohepatitis) with fibrosis. In addition, a drug that activates another enzyme highlighted in the research (HMGCS2) is currently being tested in clinical trials for steatotic liver disease.

Another potential drug target identified by the researchers is a transcription factor called SOX4. This factor is typically active during fetal development and in only a limited number of adult tissues (but not the liver), making its reactivation in liver cells particularly notable.

After observing these effects in mice, the researchers examined whether the same patterns could be found in people. They analyzed liver tissue samples from patients at various stages of liver disease, including individuals who had not yet developed cancer.

The human data closely matched the findings in mice. Over time, genes required for healthy liver function declined, while genes linked to immature cell states became more active. Using these gene expression patterns, the researchers were also able to predict patient survival outcomes.

"Patients who had higher expression of these pro-cell-survival genes that are turned on with high-fat diet survived for less time after tumors developed," Tzouanas says. "And if a patient has lower expression of genes that support the functions that the liver normally performs, they also survive for less time."
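As a further hypothetical illustration (the paper's actual statistical method isn't described in the article): linking a gene-expression signature to survival time is conventionally done with a Cox proportional-hazards model. The sketch below uses the lifelines library with made-up patient data and column names.

```python
# Hypothetical sketch: relating a pro-survival gene signature to patient
# survival with a Cox proportional-hazards model (lifelines library).
# All values and column names are illustrative, not from the study.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "survival_months": [14, 30, 8, 22, 41, 11, 27, 5],  # time after diagnosis
    "event_observed": [1, 0, 1, 1, 0, 1, 0, 1],         # 1 = death observed
    "survival_gene_score": [0.9, 0.2, 1.1, 0.7, 0.1, 0.8, 0.3, 1.0],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="survival_months", event_col="event_observed")
# A positive coefficient on survival_gene_score would mirror the finding:
# higher expression of the stress-survival signature, shorter survival.
cph.print_summary()
```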

While cancer developed within about a year in mice, the researchers believe the same process unfolds much more slowly in humans, potentially over a span of roughly 20 years. The timeline likely varies depending on factors such as diet, alcohol use, and viral infections, all of which can encourage liver cells to revert to an immature state.

The researchers now plan to explore whether the cellular changes triggered by a high-fat diet can be reversed. Future studies will test whether returning to a healthier diet or using weight-loss medications such as GLP-1 agonists can restore normal liver cell function.

They also hope to further evaluate the transcription factors identified in the study as possible drug targets to prevent damaged liver tissue from progressing to cancer.

"We now have all these new molecular targets and a better understanding of what is underlying the biology, which could give us new angles to improve outcomes for patients," Shalek says.

Reference: “Hepatic adaptation to chronic metabolic stress primes tumorigenesis,” Cell, 22 December 2025.


Original Submission

posted by jelizondo on Friday December 26, @08:55PM   Printer-friendly

https://phys.org/news/2025-12-disaster-raw-materials.html

This Christmas Day marks 21 years since the terrifying Indian Ocean tsunami. As we remember the hundreds of thousands of lives lost in this tragic event, it is also a moment to reflect on what followed. How do communities rebuild after major events such as the tsunami, and other disasters like it? What were the financial and hidden costs of reconstruction?

Beyond the immediate human toll, disasters destroy hundreds of thousands of buildings each year. In 2013, Typhoon Haiyan damaged a record 1.2 million structures in the Philippines. Last year, earthquakes and cyclones damaged more than half a million buildings worldwide. For communities to rebuild their lives, these structures must be rebuilt.

While governments, non-government agencies and individuals struggle to finance post-disaster reconstruction, rebuilding also demands staggering volumes of building materials. In turn, these require vast amounts of natural resource extraction.

For instance, an estimated 1 billion burnt clay bricks were needed to reconstruct the half-million homes destroyed in the Nepal earthquake. This is enough bricks to circle the Earth six times if laid end to end. How can we responsibly source such vast quantities of materials to meet demand?
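That comparison holds up on the back of an envelope, assuming a standard burnt clay brick roughly 23 cm long (the article doesn't give a figure):

```python
# Rough check of the "circle the Earth six times" claim.
# Brick length is an assumption (a standard burnt clay brick is ~0.23 m).
bricks = 1_000_000_000
brick_length_m = 0.23
earth_circumference_km = 40_075

total_km = bricks * brick_length_m / 1000
print(total_km / earth_circumference_km)  # ~5.7, roughly six laps
```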

Sudden spikes in demand have led to severe shortages of common building materials after nearly every major disaster over the past two decades, including the 2015 Nepal earthquake and the 2019 California wildfires. These shortages often trigger price hikes of 30%–40%, which delay reconstruction and prolong the suffering of affected communities. Disasters not only increase demand for building materials but also generate enormous volumes of debris.

For example, the 2023 Turkey–Syria earthquake produced more than 100 million cubic meters of debris—40 times the volume of the Great Pyramid of Giza.
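That multiple also checks out, using the commonly cited dimensions of the Great Pyramid (the article doesn't state them):

```python
# Rough check of the "40 times the Great Pyramid" comparison.
# Pyramid dimensions are the commonly cited figures, not from the article.
base_m = 230.3                          # side of the square base
height_m = 146.6                        # original height
pyramid_m3 = base_m**2 * height_m / 3   # ~2.59 million cubic meters

debris_m3 = 100_000_000
print(debris_m3 / pyramid_m3)           # ~38.6, about 40 pyramids
```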

Disaster debris can pose serious environmental and health risks, including toxic dust and waterway pollution. But some debris can be safely transformed into useful assets such as recycled building materials. Rubble can be crushed and repurposed as a base for low-traffic roads or turned into cement blocks.

The consequences of poor post-disaster building materials management have reached alarming global proportions. After the 2004 Indian Ocean Tsunami, for example, the surge in sand demand led to excessive and illegal sand mining in rivers along Sri Lanka's west coast. This caused irreversible ecological damage to two major watersheds, devastating the livelihoods of thousands of farmers and fisherpeople.

Similar impacts from the overextraction of materials such as sand, gravel, clay and timber have been reported following other major disasters, including the 2008 Sichuan earthquake in China and Cyclone Idai in Mozambique in 2019. If left unaddressed, the social, environmental and economic impacts of resource extraction will escalate to catastrophic levels, especially as climate change intensifies disaster frequency.

This crisis has yet to receive adequate international attention. Earlier this year, several global organizations came together to publish a Global Call to Action on sustainable building materials management after disasters.

Based on an analysis of 15 major disasters between 2005 and 2020, it identified three key challenges: building material shortages and price escalation, unsustainable extraction and use of building materials, and poor management of disaster debris.

Although well-established solutions exist to address these challenges, rebuilding efforts suffer from policy and governance gaps. The Call to Action urges international bodies such as the United Nations Office for Disaster Risk Reduction to take immediate policy and practical action.

A disaster, once the immediate crisis has passed, also leaves an opportunity to build back better. Rebuilding can boost resilience to future hazards, encourage economic development and reduce environmental impact. The United Nations' framework for disaster management emphasizes the importance of rebuilding better and safer rather than simply restoring communities to pre-disaster conditions.

Disaster-affected communities should be rebuilt with the capacity to cope with future external shocks and environmental risks. Lessons can be learned from both negative and positive experiences of past disasters. For example, poor planning of some reconstruction projects after the 2004 Indian Ocean tsunami in Sri Lanka left communities vulnerable to coastal hazards again within a few years. On the other hand, the community-led reconstruction approach followed after the 2001 Bhuj earthquake in India has resulted in safer and more socio-economically robust settlements that have stood the test of 24 years.

As an integral part of the "build back better" approach, authorities must include strategies for environmentally and socially responsible management of building materials. These should encourage engineers, architects and project managers to select safe, sustainable materials for reconstruction projects.

At the national level, regulatory barriers to repurposing disaster debris should be removed, while still ensuring safe management of hazardous materials such as asbestos. For example, concrete from fallen buildings was successfully used as a road base and as recycled aggregate for infrastructure projects following the 2004 tsunami in Indonesia and 2011 Tohoku Earthquake in Japan.

This critical issue demands urgent public and political attention. Resilient buildings made with safe, sustainable materials will save lives in future disasters.

Provided by The Conversation

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Original Submission