
SoylentNews is people







How far do you currently live from the town where you grew up?

  • less than 60 mi or 100 km
  • greater than that, but less than 300 mi or 500 km
  • greater than the above, but less than 600 mi or 1,000 km
  • greater than the above, but less than 3,000 mi or 5,000 km
  • greater than the above, but less than 6,000 mi or 10,000 km
  • greater than the above, but less than 12,000 mi or 20,000 km
  • greater than 12,000 mi or 20,000 km (the truth is out there)
  • I never grew up, you insensitive clod!


posted by mrpg on Monday December 15, @02:11PM

https://www.wired.com/story/scientists-thought-parkinsons-was-in-our-genes-it-might-be-in-the-water/

[...] Parkinson's is the second most common neurological disease in the United States, after Alzheimer's; each year 90,000 Americans are diagnosed. For decades, Parkinson's research has focused on genetics, on finding the rogue letters in our genome that cause this incurable misery. Today, published research on the genetics behind Parkinson's outnumbers all other potential causes six to one. This is partially because one of the disease's most generous benefactors, Google cofounder Sergey Brin, can tie Parkinson's to his genetics. Some Parkinson's patients diagnosed before age 50—as Michael J. Fox was—can trace the disease to their genes; Brin, whose mother has the disease, carries a mutation of the LRRK2 gene, which significantly increases the likelihood of him developing PD. Over the years, Fox's foundation has raised billions for Parkinson's research, and Brin has personally committed $1.8 billion to fighting the disorder. All told, more than half of Parkinson's research dollars in the past two decades have flowed toward genetics.

But Parkinson's rates in the US have doubled in the past 30 years. And studies suggest they will climb another 15 to 35 percent in each coming decade. This is not how an inherited genetic disease is supposed to behave.

Despite the avalanche of funding, the latest research suggests that only 10 to 15 percent of Parkinson's cases can be fully explained by genetics. The rest are, functionally, a mystery. "More than two-thirds of people with PD don't have any clear genetic link," says Briana De Miranda, a researcher at the University of Alabama at Birmingham. "So, we're moving to a new question: What else could it be?"

"The health you enjoy or don't enjoy today is a function of your environment in the past," says Ray Dorsey, a physician and professor of neurology at the University of Rochester. Your "environment" could be the refinery a town over, the lead in the paint of your mother's home, the plastic sheath of the Hot Pocket you microwaved in 1996. It is air pollution and PFAS and pesticides and so much more.

And this environment of yours—the sum of all your exposures, from conception to the grave—could be making you sicker than you realize. In a study of half a million Britons, Oxford researchers determined that lifestyle and environment are 10 times more likely than genetics to explain early death. But that also offers a tantalizing prospect. If Parkinson's is an environmental disease, as Dorsey and a small band of researchers emphatically believe, then maybe we can end it.


Original Submission

posted by mrpg on Monday December 15, @09:22AM

The State of Open Source Software in 2025:

A few weeks ago, Linux Foundation Research published "The State of Global Open Source 2025," the third annual report based on its survey of the open source community. The report highlights the evolution of open source software (OSS) from a productivity tool to a key component of global mission-critical infrastructures. The 2025 global survey on which it's based confirms that organizations depend on OSS as the backbone of their critical systems.

Given my long involvement with open source technologies and the Linux Foundation, I was invited to write the Foreword of the 2024 Open Source report, where I tried to explain why open source has been so successful over the past several decades:

"For centuries, experts have worked together to jointly address some of the most complex and important problems of their times, from exploring the secrets of the universe to developing new healthcare treatments. Open source is part of this long tradition of collaborative innovation."

[...] The 2025 report warns that despite open source software being the backbone of organizations' critical systems, "most lack the governance and security frameworks to manage this dependency safely. While expecting enterprise-level reliability and support, organizations systematically underinvest in the security practices, formal governance structures, community engagement, and comprehensive strategies that production environments demand. ... This governance gap creates substantial risk exposure given the mission-critical nature of these deployments."

[...] "The 2025 World of Open Source Survey reveals a paradox: while open source software has achieved mission-critical status with widespread adoption across enterprise technology stacks, organizational maturity significantly lags behind this adoption," said the report in conclusion. "This disconnect creates significant business risks: organizations depend on foundational technologies they cannot adequately assess, understand, or strategically influence."

Finally, the report offers a few key recommendations:

  • Establish open source governance structures. Implement Open Source Program Offices (OSPOs) or formalize open source strategies to manage compliance, security, and contribution workflows.
  • Strengthen security evaluation practices. Move beyond the community health checks currently used by 44% of organizations to implement systematic security assessment frameworks.
  • Establish enterprise-grade support arrangements. Organizations should establish support arrangements with sub-12-hour response times for mission-critical workloads.
  • Promote strategic participation through active engagement. Prioritize sponsoring critical open source dependencies to ensure project sustainability and gain strategic influence over technology roadmaps.

Original Submission

posted by mrpg on Monday December 15, @04:45AM
from the let-every-man-be-master-of-his-time dept.

NIST scientists have calculated that clocks on Mars will tick an average of 477 millionths of a second faster than clocks on Earth per day:

Ask someone on Earth for the time and they can give you an exact answer, thanks to our planet's intricate timekeeping system, built with atomic clocks, GPS satellites and high-speed telecommunications networks.

However, Einstein showed us that clocks don't tick at the same rate across the universe. Clocks will run slightly faster or slower depending on the strength of gravity in their environment, making it tricky to synchronize our watches here on Earth, let alone across the vast solar system. If humans want to establish a long-term presence on the red planet, scientists need to know: What time is it on Mars?

Physicists at the National Institute of Standards and Technology (NIST) have calculated a precise answer for the first time. On average, clocks on Mars will tick 477 microseconds (millionths of a second) faster than on Earth per day. However, Mars' eccentric orbit and the gravity from its celestial neighbors can increase or decrease this amount by as much as 226 microseconds a day over the course of the Martian year. These findings, just published in The Astronomical Journal, follow a 2024 paper in which NIST physicists developed a plan for precise timekeeping on the Moon.
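The reported average can be sanity-checked with a back-of-envelope relativity calculation. This sketch uses a circular-orbit approximation with standard textbook values for the solar and planetary gravitational parameters, orbital speeds, and radii; it is not the NIST analysis, which averages carefully over Mars' eccentric orbit, so it only lands in the right ballpark of the published 477 microseconds per day.

```python
# Back-of-envelope check of the Mars-vs-Earth clock-rate offset.
# Three fractional-rate terms slow a surface clock relative to a distant
# observer: the Sun's potential at the orbit, the orbital velocity, and
# the planet's own surface potential. Constants are standard values;
# circular orbits are assumed, so the result is only approximate.
C2 = 299_792_458.0 ** 2          # speed of light squared, m^2/s^2
GM_SUN = 1.327_124_4e20          # solar gravitational parameter, m^3/s^2
GM_EARTH = 3.986_004e14
GM_MARS = 4.282_837e13
AU = 1.495_978_707e11            # astronomical unit, metres

def fractional_slowdown(r_orbit, v_orbit, gm_planet, r_planet):
    """Fractional rate deficit of a planet-surface clock."""
    return (GM_SUN / (r_orbit * C2)        # solar potential at the orbit
            + v_orbit**2 / (2 * C2)        # orbital time dilation
            + gm_planet / (r_planet * C2)) # planetary surface potential

earth = fractional_slowdown(AU, 29_780.0, GM_EARTH, 6.371e6)
mars = fractional_slowdown(1.5237 * AU, 24_070.0, GM_MARS, 3.3895e6)

# Earth clocks run slower overall (deeper in the solar potential, faster
# orbit), so Mars clocks tick ahead by the difference each day.
offset_us_per_day = (earth - mars) * 86_400 * 1e6
print(f"Mars clocks run ~{offset_us_per_day:.0f} us/day fast")
```

The circular-orbit estimate comes out near 490 microseconds per day, consistent with the paper's orbit-averaged 477, and the gap between the two figures is of the same scale as the ±226 microsecond seasonal swing the NIST authors report.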

Knowing how clocks will tick on Mars is a steppingstone for future space missions, said NIST physicist Bijunath Patla. As NASA plans Mars exploration missions, understanding time on our planetary neighbor will help synchronize navigation and communication across our solar system.

"The time is just right for the Moon and Mars," Patla said. "This is the closest we have been to realizing the science fiction vision of expanding across the solar system."

Journal Reference: Neil Ashby and Bijunath R. Patla, The Astronomical Journal 171, 2 (2026). DOI: 10.3847/1538-3881/ae0c16


Original Submission

posted by hubie on Sunday December 14, @11:59PM

The new COSMIC desktop environment is written in the Rust programming language, designed and developed by System76 for all GNU/Linux distributions:

Linux hardware vendor System76 launched today [December 11, 2025] the first stable release of the Rust-based COSMIC desktop environment, along with the stable release of the Ubuntu-based Pop!_OS 24.04 LTS Linux distribution.

Based on the Ubuntu 24.04 LTS (Noble Numbat) operating system series, Pop!_OS 24.04 LTS ships with the brand-new COSMIC desktop environment written in the Rust programming language, designed and developed by System76 for all GNU/Linux distributions.

Previous Pop!_OS releases used a version of the COSMIC desktop that was based on the GNOME desktop environment. However, System76 wanted to create a new desktop environment from scratch while keeping the same familiar interface and user experience built for efficiency and fun.

This means that some GNOME apps have been replaced by COSMIC apps, including COSMIC Files instead of Nautilus (Files), COSMIC Terminal instead of GNOME Terminal, COSMIC Text Editor instead of GNOME Text Editor, and COSMIC Media Player instead of Totem (Video Player).

Also, the Pop!_Shop graphical package manager used in previous Pop!_OS releases has now been replaced by a new app called COSMIC Store. On top of that, COSMIC ships with a built-in screenshot tool and a Welcome app to make it easier to set up your COSMIC/Pop!_OS Linux desktop experience.

COSMIC Launcher lets users launch and navigate apps quickly and efficiently with features like web search, calculator, and file search. Moreover, COSMIC supports both dual-panel and single-panel layouts, feature-rich workspaces, intuitive window tiling and stacking, multi-monitor setups, and new theming options.

"This year, System76 turned twenty. For twenty years, we have shipped Linux computers. For seven years, we've built the Pop!_OS Linux distribution. Three years ago, it became clear we had reached the limit of our current potential and had to create something new. Today, we break through that limit with the release of Pop!_OS 24.04 LTS with the COSMIC desktop environment," said System76 CEO Carl Richell.

The best part about COSMIC is that it's not only available for Pop!_OS 24.04 LTS users, but also for many other distributions, including Arch Linux, openSUSE Tumbleweed, NixOS, Fedora Linux, AerynOS, as well as BSD and Redox OS platforms.

Under the hood, Pop!_OS 24.04 LTS is powered by the Linux 6.17 kernel series and ships with the Mesa 25.1.5 open-source graphics stack. You can download 64-bit and ARM64 live ISO images for Intel/AMD or NVIDIA systems right now from the official website.


Original Submission

posted by hubie on Sunday December 14, @07:11PM

The Agentic AI Foundation launches to support MCP, AGENTS.md, and goose:

Big Tech has spent the past year telling us we're living in the era of AI agents, but most of what we've been promised is still theoretical. As companies race to turn fantasy into reality, they've developed a collection of tools to guide the development of generative AI. A cadre of major players in the AI race, including Anthropic, Block, and OpenAI, has come together to promote interoperability with the newly formed Agentic AI Foundation (AAIF). This move elevates a handful of popular technologies and could make them a de facto standard for AI development going forward.

The development path for agentic AI models is cloudy to say the least, but companies have invested so heavily in creating these systems that some tools have percolated to the surface. The AAIF, which is part of the nonprofit Linux Foundation, has been launched to govern the development of three key AI technologies: Model Context Protocol (MCP), goose, and AGENTS.md.

MCP is probably the most well-known of the trio, having been open-sourced by Anthropic a year ago. The goal of MCP is to link AI agents to data sources in a standardized way—Anthropic (and now the AAIF) is fond of calling MCP a "USB-C port for AI." Rather than creating custom integrations for every different database or cloud storage platform, MCP allows developers to quickly and easily connect to any MCP-compliant server.
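The "USB-C port" framing is concrete at the wire level: MCP rides on JSON-RPC 2.0, and a server advertises its capabilities through methods such as `tools/list` and `tools/call`. The toy dispatcher below illustrates that request/response shape in-process; it is a simplified illustration, not the official SDK, and the `read_note` tool and `NOTES` store are invented for the example.

```python
import json

# Toy, in-process illustration of MCP's JSON-RPC request/response shape.
# The method names "tools/list" and "tools/call" follow the published
# protocol; the handler wiring and the example tool are simplified.
NOTES = {"todo": "ship the report"}

TOOLS = {
    "read_note": {
        "description": "Return the text of a stored note",
        "handler": lambda args: {"text": NOTES.get(args["name"], "")},
    },
}

def handle(request_json: str) -> str:
    """Dispatch one JSON-RPC request and return the JSON-RPC response."""
    req = json.loads(request_json)
    if req["method"] == "tools/list":
        # Advertise available tools so an agent can discover them.
        result = {"tools": [{"name": n, "description": t["description"]}
                            for n, t in TOOLS.items()]}
    elif req["method"] == "tools/call":
        # Invoke a named tool with the supplied arguments.
        tool = TOOLS[req["params"]["name"]]
        result = tool["handler"](req["params"].get("arguments", {}))
    else:
        result = {"error": f"unknown method {req['method']}"}
    return json.dumps({"jsonrpc": "2.0", "id": req["id"], "result": result})
```

Because every MCP-compliant server answers the same discovery and invocation methods, an agent written against this shape can connect to any of them without custom integration code, which is the interoperability point the AAIF is organized around.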

Since its release, MCP has been widely used across the AI industry. Google announced at I/O 2025 that it was adding support for MCP in its dev tools, and many of its products have since added MCP servers to make data more accessible to agents. OpenAI also adopted MCP just a few months after it was released.

Expanding use of MCP might help users customize their AI experience. For instance, the new Pebble Index 01 ring uses a local LLM that can act on your voice notes, and it supports MCP for user customization.

Local AI models have to make some sacrifices compared to bigger cloud-based models, but MCP can fill in the functionality gaps. "A lot of tasks on productivity and content are fully doable on the edge," Qualcomm head of AI products, Vinesh Sukumar, tells Ars. "With MCP, you have a handshake with multiple cloud service providers for any kind of complex task to be completed."

[...] Think about the timeline here. The world in which tech companies operate has changed considerably in a short time as everyone rushes to stuff gen AI into every product and process. And no one knows who is on the right track—maybe no one!

Against that backdrop, big tech has seemingly decided to standardize. Even for MCP, the most widely supported of these tools, there's still considerable flux in how basic technologies like OAuth will be handled.

The Linux Foundation has spun up numerous projects to support neutral and interoperable development of key technologies. For example, it formed the Cloud Native Computing Foundation (CNCF) in 2015 to support Google's open Kubernetes cluster manager, but the project has since integrated a few dozen cloud computing tools. Certification and training for these tools help keep the lights on at the foundation, but Kubernetes was already a proven technology when Google released it widely. All these AI technologies are popular right now, sure, but is MCP or AGENTS.md going to be important in the long term?

Regardless, everyone in the AI industry seems to be on board. In addition to the companies adding their tools to the project, the AAIF has support from Amazon, Google, Cloudflare, Microsoft, and others. The Linux Foundation says it intends to shepherd these key technologies forward in the name of openness, but it may end up collecting a lot of nascent AI tools at this rate.


Original Submission

posted by hubie on Sunday December 14, @02:26PM
from the hal-open-the-pod-bay-doors dept.

Calibre Now Lets You Chat About Your E-Books Using Local AI

You can ask questions about any book in your library and run AI models locally via LM Studio:

A few months ago, Calibre introduced its first AI feature, letting users highlight text and ask questions directly in the eBook reader. It was a good start but relied entirely on cloud-based AI providers.

Now, Calibre 8.16.2 has arrived with some pretty handy upgrades to those capabilities, adding support for running AI models completely offline on-device. There are plenty of other new refinements too!

Thanks to LM Studio integration, Calibre can tap into AI models running locally on your machine instead of needing to rely on user data-hungry cloud services. If you didn't know, LM Studio is a desktop application that lets you run large language models on your own hardware without much technical know-how.

Beyond that, this release introduces two additional AI-powered features. The first one is a book discussion feature where Calibre can answer questions about any book in your library through a simple right-click menu, and the second is a reading recommendation system that suggests similar books based on your selection.

Both of these work locally or via any configured cloud providers.

[...] I tested out two of the new AI-powered features, and I must say, they work really well. First up was the book discussion feature, which can be accessed by right-clicking the "View" menu and selecting "Discuss selected book(s) with AI", or with the keyboard shortcut Ctrl + Alt + A.

I told it to summarize Dracula, a book by Bram Stoker, and the output it provided was pretty good; I got a quick rundown of the happenings in the book without needing to fully read it. This could be handy if you have forgotten how a book ended or when you are deciding whether to commit to reading something.

Next, I tested the paragraph explanation feature on a section from the book. The AI broke down the text clearly and provided useful context. Keep in mind that results will vary depending on which model you use. A more capable model will give better explanations, while smaller ones might be hit or miss.

For any AI features to work on Calibre, you need to configure an AI provider first. In my case, I used LM Studio with the DeepSeek-R1-0528-Qwen3-8B model loaded for testing. The setup is quite straightforward. I started the LM Studio server with a model loaded, entered the URL in Calibre's AI provider settings, and clicked "Ok".
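The setup works because LM Studio exposes an OpenAI-compatible HTTP API on the local machine (by default at http://localhost:1234/v1, though the port is configurable). A quick way to confirm the server is answering before entering the URL in Calibre is to send it a chat-completion request directly; this sketch assumes that default address, and the model name is just a placeholder since LM Studio serves whichever model you have loaded.

```python
import json
import urllib.request

# LM Studio's default OpenAI-compatible endpoint; adjust if you changed
# the port in LM Studio's server settings.
BASE_URL = "http://localhost:1234/v1"

def build_chat_request(prompt: str, model: str = "local-model") -> dict:
    """Assemble an OpenAI-style chat-completion payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }

def ask(prompt: str) -> str:
    """POST the payload to the local server and return the reply text."""
    payload = json.dumps(build_chat_request(prompt)).encode()
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

If `ask("Summarize Dracula in two sentences")` returns text, the server is up and the same base URL can be pasted into Calibre's AI provider settings.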

Calibre has finally given in to the AI trend:

Calibre just dropped version 8.16.1, and it brings a new feature that lets you ask an AI what book you should read next. This latest update builds on the AI capabilities the Calibre team has been adding over the past few months, which follows the trend of adding AI whenever possible.

The biggest feature of 8.16.1 is the ability to tap into an AI to find your next great read. You can now right-click on a book in your library and use the Similar Books menu to ask the AI for recommendations. This is a top-tier addition for anyone dealing with the dreaded reading slump, or if you simply need a little nudge toward a title that matches the tone or genre of something you just finished.

What has stood out is that over the past few months, we've seen Calibre slowly add AI features that made a lot of sense. However, adding LM Studio is confirmation that the developer is trying to push AI onto the app. So if you want a version that is AI-free, you're going to have to stick to version 8.10.

Beyond getting reading suggestions, the team also expanded the ways you can interact with the AI regarding specific titles. You can now right-click the "View" button for any book and select "Discuss selected book(s) with AI." This lets users pose direct questions about the book, which is incredibly useful if you need a quick summary, clarification on a character arc, or just want to dig deeper into the themes without even opening the book.

[...] I'm generally wary of AI being shoehorned into apps that don't need it. Calibre's initial AI dictionary tools felt like genuinely useful add-ons. However, the integration of full-blown local model support makes me think it is heading toward an AI-first library manager, which is a direction I'm not sure that I want.


Original Submission

posted by hubie on Sunday December 14, @09:36AM
from the I-feel-you-in-my-heart dept.

https://phys.org/news/2025-12-ultra-thin-nanomembrane-device-soft.html

Researchers have developed a new class of ultra-thin, flexible bioelectronic material that can seamlessly interface with living tissues. They introduced a novel device called THIN (transformable and imperceptible hydrogel-elastomer ionic-electronic nanomembrane). THIN is a membrane just 350 nanometers thick that transforms from a dry, rigid film into an ultra-soft, tissue-like interface upon hydration.

[...] Biological tissues—especially vital organs such as the heart, brain, and muscles—are soft, curved, and constantly in motion. Even the thinnest existing bioelectronic devices can feel foreign, leading to poor adhesion, inflammation, and unstable signal acquisition. While ultrathin flexible devices have been developed, most still require adhesives, rigid packaging, or mechanical supports, particularly for dynamic tissues such as the heart or brain.

This challenge inspired the team to ask a simple but compelling question: "What if a device could become soft, sticky, and shape-adapting only when it touches tissue—like magic?"

That question led to the development of THIN, a transformable, substrate-free nanomembrane that self-adheres to wet tissue without sutures, adhesives, or external pressure. By exploiting hydration-triggered swelling, it conforms even to microscopically folded or highly curved surfaces, maintaining long-term contact with the tissue.

The nanomembrane is engineered to be "soft when wet" and "robust when dry." THIN consists of two layers—the first being a mussel-inspired, tissue-adhesive hydrogel (catechol-conjugated alginate; Alg-CA), and the second being a high-performance semiconducting elastomer, P(g2T2-Se).

Together they form a freestanding bilayer only 350 nm thick—nearly a thousand times thinner than a human hair. The device's bending stiffness decreases over a million-fold (to 9.08 × 10⁻⁵ GPa·μm⁴) when hydrated, allowing it to wrap around surfaces with curvature radii below 5 μm—so soft that it becomes mechanically imperceptible to tissue.

When dry, the hydrogel layer is rigid (1.35 GPa), enabling easy handling and direct semiconductor coating. Upon hydration, it softens dramatically (0.035 GPa) and curls spontaneously, forming natural, gentle adhesion to the target organ surface.
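The reported numbers hang together under the classical thin-plate relation for bending stiffness, E·w·t³/12 for a strip of width w. The sketch below treats the 350 nm bilayer as a single uniform layer with the quoted hydrated modulus, which is only a rough consistency check, not the paper's composite-bilayer mechanics, so it lands near rather than on the reported 9.08 × 10⁻⁵ GPa·μm⁴.

```python
# Order-of-magnitude check of THIN's bending stiffness using the
# classical thin-plate formula D = E * w * t^3 / 12 (strip of unit
# width). Uniform-layer approximation with the moduli quoted in the
# article; the real device is a composite bilayer.
T_UM = 0.35            # membrane thickness, micrometres (350 nm)
E_WET_GPA = 0.035      # hydrated hydrogel modulus, GPa
E_DRY_GPA = 1.35       # dry hydrogel modulus, GPa

def bending_stiffness(e_gpa, t_um, width_um=1.0):
    """E*I for a rectangular strip, in GPa*um^4."""
    return e_gpa * width_um * t_um**3 / 12

d_wet = bending_stiffness(E_WET_GPA, T_UM)   # ~1.3e-4 GPa*um^4
d_dry = bending_stiffness(E_DRY_GPA, T_UM)

# Hydration alone cuts the modulus ~39x; the cubic dependence on
# thickness is what makes a 350 nm film so much more compliant than a
# micrometre-scale device of the same material.
print(f"hydrated: {d_wet:.2e} GPa*um^4, dry/wet: {d_dry / d_wet:.0f}x")
```

The uniform-layer estimate (~1.3 × 10⁻⁴ GPa·μm⁴) sits within a factor of two of the reported composite value, and the t³ scaling shows why the headline million-fold compliance gains come from extreme thinness as much as from the hydration-triggered softening.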

[...] In animal experiments, THIN-OECTs instantly adhered to rodent hearts, muscles, and brain cortices, recording epicardial electrograms (EGM), electromyograms (EMG), and electrocorticograms (ECoG) with high fidelity. The devices remained stable and biocompatible for over four weeks, showing no inflammation or tissue damage after long-term implantation.

"Our THIN-OECT platform acts like a nano-skin—it is invisible to the body, mechanically imperceptible, and yet electrically powerful," said Prof. Son Donghee, corresponding author of the study. "It opens new possibilities for chronic brain-machine interfaces, cardiac monitoring, and soft neuroprosthetics."

Unlike conventional bioelectronic systems that depend on elastomeric substrates or adhesives, THIN is substrate-free, freestanding, and operational at the nanoscale. Its mechanical imperceptibility and autonomous adhesion make it suitable for stable signal acquisition from dynamic tissues without interference or foreign-body sensation.

Because THIN amplifies electrophysiological signals directly at the contact site, it eliminates bulky external amplifiers, paving the way for next-generation implantable, wearable, and injectable medical devices.

More information: Hydrogel–elastomer-based conductive nanomembranes for soft bioelectronics, Nature Nanotechnology (2025). DOI: 10.1038/s41565-025-02031-x.


Original Submission

posted by hubie on Sunday December 14, @04:46AM
from the its-89-seconds-to-midnight dept.

The Bulletin of the Atomic Scientists published a report on the possible crash of the AI bubble:

Silicon Valley and its backers have placed a trillion-dollar bet on the idea that generative AI can transform the global economy and possibly pave the way for artificial general intelligence, systems that can exceed human capabilities. But multiple warning signs indicate that the marketing hype surrounding these investments has vastly overrated what current AI technology can achieve, creating an AI bubble with growing societal costs that everyone will pay for regardless of when and how the bubble bursts.

The history of AI development has been punctuated by boom-and-bust cycles (with the busts called AI winters) in the 1970s and 1980s. But there has never been an AI bubble like the one that began inflating around corporate and investor expectations since OpenAI released ChatGPT in November 2022. Tech companies are now spending between $72 billion and $125 billion per year each on purchasing vast arrays of AI computing chips and constructing massive data centers that can consume as much electricity as entire cities—and private investors continue to pour more money into the tech industry's AI pursuits, sometimes at the expense of other sectors of the economy.

That huge AI bet is increasingly looking like a bubble; it has buoyed both the stock market and a US economy otherwise struggling with rising unemployment, inflation, and the longest government shutdown in history. In September, Deutsche Bank warned that the United States could already be in an economic recession without the tech industry's AI spending spree and cautioned that such spending cannot continue indefinitely.

Warning signs. Silicon Valley's focus on developing ever-larger AI models has spurred a buildout of bigger data centers crammed with computing power. The staggering growth in AI compute demand would require tech companies to build $500 billion worth of data centers packed with chips each year—and companies would need to rake in $2 trillion in combined annual revenue to fund that buildout, according to a Bain & Company report. The report also estimates that the tech industry is likely to fall $800 billion short of the required revenue.

That shortfall is less surprising than it might seem. US Census Bureau data show that AI adoption by companies with more than 250 employees may have already peaked and begun declining or flattening out this year. Most businesses still don't see a significant return on their investment when trying to use the latest generative AI tools: Software company Atlassian found that 96 percent of companies didn't achieve significant productivity gains, and MIT researchers showed that 95 percent of companies get zero return from their pilot programs with generative AI. [...] Claims that AI can replace human workers on a large scale also appear overblown, or at least premature. When evaluating AI's impact on employment, the Yale Budget Lab found that the "broader labor market has not experienced a discernible disruption since ChatGPT's release 33 months ago," according to the group's analysis published in October 2025.

Another bubble warning sign: Silicon Valley's accelerating spending spree on data centers and chips has outpaced what even the largest tech companies can afford. Companies such as Amazon, Google, Microsoft, Meta, and Oracle have already spent a record 60 percent of operating cash flow on capital expenditures like data centers and chips as of June 2025.

The financing ouroboros. Now, tech companies are increasingly resorting to "creative finance" such as circular financing deals to continue raising money for data centers and chips, says Andrew Odlyzko, professor emeritus of mathematics at the University of Minnesota, who has studied the history of financial manias and previous bubbles around technologies like railroads.

For example, Meta sold $30 billion of corporate bonds in late October and also secured another $30 billion in off-balance-sheet debt through a joint venture structured by Morgan Stanley, arrangements that can hide the risks and liabilities of such deals. The swift accumulation of $100 billion in AI-related debt per quarter among various companies "raises eyebrows for anyone that has seen credit cycles," said Matthew Mish, head of credit strategy at UBS Investment Bank, in a Bloomberg interview.

As a result, a growing number of business leaders and institutions have voiced alarm about the stock market bubble building around AI, including the Bank of England and the International Monetary Fund. Even bullish tech and financial CEOs such as Amazon's Jeff Bezos, JPMorgan Chase's Jamie Dimon, Google's Sundar Pichai, and OpenAI's Sam Altman have acknowledged the existence of an AI bubble, despite their optimism about the advance of AI generally.

After the crash. If the stock market craters after a bursting of the AI bubble, it won't just be financial institutions and venture capitalists losing money. Some 62 percent of Americans who reported owning stocks in 2025, according to a Gallup survey, could also be affected.

The market mayhem brought on by a deflation of the AI bubble could also mean economic disruption worldwide. Writing for The Economist, Gita Gopinath, former chief economist for the International Monetary Fund, warned that a bursting of the AI bubble on the magnitude of the dot-com bubble collapse in 2000 could have "severe global consequences," including the wipeout of more than $20 trillion in wealth for American households and $15 trillion in wealth for foreign investors.

If the AI bubble pops, the US government will likely turn to its central bank, the Federal Reserve, to stabilize the wider economy by injecting huge amounts of cash into the financial system, as it did after the 2008 financial crisis, Odlyzko says. But he warned that a new government bailout of the financial system would mean another significant jump in the national debt and increased wealth inequality, because taxpayer dollars would be once again focused on stabilizing a sector in which the wealthiest individuals will benefit disproportionately from recovering corporate profits and rebounding share prices. A repeat of the financial bailout cycle that privatizes the gains of wealthy risk-takers while socializing the losses to everyone else is "likely to lead to even more [political] polarization and perhaps true populist movements," Odlyzko says.

The United States is less well equipped to handle the AI bubble if it were to burst today because of the weakened US dollar, political pressure on the Federal Reserve's institutional independence, limitations on economic growth due to President Trump's sweeping tariffs and trade wars, and record levels of government debt that could constrain attempts to use fiscal stimulus to right-size a sinking economy, Gopinath wrote in The Economist.

Paying for power. Data centers currently represent the fastest-rising source of power demand for the United States, and the electricity needs of individual data center campuses are also growing to gargantuan proportions. Tech companies have rushed to build new gigawatt-scale data centers such as Meta's "Hyperion" data center in Louisiana, which would consume twice as much electricity as the entire city of New Orleans. Meanwhile, a new Amazon data center campus in Indiana will require as much electricity as half of all homes in the state, or approximately 1.5 million households.

There is already some evidence showing that data center demand for power is driving up local electricity costs. A Bloomberg investigation found that areas of the country with "significant data center activity" saw wholesale electricity prices soar by as much as 267 percent for a single month compared to five years ago. More than 70 percent of regions that saw price increases were located within 50 miles of such data center clusters.

But utility companies and their other ratepayers still bear the brunt of expenses for building new power plants, local power lines and transformers, and transmission lines to carry electricity across longer distances.

[...] Energy infrastructure development costs associated with data centers could still be "socialized" and borne by ordinary utility customers if projects don't have those protections in place, Peskoe says. "I'm sure there would be some utilities that, if there were a burst of the bubble, would probably go to regulators and say, 'Hey, we want to recover the cost of these facilities from everyone,'" he says.

"Ultimately, for society's sake, it would be a wonderful thing the faster this thing goes, because very few people are benefiting from it," says Hetrick, the labor economist at Lightcast. "Had we spread the wealth and invested in various industries, who knows how many innovations we could have come up with by now while we've been incinerating this money."


Original Submission

posted by hubie on Sunday December 14, @12:04AM   Printer-friendly

Privacy stalwart Nicholas Merrill spent a decade fighting an FBI surveillance order. Now he wants to sell you phone service—without knowing almost anything about you:

Nicholas Merrill has spent his career fighting government surveillance. But he would really rather you didn't call what he's selling now a "burner phone."

Yes, he dreams of a future where anyone in the US can get a working smartphone—complete with cellular coverage and data—without revealing their identity, even to the phone company. But to call such anonymous phones "burners" suggests that they're for something illegal, shady, or at least subversive. The term calls to mind drug dealers or deep-throat confidential sources in parking garages.

With his new startup, Merrill says he instead wants to offer cellular service for your existing phone that makes near-total mobile privacy the permanent, boring default of daily life in the US. "We're not looking to cater to people doing bad things," says Merrill. "We're trying to help people feel more comfortable living their normal lives, where they're not doing anything wrong, and not feel watched and exploited by giant surveillance and data mining operations. I think it's not controversial to say the vast majority of people want that."

That's the thinking behind Phreeli, the phone carrier startup Merrill launched today, designed to be the most privacy-focused cellular provider available to Americans. Phreeli, as in, "speak freely," aims to give its users a different sort of privacy from the kind that can be had with end-to-end encrypted texting and calling tools like Signal or WhatsApp. Those apps hide the content of conversations, or even, in Signal's case, metadata like the identities of who is talking to whom. Phreeli instead wants to offer actual anonymity. It can't help government agencies or data brokers obtain users' identifying information because it has almost none to share. The only piece of information the company records about its users when they sign up for a Phreeli phone number is, in fact, a mere ZIP code. That's the minimum personal data Merrill has determined his company is legally required to keep about its customers for tax purposes.

[...] Signing up a customer for phone service without knowing their name is, surprisingly, legal in all 50 states, Merrill says. Anonymously accepting money from users—with payment options other than envelopes of cash—presents more technical challenges. To that end, Phreeli has implemented a new encryption system it calls Double-Blind Armadillo, based on cutting-edge cryptographic protocols known as zero-knowledge proofs. Through a kind of mathematical sleight of hand, those crypto functions are capable of tasks like confirming that a certain phone has had its monthly service paid for, but without keeping any record that links a specific credit card number to that phone. Phreeli users can also pay their bills (or rather, prepay them, since Phreeli has no way to track down anonymous users who owe them money) with tough-to-trace cryptocurrency like Zcash or Monero.

Phreeli users can, however, choose to set their own dials for secrecy versus convenience. If they offer an email address at signup, they can more easily recover their account if their phone is lost. To get a SIM card, they can give their mailing address—which Merrill says Phreeli will promptly delete after the SIM ships—or they can download the digital equivalent known as an eSIM, even, if they choose, from a site Phreeli will host on the Tor anonymity network.

Phreeli's "armadillo" analogy—the animal also serves as the mascot in its logo—is meant to capture this sliding scale of privacy that Phreeli offers its users: Armadillos always have a layer of armor, but they can choose whether to expose their vulnerable underbelly or curl into a fully protected ball.

Even if users choose the less paranoid side of that spectrum of options, Merrill argues, his company will still be significantly less surveillance-friendly than existing phone companies, which have long represented one of the weakest links in the tech world's privacy protections. All major US cellular carriers comply, for instance, with law enforcement surveillance orders like "tower dumps" that hand over data to the government on every phone that connected to a particular cell tower during a certain time. They've also happily, repeatedly handed over your data to corporate interests: Last year the Federal Communications Commission fined AT&T, Verizon, and T-Mobile nearly $200 million for selling users' personal information, including their locations, to data brokers. (AT&T's fine was later overturned by an appeals court ruling intended to limit the FCC's enforcement powers.) Many data brokers in turn sell the information to federal agencies, including ICE and other parts of the DHS, offering an all-too-easy end run around restrictions on those agencies' domestic spying.

Phreeli doesn't promise to be a surveillance panacea. Even if your cellular carrier isn't tying your movements to your identity, the operating system of whatever phone you sign up with might be. Even your mobile apps can track you.

But for a startup seeking to be the country's most privacy-focused mobile carrier, the bar is low. "The goal of this phone company I'm starting is to be more private than the three biggest phone carriers in the US. That's the promise we're going to massively overdeliver on," says Merrill. "I don't think there's any way we can mess that up."

[...] Building a system that could function like a normal phone company—and accept users' payments like one—without storing virtually any identifying information on those customers presented a distinct challenge. To solve it, Merrill consulted with Zooko Wilcox, one of the creators of Zcash, perhaps the closest thing in the world to actual anonymous cryptocurrency. The Z in Zcash stands for "zero-knowledge proofs," a relatively new form of crypto system that has allowed Zcash's users to prove things (like who has paid whom) while keeping all information (like their identities, or even the amount of payments) fully encrypted.

For Phreeli, Wilcox suggested a related but slightly different system: so-called "zero-knowledge access passes." Wilcox compares the system to people showing their driver's license at the door of a club. "You've got to give your home address to the bouncer," Wilcox says incredulously. The magical properties of zero-knowledge proofs, he says, would allow you to generate an unforgeable crypto credential that proves you're over 21 and then show that to the doorman without revealing your name, address, or even your age. "A process that previously required identification gets replaced by something that only requires authorization," Wilcox says. "See the difference?"

The same trick will now let Phreeli users prove they've prepaid their phone bill without connecting their name, address, or any payment information to their phone records—even if they pay with a credit card. The result, Merrill says, will be a user experience for most customers that's not very different from their existing phone carrier, but with a radically different level of data collection.
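The club-door analogy can be made concrete with the classic Schnorr identification protocol, a textbook interactive zero-knowledge proof of knowledge. The sketch below is purely illustrative: it is not Phreeli's Double-Blind Armadillo system (whose details aren't public), and the toy group parameters are far too small for any real security. It shows the core idea, though: a prover convinces a verifier that it holds a secret credential without the secret ever leaving the prover.

```python
import secrets

# Toy group parameters (illustration only; real deployments use
# parameters hundreds of digits long or elliptic-curve groups).
P = 2039          # safe prime, P = 2*Q + 1
Q = 1019          # prime order of the subgroup
G = 4             # generator of the order-Q subgroup (a quadratic residue mod P)

def keygen():
    """Prover's secret credential x and public commitment y = G^x mod P."""
    x = secrets.randbelow(Q - 1) + 1
    return x, pow(G, x, P)

def prove_commit():
    """Step 1: prover picks a random nonce r and commits to t = G^r mod P."""
    r = secrets.randbelow(Q - 1) + 1
    return r, pow(G, r, P)

def prove_respond(x, r, c):
    """Step 3: prover answers the verifier's challenge; x is never sent."""
    return (r + c * x) % Q

def verify(y, t, c, s):
    """Verifier accepts iff G^s == t * y^c (mod P)."""
    return pow(G, s, P) == (t * pow(y, c, P)) % P

# One interactive run: the verifier learns only that the prover knows x.
x, y = keygen()              # y is public; x stays with the prover
r, t = prove_commit()        # prover -> verifier: commitment t
c = secrets.randbelow(Q)     # verifier -> prover: random challenge c
s = prove_respond(x, r, c)   # prover -> verifier: response s
print(verify(y, t, c, s))    # prints True: proof accepted
```

The check works because G^s = G^(r + c*x) = G^r * (G^x)^c = t * y^c mod P, yet the response s reveals nothing about x on its own since r is random. An "access pass" scheme builds on the same principle: the credential proves "this account is paid up" rather than "I know x," with no identifying data attached.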

As for Wilcox, he's long been one of that small group of privacy zealots who buys his SIM cards in cash with a fake name. But he hopes Phreeli will offer an easier path—not just for people like him, but for normies too.

"I don't know of anybody who's ever offered this credibly before," says Wilcox. "Not the usual telecom-strip-mining-your-data phone, not a black-hoodie hacker phone, but a privacy-is-normal phone."


Original Submission

posted by janrinok on Saturday December 13, @07:23PM   Printer-friendly

https://scitechdaily.com/scientists-finally-uncover-why-the-worlds-most-common-heart-drug-causes-muscle-pain/

A new study explains how cholesterol-lowering medications can lead to muscle damage and identifies a possible approach to making these treatments safer.

Statins have dramatically improved cardiovascular health by lowering cholesterol levels and reducing the likelihood of heart attacks and strokes. Yet many people who take these medications experience unwanted muscle symptoms, including soreness, weakness, and in rare situations, severe muscle breakdown that can harm the kidneys.

Researchers at the University of British Columbia, working with colleagues at the University of Wisconsin-Madison, have now uncovered the biological reason behind these side effects. Their results, published last week in Nature Communications, may help guide the development of statins that do not trigger muscle problems.

Using cryo-electron microscopy, a technique capable of visualizing proteins at extremely high resolution, the team observed how statins interact with a key muscle protein known as the ryanodine receptor (RyR1). This receptor controls the flow of calcium inside muscle cells and opens only when a muscle is meant to contract. When statins attach to it, however, the channel is forced open, causing calcium to escape continuously, which can injure the surrounding muscle fibers.

"We were able to see, almost atom by atom, how statins latch onto this channel," said lead author Dr. Steven Molinarolo, a postdoctoral researcher in UBC's department of biochemistry and molecular biology. "That leak of calcium explains why some patients experience muscle pain or, in extreme cases, life-threatening complications."

The study examined atorvastatin, one of the most commonly used statins, but the evidence suggests the same effect could occur with other drugs in this class. The researchers found that three statin molecules gather within a single pocket of the protein. The first molecule connects when the channel is closed, preparing it to open, while the other two settle in afterward and push the channel fully open.

"This is the first time we've had a clear picture of how statins activate this channel," said Dr. Filip Van Petegem, senior author and professor at UBC's Life Sciences Institute. "It's a big step forward because it gives us a roadmap for designing statins that don't interact with muscle tissue."

By adjusting only those parts of the statin molecule that are responsible for the negative effects, scientists could preserve the part that lowers cholesterol while reducing the risk.

Implications for patient safety and future drug design

While severe muscle damage affects only a small fraction of over 200 million statin users worldwide, milder symptoms like aches and fatigue are far more common, and often lead patients to stop treatment. The new findings could help prevent those problems and improve adherence to life-saving therapy.

The research underscores the importance of advanced imaging technology in driving medical breakthroughs. Using the UBC faculty of medicine's high-resolution macromolecular cryo-electron microscopy facility, the team was able to visualize the statin-protein interaction in extraordinary detail—turning a fundamental question about drug safety into practical insights that could shape the next generation of therapies.

"Statins have been a cornerstone of cardiovascular care for decades," Dr. Van Petegem said. "Our goal is to make them even safer, so patients can benefit without fear of serious side effects."

For millions of people who rely on statins, that could mean fewer muscle problems—and a better quality of life.

Reference: “Cryo-electron microscopy reveals sequential binding and activation of Ryanodine Receptors by statin triplets” by Steven Molinarolo, Carmen R. Valdivia, Héctor H. Valdivia and Filip Van Petegem, 20 November 2025, Nature Communications.

DOI: 10.1038/s41467-025-66522-0


Original Submission

posted by janrinok on Saturday December 13, @02:37PM   Printer-friendly

Germany Might Have Just Saved Gas Engines From A European Ban:

  • German chancellor's letter reversing ban welcomed by EU.
  • EU may support zero- and low-emission fuels long-term.
  • Lawmakers delayed finalizing plans by several weeks.

For years, lawmakers and carmakers across Europe have been at odds over the 2035 deadline to ban vehicles that produce emissions. What once looked like an immovable cutoff now seems to be softening, as pressure builds to keep combustion-powered cars on sale well into the next decade.

German chancellor Friedrich Merz has added his voice to the debate, sending a letter to European Commission president Ursula von der Leyen. In it, he urged the EU to allow car companies to continue selling new internal combustion engine (ICE) models after 2035.

According to the European Commissioner for Sustainable Transport and Tourism, Apostolos Tzitzikostas, this letter was "very well received in Brussels," noting that the EU must do what it can to "protect European companies, European industry, and European production."

In an interview with Handelsblatt, Tzitzikostas revealed that the EU has pushed back its December 10 deadline for confirming emissions regulations by several weeks because the commission is still finalizing details.

When it's ready, the package will "contain everything, from revising carbon dioxide targets to company fleets and many other points."

Tzitzikostas added that the commission is taking into account all technological developments, noting there could be a role to play for "zero- and low-emission fuels, [and] advanced biofuels."

He stopped short of confirming that vehicles with combustion engines running on these fuels will continue to be available beyond 2035, but it appears increasingly likely that this will be the case.

Some of Europe's major automakers are already preparing for that possibility. Porsche has invested in synthetic fuel production in Chile, while BMW powers many of its diesel models with fuel derived from vegetable oil.

When asked about the shift in tone, Tzitzikostas explained, "We want to stick to our goals, but we must take into account all recent geopolitical developments. We must be careful not to jeopardize our competitiveness while helping European industry maintain its technological lead."

His comments reflect a wider recalibration across Europe, where environmental ambitions now have to coexist with industrial stability and global competition.


Original Submission

posted by janrinok on Saturday December 13, @09:52AM   Printer-friendly

NASA's 1969-71 design process offers a road map for today's breakthrough inventions, from rockets to new drugs:

What does the space shuttle have in common with the original iPhone? According to Francisco Polidoro Jr., professor of management at Texas McCombs, they're both breakthrough inventions that integrate webs of interdependent features.

In an iPhone, he notes, its size, weight, camera, and Wi-Fi capabilities influence one another. Push one feature too far, and the phone becomes heavier, bulkier, or more expensive.

Companies can't test each feature in isolation, and they can't experiment with every possible combination. So, how does an organization design a complicated product for which there's no existing template?

In a new study, Polidoro finds present-day answers in an old story: how NASA developed its space shuttles, which flew from 1981 to 2011.

Rather than a straightforward sequence, NASA used a meandering knowledge-building process, he finds. That process allowed it to systematically explore rocket features, both individually and together.

"With breakthrough inventions, the number of combinations of possible features quickly explodes, and you just can't test all of them," Polidoro says. "It has to be a much more selective search process."

His findings have implications for both modern-day rocketeers and other cutting-edge fields, from phones to pharmaceuticals.

To trace NASA's design process, Polidoro combed through its archives with Raja Roy of the New Jersey Institute of Technology, as well as Minyoung Kim of the Ohio State University and Curba Morris Lampert of Florida International University.

The archives included 7,000 pages of books, papers, and technical documents — such as internal memoranda between engineers and managers — along with published and unpublished accounts and oral histories by NASA scientists, engineers, and historians.

From that material, the researchers created a timeline of successive space shuttle designs, from 1969 to 1971.

NASA recognized that the high costs of its Mercury, Gemini, and Apollo programs were largely due to nonreusable systems. At the outset, engineers working on a solution identified a series of performance features to test, including:

  • The capacity to carry payloads of 50,000 pounds.
  • Boosters with solid-propellant rocket motors that could be jettisoned and reused.
  • An external fuel tank with liquified oxygen and hydrogen that could be jettisoned.

To achieve those goals, the researchers found, NASA engineers built new knowledge in two distinct ways that repeated and built on each other over time: oscillation and accumulation.

  • With oscillation, engineers focused on achieving one specific performance goal. Then, they deliberately stepped back to explore alternatives, returning later to the initial goal with new insights.
  • With accumulation, they steadily met more performance goals in later designs as they built up knowledge.

Past research has looked at oscillation and accumulation separately, Polidoro says. But it was the two processes working synergistically that drove the shuttle's breakthroughs.

For example, in its first design iteration, engineers discovered how to burn fuel efficiently using a combination of liquid hydrogen and liquid oxygen. Then, in subsequent designs, they temporarily reverted to an older fuel: kerosene.

They used kerosene while they tested other features, including solid rocket motors and reusable boosters. Those features were incorporated successfully into later designs.

"Stepping back and letting go, temporarily, of solutions that are superior creates a space for you to keep on accumulating knowledge," Polidoro says.

"But that could be challenging, because technologists might be really proud of what they've achieved. It requires a humbleness to step away."

Today, he says, engineers face an added challenge. Space technology innovation is spread across several private companies, not just NASA, making it harder to coordinate oscillation and accumulation.

Journal Reference: https://doi.org/10.1016/j.respol.2025.105313


Original Submission

posted by janrinok on Saturday December 13, @05:05AM   Printer-friendly
from the Hands-and-Social-Media-Posts-Where-We-Can-See-Them dept.

An Anonymous Coward has submitted the following news:

https://www.news.com.au/world/north-america/us-politics/us-politics-live/live-coverage/ae8338db24bcd7f86abbc6a1650db724

The USA is stepping up border checks for foreigners with a plan to collect the last five years' worth of social media posts from prospective travellers.

This is assuming people actually continue to visit the USA for a holiday. Anyone seeking to enter the United States may very well need to go back over their online social activity and review their publicly posted thoughts. No word on what the USA will do with this data. At this time it is only a plan to collect it.


Original Submission

posted by hubie on Saturday December 13, @12:21AM   Printer-friendly

Michigan man received kidney transplant from donor who had fought off a skunk and was later found unresponsive:

A Michigan man has died of rabies after receiving a kidney from another man who died of the disease after being scratched by a skunk while defending a kitten, in what officials are describing as an "exceptionally rare event".

According to a recent report from the Centers for Disease Control and Prevention (CDC), the Michigan patient received a kidney transplant at an Ohio hospital in December 2024.

Around five weeks later, he began experiencing tremors, lower extremity weakness, confusion and urinary incontinence. He was soon hospitalized and ventilated, then died. Postmortem testing confirmed rabies, the CDC report said, baffling authorities because the recipient's family had said he had not had any exposure to animals.

Doctors then reviewed records about the kidney donor, a man in Idaho, and discovered that in the Donor Risk Assessment Interview (DRAI) questionnaire he said he had been scratched by a skunk.

When asked, the family explained that a couple of months before, in October, while he was holding a kitten in a shed on his country property, a skunk approached, showing "predatory aggression toward the kitten".

The man fought off the animal in an encounter that the report says "rendered the skunk unconscious", but not before the man received a "shin scratch that bled", although he did not think he had been bitten.

Five weeks later, a family member said, he became confused, had difficulty swallowing and walking, experienced hallucinations and had a stiff neck. Two days later, he was found unresponsive at home after a presumed cardiac arrest. Although he was resuscitated and hospitalized, he never regained consciousness, and after several days was "declared brain dead and removed from life support".

The report states that several of his organs, including his left kidney, were donated.

After rabies was suspected in the kidney recipient, authorities went back to test laboratory samples from the donor; they tested negative for rabies. But biopsy samples directly from his kidneys did detect a strain "consistent with a silver-haired bat rabies", suggesting that he had, in fact, died of rabies and had passed it on to the recipient.

The investigation suggested a "likely three-step transmission chain" in which a bat infected a skunk, which infected the donor, whose kidney then infected the recipient.

The CDC said it was only the fourth reported transplant-transmitted rabies event in the United States since 1978. It noted that the risk for any transplant-transmitted infection, including rabies, is extremely low.

After discovering that three people had also received cornea grafts from the same donor, authorities immediately removed the grafts and administered post-exposure prophylaxis (PEP) to prevent infection. The three people remained asymptomatic, the report said.

The CDC report stated that in the US, family members often provide information about a prospective donor's infectious disease risk factors, including animal exposures. Rabies is typically "excluded from routine donor pathogen testing because of its rarity in humans in the United States and the complexity of diagnostic testing".

"In this case, hospital staff members who treated the donor were initially unaware of the skunk scratch and attributed his pre-admission signs and symptoms to chronic co-morbidities," the report said.

In an interview with the New York Times, Dr Lara Danziger-Isakov, the director of immunocompromised host infectious diseases at Cincinnati Children's Hospital Medical Center, described the incident as "an exceptionally rare event", adding that "overall, the risk is exceptionally small".


Original Submission

posted by hubie on Friday December 12, @07:36PM   Printer-friendly

https://www.npr.org/2025/12/08/nx-s1-5631826/iceblock-app-lawsuit-trump-bondi

The developer of ICEBlock, an iPhone app that anonymously tracks the presence of Immigration and Customs Enforcement agents, has sued the Trump administration for free speech violations after Apple removed the service from its app store under demands from the White House.

The suit, filed on Monday in federal court in Washington, asks a judge to declare that the administration violated the First Amendment when it threatened to criminally prosecute the app's developer and pressured Apple to make the app unavailable for download, which the tech company did in October.

After Apple removed ICEBlock, Attorney General Pam Bondi said in a statement that "we reached out to Apple today demanding they remove the ICEBlock app from their App Store — and Apple did so."

Lawyer Noam Biale, who filed the suit against the administration, said Bondi's remarks show the government illegally pressuring a private company to suppress free speech.

"We view that as an admission that she engaged in coercion in her official role as a government official to get Apple to remove this app," Biale said in an interview with NPR.

The Justice Department did not return a request for comment, but Trump administration officials have said the app puts the lives of ICE agents in danger.

Apple also did not respond to a request for comment. The lawsuit, which does not name Apple, says the tech giant bowed in the face of political pressure.

"For what appears to be the first time in Apple's nearly fifty-year history, Apple removed a U.S.-based app in response to the U.S. government's demands," according to the suit.

[...] To First Amendment advocates, the White House's pressure campaign targeting ICEBlock is the latest example of what's known as "jawboning," when government officials wield state power to suppress speech. The Cato Institute calls the practice "censorship by proxy."

ABC's suspension of Jimmy Kimmel after FCC Chair Brendan Carr threatened regulatory action, and Bondi's promise of a crackdown on hate speech following the killing of conservative activist Charlie Kirk, are two other prominent instances.

"The use of a high-level government threat to force a private platform to suppress speech fundamentally undermines the public's right to access information about government activities," said Spence Purnell, a resident senior fellow at R Street, a center-right think tank. "If high-level officials can successfully silence political opposition, it sets a dangerous precedent for the future of free expression in this country."

Genevieve Lakier, a First Amendment scholar at the University of Chicago Law School, said the White House's campaign against ICEBlock shows the administration using what has become a familiar playbook: "To use threats of adverse legal and financial consequences, sometimes vague sometimes not so vague, to pressure universities, media companies, law firms, you name it, into not speaking in the ways they like," she said.

One potential weak spot for the lawsuit, however, is a lack of direct evidence that Attorney General Bondi, or other administration officials, made threats against Apple to have the app removed, rather than merely convinced the tech company to do so.

Previously: Apple Removes ICE Tracking Apps After Pressure by Trump Administration


Original Submission