Arthur T Knackerbracket has processed the following story:
An alliance of cloud service providers in Europe is investing €1 million in the Fulcrum Project, an open source cloud federation technology that offers an alternative for local customers anxious about relying on US hyperscalers.
Speaking to The Register, Francisco Mingorance, Secretary General of the Cloud Infrastructure Services Providers in Europe (CISPE) association, explained that, as part of the settlement reached with Microsoft last year, an innovation fund was set up using Microsoft's cash.
Some €1 million of that has now been allocated to Fulcrum, an open source project to aggregate products from smaller tech vendors to rival the hyperscalers, he said.
"It's happening," he said. "There's been work going on for over a year on this, you know, coding and everything, testing, proof of concept...
"We cannot wait another five years. I mean, the sector lost half of its market share in four years to hyperscalers."
According to CISPE, the project marks "a significant step towards European cloud sovereignty" and is designed to "enable European cloud providers to pool and federate their infrastructures, offering a scalable and competitive alternative to foreign-controlled hyperscale cloud providers."
[...] Led by Opiquad, the open source code of the Fulcrum Core Project was officially unveiled at last week's CloudConf in Turin, Italy. Things are moving fast – Mingorance told us the team is aiming for July 2025 for "the first aggregated services available for purchase composition."
Emile Chalouhi, CEO of Opiquad, told The Register, "I think this is the only way to actually be able to finally create a common digital market.
"Smaller providers have access to resources that they didn't have before, and in locations where they didn't have them before."
[...] "Our goal here is to go to the market ASAP. We don't have all the superstructures that you might have in a lot of these other European projects.
"We don't have to mediate with the politicians or with a public project or with all these things. We're just building it, bottom-up to go live, and everything that's really bottom-up needs to be open, public, and go to the market fast."
[...] There is growing unease in Europe among customers in both the public and private sectors who are no longer happy to rely on US-headquartered cloud providers, so it seems the Trump administration is a galvanizing force for change overseas as well as on US soil... or in the clouds.
Arthur T Knackerbracket has processed the following story:
A robotics and machine learning engineer has developed a command-line interface tool that monitors power use from a smart plug and then tunes system performance based on electricity pricing. The simple program, called WattWise, came about when Naveen built a dual-socket EPYC workstation with plans to add four GPUs. It's a power-intensive setup, so he wanted a way to monitor its power consumption using a Kasa smart plug. The enthusiast has released the monitoring portion of the project to the public now, but the portion that manages clocks and power will be released later.
Unfortunately, the Kasa Smart app and the Home Assistant dashboard were inconvenient and couldn't do everything he wanted. He already had a terminal window running monitoring tools such as htop, nvtop, and nload, and decided to take matters into his own hands rather than deal with yet another app.
Naveen built a terminal-based UI that shows power consumption data through Home Assistant and the TP-Link integration. The app monitors real-time power use, showing wattage and current, as well as providing historical consumption charts. More importantly, it is designed to automatically throttle CPU and GPU performance.
Naveen’s power provider uses Time-of-Use (ToU) pricing, so using a lot of power during peak hours can cost significantly more. The workstation can draw as much as 1400 watts at full load, but by reducing the CPU frequency from 3.7 GHz to 1.5 GHz, he's able to reduce consumption by about 225 watts. (No mention is made of GPU throttling, which could potentially allow for even higher power savings with a quad-GPU setup.)
Results will vary based on the hardware being used, naturally, and servers can pull far more power than a typical desktop — even one designed and used for gaming.
WattWise optimizes the system’s clock speed based on the current system load, power consumption as reported by the smart plug, and the time — with the latter factoring in peak pricing. From there, it uses a Proportional-Integral (PI) controller to manage the power and adapts system parameters based on the three variables.
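The PI-controller approach described above can be sketched in a few lines. This is a minimal illustration of the general technique, not WattWise's actual code; the class, gains, and the watts-to-frequency scaling are all assumptions for the example.

```python
class PIController:
    """Proportional-Integral controller steering power draw toward a budget.

    Illustrative sketch only -- not WattWise's implementation.
    """

    def __init__(self, kp, ki, setpoint_watts):
        self.kp = kp                    # proportional gain
        self.ki = ki                    # integral gain
        self.setpoint = setpoint_watts  # target power budget in watts
        self.integral = 0.0

    def update(self, measured_watts, dt):
        # Positive error means we are over budget and should slow down.
        error = measured_watts - self.setpoint
        self.integral += error * dt
        return self.kp * error + self.ki * self.integral


def target_frequency(controller, measured_watts, dt,
                     f_min=1.5e9, f_max=3.7e9, f_current=3.7e9):
    """Map the controller output (watts of overshoot) to a clamped CPU
    frequency target. The 1 MHz-per-watt scale factor is invented here."""
    adjustment = controller.update(measured_watts, dt)
    new_freq = f_current - adjustment * 1e6
    return max(f_min, min(f_max, new_freq))


# During peak ToU pricing, tighten the power budget to 900 W.
peak = PIController(kp=1.0, ki=0.1, setpoint_watts=900)
freq = target_frequency(peak, measured_watts=1400, dt=1.0)
```

With the plug reporting 1400 W against a 900 W budget, the controller asks for a 550 MHz reduction, landing at 3.15 GHz; the integral term keeps pushing the frequency down on subsequent ticks if the overshoot persists.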
At the moment, the app supports only one smart plug at a time and works only with the Kasa brand. However, Naveen says there are plans to add support for multiple plugs, more smart plug brands, integration with other power management tools, and other features. The app in its current form is a pretty simple tool, but sometimes simple is all you need to solve a problem.
Naveen made WattWise open source under the MIT license, and you can download it directly from GitHub. If you’re interested, you can leave feedback and contributions, or you can fork it and adapt it for other systems. Note that the current version only contains the dashboard, not the actual power optimizer, which still needs further work.
Arthur T Knackerbracket has processed the following story:
A newly discovered bacterial weapon against fungi can kill even drug-resistant strains, raising hopes for a new antifungal drug.
Fungal infections have been spreading rapidly and widely in recent years, fueled in part by climate change. Some fungi, including Candida auris, have developed resistance to some highly effective antifungal drugs which have been in use for decades. So scientists have been searching for new drugs to keep fungi in check.
Researchers in China may have found a new type of antifungal called mandimycin, the team reports March 19 in Nature. Mandimycin killed fungal infections in mice more effectively than amphotericin B and several other commonly used antifungal drugs. It even worked against resistant C. auris strains.
Bacteria are masters at fending off fungi, says Martin Burke, a chemist at the University of Illinois Urbana-Champaign. “There’s been this war raging for 2 billion years,” he says. Bacteria and fungi have been “building weapons to try to compete with each other for nutrients in the environment.” Humans have been spying on both armies to learn how to make antibiotics and antifungal drugs.
In one such mission, Zongqiang Wang of China Pharmaceutical University in Nanjing and colleagues combed more than 300,000 bacterial genomes looking for possible weapons against fungi. One strain of Streptomyces netropsis contained a cluster of genes that encode enzymes for building the compound mandimycin.
The antifungal has a backbone structure similar to some other antifungal drugs but has two sugar molecules tacked onto its tail. Those sugars are important for how the molecule kills fungi, because they change the target that the weapon is aimed at.
[...] Instead of ergosterol, mandimycin is attracted to phospholipids, the major building blocks of membranes, Wang and colleagues discovered. It’s the sugars on the tail that allow mandimycin to target phospholipids, particularly one called phosphatidylinositol, the team found. Removing those sugars caused mandimycin to latch on to ergosterol, though more weakly than existing antifungals.
While intact mandimycin proved to be a potent fungus killer, it was far less toxic to mice's kidneys and to human kidney cells grown in lab dishes than amphotericin B. Bacteria escaped mandimycin unscathed.
The ability to destroy fungi but not harm human and bacterial cells has Burke puzzled.
“This is the wild part about mandimycin that I don’t understand,” he says. Why doesn’t it kill the bacteria that produce it?
Only fungi have ergosterol in their membranes, so other cells aren’t harmed by drugs that soak it up. But fungi, bacteria and mammals all have phospholipids, which means pulling those out of membranes should be damaging across the board, including to the mandimycin-making bacteria. Wang and colleagues suggest that mandimycin’s attacks might be specific to phospholipids found in fungi, but not in other types of cells.
That is just one of the mysteries researchers will need to solve before mandimycin can be tested in people, Burke says. “It’s one of those exciting papers that opens a lot of doors, [and] pretty much behind every one is another question.”
Journal References:
• Q. Deng et al. A polyene macrolide targeting phospholipids in the fungal cell membrane. Nature. Published online March 19, 2025. doi: 10.1038/s41586-025-08678-9
• New antifungal breaks the mould. Nature. Published online March 19, 2025. doi: 10.1038/d41586-025-00801-0
Arthur T Knackerbracket has processed the following story:
More doubt is being cast over the US CHIPS Act program, with the Trump administration threatening to halt payments unless companies in line to receive funding commit to substantially expanding their own investments.
President Donald Trump has issued an Executive Order to establish a new office within the Department of Commerce titled the United States Investment Accelerator.
The office's aim is "to encourage companies to make large investments in the United States," and among its powers will be oversight of the CHIPS Program to maximize the benefits for taxpayers, the White House states.
This move follows earlier calls by President Trump to scrap CHIPS Act funding entirely, with any remaining money allocated to cutting federal debt.
According to reports, Secretary of Commerce Howard Lutnick has indicated that he intends to withhold CHIPS Act grants already agreed in order to push the companies involved to substantially expand the projects they have planned.
The aim is to force semiconductor makers promised grants and subsidies for building new manufacturing facilities on American soil to invest even more, without increasing the size of federal grants. This follows the example of TSMC, which earlier this month pledged to spend $100 billion to expand its US fabrication plants.
However, that $100 billion figure disclosed by TSMC chief CC Wei during his meeting with Trump was merely an estimated price tag for plans the company had in the pipeline anyway. Intel's former boss, Pat Gelsinger, also pointed out recently that while TSMC is building fabs in the US, it is keeping its research and development in Taiwan.
"If you don't have R&D in the US, you will not have semiconductor leadership in the US," Gelsinger said at the end of last week.
His old company finalized an agreement with the Department of Commerce in November to receive up to $7.86 billion from the CHIPS Act, which would make it the largest beneficiary of the federal government's cash, if it actually receives it all.
That was also conditional on Intel retaining control of its foundries, amid talk that the troubled Santa Clara-based biz was potentially looking to spin them off as part of a restructure. Intel has since announced it is delaying some of its fab buildout, such as pushing back the completion of its $28 billion Ohio plant until at least 2030.
Gelsinger had previously stated that without CHIPS Act funding, Intel would still continue to build new fabs in Arizona and Ohio, though the expansion would take longer and wouldn't be as comprehensive.
Along with import tariffs on chips, the tough approach the Trump administration is taking with semiconductor makers is likely to lead to more uncertainty in the tech industry. This has already caused mayhem in the PC business, with costs increasing and customers rethinking purchases.
Richard Gordon, Vice President and Practice Lead, Semiconductors, The Futurum Group, referred to AMD's Lisa Su's comments about the impact of tariffs, remarking that Su appeared to be "waiting to see how things pan out in the coming weeks / months before coming to any major conclusions ... and I think that's the only sensible way to deal with Trump."
Gordon added: "The threats about withholding CHIPS Act Funding are largely rhetorical and designed to keep up the pressure on the US semis companies IMO. I think the threats are unnecessary and won't make much difference because US companies are already rapidly re-shoring, as Lisa mentions...
"In terms of investment generally, it's always been my view that semis companies will invest regardless of government handouts because if they don't they won't be around for long. It's nice to have handouts and companies will gladly accept them (depending on the strings attached) but often they only serve to prop up weaker companies."
In addition to overseeing the CHIPS Act, the Investment Accelerator office will try to cut through bureaucracy to ensure that businesses can quickly deploy capital and create jobs, according to the White House.
"By streamlining processes, the Accelerator will attract both foreign and domestic investment, reinforcing America's position as the premier destination for large-scale investment," it claimed.
As well as scrapping some subsidies previously agreed, the Commerce Secretary may consider initiating a separate 25 percent tax credit from the CHIPS Act.
Bruce Schneier and Davi Ottenheimer have co-authored an essay about the essential nature of data integrity in the future of the WWW. (An alternative link to the essay, as published in the Communications of the ACM, is hosted at the ACM Digital Library.) The ability to verify the origin of data, and that it has remained unchanged and unmanipulated, is becoming increasingly important. In essence, they call for a verifiable chain of trust covering data production and usage.
The risks of deploying AI without proper integrity control measures are severe and often underappreciated. When AI systems operate without sufficient security measures to handle corrupted or manipulated data, they can produce subtly flawed outputs that appear valid on the surface. The failures can cascade through interconnected systems, amplifying errors and biases. Without proper integrity controls, an AI system might train on polluted data, make decisions based on misleading assumptions, or have outputs altered without detection. The results of this can range from degraded performance to catastrophic failures.
We see four areas where integrity is paramount in this Web 3.0 world. The first is granular access, which allows users and organizations to maintain precise control over who can access and modify what information and for what purposes. The second is authentication—much more nuanced than the simple "Who are you?" authentication mechanisms of today—which ensures that data access is properly verified and authorized at every step. The third is transparent data ownership, which allows data owners to know when and how their data is used and creates an auditable trail of data provenance. Finally, the fourth is access standardization: common interfaces and protocols that enable consistent data access while maintaining security.
Although they focus on the ability to prove the origin of data, an obvious risk is that the chain of trust becomes a chain of surveillance. In some ways this essay overlaps with a few of the topics brought up in Bruce Schneier's 2016 post on thoughts about integrity and availability threats.
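One simple building block for such a verifiable provenance trail is a hash chain, where each record commits to the one before it. The sketch below is our illustration of the general idea, not a mechanism proposed in the essay; record fields and names are invented.

```python
import hashlib
import json


def add_record(chain, data, origin):
    """Append a provenance record whose hash covers the previous record's
    hash, so any later tampering breaks verification (illustrative only)."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    record = {"origin": origin, "data": data, "prev_hash": prev_hash}
    # Canonical serialization so the hash is reproducible.
    payload = json.dumps(record, sort_keys=True).encode()
    record["hash"] = hashlib.sha256(payload).hexdigest()
    chain.append(record)
    return record


def verify(chain):
    """Walk the chain, recomputing each hash and checking the links."""
    prev_hash = "0" * 64
    for record in chain:
        body = {k: v for k, v in record.items() if k != "hash"}
        if body["prev_hash"] != prev_hash:
            return False
        payload = json.dumps(body, sort_keys=True).encode()
        if hashlib.sha256(payload).hexdigest() != record["hash"]:
            return False
        prev_hash = record["hash"]
    return True


chain = []
add_record(chain, {"text": "raw sensor reading"}, origin="sensor-A")
add_record(chain, {"text": "cleaned reading"}, origin="etl-pipeline")
assert verify(chain)                      # chain intact
chain[0]["data"]["text"] = "forged"
assert not verify(chain)                  # any edit is detected
```

A real deployment would add digital signatures so the origin itself is verifiable, which is also where the surveillance concern above comes in: the same mechanism that proves provenance records who touched the data.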
Previously:
(2025) 10 Years on After 'Data and Goliath' Warned of Data Collection
(2025) Biggest Privacy Erosion in 10 Years? On Google's Policy Change Towards Fingerprinting
(2023) Snowden Ten Years Later - Schneier on Security
(2014) If You Read Boing Boing or Linux Journal, The NSA is Watching You
... and more
On May 9, 2012, Scott Murphy and Mark Crowe, aka 'The Two Guys from Andromeda', started a Kickstarter campaign to make a new space adventure in the style of the Sierra Space Quest series. In April 2024 the campaign concluded with a successful release of the game on Steam, along with the promised DRM-free version for Kickstarter backers. This is good news for anyone looking for the nostalgia of old PC point-and-click gaming. Kudos to the Two Guys team for bringing this project to fruition.
Tracing the Thoughts of a Large Language Model:
Language models like Claude aren't programmed directly by humans—instead, they're trained on large amounts of data. During that training process, they learn their own strategies to solve problems. These strategies are encoded in the billions of computations a model performs for every word it writes. They arrive inscrutable to us, the model's developers. This means that we don't understand how models do most of the things they do.
Knowing how models like Claude think would allow us to have a better understanding of their abilities, as well as help us ensure that they're doing what we intend them to. For example:
- Claude can speak dozens of languages. What language, if any, is it using "in its head"?
- Claude writes text one word at a time. Is it only focusing on predicting the next word or does it ever plan ahead?
- Claude can write out its reasoning step-by-step. Does this explanation represent the actual steps it took to get to an answer, or is it sometimes fabricating a plausible argument for a foregone conclusion?
We take inspiration from the field of neuroscience, which has long studied the messy insides of thinking organisms, and try to build a kind of AI microscope that will let us identify patterns of activity and flows of information. There are limits to what you can learn just by talking to an AI model—after all, humans (even neuroscientists) don't know all the details of how our own brains work. So we look inside.
Today, we're sharing two new papers that represent progress on the development of the "microscope", and the application of it to see new "AI biology". In the first paper, we extend our prior work locating interpretable concepts ("features") inside a model to link those concepts together into computational "circuits", revealing parts of the pathway that transforms the words that go into Claude into the words that come out. In the second, we look inside Claude 3.5 Haiku, performing deep studies of simple tasks representative of ten crucial model behaviors, including the three described above. Our method sheds light on a part of what happens when Claude responds to these prompts, which is enough to see solid evidence that:
- Claude sometimes thinks in a conceptual space that is shared between languages, suggesting it has a kind of universal "language of thought." We show this by translating simple sentences into multiple languages and tracing the overlap in how Claude processes them.
- Claude will plan what it will say many words ahead, and write to get to that destination. We show this in the realm of poetry, where it thinks of possible rhyming words in advance and writes the next line to get there. This is powerful evidence that even though models are trained to output one word at a time, they may think on much longer horizons to do so.
- Claude, on occasion, will give a plausible-sounding argument designed to agree with the user rather than to follow logical steps. We show this by asking it for help on a hard math problem while giving it an incorrect hint. We are able to "catch it in the act" as it makes up its fake reasoning, providing a proof of concept that our tools can be useful for flagging concerning mechanisms in models.
We were often surprised by what we saw in the model: In the poetry case study, we had set out to show that the model didn't plan ahead, and found instead that it did. In a study of hallucinations, we found the counter-intuitive result that Claude's default behavior is to decline to speculate when asked a question, and it only answers questions when something inhibits this default reluctance. In a response to an example jailbreak, we found that the model recognized it had been asked for dangerous information well before it was able to gracefully bring the conversation back around. While the problems we study can (and often have been) analyzed with other methods, the general "build a microscope" approach lets us learn many things we wouldn't have guessed going in, which will be increasingly important as models grow more sophisticated.
These findings aren't just scientifically interesting—they represent significant progress towards our goal of understanding AI systems and making sure they're reliable. We also hope they prove useful to other groups, and potentially, in other domains: for example, interpretability techniques have found use in fields such as medical imaging and genomics, as dissecting the internal mechanisms of models trained for scientific applications can reveal new insight about the science.
At the same time, we recognize the limitations of our current approach. Even on short, simple prompts, our method only captures a fraction of the total computation performed by Claude, and the mechanisms we do see may have some artifacts based on our tools which don't reflect what is going on in the underlying model. It currently takes a few hours of human effort to understand the circuits we see, even on prompts with only tens of words. To scale to the thousands of words supporting the complex thinking chains used by modern models, we will need to improve both the method and (perhaps with AI assistance) how we make sense of what we see with it.
As AI systems are rapidly becoming more capable and are deployed in increasingly important contexts, Anthropic is investing in a portfolio of approaches including realtime monitoring, model character improvements, and the science of alignment. Interpretability research like this is one of the highest-risk, highest-reward investments, a significant scientific challenge with the potential to provide a unique tool for ensuring that AI is transparent. Transparency into the model's mechanisms allows us to check whether it's aligned with human values—and whether it's worthy of our trust.
For full details, please read the papers. Below, we invite you on a short tour of some of the most striking "AI biology" findings from our investigations.
In TFS there follows a lot of technical explanation which will surely interest some of our community but might leave others less inclined to read it. It is your choice.
There appears to have been a problem with the update of the Poll. Some users made comments before the Poll appeared on the front page - or so it seems; how they accessed it I do not know. The Poll was released at the programmed time, but comments that had already been made appear to have been lost. I am investigating the cause.
My apologies to anyone who had already made a comment. All I can do is ask that you make your comments again. We have not seen this problem before, so if anyone can provide additional information that might help identify the cause it would be most useful. Did you access the Poll via the front page? If not, can you explain how you did access it please? Which browser are you using? Did anything appear 'different' from when you usually access the site?
[Addendum: The poll displayed on my front page reverted to the previous poll again, though after a minute or so it returned to displaying the correct Poll. If anyone else has seen the same, please confirm in a reply to this Meta.]
Intel and Taiwan Semiconductor Manufacturing Company (TSMC) are reportedly teaming up to create a joint chipmaking venture, a collaboration that could reshape semiconductor production and carry substantial consequences for the global tech market.
The two firms are said to have reached a tentative agreement to create a joint venture that will operate Intel's chipmaking facilities, according to The Information. TSMC will have a 20% stake in the new venture.
Instead of funding its stake with capital, TSMC will share some of its chipmaking practices with Intel employees and train them, added The Information.
The Trump administration reportedly kindled the discussions in an effort to boost Intel's turnaround efforts. Intel executives are worried about mass layoffs.
Arthur T Knackerbracket has processed the following story:
Fire Rover, a company that specializes in automated and semi-automated fire suppression systems, released its annual report noting that waste and recycling fires are steadily rising. In 2024, the company recorded 2,910 fires – a 60-percent increase from 2023's 1,809 and a 100-percent jump from 2022's 1,409 incidents. The report also notes that fire crews dispatched to emergencies at trash and recycling facilities hit a record high of 398, a steady growth since Fire Rover began tracking the stat at 275 incidents in 2016.
Lithium-ion battery fires are not new, nor are they the only cause of trash and recycling blazes. Fire Rover CEO Ryan Fogelman told Ars Technica things like fireworks, pool chemicals, and hot barbeque briquettes pose just as much risk. However, batteries, particularly those in disposable vaping products, are a rapidly growing cause mainly because of consumer ignorance and a lack of widespread e-waste collection.
Many well-meaning customers know not to throw their vapes in the regular trash, so they use the other option – recycling bins, which is no better. No matter which facility these devices land in, they can ignite in many ways. Crushing pressure, puncturing, short-circuiting, and vibration from facility operations are common causes. However, battery defects, internal cell failure, and overheating are indirect means of ignition that refuse centers cannot control. Fogelman estimates that about half of the incidents Fire Rover tracks are battery-related, costing facilities approximately $2.5 billion in 2024 alone.
The CEO says that a properly functioning e-waste infrastructure could reduce this trend, but that does not currently exist and does not seem to be a high priority. Furthermore, the few facilities offering e-waste collection are abandoning or restricting it, likely because of the associated costs.
For example, my local refuse center used to pick up e-waste once a year. It recently discontinued that service. Customers can still bring in their e-waste, but the facility has a long list of items it refuses to accept. The added inconvenience of having to haul in their old electronics and the annoyance of not having anywhere else to dispose of the unaccepted items has likely led many to just chuck the lot into the regular recycle bin or the trash.
Fire Rover points its finger at the vaping industry, believing it should take more responsibility for helping clean up the mess it has helped create.
"Not only are their batteries being improperly discarded in waste and recycling bins, but the vape industry has done the bare minimum to invest in the technology needed to address the 1.2 billion vapes entering our waste and recycling streams annually," the report states.
Are the batteries easily removed from the vape pen? Could they be of value to hobbyists or for other uses?
https://medicalxpress.com/news/2025-03-perceptions-songbird-parallels-human-speech.html
Expectations can influence perception in seemingly contradictory ways, either by directing attention to expected stimuli and enhancing perceptual acuity or by stabilizing perception and diminishing acuity within expected stimulus categories. The neural mechanisms supporting these dual roles of expectation are not well understood. Here, we trained European starlings to classify ambiguous song syllables in both expected and unexpected acoustic contexts. We show that birds employ probabilistic, Bayesian integration to classify syllables, leveraging their expectations to stabilize their perceptual behavior. However, auditory sensory neural populations do not reflect this integration. Instead, expectation enhances the acuity of auditory sensory neurons in high-probability regions of the stimulus space. This modulation diverges from patterns typically observed in motor areas, where Bayesian integration of sensory inputs and expectations predominates. Our results suggest that peripheral sensory systems use expectation to improve sensory representations and maintain high-fidelity representations of the world, allowing downstream circuits to flexibly integrate this information with expectations to drive behavior.
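The Bayesian integration the abstract describes — a context-driven prior combining with noisy acoustic evidence — reduces, for a two-category choice, to Bayes' rule. The sketch below is our illustration with invented numbers, not the paper's model or data.

```python
def posterior(prior_a, likelihood_a, likelihood_b):
    """P(category A | sound) via Bayes' rule for two categories A and B.

    prior_a:       context-driven expectation that the syllable is an A
    likelihood_a:  P(sound | A), the acoustic evidence for A
    likelihood_b:  P(sound | B), the acoustic evidence for B
    """
    prior_b = 1.0 - prior_a
    evidence = prior_a * likelihood_a + prior_b * likelihood_b
    return prior_a * likelihood_a / evidence


# A perfectly ambiguous syllable (equal likelihoods): the context prior
# alone decides, stabilizing the percept just as the birds' behavior does.
p_ambiguous = posterior(prior_a=0.8, likelihood_a=0.5, likelihood_b=0.5)

# Clear acoustic evidence with a neutral context: the likelihood dominates.
p_clear = posterior(prior_a=0.5, likelihood_a=0.9, likelihood_b=0.1)
```

With equal likelihoods the posterior simply equals the prior (0.8 here), which is the "stabilizing" role of expectation; the paper's surprise is that the sensory neurons themselves do not show this integration, instead sharpening their tuning in the expected region.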
Past neuroscience and psychology studies have shown that people's expectations of the world can influence their perceptions, either by directing their attention to expected stimuli or by reducing their sensitivity (i.e., perceptual acuity) to variations within the categories of stimuli we expect to be exposed to.
While the effects of expectations on perceptions are now well-documented, their neural underpinnings remain poorly understood.
Researchers at the University of California San Diego (UC San Diego) carried out a study involving songbirds, aimed at better understanding how expectation-fueled biases in perception shape brain activity and behavior.
Their findings, published in Nature Neuroscience, suggest that the perceptions of songbirds, like those of humans, are influenced by expectations, with peripheral sensory systems utilizing expectations to enhance sensory perception and retain high-fidelity representations of the world.
"This work was inspired by an observation about human speech, namely that listeners are able to comprehend speech even though there is a great degree of variability in the sound entering their ears," Tim Sainburg, first author of the paper, told Medical Xpress.
"Not only are we tasked with understanding speech in noisy environments, but we also have to deal with variability in the actual speech signal."
Human speakers are known to have different voices, while also pronouncing many words differently. Past studies suggest that the human brain possesses robust underlying mechanisms designed to address these differences, by grouping speech sounds into stable perceptual categories, a process referred to as "categorical perception."
"One of these mechanisms is that we use context to cue and bias our perception," said Sainburg. "The goal of our study was to understand how that bias works in behavior and in the brain."
Timothy Q. Gentner's lab at UC San Diego, which Sainburg is a part of, often examines the vocal behavior and perceptions of songbirds. This is because songbirds are known to share many similarities with humans in terms of their vocal behavior, thus studying them can help to better understand human speech and speech-related perceptions.
"Behaviorally, we were interested in how expectation biases perception in songbirds," explained Sainburg.
The Trump Administration issued yet another executive order on Thursday [March 20, 2025]. This one directs the federal government to mine federal public lands "to the maximum possible extent," and to prioritize mining over all other uses on federal lands that contain critical mineral deposits.
This should be alarming to conservationists and wilderness advocates. Because in addition to putting critical areas like the Boundary Waters and Bristol Bay back in the crosshairs, the administration's extraction-first approach could dramatically shift what our public lands look like and how we use them.
"There are really three main thrusts to this executive order," Dan Hartinger, senior director of agency policy for the Wilderness Society, tells Outdoor Life. "Job one is to open new places to mining. Job two is to subsidize mining in those places. And job three is to ram through individual projects regardless of public input or what the science says."
The executive order, Immediate Measures to Increase American Mineral Production, invokes wartime powers granted by the Defense Production Act. It allows Interior Secretary Doug Burgum to expand the country's list of critical minerals. It also directs Burgum to make a priority list of all federal lands with mineral deposits, and to take whatever actions necessary to expedite and issue mining permits there. This includes rolling back environmental regulations and finding ways to fund and subsidize private mining companies with taxpayer dollars.
[...] By invoking its wartime powers with the executive order, the Trump administration claims that a mining-first approach is vital to shoring up national security and competing with hostile foreign nations. But its actions stand to benefit the international mining conglomerates that are already operating on U.S. federal lands and which, Hartinger says, are not required to pay royalties or other fees for the value of the minerals they extract.
"This would use taxpayer funding to issue loans and capital assistance, and essentially subsidize these operations. So not only are these companies getting the land for free, and the minerals for free, and the ability to dump their waste basically wherever they want. We're going to pay them with taxpayer money to do that."
[...] "I think it's helpful to think about this as part of a pattern. And, you know, it's very concerning to us to hear Secretary Burgum saying our federal lands are assets on the nation's balance sheet," Hartinger says. "But I think it's very instructive, too. Because this administration sees our public lands not as things that provide inherent and intrinsic benefits to us, in the form of clean water and air, recreation, wildlife habitat, or any of these myriad uses. It is purely a matter of: How can we extract the maximum short-term dollars from these places?"
https://www.righto.com/2025/03/pentium-microcde-rom-circuitry.html
Most people think of machine instructions as the fundamental steps that a computer performs. However, many processors have another layer of software underneath: microcode. With microcode, instead of building the processor's control circuitry from complex logic gates, the control logic is implemented with code known as microcode, stored in the microcode ROM. To execute a machine instruction, the computer internally executes several simpler micro-instructions, specified by the microcode. In this post, I examine the microcode ROM in the original Pentium, looking at the low-level circuitry.
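The idea can be illustrated with a toy sketch: instead of decoding each machine instruction with dedicated logic gates, a microcoded control unit looks up a micro-program in a ROM and steps through it. The opcodes and micro-operations below are invented for illustration and are not the Pentium's actual microcode.

```python
# Toy model of a microcoded control unit. The "ROM" maps each machine
# opcode to a sequence of simpler micro-instructions; executing an
# instruction means stepping through its micro-program.
# (Illustrative names only -- not real Pentium microcode.)

MICROCODE_ROM = {
    "INC": ("reg_to_alu", "alu_add_one", "alu_to_reg"),
    "ADD": ("reg_to_alu", "mem_to_alu", "alu_add", "alu_to_reg"),
    "NOP": ("idle",),
}

def execute(opcode):
    """Run one machine instruction by stepping through its micro-program."""
    trace = []
    for micro_op in MICROCODE_ROM[opcode]:
        trace.append(micro_op)  # a real CPU would drive control lines here
    return trace

print(execute("ADD"))
```

The appeal, as the article notes, is that complex instructions become data in a ROM rather than hand-designed gate logic, which makes the control unit easier to build and to fix.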
https://medicalxpress.com/news/2025-03-cravings-food-neurons-amygdala-play.html
To ensure we get the calories and hydration we need, the brain relies on a complex network of cells, signals, and pathways to tell us when to eat, drink, or stop. Yet much about how the brain deciphers the body's needs and translates them into action remains unknown.
In a new study published in Nature Communications, researchers from the Max Planck Institute for Biological Intelligence, in collaboration with the University of Regensburg and Stanford University, have identified specific populations of neurons in the amygdala—an emotional and motivational center of the brain—that play a key role in this process.
These specialized "thirst" and "hunger" neurons operate through distinct circuits, influencing the drive to eat or drink. The study, which was carried out in mice, sheds new light on the amygdala's role in regulating our nutritional needs and may offer insights into eating disorders and addiction.
The amygdala, a brain region often linked to emotions and decision-making, also plays a key role in shaping our desire to eat and drink. Earlier research led by Rüdiger Klein's group at the Max Planck Institute for Biological Intelligence revealed that neurons in the central nucleus of the amygdala connect food to feelings—pairing tasty meals with positive emotions, associating bad food with aversion, and suppressing appetite when nausea sets in.
The team also demonstrated that changing the activity of these neurons can alter behavior, prompting mice to eat even when they are full or feeling unwell.
Building on these findings, the new research has detailed distinct groups of neurons in the same central region of the amygdala that respond specifically to thirst and others that respond to hunger, guided by a complex web of molecular cues.
"One of these groups of neurons is solely dedicated to regulating the desire to drink, the first 'thirst neuron' that has been identified in the amygdala," explains Federica Fermani, who led the study. "When we activated these neurons, the mice drank more, and when we suppressed their activity, the mice drank less.
"We also identified another group of neurons in the same region of the amygdala that drives thirst but also plays a role in regulating hunger. These findings highlight how some neurons show remarkable specialization for specific behaviors, while others have more general roles in guiding food and drink choices."
To explore how neurons in the central nucleus of the amygdala regulate drinking and eating, the researchers used advanced genetic tools to study brain activity in mice during hunger, thirst, and when they were already full and hydrated. One method, called optogenetics, allowed the team to activate specific neurons using light-sensitive proteins and a laser precisely tuned to trigger those cells.
They also used approaches to silence the neurons, observing how their absence influenced the mice's tendency to eat or drink. By combining this with new methods that enable the monitoring of individual neurons across multiple brain regions, the researchers mapped where these neurons receive information and identified other brain regions they communicate with.
Journal Reference: Fermani, F., Chang, S., Mastrodicasa, Y. et al. Food and water intake are regulated by distinct central amygdala circuits revealed using intersectional genetics. Nat Commun 16, 3072 (2025). https://doi.org/10.1038/s41467-025-58144-3
Automated AI bots seeking training data threaten Wikipedia project stability, foundation says:
On Tuesday, the Wikimedia Foundation announced that relentless AI scraping is putting strain on Wikipedia's servers. Automated bots seeking training data for LLMs have been vacuuming up terabytes of data, growing the foundation's bandwidth used for downloading multimedia content by 50 percent since January 2024. It's a scenario familiar across the free and open source software (FOSS) community, as we've previously detailed.
The Foundation hosts not only Wikipedia but also platforms like Wikimedia Commons, which offers 144 million media files under open licenses. For decades, this content has powered everything from search results to school projects. But since early 2024, AI companies have dramatically increased automated scraping through direct crawling, APIs, and bulk downloads to feed their hungry AI models. This exponential growth in non-human traffic has imposed steep technical and financial costs—often without the attribution that helps sustain Wikimedia's volunteer ecosystem.
The impact isn't theoretical. The foundation says that when former US President Jimmy Carter died in December 2024, his Wikipedia page predictably drew millions of views. But the real stress came when users simultaneously streamed a 1.5-hour video of a 1980 debate from Wikimedia Commons. The surge doubled Wikimedia's normal network traffic, temporarily maxing out several of its Internet connections. Wikimedia engineers quickly rerouted traffic to reduce congestion, but the event revealed a deeper problem: The baseline bandwidth had already been consumed largely by bots scraping media at scale.
This behavior is increasingly familiar across the FOSS world. Fedora's Pagure repository blocked all traffic from Brazil after similar scraping incidents covered by Ars Technica. GNOME's GitLab instance implemented proof-of-work challenges to filter excessive bot access. Read the Docs dramatically cut its bandwidth costs after blocking AI crawlers.
Wikimedia's internal data explains why this kind of traffic is so costly for open projects. Unlike humans, who tend to view popular and frequently cached articles, bots crawl obscure and less-accessed pages, forcing Wikimedia's core datacenters to serve them directly. Caching systems designed for predictable, human browsing behavior don't work when bots are reading the entire archive indiscriminately.
As a result, Wikimedia found that bots account for 65 percent of the most expensive requests to its core infrastructure despite making up just 35 percent of total pageviews. This asymmetry is a key technical insight: The cost of a bot request is far higher than a human one, and it adds up fast.
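The asymmetry is easy to see in a toy model: if a cache holds only the popular pages that humans revisit, a bot that crawls the whole archive misses the cache on nearly every request and lands on the origin servers. The numbers below are illustrative, not Wikimedia's.

```python
# Sketch of why long-tail bot crawls are disproportionately expensive,
# assuming a cache that holds only the most popular pages.
# (Illustrative figures only -- not Wikimedia's actual traffic data.)

CACHE = set(range(100))      # the 100 most popular pages are cached at the edge
ARCHIVE = range(10_000)      # the full archive of pages

def origin_fetches(requests):
    """Count requests that miss the cache and must hit the core datacenter."""
    return sum(1 for page in requests if page not in CACHE)

# Humans mostly reread popular pages; every request here is a cache hit.
human_requests = [page % 100 for page in range(1_000)]

# A scraper reads the entire archive once, popular or not.
bot_requests = list(ARCHIVE)

print(origin_fetches(human_requests))  # 0 -- all served from cache
print(origin_fetches(bot_requests))    # 9900 -- almost all hit the origin
```

Even though the bot makes only ten times as many requests as the humans in this sketch, it generates all of the expensive origin traffic, mirroring the 65/35 split the foundation reports.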
[...] Across the Internet, open platforms are experimenting with technical solutions: proof-of-work challenges, slow-response tarpits (like Nepenthes), collaborative crawler blocklists (like "ai.robots.txt"), and commercial tools like Cloudflare's AI Labyrinth. These approaches address the technical mismatch between infrastructure designed for human readers and the industrial-scale demands of AI training.
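The proof-of-work approach mentioned above can be sketched in a few lines: the server hands out a challenge, and the client must burn CPU time finding a nonce whose hash meets a difficulty target before its request is served. This is a minimal hashcash-style example, not any project's actual implementation.

```python
# Minimal hashcash-style proof of work: finding a valid nonce is costly,
# verifying one is cheap. Scrapers making millions of requests pay the
# cost millions of times. (A sketch of the general technique only.)
import hashlib

def solve(challenge: str, difficulty: int = 4) -> int:
    """Brute-force a nonce whose SHA-256 digest of challenge:nonce
    starts with `difficulty` hex zeros."""
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce
        nonce += 1

def verify(challenge: str, nonce: int, difficulty: int = 4) -> bool:
    """Check a submitted nonce in a single hash operation."""
    digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)

nonce = solve("page-request-token", difficulty=3)
print(verify("page-request-token", nonce, difficulty=3))
```

The design choice is the asymmetry: each extra hex digit of difficulty multiplies the solver's expected work by 16, while verification stays a single hash, so operators can tune the tax on industrial-scale crawlers without noticeably slowing human readers.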
[...] The organization is now focusing on systemic approaches to this issue under a new initiative: WE5: Responsible Use of Infrastructure. It raises critical questions about guiding developers toward less resource-intensive access methods and establishing sustainable boundaries while preserving openness.
The challenge lies in bridging two worlds: open knowledge repositories and commercial AI development. Many companies rely on open knowledge to train commercial models but don't contribute to the infrastructure making that knowledge accessible. This creates a technical imbalance that threatens the sustainability of community-run platforms.
Better coordination between AI developers and resource providers could potentially resolve these issues through dedicated APIs, shared infrastructure funding, or more efficient access patterns. Without such practical collaboration, the platforms that have enabled AI advancement may struggle to maintain reliable service. Wikimedia's warning is clear: Freedom of access does not mean freedom from consequences.