WindBorne says its balloons are compliant with all applicable airspace regulations:
The mysterious impact on a United Airlines aircraft in flight last week has sparked plenty of theories about its cause, from space debris to high-flying birds.
However, the question of what happened to flight 1093, and its severely damaged front window, appears to be answered in the form of a weather balloon.
"I think this was a WindBorne balloon," Kai Marshland, co-founder of the weather prediction company WindBorne Systems, told Ars in an email on Monday evening. "We learned about UA1093 and the potential that it was related to one of our balloons at 11 pm PT on Sunday and immediately looked into it. At 6 am PT, we sent our preliminary investigation to both NTSB and FAA, and are working with both of them to investigate further."
WindBorne is a six-year-old company that seeks both to collect weather observations with its fleet of small, affordable weather balloons and to use that atmospheric data in its proprietary artificial intelligence weather models.
Scott Manley, a popular YouTube creator and pilot, was among the first people to speculate online that the collision was caused by a WindBorne balloon, having correlated the position of a balloon data point with the flight path of the aircraft. Asked about this by Ars, the company confirmed that its balloon likely hit the plane.
The strike occurred Thursday, during a United Airlines flight from Denver to Los Angeles. Images shared on social media showed that one of the two large windows at the front of the 737 MAX aircraft was significantly cracked. Related images also reveal a pilot's arm cut multiple times by what appear to be small shards of glass from the windshield.
Speculation built over the weekend after one of the aircraft's pilots described the object that struck the aircraft as "space debris." On Sunday, the National Transportation Safety Board confirmed that it is investigating the collision, which caused no fatalities.
WindBorne has a fleet of global sounding balloons that fly various vertical profiles around the world, gathering atmospheric data. Each balloon is fairly small, with a mass of 2.6 pounds (1.2 kg), and provides temperature, wind, pressure, and other data about the atmosphere. Such data is useful for establishing initial conditions upon which weather models base their outputs.
Notably, the company has an FAQ on its website (which clearly was written months or years ago, before this incident) that addresses several questions, including: Why don't WindBorne balloons pose a risk to airplanes?
"The quick answer is our constellation of Global Sounding Balloons (GSBs), which we call WindBorne Atlas, doesn't pose a threat to airplanes or other objects in the sky. It's not only highly improbable that a WindBorne balloon could even collide with an aircraft in the first place; but our balloons are so lightweight that they would not cause significant damage.
WindBorne also said that its balloons are compliant with all applicable airspace regulations.
"For example, we maintain active lines of communication with the FAA to ensure our operations satisfy all relevant regulatory requirements," the company states. "We also provide government partners with direct access to our comprehensive, real-time balloon tracking system via our proprietary software, WindBorne Live."
It started with a now-deleted tweet from OpenAI manager Kevin Weil, who wrote that GPT-5 had "found solutions to 10 (!) previously unsolved Erdős problems" and made progress on eleven more. He described these problems as "open for decades." Other OpenAI researchers echoed the claim.
The wording made it sound like GPT-5 had independently produced mathematical proofs for tough number theory questions: a potential scientific breakthrough, and a sign that generative AI could uncover unknown solutions, drive novel research, and open the door to major advances.
Mathematician Thomas Bloom, who runs erdosproblems.com, pushed back right away. He called the statements "a dramatic misinterpretation," clarifying that "open" on his site just means he personally doesn't know the solution - not that the problem is actually unsolved. GPT-5 had only surfaced existing research that Bloom had missed.
DeepMind CEO Demis Hassabis called the episode "embarrassing," and Meta chief AI scientist Yann LeCun pointed out that OpenAI had basically bought into its own hype ("Hoisted by their own GPTards").
The original tweets were mostly deleted, and the researchers admitted their mistake. Still, the incident adds to the perception that OpenAI is an organization under pressure and careless in its approach. It raises questions about why leading AI researchers would share such dramatic claims without verifying the facts, especially in a field already awash in hype, with billions at stake. OpenAI researcher Sébastien Bubeck, one of those who shared the claim, knew what GPT-5 actually contributed, but still used the ambiguous phrase "found solutions."
The real story here is getting overshadowed: GPT-5 actually proved useful as a research tool for tracking down relevant academic papers. This is especially valuable for problems where the literature is scattered or the terminology isn't consistent.
Mathematician Terence Tao sees this as the most immediate potential for AI in math—not solving the toughest open problems, but speeding up tedious tasks like literature searches. While there have been some "isolated examples of progress" on difficult questions, Tao says AI is most valuable as a time-saving assistant. He has also said that generative AI could help "industrialize" mathematics and accelerate progress in the field. Still, human expertise is crucial for reviewing, classifying, and safely integrating AI-generated results into real research.
Too many services depend not just on one cloud provider, but on one location:
Analysis: Amazon's US-EAST-1 region outage caused widespread chaos, taking websites and services offline even in Europe and raising some difficult questions. After all, cloud operations are supposed to have some built-in resiliency, right?
The problems began just after midnight US Pacific Time today when Amazon Web Services (AWS) noticed increased error rates and latencies for multiple services running within its home US-EAST-1 region.
Within a couple of hours, Amazon's techies had identified DNS as a potential root cause of the issue – specifically the resolution of the DynamoDB API endpoint in US-EAST-1 – and were working on a fix.
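The symptom described here (an otherwise healthy service whose regional endpoint name fails to resolve) is easy to picture with a quick check. Below is a minimal, purely illustrative Python sketch, not AWS tooling; the hostname is the standard regional DynamoDB endpoint.

```python
# Minimal DNS health probe (illustrative only, not AWS tooling).
import socket

ENDPOINT = "dynamodb.us-east-1.amazonaws.com"  # regional DynamoDB API endpoint

try:
    addrs = socket.getaddrinfo(ENDPOINT, 443, proto=socket.IPPROTO_TCP)
    print(f"{ENDPOINT} resolves to:", sorted({a[4][0] for a in addrs}))
except socket.gaierror as exc:
    # This is the class of failure described above: the service itself may be
    # up, but clients cannot resolve its name, so every call fails.
    print(f"DNS resolution failed for {ENDPOINT}: {exc}")
```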
However, the issue was also affecting other AWS services, including global services and features that rely on endpoints operating from AWS's original region, such as IAM (Identity and Access Management) updates and DynamoDB global tables.
While Amazon worked to fully resolve the problem, the issue was already causing widespread chaos to websites and online services beyond the Northern Virginia locale of US-EAST-1, and even outside of America's borders.
As The Register reported earlier, Amazon.com itself was down for a time, while the company's Alexa smart speakers and Ring doorbells stopped working. But the effects were also felt by messaging apps such as Signal and WhatsApp, while in the UK, Lloyds Bank and even government services such as tax agency HMRC were impacted.
According to a BBC report, outage monitor Downdetector indicated there had been more than 6.5 million reports globally, with upwards of 1,000 companies affected.
How could this happen? Amazon has a global footprint, and its infrastructure is split into regions: physical locations, each with a cluster of datacenters. Each region consists of a minimum of three isolated and physically separate availability zones (AZs), each with independent power and connected via redundant, ultra-low-latency networks.
Customers are encouraged to design their applications and services to run in multiple AZs to avoid being taken down by a failure in one of them.
Sadly, it seems that the entire edifice has an Achilles heel that can cause problems regardless of how much redundancy you design into your cloud-based operations, at least according to the experts we asked.
"The issue with AWS is that US East is the home of the common control plane for all of AWS locations except the federal government and European Sovereign Cloud. There was an issue some years ago when the problem was related to management of S3 policies that was felt globally," Omdia Chief Analyst Roy Illsley told us.
He explained that US-EAST-1 can cause global issues because many users and services default to using it since it was the first AWS region, even if they are in a different part of the world.
Certain "global" AWS services or features are run from US-EAST-1 and are dependent on its endpoints, and this includes DynamoDB Global Tables and the Amazon CloudFront content delivery network (CDN), Illsley added.
Sid Nag, president and chief research officer for Tekonyx, agreed.
"Although the impacted region is in the AWS US East region, many global services (including those used in Europe) depend on infrastructure or control-plane / cross-region features located in US-EAST-1. This means that even if the European region was unaffected in terms of its own availability zones, dependencies could still cause knock-on impact," he said.
"Some AWS features (for example global account-management, IAM, some control APIs, or even replication endpoints) are served from US-EAST-1, even if you're running workloads in Europe. If those services go down or become very slow, even European workloads may be impacted," he added.
Any organization whose resiliency plans extend to duplicating resources across two or more different cloud platforms will no doubt be feeling smug right now, but that level of redundancy costs money, and don't the cloud providers keep telling us how reliable they are?
The upshot of this is that many firms will likely be taking another look at the assumptions underpinning their cloud strategy.
"Today's massive AWS outage is a visceral reminder of the risks of over-reliance on two dominant cloud providers, an outage most of us will have felt in some way," said Nicky Stewart, Senior Advisor at the Open Cloud Coalition.
Cloud services in the UK are largely dominated by AWS and Microsoft's Azure, with Google Cloud coming a distant third.
"It's too soon to gauge the economic fallout, but for context, last year's global CrowdStrike outage was estimated to have cost the UK economy between £1.7 and £2.3 billion ($2.3 and $3.1 billion). Incidents like this make clear the need for a more open, competitive and interoperable cloud market; one where no single provider can bring so much of our digital world to a standstill," she added.
"The AWS outage is yet another reminder of the weakness of centralised systems. When a key component of internet infrastructure depends on a single US cloud provider, a single fault can bring global services to their knees - from banks to social media, and of course the likes of Signal, Slack and Zoom," said Amandine Le Pape, Co-Founder of Element, which provides sovereign and resilient communications for governments.
But there could also be compensation claims in the offing, especially where financial transactions may have failed or missed deadlines because of the incident.
"An outage such as this can certainly open the provider and its users to risk of loss, especially businesses that rely on its infrastructure to operate critical services," said Henna Elahi, Senior Associate at Grosvenor Law.
Elahi added that it would, of course, depend on factors such as the terms of service and any service level agreements between the business and AWS, the specific causes of the outage, and its severity and length.
"The impacts on Lloyds Bank, for example, could have very serious implications for the end user. Key payments and transfers that are being made may fail and this could lead to far reaching issues for a user such as causing breaches of contracts, failure to complete purchases and failure to provide security information. This may very well lead to customer complaints and attempts to recover any loss caused by the outage from the business," she said.
At 15:13 UTC today, AWS updated its Health Dashboard:
"We have narrowed down the source of the network connectivity issues that impacted AWS Services. The root cause is an underlying internal subsystem responsible for monitoring the health of our network load balancers. We are throttling requests for new EC2 instance launches to aid recovery and actively working on mitigations."
Thirty minutes later, it added:
"We have taken additional mitigation steps to aid the recovery of the underlying internal subsystem responsible for monitoring the health of our network load balancers and are now seeing connectivity and API recovery for AWS services. We have also identified and are applying next steps to mitigate throttling of new EC2 instance launches."
Water bound in mantle rock alters our view of the Earth's composition:
Researchers from Northwestern University and the University of New Mexico report evidence for potentially oceans' worth of water deep beneath the United States. Though not in the familiar liquid form (the ingredients for water are bound up in rock deep in the Earth's mantle), the discovery may represent the planet's largest water reservoir.
The presence of liquid water on the surface is what makes our "blue planet" habitable, and scientists have long been trying to figure out just how much water may be cycling between Earth's surface and interior reservoirs through plate tectonics.
Northwestern geophysicist Steve Jacobsen and University of New Mexico seismologist Brandon Schmandt have found deep pockets of magma located about 400 miles beneath North America, a likely signature of the presence of water at these depths. The discovery suggests water from the Earth's surface can be driven to such great depths by plate tectonics, eventually causing partial melting of the rocks found deep in the mantle.
The findings, to be published June 13 in the journal Science, will aid scientists in understanding how the Earth formed, what its current composition and inner workings are and how much water is trapped in mantle rock.
"Geological processes on the Earth's surface, such as earthquakes or erupting volcanoes, are an expression of what is going on inside the Earth, out of our sight," said Jacobsen, a co-author of the paper. "I think we are finally seeing evidence for a whole-Earth water cycle, which may help explain the vast amount of liquid water on the surface of our habitable planet. Scientists have been looking for this missing deep water for decades."
[Image caption: A blue crystal of ringwoodite containing around one percent H2O in its crystal structure is compressed to conditions of 700 km depth inside a diamond-anvil cell. When a laser heats the sample to temperatures over 1,500 °C (orange spots), the ringwoodite transforms into minerals found in the lowermost mantle. Synchrotron-infrared spectra collected on beamline U2A at the NSLS reveal changes in the OH-absorption spectra that correspond to melt generation, which was also detected by seismic waves beneath most of North America.]
Scientists have long speculated that water is trapped in a rocky layer of the Earth's mantle located between the lower mantle and upper mantle, at depths between 250 miles and 410 miles. Jacobsen and Schmandt are the first to provide direct evidence that there may be water in this area of the mantle, known as the "transition zone," on a regional scale. The region extends across most of the interior of the United States.
Schmandt, an assistant professor of geophysics at the University of New Mexico, uses seismic waves from earthquakes to investigate the structure of the deep crust and mantle. Jacobsen, an associate professor of Earth and planetary sciences at Northwestern's Weinberg College of Arts and Sciences, uses observations in the laboratory to make predictions about geophysical processes occurring far beyond our direct observation.
The study combined Jacobsen's lab experiments, in which he studies mantle rock under the simulated high pressures found 400 miles below the Earth's surface, with Schmandt's observations drawn from vast amounts of seismic data from the USArray, a dense network of more than 2,000 seismometers across the United States.
Jacobsen's and Schmandt's findings converged to produce evidence that melting may occur about 400 miles deep in the Earth. H2O stored in mantle rocks, such as those containing the mineral ringwoodite, likely is the key to the process, the researchers said.
"Melting of rock at this depth is remarkable because most melting in the mantle occurs much shallower, in the upper 50 miles," said Schmandt, a co-author of the paper. "If there is a substantial amount of H2O in the transition zone, then some melting should take place in areas where there is flow into the lower mantle, and that is consistent with what we found."
If just one percent of the weight of mantle rock located in the transition zone is H2O, that would be equivalent to nearly three times the amount of water in our oceans, the researchers said.
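A rough back-of-envelope check of that figure (our own estimate, not a calculation from the paper) treats the transition zone as a spherical shell between about 410 km and 660 km depth:

\[
V \approx \tfrac{4}{3}\pi\!\left[(6371-410)^3-(6371-660)^3\right]\ \mathrm{km}^3 \approx 1.1\times10^{20}\ \mathrm{m}^3
\]
\[
M_{\mathrm{rock}} \approx \rho V \approx (3800\ \mathrm{kg\,m^{-3}})(1.1\times10^{20}\ \mathrm{m}^3) \approx 4\times10^{23}\ \mathrm{kg}
\]
\[
M_{\mathrm{H_2O}} \approx 0.01\,M_{\mathrm{rock}} \approx 4\times10^{21}\ \mathrm{kg} \approx 3\,M_{\mathrm{oceans}}, \qquad M_{\mathrm{oceans}} \approx 1.4\times10^{21}\ \mathrm{kg},
\]

which is consistent with the "nearly three times the oceans" figure quoted by the researchers.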
This water is not in a form familiar to us — it is not liquid, ice or vapor. This fourth form is water trapped inside the molecular structure of the minerals in the mantle rock. The weight of 250 miles of solid rock creates such high pressure, along with temperatures above 2,000 degrees Fahrenheit, that a water molecule splits to form a hydroxyl radical (OH), which can be bound into a mineral's crystal structure.
Schmandt and Jacobsen's findings build on a discovery reported in March in the journal Nature in which scientists discovered a piece of the mineral ringwoodite inside a diamond brought up from a depth of 400 miles by a volcano in Brazil. That tiny piece of ringwoodite — the only sample in existence from within the Earth — contained a surprising amount of water bound in solid form in the mineral.
"Whether or not this unique sample is representative of the Earth's interior composition is not known, however," Jacobsen said. "Now we have found evidence for extensive melting beneath North America at the same depths corresponding to the dehydration of ringwoodite, which is exactly what has been happening in my experiments."
For years, Jacobsen has been synthesizing ringwoodite, colored sapphire-like blue, in his Northwestern lab by reacting the green mineral olivine with water at high-pressure conditions. (The Earth's upper mantle is rich in olivine.) He found that more than one percent of the weight of the ringwoodite's crystal structure can consist of water — roughly the same amount of water as was found in the sample reported in the Nature paper.
"The ringwoodite is like a sponge, soaking up water," Jacobsen said. "There is something very special about the crystal structure of ringwoodite that allows it to attract hydrogen and trap water. This mineral can contain a lot of water under conditions of the deep mantle."
For the study reported in Science, Jacobsen subjected his synthesized ringwoodite to conditions around 400 miles below the Earth's surface and found it forms small amounts of partial melt when pushed to these conditions. He detected the melt in experiments conducted at the Advanced Photon Source of Argonne National Laboratory and at the National Synchrotron Light Source of Brookhaven National Laboratory.
Jacobsen uses small gem diamonds as hard anvils to compress minerals to deep-Earth conditions. "Because the diamond windows are transparent, we can look into the high-pressure device and watch reactions occurring at conditions of the deep mantle," he said. "We used intense beams of X-rays, electrons and infrared light to study the chemical reactions taking place in the diamond cell."
Jacobsen's findings produced the same evidence of partial melt, or magma, that Schmandt detected beneath North America using seismic waves. Because the deep mantle is beyond the direct observation of scientists, they use seismic waves — sound waves at different speeds — to image the interior of the Earth.
"Seismic data from the USArray are giving us a clearer picture than ever before of the Earth's internal structure beneath North America," Schmandt said. "The melting we see appears to be driven by subduction — the downwelling of mantle material from the surface."
The melting the researchers have detected is called dehydration melting. Rocks in the transition zone can hold a lot of H2O, but rocks in the top of the lower mantle can hold almost none. The water contained within ringwoodite in the transition zone is forced out when it goes deeper (into the lower mantle) and forms a higher-pressure mineral called silicate perovskite, which cannot absorb the water. This causes the rock at the boundary between the transition zone and lower mantle to partially melt.
"When a rock with a lot of H2O moves from the transition zone to the lower mantle it needs to get rid of the H2O somehow, so it melts a little bit," Schmandt said. "This is called dehydration melting."
"Once the water is released, much of it may become trapped there in the transition zone," Jacobsen added.
Just a little bit of melt, about one percent, is detectable with the new array of seismometers aimed at this region of the mantle because the melt slows the speed of seismic waves, Schmandt said.
The USArray is part of EarthScope, a program of the National Science Foundation that deploys thousands of seismic, GPS and other geophysical instruments to study the structure and evolution of the North American continent and the processes that cause earthquakes and volcanic eruptions.
The National Science Foundation (grants EAR-0748797 and EAR-1215720) and the David and Lucile Packard Foundation supported the research.
The paper [pay-walled] is titled "Dehydration melting at the top of the lower mantle." In addition to Jacobsen and Schmandt, the other authors of the paper are Thorsten W. Becker, University of Southern California; Zhenxian Liu, Carnegie Institution of Washington; and Kenneth G. Dueker, the University of Wyoming.
See also:
https://distrowatch.com/?newsid=12607
OpenBSD is a security-focused, free software, Unix-like operating system based on the Berkeley Software Distribution (BSD).
Theo de Raadt has announced the release of OpenBSD 7.8, the latest of the regular biannual updates of the project's free, multi-platform 4.4BSD-based UNIX-like operating system. This version adds support for Raspberry Pi 5, among many other changes:
"We are pleased to announce the official release of OpenBSD 7.8. This is our 59th release. We remain proud of OpenBSD's record of thirty years with only two remote holes in the default install. As in our previous releases, 7.8 provides significant improvements, including new features, in nearly all areas of the system: added support for Raspberry Pi 5 (with console on serial port); implement acpicpu(4) for arm64; on Apple variants, enter DDB when exuart(4) detects a BREAK; on arm64 and riscv64, avoid multiple threads of a process continuously faulting on a single page when pmap_enter(9) is asked to enter a mapping that already exists; make apm and hw.cpuspeed work on Snapdragon X Elite machines; fix processing of GPIO events for pin numbers less than 256 with an _EVT method, fixes power button on various ThinkPads with AMD CPUs...."
Why did NASA's chief just shake up the agency's plans to land on the Moon?:
NASA acting Administrator Sean Duffy made two television appearances on Monday morning in which he shook up the space agency's plans to return humans to the Moon.
Speaking on Fox News, where the secretary of transportation frequently appears in his acting role as NASA chief, Duffy said SpaceX has fallen behind in its efforts to develop the Starship vehicle as a lunar lander. Duffy also indirectly acknowledged that NASA's projected target of a 2027 crewed lunar landing is no longer achievable. Accordingly, he said he intended to expand the competition to develop a lander capable of carrying humans down to the Moon from lunar orbit and back.
"They're behind schedule, and so the President wants to make sure we beat the Chinese," Duffy said of SpaceX. "He wants to get there in his term. So I'm in the process of opening that contract up. I think we'll see companies like Blue [Origin] get involved, and maybe others. We're going to have a space race in regard to American companies competing to see who can actually lead us back to the Moon first."
There are a couple of significant takeaways from this interview. The first is the public acknowledgement by a senior NASA official that the space agency's current timeline of a 2027 landing is completely untenable. The second is that the timing of Duffy's public appearances on Monday morning seems tailored to influence a fierce, behind-the-scenes battle to hold onto the NASA leadership position.
SpaceX won a contract from NASA, worth $2.9 billion, in April 2021 to develop and modify its ambitious Starship rocket to serve as a "human landing system" (HLS). This rocket would work in concert with NASA's Space Launch System and Orion spacecraft to get humans from Earth to the lunar surface and back. Two years later, Blue Origin, a rocket company founded by Jeff Bezos, won a second contract, worth $3.4 billion, to develop a second lander.
Duffy is correct that SpaceX is moving more slowly than anticipated. The company must still clear several technical hurdles before it can provide landing services to NASA. Under their funded contracts for reusable landers, SpaceX and Blue Origin must refuel their vehicles in low-Earth orbit, something that has never been done before on a large scale.
When Duffy says "companies like Blue" may get involved, he is not referring to the existing contract, under which Blue Origin will not deliver a ready-to-go lunar lander until the 2030s. Rather, he is almost certainly referring to a plan developed by Blue Origin that uses multiple Mk 1 landers, a smaller vehicle originally designed for cargo only. Ars reported three weeks ago on this new lunar architecture, which company engineers have been quietly developing. The plan would not require in-space refueling, and the Mk 1 vehicle is nearing its debut flight early next year.
Duffy also cites "maybe others" getting involved. This refers to a third option. In recent weeks, officials from traditional space companies have been telling Duffy and the chief of staff at the Department of Transportation, Pete Meachum, that they can build an Apollo Lunar Module-like lander within 30 months. Amit Kshatriya, NASA's associate administrator, favors this government-led approach, sources said.
On Monday, in a statement to Ars, a Lockheed Martin official confirmed that the company was ready if NASA called upon them.
"Throughout this year, Lockheed Martin has been performing significant technical and programmatic analysis for human lunar landers that would provide options to NASA for a safe solution to return humans to the Moon as quickly as possible," said Bob Behnken, vice president of Exploration and Technology Strategy at Lockheed Martin Space. "We have been working with a cross-industry team of companies and together we are looking forward to addressing Secretary Duffy's request to meet our country's lunar objectives."
NASA would not easily be able to rip up its existing HLS contracts with SpaceX and Blue Origin, as much of the funding, especially with the former, has already been paid out in milestone payments. Rather, Duffy would likely have to find new funding from Congress. And it would not be cheap. This NASA analysis from 2017 estimates that a cost-plus, sole-source lunar lander would cost $20 billion to $30 billion, or nearly 10 times what NASA awarded to SpaceX in 2021.
SpaceX founder Elon Musk, responding to Duffy's comments, seemed to relish the challenge posed by industry competitors.
"SpaceX is moving like lightning compared to the rest of the space industry," Musk said on the social media site he owns, X. "Moreover, Starship will end up doing the whole Moon mission. Mark my words."
Duffy's remarks on television on Monday morning, although significant for the broader space community, also seemed intended for an audience of one—President Trump.
The president appointed Duffy, already leading the Department of Transportation, to lead NASA on an interim basis in July. This came six weeks after the president rescinded, for political reasons, his nomination of billionaire and private astronaut Jared Isaacman to lead the space agency.
Trump was under the impression that Duffy would use this time to shore up NASA's leadership while also looking for a permanent chief of the space agency. However, Duffy appears to have paid no more than lip service to finding a successor.
Since late summer there has been a groundswell of support for Isaacman in the White House, and among some members of Congress. The billionaire has met with Trump several times, both at the White House and Mar-a-Lago, and sources report that the two have a good rapport. There has been some momentum toward the president re-nominating Isaacman, with Trump potentially making a decision soon. Duffy's TV appearances on Monday morning appear to be part of an effort to forestall this momentum by showing Trump he is actively working toward a lunar landing during his second term, which ends in January 2029.
Duffy has appeared to enjoy the limelight that comes with leading NASA. In the future, one source said, "Duffy wants to be president." The NASA position has afforded him greater visibility, including television appearances, to expand his profile in a positive way. "He doesn't want to give up the job," the source added.
A Republican advisor to the White House told Ars that it is good that Duffy has moved beyond his rhetoric about NASA beating China to the Moon and is looking for creative tactics to land there. But, this person said, the mandate from the Trump administration is to dominate the emerging commercial space industry, not hand out large cost-plus contracts.
"Duffy hasn't implemented any of the strategic reforms of Artemis that the president proposed this spring," the Republican source said. "He has the perfect opportunity during the current shutdown, but there is no sign of any real reform under his leadership. Instead, Duffy is being co-opted by the deep state at NASA."
With bonuses, maximum rewards can be as high as $5 million:
Since launching its bug bounty program nearly a decade ago, Apple has always touted notable maximum payouts—$200,000 in 2016 and $1 million in 2019. Now the company is upping the stakes again. At the Hexacon offensive security conference in Paris on Friday, Apple vice president of security engineering and architecture Ivan Krstić announced a new maximum payout of $2 million for a chain of software exploits that could be abused for spyware.
The move reflects how valuable exploitable vulnerabilities can be within Apple's highly protected mobile environment—and the lengths to which the company will go to keep such discoveries from falling into the wrong hands. In addition to individual payouts, the company's bug bounty also includes a bonus structure, with extra awards for exploits that can bypass its extra-secure Lockdown Mode as well as those discovered while Apple software is still in its beta testing phase. Taken together, the maximum award for what would otherwise be a potentially catastrophic exploit chain will now be $5 million. The changes take effect next month.
"We are lining up to pay many millions of dollars here, and there's a reason," Krstić tells WIRED. "We want to make sure that for the hardest categories, the hardest problems, the things that most closely mirror the kinds of attacks that we see with mercenary spyware—that the researchers who have those skills and abilities and put in that effort and time can get a tremendous reward."
[...] In addition to higher potential rewards, Apple is also expanding the bug bounty's categories to include certain types of one-click WebKit browser exploits as well as wireless proximity exploits carried out with any type of radio. And there is even a new offering known as "Target Flags" that brings the concept of capture-the-flag hacking competitions into real-world testing of Apple's software, to help researchers demonstrate the capabilities of their exploits quickly and definitively.
One topic dominated the recent 2025 OpenInfra Summit Europe, and it wasn't AI:
Unlike any tech conference I've attended in the last few years, the top issue at the 2025 OpenInfra Summit Europe at the École Polytechnique Paris was not AI. Shocking, I know. Indeed, OpenInfra Foundation general manager Thierry Carrez commented, "Did you notice what I didn't talk about in my keynote? I made no mention of AI." But one issue that did appear -- and would show up over and over again in the keynotes, the halls, and the vendor booths -- was digital sovereignty.
Digital sovereignty is the ability of a country, organization, or individual to control its own digital infrastructure, technologies, data, and online processes without undue external dependency on foreign entities or large technology companies. In other words, Europeans are tired of relying on what they see as increasingly unreliable American companies and the US government.
Carrez explained: "We've seen old alliances between the US and the EU being questioned or leveraged for immediate gains. We have seen the very terms of exchange of goods changing almost every day. And as a response to that, in Europe, we're moving to digital sovereignty." That shift, in turn, means open-source software.
"The world needs sovereign, high-performance and sustainable infrastructure," continued Carrez, "that remains interoperable and secure, while collaborating tightly with AI, containers and trusted execution environments. Open infrastructure allows nations and organizations to maintain control over their applications, their data, and their destiny while benefiting from global collaboration."
Carrez thinks a better word for what Europe wants is not isolation from the US: "What we're really looking for is resilience. What we want for our countries, for our companies, for ourselves, is resilience. Resilience in the face of unforeseen events in a fast-changing world. Open source," he concluded, "allows us to be sovereign without being isolated."
[...] To make life easier for users -- and to turn a profit, naturally -- many European companies are now offering technology programs to help users achieve digital sovereignty. These providers include Deutsche Telekom, with its Open Telekom Cloud, as well as OVH, STACKIT, and VanillaCore. Each of these companies relies on OpenStack to power its European-based cloud offerings for individuals, companies, and governments. In addition, other European open-source-based tech businesses, such as SUSE and Nextcloud, offer digital sovereignty solutions using other programs.
In conversations at the conference, it became clear that while the changes in American government policy have been worrying Europeans, it's not just politics that has them concerned. People are also upset about Microsoft 365 price increases. Another tech business issue that has unnerved them is Broadcom's acquisition of VMware and its subsequent massive price increases. This has led to a rise in the use of open-source office software, such as LibreOffice and its web-based sibling Collabora Online, and to the migration of VMware customers to OpenStack-based services.
The sovereignty issue is not going to go away. As Carrez said in a press conference: "It's extremely top of mind in the EU right now, it's what everyone is just talking about, and it's what everybody is doing." Open source is essential to this movement. As Mike McDonough, head of software product management for Catchengo, a "sovereign by design" cloud company, said: "No one can lock you up; no one can take it away from you, and if someone decides to fork the code, you can continue adopting it anywhere in the world."
All in all, participants agreed that Europe's sovereign cloud movement is reaching critical mass as governments and enterprises move data back from the US-based hyperscalers. European organizations are realizing they need more private infrastructure capacity and local talent to run big cloud initiatives. So, they're turning to open source because, as Carrez concluded, "what makes us resilient is our open-source community."
OpenAI launches Atlas browser.
https://www.reuters.com/technology/openai-unveils-ai-browser-atlas-2025-10-21/
OpenAI on Tuesday unveiled ChatGPT Atlas, a long-anticipated artificial intelligence-powered web browser built around its popular chatbot, in a direct challenge to Google Chrome's dominance.
The launch marks OpenAI's latest move to capitalize on 800 million weekly active ChatGPT users, as it expands into more aspects of users' online lives by collecting data about consumers' browser behavior. It could accelerate a broader shift toward AI-driven search, as users increasingly turn to conversational tools that synthesize information instead of relying on traditional keyword-based results from Google — intensifying competition between OpenAI and Google.
OpenAI said Atlas launches Tuesday on Apple laptops and will later come to Microsoft's Windows, Apple's iOS phone operating system and Google's Android phone system.
OpenAI CEO Sam Altman called it a "rare, once-a-decade opportunity to rethink what a browser can be about and how to use one."
But analyst Paddy Harrington of market research group Forrester said it will be a big challenge "competing with a giant who has ridiculous market share."
OpenAI's browser is coming out just a few months after one of its executives testified that the company would be interested in buying Google's industry-leading Chrome browser, had a federal judge required it to be sold to remedy the abuses that led to Google's ubiquitous search engine being declared an illegal monopoly.
But U.S. District Judge Amit Mehta last month issued a decision that rejected the Chrome sale sought by the U.S. Justice Department in the monopoly case, partly because he believed advances in the AI industry already are reshaping the competitive landscape.
I have just installed Lynx.
Bill Atkinson was a computing pioneer who, in the 1980s, effectively made Apple computers usable for everyday people by transforming code into windows, menus, and graphics.
But few people know that later in life he was a secret advocate of what's widely considered the world's most potent psychedelic: 5-MeO-DMT.
The hallucinogen, also called "the God molecule," is a compound found in the venomous secretions of the Sonoran Desert toad, Incilius alvarius (commonly called Bufo alvarius), and is known to bring about ego death, a total dissolution of the senses, and a euphoric feeling of existential connectedness, all in a roughly 20-minute trip. Atkinson, who died from pancreatic cancer on June 5 at the age of 74, was a member of a close-knit, private online community of 5-MeO-DMT enthusiasts called OneLight, where he went by the alias "Grace Within."
Several of Atkinson's friends and fellow psychonauts tell WIRED their "beloved" Atkinson played a key role in helping people access smaller doses of 5-MeO-DMT, which can be made synthetically, as he believed it would maximize the benefits of the potentially dangerous drug while minimizing harm. "The same creative mind who affected personal computers so profoundly continued to influence human evolution through his efforts to make the miracle of 'bufo' safer and more manageable," says friend Charles Lindsay, an artist who has worked with the SETI Institute, which works to find signs of extraterrestrial intelligence. "He truly pushed boundaries. That requires a willingness to consider what might easily be deemed ridiculous." Or, he adds, "risky."
[...] Six sources confirmed to WIRED that Atkinson, wishing to spread the gospel about how to use the drug more responsibly, was behind a pseudonymously published manual containing step-by-step production photos that detail how to make lower-dose 5-MeO-DMT vape pens known as "LightWands." The guide was published online on Erowid, a psychedelic educational nonprofit. It was first posted in 2021 and updated in the month before Atkinson's death. Atkinson collaborated with the makers of the pens—also members of OneLight—to help refine the manufacturing process and make the vaporization process safer, friends say.
"My deepest gratitude goes first to this amazing molecule and to all those who have given of their heart, mind, and courage to bring it to our world," Atkinson wrote pseudonymously on Erowid, outlining how "many of the most beautiful and healing insights are found at lower levels of Jaguar." (Jaguar is the name given by psychologist and psychedelics pioneer Ralph Metzner to 5-MeO-DMT.)
Atkinson—who was also a keen nature photographer—first smoked 5-MeO-DMT in 2012, according to OneLight member Axle Davids, but his relationship with psychedelics goes back much further. In 1985, Atkinson took LSD. He wrote about that experience in 2020: "For the first time in my life I knew deep down inside that we are not alone." He explained how his LSD trip inspired him to develop HyperCard, a Mac application that wove text, graphics, and sound together in a format that predated the World Wide Web and popularized hyperlinking. "I thought if we could encourage sharing of ideas between different areas of knowledge, perhaps more of the bigger picture would emerge," he wrote.
In his final years, he gave away up to 1,000 LightWand kits containing low- to medium-dose 5-MeO-DMT pens and mentored other creators in the OneLight community, according to Davids. Giving people access to lower doses is important, particularly because some are "hypersensitive" to 5-MeO-DMT, he says: "They can lose consciousness. They can purge and choke on their vomit. They can lose their shit entirely."
[...] Atkinson's use of "the God molecule" appeared to contribute toward a spiritual shift and an interest in the search for extraterrestrial life, says MacNiven. "Bill was a completely non-spiritual guy in the beginning," he says. "Then he became extremely spiritual, talking about past lives and future lives."
According to a "Request for Prayers" Atkinson posted on the OneLight forum in November 2024, revealing his identity to the wider community and disclosing he had terminal cancer, he said he had taken the intense African psychedelic iboga in 2017 and that it helped him accept death. "From my Iboga experience seven years ago, I know for certain that my consciousness will continue after I leave my body behind," Atkinson wrote, signing off the letter with his name instead of his pseudonym. "I have no existential fear of death. Actually more anticipation and curiosity."
https://www.bleepingcomputer.com/news/security/hackers-exploit-cisco-snmp-flaw-to-deploy-rootkit-on-switches/
https://archive.ph/crr3o
Threat actors exploited a recently patched remote code execution vulnerability (CVE-2025-20352) in older, unprotected Cisco networking devices to deploy a Linux rootkit and gain persistent access.
The security issue leveraged in the attacks affects the Simple Network Management Protocol (SNMP) in Cisco IOS and IOS XE and leads to RCE if the attacker has root privileges.
According to cybersecurity company Trend Micro, the attacks targeted Cisco 9400, 9300, and legacy 3750G series devices that did not have endpoint detection and response (EDR) solutions.
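Since the entry point is SNMP, one quick (and purely illustrative) triage step is to check whether a switch still answers SNMPv2c with a guessable community string. The sketch below shells out to net-snmp's standard snmpget tool; it is an exposure check only, not a detector for this rootkit, and the addresses are placeholders.

```python
# Illustrative SNMPv2c exposure check (not a rootkit detector).
# Requires the net-snmp "snmpget" CLI on the scanning host.
import subprocess

def snmp_v2c_responds(host: str, community: str = "public") -> bool:
    """Return True if the device answers an SNMPv2c GET for sysDescr.0."""
    result = subprocess.run(
        ["snmpget", "-v2c", "-c", community, "-t", "2", "-r", "1",
         host, "1.3.6.1.2.1.1.1.0"],        # sysDescr.0
        capture_output=True, text=True,
    )
    return result.returncode == 0

if __name__ == "__main__":
    for switch in ["192.0.2.10", "192.0.2.11"]:   # placeholder addresses
        status = "answers v2c/public" if snmp_v2c_responds(switch) else "no response"
        print(f"{switch}: {status}")
```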
In the original bulletin for CVE-2025-20352, updated on October 6, Cisco tagged the vulnerability as exploited as a zero day, with the company's Product Security Incident Response Team (PSIRT) saying it was "aware of successful exploitation."
Trend Micro researchers track the attacks under the name 'Operation Zero Disco' because the malware sets a universal access password that contains the word "disco."
The report from Trend Micro notes that the threat actor also attempted to exploit CVE-2017-3881, an eight-year-old vulnerability in the Cluster Management Protocol code in IOS and IOS XE.
The rootkit planted on vulnerable systems features a UDP controller that can listen on any port, toggle or delete logs, bypass AAA and VTY ACLs, enable/disable the universal password, hide running configuration items, and reset the last write timestamp for them.
In a simulated attack, the researchers showed that it is possible to disable logging, impersonate a waystation IP via ARP spoofing, bypass internal firewall rules, and move laterally between VLANs.
Although newer switches are more resistant to these attacks due to Address Space Layout Randomization (ASLR) protection, Trend Micro says that they are not immune and persistent targeting could compromise them.
Once deployed, the rootkit "installs several hooks onto the IOSd, which results in fileless components disappearing after a reboot," the researchers say.
The researchers were able to recover both 32-bit and 64-bit variants of the SNMP exploit. Trend Micro notes that no tool currently exists that can reliably flag a Cisco switch compromised in these attacks. If a hack is suspected, the recommendation is to perform a low-level investigation of the firmware and ROM regions.
A list of the indicators of compromise (IoCs) associated with 'Operation Zero Disco' can be found here.
Geostationary satellites are broadcasting large volumes of unencrypted data to Earth, including private voice calls and text messages as well as consumer internet traffic, researchers have discovered.
Scientists at the University of California, San Diego, and the University of Maryland, College Park, say they were able to pick up large amounts of sensitive traffic largely by just pointing a commercial off-the-shelf satellite dish at the sky from the roof of a university building in San Diego.
In its paper, Don't Look Up: There Are Sensitive Internal Links in the Clear on GEO Satellites [PDF], the team describes how it performed a broad scan of IP traffic on 39 GEO satellites across 25 distinct longitudes and found that half of the signals they picked up contained cleartext IP traffic.
This included unencrypted cellular backhaul data sent from the core networks of several US operators, destined for cell towers in remote areas. Also found was unprotected internet traffic heading for in-flight Wi-Fi users aboard airliners, and unencrypted call audio from multiple VoIP providers.
The researchers say they were able to identify some of the observed satellite data as T-Mobile cellular backhaul traffic. This included text and voice call contents, user internet traffic, and cellular network signaling protocols, all "in the clear." T-Mobile quickly enabled encryption after learning about the problem.
More seriously, the team was able to observe unencrypted traffic from military and police systems, including detailed tracking data for coastal vessel surveillance and operational data from a police force.
In addition, they found retail, financial, and banking companies all using unencrypted satellite communications to link their internal networks at various sites. The researchers were able to see unencrypted login credentials, corporate emails, inventory records, and information from ATM cash dispensers.
Reg readers will no doubt find this kind of negligence staggering after years of security breaches and warnings about locking down sensitive data. As the researchers note in their report: "There is a clear mismatch between how satellite customers expect data to be secured and how it is secured in practice; the severity of the vulnerabilities we discovered has certainly revised our own threat models for communications."
The team noted that the sheer level of unencrypted traffic observed results from a failure to encrypt at multiple levels of the communications protocol stack.
At the satellite link/transport layer, streams using MPEG encoding have the option to use MPEG scrambling. While TV transponders mostly do this, only 10 percent of the non-TV transponders did. Only 20 percent of transponders had encryption enabled for downlinks, and just 6 percent consistently used IPsec at the network layer.
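As an illustration of how such a survey can separate cleartext from encrypted streams, the sketch below estimates per-packet byte entropy in a packet capture: encrypted or compressed payloads approach 8 bits per byte, while plaintext protocols sit well below that. It assumes a lawfully obtained capture of your own link (the filename and threshold are placeholders) and is not the researchers' actual pipeline.

```python
# Illustrative cleartext heuristic for a packet capture (not the study's code).
import math
from collections import Counter

from scapy.all import rdpcap, Raw  # pip install scapy

def byte_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte."""
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

packets = rdpcap("downlink_capture.pcap")          # placeholder filename
payloads = [bytes(p[Raw].load) for p in packets
            if p.haslayer(Raw) and len(p[Raw].load) >= 64]
likely_clear = sum(1 for pl in payloads if byte_entropy(pl) < 6.0)  # rough cutoff
print(f"{likely_clear}/{len(payloads)} payloads look like cleartext")
```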
The report notes that organizations with visibility into these networks have been raising alarms for some time. It cites a 2022 NSA security advisory about GEO satellite links that warns: "Most of these links are unencrypted, relying on frequency separation or predictable frequency hopping rather than encryption to separate communications."
The team states that it obtained clearance from legal counsel at their respective institutions for this research, and that it securely stored any unencrypted data collected from transmissions. It also claims that it made efforts to contact the relevant parties wherever possible to inform them of the security shortcomings.
T-Mobile has been in touch with a statement since the publication of the story:
"T-Mobile immediately addressed a vendor's technical misconfiguration that affected a limited number of cell sites using geosynchronous satellite backhaul in remote, low-population areas, as identified in this research from 2024. This was not network-wide, is unrelated to our T-Satellite direct-to-cell offering, and we implemented nationwide Session Initiation Protocol (SIP) encryption for all customers to further protect signaling traffic as it travels between mobile handsets and the network core, including call set up, numbers dialed and text message content.
"We appreciate our collaboration with the security research community, whose work helps reinforce our ongoing commitment to protecting customer data and enhances security across the industry."
Eavesdropping on Internal Networks via Unencrypted Satellites
https://satcom.sysnet.ucsd.edu/
https://archive.ph/kpA93
We pointed a commercial-off-the-shelf satellite dish at the sky and carried out the most comprehensive public study to date of geostationary satellite communication. A shockingly large amount of sensitive traffic is being broadcast unencrypted, including critical infrastructure, internal corporate and government communications, private citizens' voice calls and SMS, and consumer Internet traffic from in-flight wifi and mobile networks. This data can be passively observed by anyone with a few hundred dollars of consumer-grade hardware. There are thousands of geostationary satellite transponders globally, and data from a single transponder may be visible from an area as large as 40% of the surface of the earth.
A Surprising Amount of Satellite Traffic Is Unencrypted - Schneier on Security:
Larry Sanger says the website has become biased against conservative and religious viewpoints, but sees a way to fix it:
Wikipedia, a popular online encyclopedia millions of people treat as an authoritative source of information, is systemically biased against conservative, religious, and other points of view, according to the site's co-founder, Larry Sanger.
Sanger, 57, who now heads the Knowledge Standards Foundation, believes Wikipedia can be salvaged either by a renewed emphasis on free speech within the organization (see https://larrysanger.org/nine-theses/) or by a grassroots campaign to make diverse viewpoints heard.
Failing that, Sanger said, government intervention may be required to pierce the shell of anonymity that now protects Wikipedia's editors from defamation lawsuits by public figures who believe the site portrays them unfairly.
[...] "Basically, it's required now, even for the sake of neutrality, that they take a side when [they believe] one side is clearly wrong," Sanger said. "Pretensions of objectivity are out the window."
[...] "You simply may not cite as sources of Wikipedia articles anything that has been branded as right wing," he said. [...] "Even now, people are still sort of waking up to the reality that Wikipedia does, on many pages ... act as essentially propaganda."
[...] On his website, Sanger outlines a series of ideas for returning Wikipedia to its original stance on fairness and free speech. A handful of his ideas center on increasing transparency into site management, such as revealing who Wikipedia's leaders are, allowing the public to rate articles, ending decision-making by consensus, and adopting a legislative process for determining editorial policy.
Related: Elon Musk Plans to Take on Wikipedia With 'Grokipedia'
The malicious app needed to make the "Pixnapping" attack work requires no permissions:
Android devices are vulnerable to a new attack that can covertly steal two-factor authentication codes, location timelines, and other private data in less than 30 seconds.
The new attack, named Pixnapping by the team of academic researchers who devised it, requires a victim to first install a malicious app on an Android phone or tablet. The app, which requires no system permissions, can then effectively read data that any other installed app displays on the screen. Pixnapping has been demonstrated on Google Pixel phones and the Samsung Galaxy S25 and could likely be adapted to other models with additional work. Google released mitigations last month, but the researchers said a modified version of the attack works even when the update is installed.
Pixnapping attacks begin with the malicious app invoking Android programming interfaces that cause the authenticator or other targeted apps to send sensitive information to the device screen. The malicious app then runs graphical operations on individual pixels of interest to the attacker. Pixnapping then exploits a side channel that allows the malicious app to map the pixels at those coordinates to letters, numbers, or shapes.
"Anything that is visible when the target app is opened can be stolen by the malicious app using Pixnapping," the researchers wrote on an informational website. "Chat messages, 2FA codes, email messages, etc. are all vulnerable since they are visible. If an app has secret information that is not visible (e.g., it has a secret key that is stored but never shown on the screen), that information cannot be stolen by Pixnapping."
The new attack class is reminiscent of GPU.zip, a 2023 attack that allowed malicious websites to read the usernames, passwords, and other sensitive visual data displayed by other websites. It worked by exploiting side channels found in GPUs from all major suppliers. The vulnerabilities that GPU.zip exploited have never been fixed. Instead, the attack was blocked in browsers by limiting their ability to open iframes, an HTML element that allows one website (in the case of GPU.zip, a malicious one) to embed the contents of a site from a different domain.
Pixnapping targets the same side channel as GPU.zip, specifically the precise amount of time it takes for a given frame to be rendered on the screen.
"This allows a malicious app to steal sensitive information displayed by other apps or arbitrary websites, pixel by pixel," Alan Linghao Wang, lead author of the research paper "Pixnapping: Bringing Pixel Stealing out of the Stone Age," explained in an interview. "Conceptually, it is as if the malicious app was taking a screenshot of screen contents it should not have access to. Our end-to-end attacks simply measure the rendering time per frame of the graphical operations to determine whether the pixel was white or nonwhite."
[...] In an online interview, paper coauthor Ricardo Paccagnella described the attack in more detail:
Step 1: The malicious app invokes a target app to cause some sensitive visual content to be rendered.
Step 2: The malicious app uses Android APIs to "draw over" that visual content and cause a side channel (in our case, GPU.zip) to leak as a function of the color of individual pixels rendered in Step 1 (e.g., activate only if the pixel color is c).
Step 3: The malicious app monitors the side effects of Step 2 to infer, e.g., if the color of those pixels was c or not, one pixel at a time.
Steps 2 and 3 can be implemented differently depending on the side channel that the attacker wants to exploit. In our instantiations on Google and Samsung phones, we exploited the GPU.zip side channel. When using GPU.zip, measuring the rendering time per frame was sufficient to determine if the color of each pixel is c or not. Future instantiations of the attack may use other side channels where controlling memory management and accessing fine-grained timers may be necessary (see Section 3.3 of the paper). Pixnapping would still work then: The attacker would just need to change how Steps 2 and 3 are implemented.
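To make the timing-threshold idea concrete, here is a conceptual sketch, not the researchers' Android code: given per-frame render times measured while a target pixel is being "drawn over," the attacker calibrates a threshold on pixels whose color it controls and then classifies the victim pixel. All numbers are made-up illustrative values.

```python
# Conceptual sketch of the timing-threshold classification (illustrative only).
from statistics import median

def classify_pixel(frame_times_ms: list[float], threshold_ms: float) -> str:
    """Slower per-frame rendering, via the GPU.zip-style side channel, leaks color."""
    return "nonwhite" if median(frame_times_ms) > threshold_ms else "white"

# Calibrate on attacker-controlled pixels, then classify the victim pixel.
calibration_white = [8.3, 8.4, 8.2, 8.4]        # ms per frame (made up)
calibration_nonwhite = [9.1, 9.0, 9.2, 9.1]
threshold = (median(calibration_white) + median(calibration_nonwhite)) / 2

victim_samples = [9.05, 9.1, 8.9, 9.2]
print(classify_pixel(victim_samples, threshold))  # -> "nonwhite"
```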
[...] In an email, a Google representative wrote, "We issued a patch for CVE-2025-48561 in the September Android security bulletin, which partially mitigates this behavior. We are issuing an additional patch for this vulnerability in the December Android security bulletin. We have not seen any evidence of in-the-wild exploitation."
Pixnapping is useful research in that it demonstrates the limitations of Google's security and privacy assurances that one installed app can't access data belonging to another app. The challenges in implementing the attack to steal useful data in real-world scenarios, however, are likely to be significant. In an age when teenagers can steal secrets from Fortune 500 companies simply by asking nicely, more complicated and limited attacks probably hold less value.
The noise of Bitcoin mining is driving Americans crazy
"It echoes across agricultural land and forests, chasing away deer. It seeps into walls, vibrating bedrooms and dinner tables." One resident said it was as though a "jet engine is forever stationed nearby".
Bitcoin mining has exploded in the US over the past decade, particularly in the wake of Donald Trump's re-election to the White House and his embrace of cryptocurrency. But it's an energy-intensive process: the powerful computers that create and protect the cryptocurrency need fans on the go constantly to cool them down. And across rural, mostly Republican towns, residents are getting sick of the noise – and getting sick, full stop.
Much of America's Bitcoin mining industry is in Texas, said Time, "home to giant power plants, lax regulation, and crypto-friendly politicians". In Granbury, where Marathon – one of the world's largest Bitcoin holders – has a mine, a group of people are being "worn thin from strange, debilitating illnesses". Some were experiencing fainting spells, chest pains, migraines and panic attacks; others were "wracked by debilitating vertigo and nausea". The mine is causing "mental and physical" health issues, said one ear, nose, and throat specialist based in Granbury. "Imagine if I had a vuvuzela in your ear all the time."
Granbury Residents Demand Answers from MARA's Bitcoin Mine As Lawsuit Over Noise Nuisance Continues
Texas state court rejected MARA's dismissal bid; now residents are demanding that the cryptomine turn over documents
Granbury, TX —
Today, Citizens Concerned About Wolf Hollow, a community group composed of Granbury residents and represented by Earthjustice, filed a motion to compel in its lawsuit against MARA Holdings, Inc, asking the Texas State Court to require the cryptomining plant to turn over key information pertaining to the excessive noise the facility creates and the resulting nuisance level conditions. This comes on the heels of the Court denying MARA's motion to dismiss earlier this summer, a decision which allows the community group to move forward in the lawsuit. The cryptomining company has withheld basic information and documentation related to the excessive noise generated by its 24/7 cryptocurrency mining operations — noise that has caused ongoing harm to the surrounding community. Now, the community group is demanding answers, seeking much needed information including the equipment used at the plant, any mitigation measures the company has taken, and detailed noise pollution data.
(YT Warning) I Live 500 Feet From A Bitcoin Mine. My Life Is Hell.
In Texas, the legal limit for noise is 85 dB. Researchers have found that prolonged exposure at that level can impair hearing and cardiovascular health, raising blood pressure and heart rate. Other potential risks include headaches, dizziness, and psychological effects. 85 dB is considered industrial noise inside a plant; at that level you would have to wear hearing protection all the time, except here it is at your home.
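For a rough sense of scale (an idealized point-source model, not a site measurement), sound pressure level falls by about 6 dB for every doubling of distance:

\[
L_2 = L_1 - 20\log_{10}\!\left(\frac{r_2}{r_1}\right)
\]

So a source producing 85 dB at 100 ft would still register roughly 85 - 20 log10(500/100), or about 71 dB, at 500 ft under this model; large mining halls behave more like extended sources, so real-world attenuation is often smaller than this.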
...Teresa lives 18 miles from Corsicana, Texas, where Riot Platforms is building out what is expected to be one of the largest bitcoin mining operations in the world. We decided, well, what better place to build a one gigawatt site?
Teresa is concerned about Bitcoin's demand for water. Corsicana's mine is projected to use up to 1.5 million gallons of water per day. That's an eighth of the city's water supply. She took us to nearby Navarro Lake, which she says dries up every 4 to 5 years.
"So this is the lake that you are concerned that that the Bitcoin mining companies could be drawing water from?"
"Yes. You've got a lot of people that have moved into this area. The last thing we needed was more pressure on this lake. I know I can survive without electricity. I do know that. I can't survive without water."...
All of this makes it even more damning that the politicians representing the residents we spoke to are all in on Bitcoin. Which brings us to the crypto money in politics.
Texas Senator Ted Cruz received a $350,000 donation from Bitcoin Freedom PAC in 2024, in a tight reelection race against Democratic challenger Colin Allred. The same year, Cruz announced he was getting into the Bitcoin business himself, saying on X that he had bought his own miners and started running them in Iraan, Texas. Cruz was commended by Marathon Digital's CEO and welcomed to the club.
...According to Public Citizen, crypto corporations provided nearly half of the $248 million in corporate money to influence federal elections in 2024 and the industry has gotten exactly what they paid for. Efforts to regulate crypto at the state and federal level have been largely unsuccessful.
Rural Cheyenne Residents Have A Noisy New Neighbor — A Bitcoin Miner
Michigan school sues over constant noise from Bitcoin mining rigs
Norway Considers Restricting Bitcoin Mining
The Norwegian government will consider by autumn the possibility of banning the establishment of new cryptocurrency mining enterprises using energy-intensive algorithms like Proof-of-Work (PoW).
According to Karianne Tung, Norway's Minister of Digitalisation and Public Governance, this activity "offers little to local communities in terms of jobs and income."
"This is energy we could use differently – in industry or for the operation of socially beneficial data centres," she added.
The authorities will conduct a comprehensive study of the sector. Existing enterprises are required to register by July 1.
Energy Minister Terje Aasland referred to the additional burden mining places on generating capacity, networks, and infrastructure.
"By prohibiting energy-intensive cryptocurrency mining, we can free up land, electricity, and network capacity for other purposes that contribute more to value creation, jobs, and reducing greenhouse gas emissions," he stated.