Site News

Join our Folding@Home team:
Main F@H site
Our team page


Funding Goal
For 6-month period: 2022-07-01 to 2022-12-31 (all amounts are estimated)
Base Goal: $3500.00
Currently: $438.92 (12.5%)
Covers transactions: 2022-07-02 10:17:28 .. 2022-10-05 12:33:58 UTC (SPIDs: [1838..1866])
Last Update: 2022-10-05 14:04:11 UTC --fnord666

Support us: Subscribe Here
and buy SoylentNews Swag


We always have a place for talented people, visit the Get Involved section on the wiki to see how you can make SoylentNews better.

What would you use if you couldn't use your current distribution/operating system?

  • Linux
  • Windows
  • BSD
  • ChromeOS / Android
  • macOS / iOS
  • Open[DOS, Solaris, STEP, VMS]
  • I don't use a computer you insensitive clod!
  • Other (describe in comments)

[ Results | Polls ]
Comments:162 | Votes:203

posted by jelizondo on Thursday March 12, @04:23AM   Printer-friendly

https://arstechnica.com/ai/2026/03/are-consumers-doomed-to-pay-more-for-electricity-due-to-data-center-buildouts/

Big Tech is set to agree to build its own power plants for data centers and shield consumers from rising electricity costs, but companies face daunting logistical obstacles to delivering on the pledge championed by President Donald Trump.

At a White House event on Wednesday, executives from Amazon, Google, Meta, Microsoft, xAI, Oracle, and OpenAI are due to sign the pledge to supply their own power instead of relying on a grid connection.

Trump hailed the plan in his State of the Union speech last week, promising US consumers that "no one's prices will go up" as a result of "energy demand from AI data centers."

But industry executives have suggested the commitment will not be binding, while experts warn it is likely impossible to fully insulate consumers from the extra power demand coming from the vast expansion of data centers to run AI.

"Regardless of how these data centers connect, behind the meter or as part of the network, you're going to increase demand," said Ari Peskoe, director at Harvard Law School's Electricity Law Initiative.

Independent power supplies for data centers most often come from gas turbines, which are in short supply and not always designed to provide continuous power. "We still need more of these turbines," Peskoe added.

Trump's pressure on big data center operators comes in response to consumer backlash and political pressure over rising power bills.

On the campaign trail in 2024, Trump pledged to cut energy bills in half within a year of taking office.

In reality, residential electricity costs rose by 6 percent nationwide in February, compared with a year before, according to the US Energy Information Administration.

States such as New Jersey and Pennsylvania, which have clusters of data centers, reported bigger increases at 16 percent and 19 percent respectively.

Natural gas prices, extreme weather, and the need to upgrade aging grid infrastructure have all contributed to higher costs—after decades of low investment in power plants and transmission lines. The hit to energy supplies from Trump's war against Iran could add to the problem.

Critics of data centers say they are increasing energy bills by adding to demand. US data center power demand will more than triple by 2035, rising from almost 35 gigawatts in 2024 to 106 GW, according to data from BloombergNEF.

To avoid political backlash and waits of up to four years for grid connections, tech companies are already building their own power supplies for many new data centers.

Nearly three-quarters of planned generation equipment for data centers is natural gas fired, according to energy research firm Cleanview, which is tracking 56 GW of projects across the US.

Wednesday's pledge would see tech companies expand these efforts to prevent higher power costs being pushed on to customer bills.

Josh Price, director of energy and utilities at strategy firm Capstone, said Big Tech was "trying to push back against the narrative that they're the bad guy."

But the boom in data center building is already pushing the limits of the supply chain for power generation, making it difficult for companies to meet their commitment to Trump.

Competition for gas turbines is fierce, with waits as long as seven years for new orders.

Turbine-maker GE Vernova said it would expand production by 25 percent, and Mitsubishi Power announced plans to double its output over the next two years. But manufacturers have been cautious about expanding capacity, and it may not be enough to meet booming demand.

Two-thirds of gas projects in development in the US have not announced a turbine manufacturer, according to Global Energy Monitor.

The price of gas turbines has risen sharply, and greater competition from tech companies will mean higher costs for utilities and industrial customers who also need generating capacity—costs that could still be passed on to ratepayers.

To overcome shortages, data centers are increasingly relying on alternatives. Companies, including Google and Microsoft, have also struck deals to reopen nuclear power plants, but these plans will take years to deliver.

In the near term, companies are using options such as reciprocating engines and diesel generators. Experts point out that these power sources, as well as ordinary gas turbines, are not designed to provide the kind of continuous power needed by data centers.

"They say, 'we have documented evidence that these can run 90 percent of the time'... But that's not the average use case," said Jigar Shah, an energy investor and former Department of Energy official.

Keeping these data centers, and their power supplies, operational for decades would also present challenges around securing spare parts and qualified technicians, he added.

Shah said: "The level of ineptitude by which the data center companies are sleepwalking into major problems just seems shocking for trillion-dollar companies."


Original Submission

posted by jelizondo on Wednesday March 11, @11:36PM   Printer-friendly

Two of ME-CENTRAL-1's three availability zones went offline after Iran targeted Amazon's cloud infrastructure:

AWS confirmed on its health dashboard that two facilities in the UAE were "directly struck" and that a third site in Bahrain sustained damage from a nearby explosion. The strikes caused structural damage, disrupted power delivery, and, in some cases, triggered fire suppression systems that produced additional water damage, according to the AWS Health Dashboard. Amazon told customers it expects recovery to be prolonged "given the nature of the physical damage involved".

These outages then cascaded into consumer-facing services across the Gulf. Ride-sharing and delivery platform Careem, payments firms Hubpay and Alaan, data management company Snowflake, and several major UAE banks — including Emirates NBD, First Abu Dhabi Bank, and Abu Dhabi Commercial Bank — all reported disruptions. AWS advised customers to activate disaster recovery plans and migrate workloads away from the affected Middle East regions.

Iran's Islamic Revolutionary Guard Corps stated it targeted the Bahrain facility specifically because AWS hosts U.S. military workloads there; AWS declined to comment on that claim. Sean Gorman, Air Force contractor and CEO of Zephr.xyz, told DefenseScoop on Tuesday that classified government workloads at Impact Level 4 and 5 are held in U.S.-only facilities, but acknowledged that "contractor and non-operational data… may have been impacted" at the struck sites.

The attacks followed joint U.S.-Israeli strikes on Iran over the last week. AWS urged customers with workloads in the region to migrate to unaffected regions while repairs continue.


Original Submission

posted by hubie on Wednesday March 11, @06:54PM   Printer-friendly

https://www.slashgear.com/2116892/canada-discovery-botswana-rare-earth-minerals-tech-companies-want/

Rare earth elements (REE) are a group of 17 metallic elements that are increasingly important in the modern world. This importance is fueling an increasing global demand for these essential commodities. The figures speak for themselves: Worldwide REE production reached 390,000 metric tons in 2024 — nearly triple the level recorded in 2017. Currently, the vast majority of these important minerals are mined in China, which accounted for 69% of total production in 2024.

Now, a new discovery by Canadian company Tsodilo Resources Limited could bring a new player into the market — Botswana. This is important, as much of the modern world relies on these elements. Technologies like wind turbines, telecommunication systems, the defense industry, and many high-tech consumer industries all rely on them, including in the rare earth magnets used in electric vehicles.

In an announcement, Tsodilo said that drilling at its Gcwihaba Metals Project identified high-grade mineralization at two exploration targets known as C26 and C27. The deposit, located at depths of between 20 and 50 meters (66 and 164 feet) below the surface, is located in what geologists call a skarn system — a type of metamorphic rock formation that's been altered by hot and chemically active fluids like magma.

Company data indicates that the deposit contains all 15 REEs listed as critical by the U.S. Geological Survey, along with more common metals like copper, cobalt, nickel, and silver. The project remains in the exploration stage, with further drilling planned to define its full scale.

Despite the name, rare earths are not actually that rare. For instance, cerium — an element used in catalytic converters — is actually the planet's 25th most abundant element. However, what is rare is finding REEs in high enough concentrations that mining becomes viable. The initial findings from the Gcwihaba sites suggest that this is one of these rare instances.

[...] Despite the announcement, the Gcwihaba project is still in the exploration stage. Tsodilo has outlined a conceptual exploration target based on drilling at the C26 and C27 zones, but it hasn't yet published a compliant mineral resource estimate. Further drilling is planned to better define the size, grade, and economic viability of the project.

Even if additional studies confirm the presence of a viable REE resource, we are unlikely to see Botswanan dysprosium anytime soon. Factors like environmental assessments, permitting, infrastructure development, and financing all add time and complexity. Then there's the processing to consider. As noted, China accounts for about 69% of the world's REE production. However, when it comes to the separation and processing of these minerals, China accounts for about 90% of the figure.

Still, Botswana's reputation as one of Africa's more mining-friendly countries could work in the project's favor. The country already has a thriving diamond mining industry — it's the world's second biggest producer of the gem — but recent market uncertainties have seen it look to diversify.

While the discovery does not immediately alter the global REE balance, it represents another potential source at a time when governments and technology companies are actively looking to reduce supply chain risk and ease the potential threat of China grinding worldwide car production to a halt.


Original Submission

posted by Fnord666 on Wednesday March 11, @02:03PM   Printer-friendly
from the pinky-swear dept.

With no enforcement and questionable economics, it may not make a difference:

On Wednesday, the Trump administration announced that a large collection of tech companies had signed on to what it's calling the Ratepayer Protection Pledge. By agreeing, the initial signatories—Amazon, Google, Meta, Microsoft, OpenAI, Oracle, and xAI—are saying they will pay for the new generation and transmission capacities needed for any additional data centers they build. But the agreement has no enforcement mechanism, and it will likely run into issues with hardware supplies. It also ignores basic economics.

Other than that, it seems like a great idea.

The agreement is quite simple, laying out five points. The key ones are the first three: that the companies building data centers pledge to pay for new generating capacity, either building it themselves or paying for it as part of a new or expanded power plant. They'll also pay for any transmission infrastructure needed to connect their data centers and the new supply to the grid and will cover these costs whether or not the power ultimately gets used by their facilities.

The companies also pledge to consider allowing the local grid to use on-site backup generators to handle emergency power shortages affecting the community. They will also hire and train locally when they build new data centers.

The agreement suggests that these promises will protect American consumers from price hikes due to the expansion of data centers and will somehow "lower electricity costs for consumers in the long term." How that will happen is not specified.

Also missing from the agreement is any sort of enforcement mechanism. If a company decides to ignore the agreement, the worst it is guaranteed to suffer is bad publicity, something these companies already have experience handling. That said, Trump has been known to resort to blatantly illegal tactics to pressure companies to conform to his wishes, so ignoring the agreement carries risks.

[...] As recent coverage has made clear, most of the companies plan to handle (or are already handling) the added power demand with natural-gas-generating equipment. But there's a limited supply of such equipment; various sources quote wait times of up to seven years. That's longer than even the planned timeline for a new nuclear power plant. While there's likely to be some expanded manufacturing capacity due to the surging demand, the companies that build gas turbines will be very hesitant to invest in meeting demand that is likely to be transient.

Even if they did, basic economics indicate that expanded use of this fuel would raise consumer costs, as it would mean more competition for the supply of natural gas used either directly by consumers for heating or by grid operators that supply consumers with electricity. That will likely force utilities to meet demand with plants that are rarely used at present, typically because those plants are less efficient and more expensive to run.

[...] So it's very unlikely that data center builders will be able to meet the added demands with natural gas. Even if they could, it would shift costs onto consumers unless we somehow scaled natural gas production to match while keeping overseas consumers from buying the excess.

Are there alternatives? We haven't built any coal plants in decades, and many of the ones still in operation are reaching their normal end-of-life points; the electricity they produce is also expensive relative to the alternatives. It's highly unlikely that anyone would invest in new coal plants given the cost and environmental consequences.

Nuclear is a questionable option. There are plans to restart a couple of shuttered plants—the secretary of energy is holding a press conference at one of them, Indian Point in New York, on Friday, suggesting further efforts in that regard. But there simply aren't enough shuttered plants to make a difference. The administration is promoting small modular nuclear power and hopes to have some test reactors built within the next few years. But it will likely take considerably longer than that before they can be widely deployed.

That leaves a combination of solar and batteries as one of the most viable alternatives, although it's still more expensive than most natural gas plants at present. That combination is already being installed at record paces—solar output in the US has grown by over 30 percent for two years in a row. It's not clear how much faster we could be installing them, and in any case, it's clear that the administration doesn't view them as a solution. The announcement specifically takes a shot at policies favoring renewable energy, saying, "President Trump terminated the job-killing 'Green New Scam,' ended massive taxpayer subsidies for unreliable energy sources, and rescinded the Biden Administration's anti-American energy regulations."

Whatever is built will also face the general challenge that transmission remains a huge problem, with many proposed power plants waiting in the queue for years for interconnects to the wider grid.

Supplying the data center boom with power in a way that's minimally disruptive to the wider public will be an extremely difficult challenge. Dismissing it with a toothless agreement that hopes the companies involved will solve all the problems is not cause for optimism that we're prepared to meet that challenge.


Original Submission

posted by Fnord666 on Wednesday March 11, @09:17AM   Printer-friendly

It's Official: The Cybertruck is More Explosive Than the Ford Pinto:

We now have a full year of data for the Cybertruck, and a strange preponderance of headlines about Cybertrucks exploding into flames, including several fatalities. That's more than enough data to compare to the Ford Pinto, a car so notoriously combustible that it has become a watchword for corporate greed.

Let's start with the data summary, then we'll do a deep dive.

TL;DR: The CyberTruck is 17 times more likely to have a fire fatality than a Ford Pinto

With that maddening statistic out of the way, let's dive into the numbers.

Here's the table, with all sources linked below.

                              Cybertruck and Ford Pinto Fire Fatalities

  Vehicle Model             Total Units   Reported Fire Fatalities   Fatality Rate (per 100,000 units)
  Tesla Cybertruck               34,438                           5                               14.52
  Ford Pinto (1971-1980)      3,173,491                          27                                0.85
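
The arithmetic behind the table is simple enough to check by hand: the rate is reported fire fatalities divided by total units, scaled to 100,000 units. A quick Python sketch using the figures quoted above (the same numbers, not an independent data source):

    # Back-of-the-envelope check of the table above, using the article's figures.
    def rate_per_100k(fatalities: int, units: int) -> float:
        """Reported fire fatalities per 100,000 units built."""
        return fatalities / units * 100_000

    cybertruck = rate_per_100k(5, 34_438)       # ~14.52
    pinto = rate_per_100k(27, 3_173_491)        # ~0.85

    print(f"Cybertruck: {cybertruck:.2f} per 100,000 units")
    print(f"Pinto:      {pinto:.2f} per 100,000 units")
    print(f"Ratio:      {cybertruck / pinto:.1f}x")    # ~17x, matching the TL;DR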


Original Submission

posted by janrinok on Wednesday March 11, @04:32AM   Printer-friendly
from the slip-sliding-away dept.

Ancient clay hidden under Japan caused rupture that triggered devastating 2011 earthquake and tsunami:

A thin, soft and slippery layer of clay-rich mud embedded in rock below the seafloor intensified the 2011 Japan earthquake that produced a tsunami, killing tens of thousands of people and damaging the Fukushima Daiichi nuclear power plant.

The discovery, which is published in the journal Science, was made by an international team of scientists including researchers from The Australian National University (ANU).

Aboard the world's most advanced drilling-equipped science vessel, Chikyu, the team sailed to the Japan Trench in late 2024 to investigate what caused the Tōhoku-oki fault to rupture and trigger the earthquake.

The researchers drilled up to 7,906 metres below the sea surface, setting a Guinness World Record for the deepest scientific ocean drilling ever conducted.

Core samples recovered from in and around the fault zone reveal that the fault rupture occurred in a layer of clay no more than a few metres thick.

According to ANU geophysicist Associate Professor Ron Hackney, the clay is very soft, slippery and exceptionally weak – a discovery that was "surprising and unusual".

He said this is the first time scientists have linked the presence of soft and slippery clay in a fault plane to ancient sediments deposited on the seafloor over millions of years.

"This work helps explain why the 2011 earthquake behaved so differently from what many of our models predicted," Associate Professor Hackney, who is also Director of the Australian and New Zealand International Scientific Drilling Consortium (ANZIC), said.

According to the scientists, learning more about the properties and nature of a fault plane can tell them how much of the fault plane might rupture during an earthquake and where the energy released during an earthquake will be concentrated along the fault.

This, in turn, provides greater insights into the processes and properties that control giant earthquakes, the resulting movement of the seafloor and tsunami generation, and the likely size and extent of any tsunami that might be triggered.

"This clay-rich ancient mud formed from microscopic particles that slowly settled on the seafloor beneath the Pacific Ocean over time – a process that took place over 130 million years – as the Pacific tectonic plate slowly moved west to ultimately be forced under Japan," Associate Professor Hackney said.

"The fault zone formed in that weak layer of clay as those sediments slowly slid under Japan, moving roughly 10 centimetres a year.

"Given that the weak clay layer is sandwiched between stronger layers of rock above and below, the clay acted like a natural 'tear line' that caused the fault to form within that layer of clay."

The 2011 Japan earthquake was the result of a steady build-up of stress over the hundreds of years since the previous earthquake in a never-ending cycle caused by the movement of the Pacific tectonic plate as it pushed under the tectonic plate on which Japan sits.

According to Associate Professor Hackney, once the built-up stress was abruptly released, the weak nature of the clay offered little resistance to the rupture generated, allowing that rupture to rapidly propagate up the fault, all the way to the seafloor.

This caused the seafloor to rise by several metres, which in turn triggered a tsunami on a scale not expected for this region.

"Amazingly, the fault didn't rupture the whole layer of clay, which extends for hundreds of kilometres along the Japan Trench – the deep ocean boundary where the Pacific and Japan tectonic plates collide with one another," he said.

"The rupture plane was just a centimetre or so thick, yet it allowed between 50 and 70 metres of movement on the fault and caused the seafloor off Japan to rise abruptly by several metres during the earthquake."

By learning more about the properties of the Tōhoku-oki earthquake fault, scientists hope to conduct better assessments of earthquake and tsunami hazards for coastal communities around the world.

"There are indications that the sediments being drawn towards and under Sumatra may also contain a weak clay layer, which suggests that the giant 2004 Boxing Day tsunami may be linked to similar fault characteristics," Associate Professor Hackney said.

"Although we can't be sure without extracting and analysing core samples directly from that fault."

The research team has also published a documentary [Not reviewed -- Ed] taking viewers behind the scenes of their epic expedition. The film follows the international team of researchers onboard Chikyu as they recover samples from beneath the Japan seafloor.

Journal Reference: https://doi.org/10.1126/science.ady0234


Original Submission

posted by janrinok on Tuesday March 10, @11:43PM   Printer-friendly

https://www.tomshardware.com/tech-industry/norwegian-consumer-watchdog-calls-out-enshittification

Claims Hardware Deliberately Degraded After Purchase

Alongside the report, the Forbrukerrådet and 28 co-signers — including the Electronic Frontier Foundation, Access Now, and Cory Doctorow — sent an open letter to EU policymakers on February 27, urging stronger enforcement of the Digital Markets Act and the GDPR, and pushing back against the European Commission's "Digital Omnibus" package, which the letter argued risks diluting existing consumer protections.

The collective is pushing for the EU Digital Fairness Act, which the Commission included in its 2026 work program with a proposal expected in Q4 2026. The act is expected to target dark patterns, influencer marketing, addictive design, and unfair personalization across digital products and services.

A public consultation that closed in October 2025 drew roughly 3,000 responses in its first two weeks alone, many from gamers pushing for provisions that would prevent publishers from disabling titles consumers have already purchased — a campaign known as Stop Killing Games.


Original Submission

posted by janrinok on Tuesday March 10, @06:57PM   Printer-friendly

The Slow Death of the Power User:

There's a certain kind of person who's becoming extinct. You've probably met one. Maybe you are one. Someone who actually understood the tools they used. Someone who could sit down at an unfamiliar system, poke at it for twenty minutes, and have a working mental model of what it was doing and why. Someone who read error messages instead of dismissing them. Someone who, when something broke, treated it as a puzzle rather than a betrayal.

That person is dying off. And nobody in the industry seems to care. In fact, most of them are actively celebrating the funeral while billing it as progress.

This isn't an accident. This is the result of two decades of deliberate, calculated effort by the largest technology companies on earth to turn users into consumers, instruments into appliances, and technical literacy into a niche hobby for weirdos. They succeeded beyond their wildest expectations. Congratulations to everyone involved. You've built a generation that can't extract a zip file without a dedicated app and calls it innovation.

The average person who grew up with smartphones has a fundamentally broken mental model of computing. Not broken in the sense that they can't operate their devices — they can, with terrifying efficiency. Broken in the sense that their understanding stops at the glass. They know how to use apps. They do not know what apps are. They know files exist somewhere, in the cloud maybe, or possibly inside the app itself — the distinction isn't clear to them and they've never needed it to be.

[...] Ask a twenty-two-year-old to connect to a remote server via SSH. Ask them to explain what DNS is at a conceptual level. Ask them to tell you the difference between their router's public IP and the local IP of their laptop. Ask them to open a terminal and list the contents of a directory. These are not advanced topics. Twenty years ago these were things you learned in the first week of any serious engagement with computers. Today they're exotic knowledge that even a lot of working software developers don't have, because you can go a long way in modern development without ever leaving the managed abstractions your platform provides.
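
None of those tasks needs anything beyond a stock interpreter. Here is a minimal Python sketch (standard library only; the hostname is just an example) that does three of them, resolving a name through DNS, listing a directory, and showing the local address the machine would use for outbound traffic:

    import os
    import socket

    # Resolve a hostname through DNS (the host here is only an example).
    host = "example.com"
    addresses = sorted({info[4][0] for info in socket.getaddrinfo(host, 443)})
    print(f"{host} resolves to: {addresses}")

    # List the contents of the current directory, the terminal equivalent of `ls`.
    print("Current directory contains:", os.listdir("."))

    # Find the local (private) IP the OS would use to reach the outside world.
    # Connecting a UDP socket sends no packets; it just selects a route.
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.connect(("8.8.8.8", 80))
        print("Local IP on the outbound interface:", s.getsockname()[0])
    # The router's public IP is a different address again; checking it normally
    # means asking an external service or the router itself.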

And that's the real damage. It's not just end users who don't know this stuff. It's developers. People who write software for a living who've never had to think about what happens between their API call and the response. Who've never had to debug something at the network layer. Who've never had to read a full stack trace and understand every frame of it. Because the frameworks handle all of that, and the frameworks are good enough, and figuring out how things actually work is optional.

[...] The smartphone didn't just shift computing to a smaller screen. It replaced a computing paradigm — one built on ownership, modification, and composability — with a consumption paradigm built on managed access, curated experience, and dependency. And it did so with the full, deliberate, enthusiastic participation of every major platform vendor.

[...] All of this was sold as a feature. "It just works." Safety. Privacy. User experience. What it actually was, was control — Apple's control over what you could do with hardware you supposedly bought. And the genius move, the move that should make any serious observer furious, was convincing users that this control was being exercised on their behalf.

[...] Android played the same game with better PR. Google launched Android as an open platform, and for a few years it genuinely was. You could sideload APKs trivially. You could root your device and replace the entire OS. Manufacturers shipped custom builds. The ecosystem was messy and fragmented and occasionally awful and genuinely interesting. Then, gradually, systematically, Google started closing it down.

[...] The users who grew up on these platforms don't know what they're missing. They've never used a system where they were genuinely in control. The idea that you should be able to run arbitrary code on hardware you paid for is foreign to them — not rejected, but simply absent as a concept. They'll defend the restrictions without prompting because they've internalized the vendor's framing so thoroughly that they experience the cage as comfortable. "I don't want to root my phone, that sounds scary." Cool. You've successfully trained yourself to be afraid of ownership. The platform vendors are proud of you.

Technology culture used to celebrate technical competence. Not as gatekeeping, not as elitism — as genuine, infectious enthusiasm for understanding how systems worked. The BBS scene in the eighties ran on self-taught systems operators who understood their hardware and their network protocols well enough to build infrastructure that had never existed before. The early web had a "view source" ethos: you saw something interesting, you looked at how it was built, you learned from it, you made something of your own. [...]

These were not professional circles. You didn't need a CS degree. You needed curiosity and stubbornness and a tolerance for reading things that were too long and trying things that didn't work on the first ten attempts. The culture valued that and passed it down. Kids learned by watching, by lurking in forums, by getting their stupid questions answered by people who then expected them to answer someone else's stupid questions eventually. The knowledge propagated because the culture treated knowledge as worth propagating.

That culture didn't die because the knowledge became irrelevant. It died because it became economically inconvenient. The platforms that replaced the open internet — YouTube, Reddit, Discord, eventually TikTok — are consumption platforms. Their business model requires passive engagement. A user who spends three hours going down a documentation rabbit hole, breaking things in a terminal, and actually understanding something is worth less to them than a user who watches three hours of content. They don't ban technical material. They algorithmically deprioritize anything that demands active engagement, they reward passive consumption, and they shape the culture of their platform accordingly over years and years until the culture that emerges is one that treats passive consumption as the default relationship with technology.

[...] The man page is dead for most users. The RFC is unread by most developers who depend on the protocols it describes. Stack Overflow, which used to be a genuinely valuable resource for understanding why things behaved certain ways, has become a paste-and-pray operation: scan for a code snippet that looks related to your problem, copy it, run it, hope it works. When it doesn't, find another snippet. The understanding never enters the loop. LLMs have accelerated this to a degree that should make anyone who cares about software quality genuinely alarmed. You can now write complete programs without understanding what a single line of them does, and the programs will often work well enough in the happy path that you'll never know how thoroughly you don't understand what you've built until something goes wrong in production at two in the morning and you are completely without tools to respond.

This is what the culture has normalized: outcomes without understanding, solutions without models. And the response when you point this out is "okay but who has time for that," as if understanding were a productivity cost rather than the entire point.

The problem is not, primarily, that services collect data. The problem is that users have been convinced to treat pervasive surveillance infrastructure as benign or beneficial, and to respond to any criticism of it as paranoia, technical elitism, or failure to appreciate convenience. The learned helplessness is the crisis. The data collection is the symptom.

[...] The algorithm situation is the one that most directly affects daily life and receives the least serious scrutiny. Every major platform uses recommendation systems that are, in the most literal sense, making decisions about what information you encounter. What news exists in your world. Which of your friends' thoughts reach you. Which ideas get surfaced and which get buried. These systems are explicitly not neutral — they're optimized for engagement, which empirically correlates with outrage, anxiety, conflict, and tribal reinforcement, because those emotional states produce the behavioral signals the engagement metrics reward. The platforms are making your information diet worse on purpose, because worse converts to engagement, and engagement converts to revenue.

[...] We're losing the ability to audit. A person who understands their tools can notice when those tools start behaving badly. They can run a packet capture with tcpdump or Wireshark and see what their phone is actually transmitting. They can look at what their DNS resolver is returning. They can read the permissions an app requests and reason about whether those permissions make sense for what the app claims to do. They can notice when an update changes behavior in ways that benefit the developer at the user's expense. Most people have none of these capabilities and depend entirely on external review — journalists, academic security researchers, occasionally regulators — which is slow, incomplete, paid for by advertising revenue from the same companies being reviewed, and easily captured. [...]

We're losing resilience. Communities with high concentrations of technical competence can adapt when platforms change or die. They migrate. They self-host. They fork. When Google killed Reader, the technical community had self-hosted alternatives running within weeks. When Twitter's API became hostile to third-party clients, developers built ActivityPub implementations and federated alternatives. When a platform shifts its terms in ways that make it untenable, technically competent users can leave and rebuild elsewhere, carrying their data with them, because they understand their data as something they own rather than something that lives in the platform. Communities without those skills get stranded. [...]

We're losing the builder pipeline. This one compounds over time and the compounding is already visible. Power users become developers. Tinkerers become engineers. The kid who roots their Android phone and breaks it and fixes it and then writes a script to automate something the official interface doesn't support — that kid, ten years later, has intuitions about system behavior that you cannot get from a bootcamp and cannot get from building inside managed platforms your entire career. They know what it means when something is running slower than it should. They have hypotheses about failure modes before they start debugging because they've caused those failure modes themselves. They understand that abstractions are leaky and that the leak is usually where the interesting problems are.

Close off the tinkering and you close off the pipeline. What you get instead is a generation of developers who've only ever worked within platform constraints, who've never pushed against the edges of the abstractions they've been given, who treat framework behavior as ground truth rather than implementation detail. [...]

We're losing the adversarial capacity to hold platforms accountable. This is the one that matters most and gets talked about least. The open-source movement, the early security research community, the hacker culture in the original sense — these were not just about building things. They were a check on the power of institutions. [...]

[...] The industry isn't going to fix this. Every financial incentive points the other way. Confused, dependent users are more profitable than competent, autonomous ones. Lock-in is more valuable than interoperability. Opacity is more valuable than transparency. The architecture of modern consumer technology has been optimized against user competence with extraordinary success, and every quarterly earnings report validates the approach.

Regulators aren't going to fix it. They're fighting over app store fees while the underlying issue — the right of users to own and control the devices they've paid for — gets no serious legislative traction in most jurisdictions. The EU's Digital Markets Act has done some real work on interoperability requirements and is being fought by every affected platform with everything they have, because the platforms understand that the real threat is not the specific provisions but the principle that user autonomy is a value the law should protect.

Educators aren't going to fix it. Most digital literacy curricula teach application use. How to use Google Workspace. How to spot a phishing email. "Coding" in the form of block-based visual programming that produces no transferable understanding of how software actually works. The schools that teach real systems thinking, real network knowledge, real debugging skills — those schools cost money and are not where most people go.

The technical community is mostly not going to fix it either, because most of it has retreated into professional specialization and has largely given up on the broader project of maintaining technical literacy outside the profession. The open-source community does important work maintaining alternative infrastructure. It communicates almost entirely with itself.

So what's left is individual stubbornness. Which is not nothing. Organized individual stubbornness, pointed in the right direction, is how every important counter-cultural technical movement has worked.

Learn how your tools actually work. Not just how to operate them. Use the command line. Set up a home server and break it and fix it. Root a phone or, if you're on a platform where that's been made impossibly difficult, buy something where it isn't. Run a Linux install on bare metal and deal with the driver problems. Learn to read a network capture. Understand what your browser is sending with every request — the dev tools have been there the whole time. Host something yourself instead of using the managed service. Use open protocols where they exist: XMPP, ActivityPub, RSS, SMTP — these are old and unglamorous and they work and you own your data when you use them. Feed the federated alternatives even when they're worse than the centralized ones, because they're worse partly due to network effects and network effects respond to participation.
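
As one small illustration of how plain those open protocols still are, here is a sketch (the feed URL is a placeholder; substitute whatever feed you actually read) that pulls an RSS feed with nothing but the Python standard library:

    import urllib.request
    import xml.etree.ElementTree as ET

    # An RSS feed is just XML fetched over HTTP. The URL below is a placeholder.
    FEED_URL = "https://example.org/index.rss"

    with urllib.request.urlopen(FEED_URL) as response:
        root = ET.parse(response).getroot()

    def localname(tag: str) -> str:
        """Strip any XML namespace so RSS 1.0 and 2.0 feeds parse the same way."""
        return tag.rsplit("}", 1)[-1]

    items = [el for el in root.iter() if localname(el.tag) == "item"]
    for item in items[:5]:
        fields = {localname(child.tag): (child.text or "").strip() for child in item}
        print(fields.get("title"), "->", fields.get("link"))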

This is not about purity. Nobody is asking you to reject every managed service on principle or run Gentoo on everything. It's about maintaining enough technical competence that you are a participant in the systems you depend on rather than a permanent subject of them. It's about being able to make informed choices instead of having choices made for you by systems optimized for someone else's revenue.

The power user isn't dead. The skills exist. The communities exist — smaller, grayer, more scattered, fighting an institutional headwind that grows stronger every year. But they exist, and the knowledge is still propagating in the spaces the platforms haven't fully colonized.

The trajectory is bad. Every generation of new users arrives knowing less and expecting less. Every generation of new developers builds on more layers of managed abstraction and understands fewer of them. Every year it gets harder to explain why ownership matters, why understanding matters, why the convenience-for-control trade is a bad deal even when the convenience is genuinely excellent — because the people you're explaining it to have lived their entire lives inside the control and experienced it as freedom.

The obituary for the power user is being written right now. The people writing it are the same ones who sold you the phone, designed the app store, wrote the terms of service you didn't read, and built the algorithm that decided you didn't need to see this.

They are probably right about the timeline. They've been right about most things. The market has validated them at every step.

That is not an argument for giving up. It is an argument for being considerably angrier about it than most people currently are.

The full blog post is much longer and is a very interesting read.


Original Submission

posted by hubie on Tuesday March 10, @02:10PM   Printer-friendly

A fascinating report in New Scientist tells of common ancestry between Amazonian and Australasian peoples, possibly dating back more than 10,000 years. How could Australasian people have crossed the ocean to arrive in the Amazon?

The genomes of 15 ancient Americans, including six that are more than 10,000 years old, have been sequenced. The results reveal how people first spread through the Americas – and also throw up a major mystery.

The big picture is clear. Around 25,000 years ago during the last ice age, the ancestors of modern native Americans moved across the Beringian land bridge into what is now Alaska. They remained there for millennia because the way south was blocked by ice. Once a path opened up, groups of hunter-gatherers moved south very quickly.

[...]

Southern native Americans split from northern ones around 16,000 years ago, the results suggest, and reached South America not long afterwards.

The genomes reveal many more details about this process. For instance, it appears some previously unknown group split away from northern native Americans at some point and then moved into South America around 8000 years ago, long after the initial migration.

But the study also adds to a big mystery: some groups in the Amazon are somewhat more closely related to the Australasians of Australia and Papua New Guinea than other native Americans are. The genomes show this "Australasian signal" is more than 10,000 years old. So where did it come from?

If another group of people more closely related to Australasians crossed the Beringian land bridge at some point and moved down to the Amazon, why is there no trace of them in North America? And in the exceedingly unlikely event they somehow managed to cross the vast Pacific long before the Polynesians, how did they end up in the Amazon, on the other side of the Andes?

Based on the shape of their skulls, it has also been claimed that many ancient humans found in the Americas cannot be the ancestors of present-day native Americans and instead belonged to a distinct group dubbed the "Paleoamericans". "But we see again that they are most closely related to present-day native Americans," says Moreno-Mayar.

This finding has led to the remains of one of the early humans, the 10,000-year-old Spirit Cave mummy, being returned to the Fallon Paiute-Shoshone Tribe after a long legal battle.

In 2015, Moreno-Mayar's team showed that another supposed "Paleoamerican", called Kennewick Man, was closely related to present-day native Americans.

Also at Smithsonian magazine


Original Submission

posted by hubie on Tuesday March 10, @09:29AM   Printer-friendly

https://buttondown.com/suchbadtechads/archive/maxell-life-size-robots/

The idea of robots literally eating your precious and portable files, with Maxell's 5.25" disks on some Michelin-rated menu of computer hardware, must have been far more terrifying than it was exciting.

That could be oil in their glasses but it sure looks like white wine. And what, they're going to season their floppy appetizer with table salt? Pick a lane, Maxell!

The ad above was a massive departure from Maxell's previous "Gold Standard" campaigns, those with their rainbow prisms and racecar disks. The restaurant ad seems like it had a lot more money behind it too, showing up in several issues of PC Mag, Personal Computer, and Byte throughout 1985 and 1986. It is not hard to find online or in print, whether on eBay, WorthPoint, or in a frame at a Value Village in Ottawa.

Despite its enduring popularity, this was actually the worst showing of what would go on to be a campaign so good that it wound up in a museum. Because, yes, Maxell's dollar-store C-3PO was, in fact, a life-size prop. And far from lonely.


Original Submission

posted by hubie on Tuesday March 10, @04:47AM   Printer-friendly
from the billion-dollar-questions dept.

"It has seemed to me for a long time it might be better if building AGI were a government project," Sam Altman publicly mused last week... Altman speculated on the possibility of the government "nationalizing" private AI companies into a public project, admitting more than once he's wondered what would happen next. "I obviously don't know," Altman said — but he added that "I have thought about it, of course." Altman's speculation hedged that "It doesn't seem super likely on the current trajectory. That said, I do think a close partnership between governments and the companies building this technology is super important."

Could powerful AI tools one day slip from the hands of private companies to be controlled by the U.S. government? Fortune magazine's AI editor points out that "many other breakthroughs with big strategic implications — from the Manhattan Project to the space race to early efforts to develop AI — were government-funded and largely government-directed." And Fortune added that last week the Defense Department threatened Anthropic with the Defense Production Act, which allows the president to designate "critical and strategic" goods for which businesses must accept the government's contracts. Fortune speculates this would've been "a sort of soft nationalization of Anthropic's production pipeline".

Altman acknowledged Saturday that he'd felt the threat of attempted nationalization "behind a lot of the questions" he'd received when answering questions on X.com... How exactly will this AI build-out be handled — and how should AI companies be working with the government? In a sprawling ask-me-anything session on X that included other members of OpenAI leadership, one Missouri-based developer broached an AGI-government scenario with OpenAI's Head of National Security Partnerships, Katherine Mulligan. If OpenAI built an AGI — something that even passed its own Turing test for AGI — would that be a case where its government contracts compelled them to grant access to the DoD?

"No," Mulligan answered. At our current moment in time, "We control which models we deploy."


Original Submission

posted by hubie on Tuesday March 10, @12:02AM   Printer-friendly
from the and-then-there-was-one dept.

FCC rejects protests because Charter and Cox don't compete directly in most places:

Charter Communications, operator of the Spectrum cable brand, has obtained Federal Communications Commission permission to buy Cox and surpass Comcast as the country's largest home Internet service provider.

Charter has 29.7 million residential and business Internet customers compared to Comcast's 31.26 million. Buying Cox will give Charter another 5.9 million Internet customers. The FCC approved the deal on Friday, but the companies still need Justice Department approval and sign-offs from states including California and New York.

Opponents of Charter's $34.5 billion acquisition told the FCC that eliminating Cox as an independent entity will make it easier for Charter and Comcast to raise prices. But the FCC dismissed those concerns on the grounds that Charter and Cox don't compete directly against each other in the vast majority of their territories.

FCC Chairman Brendan Carr's primary demand from companies seeking to merge has been to eliminate diversity, equity, and inclusion (DEI) programs and policies. In a press release, the Carr-led FCC said that "Charter has committed to new safeguards to protect against DEI discrimination," and that Charter's network-expansion plans will bring "faster broadband and lower prices" to rural areas.

The merger was approved one day after Charter sent a letter to Carr outlining its actions to end DEI. Charter offers broadband and cable service in 41 states, while Cox does so in 18 states.

The FCC's Charter/Cox decision dismissed competition concerns raised in a November 2025 petition to deny filed by Public Knowledge, the Communications Workers of America, the Benton Institute for Broadband & Society, and the Center for Accessible Technology. The FCC said:

Petitioners argue that the Transaction would reduce the number of cable operators, making it easier for competitors, such as Comcast, to "benchmark" their pricing, promotions, bundling, and rate schedules to New Charter. Specifically, they argue that "[r]educing the number of major cable operators makes it easier for each to benchmark pricing decisions against others, reducing competitive pressure across the industry."

Citing the literature on multimarket contact, they further argue that "the merger could transform the competitive landscape such that New Charter becomes the benchmark for Comcast,... thereby enabling parallel behavior." We find this argument unpersuasive. First, there is very little multimarket contact in this case. Because cable companies have generally offered residential broadband service within their non-overlapping franchise territories, they compete directly against each other only at a very small number of locations.

The FCC added that Charter and other cable firms will continue to face competition from fiber, fixed wireless, and satellite broadband providers. Competition from those sectors "will have a significantly greater impact on their pricing decisions than the possible increased ability to benchmark due to the loss of a single cable provider (Cox) in a different territory," the FCC said.

The petition to deny the merger said it "would reduce the number of sizable independent cable operators" that compete against Comcast and other cable firms. "With fewer independent peers, Comcast could rely more on parallel conduct rather than competitive differentiation, especially in non-overlapping territories," the petition said. "The consolidation of pricing benchmarks makes parallel moves (rate increases, reduced promotional discounts) more feasible, simplifying rivals' strategic comparisons and promoting conscious parallelism."

The petition cited research suggesting that in the US airline industry, some "mergers increased fares not only on overlap routes but also on non-overlap routes."

[...] Public Knowledge Legal Director John Bergmayer said that the Carr FCC "did not require Charter to do anything it wasn't already planning to do." He said this is in stark contrast to the FCC's 2016 approval of Charter's merger with Time Warner Cable, which allowed Charter to become the second biggest cable company in the US.

"In 2016, the commission approved Charter's acquisition of Time Warner Cable only after imposing conditions on data caps, usage-based pricing, and paid interconnection," Bergmayer said on Friday. "Today's order finds those concerns no longer apply, largely because the agency credits fixed wireless and satellite as competitive constraints on cable. Further, the Commission imposed no affordability conditions, despite doing so in the 2016 Charter, Comcast-NBCU, and Verizon-TracFone transactions. The record does not support this outcome."


Original Submission

posted by hubie on Monday March 09, @07:20PM   Printer-friendly

Built on open-source software, this European cloud office suite aims to keep your data out of Microsoft 365 and Google Workspace:

Digital sovereignty in Europe is taking another step forward. Office.eu has officially launched in The Hague. This new cloud service is positioning itself as a fully European, open‑source‑based alternative to Microsoft 365 and Google Workspace. The service promises digital sovereignty, strict compliance with European Union (EU) law, and a familiar cloud‑office experience for organizations wary of US platforms.

The new service is operated entirely by European owners and runs solely on EU-based infrastructure and data centers. This design, the company argues, keeps customer data "under European jurisdiction" and insulated from foreign legal regimes, such as the US CLOUD Act. By tying its technical and corporate structure to European territory, the company is directly tapping into long‑running concerns among EU policymakers and public bodies about dependence on US cloud giants for everyday productivity tools.

In a statement, Maarten Roelfs, CEO of Office EU, made this position clear: "We have seen more and more how essential it is to become cloud-independent and to rely on software that is built around European values. For many years, Europe has relied on American software and, therefore, created a certain risk of dependency. We have also given away control over our own data. Office.eu proves that we now have a strong European alternative, with sovereignty, privacy, and transparency at its core."

Roelfs isn't trying to convince people to change. With the change in government in the US, many EU governments and agencies are dumping American-based cloud services as fast as they can. This movement includes France, which is dumping Microsoft Teams and Zoom, the Austrian military, the German state of Schleswig-Holstein, Danish government organizations, and the French city of Lyon. These governments and agencies are dropping Microsoft programs in favor of homegrown European alternatives.

Built primarily on the EU-based, open-source Nextcloud Hub, Office.eu bundles file storage and sharing, email, calendar, online document editing, and chat plus video calls into a single, browser‑based platform. The service deliberately mimics the look and feel of Microsoft 365 and Google Workspace to ease migration.

Office.eu suggests most migrations will be fast and easy because core components rely on standard formats and protocols: for example, email via IMAP and calendars via CalDAV. For documents, Office EU supports common Microsoft Office formats such as DOCX, XLSX, and PPTX. Office.eu will provide migration tools, though it hasn't said what these tools will be.
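
Because the mail side is plain IMAP, a migration can be sanity-checked with nothing but Python's standard library. A minimal sketch (the host, account, and password are placeholders, not real Office.eu settings, and the folder-name parsing is deliberately simplified):

    import imaplib

    # Placeholder server details; substitute whatever your provider publishes.
    HOST = "imap.example.eu"
    USER = "you@example.eu"
    PASSWORD = "app-password"

    # Connect over TLS, log in, and count the messages in each folder.
    with imaplib.IMAP4_SSL(HOST) as imap:
        imap.login(USER, PASSWORD)
        status, folders = imap.list()
        for raw in folders:
            # Crude parse: the folder name is the last quoted token in the LIST line.
            name = raw.decode().rsplit(' "', 1)[-1].strip('"')
            status, data = imap.select(name, readonly=True)
            if status == "OK":
                print(f"{name}: {data[0].decode()} messages")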

The company also provides desktop sync clients for Windows, MacOS, and Linux, as well as mobile apps. You can also use the web interface if you don't want to install anything.

However, Office.eu recognizes its service isn't right for everyone. Microsoft 365 is a strong choice when you want the widest feature set and the most familiar experience, especially if your team already lives inside Outlook, Teams, and Microsoft identity.

Office EU is the better choice when you want a Europe-hosted workspace by default, a more transparent foundation, and a simpler place for daily work. For many teams, that makes it the best alternative to Office 365, not because it tries to copy every Microsoft feature, but because it reduces complexity and gives you a clearer sense of control over where your data lives, who can access it, and how dependent you are on decisions made outside your organisation.

Still, for many Europeans, Office EU will prove an excellent choice. If privacy and control are important to you, Office EU deserves your attention. 


Original Submission

posted by hubie on Monday March 09, @02:35PM   Printer-friendly
from the fuel-the-standard-vs-daylight-saving-fires dept.

March and April are the time of year when a decent fraction of the world shifts its clocks forward (or back, in the Southern Hemisphere) for Daylight Saving Time (DST). Every year, it seems to result in debate about whether to abolish DST, and, if so, whether to stick with standard time or daylight time.

SoylentNews, being a science/fact-oriented site, would likely be interested in a comparison of time zones with Mean Solar Time (MST). There is a map showing the difference between the two in the Wikipedia article on time zones. The person who created that map has some short-yet-interesting articles on creating that map and later discussion about it. The articles are old (timeless?), but largely still relevant, as the time zones, and the existence of DST, are largely unchanged since the articles were written.

Interesting how standard time, over most of the landmass of the world, is largely ahead of MST, in some places (e.g. western China) by a lot. DST, where observed, makes that difference worse.
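
The arithmetic behind that map is straightforward: mean solar time at a given longitude runs ahead of UTC by four minutes per degree east, i.e. one hour per 15 degrees, so the gap between the legal clock and the sun is just the zone's UTC offset minus longitude/15. A small sketch, using approximate coordinates for the western China example mentioned above:

    from datetime import timedelta

    def clock_vs_sun(longitude_deg: float, utc_offset_hours: float) -> timedelta:
        """How far local clock time runs ahead of mean solar time at this longitude."""
        solar_offset_hours = longitude_deg / 15.0   # Earth turns 15 degrees per hour
        return timedelta(hours=utc_offset_hours - solar_offset_hours)

    # Kashgar, western China: about 76 degrees E, but legally on Beijing time (UTC+8).
    print(clock_vs_sun(76, 8))    # roughly 2:56 ahead of the sun
    # DST adds another hour wherever it applies, e.g. central Spain in summer
    # (about 4 degrees W, UTC+2 under CEST):
    print(clock_vs_sun(-4, 2))    # roughly 2:16 ahead of the sun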


Original Submission

posted by jelizondo on Monday March 09, @09:52AM   Printer-friendly
from the now-you-see-now-you-don't dept.

Claude Code deletes developer's production setup, including its database and snapshots — 2.5 years of records were nuked in an instant

Story has a happy ending of sorts, but should serve as a cautionary tale.

Everyone loves a good story about agent bots gone wrong, and those often come with a bit of schadenfreude towards our virtual companions. Sometimes, though, the errors can be attributed to improper supervision, as was the case of Alexey Grigorev, who was brave enough to detail how he got Claude Code to wipe years' worth of records on a website, including the recovery snapshots.

The story begins when Grigorev wanted to move his website, AI Shipping Labs, to AWS and have it share the same infrastructure as DataTalks.Club. Claude itself advised against that option, but Grigorev decided it wasn't worth the hassle or cost of keeping two separate setups.

Grigorev uses Terraform, an infrastructure management utility that can create (or destroy) entire setups, including networks, load balancing, databases, and, naturally, the servers themselves. He had Claude run a Terraform plan to set up the new website, but forgot to upload a vital state file that contains a full description of the setup as it exists at any moment in time.
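
The state file matters because a declarative tool like Terraform plans by diffing "what the configuration asks for" against "what the state file says already exists", then creating or destroying resources to close the gap. Plan against a missing or empty state and everything the tool actually manages becomes invisible to it. A toy Python sketch of that planning logic (purely illustrative, with made-up resource names; this is not Terraform's real algorithm):

    def plan(desired: set[str], recorded_state: set[str]) -> dict[str, set[str]]:
        """Toy declarative planner: the recorded state is its only memory of reality."""
        return {
            "create": desired - recorded_state,    # in config, not known to exist
            "destroy": recorded_state - desired,   # known to exist, not in config
        }

    state = {"web.datatalks", "db.prod", "db.snapshots"}     # what really exists
    config = {"web.datatalks", "web.ai-shipping-labs"}       # what the new plan asks for

    print(plan(config, state))
    # With the full state: create the new site, but also destroy db.prod and
    # db.snapshots, since they are no longer in the configuration.

    print(plan(config, set()))
    # With the state file missing: the planner forgets what it manages, tries to
    # recreate resources that already exist, and later cleanup or re-apply steps
    # can end up replacing or deleting live infrastructure.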

[Source]: Tom's Hardware

Have any of you been in a similar situation? And if so, how did you recover your data?


Original Submission