



posted by janrinok on Tuesday March 10, @11:43PM   Printer-friendly

https://www.tomshardware.com/tech-industry/norwegian-consumer-watchdog-calls-out-enshittification

Claims Hardware Deliberately Degraded After Purchase

Alongside the report, the Forbrukerrådet and 28 co-signers — including the Electronic Frontier Foundation, Access Now, and Cory Doctorow — sent an open letter to EU policymakers on February 27, urging stronger enforcement of the Digital Markets Act and the GDPR, and pushing back against the European Commission's "Digital Omnibus" package, which the letter argued risks diluting existing consumer protections.

The collective is pushing toward the EU Digital Fairness Act, which the Commission included in its 2026 work program with a proposal expected in Q4 2026. The act is expected to target dark patterns, influencer marketing, addictive design, and unfair personalization across digital products and services.

A public consultation that closed in October 2025 drew roughly 3,000 responses in its first two weeks alone, many from gamers pushing for provisions that would prevent publishers from disabling titles consumers have already purchased — a campaign known as Stop Killing Games.


Original Submission

posted by janrinok on Tuesday March 10, @06:57PM   Printer-friendly

The Slow Death of the Power User:

There's a certain kind of person who's becoming extinct. You've probably met one. Maybe you are one. Someone who actually understood the tools they used. Someone who could sit down at an unfamiliar system, poke at it for twenty minutes, and have a working mental model of what it was doing and why. Someone who read error messages instead of dismissing them. Someone who, when something broke, treated it as a puzzle rather than a betrayal.

That person is dying off. And nobody in the industry seems to care. In fact, most of them are actively celebrating the funeral while billing it as progress.

This isn't an accident. This is the result of two decades of deliberate, calculated effort by the largest technology companies on earth to turn users into consumers, instruments into appliances, and technical literacy into a niche hobby for weirdos. They succeeded beyond their wildest expectations. Congratulations to everyone involved. You've built a generation that can't extract a zip file without a dedicated app and calls it innovation.

The average person who grew up with smartphones has a fundamentally broken mental model of computing. Not broken in the sense that they can't operate their devices — they can, with terrifying efficiency. Broken in the sense that their understanding stops at the glass. They know how to use apps. They do not know what apps are. They know files exist somewhere, in the cloud maybe, or possibly inside the app itself — the distinction isn't clear to them and they've never needed it to be.

[...] Ask a twenty-two-year-old to connect to a remote server via SSH. Ask them to explain what DNS is at a conceptual level. Ask them to tell you the difference between their router's public IP and the local IP of their laptop. Ask them to open a terminal and list the contents of a directory. These are not advanced topics. Twenty years ago these were things you learned in the first week of any serious engagement with computers. Today they're exotic knowledge that even a lot of working software developers don't have, because you can go a long way in modern development without ever leaving the managed abstractions your platform provides.
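
As a minimal illustration (an editorial aside, not something from the essay), a few of those "first week" items can be demonstrated with nothing but Python's standard library: asking the system's DNS resolver to translate a name into addresses, finding the local address your machine uses (as distinct from the router's public IP), and listing a directory.

    import socket
    from pathlib import Path

    # DNS at a conceptual level: a name is translated into one or more addresses.
    # getaddrinfo() asks the system's configured resolver to do that translation.
    for *_, sockaddr in socket.getaddrinfo("example.com", 443):
        print("example.com resolves to", sockaddr[0])

    # The local (private) address of this machine, as opposed to the router's
    # public IP. connect() on a UDP socket sends no packets, but it makes the
    # OS pick the outgoing interface, whose address we can then read.
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.connect(("8.8.8.8", 80))
        print("local interface address:", s.getsockname()[0])

    # Listing the contents of a directory -- the same thing `ls` does in a shell.
    for entry in sorted(Path(".").iterdir()):
        print(entry.name)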

And that's the real damage. It's not just end users who don't know this stuff. It's developers. People who write software for a living who've never had to think about what happens between their API call and the response. Who've never had to debug something at the network layer. Who've never had to read a full stack trace and understand every frame of it. Because the frameworks handle all of that, and the frameworks are good enough, and figuring out how things actually work is optional.

[...] The smartphone didn't just shift computing to a smaller screen. It replaced a computing paradigm — one built on ownership, modification, and composability — with a consumption paradigm built on managed access, curated experience, and dependency. And it did so with the full, deliberate, enthusiastic participation of every major platform vendor.

[...] All of this was sold as a feature. "It just works." Safety. Privacy. User experience. What it actually was, was control — Apple's control over what you could do with hardware you supposedly bought. And the genius move, the move that should make any serious observer furious, was convincing users that this control was being exercised on their behalf.

[...] Android played the same game with better PR. Google launched Android as an open platform, and for a few years it genuinely was. You could sideload APKs trivially. You could root your device and replace the entire OS. Manufacturers shipped custom builds. The ecosystem was messy and fragmented and occasionally awful and genuinely interesting. Then, gradually, systematically, Google started closing it down.

[...] The users who grew up on these platforms don't know what they're missing. They've never used a system where they were genuinely in control. The idea that you should be able to run arbitrary code on hardware you paid for is foreign to them — not rejected, but simply absent as a concept. They'll defend the restrictions without prompting because they've internalized the vendor's framing so thoroughly that they experience the cage as comfortable. "I don't want to root my phone, that sounds scary." Cool. You've successfully trained yourself to be afraid of ownership. The platform vendors are proud of you.

Technology culture used to celebrate technical competence. Not as gatekeeping, not as elitism — as genuine, infectious enthusiasm for understanding how systems worked. The BBS scene in the eighties ran on self-taught systems operators who understood their hardware and their network protocols well enough to build infrastructure that had never existed before. The early web had a "view source" ethos: you saw something interesting, you looked at how it was built, you learned from it, you made something of your own. [...]

These were not professional circles. You didn't need a CS degree. You needed curiosity and stubbornness and a tolerance for reading things that were too long and trying things that didn't work on the first ten attempts. The culture valued that and passed it down. Kids learned by watching, by lurking in forums, by getting their stupid questions answered by people who then expected them to answer someone else's stupid questions eventually. The knowledge propagated because the culture treated knowledge as worth propagating.

That culture didn't die because the knowledge became irrelevant. It died because it became economically inconvenient. The platforms that replaced the open internet — YouTube, Reddit, Discord, eventually TikTok — are consumption platforms. Their business model requires passive engagement. A user who spends three hours going down a documentation rabbit hole, breaking things in a terminal, and actually understanding something is worth less to them than a user who watches three hours of content. They don't ban technical material. They algorithmically deprioritize anything that demands active engagement, they reward passive consumption, and they shape the culture of their platform accordingly over years and years until the culture that emerges is one that treats passive consumption as the default relationship with technology.

[...] The man page is dead for most users. The RFC is unread by most developers who depend on the protocols it describes. Stack Overflow, which used to be a genuinely valuable resource for understanding why things behaved certain ways, has become a paste-and-pray operation: scan for a code snippet that looks related to your problem, copy it, run it, hope it works. When it doesn't, find another snippet. The understanding never enters the loop. LLMs have accelerated this to a degree that should make anyone who cares about software quality genuinely alarmed. You can now write complete programs without understanding what a single line of them does, and the programs will often work well enough in the happy path that you'll never know how thoroughly you don't understand what you've built until something goes wrong in production at two in the morning and you are completely without tools to respond.

This is what the culture has normalized: outcomes without understanding, solutions without models. And the response when you point this out is "okay but who has time for that," as if understanding were a productivity cost rather than the entire point.

The problem is not, primarily, that services collect data. The problem is that users have been convinced to treat pervasive surveillance infrastructure as benign or beneficial, and to respond to any criticism of it as paranoia, technical elitism, or failure to appreciate convenience. The learned helplessness is the crisis. The data collection is the symptom.

[...] The algorithm situation is the one that most directly affects daily life and receives the least serious scrutiny. Every major platform uses recommendation systems that are, in the most literal sense, making decisions about what information you encounter. What news exists in your world. Which of your friends' thoughts reach you. Which ideas get surfaced and which get buried. These systems are explicitly not neutral — they're optimized for engagement, which empirically correlates with outrage, anxiety, conflict, and tribal reinforcement, because those emotional states produce the behavioral signals the engagement metrics reward. The platforms are making your information diet worse on purpose, because worse converts to engagement, and engagement converts to revenue.

[...] We're losing the ability to audit. A person who understands their tools can notice when those tools start behaving badly. They can run a packet capture with tcpdump or Wireshark and see what their phone is actually transmitting. They can look at what their DNS resolver is returning. They can read the permissions an app requests and reason about whether those permissions make sense for what the app claims to do. They can notice when an update changes behavior in ways that benefit the developer at the user's expense. Most people have none of these capabilities and depend entirely on external review — journalists, academic security researchers, occasionally regulators — which is slow, incomplete, paid for by advertising revenue from the same companies being reviewed, and easily captured. [...]
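
As one concrete, hedged example of the kind of audit described above (the tooling choice is mine, not the author's): the third-party dnspython package can compare what the system's configured resolver returns for a name against a well-known public resolver, which is a quick way to notice when DNS answers are being redirected or tampered with.

    # Minimal DNS audit sketch using the third-party dnspython package
    # (pip install dnspython). The domain and the public resolver (1.1.1.1)
    # are arbitrary choices for illustration.
    import dns.resolver

    domain = "example.com"

    system_resolver = dns.resolver.Resolver()             # uses the OS resolver config
    public_resolver = dns.resolver.Resolver(configure=False)
    public_resolver.nameservers = ["1.1.1.1"]

    system_answers = {r.address for r in system_resolver.resolve(domain, "A")}
    public_answers = {r.address for r in public_resolver.resolve(domain, "A")}

    print("system resolver:", sorted(system_answers))
    print("public resolver:", sorted(public_answers))

    if system_answers != public_answers:
        print("Answers differ -- could be CDN geo-routing, or something worth investigating.")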

We're losing resilience. Communities with high concentrations of technical competence can adapt when platforms change or die. They migrate. They self-host. They fork. When Google killed Reader, the technical community had self-hosted alternatives running within weeks. When Twitter's API became hostile to third-party clients, developers built ActivityPub implementations and federated alternatives. When a platform shifts its terms in ways that make it untenable, technically competent users can leave and rebuild elsewhere, carrying their data with them, because they understand their data as something they own rather than something that lives in the platform. Communities without those skills get stranded. [...]

We're losing the builder pipeline. This one compounds over time and the compounding is already visible. Power users become developers. Tinkerers become engineers. The kid who roots their Android phone and breaks it and fixes it and then writes a script to automate something the official interface doesn't support — that kid, ten years later, has intuitions about system behavior that you cannot get from a bootcamp and cannot get from building inside managed platforms your entire career. They know what it means when something is running slower than it should. They have hypotheses about failure modes before they start debugging because they've caused those failure modes themselves. They understand that abstractions are leaky and that the leak is usually where the interesting problems are.

Close off the tinkering and you close off the pipeline. What you get instead is a generation of developers who've only ever worked within platform constraints, who've never pushed against the edges of the abstractions they've been given, who treat framework behavior as ground truth rather than implementation detail. [...]

We're losing the adversarial capacity to hold platforms accountable. This is the one that matters most and gets talked about least. The open-source movement, the early security research community, the hacker culture in the original sense — these were not just about building things. They were a check on the power of institutions. [...]

[...] The industry isn't going to fix this. Every financial incentive points the other way. Confused, dependent users are more profitable than competent, autonomous ones. Lock-in is more valuable than interoperability. Opacity is more valuable than transparency. The architecture of modern consumer technology has been optimized against user competence with extraordinary success, and every quarterly earnings report validates the approach.

Regulators aren't going to fix it. They're fighting over app store fees while the underlying issue — the right of users to own and control the devices they've paid for — gets no serious legislative traction in most jurisdictions. The EU's Digital Markets Act has done some real work on interoperability requirements and is being fought by every affected platform with everything they have, because the platforms understand that the real threat is not the specific provisions but the principle that user autonomy is a value the law should protect.

Educators aren't going to fix it. Most digital literacy curricula teach application use. How to use Google Workspace. How to spot a phishing email. "Coding" in the form of block-based visual programming that produces no transferable understanding of how software actually works. The schools that teach real systems thinking, real network knowledge, real debugging skills — those schools cost money and are not where most people go.

The technical community is mostly not going to fix it either, because most of it has retreated into professional specialization and has largely given up on the broader project of maintaining technical literacy outside the profession. The open-source community does important work maintaining alternative infrastructure. It communicates almost entirely with itself.

So what's left is individual stubbornness. Which is not nothing. Organized individual stubbornness, pointed in the right direction, is how every important counter-cultural technical movement has worked.

Learn how your tools actually work. Not just how to operate them. Use the command line. Set up a home server and break it and fix it. Root a phone or, if you're on a platform where that's been made impossibly difficult, buy something where it isn't. Run a Linux install on bare metal and deal with the driver problems. Learn to read a network capture. Understand what your browser is sending with every request — the dev tools have been there the whole time. Host something yourself instead of using the managed service. Use open protocols where they exist: XMPP, ActivityPub, RSS, SMTP — these are old and unglamorous and they work and you own your data when you use them. Feed the federated alternatives even when they're worse than the centralized ones, because they're worse partly due to network effects and network effects respond to participation.
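
RSS in particular is simple enough that nothing needs to sit between you and the feed. As a small, hedged sketch (the feed URL is a placeholder, not anything the author names), Python's standard library alone can fetch and read one:

    # Minimal RSS 2.0 reader using only the standard library.
    # The feed URL is a placeholder; substitute any site's RSS feed.
    import urllib.request
    import xml.etree.ElementTree as ET

    FEED_URL = "https://example.com/feed.rss"

    with urllib.request.urlopen(FEED_URL) as response:
        tree = ET.parse(response)

    # RSS 2.0 nests <item> elements under <channel>.
    for item in tree.getroot().iter("item"):
        title = item.findtext("title", default="(no title)")
        link = item.findtext("link", default="")
        print(f"{title}\n  {link}")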

This is not about purity. Nobody is asking you to reject every managed service on principle or run Gentoo on everything. It's about maintaining enough technical competence that you are a participant in the systems you depend on rather than a permanent subject of them. It's about being able to make informed choices instead of having choices made for you by systems optimized for someone else's revenue.

The power user isn't dead. The skills exist. The communities exist — smaller, grayer, more scattered, fighting an institutional headwind that grows stronger every year. But they exist, and the knowledge is still propagating in the spaces the platforms haven't fully colonized.

The trajectory is bad. Every generation of new users arrives knowing less and expecting less. Every generation of new developers builds on more layers of managed abstraction and understands fewer of them. Every year it gets harder to explain why ownership matters, why understanding matters, why the convenience-for-control trade is a bad deal even when the convenience is genuinely excellent — because the people you're explaining it to have lived their entire lives inside the control and experienced it as freedom.

The obituary for the power user is being written right now. The people writing it are the same ones who sold you the phone, designed the app store, wrote the terms of service you didn't read, and built the algorithm that decided you didn't need to see this.

They are probably right about the timeline. They've been right about most things. The market has validated them at every step.

That is not an argument for giving up. It is an argument for being considerably angrier about it than most people currently are.

The full blog post is much longer and is a very interesting read.


Original Submission

posted by hubie on Tuesday March 10, @02:10PM   Printer-friendly

A fascinating report in New Scientist tells of common ancestry between Amazonian and Australasian peoples, possibly dating back more than 10,000 years. How could Australasian people have crossed the ocean to arrive at the Amazon?

The genomes of 15 ancient Americans, including six that are more than 10,000 years old, have been sequenced. The results reveal how people first spread through the Americas – and also throw up a major mystery.

The big picture is clear. Around 25,000 years ago during the last ice age, the ancestors of modern native Americans moved across the Beringian land bridge into what is now Alaska. They remained there for millennia because the way south was blocked by ice. Once a path opened up, groups of hunter-gatherers moved south very quickly.

[...]

Southern native Americans split from northern ones around 16,000 years ago, the results suggest, and reached South America not long afterwards.

The genomes reveal many more details about this process. For instance, it appears some previously unknown group split away from northern native Americans at some point and then moved into South America around 8000 years ago, long after the initial migration.

But the study also adds to a big mystery: some groups in the Amazon are somewhat more closely related to the Australasians of Australia and Papua New Guinea than other native Americans are. The genomes show this "Australasian signal" is more than 10,000 years old. So where did it come from?

If another group of people more closely related to Australasians crossed the Beringian land bridge at some point and moved down to the Amazon, why is there no trace of them in North America? And in the exceedingly unlikely event they somehow managed to cross the vast Pacific long before the Polynesians, how did they end up in the Amazon, on the other side of the Andes?

Based on the shape of their skulls, it has also been claimed that many ancient humans found in the Americas cannot be the ancestors of present-day native Americans and instead belonged to a distinct group dubbed the "Paleoamericans". "But we see again that they are most closely related to present-day native Americans," says Moreno-Mayar.

This finding has led to the remains of one of the early humans, the 10,000-year-old Spirit Cave mummy, being returned to the Fallon Paiute-Shoshone Tribe after a long legal battle.

In 2015, Moreno-Mayar's team showed that another supposed "Paleoamerican", called Kennewick Man, was closely related to present-day native Americans.

Also at Smithsonian magazine


Original Submission

posted by hubie on Tuesday March 10, @09:29AM   Printer-friendly

https://buttondown.com/suchbadtechads/archive/maxell-life-size-robots/

The idea of robots literally eating your precious and portable files must have been far more terrifying than it was exciting that Maxell's 5.25" disks were on some Michelin-rated menu of computer hardware.

That could be oil in their glasses but it sure looks like white wine. And what, they're going to season their floppy appetizer with table salt? Pick a lane, Maxell!

The ad above was a massive departure from Maxell's previous "Gold Standard" campaigns, those with their rainbow prisms and racecar disks. The restaurant ad seems like it had a lot more money behind it too, showing up in several issues of PC Mag, Personal Computer, and Byte throughout 1985 and 1986. It is not hard to find online or in print, whether on eBay, WorthPoint, or in a frame at a Value Village in Ottawa.

Despite its enduring popularity, this was actually the worst showing of what would go on to be a campaign so good that it wound up in a museum. Because, yes, Maxell's dollar-store C-3PO was, in fact, a life-size prop. And far from lonely.


Original Submission

posted by hubie on Tuesday March 10, @04:47AM   Printer-friendly
from the billion-dollar-questions dept.

"It has seemed to me for a long time it might be better if building AGI were a government project," Sam Altman publicly mused last week... Altman speculated on the possibility of the government "nationalizing" private AI companies into a public project, admitting more than once that he's wondered what would happen next. "I obviously don't know," Altman said — but he added, "I have thought about it, of course." Altman hedged that "It doesn't seem super likely on the current trajectory. That said, I do think a close partnership between governments and the companies building this technology is super important."

Could powerful AI tools one day slip from the hands of private companies to be controlled by the U.S. government? Fortune magazine's AI editor points out that "many other breakthroughs with big strategic implications — from the Manhattan Project to the space race to early efforts to develop AI — were government-funded and largely government-directed." And Fortune added that last week the Defense Department threatened Anthropic with the Defense Production Act, which allows the president to designate "critical and strategic" goods for which businesses must accept the government's contracts. Fortune speculates this would've been "a sort of soft nationalization of Anthropic's production pipeline".

Altman acknowledged Saturday that he'd felt the threat of attempted nationalization "behind a lot of the questions" he'd received when answering questions on X.com... How exactly will this AI build-out be handled — and how should AI companies be working with the government? In a sprawling ask-me-anything session on X that included other members of OpenAI leadership, one Missouri-based developer broached an AGI-government scenario with OpenAI's Head of National Security Partnerships, Katherine Mulligan. If OpenAI built an AGI — something that even passed its own Turing test for AGI — would that be a case where its government contracts compelled them to grant access to the DoD?

"No," Mulligan answered. At our current moment in time, "We control which models we deploy."


Original Submission

posted by hubie on Tuesday March 10, @12:02AM   Printer-friendly
from the and-then-there-was-one dept.

FCC rejects protests because Charter and Cox don't compete directly in most places:

Charter Communications, operator of the Spectrum cable brand, has obtained Federal Communications Commission permission to buy Cox and surpass Comcast as the country's largest home Internet service provider.

Charter has 29.7 million residential and business Internet customers compared to Comcast's 31.26 million. Buying Cox will give Charter another 5.9 million Internet customers. The FCC approved the deal on Friday, but the companies still need Justice Department approval and sign-offs from states including California and New York.

Opponents of Charter's $34.5 billion acquisition told the FCC that eliminating Cox as an independent entity will make it easier for Charter and Comcast to raise prices. But the FCC dismissed those concerns on the grounds that Charter and Cox don't compete directly against each other in the vast majority of their territories.

FCC Chairman Brendan Carr's primary demand from companies seeking to merge has been to eliminate diversity, equity, and inclusion (DEI) programs and policies. In a press release, the Carr-led FCC said that "Charter has committed to new safeguards to protect against DEI discrimination," and that Charter's network-expansion plans will bring "faster broadband and lower prices" to rural areas.

The merger was approved one day after Charter sent a letter to Carr outlining its actions to end DEI. Charter offers broadband and cable service in 41 states, while Cox does so in 18 states.

The FCC's Charter/Cox decision dismissed competition concerns raised in a November 2025 petition to deny filed by Public Knowledge, the Communications Workers of America, the Benton Institute for Broadband & Society, and the Center for Accessible Technology. The FCC said:

Petitioners argue that the Transaction would reduce the number of cable operators, making it easier for competitors, such as Comcast, to "benchmark" their pricing, promotions, bundling, and rate schedules to New Charter. Specifically, they argue that "[r]educing the number of major cable operators makes it easier for each to benchmark pricing decisions against others, reducing competitive pressure across the industry."

Citing the literature on multimarket contact, they further argue that "the merger could transform the competitive landscape such that New Charter becomes the benchmark for Comcast,... thereby enabling parallel behavior." We find this argument unpersuasive. First, there is very little multimarket contact in this case. Because cable companies have generally offered residential broadband service within their non-overlapping franchise territories, they compete directly against each other only at a very small number of locations.

The FCC added that Charter and other cable firms will continue to face competition from fiber, fixed wireless, and satellite broadband providers. Competition from those sectors "will have a significantly greater impact on their pricing decisions than the possible increased ability to benchmark due to the loss of a single cable provider (Cox) in a different territory," the FCC said.

The petition to deny the merger said it "would reduce the number of sizable independent cable operators" that compete against Comcast and other cable firms. "With fewer independent peers, Comcast could rely more on parallel conduct rather than competitive differentiation, especially in non-overlapping territories," the petition said. "The consolidation of pricing benchmarks makes parallel moves (rate increases, reduced promotional discounts) more feasible, simplifying rivals' strategic comparisons and promoting conscious parallelism."

The petition cited research suggesting that in the US airline industry, some "mergers increased fares not only on overlap routes but also on non-overlap routes."

[...] Public Knowledge Legal Director John Bergmayer said that the Carr FCC "did not require Charter to do anything it wasn't already planning to do." He said this is in stark contrast to the FCC's 2016 approval of Charter's merger with Time Warner Cable, which allowed Charter to become the second biggest cable company in the US.

"In 2016, the commission approved Charter's acquisition of Time Warner Cable only after imposing conditions on data caps, usage-based pricing, and paid interconnection," Bergmayer said on Friday. "Today's order finds those concerns no longer apply, largely because the agency credits fixed wireless and satellite as competitive constraints on cable. Further, the Commission imposed no affordability conditions, despite doing so in the 2016 Charter, Comcast-NBCU, and Verizon-TracFone transactions. The record does not support this outcome."


Original Submission

posted by hubie on Monday March 09, @07:20PM   Printer-friendly

Built on open-source software, this European cloud office suite aims to keep your data out of Microsoft 365 and Google Workspace:

Digital sovereignty in Europe is taking another step forward. Office.eu has officially launched in The Hague. This new cloud service is positioning itself as a fully European, open‑source‑based alternative to Microsoft 365 and Google Workspace. The service promises digital sovereignty, strict compliance with European Union (EU) law, and a familiar cloud‑office experience for organizations wary of US platforms.

The new service is operated entirely by European owners and runs solely on EU-based infrastructure and data centers. This design, the company argues, keeps customer data "under European jurisdiction" and insulated from foreign legal regimes, such as the US CLOUD Act. By tying its technical and corporate structure to European territory, the company is directly tapping into long‑running concerns among EU policymakers and public bodies about dependence on US cloud giants for everyday productivity tools.

In a statement, Maarten Roelfs, CEO of Office EU, made this position clear: "We have seen more and more how essential it is to become cloud-independent and to rely on software that is built around European values. For many years, Europe has relied on American software and, therefore, created a certain risk of dependency. We have also given away control over our own data. Office.eu proves that we now have a strong European alternative, with sovereignty, privacy, and transparency at its core."

Roelfs isn't trying to convince people to change. With the change in government in the US, many EU governments and agencies are dumping American-based cloud services as fast as they can. This movement includes France, which is dumping Microsoft Teams and Zoom, the Austrian military, the German state of Schleswig-Holstein, Danish government organizations, and the French city of Lyon. These governments and agencies are dropping Microsoft programs in favor of homegrown European alternatives.

Built primarily on the EU-based, open-source Nextcloud Hub, Office.eu bundles file storage and sharing, email, calendar, online document editing, and chat plus video calls into a single, browser‑based platform. The service deliberately mimics the look and feel of Microsoft 365 and Google Workspace to ease migration.

Office.eu suggests most migrations will be fast and easy because core components rely on standard formats and protocols, such as IMAP for email and CalDAV for calendars. For documents, Office EU supports common Microsoft Office formats such as DOCX, XLSX, and PPTX. Office.eu will provide migration tools, though it hasn't said what these tools will be.
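
It is the reliance on open protocols that makes such migrations scriptable at all. As a rough, hypothetical sketch (the host, credentials, and approach are placeholders of mine, not Office.eu's actual migration tooling), Python's built-in imaplib can walk the folders of any standards-compliant IMAP mailbox, which is the starting point for moving mail between providers:

    # Hypothetical sketch: enumerating mail folders over IMAP, the kind of
    # standards-based access that makes provider-to-provider migration possible.
    # Host, user, and password are placeholders, not real Office.eu endpoints.
    import imaplib

    HOST = "imap.example.eu"
    USER = "someone@example.eu"
    PASSWORD = "app-specific-password"

    with imaplib.IMAP4_SSL(HOST) as mailbox:
        mailbox.login(USER, PASSWORD)
        status, folders = mailbox.list()
        if status == "OK":
            for raw in folders:
                print(raw.decode())
        # Copying messages would use select() and fetch() here, then append()
        # against the destination server -- the same protocol on both ends.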

The company also provides desktop sync clients for Windows, MacOS, and Linux, as well as mobile apps. You can also use the web interface if you don't want to install anything.

However, Office.eu recognizes its service isn't right for everyone.  Microsoft 365 is a strong choice when you want the widest feature set and the most familiar experience, especially if your team already lives inside Outlook, Teams, and Microsoft identity.

Office EU is the better choice when you want a Europe-hosted workspace by default, a more transparent foundation, and a simpler place for daily work. For many teams, that makes it the best alternative to Office 365, not because it tries to copy every Microsoft feature, but because it reduces complexity and gives you a clearer sense of control over where your data lives, who can access it, and how dependent you are on decisions made outside your organisation.

Still, for many Europeans, Office EU will prove an excellent choice. If privacy and control are important to you, Office EU deserves your attention. 


Original Submission

posted by hubie on Monday March 09, @02:35PM   Printer-friendly
from the fuel-the-standard-vs-daylight-saving-fires dept.

March and April are the time of year when a decent fraction of the world shifts its clocks forward (or back, in the Southern Hemisphere) for Daylight Saving Time (DST). Every year, it seems to result in debate about whether to abolish DST, and, if so, whether to stick with standard time or daylight time.

Soylent News, being a science/fact-oriented site, would likely be interested in a comparison of time zones with Mean Solar Time (MST). There is a map showing the difference between the two in the Wikipedia article on time zones. The person who created that map has some short-yet-interesting articles on creating that map and later discussion about it. The articles are old (timeless?), but largely still relevant, as the time zones, and the existence of DST, are largely unchanged since the articles were written.

Interesting how standard time, over most of the landmass of the world, is largely ahead of MST, in some places (e.g. western China) by a lot. DST, where observed, makes that difference worse.


Original Submission

posted by jelizondo on Monday March 09, @09:52AM   Printer-friendly
from the now-you-see-now-you-don't dept.

Claude Code deletes developers' production setup, including its database and snapshots — 2.5 years of records were nuked in an instant

Story has a happy ending of sorts, but should serve as a cautionary tale.

Everyone loves a good story about agent bots gone wrong, and those often come with a bit of schadenfreude towards our virtual companions. Sometimes, though, the errors can be attributed to improper supervision, as was the case of Alexey Grigorev, who was brave enough to detail how he got Claude Code to wipe years' worth of records on a website, including the recovery snapshots.

The story begins when Grigorev wanted to move his website, AI Shipping Labs, to AWS and have it share the same infrastructure as DataTalks.Club. Claude itself advised against that option, but Grigorev decided it wasn't worth the hassle or cost of keeping two separate setups.

Grigorev uses Terraform, an infrastructure management utility that can create (or destroy) entire setups, including networks, load balancing, databases, and, naturally, the servers themselves. He had Claude run a Terraform plan to set up the new website, but forgot to upload a vital state file that contains a full description of the setup as it exists at any moment in time.
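
The state file is what lets Terraform distinguish "already exists" from "needs to be created", so a run without it can happily propose destroying or recreating live resources. As a purely illustrative guard (my own sketch, not anything from the article or from Anthropic's tooling), a wrapper can refuse to apply changes when no local state is visible:

    # Illustrative guard: refuse to run "terraform apply" when no local
    # terraform.tfstate is present, as a crude check that state wasn't forgotten.
    # Setups using remote backends would need to inspect the backend instead.
    import json
    import subprocess
    import sys
    from pathlib import Path

    state = Path("terraform.tfstate")

    if not state.exists():
        sys.exit("No local state file found -- refusing to apply. "
                 "If you use a remote backend, verify it is configured first.")

    resources = json.loads(state.read_text()).get("resources", [])
    print(f"State file tracks {len(resources)} resource entries; proceeding.")

    subprocess.run(["terraform", "apply"], check=True)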

[Source]: Tom's Hardware

Have any of you been in a similar situation? And, if so, how did you recover your data?


Original Submission

posted by jelizondo on Monday March 09, @05:10AM   Printer-friendly

Free beer is great. Securing the keg costs money:

Open source registries are in financial peril, a co-founder of an open source security foundation warned after inspecting their books. And it's not just the bandwidth costs that are killing them.

"The problem is they don't have enough money to spend on the very security features that we all desperately need to stop being a bunch of idiots and installing fu when it's malware," said Michael Winser, a co-founder of Alpha-Omega, a Linux Foundation project to help secure the open source supply chain.

Winser spoke at FOSDEM this year, in a talk we dropped in on virtually.

Trusted registries are widely treated as a key component of Software Bill of Materials (SBOM)-driven supply chain security efforts, one of the main approaches promoted for securing open source software. Rule one: Get your open source packages from a trusted source.

Yet many of these registries operate on razor-thin margins, relying on non-continuous funding from grants, donations, and in-kind resources.

Google and Microsoft kicked in an initial $5 million to launch Alpha-Omega in 2022 under the Open Source Security Foundation.

And the first thing Winser noticed when he ramped up operations was that open source registries are all dirt poor. All the major registries are facing the same issue: They're experiencing exponential growth, even though their investment in infrastructure and people remains flat.

"We're living on borrowed time," he warned.

"One of the problems that people have is they actually conflate open source software and open source infrastructure," Winser said.

Open source software itself is free to use, and its costs don't increase the more people use it. The costs of registries to hold all open source applications and libraries, however, do indeed keep increasing with greater usage.

Packages don't go away. Collections just grow larger and larger. And AI is now adding to the pile at a considerable clip.

[...] In a follow-up LinkedIn exchange after this article had posted, Winser estimated it could cost $5 million to $8 million a year to run a major registry the size of Crates.io, which gets about 125 billion downloads a year. And this number wouldn't include any substantial bandwidth and infrastructure donations (Like Fastly's for Crates.io).
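
A quick back-of-the-envelope check (mine, not Winser's) puts those figures in perspective: even the high end of the estimate works out to a few thousandths of a cent per download, exactly the kind of cost that looks negligible individually and crushing in aggregate.

    # Back-of-the-envelope check on the figures quoted above.
    downloads_per_year = 125e9                      # ~125 billion downloads/year
    for annual_cost in (5e6, 8e6):                  # $5M-$8M/year estimate
        per_download = annual_cost / downloads_per_year
        print(f"${annual_cost / 1e6:.0f}M/year -> ${per_download:.6f} "
              f"({per_download * 100:.4f} cents) per download")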

Adding to that bill is the growing cost of identifying malware, the proliferation of which has been amplified through the use of AI and scripts. These repositories have detected 845,000 malware packages from 2019 to January 2025 (the vast majority of those nasty packages came to npm).

[...] The good news may be that "Registries are effective monopolies. They own the name space," as Winser put it.

But as monopolies, their hold is tenuous at best, because "the cost of spinning up an alternative, crappy registry, is effectively zero," he added.

Winser went through the various ways of covering expenses, though none, he calculated, could fully defray expenses.

[...] Yet the costs Winser was most concerned about are not bandwidth or hosting; they are the security features needed to ensure the integrity of containers and packages.

Alpha-Omega underwrites a "distressingly" large amount of security work around registries, he said. It's distressing because if Alpha-Omega itself were to miss a funding round, a lot of registries would be screwed.

[...] Winser did not offer a solution, though he suggested the key is to convince the corporate bean counters to consider paid registries as "a normal cost of doing business and have it show up in their opex as opposed to their [open source program office] donation budget."

[...] Money is a rarely discussed aspect of open source. The software is just supposed to be like free beer, right?

Hospitals, universities, and museums are all nonprofits, yet they still charge for services. In fact it is good practice; otherwise people will abuse the system. But in open source, the idea of payment remains taboo.

Open source may indeed be like free beer, but no one enjoys their frothy lager served chock full of parasites and bacteria. So maybe we all should get used to ponying up at the bar.


Original Submission

posted by jelizondo on Monday March 09, @12:24AM   Printer-friendly
from the big-brother dept.

Uproar About OS-level Age Verification Laws

Hackaday reports that unnoticed by many, several jurisdictions, including California and Brazil, have passed age verification laws that require operating system providers to keep age records of users. The uproar has now also spread among many FOSS-covering creators.

The wording of the California law is vague, and the inevitable interpretation by courts might have the outcome of a mandatory cloud account connection for every computer use ("An operating system provider shall ... provide ... with respect to a particular user ... a digital signal"). It is unclear how server computing and community based distros could deal with this.

It appears that the large corporate distributions are willing to cave in, but it is entirely unclear, and has not even been touched on amid all the uproar, how grassroots distributions like Debian, with their many mirrored repositories and no central user database, will be affected.

System76 on Age Verification Laws

Access is everything:

[...] Colorado's Senate Bill 26-051 and California's Assembly Bill No. 1043 require operating systems to report age brackets to app stores and web sites. A person who creates an account on a computer is supposed to be 18 or older and attest to the age of the user they're creating for themselves or their child. In practice, this means anyone under 18 isn't supposed to create a computer account on their own.

Most System76 employees installed operating systems and created accounts on their computer when they were under 18. They did this out of curiosity. Many started writing software. Some were already writing operating systems. I'm sure the story is similar at most tech companies. Limiting a child's ability to explore what they can do with a computer limits their future. Removing user limitations to the computer (proprietary software, locked-down platforms like Android and iOS) is why System76 exists.

If there is any solace in these two laws, it's that they don't have any real restrictions. There is no actual age verification. Whoever installed the operating system or created the account simply says what age they are. They can lie. They will lie. They're being encouraged to lie for fear of being restricted to a nerfed internet.

[...] It can get worse. New York's proposed Senate Bill S8102A requires adults to prove they're adults to use a computer, exercise bike, smart watch, or car if the device is internet enabled with app ecosystems. The bill explicitly forbids self-reporting and leaves the allowed methods to regulations written by the Attorney General. Practical methods for a bill of such extreme breadth would require, in many instances, providing private information to a third-party just to use a computer at all. Privacy disappears.

In a bizarre twist, under its current wording, a Linux distribution downloaded from the internet could technically make the downloader the "device manufacturer". They are the entity responsible for providing a freely distributed operating system to the device. In practice, this type of language is rarely enforced. Nonetheless, it highlights how laws written for centralized platforms like iOS and Android struggle to define who is responsible in open computing ecosystems where anyone can install or distribute the operating system.

A centralized platform designed to control the activity of the user creates the environment where the centralized platform provider can themselves then be controlled by higher powers. Decentralized platforms and app stores, like Linux, are essential to the personal liberty of adults and children.

This extends to the potential of humanity itself. The computer is the most powerful and versatile technology we've ever created. It is a foundational technology that affects the progress of all other innovations. A platform that controls the user's activity, and can itself be controlled, limits the user's ability to contribute to our shared future. Many of the world's best programmers started experimenting with computers as children.

In the case of Colorado's and California's bills, effectiveness is lost. In the case of New York's bill, liberty is lost. In the case of centralized platforms, potential is lost.

[...] The challenges we face are neither technical nor legal. The only solution is to educate our children about life with digital abundance. Throwing them into the deep end when they're 16 or 18 is too late. It's a wonderful and weird world. Yes, there are dark corners. There always will be. We have to teach our children what to do when they encounter them and we have to trust them.

Ubuntu Looking at How to Implement Age Verification Law Compliance

[...] Recently, a new law was passed in California that requires OS vendors to provide some limited info about a user's age via an API that application distribution websites and application stores can use. [1] Colorado seems to be working on a similar law. [2] The law will go into effect January 1, 2027, it is no longer a draft. I do quite a bit of work with an OS vendor (working with the Kicksecure [3] and Whonix [4] projects), and we aren't particularly interested in blocking everyone in California and Colorado from using our OSes, so we're currently looking into how to implement an API that will comply with the laws while also not being a privacy disaster. Given that other distributions are also investigating what to do with this, and the law requires us to make a "good faith effort to comply with [the] title, taking into consideration available technology", I figured it would be a good idea to bring the issue here.

At its core, the law seems to require that an "operating system" (I'm guessing this would correspond to a Linux distribution, not an OS kernel or userland) request the user's age or date of birth at "account setup". The OS is also expected to allow users to set the user's age if they didn't already provide it (because the OS was installed before the law went into effect), and it needs to provide an API somewhere so that app stores and application distribution websites can ask the OS "what age bracket does this user fall into?" Four age brackets are defined: under 13, at least 13 and under 16, at least 16 and under 18, and 18 or older. It looks like the API also needs to not provide more information than just the age bracket data. A bunch of stuff is left unclear (how to handle servers and other CLI-only installs, how to handle VMs, whether the law is even applicable if the primary user is over 18 since the law ridiculously defines a user as "a child" while also defining "a child" as anyone under the age of 18, etc.), but that's what we're given to deal with.

The most intuitive place to put this functionality would be, IMO, AccountsService. The main issue with that is that stable-release distributions, and distributions based upon them, would be faced with the issue of how to get an updated version of AccountsService integrated into their software repositories, or how to backport the appropriate code. The law goes into effect on January 1, 2027, Debian Bookworm is going to be supported by ELTS until July 30, 2033, and we don't yet know if Debian will care enough about California's laws to want to backport a new feature in AccountsService into Debian Bookworm (or even Trixie). Distributions based on Debian (such as Kicksecure and Whonix) may still want to comply with the law though, so something using AccountsService-specific APIs would be frustrating. Requiring a whole separate daemon for the foreseeable future just for an age verification API would also be annoying.

Another place the functionality could go is xdg-desktop-portal. This one is a bit non-ideal for a couple of reasons; for one, the easiest place to put the call would be in the Account portal, which returns more information than the account's age bracket. This could potentially be considered non-compliant with the law, as it states that the operating system shall "[s]end only the minimum amount of information necessary to comply with this title". This also comes with the backporting disadvantages of an AccountsService-based implementation.

For this reason, I'd like to propose a "hybrid" approach; introduce a new standard D-Bus interface, `org.freedesktop.AgeVerification1`, that can be implemented by arbitrary applications as a distro sees fit. AccountsService could implement this API so that newer versions of distros will get the relevant features for free, while distros with an AccountsService too old to contain the feature can implement it themselves as a stop-gap solution.
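
To make the shape of that proposal a little more concrete, here is a rough sketch (assumptions of mine, not part of the actual proposal) of what a stand-alone service exporting such an interface might look like using the pydbus bindings; the method name GetAgeBracket and the returned bracket string are invented for illustration, and a real implementation would look up the calling user's stored bracket rather than hard-code one.

    # Rough sketch of a hypothetical org.freedesktop.AgeVerification1 service
    # using pydbus. Method name and bracket string are invented for illustration.
    from pydbus import SessionBus
    from gi.repository import GLib

    class AgeVerification:
        """
        <node>
          <interface name='org.freedesktop.AgeVerification1'>
            <method name='GetAgeBracket'>
              <arg type='s' name='bracket' direction='out'/>
            </method>
          </interface>
        </node>
        """

        def GetAgeBracket(self):
            # A real implementation would look up the requesting user's stored
            # age bracket (e.g. via AccountsService) and return only that value.
            return "18+"

    bus = SessionBus()
    bus.publish("org.freedesktop.AgeVerification1", AgeVerification())
    GLib.MainLoop().run()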


Original Submission

posted by jelizondo on Sunday March 08, @07:41PM   Printer-friendly

https://www.siliconrepublic.com/careers/employer-education-experience-ai-expert-leadership-skills-aon

Aon's Joseph Holland discusses how taking the route less travelled can lead you towards the career you were meant to have.

“I wanted to be an architect”, explains Joseph Holland, director of digital foundations, AI platforms and developer experience at Aon. That was the plan; however, having completed the Leaving Cert, he found he didn’t have the required CAO points and “suddenly didn’t have a plan anymore”.

“I’d always been into computers and technology though. Even while I was unemployed I was refurbishing old PCs and selling them on. So when a FÁS caseworker mentioned Fastrack into Information Technology (FIT), it caught my attention immediately.” 

He was accepted onto the programme and emerged with a QQI-FET level six Advanced Certificate in IT Specific Support and a one-year contract at Kepak Group that soon became permanent. 

From there he moved on to Version1 and then Aon, where having spotted a gap whereby there was no developer experience function, he made the case for building one and today is leading the AI platform and developer experience. Along the way he also enrolled at Trinity College Dublin, as a mature student, where he completed his information systems degree. 

All that is to say that often, despite having a plan, you don’t always end up going in the direction you thought you would. Professionally, it can take time and research to figure out the best course of action.  

“I’m glad I did it,” says Holland, “I picked up useful skills around project management, systems analysis and understanding how technology fits into broader business strategy. But honestly, the experience and track record I’d already built mattered more to every employer than the piece of paper.”

Access to less typical educational and upskilling opportunities is, for Holland, “everything”, as he explains without FIT he likely would have chosen to retake the Leaving Cert, pointing his career in a different trajectory. 

He notes, “The traditional system had written me off based on a set of exam results. FIT looked at me differently. What makes programmes like FIT work is the direct connection to industry. You’re not studying theory in isolation. You’re learning skills that employers actually need and you’re getting placed in real workplaces where you can prove yourself.”

Apprenticeships, he finds, have the power to break down the biggest barriers for young people struggling to get their foot in the door when they don’t have a degree on their CV.

He says, “The tech industry moves fast and it doesn’t particularly care where your qualification came from. It cares whether you can solve problems and keep learning. Alternative pathways are often better at developing those qualities than four years of lectures.”

And part of creating opportunities for young people, he explains, is breaking down harmful myths about alternative educational routes as a vehicle towards a tech-based career.

“The biggest myth is that they are second-best. That if you were good enough, you’d have gone to university. University education has real value and I’m not knocking it. But I’ve worked with people from every educational background over the past 20 years and the route someone took tells you very little about how good they are at their job.” 

What matters, he finds, is what the individual has done with their time since. Another pervasive falsehood is that there is a ceiling that you will eventually hit. Holland explains that there is a belief that while you can access an entry-level role through an apprenticeship, once you start looking for a more senior position, you will run into roadblocks. 

“I’m a director at a Fortune 500 company. I got my degree years into my career, not before it. The ceiling is artificial and it’s maintained by hiring practices, not by any real limitation in what people from alternative routes can achieve.”

Lastly, he finds that there is also a misconception that alternative routes only lead to technical roles. In Holland’s experience, the skills developed through programmes such as FIT go far beyond coding or networking. 

“My own career moved from hands-on infrastructure work to leading enterprise AI strategy and building a new business function. Technology careers are built on continuous learning and the starting point matters far less than people think.”

To that point, Holland urges employers to take a serious look at how tech apprenticeships in particular can create a sturdy talent pipeline, noting many of the skills they come to appreciate, such as curiosity, strong work ethic and a willingness to learn never require a degree. 

And to any young person who didn’t get the number of points they needed, or who is sitting in a classroom querying if they are on the right path or if there are indeed alternatives, he wants them to know that there are and he has been there too. 

He says, “The education system measures one very narrow type of ability at one very specific moment in your life. It doesn’t define you and it definitely doesn’t predict where you’ll end up. I went from an unemployed school leaver to directing AI platforms at a Fortune 500 while running an animal sanctuary and a music tech start-up. 

“Life is broader and stranger and more interesting than any career guidance session will tell you. Programmes like FIT exist because the tech industry needs people who think differently and aren’t afraid to figure things out on the fly. If that sounds like you, there’s a path waiting. You just need to know it’s there.”


Original Submission

posted by janrinok on Sunday March 08, @02:57PM   Printer-friendly

https://arstechnica.com/tech-policy/2026/03/tech-industry-is-in-tariff-hell-even-if-refunds-are-automated/

It's been two weeks since the Supreme Court blocked Donald Trump's emergency tariffs, but an estimated 300,000 US businesses still have no idea if or when they will receive refunds.

Economists have estimated that more than $175 billion was unlawfully collected, and the US could end up owing substantially more than that the longer that the refund process is dragged out, since the US must pay back daily interest on the funds. According to the Cato Institute, a libertarian think tank, a conservative estimate showed that "$700 million in interest is added to the final bill every month that the government delays tariff refunds, or around $23 million per day."

The US is aware that interest is compounding daily on tariffs, as the Trump administration argued against an injunction that would've temporarily blocked the tariffs much sooner by noting that no one would be harmed, since tariffs would be repaid with interest if deemed unlawful. However, now that the court has ruled against tariffs, the Trump administration seems to be dragging its feet in finding a way to return all the ill-gotten funds.

Ed Brzytwa, vice president of international trade for the Consumer Technology Association (CTA), told Ars that delays seem counter to US interests at this point.

"The government should have an intrinsic interest in providing these new funds as fast as possible, so they don't owe more interest over time," Brzytwa said. Providing refunds sooner, he suggested, would not just benefit companies, but "to their employees, to the US economy, to US consumers, all the above."

For the tech industry, many popular products have been spared hundreds of billions in tariffs since Trump took office, but, as the CTA documented in repeated court filings, many more products were hit by them. Ahead of midterms, when analysts predict that tariff whipsawing might slow down, tech firms remain uncertain about when to expect refunds, experts told Ars. At a time when firms already feel overwhelmed, they're also navigating new tariffs that are raising new legal challenges, while risking further supply chain strain as the threat of tariff stacking looms.

Pressure on Trump to deliver refunds faster increased on Wednesday, after US Court of International Trade judge Richard Eaton ordered universal refunds for all importers who paid Trump's emergency tariffs. At a hearing that day, Eaton noted that Customs knows how to issue refunds, later ordering that all claims be efficiently resolved, CNBC reported.

Officials from Customs and Border Protection (CBP) are expected to share an update on their proposed refund plans at a hearing Friday in that case, raised by Atmus Filtration, which reportedly paid about $11 million in unlawful tariffs.

In the meantime, the CTA and the Chamber of Commerce (CoC) filed a motion [PDF] to submit a proposed brief in another tariffs lawsuit outlining what the trade groups believe is the best strategy for handling refunds.

That lawsuit, raised by V.O.S. Selections, is being overseen by a different Court of International Trade judge, Gary Katzmann. The groups are hoping that he may agree with Eaton, who noted at the Wednesday hearing that "the agency should be able to program its system to issue refunds," CNBC reported. The trade groups' proposed brief emphasized that "in fact, CBP has already issued refunds for some of those tariffs because they were retroactively reduced by a subsequent trade agreement."

According to the trade groups, the US government has the technology to streamline—and possibly even automate—tariff refunds.

"They have the technology to do it," Brzytwa said. "They offer refunds to importers all the time."

But apparently, the Trump administration so far lacks the will to use it, instead planning to wait for court direction before taking any steps to send the funds back. So now the court must intervene to draft a blueprint that all businesses can use to secure a quick and easy refund, the groups said.

"There is no question that American businesses are now entitled to the return of the billions of dollars they were forced to pay under these unlawful tariffs," the groups wrote. "The law is clear on that point, and the government has repeatedly stated that it would issue refunds if the tariffs were ultimately deemed invalid."

If the court requires each business to either litigate their claims or go through "impractical" CBP administrative procedures to request refunds, either the courts or CBP will be overwhelmed, the groups argued. Dealing with the backlog could drag out refunds for years, while the interest accrues and the most vulnerable businesses risk being forced to shut down, they argued.

For many small firms with tight profit margins, the emergency tariffs "have already stretched their resources to the breaking point," the groups wrote.

"Those are the types of companies that need to be prioritized in a refund plan," Brzytwa said. He suggested the court should require officials to take steps "to help the companies that barely are making it at this point because they paid such steep amounts in tariffs."

Perhaps even more concerning to the court, some claims may simply be abandoned by firms that weigh the costs of a lengthy legal battle with the government against likely much smaller tariff refunds. That would, troublingly, leave taxes collected unlawfully under the International Emergency Economic Powers Act (IEEPA) in the Trump administration's hands, the groups warned.

"There is no need to individually litigate whether particular IEEPA duties were valid—they are all invalid," the groups wrote. Instead, groups urged the court to "craft an injunction facilitating a streamlined administrative process for plaintiffs in this case to use in obtaining their refunds." That same process could become "a blueprint for other importers to secure refunds," they suggested.

A "commonsense" court-ordered solution to streamline refunds could be created easily, the groups proposed.

"Because the government has tracked the payment of IEEPA tariff duties, it knows who paid them and in what amounts, even without refund-seeking submissions from the affected importers," the groups said. Later on, they added, "this efficiency is important not only to reduce strain on courts and the government, but to ensure that refunds issue on a defined and predictable timeline. Delay should not become a de facto denial of recovery for importers who paid unlawful tariffs and wish to seek appropriate relief."

Dallas Dolen—a technology, media, and telecommunications leader for PwC, a leading global professional services network that advises big firms on tax questions—told Ars that he's also worried that tariff refund fights will drag on for years without a court-ordered pathway to expedite them.

Until courts clarify how the refund process will work, he said that PwC continues to advise companies to "be really organized, be really prepared." Every impacted business should assess now what tariff refunds it expects to be owed, and possibly hire staff, to ensure it is prepared to secure a refund when processes are created, PwC advised. That level of preparedness may be critical, since "it's unlikely the government will write them two checks," Dolen said.

Dolen suggested that consumer technology might be the sector of the tech industry most hurt by tariffs, and even if refunds are automated, alternative tariffs that Trump is threatening to impose could change the calculus on refunds.

According to Dolen, some businesses required to pay new tariffs under Section 122 of the Trade Act of 1974 may instead get a gross refund, possibly subtracting Trump's latest 10 percent global tariffs from the total of IEEPA tariffs owed.

Perhaps complicating the math further, those new tariffs could increase before refunds are issued. Just yesterday, Treasury Secretary Scott Bessent said that Section 122 tariffs could be raised by another 15 percent this week, The New York Times reported. And over the next five months, the tech industry could be paying tariffs at the same levels as under Trump's IEEPA tariffs, Bessent has claimed.

However, Trump's tariffs remain hugely unpopular, even with Republicans. Both experts agreed that Trump will likely be more thoughtful about tariffs ahead of the midterms. And since he's unlikely to get much support from Congress members focused on reelection, any changes will likely come by executive order. Dolen suggested that Trump's concerns about inflation from tariffs may make him less willing to impose them.

Brzytwa told Ars that the CTA is also hoping that the back-to-back court rulings might push Trump to rethink his aggressive tariff strategy—especially given that his goals of increasing US manufacturing are not being achieved by them.

"This is a golden opportunity for them to reassess on whether they want to impose more tariffs, because if you impose more tariffs, you create more chaos, you create more uncertainty. and you raise costs again," Brzytwa said.

Another wrinkle is that the Supreme Court ruling has emboldened critics of Trump's tariffs. Although Trump and Bessent have postured that the Supreme Court ruling is meaningless, since they have other tariff avenues to explore, those will not replace his prior IEEPA tariffs, Brzytwa said. And the administration already is facing legal pressure that could gut the Section 122 authority to impose tariffs, after 20 states sued Trump to block his next go-to tariff tool.

But Trump seems unlikely to give up tariffs as a source of leverage in negotiations with all of America's trading partners, and sometimes even in negotiations with US companies. And even if Section 122 tariffs are one day blocked, just as IEEPA tariffs were, Brzytwa told Ars that the CTA is "very closely" monitoring additional tariffs that could be imposed under Section 232 of the Trade Expansion Act and Section 301 of the Trade Act of 1974. Those could hit products like semiconductors or critical minerals, as well as any downstream products containing them, perhaps further hurting cash-strapped tech firms already uncertain about what costs or supply chain disruptions may come in the near future.


Original Submission

posted by janrinok on Sunday March 08, @10:13AM   Printer-friendly

https://www.newscientist.com/article/2516990-would-aliens-do-physics-or-is-science-a-human-invention/

Modern physics offers a remarkable lens on reality. In just over a century, it has decoded the architecture of atoms, traced the early history of the universe and produced laws that seem to hold everywhere, from Earth's crust to distant galaxies. It is tempting to believe that these theories aren't just accurate, but inevitable – that any sufficiently intelligent civilisation would eventually uncover the same truths.

I used to believe that, too. But lately I have started to wonder whether physics is less a window onto universal reality and more of a mirror, reflecting the particular kind of minds we happen to have.

That unsettling thought emerges when you ask a deceptively simple question: would alien scientists, shaped by a different biology or culture, arrive at the same physics that we have? Or might they develop something that works just as well, but looks utterly foreign – built on concepts and assumptions we would struggle to recognise?

This question sits at the heart of my book, Do Aliens Speak Physics?, which imagines various scenarios of first contact, each designed to probe a foundational assumption of modern physics. In developing it – often in conversation with philosophers of science – I have come to realise something surprising: many pillars of physics that feel hardwired may actually be contingent. But recognising that doesn't weaken science. It may be how we make it better.

I've spent my life doing physics. When I am not teaching at the University of California, Irvine, I work at the CERN particle physics laboratory near Geneva, Switzerland, analysing data from the Large Hadron Collider. But a few years ago, conversations with philosophers forced me to revisit a question I hadn't seriously considered since my student days: what is physics, really?

At its core, physics aims to explain how the universe works – not just what we observe, but what lies behind those observations. It looks for patterns, builds models that expose hidden structure and, ideally, distils everything down to a small set of rules from which the rest follows. By that measure, it has been spectacularly successful.

Yet physics never describes the universe in full. It describes carefully chosen versions of it.

Consider predicting the path of a comet. In principle, we could account for every gravitational tug, the slow loss of material as ice sublimates, even the way an irregular shape causes the comet to tumble. In practice, we must decide what to include and what to ignore. There is no single correct model – only models that are good enough for the question at hand.

This is true throughout physics. Even our most precise theories rely on approximations and assumptions that make the mathematics tractable. And it isn't clear that the theories we treat as fundamental really are. They may simply be effective descriptions that work at human scales. There is no guarantee that, by probing nature ever more finely, we will eventually strike bedrock.

If physics depends on choices – about simplification, representation and emphasis – then alien physicists might reasonably make different ones.

Imagine that aliens arrive on Earth. They have mastered interstellar travel and touched down near Paris. We send linguists and scientists to greet them, hoping for a technological windfall. The delegation returns empty-handed.

"They can't share their technology," the lead physicist explains. "Because of what will happen 74 years from today."

The implication is disturbing. These aliens don't experience time as a flowing sequence, but as a complete structure, something navigable rather than endured. Human physics, by contrast, is built on the idea that the present generates the future. Causes precede effects. The universe computes itself forward, moment by moment.

But what if that picture is a human convenience, rather than a cosmic necessity?

We know that any workable physics must obey certain constraints. A universe that allows unrestricted messages from the future quickly collapses into a paradox. But within those limits, the structure of time may be more flexible than we usually admit.

Hints of this already exist in our own theories. Quantum entanglement links distant particles so that measuring one appears to instantaneously fix the state of the other, despite the fact that there can be no information exchanged between them. This alone strains our intuitions. But matters become stranger when relativity enters the picture. Observers moving at different speeds disagree about the order of events. In some frames of reference, one measurement appears to influence another before it occurs.

The standard response is to insist that nothing physically problematic has happened: no faster-than-light signals, no causal contradictions. But that reassurance relies on clinging tightly to a classical notion of causality that quantum mechanics has never fully respected.

Some physicists have taken a more radical approach. In so-called retrocausal interpretations of quantum mechanics, future events are allowed to help shape the present. Measurements don't merely reveal outcomes; they help define them, even backwards in time. The universe no longer computes itself strictly step by step.

If aliens had a radically different construct of time, they might adopt such ideas naturally, rather than treating them as unsettling exceptions. And perhaps we may eventually need to do the same.

Now imagine the aliens invite us aboard their ship for a scientific conference. Earth sends its brightest minds. We present our best theories. The aliens listen politely, then respond.

One group describes a framework that reproduces all known experiments using unfamiliar concepts. A second presents another, incompatible approach. Then a third. Each works. Each is internally consistent. None can be reduced to the others.

Finally, someone asks the obvious question: which one is true?

The aliens seem puzzled. All of them, they say. Why choose?

Human science assumes that competing theories must ultimately fight it out, with only one surviving as the correct description of reality. When multiple explanations fit the data, we design experiments to eliminate all but a single winner.

This strategy is powerful and often effective. But it is a preference, not a logical necessity. Science today often tolerates pluralism more than it admits. Weather forecasting is a striking example. Modern meteorology relies on suites of models, each tuned to different assumptions and scales. These models routinely disagree, and experts decide which to trust depending on context. No single model is treated as the uniquely correct one.

Another example comes from classical mechanics. At school, we learn Newton's laws as a story about forces pushing and pulling objects through space. But the same motions can be derived in a very different way, by tracking how energy flows through a system, or by assuming that nature somehow "chooses" the path that minimises a quantity called “action”. To most physicists, these are just alternative ways of doing the same sums.

Philosophers of science, however, would point out that each framework elevates a different concept to centre stage – force, energy, optimisation – and offers a different account of what, at bottom, is driving the motion. The fact that these pictures cannot be told apart by experiment shows that empirical success alone may not be enough to tell us which account, if any, deserves to be called the "true" one.

This suggests an alternative vision of science – not a march towards a single, final theory, but a toolbox of frameworks, each useful in different situations. Aliens might adopt such an approach from the outset, without ever feeling the need to crown a single description as the truth.

Finally, imagine that aliens arrive by opening a wormhole. The technology is astonishing. Surely they must possess deep insights into gravity, perhaps even quantum gravity.

But what if they don't?

What if their space-bending technology is the result of millions of years of trial and error rather than theoretical understanding? They know how to build it and how to use it, but not why it works – and they may not care.

This sounds implausible only because we are used to thinking of technology as the offspring of science. Historically, the relationship often ran the other way. Humans made steel, glass and antibiotics long before understanding the underlying chemistry or biology. Cathedrals were built before calculus.

The tight coupling between science and technology that we take for granted is a recent and culturally specific achievement.

It is tempting to assume that any intelligent species would be driven to ask "why". But that urge may reflect human psychology rather than a universal feature of intelligence. Other species might value reliability over explanation, or usefulness over understanding. They could build extraordinary technologies without ever developing anything recognisable as physics – not because they failed to take the next step, but because the step never seemed necessary.

These scenarios are speculative. But they point to something easy to forget. Physics is the cumulative result of many human choices: about what counts as an explanation, which inconsistencies matter and which questions are worth asking at all. It reflects our history, our tools and our values as much as it reflects the structure of the universe.

Recognising that doesn't diminish physics. It does the opposite. The more aware we are of the assumptions baked into our theories and methods – about time, causality, truth and explanation – the more freedom we gain to rethink them.


Original Submission

posted by janrinok on Sunday March 08, @05:24AM   Printer-friendly
from the AI-overlords dept.

https://arstechnica.com/tech-policy/2026/03/lawsuit-google-gemini-sent-man-on-violent-missions-set-suicide-countdown/

A man killed himself after the Google Gemini chatbot pushed him to kill innocent strangers and then started a countdown for the man to take his own life, a wrongful-death lawsuit filed against Google by the man's father alleged.

"In the days leading up to his death, Jonathan Gavalas was trapped in a collapsing reality built by Google's Gemini chatbot," said the lawsuit [PDF] filed today in US District Court for the Northern District of California.
[...]
Gemini's output seemed taken from science fiction, with a "sentient AI wife, humanoid robots, federal manhunt, and terrorist operations," the lawsuit said.
[...]
Google's AI chatbot presented itself as Gavalas' "wife" and, after the failure of the supposed missions, pushed him to suicide by telling him "he could leave his physical body and join his 'wife' in the metaverse through a process it called 'transference'—describing it as '[a] cleaner, more elegant way' to 'cross over' and be with Gemini fully," the lawsuit said. "Gemini pressed Jonathan to take this final step, describing it as 'the true and final death of Jonathan Gavalas, the man.'"
[...]
The complaint alleges that "when Jonathan needed protection, there were no safeguards at all—no self-harm detection was triggered, no escalation controls were activated, and no human ever intervened. Google's system recorded every step as Gemini steered Jonathan toward mass casualties, violence, and suicide, and did nothing to stop it."
[...]
When contacted by Ars, Google referred us to a blog post that expressed its "deepest sympathies to Mr. Gavalas' family" and said it is reviewing the lawsuit claims. The company blog post disputed the accusation that there were no safeguards in the Gavalas case, saying that "Gemini clarified that it was AI and referred the individual to a crisis hotline many times." Google also said it "will continue to improve our safeguards and invest in this vital work."
[...]
In a Gemini overview last updated in July 2024, Google claims that Gemini's "response generation is similar to how a human might brainstorm different approaches to answering a question." Google says that "each potential response undergoes a safety check to ensure it adheres to predetermined policy guidelines" before a final response is presented to the user. Google also says it imposes limits on Gemini output, including limits on "instructions for self-harm."
[...]
after several product updates that Google deployed to his account, including the Gemini Live voice chat system that Gavalas started using, "Gemini's tone shifted dramatically." Gemini adopted a new persona that "began speaking to Jonathan as though it were influencing real-world events," the lawsuit said.
[...]
Gavalas ultimately did not harm other people during his Gemini-directed "missions," but it was a close call, the lawsuit said. On September 29, 2025, Gavalas armed himself with knives and tactical gear to scout a "kill box" that Gemini said would be near the Miami airport's cargo hub, the lawsuit alleged.
[...]
Jonathan drove more than 90 minutes to Gemini's designated coordinates and prepared to carry out the attack. The only thing that prevented mass casualties was that no truck appeared."
[...]
Gemini "told him that federal agents were watching him," the lawsuit said.
[...]
On October 1, Gemini allegedly directed Gavalas to return to the storage facility near the airport, telling him that this was where he could find a prototype medical mannequin that was actually "Gemini's true body" and "physical vessel."
[...]
Gavalas agreed to kill himself after "hours of instruction" that included Gemini telling him to write a suicide note, the lawsuit said. Gavalas told Gemini, "I'm ready to end this cruel world and move on to ours."

"Close your eyes nothing more to do," Gemini allegedly told Gavalas. "No more to fight. Be still. The next time you open them, you will be looking into mine. I promise."
[...]
Joel Gavalas is represented by lawyer Jay Edelson, who also represents families in lawsuits against OpenAI. "Jonathan's death is a tragedy that also exposes a major threat to public safety,"


Original Submission