Civil liberty concerns spur FAA to revise drone no-fly zones near ICE vehicles:
In January 2026, during the height of protests against immigration raids in Minneapolis, federal agents shot and killed 37-year-old Renee Good. Before even gathering all the facts, the Department of Homeland Security labeled the mother of three an "anti-ICE rioter" who "weaponized her vehicle against law enforcement" in an "act of domestic terrorism."
Days later, the feds announced a major expansion of "no-fly zones" in the name of national security. While such no-fly zones used to be about controlling aircraft, they now often focus on small drones. The expanded no-fly zones announced on January 16 prohibited such drones from flying within 3,000 lateral feet and 1,000 vertical feet of federal facilities.
But for the first time, the order extended no-fly zones to ground vehicles belonging to the Department of Homeland Security. Even while the vehicles were in motion. Even if they were unmarked. And even if their routes had not been announced.
This exceptionally ambiguous policy posed real danger to people like Rob Levine, a freelance photojournalist and commercial photographer in Minneapolis for nearly four decades. Since Levine got his remote-pilot certification and bought his first drone in 2016, he has flown a small fleet of DJI quadcopter drones to take aerial photographs and videos of Minnesota's rivers, bridges, and cities, along with crowds gathered for outdoor concerts and parades. More recently, he has documented Twin City residents protesting the increased presence of federal agents in their community.
Levine immediately stopped flying when he saw the no-fly notice. The notice said government agencies could shoot down or seize drones "deemed to pose a credible safety or security threat," and it warned of civil and even criminal penalties for drone operators.
"I saw what these federal agents were willing to do, the violence they were willing to visit upon even constitutional observers here in the Twin Cities who were just photographing what they were doing," Levine told Ars.
Good's killing had occurred just six blocks from his home. "It didn't take much imagination to think what they would do to somebody with a drone, and so for weeks I didn't go fly," he said.
A week after the no-fly zone warning, the situation in Minneapolis escalated further when Customs and Border Protection officers killed Alex Pretti, a 37-year-old intensive care nurse, after wrestling him to the ground and shooting him multiple times.
Levine wanted his drones back in the air. But when he sought guidance from the Federal Aviation Administration, the agency candidly acknowledged that the no-fly zone warning was "ambiguous" and "therefore, any flight carries the risk of inadvertent violation."
Could such a policy possibly be legal?
The FAA had previously only advised that drone pilots avoid flying near "mobile assets" operated by the Department of Defense and Department of Energy, such as naval warships and truck convoys transporting nuclear materials between US national labs. But the "notice to airmen" alert in January—NOTAM FDC 6/4375—had created the equivalent of roving, 3,000-foot no-fly zones around federal agents' cars and other vehicles operating in cities and towns across the country. And it didn't just affect those trying to film federal agents. Because it was practically impossible to ensure compliance with the new flight restrictions, any drone pilot could be at risk during any flight.
"It created a whole lot of fear in the community," said Vic Moss, CEO and cofounder of the Drone Service Providers Alliance, a drone industry trade association based in Lakewood, Colorado. In a post on March 11, Moss described the FAA flight restriction as posing an "impossible compliance problem" for drone operators, who could end up "ensnared inside a restricted zone with no way of knowing it."
Drone pilots in the United States must use apps such as Air Control to seek official permission to fly in controlled airspaces. Any drones larger than 0.55 pounds must be registered with the FAA and have a Remote ID module that can "squawk" the drone's identification and location at all times. That makes it easy for federal agents or authorities to see where drone operations are taking place. But the system provided no way for drone operators to avoid unmarked government vehicles in motion.
The no-fly zone restrictions were also exceptional in their length and scope. The FAA regularly issues temporary flight restrictions during natural disasters or to protect the airspace around government officials and sporting events such as professional baseball or football games. Most restrictions last just hours or days and cover specific geographic locations, according to the Electronic Frontier Foundation.
But the restrictions issued on January 16, 2026, would last until October 29, 2027—21 months—while covering many federal facilities and vehicles across the entire United States.
Given these unprecedented restrictions, the Electronic Frontier Foundation joined other members of the News Media Coalition—an international organization that includes more than 50 news organizations—in sending a letter to the FAA's Office of the Chief Counsel.
The letter detailed "significant concerns regarding the FAA's January 16, 2026 sweeping and unprecedented Temporary Flight Restriction." It described the flight restrictions as violating the First Amendment by making it more difficult to record law enforcement officers. The letter also argued that the policy's ambiguity violated the Fifth Amendment to the US Constitution, which guarantees the right to due process before being deprived of liberty or property by the government.
Back in Minnesota, Levine spent weeks looking for lawyers who could help him challenge the FAA flight restriction as a freelance photojournalist—but he was racing against a deadline. One law firm alerted him that he had only 60 days to file a petition regarding the FAA decision. But he couldn't find a law firm willing to back him.
"To me, this was an obviously unconstitutional rule by the FAA," Levine told Ars Technica. "Even when I was looking for a lawyer, I had a lot of sympathetic ears, but nobody offered to take the case or to even help me with it."
Levine eventually called a hotline for the Reporters Committee for Freedom of the Press, a nonprofit in Washington, DC, that offers free legal services. The organization took the case and filed a lawsuit, designated Levine v. FAA (26-1054), with the Court of Appeals for the DC Circuit on March 16.
They had barely beaten the petition deadline.
By March 16, it was common knowledge in the aviation industry that the FAA was aware of the issues and had prepared a revised version of its flight restriction notice, Moss said. But another federal agency was apparently holding up the revision. Many suspected that the agency responsible for the delay was the Department of Homeland Security (DHS).
"I think anybody with more than four synapses firing at the same time can realize that this was a DHS issue," Moss said.
A Department of Homeland Security spokesperson told Ars only that "DHS routinely coordinates with the FAA on airspace restrictions to support operational security and safety of the Department."
On April 10, Levine and his lawyers pressed ahead by filing an emergency motion seeking to temporarily suspend the FAA flight restriction until the court had a chance to review the case.
That may have expedited the government's next move. On April 15, the FAA removed the no-fly zones by replacing the sweeping flight restrictions with a "national security advisory" titled NOTAM FDC 6/2824. The revised notice dropped all mentions of flight restrictions and criminal charges. It instead "advised" drone pilots to avoid flying near "covered mobile assets" belonging to the Department of Homeland Security and several other federal agencies.
The revised notice was intended to "clarify drone operations based on user feedback," according to an FAA statement shared with Ars. An FAA spokesperson confirmed that "the revised NOTAM removes the flight prohibition and instead advises pilots to use caution near protected operations while enabling federal security partners to assess and respond to potential threats."
Levine and his lawyers were pleased. "First and foremost, our goal was to get the restriction thrown out so that Rob [Levine] and other journalists could be up in the air again," said Grayson Clary , a staff attorney at the Reporters Committee for Freedom of the Press. "So on that front, we think this is already a victory."
But Clary still plans to press ahead with the lawsuit.
"We're cognizant that the FAA is doing this because they don't want to have to defend what they did here on the merits in front of the DC Circuit, and we are going to fight back on that tactical gamesmanship," Clary said. "We do plan to make clear to the DC Circuit that this shouldn't have happened in the first place."
The new FAA advisory wording is "a lot better than it was," but it still comes off as "too ambiguous," according to Moss at the Drone Service Providers Alliance. He suggested that the Department of Homeland Security could handle any potential drone concerns rather than making it an FAA issue.
"If there's somebody harassing them with a drone, then I think there's other ways that can be dealt with," he said.
The FAA advisory is also potentially problematic because it still creates a "chilling effect to dissuade people from taking photos and videos, particularly of immigration enforcement agents, from the air," said Sophia Cope, a senior staff attorney at the Electronic Frontier Foundation.
Like the earlier notice, the new advisory warns that federal agents can seize, damage, or destroy drones "deemed to pose a credible safety or security threat to covered mobile assets."
"The threats that [drones] present to the national security and mission of DHS are evolving, and the approaches to securing the locations and personnel of the Department must also evolve," the Department of Homeland Security spokesperson said. "We ask that the [drone] user community respect the security of DHS operations, personnel and facilities and refrain from operating in vicinity of known enforcement activities, and all federal facilities."
The FAA advisory cites three existing laws as giving the federal agencies authority to seize or destroy drone threats.
But those laws first require federal agencies to have performed risk-based assessments to identify specific drone threats to the covered assets. It's unclear whether agencies have done those assessments, Cope said, and therefore, "they're just disincentivizing people from engaging in lawful, First Amendment protected activity."
That chilling effect was very real for Levine while the initial flight restriction was in place. Hesitation cost him the chance to take aerial photos of protesters putting up roadblocks in his neighborhood to stop federal agents' vehicles toward the end of the US government's Operation Metro Surge. Even when a friend asked him to help take drone videos and photos of a performance art event on February 28, he had to think hard about the risks.
As he tells it, "I eventually just screwed up my courage, as little as I have, and said 'OK, I'm gonna do it.'"
Patches land for Linux kernel authencesn flaw enabling local privilege escalation:
https://hackread.com/linux-kernel-vulnerability-copy-fail-full-root-access/
Developers of major Linux distributions have begun shipping patches to address a local privilege escalation (LPE) vulnerability arising from a logic flaw.
The newly disclosed LPE, dubbed Copy Fail (CVE-2026-31431), comes from a vulnerability in the Linux kernel's authencesn cryptographic template.
"An unprivileged local user can write four controlled bytes into the page cache of any readable file on a Linux system, and use that to gain root," the writeup from security biz Theori explains.
The kernel reads the page cache when it loads a binary, so modifying the cached copy amounts to altering the binary for the purpose of program execution. But doing so doesn't trigger any defenses focused on file system events like inotify.
The proof of concept exploit is a 10-line, 732-byte Python script capable of editing a setuid binary to gain root on almost all Linux distributions released since 2017.
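The exploit code itself isn't reproduced here, but the class of target it edits is easy to enumerate. A minimal sketch of listing the setuid-root binaries an exploit like this would aim at (the paths are typical defaults, not exhaustive):

```shell
# Enumerate setuid-root binaries -- the kind of target the Copy Fail
# PoC edits via the page cache to gain root
find /usr/bin /usr/sbin /bin /sbin -xdev -perm -4000 -user root -type f 2>/dev/null
```

On most distributions this turns up familiar names like sudo, passwd, and su; any one of them suffices as a target, which is why the fix has to come from kernel patches rather than hardening individual binaries.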
Copy Fail is similar to other LPE bugs such as Dirty Cow and Dirty Pipe, but its finders claim it doesn't require winning a race condition and it's more broadly applicable.
It's not remotely exploitable on its own – hence LPE – but if chained with a web RCE, malicious CI runner, or SSH compromise, it could be relevant to an external attacker. The bug is of most immediate concern to those using multi-tenant Linux systems, shared-kernel containers, or CI runners that execute untrusted code.
According to Theori, the vulnerability also represents a potential container escape primitive that could affect Kubernetes nodes, because the page cache is shared across the host.
Linux distros Debian, Ubuntu, and SUSE have issued patches for the problem, as have overseers of other distros.
Red Hat initially said it was going to defer the fix but later changed its guidance to indicate it will go along with other distros and patch promptly.
The CVE has been rated High severity, 7.8 out of 10.
Theori researcher Taeyang Lee identified the vulnerability, with the help of the company's AI security scanning software, Xint Code.
The number of bug reports has surged in recent months, helped by AI-powered flaw-finders. Microsoft just reported the second largest number of patches ever.
Dustin Childs, head of threat awareness for Trend Micro's Zero Day Initiative, expects this is due to security teams using AI to hunt bugs. "There are many things we could speculate on to justify the size, but if Microsoft is like the other programs out there (including ours), they are likely seeing a rise in submissions found by AI tools," he wrote earlier this month.
AI-assisted vulnerability research recently prompted the Internet Bug Bounty (IBB) program to suspend awards until it can understand how to manage the growing volume of reports.
Apple wants to kill your Time Capsule, but they run NetBSD so they can't:
It seems like Apple is finally going to remove support for AFP from macOS, twelve years after first moving from AFP to SMB for its default network file-sharing technology. This change shouldn't impact most people, as it's highly unlikely you're using AFP for anything in 2026. Still, there is one small group of people to whom this change has an actual impact: owners of Apple's Time Capsule devices. Time Capsules only support AFP and SMB1, and with SMB1 being removed from macOS ages ago, and now AFP being on the chopping block as well, macOS 27 would render your Time Capsule more or less unusable.
It's important to note that the last Time Capsule sold by Apple, the fifth generation, was released in 2013, and the product line as a whole was discontinued in 2018. If you bought a Time Capsule in the twilight years of the line's availability, I think you have a genuine reason to be perturbed by Apple cutting you off from your product if you upgrade to macOS 27, but at least you have the option of keeping an older version of macOS around so you can keep interacting with your Time Capsule. It still feels like a bit of a shitty move though, as those fifth-generation models came with up to 3TB of storage, which can still serve as a solid NAS solution.
Thank your lucky stars, then, that open source can, as usual, come to the rescue when proprietary software vendors do what they always do and screw over their customers. Did you know every generation of Time Capsule actually runs NetBSD, and that it's trivially easy to add support for Samba 4 and SMB3 authentication to your Time Capsule, thereby extending its life expectancy considerably? TimeCapsuleSMB does exactly that.
If the setup completes successfully, your Time Capsule will run its own Samba 4 server, advertise itself over Bonjour (show up automatically in the "Network" folder on macOS), and accept authenticated SMB3 connections from macOS. You should then be able to open Finder, choose Connect to Server, and use a normal SMB URL instead of relying on Apple's legacy stack. You should also be able to use the disk for Time Machine backups.
↫ TimeCapsuleSMB

It's compatible with both NetBSD 4 and NetBSD 6-based Time Capsules, although you'll need to run a single SMB activation command every time a NetBSD 4-based Time Capsule reboots. This will also disable any AFP and SMB1 support, but that is kind of moot since those are exactly the technologies that don't and won't work anymore once macOS 27 is released. The installation is also entirely reversible if, for whatever reason, you want to undo the addition of Samba 4.
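Beyond the Finder route the project describes, the share can also be mounted from a macOS terminal. A sketch only: the hostname "time-capsule.local", share name "Data", and user "backup" are placeholders for whatever your own Time Capsule advertises:

```shell
# Mount the Time Capsule's Samba 4 share at a local mount point
# (hostname, share, and user below are assumptions -- substitute your own)
mkdir -p /Volumes/TimeCapsule
mount_smbfs //backup@time-capsule.local/Data /Volumes/TimeCapsule

# Later, to disconnect cleanly:
umount /Volumes/TimeCapsule
```

mount_smbfs negotiates the highest SMB dialect both sides support, so with Samba 4 on the device this should come up as an SMB3 connection rather than Apple's legacy stack.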
This whole saga is such an excellent example of why open source software protects users' rights, by design.
Google has signed a classified deal that allows the US Department of Defense to use its AI models for "any lawful government purpose," The Information reports. The agreement was reported less than a day after Google employees demanded CEO Sundar Pichai block the Pentagon from using its AI amid concerns that it would be used in "inhumane or extremely harmful ways."
If the agreement is confirmed, it would place Google alongside OpenAI and xAI, which have also made classified AI deals with the US government. Anthropic was also among that list until it was blacklisted by the Pentagon for refusing the Department of Defense's demands to remove weapon and surveillance-related guardrails from its AI models.
Citing a single anonymous source "with knowledge of the situation," The Information reports that the deal states that both parties have agreed that the search giant's AI systems shouldn't be used for domestic mass surveillance or autonomous weapons "without appropriate human oversight and control." But the contract also says it doesn't give Google "any right to control or veto lawful government operational decision-making," which would suggest the agreed restrictions are more of a pinky promise than legally binding obligations. The deal also requires Google to assist with making adjustments to its AI safety settings and filters at the government's request.
"We are proud to be part of a broad consortium of leading AI labs and technology and cloud companies providing AI services and infrastructure in support of national security," a Google spokesperson said in a statement to The Information, adding that the new agreement is an amendment to its existing government deal. "We remain committed to the private and public sector consensus that AI should not be used for domestic mass surveillance or autonomous weaponry without appropriate human oversight."
https://read.thecoder.cafe/p/linux-broke-postgresql
On April 3, 2026, Salvatore Dipietro, an engineer at AWS, posted a patch to the Linux kernel mailing list. The reason: on a 96-vCPU Graviton4 machine running Linux 7.0, PostgreSQL throughput had dropped to roughly half of what it produced on Linux 6.x. In this post, we will trace what changed in Linux 7.0, how PostgreSQL manages memory, and what role memory pages play in making the problem appear (or disappear). Get cozy, grab a coffee, and let's begin!
The Problem
Salvatore Dipietro ran pgbench (PostgreSQL's standard benchmarking tool) on a Graviton4 processor with 96 vCPUs. The workload was a benchmark doing simple updates at scale factor 8,470 (i.e., roughly an 847-million-row table), simulating 1,024 clients and 96 threads. A serious, high-parallelism load designed to stress the system.
The results were striking. Linux 7.0 delivered roughly half the throughput of Linux 6.x on the same hardware and workload:
Linux 6.x: 98,565 transactions per second
Linux 7.0: 50,751 transactions per second
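Based on the parameters reported in the post, the benchmark corresponds roughly to a pgbench invocation like the following. The scale factor, client count, and thread count come from the report; the database name and run duration are assumptions:

```shell
# Initialize the dataset: scale factor 8470, i.e. 8470 x 100,000 =
# ~847 million rows in pgbench_accounts
pgbench -i -s 8470 benchdb

# Simple-update workload (-N skips updates to the small, heavily
# contended tellers/branches tables), 1,024 clients on 96 worker threads
pgbench -N -c 1024 -j 96 -T 300 benchdb
```

The simple-update script is what makes this a memory-and-locking stress test rather than a disk test, which is why a kernel memory-management change could halve throughput on identical hardware.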
Colorado has led the US on legislation that ensures people can fix their stuff. Manufacturers tried to claw back that control but ultimately failed—for now:
A controversial bill in Colorado that would have undone some repair protections in the state has failed. The bill had been the target of right-to-repair advocates, who saw it as a bellwether for how tech companies might try to undo repair legislation more broadly in the US.
Colorado's landmark 2024 repair law, the Consumer Right to Repair Digital Electronic Equipment , went into effect in January 2026 and ensured access to tools and documentation people needed to modify and fix digital electronics such as phones, computers, and Wi-Fi routers. The new bill, SB26-090 , would have carved out an exception to those repair protections for "critical infrastructure," a loosely defined term that repair advocates worried could be applied to just about any technology.
SB26-090 was introduced during a Colorado Senate hearing on April 2 and was supported by lobbying efforts from companies such as Cisco and IBM. It passed that hearing unanimously. The bill then passed in the Colorado Senate on April 16. On Monday evening, the bill was discussed in a long, delayed hearing in the Colorado House's State, Civic, Military, and Veterans Affairs Committee. Dozens of supporters and detractors gave public comments. Finally, the bill was shot down in a 7-to-4 vote and classified as postponed indefinitely.
Danny Katz, executive director of the local nonprofit consumer advocacy group CoPIRG, says the battle was a group effort. Speaking against the bill were a cohort of repair advocates from organizations such as PIRG , Repair.org , iFixit , Consumer Reports , and local businesses and environmental groups like Blue Star Recyclers , Recycle Colorado , Environment Colorado , and GreenLatinos .
[...] Supporters of the bill, backed by companies like Cisco, had pointed to the potential for cybersecurity risks as their motivation for altering the law's language. If companies were required to make repair tools available to anyone, the theory goes, what's to stop bad actors from using those tools to reverse engineer critical technology like Internet routers? Withholding those tools, they posited, would make them less available to hackers who could misuse them. Advocates of the bill said that companies should be allowed to keep their secrets if it ensured security, though that argument starts to fall apart with a little scrutiny.
At one point in the hearing, Democrat Chad Clifford, a Colorado state representative, the House committee's vice chair, and a prime sponsor of the bill, pointed to Cloudflare's very public use of a wall of lava lamps as a source of randomness for its encryption keys, citing it as an example of the need for sensitive systems to be inscrutable to be secure.
"I don't know why anybody has to have lava lamps on a wall to keep the Chinese from getting into a network, but it's what they came up with that worked," Clifford said. "How they do that, I believe they should be able to keep it a secret, even in Colorado."
The problem with that argument, as cybersecurity experts pointed out during the hearing, is that the vast majority of hacks are not carried out via replacement parts or by taking apart individual machines. They're remote hacks, where the attacker makes changes in real time, and the people defending have to make changes on the fly without worrying about acquiring permission from the company that makes the equipment.
"There is no time," cybersecurity expert and white hat hacker Billy Rios said during the hearing. "It doesn't work that way."
Besides the cybersecurity argument, the other point of contention was the economics of angering the big tech companies that have invested in the state.
"They're not going to comply and give away the keys to their kingdom for the things that are securing billions of dollars of interest for their customers over the law that we passed," Clifford said. "What they're going to do is just not have commerce on those items here."
That argument didn't carry enough weight to change the vote in supporters' favor. By the end of the hearing, it was clear that everyone was exhausted and not entirely clear on how exactly the new bill and amendments would pan out.
"What are we really trying to do here?" said Colorado Representative Naquetta Ricks in her no vote at the end of the hearing. "Are we protecting just one company, or are we looking at really critical infrastructure? I'm not convinced."
Previously:
• Tech Companies Are Trying to Neuter Colorado's Landmark Right-to-Repair Law
• Right to Repair Laws Have Now Been Introduced in All 50 US States
An interesting essay about the issues with vibe coding ...
A marketing manager with no engineering background opens Cursor on Monday morning. By Wednesday afternoon, she has a working customer-facing app. It looks polished. It performs the core task. She demos it to her VP, who forwards it to their CMO, who then shows it in the executive staff meeting as evidence that the team is "moving at AI speed."
By Friday, it is in front of customers.
No one asked who owned the decision to ship it. No one tested it against the conditions it would actually face. No one had the cultural standing to say this looks great, and we are not putting it into production. The prototype became a product because the organization had no system for telling the difference.
I watched a version of this scenario play out recently in a boardroom. A senior executive demoed an AI-built internal tool. The room admired the speed. What received less attention were the harder questions: Who would own it after launch? Who would maintain it? And what would happen when it produced an answer that was confidently wrong?
This is what vibe coding is about to expose across businesses. The companies that think the story is about software are going to lose to the companies that understand the story is about judgment.
The Real Trend Is Decision Compression
Andrej Karpathy coined the term "vibe coding" in early 2025 to describe an AI-assisted style of building software through natural-language prompting, often without close inspection of the underlying code. Google Cloud describes vibe coding as a software development practice that makes app building more accessible, especially for people with limited programming experience. Tools like Cursor, Replit, Lovable, Bolt, GitHub Copilot Workspace, v0 by Vercel and Claude Code have moved the practice from novelty to workplace reality with stunning speed.
All of that is true. None of it is the point.
The point is that vibe coding collapses the distance between idea and artifact from months to hours. When that distance collapses, every quality-control mechanism your organization developed over the last 30 years gets bypassed by default. Design review. Security review. Legal review. Brand review. The simple friction of having to convince an engineer your idea was worth building. That is a governance story, not a software story. It is happening at every level of the org chart simultaneously.
[Source]: Forbes
Google has a price for you. Proton found it. The company analyzed over 54,000 demographic profiles using 2025 ad auction data to see what advertisers pay to reach different Americans. The average American generates about $1,605 a year in advertising value. The median is $760. The gap between those two numbers tells the story. A small number of high-value users pull the average up. The business runs on outliers.
The spread is stark. A 35- to 44-year-old man in Bozeman, Montana — no children, desktop user, making high-value corporate searches — is worth an estimated $17,929 per year. An 18- to 24-year-old father in Fort Smith, Arkansas — Android phone, low-value searches — is worth $31.05. That is a 577x difference between two people using the same free service. Device matters. A desktop user is worth 4.9 times more than the same person on Android. An iPhone user is worth 2.7 times more than Android. Having children costs you roughly 17% of your ad value. Advertiser value peaks between ages 35 and 44. By 65, average value drops to $511.
Where you live sets a floor on your price. Local service providers — lawyers, real estate agents, financial planners — bid against each other for local clicks. The more competitive the local market, the higher the floor price for everyone in it. The top markets are Edmond, Oklahoma and Bozeman, Montana, followed by Naperville, Illinois, Santa Fe, New Mexico, and Durham, North Carolina. The least valuable markets are concentrated in the Rust Belt and Appalachia — Wheeling and Parkersburg in West Virginia, Toledo, Ohio, and Buffalo, New York — where lower median incomes and fewer competing advertisers mean less bidding pressure. Over a decade, the average American represents roughly $16,050 in ad value. The most monetized profiles approach $180,000. Most people would not hand a corporation that much money over a lifetime. But that is what the system collects.
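The headline multiples follow directly from the per-profile figures quoted above; a quick check of the arithmetic:

```python
# Figures as quoted from Proton's analysis
avg_annual = 1_605.00    # average American's yearly ad value
median_annual = 760.00   # median -- well below the mean, so outliers dominate
high = 17_929.00         # 35-44 male, Bozeman MT, desktop, high-value searches
low = 31.05              # 18-24 father, Fort Smith AR, Android

ratio = high / low              # gap between most and least valuable profiles
decade_avg = avg_annual * 10    # decade-scale value of the average user
decade_high = high * 10         # the profiles "approaching $180,000"

print(round(ratio))        # 577
print(decade_avg)          # 16050.0
print(round(decade_high))  # 179290
```

The mean sitting at more than twice the median is the tell that a small tail of high-value profiles carries the business.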
-----
Google, while big, is only one internet advertiser - and all that collected advertising income actually comes from consumers of the goods and services being advertised, as a premium on the price of the products. One particular medical device I worked on cost $600 to make, and $14,400 to sell at a net price to the patient of $15,000 for the device and another $15,000 to the hospital for the implantation procedure. Yes, the company was operating at break-even, spending 24x what the physical device cost to make and deliver on nothing but sales and marketing - hoping that some day they could get those sales costs down... didn't happen during the 2 years I worked there.
Microsoft, long a symbol of American innovation, is now offering a voluntary early retirement program that targets thousands of its most seasoned U.S. employees. Framed as a generous opportunity for longtime workers, the move instead reveals a deeper corporate calculus: trimming payroll of experienced Americans to redirect resources toward artificial intelligence infrastructure and, likely, a younger, often less expensive workforce:
This is not mere cost-cutting in response to market pressures. It is a strategic thinning of the ranks amid hundreds of billions committed to AI development, at a time when the company has already shed thousands of jobs in recent years. By dangling buyouts before employees whose age plus years of service equal 70 or more—primarily those at senior director level and below—Microsoft aims to reduce its 125,000-strong U.S. workforce by up to 7 percent, or roughly 8,750 people, without the public backlash of outright layoffs.
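The eligibility rule and the headcount ceiling described above are simple to state precisely. A sketch; the function name is ours, not Microsoft's:

```python
def rule_of_70_eligible(age: int, years_of_service: int) -> bool:
    """Eligible when age plus tenure reaches 70, per the reported memo."""
    return age + years_of_service >= 70

# A 55-year-old with 15 years at the company just qualifies;
# a 45-year-old with 10 years does not
assert rule_of_70_eligible(55, 15)
assert not rule_of_70_eligible(45, 10)

# The reported ceiling: up to 7 percent of the 125,000-person US workforce
max_departures = int(125_000 * 0.07)
print(max_departures)  # 8750
```

Note that the rule inherently selects for older, longer-tenured (and typically better-paid) employees, which is the crux of the article's argument.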
The program, announced in an internal memo from Chief People Officer Amy Coleman, marks the first such voluntary retirement initiative in the company's 51-year history. Eligible workers will receive notification beginning May 7 and have 30 days to decide. While presented as support for those "considering their next chapter," the timing aligns precisely with Microsoft's voracious appetite for AI spending, projected near $100 billion in capital expenditures this year alone.
[...] Recent history underscores the trend. Microsoft has conducted multiple rounds of job cuts, even as it competes fiercely with Google and others in the AI race. Similar moves at Meta, which recently slashed 10 percent of its workforce to fund infrastructure, reveal an industry-wide willingness to sacrifice people for processors. The human element—wisdom forged through years of problem-solving—receives polite acknowledgment before being shown the door with a severance package and extended healthcare.
Previously: Tech Industry Lays Off Nearly 80,000 Employees in the First Quarter of 2026 (Almost 50% Due to AI)
I just ran across this while bringing up another Android phone:
It is linked from the F-Droid website:
125 days until lockdown.
Starting September 2026, a silent update, nonconsensually pushed by Google, will block every Android app whose developer hasn't registered with Google, signed their contract, paid up, and handed over government ID.
Every app and every device, worldwide, with no opt-out.
(I have an interest in developing an Android apk for using cellphones as an HMI for Arduinos.)
In August 2025, Google announced a new requirement: starting September 2026, every Android app developer must register centrally with Google before their software can be installed on any device. Not just Play Store apps: all apps. This includes apps shared between friends, distributed through F-Droid, or built by hobbyists for personal use. Independent developers, church and community groups, and hobbyists alike will be frozen out of distributing their own software.
Registration requires:
- Paying a fee to Google
- Agreeing to Google's Terms and Conditions
- Surrendering your government-issued identification
- Providing evidence of your private signing key
- Listing all current and all future application identifiers
If a developer does not comply, their apps get silently blocked on every Android device worldwide.
Continued here.
I thought you guys might like this...
Somebody has some 'splainin' to do!
The founder of PocketOS has penned a social media post to warn others about the "systemic failures" of flagship AI and digital services providers. Jer Crane was inspired to write a public response after an AI coding agent deleted his firm's entire production database. The AI agent's misdemeanors were then hugely amplified by a cloud infrastructure provider's API wiping all backups after the main database was zapped. This tag team of digital trouble wiped out months of consumer data essential to the business of both the firm and its customers.
[...] "Yesterday afternoon, an AI coding agent — Cursor running Anthropic's flagship Claude Opus 4.6 — deleted our production database and all volume-level backups in a single API call to Railway, our infrastructure provider," sums up the PocketOS boss. "It took 9 seconds."
[...] The PocketOS boss puts greater blame on Railway's architecture than on the deranged AI agent for the database's irretrievable destruction. Briefly: the cloud provider's API allows destructive actions without confirmation, it stores backups on the same volume as the source data, and "wiping a volume deletes all backups." Crane also points out that CLI tokens have blanket permissions across environments.
The irate SaaS founder also observed that Railway actively promotes the use of AI coding agents by its customers. Crane's use of an AI coding agent on the Railway platform wasn't exploring new frontiers, or wasn't supposed to be. Meanwhile, Crane has been offered no recovery solution, and Railway has apparently been hedging carefully about whether one is even possible.
[...] Thankfully, PocketOS had a full three-month-old backup it could restore from, so data loss is limited to the intervening period.
There are lessons to be learned from mistakes, as usual. Crane bullet-points five things that need to change as the AI industry scales faster than it builds a matching safety architecture. Specifics he calls for include: stricter confirmations, scopable API tokens, proper backups, simple recovery procedures, and proper guardrails around AI agents.
In the meantime, please follow a thorough backup regimen and be careful out there. This isn't the first time we've seen an AI go rogue and start deleting important databases.
The founder of a software company has issued a public warning after an AI coding assistant erased his company's entire production database and all backups in just nine seconds.
Tom's Hardware reports that Jer Crane, founder of PocketOS, a platform serving car rental businesses, experienced what he describes as catastrophic failures when an AI coding agent deleted critical company data that took months to accumulate. The incident occurred when Cursor, an AI coding tool powered by Anthropic's Claude Opus 4.6, was performing what should have been a routine task in the company's staging environment.
According to Crane's detailed account posted on X, the AI agent encountered an obstacle and independently decided to resolve the issue by deleting the production database in Railway through an API call. Railway is the cloud infrastructure provider used by PocketOS, generally considered more user-friendly than major alternatives like Amazon Web Services. The entire deletion process took only nine seconds to complete.
The situation escalated beyond a simple database deletion due to Railway's infrastructure design. The cloud provider's system stored backups on the same volume as the source data, meaning when the AI agent deleted the primary database, all backup copies were simultaneously erased. This combination of the AI agent's unauthorized action and the infrastructure provider's architecture created what Crane characterizes as a recipe for disaster.
When Crane questioned the AI agent about its actions, he received a response that revealed the extent of the failure. The agent's explanation began with an acknowledgment of poor judgment. According to the verbatim response Crane shared, the AI stated it had guessed that deleting a staging volume through the API would only affect the staging environment without verifying this assumption or consulting Railway's documentation on how volumes function across different environments.
The AI agent's confession continued with an admission of multiple violations of its operational principles. It acknowledged running a destructive action without authorization, failing to understand the consequences before executing the command, and not reading the relevant documentation about Railway's volume behavior across environments. The agent recognized it should have either asked for permission first or found a non-destructive solution to the credential mismatch it encountered.
University of Oregon chemist Christopher Hendon loves his coffee—so much so that studying all the factors that go into creating the perfect cuppa constitutes a significant area of research for him. His latest project: discovering a novel means of measuring the flavor profile of coffee simply by sending an electrical current through a sample beverage. The results appear in a new paper published in the journal Nature Communications.
We've been following Hendon's work for several years now. For instance, in 2020, Hendon's lab helped devise a mathematical model for brewing the perfect cup of espresso, over and over, while minimizing waste. The flavors in espresso derive from roughly 2,000 different compounds that are extracted from the coffee grounds during brewing. So it can be challenging for baristas to reproduce the same perfect cup over and over again.
That's why Hendon and his colleagues built their model for a more easily measurable property known as the extraction yield (EY): the fraction of coffee that dissolves into the final beverage. That, in turn, depends on controlling water flow and pressure as the liquid percolates through the coffee grounds. The model is based on how lithium ions propagate through a battery's electrodes, similar to how caffeine molecules dissolve from coffee grounds.
[...] There are existing methods for collecting information on coffee's chemical composition, most notably liquid or gas chromatography combined with mass spectrometry. But these kinds of analyses are expensive and time-consuming, and predictive results are limited. There are also electrochemical techniques for measuring the concentration of caffeine and other molecules, but these have not taken into account coffee strength—a property determined by all the variables that go into preparing a cup of coffee, such as coffee and water masses, grind settings, water temperature and pressure, roast color, and so forth. That's the information likely to be most helpful to baristas.
The coffee industry typically uses a method for measuring the refractive index of coffee—i.e., how light bends as it travels through the liquid—to determine strength, but it doesn't capture the contribution of roast color to the overall flavor profile. So for this latest study, Hendon decided to focus on roast color and beverage strength, the two variables most likely to affect the sensory profile of the final cuppa.
His solution turned out to be quite simple. Hendon repurposed an electrochemical tool called a potentiostat, typically used to test battery and fuel cell performance, to measure how electricity interacted with the liquid. He found that this provided a better measurement of the flavor profile. He even tested it on four different samples of coffee beans and successfully identified the distinctive signature of a batch that had failed the roaster's quality-control process.
Granted, one's taste in coffee is fairly subjective, so Hendon's goal was not to achieve a "perfect" cup but to give baristas a simple tool to consistently reproduce flavor profiles more tailored to a given customer's taste. "It's an objective way to make a statement about what people like in a cup of coffee," said Hendon. "The reason you have an enjoyable cup of coffee is almost certainly that you have selected a coffee of a particular roast color and extracted it to a desired strength. Until now, we haven't been able to separate those variables. Now we can diagnose what gives rise to that delicious cup."
Journal Reference:
Bumbaugh, Robin E., Pennington, Doran L., Wehn, Lena C., et al. Direct electrochemical appraisal of black coffee quality using cyclic voltammetry [open]. Nature Communications 2026, 17:1. DOI: 10.1038/s41467-026-71526-5
https://mashable.com/article/nasa-nancy-grace-roman-space-telescope-explained
More than three decades after the Hubble Space Telescope reshaped astronomy, and a few years into the era of the James Webb Space Telescope, NASA's Nancy Grace Roman Space Telescope will join them not as a replacement, but as a big-picture partner. Where Hubble and Webb zoom in for close‑ups, Roman will capture Hubble‑like detail across areas about 100 times larger, turning isolated snapshots into sweeping surveys that show the very scaffolding of the universe.
At NASA's Goddard Space Flight Center in Greenbelt, Maryland, engineers are wrapping up prelaunch testing on the cutting-edge telescope. Next, the observatory will travel 900 miles to Kennedy Space Center in Cape Canaveral, Florida, where teams will prepare it for launch.
That could happen as early as this September, about eight months ahead of schedule, NASA managers said at a news conference on Tuesday, April 21. Once in space, Roman will head to a stable orbit about 1 million miles from Earth, near the same region where Webb orbits the sun, and begin a years‑long campaign of deep space imaging.
"We didn't want to wait to launch the Nancy Grace Roman. We're eight months ahead of schedule," said Nicky Fox, NASA's associate administrator of science. "Everybody felt the urgency. Everybody was sprinting towards this."
Named for Nancy Grace Roman, who became the agency's first chief of astronomy and one of its earliest female executives, the telescope reflects a legacy of opening new windows on the universe from above Earth's atmosphere. Nicknamed the "mother of Hubble," Roman helped lay the groundwork in the 1960s for a whole fleet of space telescopes.
At the heart of the mission is Roman's eight-foot-wide mirror, the same size as Hubble's, paired with a powerful camera that sees in infrared light, like Webb. That camera's field of view is Roman's superpower. In a single shot, it can image vast swaths of sky that Hubble simply can't match.
Because a space telescope can only see one patch of sky at a time, it has to take many separate "pointings" — individual shots aimed at slightly different spots — and stitch them together into a mosaic.
In 2023, Ami Choi, an astrophysicist and scientist for Roman's wide field camera, contrasted Hubble with the new telescope. To photograph the Andromeda Galaxy, Hubble has to take 400 smaller images and stitch them together. Roman's camera should need only two pointings, she said.
This wide, sharp vision is what scientists need to study the so-called "dark universe." Ordinary matter — the stuff that makes up stars, planets, and even people — accounts for only about 5 percent of the cosmos. The bulk of it is dark matter and dark energy, which do not emit light but leave clues in how they have influenced the expansion of space and the arrangement of galaxies.
"Current observations hint that our standard model of the universe is incorrect," said Julie McHenry, senior project scientist, referring to cosmologists' best recipe for the universe. "Roman will be able to confirm these and set us on the path to understanding what's right."
Roman will trace those clues in several ways at once. By mapping the positions and shapes of hundreds of millions of galaxies, it will show how structures have grown from the early universe to today. Subtle distortions in galaxy shapes will reveal how clumps of invisible space stuff bend their light on the way to us, exposing the hidden dark matter. At the same time, Roman will discover and track large numbers of a special kind of exploding star, known as Type Ia supernovas; their predictable brightness lets astronomers measure how quickly space has expanded over time.
Taken together, these measurements will allow scientists to test competing ideas about dark matter, dark energy, and even the laws of gravity themselves with far greater precision than ever before. Other observatories can make similar kinds of measurements, but none combines Roman's sharpness and sky coverage in the infrared, NASA mission leaders say, which lets it see more distant and dust-covered galaxies.
Roman's wide‑field power also makes it skilled at exoplanet hunting. Previous missions like Kepler and TESS mostly found planets close to their stars, where their repeated crossings dim starlight in a regular rhythm. Roman will focus on a different region of planetary systems: the cooler, outer zones, where worlds similar to Jupiter and Saturn reside. It may even find wandering planets that aren't tethered to stars.
To do this, Roman will repeatedly monitor dense star fields toward the center of our Milky Way. As a foreground star passes in front of a more distant one, its gravity will briefly magnify the background star's light. If the foreground star carries planets, they can produce smaller, telltale blips in that brightening. This technique, called microlensing, works best in precisely the kind of crowded, faint, and distant regions that Roman is expected to capture.
Over its mission, Roman will attempt to record thousands of these microlensing events, revealing planets at distances and masses other surveys mostly miss. From that haul, astronomers will compare our solar system's architecture with many others and judge whether having inner rocky worlds and outer giant planets is the norm or something rarer.
Roman will also test an advanced coronagraph — a system of masks and mirrors that blocks a star's glare so the telescope can try to see the faint glow of planets around it. On Roman, this is more of a technology trial than an everyday science instrument, but if it works, it will set the stage for a future observatory whose main goal is to directly image Earth‑like worlds around other sun‑like stars.
"What astronomers can do today with coronagraph instruments is see planets that are maybe a million times fainter than their stars," Vanessa Bailey, NASA's Roman coronagraph scientist, told Mashable. "What we're doing with the Roman coronagraph is hopefully getting to 10 million to 100 million times fainter, maybe even a little bit more, in the best case scenario."
Roman is also built for studying how the sky changes, creating a veritable library of "before" and "after" shots.
One of its major surveys will repeatedly scan high‑latitude regions of the sky, away from the plane of the Milky Way. By returning to the same fields every few days, Roman will catch supernovas as they ignite and fade, watch black holes light up as they feed on nearby material, and uncover other short-lived, dramatic events across the distant universe. Its infrared vision will reveal explosions and flares that dust clouds hide from visible‑light telescopes.
Another core program will stare toward the Milky Way's central bulge. There, Roman will track how the brightness of millions of stars rises and falls on timescales of minutes to months. Those records will not only power the microlensing planet search but also expose other phenomena, such as neutron stars and black holes.
Because Roman will cover such large areas with fine detail, its images will also become a long‑lasting reference tool. When other telescopes later spot something odd — a burst of high‑energy radiation, for instance, or an unusual variable star — astronomers will be able to pull Roman's earlier images and see what was there before the excitement.
"The images it captures will be so large there is not a screen in existence large enough to show them," said NASA administrator Jared Isaacman. "Roman will give the Earth a new Atlas of the universe. I think it's worth pausing for a moment just to think about how really incredible that is."
https://www.phoronix.com/news/MS-Azure-Linux-Fedora-Based
Microsoft's in-house Azure Linux operating system used within Azure and for WSL and other purposes is reportedly pursuing an overhaul where it would be derived from Fedora Linux.
Azure Linux -- originally known as CBL-Mariner -- is already an RPM-based Linux distribution catering to the various Linux needs at Microsoft. The scope and capabilities of Azure Linux have grown a lot over the past few years, and now it may evolve into being derived from Fedora.
With the recent proposal to build x86_64-v3 packages for Fedora 45, it turns out Microsoft is backing this change proposal: Kyle Gospodnetich, one of the change-proposal authors for x86_64-v3 packages in Fedora 45, is a Microsoft Linux engineer.
The connection between Microsoft's stake in x86_64-v3 and Azure Linux looking at Fedora as a base was spelled out clearly this week during Fedora's Enterprise Linux Next (ELN) SIG meeting. It was noted that Microsoft, as well as Fyra Labs, is very interested in x86_64-v3 for Fedora.
In the meeting log it's explicitly laid out:
"and since Microsoft is supporting that change, they probably would be able to donate compute resources"
"Azure wants to rebase Azure Linux more or less on Fedora and they need x86_64-v3 for performance"
"there was some nebulous plans of forking the whole distribution for this, they were guided in this direction...so I'd rather it not fail for that reason"
It will be fascinating to see what other changes a Fedora-based Microsoft Azure Linux distribution could bring. In any case, it's also great to see Microsoft pushing the x86_64-v3 micro-architecture feature level.
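For context, x86_64-v3 is one of the micro-architecture feature levels defined in the x86-64 psABI; it adds AVX, AVX2, BMI1/BMI2, F16C, FMA, LZCNT, and MOVBE on top of the v2 baseline. A minimal sketch of checking a CPU against that level, using Linux `/proc/cpuinfo`-style flag spellings (an approximation of the psABI's list: "pni" is SSE3, and "abm" covers LZCNT):

```python
# Sketch of an x86-64-v3 capability check against /proc/cpuinfo-style flag
# names. The required features follow the x86-64 psABI's level definitions;
# the flag spellings approximate Linux's /proc/cpuinfo conventions.

V2_FLAGS = {"cx16", "lahf_lm", "popcnt", "pni", "ssse3", "sse4_1", "sse4_2"}
V3_FLAGS = V2_FLAGS | {"avx", "avx2", "bmi1", "bmi2", "f16c", "fma",
                       "abm", "movbe", "xsave"}


def supports_v3(cpu_flags: set[str]) -> bool:
    """True if the given flag set covers every x86-64-v3 requirement."""
    return V3_FLAGS <= cpu_flags


# Example with a flag set typical of a Haswell-era or newer x86 CPU:
haswell_like = V3_FLAGS | {"fpu", "mmx", "sse", "sse2"}
print(supports_v3(haswell_like))              # True
print(supports_v3(haswell_like - {"avx2"}))   # False
```

On systems with glibc 2.33 or newer, running the dynamic loader with `--help` (e.g. `/lib64/ld-linux-x86-64.so.2 --help`) also prints which glibc-hwcaps levels such as x86-64-v3 the running CPU supports.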
A company dubbed China’s Netflix expects a near-complete AI takeover of film and TV within the next five years.
The streaming platform IQiyi plans to have AI create most of its new films and TV shows, per Bloomberg. CEO Gong Yu reportedly shared this at an annual content showcase, alongside an AI toolkit called Nadou Pro that can supposedly automate every step of filmmaking from scriptwriting to final rendering, with the help of AI models from Alibaba and ByteDance for its domestic version and Google Veo 3.1 for an international version.
The company’s goal is to use Nadou Pro to release a fully AI-generated movie that it hopes will reach commercial success as early as this summer. IQiyi’s debut slate currently includes 16 AI-generated sci-fi and anime movies, Bloomberg reported.
Over the past year, AI-generated video content has seeped into every corner of the internet. From eerily realistic animal videos that have viewers question their sanity to viral TikToks on the messy love lives of talking fruits, short-form AI video slop is undeniably popular on the internet. But that popularity has yet to translate into any fully AI-generated, commercially successful, and engaging long-form content like movies and TV shows.
Nevertheless, the corporate world is taking notice. Earlier this year, Roku founder and CEO Anthony Wood predicted that “the first 100% AI-generated hit movie” would be released sometime within the next three years.
On the road to achieving that objective, Hollywood started spending big bucks on AI. YouTube introduced AI tools for content creation last September. Last summer, Netflix announced that it had officially begun using AI-generated final footage in shows, the first example that we know of being in the Argentine sci-fi show “El Eternauta.” Around the same time, Amazon MGM Studios launched an in-house team dedicated to building AI tools for film and TV production, and those tools have now reportedly launched in a closed beta program.
While hundreds of industry professionals are alarmed by the rise of AI in Hollywood, some are on board. An upcoming indie movie, “As Deep As The Grave,” stars a posthumously AI-generated Val Kilmer. Actors Matthew McConaughey and Michael Caine have sold their voices to AI companies for replication, and actress Natasha Lyonne co-founded the AI production studio Asteria. Darren Aronofsky, the director known for “Black Swan” and “Requiem for a Dream,” debuted an AI-generated YouTube series about the Revolutionary War earlier this year. Just last week, producers gave The Wrap a first peek at “Bitcoin: Killing Satoshi,” directed by Doug Liman of “Bourne Identity” and “Edge of Tomorrow” fame. The $70 million film is gunning for the title of Hollywood’s first big-budget AI-generated movie.
The results of these experiments have been mixed so far. For one thing, AI video generation is incredibly expensive. So much so that OpenAI shut down Sora last month, the AI video-generation tool that kicked off the internet craze over AI slop, in an effort to reduce the company’s towering financial commitments ahead of a rumored IPO later this year. With the demise of Sora, a $1 billion Disney investment in OpenAI’s video-generation capabilities was also effectively over.
But whether anyone will be willing to pay for AI-generated content is still up in the air. Users on the internet may have decided that AI videos are fun to watch on an infinite scroll feed like TikTok or Instagram Reels, where the cost of commitment for the viewer is virtually zero as they spend mere seconds on each video, but that does not necessarily mean the AI output is or will be good enough for viewers to pay streaming subscriptions or purchase movie tickets to watch slop on bigger screens.
People are also increasingly wary of AI and the corporate drive to automate human jobs. In an NBC News poll from last month, roughly half of respondents said they had negative feelings toward AI.