


Do you pay for premium AI subscriptions?

  • Yes
  • No
  • I use someone else's paid one
  • What, in THIS economy?
  • I don't use AI, you insensitive clod!

[ Results | Polls ]
Comments:78 | Votes:230

posted by hubie on Friday April 17, @06:06PM   Printer-friendly
from the fuzzy-bits-and-bobs dept.

https://arstechnica.com/science/2026/04/physicists-think-theyve-resolved-the-proton-size-puzzle/

There has been considerable debate among physicists over the last 15 years about conflicting measurements of the charge radius of a hydrogen atom's proton
[...]
The discrepancy hinted at possible exciting new physics. Now the debate seems to be winding down with the latest experimental measurements, described in two recent papers published in the journals Nature and Physical Review Letters, respectively. And the evidence has tilted in favor of a smaller proton radius and against new physics.
[...]
As previously reported, most popularizations discussing the structure of the atom rely on the much-maligned Bohr model, in which electrons move around the nucleus in circular orbits. But quantum mechanics gives us a much more precise (albeit weirder) description.
[...]
Hydrogen atoms are the simplest atoms, with a nucleus of a single proton orbited by an electron, so that's typically what physicists have used for their experiments to measure the proton's charge radius. For a long time, the accepted value was 0.876 femtometers—a "world average"
[...]
Muon spectroscopy measurements first caused the problem back in 2010. Physicists at the Max Planck Institute of Quantum Optics used muonic hydrogen, replacing the electron orbiting the nucleus with a muon, the electron's heavier (and very short-lived) sibling.
[...]
The physicists expected to measure roughly the same radius for the proton as prior experiments, only with less uncertainty. There should be no difference (other than mass and lifetime) between the electron and the muon, theoretically. Instead, they measured a significantly smaller proton radius of 0.841 femtometers, 0.00000000000003 millimeters smaller, well outside the established error bars. It was five standard deviations from the value obtained by other methods.
[...]
Subsequent measurements by various groups were inconclusive. For instance, in 2013, the same international team performed muon-based experiments that confirmed their 2010 value, producing a measurement of 0.84 femtometers for the proton's radius, with a discrepancy of 7 sigma.
[...]
However, two experiments using regular hydrogen to measure the proton radius produced mixed results: A 2017 study also confirmed the 2010 result, while a 2018 measurement was in line with the larger value before the 2010 experiment.
[...]
That brings us to the latest two papers, both of which involved experiments with hydrogen atoms in a vacuum chamber.
[...]
Based on the combined results, the proton has a radius of about 0.84 femtometers, or less than 1 million-billionth of a meter, once again in keeping with the 2010 measurement that kicked off the debate.

"The proton radius should be a universal property; it should give the same result no matter how you look at it," Juan Rojo, a physicist at Vrije University Amsterdam in the Netherlands, who was not involved in either experiment, told New Scientist. "This is why these two papers are quite nice, because they provide different perspectives to the same number."
[...]
[T]his is disappointing for the discovery of new physics, but it is exciting that we are performing such stringent tests of the Standard Model.


Original Submission

posted by janrinok on Friday April 17, @01:20PM   Printer-friendly

Bankers and bank regulators are scrambling to figure out what to do:

High-ranking members of Britain's government and banking sector are reportedly scrambling to figure out what to do about cybersecurity holes found by Claude Mythos Preview, Anthropic's new automated system for making tech elites—and now financial elites—wet their pants.

In case you weren't aware, last week Anthropic declared its unreleased model, Claude Mythos Preview, scary as heck and simply too powerful to unleash upon the world.

In addition to claiming that Claude Mythos Preview is a sneaky little dickens, a post on Anthropic's frontier red team blog describes it as essentially the world's most dangerous super-hacker. The passage below summarizes the apparent hacking hazard pretty well. (Note that "zero-day vulnerabilities" are vulnerabilities in code known only to the person or AI agent who found them):

During our testing, we found that Mythos Preview is capable of identifying and then exploiting zero-day vulnerabilities in every major operating system and every major web browser when directed by a user to do so. The vulnerabilities it finds are often subtle or difficult to detect. Many of them are ten or twenty years old, with the oldest we have found so far being a now-patched 27-year-old bug in OpenBSD—an operating system known primarily for its security.

Now, according to the Financial Times, the Bank of England and regulators at the U.K.'s Financial Conduct Authority and Treasury will hold "urgent discussions" with that country's National Cyber Security Centre to figure out a course of action. Anonymous sources who spoke to the Financial Times said (quite Britishly) that a planning meeting will be held "in the next fortnight."

How scared is the U.K.? This issue is also the next big priority of the UK's "Cross Market Operational Resilience Group," according to the Financial Times. That group includes members of the U.K.'s National Cyber Security Centre, the Financial Conduct Authority (their equivalent of the SEC), and His Majesty's Treasury. It's co-chaired, the Financial Times says, by someone at the Bank of England with the title "executive director for supervisory risk."

One bit of verbiage from the Financial Times is remarkable. It describes discussions about "the risks posed by the latest AI model from Anthropic." Anthropic might quibble slightly, since it has framed the secretive release of Claude Mythos Preview only through its "Project Glasswing" initiative as a way to warn stakeholders about future dangers down the line, not as a sort of global cybersecurity hostage situation.

Some, like rationalist blogger Zvi Mowshowitz, have expressed concern that Anthropic's claims are being communicated poorly. Mowshowitz wrote that Anthropic is "mixing valid points and helpful analysis with overstatement and hype."

For his part, Yann LeCun, the former head AI researcher at Meta, has been reposting X posts claiming that big, bad Mythos is actually no big deal.

And it should be noted that as far as anyone knows, no one outside of Anthropic has so far been allowed the sort of unfettered access to the model it would take to attempt a more objective form of analysis.


Original Submission

posted by janrinok on Friday April 17, @08:36AM   Printer-friendly
from the Once-You-Go-Clippy-You-Can-Not-Go-Back dept.

https://www.theguardian.com/technology/2026/apr/13/meta-ai-mark-zuckerberg-staff-talk-to-the-boss

Meta is turning Zuckerberg into Clippy so he can answer all your queries and give you feedback and support ... I'm sure the staff will just feel the motivation flow over them as their great leader appears to them in person, or in avatar form as their very own Clippy. Zucky?

The AI clone of Zuckerberg, Meta's founder and chief executive, is being trained on his mannerisms and tone as well as his public statements and thoughts on company strategy.

[...] Synthesia, a $4bn UK-based startup that makes realistic video avatars, said the idea of a senior company executive using AI to increase their internal presence was not science fiction any more.

"When you add realistic AI video and voice, engagement and retention go up significantly," said a Synthesia spokesperson. "People work better when the information they need is delivered by a familiar face or voice."

Until Zuckerberg launches his AI self, however, he will have to present in person at meetings with thousands of Meta staff, such as the one he carried out in 2023 two days after he announced that 10,000 employees would be laid off. Then, the tech chief was questioned by "rattled" staff about job security and the future of remote working.


Original Submission

posted by janrinok on Friday April 17, @03:52AM   Printer-friendly
from the robot-overlords dept.

https://arstechnica.com/ai/2026/04/ukraines-military-robot-surge-aims-to-offset-drone-risks-to-humans/

Ukrainian ground robots and drones have demonstrated how to overcome a Russian military position by themselves while forcing the surrender of Russian soldiers, claimed Ukrainian President Volodymyr Zelenskyy.
[...]
The claim by Zelenskyy has not been independently verified but was accompanied by a promotional video in which he described Ukraine's military robots as having completed over 22,000 missions in the last three months. Ukraine's defense ministry also recently described a threefold increase in the Ukrainian military's uncrewed ground vehicle missions over the last five months, with more than 9,000 robotic missions conducted in March, according to Scripps News.
[...]
Zelenskyy's statement may refer to an event that occurred in the Kharkiv Oblast in northeastern Ukraine last year, according to The Independent. It referenced a statement by the Ukrainian 3rd Separate Assault Brigade detailing how the unit had used flying drones and "kamikaze" ground robots to attack fortified Russian frontline positions at that time.
[...]
The increased emphasis on battlefield robots coincides with how deadly flying drones have made the modern battlefield for human soldiers. Persistent drone surveillance and drone strikes have created a "kill zone" stretching 12 miles (20 kilometers) beyond the frontline positions as of February 2026, forcing individual soldiers to hunker down or rely on nighttime darkness, anti-thermal cloaks, or foggy conditions to move about without risking a drone strike. Such drones are now inflicting the majority of battlefield casualties on both sides as the full-scale war enters its fifth year.
[...]
By comparison, ground robot usage in the Russo-Ukrainian war has been relatively modest, with Ukraine reporting thousands of ground robot missions per month versus hundreds of thousands of drone sorties per month. Yet the latest numbers suggest the Ukrainian military has stepped up its effort to deploy more robots for supply runs and medical evacuations, which can reduce human exposure to drone threats.
[...]
One example of such robots is the Droid TW 12.7 developed by the Ukrainian company DevDroid. As described in the company's marketing material, the tracked robot is armed with an M2 Browning machine gun mounted on a remotely controlled turret and capable of traveling up to 15 miles (25 kilometers) at a top speed equivalent to an adult's walking pace.
[...]
A deputy battalion commander of Ukraine's 38th Marine Brigade told The Kyiv Independent that robots attempting to evacuate wounded soldiers failed to reach the positions in four out of five cases due to such complicating factors.

Like drones, robots can also face communication challenges from signal loss and enemy electronic warfare, according to the Lowy Institute.
[...]
The commander of Ukraine's 3rd Army Corps suggested that if military units incorporate more robots, they could reduce their infantry ranks by up to 30 percent by the end of this year. If Ukraine succeeds in that goal, it would mark another notable step for the growing robotic presence on the battlefield.


Original Submission

posted by janrinok on Thursday April 16, @11:07PM   Printer-friendly

Bitcoin's blockchain is a public ledger. Every block header, every nonce, every coinbase transaction, every timestamp is visible to anyone running a full node. Most people look at the price. The data itself tells a different story.

Starting at block 142,312 (approximately early 2011), a persistent anomaly appears in the chain: 37,393 blocks with no pool tag in the coinbase, spanning 14 years, appearing in 2,877 distinct burst episodes that cluster around moments when the mining pool coordination graph is restructuring. These are not scattered solo miners picking up scraps. They are a structured, continuous presence.

Every mining pool has a distinctive nonce distribution — the hardware, work distribution software, and stratum proxy configuration create a statistical fingerprint. KL divergence measures how different two distributions are. The anonymous miner scores 0.0003 against F2Pool. The next closest pool scores 0.01+. The coinbase data confirms it: same template, same extra-nonce encoding, same byte layout — with the pool identification tag stripped out. These are F2Pool blocks with the name removed.
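The fingerprint comparison described above can be sketched in a few lines. This is an illustrative toy, not the article's actual pipeline: the nonce samples below are hypothetical, and a real analysis would bucket actual header nonces pulled from full-node data.

```python
import math
from collections import Counter

def kl_divergence(p_counts, q_counts, bins):
    """D_KL(P || Q) over a shared set of bins, with small smoothing
    so bins unseen in one sample don't cause division by zero."""
    eps = 1e-9
    p_total = sum(p_counts.values()) + eps * len(bins)
    q_total = sum(q_counts.values()) + eps * len(bins)
    d = 0.0
    for b in bins:
        p = (p_counts.get(b, 0) + eps) / p_total
        q = (q_counts.get(b, 0) + eps) / q_total
        d += p * math.log(p / q)
    return d

# Hypothetical nonce samples, bucketed by the nonce's top byte (0-255):
f2pool = Counter({0x0A: 500, 0x1B: 480, 0x2C: 510})
anon   = Counter({0x0A: 490, 0x1B: 505, 0x2C: 495})  # near-identical shape
other  = Counter({0x0A: 900, 0x1B: 50,  0x2C: 540})  # different hardware mix

bins = range(256)
print(kl_divergence(anon, f2pool, bins))   # near zero: matching fingerprint
print(kl_divergence(other, f2pool, bins))  # much larger: different pool
```

A score near zero (like the 0.0003 cited above) means the two distributions are statistically almost indistinguishable, while the next-closest pool's 0.01+ is an order of magnitude farther away.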

Someone has been reading Bitcoin's 587 miner-controlled bits per block header—reconstructing pool attribution, coordination patterns, and regime shifts in real time—for 14 years. Every number in the article is derivable from publicly available blockchain data. The data is there. Look at it: https://subtracted.org/bitcoin-overseer


Original Submission

posted by janrinok on Thursday April 16, @06:20PM   Printer-friendly

Judge said ban, which originated in Reconstruction era to thwart liquor tax evasion, actually reduced tax revenue:

A US appeals court on Friday declared a nearly 158-year-old federal ban on home distilling to be unconstitutional, calling it an unnecessary and improper means for Congress to exercise its power to tax.

The fifth US circuit court of appeals in New Orleans ruled in favor of the non-profit Hobby Distillers Association and four of its 1,300 members.

They argued that people should be free to distill spirits at home, whether as a hobby or for personal consumption, including, in one instance, to create an apple-pie-vodka recipe.

The ban was part of a law passed during the US's post-civil war Reconstruction era in July 1868, in part to thwart liquor tax evasion, and subjected violators to up to five years in prison and a $10,000 fine.

Writing for a three-judge panel, the circuit judge Edith Hollan Jones said the ban actually reduced tax revenue by preventing distilling in the first place, unlike laws that regulated the manufacture and labeling of distilled spirits on which the government could collect taxes.

She also said that under the government's logic, Congress could criminalize virtually any in-home activity that might escape notice from tax collectors, including remote work and home-based businesses.

"Without any limiting principle, the government's theory would violate this court's obligation to read the constitution carefully to avoid creating a general federal authority akin to the police power," Jones wrote.

The US justice department had no immediate comment. Another defendant, the treasury department's alcohol and tobacco tax and trade bureau, did not immediately respond to a request for comment.

Devin Watkins, a lawyer representing the Hobby Distillers Association, called the ruling an important decision about the limits of federal power.

Andrew Grossman, who argued the non-profit's appeal, called the decision "an important victory for individual liberty" that allows the plaintiffs to "pursue their passion to distill fine beverages in their homes".

"I look forward to sampling their output," he said.

The decision upheld a July 2024 ruling by the US district judge Mark Pittman in Fort Worth, Texas. He put his ruling on hold so the government could appeal.

Is it legal to distill spirits at home in other parts of the world?


Original Submission

posted by hubie on Thursday April 16, @01:36PM   Printer-friendly
from the all-your-tokens-belongs-to-me dept.

https://www.cio.com/article/4155404/ai-token-freeloaders-are-coming-for-your-customer-support-chatbot.html

Conversation framing, or social-engineering the customer-support AI bots into doing things that burn company tokens. One just can't stop laughing.

Users are tricking enterprise chatbots into performing complex AI computations unrelated to customer support, with potentially costly governance and ROI ramifications.

He adds: "Anyone who's spent five minutes with these tools knows you can steer past a system prompt with basic conversational framing, which is exactly what [is happening to enterprises today]. The system authenticates the session, not the intent."

"A normal customer service interaction of 'Where's my order? What are your hours?' runs maybe 200 to 300 tokens. Someone asking the bot to reverse a linked list in Python is generating more than 2,000 tokens easy. That's roughly a 10x cost multiplier per session," says Nik Kale, member of the Coalition for Secure AI (CoSAI) and ACM's AI Security (AISec) program committee.
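Kale's per-session arithmetic is easy to reproduce. A minimal sketch, assuming a hypothetical flat per-token price (real pricing varies by model, vendor, and input/output split):

```python
# Assumed flat rate for illustration only; not a real vendor quote.
RATE_PER_1K_TOKENS = 0.002  # hypothetical $ per 1,000 tokens

def session_cost(tokens, rate_per_1k=RATE_PER_1K_TOKENS):
    """Dollar cost of a chat session that consumes `tokens` tokens."""
    return tokens / 1000 * rate_per_1k

support = session_cost(250)    # "Where's my order?" -- ~200-300 tokens
freeload = session_cost(2000)  # "Reverse a linked list in Python"
print(freeload / support)      # 8.0 with these round numbers
```

With 2,000+ tokens measured against the low end of the 200-300 range, the multiplier reaches the roughly 10x figure Kale cites.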


Original Submission

posted by hubie on Thursday April 16, @08:52AM   Printer-friendly

An amendment to a recent lawsuit could change the narrative significantly, and just weeks before trial:

Does “injecting chaos into the proceedings” sound like something Elon Musk of all people would do during a lawsuit? Well I hope you’re sitting down because he’s being accused of doing just that in a court filing from OpenAI reported by Bloomberg on Saturday.

Earlier this week, Musk amended his lawsuit against OpenAI and Microsoft. He's still seeking an eye-popping $134 billion for allegedly engaging in what he characterizes as fraud by switching from non-profit to for-profit status. Now, however, he's asking for potential damages to be paid not to him, the richest person in the world, but instead to OpenAI's nonprofit.

He also wants Sam Altman, the company’s CEO, and Greg Brockman, its president, to be tossed out.

OpenAI says this is Musk “trying to recast his public narrative about his lawsuit.” Indeed it is a significant change to how the story might be framed. Rather than a zillionaire seeking yet another giant sum of money, it becomes a zillionaire seeking to restore the corporate structure of a firm he was allegedly wronged by.

OpenAI characterized Musk making such a move just weeks before a trial set to start later this month as a “legal ambush” that is “legally improper and factually unsupported.” The filing also says, “Musk’s proposed amendment would require the presentation of different evidence and different witnesses than the case he sponsored until three days ago.”


Original Submission

posted by hubie on Thursday April 16, @04:05AM   Printer-friendly

https://gizmodo.com/this-memory-chip-survives-temperatures-hotter-than-lava-2000745819

"A new memory chip prototype, described in a recent Science paper, may offer a practical solution to this issue. According to the research team, the chip blueprint is a tiny sandwich of extreme materials that works reliably even at temperatures of 1,300 degrees Fahrenheit (about 700 degrees Celsius)—and probably could function beyond these temperatures, as that number merely represents the maximum provided by the testing equipment."

[...] "The chip is what's called a memristor, or an electrical device that both stores information and performs computing operations. The component is a tiny "sandwich" of three layers: tungsten on the top, hafnium oxide ceramic in the middle, and graphene on the bottom. Notably, tungsten has the highest melting point of any metal at 6,192 degrees Fahrenheit (3,422 degrees Celsius), whereas graphene is a flat sheet of carbon just one atom thick.

These unique physical properties enabled the creation of the novel chip, which ran on a measly 1.5 volts to process data for over 50 hours at 1,300 degrees Fahrenheit, the team explained. In that time, the chip powered through more than one billion switching cycles without needing any external modifications. "

Journal Reference: Zhao et al., Science, 26 Mar 2026 First Release DOI: 10.1126/science.aeb9934


Original Submission

posted by hubie on Wednesday April 15, @11:20PM   Printer-friendly
from the dumasses-gotta-dumass dept.

In a classic case of blaming the messenger, teenagers are being sent to prison because of poor security.

On a recent Tuesday morning, as his parents were driving him to the federal prison in Connecticut where he'll be locked up for the foreseeable future, 20-year-old Matthew Lane sent a text message to ABC News.

"It's extremely sad, and I'm just scared," he wrote.

Barely a year earlier, while still a teenager, he helped launch what's been described as the biggest cyberattack in U.S. education history -- a data breach that concerned authorities so much, it prompted briefings with senior government officials inside the White House Situation Room.

My take? If a teenager can hack your system and steal your data then don't blame the kid. What about the 20 something government sponsored college educated folks in other lands that work in groups? They'll be able to get into your system and just sit for months or years on end, stealing what they want, never being detected.

CSB. In the late 70s I was on a BBS when a friend said "call this number with your modem". It was the Montgomery Wards order fulfillment site. I ordered a refrigerator, entered delivery information, then chickened out. I have no idea if that phone number contributed to Monkey Wards' demise, but I'm sure it didn't help.


Original Submission

posted by hubie on Wednesday April 15, @06:37PM   Printer-friendly

The scheme follows a string of security failures at SK Telecom, KT, and LG Uplus:

South Korea's Ministry of Science and ICT said on Thursday that SK Telecom, KT, and LG Uplus — the country’s three major carriers — will provide more than seven million mobile subscribers with unmetered 400 Kbps data once their monthly allowances run out. The program was first floated as part of a broader package of consumer-protection measures assembled in parallel with the government's response to spiking memory and PC component prices. Deputy Prime Minister and Minister for Science and ICT Bae Kyung-hoon announced it as one of many new obligations imposed on the three carriers in response to a sequence of security failures over the past year, calling unlimited, universal access one of the “basic telecommunications rights” that operators are expected to fund themselves.

400 Kbps might not sound like much, especially given that 5G can reach peak speeds in excess of 1 Gbps and standard-definition video streaming requires speeds of around 5 Mbps as a baseline, but it’s more than enough for very rudimentary activities like messaging and VoIP audio, or two-factor authentication.
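The bandwidth figures above translate directly into transfer times. A back-of-envelope sketch (the payload sizes are illustrative assumptions, and Kbps means kilobits per second on an ideal link with no protocol overhead):

```python
LINK_KBPS = 400  # the fallback speed

def transfer_seconds(size_bytes, link_kbps=LINK_KBPS):
    """Seconds to move `size_bytes` over an ideal link of `link_kbps`."""
    return size_bytes * 8 / (link_kbps * 1000)

print(transfer_seconds(1_000))      # ~1 KB text/2FA message: 0.02 s
print(transfer_seconds(2_000_000))  # ~2 MB web page: 40.0 s
# SD video streaming needs ~5 Mbps, i.e. 12.5x more than the fallback:
print(5_000 / LINK_KBPS)            # 12.5
```

Messaging and authentication codes go through essentially instantly; a typical modern web page takes the better part of a minute, and video streaming is out of reach, which matches the "very rudimentary activities" framing.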

It’s worth noting that the fallback to 400 Kbps only applies once a customer burns through their paid monthly cap, replacing the hard cutoff or overage charges that previously kicked in on affected plans.

Alongside the obligation to provide unmetered 400 Kbps access, the three operators have committed to increasing data and calling allowances for seniors, upgrading Wi-Fi services on public transport, and introducing 5G plans priced at $13.50 or below. Bae also pushed the carriers to direct more capital toward network buildout for AI workloads.

"Having gone through last year's hacking incidents, the weight of the telecom companies' responsibilities and roles has become even clearer," Bae said in a press release, emphasizing, “We have now reached a point where we must move beyond pledges not to repeat past mistakes and respond with renewal and contribution at a level of complete transformation that the public can tangibly feel." He went on to say that it’s important for the government to contribute to people’s livelihoods, including by guaranteeing what he called “basic telecommunications rights” for all citizens.

Each of the three network operators has been hit by a significant security incident in recent months. SK Telecom suffered a large-scale subscriber data leak, whereas KT was found to have deliberately pushed malware to roughly 600,000 of its own subscribers who were using a third-party BitTorrent-based file-sharing service, resulting in missing files and disabled PCs.


Original Submission

posted by jelizondo on Wednesday April 15, @01:52PM   Printer-friendly

https://phys.org/news/2026-04-electrode-technology-efficiency-plastic-precursors.html

In the process of converting carbon dioxide into useful chemicals such as ethylene—a key precursor for plastics—a major challenge has been the flooding of electrodes, where electrolyte penetrates the electrode structure and reduces performance. KAIST researchers have developed a new electrode design that blocks water while maintaining efficient electrical conduction and catalytic reactions, thereby improving both efficiency and stability.

A research team led by Professor Hyunjoon Song from the Department of Chemistry has developed a novel electrode structure utilizing silver nanowire networks—ultrafine silver wires arranged like a spiderweb—to significantly enhance the efficiency of electrochemical CO₂ conversion to useful chemical products. The research was published in Advanced Science.

In electrochemical CO₂ conversion processes, a long-standing issue has been flooding, where the electrode becomes saturated with electrolyte, reducing the space available for CO₂ to react. While hydrophobic materials can prevent water intrusion, they typically suffer from low electrical conductivity, requiring additional components and complicating the system.

To overcome this, the research team designed a three-layer electrode architecture that simultaneously repels water and enables efficient charge transport. The structure consists of a hydrophobic substrate, a catalyst layer, and an overlaid silver nanowire (Ag NW) network, which acts as an efficient current collector while preventing electrolyte flooding.

A key finding of this study is that the silver nanowires do more than just conduct electricity—they actively participate in the chemical reaction. During CO₂ reduction, the silver nanowires generate carbon monoxide (CO), which is then transferred to adjacent copper-based catalysts, where further reactions occur.

This creates a tandem catalytic system, in which two catalysts cooperate sequentially, significantly enhancing the production of multi-carbon compounds such as ethylene.

The electrode demonstrated outstanding performance. It achieved 79% selectivity toward C₂₊ products in alkaline electrolytes and 86% selectivity in neutral electrolytes, representing a world-leading level. It also maintained stable operation for more than 50 hours without performance degradation.

These results indicate that most of the converted products are the desired chemicals, while also overcoming the durability limitations of conventional systems.

Professor Hyunjoon Song stated, "This study is significant in showing that silver nanowires not only serve as electrical conductors but also directly participate in chemical reactions," adding, "This approach provides a new design strategy that can be extended to converting CO₂ into a wide range of valuable products such as ethanol and fuels."

Provided by The Korea Advanced Institute of Science and Technology (KAIST)

Jonghyeok Park et al, Overlaid Conductive Silver Nanowire Networks on Gas Diffusion Electrodes for High-Performance Electrochemical CO2-to-C2+ Conversion, Advanced Science (2026). DOI: 10.1002/advs.75003

Journal information: Advanced Science


Original Submission

posted by jelizondo on Wednesday April 15, @09:07AM   Printer-friendly
from the the-sheriff-is-in-town dept.

https://www.tomshardware.com/software/linux/linux-lays-down-the-law-on-ai-generated-code-yes-to-copilot-no-to-ai-slop-and-humans-take-the-fall-for-mistakes-after-months-of-fierce-debate-torvalds-and-maintainers-come-to-an-agreement

GZDoom, the over-20-year-old 3D accelerated source port of Doom, has now been relegated to "Historical" status after a battle over AI-generated code last year.

Legal headaches aside, project maintainers have also been fighting a losing battle against sheer volume. The open-source world is currently drowning in what the community has dubbed "AI slop." The creator of cURL had to close bug bounties after being flooded with hallucinated code, whiteboard tool tldraw began auto-closing external PRs in self-defense, and projects like Node.js and OCaml have seen massive, >10,000-line AI-generated patches spark existential debates among maintainers.

The cultural friction of undisclosed AI code has been even more volatile. Late last year, NVIDIA engineer and kernel maintainer Sasha Levin faced massive community backlash after it was revealed he submitted a patch to kernel 6.15 entirely written by an LLM without disclosing it, including the changelog. While the code was functional, it included a performance regression despite being reviewed and tested. The community pushed back hard against the idea of developers slapping their names on complex code they didn't actually write, and even Torvalds admitted the patch was not properly reviewed, partially because it was not labeled as AI-generated.

The GZDoom incident and the Sasha Levin backlash highlight exactly why the Linux kernel's new policy is so vital. Most of the developer community is less angry about the use of AI and more frustrated about the dishonesty surrounding it. By demanding an Assisted-by tag and enforcing strict human liability, the Linux kernel is attempting to strip the emotion out of the debate. Torvalds and the maintainers are acknowledging reality: developers are going to use AI tools to code faster, and trying to ban them is like trying to ban a specific brand of keyboard.

The bottom line is, if the code is good, then it's good. If it's hallucinatory AI slop that breaks the kernel, the human who clicked "submit" is the one who will have to answer to Linus Torvalds. In the open-source world, that's about as strong a deterrent as you can get.


Original Submission

posted by jelizondo on Wednesday April 15, @04:22AM   Printer-friendly

The AI Great Leap Forward:

In 1958, Mao ordered every village in China to produce steel. Farmers melted down their cooking pots in backyard furnaces and reported spectacular numbers. The steel was useless. The crops rotted. Thirty million people starved.

In 2026, every other company is issuing top-down mandates on AI transformation.

Same energy.

The rallying cry of the Great Leap Forward was 超英趕美 — surpass England, catch up to America. Every province, every village, every household was expected to close the gap with industrialized Western nations by sheer force of will. Peasants who had never seen a factory were handed quotas for steel production. If enough people smelted enough iron, China would become an industrial power overnight. Expertise was irrelevant. Conviction was sufficient.

The mandate today is identical, just swap the nouns. Every company, every function, every individual contributor is expected to close the AI gap. Ship AI features. Build agents. Automate workflows. That nobody on the team has ever trained a model, designed an evaluation system, or debugged a retrieval system is beside the point. Conviction is sufficient.

So everyone builds. PMs build AI dashboards. Marketing builds AI content generators. Sales ops builds AI lead scorers. Software engineers are building AI and data solutions that look pixel-perfect and function terribly. The UI is clean. The API is RESTful. The architecture diagram is beautiful. The outputs are wrong. Nobody checks because nobody on the team knows what correct outputs look like. They've never looked at the data. They've never computed a baseline.

Entire departments are stitching together n8n workflows and calling it AI — dozens of automated chains firing prompts into models, zero evaluation on any of them. These tools are merchants of complexity: they sell visual simplicity while generating spaghetti underneath. A drag-and-drop canvas makes it trivially easy to chain ten LLM calls together and impossibly hard to debug why the eighth one hallucinates on Tuesdays. The people building these workflows have never designed an evaluation pipeline, never measured model drift, never A/B tested a prompt. They don't need to — the canvas looks clean, the arrows point forward, the green checkmarks fire. The complexity isn't avoided. It's hidden behind a GUI where nobody with ML expertise will ever look.

The backyard steel of 1958 looked like steel. It was not steel. Today's backyard AI looks like AI. It is not AI. A TypeScript workflow with hardcoded if-else branches is not an agent. A prompt template behind a REST endpoint is not a model. Calling these things AI is like calling pig iron from a backyard furnace high-grade steel. It satisfies the reporting requirement. It fails every real-world test.

But the most dangerous furnace is the one that produces something functional. Teams are building demoware — pretty interfaces, working endpoints, impressive walkthroughs — with zero validation underneath. Some are in-housing SaaS products by vibe coding some frontend with coding agents: it runs, it has a dashboard, it cost a fraction of the vendor. Klarna announced in 2024 that it would replace Salesforce and other SaaS providers with internal AI-built solutions. What these replacements don't have is data infrastructure, error handling, monitoring, on-call support, security patching, or anyone who will maintain them after the builder gets promoted and moves on.

These apps will win awards at the next all-hands. In two years they'll be unmaintainable tech debt some poor soul inherits and rewrites from scratch. The furnace produced pig iron. Someone stamped "steel" on it. Now it's load-bearing.

Meanwhile, the actual product that customers pay for rots in the field. But hey, 超英趕美. The AI adoption dashboard is green.

The full article is an interesting read.


Original Submission

posted by hubie on Tuesday April 14, @11:35PM   Printer-friendly

https://www.politico.com/news/2026/04/13/missouri-city-council-data-center-00867259

Residents of a St. Louis suburb turned out in droves to unseat four incumbents just days after the council approved a development agreement for a $6 billion data center.

Tuesday's election in Festus, Missouri — a city of 12,000 people along the Mississippi River a half-hour south of St. Louis — is the latest example of growing public backlash against cities agreeing to host hyperscale data centers over the objections of residents concerned about their local impacts.


Original Submission