SoylentNews is people


posted by janrinok on Saturday January 31, @07:43PM   Printer-friendly

Linux after Linus? The kernel community finally drafts a plan for replacing Torvalds

Linus plans to live forever. But just in case he doesn't, there's now a succession plan (though no actual successor).

So, wild speculation time: what happens the day that Linus isn't at the helm any more, for one reason or another? What or who will replace Linus? Is there a list of requirements? Will AI replace Linus? Or some kind of very small shell script? Or will the $corporate overlords take over, so that within a short time frame everything turns to shit?

https://www.zdnet.com/article/linux-community-project-continuity-plan-for-replacing-linus-torvalds/

Linux Kernel Gets Continuity Plan For Post-Linus Era

Arthur T Knackerbracket has processed the following story:

The Linux kernel project has finally answered one of the biggest questions gripping the community: what happens if Linus Torvalds is no longer able to lead it?

The "Linux project continuity document," drafted by Dan Williams, was merged into its documentation last week, just ahead of the release of Linux 6.19-rc7. Notably, the document's path is Documentation/process/conclave.rst.

It notes that the kernel development project is "widely distributed, with over 100 maintainers each working to keep changes moving through their own repositories."

But "the final step... is a centralized one where changes are pulled into the mainline repository." And that is "normally done by Linus Torvalds," though "there are others who can do that work when the need arises."

It delicately adds: "Should the maintainers of that repository become unwilling or unable to do that work going forward (including facilitating a transition), the project will need to find one or more replacements without delay."

So what will happen? The process centers on "$ORGANIZER" who is "the last Maintainer Summit organizer or the current Linux Foundation (LF) Technical Advisory Board (TAB) Chair as a backup."

The document says: "Within 72 hours, $ORGANIZER will open a discussion with the invitees of the most recently concluded Maintainers Summit. A meeting of those invitees and the TAB, either online or in-person, will be set as soon as possible in a way that maximizes the number of people who can participate."

In the event of no summit happening in the previous 15 months, the TAB will choose the attendees. Invitees can bring in other maintainers as needed. The meeting will be chaired by $ORGANIZER and will "consider options for the ongoing management of the top-level kernel repository consistent with the expectation that it maximizes the long term health of the project and its community."

"Next steps" will then be communicated to the broader community through the ksummit@lists.linux.dev mailing list. The Linux Foundation, with guidance from the TAB, will "take the steps necessary to support and implement this plan."

The document follows discussion of succession and continuity at the 2025 Maintainers Summit. This included what would happen during a "smooth transition" if Torvalds decides it is time to move on, as well as the process "should something happen."

While Torvalds has a firm grip on Linux, as the continuity plan notes, he has himself mused on his own future and the fact the maintainer community, at least for the kernel, is getting grayer.

At the Open Source Summit in 2024, he noted: "Some people are probably still disappointed that I'm still here. I mean, it is absolutely true that kernel maintainers are aging."

He was asked by fellow pioneer Dirk Hohndel of Verizon what the community needs to do to ensure the next generation is ready, "so that in 10, 15, 20, 30 years your role can be handed off to someone else."

Torvalds replied: "We've always had a lot of people who are very competent and could step up." As for an aging community, he said new people still come in and become main developers within three years. "It's not impossible at all."

And Torvalds is not the only maintainer making plans as the open source community matures. Some projects have, of course, fallen by the wayside over the years. Some remain embedded in the ecosystem, even as their originators and maintainers get older.

One option is handing them over to a foundation. Others like curl originator Daniel Stenberg have remained fiercely independent – with discreet arrangements to pass on their GitHub details when the time comes.


Original Submission #1 | Original Submission #2

posted by mrpg on Saturday January 31, @03:00PM   Printer-friendly
from the M-→-tmpa-XI-tmpa dept.

https://www.righto.com/2026/01/notes-on-intel-8086-processors.html

In 1978, Intel introduced the 8086 processor, a revolutionary chip that led to the modern x86 architecture. Unlike modern 64-bit processors, however, the 8086 is a 16-bit chip. Its arithmetic/logic unit (ALU) operates on 16-bit values, performing arithmetic operations such as addition and subtraction, as well as logic operations including bitwise AND, OR, and XOR. The 8086's ALU is a complicated part of the chip, performing 28 operations in total.

[...] The ALU is the heart of a processor, performing arithmetic and logic operations. Microprocessors of the 1970s typically supported addition and subtraction; logical AND, OR, and XOR; and various bit shift operations. (Although the 8086 had multiply and divide instructions, these were implemented in microcode, not in the ALU.) Since an ALU is both large and critical to performance, chip architects try to optimize its design. As a result, different microprocessors have widely different ALU designs.

[...] The 8086 is a complicated processor, and its instructions have many special cases, so controlling the ALU is more complex than described above. For instance, the compare operation is the same as a subtraction, except the numerical result of a compare is discarded; just the status flags are updated. The add versus add-with-carry instructions require different values for the carry into bit 0, while subtraction requires the carry flag to be inverted since it is treated as a borrow. The 8086's ALU supports increment and decrement operations, but also increment and decrement by 2, which requires an increment signal into bit 1 instead of bit 0. The bit-shift operations all require special treatment. For instance, a rotate can use the carry bit or exclude the carry bit, while an arithmetic shift right requires the top bit to be duplicated. As a result, along with the six lookup table (LUT) control signals, the ALU also requires numerous control signals to adjust its behavior for specific instructions. In the next section, I'll explain how these control signals are generated.
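The carry-in and borrow adjustments described above can be sketched in a few lines of Python. This is purely an illustrative model of the add/subtract behavior (the mnemonics are standard x86; the structure is mine, not the chip's actual LUT-driven datapath):

```python
def alu_16bit(op, a, b, carry_flag=0):
    """Model a 16-bit add/sub datapath with per-instruction carry-in tweaks."""
    mask = 0xFFFF
    if op == "ADD":          # plain add: carry into bit 0 forced to 0
        total = a + b
    elif op == "ADC":        # add-with-carry: the carry flag feeds bit 0
        total = a + b + carry_flag
    elif op == "SUB":        # subtract: add the one's complement with carry-in 1
        total = a + (~b & mask) + 1
    elif op == "SBB":        # subtract-with-borrow: carry flag is inverted
        total = a + (~b & mask) + (1 - carry_flag)
    else:
        raise ValueError(op)
    result = total & mask
    carry_out = (total >> 16) & 1
    if op in ("SUB", "SBB"): # carry is presented as a borrow: invert again
        carry_out = 1 - carry_out
    return result, carry_out
```

For example, `alu_16bit("SUB", 3, 5)` yields `0xFFFE` (two's-complement -2) with the carry flag set, signalling a borrow, which matches the inverted-carry convention the article describes.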


Original Submission

posted by mrpg on Saturday January 31, @10:19AM   Printer-friendly
from the Chat,-read-it-to-me dept.

Signal president warns AI agents are making encryption irrelevant:

Signal Foundation president Meredith Whittaker said artificial intelligence agents embedded within operating systems are eroding the practical security guarantees of end-to-end encryption (E2EE).

The remarks were made during an interview with Bloomberg at the World Economic Forum in Davos. While encryption remains mathematically sound, Whittaker argued that its real-world protections are increasingly bypassed by the privileged position AI systems occupy inside modern user environments.

Whittaker, a veteran researcher who spent more than a decade at Google, pointed to a fundamental shift in the threat model where AI agents integrated into core operating systems are being granted expansive access to user data, undermining the assumptions that secure messaging platforms like Signal are built on. To function as advertised, these agents must be able to read messages, access credentials, and interact across applications, collapsing the isolation that E2EE relies on.

This concern is not theoretical. A recent investigation by cybersecurity researcher Jamieson O'Reilly uncovered exposed deployments of Clawdbot, an open-source AI agent framework, that were directly linked to encrypted messaging platforms such as Signal. In one particularly serious case, an operator had configured Signal device-linking credentials inside a publicly accessible control panel. As a result, anyone who discovered the interface could pair a new device to the account and read private messages in plaintext, effectively nullifying Signal's encryption.

[...] During the interview, she described how AI agents are marketed as helpful assistants but require sweeping permissions to work. As Whittaker explained, these systems are pitched as tools that can coordinate events or communicate on a user's behalf, but to do so they must access calendars, browsers, payment methods, and private messaging apps like Signal, placing decrypted messages directly within reach of the operating system.


Original Submission

posted by mrpg on Saturday January 31, @05:42AM   Printer-friendly
from the expensive-flying-fish dept.

This article argues that history has shown the YF-23 to be a better stealth fighter than the F-22.

The Northrop YF-23 "Black Widow II" is often remembered as the loser of the Advanced Tactical Fighter (ATF) competition against the Lockheed F-22, but experts argue it offered a superior—albeit different—vision of future air combat.

Prioritizing extreme stealth and supercruise speed over the F-22's agility and thrust vectoring, the YF-23 featured a unique diamond-shaped design and advanced heat suppression optimized for deep penetration missions.

While the Air Force ultimately chose the more versatile F-22 for its dogfighting capabilities, the YF-23's "stealth-first" philosophy proved prophetic, influencing modern designs like the B-21 Raider and validating the shift toward long-range, beyond-visual-range warfare.


Original Submission

posted by mrpg on Saturday January 31, @01:01AM   Printer-friendly
from the firewall-encryption-algorithm dept.

Settlement comes more than 6 years after Gary DeMercurio and Justin Wynn's ordeal began:

Two security professionals who were arrested in 2019 after performing an authorized security assessment of a county courthouse in Iowa will receive $600,000 to settle a lawsuit they brought alleging wrongful arrest and defamation.

The case was brought by Gary DeMercurio and Justin Wynn, two penetration testers who at the time were employed by Colorado-based security firm Coalfire Labs. The men had written authorization from the Iowa Judicial Branch to conduct "red-team" exercises, meaning attempted security breaches that mimic techniques used by criminal hackers or burglars.

[...] Within minutes, deputies arrived and confronted the two intruders. DeMercurio and Wynn produced an authorization letter—known as a "get out of jail free card" in pen-testing circles. After a deputy called one or more of the state court officials listed in the letter and got confirmation it was legit, the deputies said they were satisfied the men were authorized to be in the building. DeMercurio and Wynn spent the next 10 or 20 minutes telling what their attorney in a court document called "war stories" to deputies who had asked about the type of work they do.

When Sheriff Leonard arrived, the tone suddenly changed. He said the Dallas County Courthouse was under his jurisdiction and he hadn't authorized any such intrusion. Leonard had the men arrested, and in the days and weeks to come, he made numerous remarks alleging the men violated the law. A couple months after the incident, he told me that surveillance video from that night showed "they were crouched down like turkeys peeking over the balcony" when deputies were responding. I published a much more detailed account of the event here. Eventually, all charges were dismissed.

Previously:
    • Iowa Prosecutors Drop Charges Against Men Hired to Test Their Security
    • Coalfire Pen-Testers Charged With Trespass Instead of Burglary
    • Iowa Officials Claim Confusion Over Scope Led to Arrest of Pen-Testers
    • Authorised Pen-Testers Nabbed, Jailed in Iowa Courthouse Break-in Attempt


Original Submission

posted by jelizondo on Friday January 30, @08:22PM   Printer-friendly
from the Maybe-someday,-maybe-never dept.

In "The Adolescence of Technology," Dario Amodei argues that humanity is entering a "technological adolescence" due to the rapid approach of "powerful AI"—systems that could soon surpass human intelligence across all fields. While optimistic about potential benefits in his previous essay, "Machines of Loving Grace," Amodei here focuses on a "battle plan" for five critical risks:

1. Autonomy: Models developing unpredictable, "misaligned" behaviors.
2. Misuse for Destruction: Lowering barriers for individuals to create biological or cyber weapons.
3. Totalitarianism: Autocrats using AI for absolute surveillance and propaganda.
4. Economic Disruption: Rapid labor displacement and extreme wealth concentration.
5. Indirect Effects: Unforeseen consequences on human purpose and biology.

Amodei advocates for a pragmatic defense involving: Constitutional AI, mechanistic interpretability, and surgical government regulations, such as transparency legislation and chip export controls, to ensure a safe transition to "adulthood" for our species.


Original Submission

posted by jelizondo on Friday January 30, @03:38PM   Printer-friendly
from the but-they-taste-so-good! dept.

Salty facts: takeaways have more salt than labels claim:

Some of the UK's most popular takeaway dishes contain more salt than their labels indicate, with some meals containing more than recommended daily guidelines, new research has shown.

Scientists found 47% of takeaway foods that were analysed in the survey exceeded their declared salt levels, with curries, pasta and pizza dishes often failing to match what their menus claim.

While not all restaurants provided salt levels on their menus, some meals from independent restaurants in Reading contained more than 10g of salt in a single portion. The UK daily recommended salt intake for an adult is 6g.

Perhaps surprisingly, traditional fish and chip shop meals contained relatively low levels of salt, as it is only added after cooking and on request.

The University of Reading research, published today (Wednesday, 21 January) in the journal PLOS One, was carried out to examine the accuracy of menu food labelling and the variation in salt content between similar dishes.

[...] "Food companies have been reducing salt levels in shop-bought foods in recent years, but our research shows that eating out is often a salty affair. Menu labels are supposed to help people make better food choices, but almost half the foods we tested with salt labels contained more salt than declared. The public needs to be aware that menu labels are rough guides at best, not accurate measures."

[...] The research team's key findings include:

  • Meat pizzas had the highest salt concentration at 1.6g per 100g.
  • Pasta dishes contained the most salt per serving, averaging 7.2g, which is more than a full day's recommended intake in a single meal. One pasta dish contained as much as 11.2g of salt.
  • Curry dishes showed the greatest variation, with salt levels ranging from 2.3g to 9.4g per dish.
  • Chips from fish and chip shops – where salt is typically only added after cooking and on request – had the lowest salt levels at just 0.2g per serving, compared to chips from other outlets which averaged 1g per serving.
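For scale, the reported figures can be put directly against the UK's 6 g/day guideline (dish names and values are taken from the findings above; the arithmetic is mine, not part of the study):

```python
DAILY_LIMIT_G = 6.0  # UK recommended daily salt intake for an adult

# Per-serving salt figures reported in the findings above
dishes = {
    "average pasta dish": 7.2,
    "saltiest pasta dish": 11.2,
    "saltiest curry": 9.4,
    "chip-shop chips": 0.2,
}
share_of_limit = {name: g / DAILY_LIMIT_G for name, g in dishes.items()}
for name, frac in share_of_limit.items():
    print(f"{name}: {dishes[name]} g = {frac:.0%} of the daily limit")
```

The average pasta dish alone works out to 120% of the daily limit, and the saltiest single dish to nearly double it.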

The World Health Organization estimates that excess salt intake contributes to 1.8 million deaths worldwide each year.

Journal Reference: Mavrochefalos, A. I., Dodson, A., & Kuhnle, G. G. C. (2026). Variability in sodium content of takeaway foods: Implications for public health and nutrition policy. PLOS ONE, 21(1), e0339339. https://doi.org/10.1371/journal.pone.0339339


Original Submission

posted by jelizondo on Friday January 30, @10:46AM   Printer-friendly
from the strategy.vs.reality.collide dept.

Leaders think their AI deployments are succeeding. The data tells a different story.

Apparently leaders and bosses think that AI is great and is improving things at their companies. Their employees are less certain. Bosses want AI solutions; employees, not so much, as the tools don't produce the results their bosses want or think they should.

Executives we surveyed overwhelmingly said their company has a clear AI strategy, that adoption is widespread, and that employees are encouraged to experiment and build their own solutions. The rest of the workforce disagrees.

The more experienced the staff, the less confident they are in the AI solutions. The more you know, the less you trust the snake oil?

Even in populations we'd expect to be ahead - tech companies and language-intensive functions - most AI use remains surface-level.

https://www.sectionai.com/ai/the-ai-proficiency-report
https://fortune.com/2026/01/21/ai-workers-toxic-relationship-trust-confidence-collapses-training-manpower-group/


Original Submission

posted by jelizondo on Friday January 30, @06:10AM   Printer-friendly

Elon Musk's X on Tuesday released its source code for the social media platform's feed algorithm:

X's source code release is one of the first ever made by a large social platform, Cryptonews.com reported.

"We know the algorithm is dumb and needs massive improvements, but at least you can see us struggle to make it better in real-time and with transparency. No other social media companies do this," Musk posted in a repost from the platform's engineering team.

His post was in response to the team account's post on Monday, which reads: "We have open-sourced our new X algorithm, powered by the same transformer architecture as xAI's Grok model."

[...] "The code reveals a sophisticated system powered by Grok, xAI's open-source transformer. No manual heuristics. No hidden thumb on the scale. The algorithm predicts 15 different user actions and uses 'attention masking' to ensure each post is scored independently, eliminating batch bias. Most interesting? A built-in Author Diversity Scorer prevents any single account from dominating your feed," he continued.

"Researchers, competitors, and critics can now verify exactly how content gets promoted or filtered. Facebook won't do this. TikTok won't do this. YouTube won't do this."
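An "Author Diversity Scorer" of the kind described could work along these lines. This is a hypothetical sketch only — the actual logic lives in the released repository, and the function name and `decay` parameter here are invented for illustration:

```python
def diversity_rescore(posts, decay=0.5):
    """Down-weight successive posts from the same author so no single
    account dominates a feed. posts: list of (author, score) pairs.
    Returns the re-scored list, best first."""
    placed = {}  # author -> number of their posts already placed
    rescored = []
    for author, score in sorted(posts, key=lambda p: p[1], reverse=True):
        penalty = decay ** placed.get(author, 0)  # 1, 0.5, 0.25, ...
        rescored.append((author, score * penalty))
        placed[author] = placed.get(author, 0) + 1
    rescored.sort(key=lambda p: p[1], reverse=True)
    return rescored
```

With `[("a", 10), ("a", 9), ("b", 8)]`, author `a`'s second post is halved to 4.5, so `b`'s post moves up to second place instead of `a` taking the top two slots.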

[...] The source code is primarily written in Rust and Python, and the model retrieves posts from two sources, including accounts that a user follows and a wider pool of content identified through machine-learning-based discovery, according to technical documentation, Cryptonews.com reported.

[Ed note: Source code available at Github]


Original Submission

posted by jelizondo on Friday January 30, @01:15AM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

Cybercrime has entered its AI era, with criminals now using weaponized language models and deepfakes as cheap, off-the-shelf infrastructure rather than experimental tools, according to researchers at Group-IB.

In its latest whitepaper, the cybersec biz argues that AI has become the plumbing of modern cybercrime, quietly turning skills that once took time and talent into services that anyone with a credit card and a Telegram account can rent.

This isn't just a passing fad, according to Group-IB's numbers, which show mentions of AI on dark web forums up 371 percent since 2019, with replies rising even faster – almost twelvefold. AI-related threads were everywhere, racking up more than 23,000 new posts and almost 300,000 replies in 2025.

According to Group-IB, AI has done what automation always does: it took something fiddly and made it fast. The stages of an attack that once needed planning and specialist hands can now be pushed through automated workflows and sold on subscription, complete with the sort of pricing and packaging you'd expect from a shady SaaS outfit.

One of the uglier trends in the report is the rise of so-called Dark LLMs – self-hosted language models built for scams and malware rather than polite conversation. Group-IB says several vendors are already selling them for as little as $30 a month, with more than 1,000 users between them. Unlike jailbroken mainstream chatbots, these things are meant to stay out of sight, run behind Tor, and ignore safety rules by design.

Running alongside the Dark LLM market is a booming trade in deepfakes and impersonation tools. Group-IB says complete synthetic identity kits, including AI-generated faces and voices, can now be bought for about $5. Sales spiked sharply in 2024 and kept climbing through 2025, pointing to a market that continues to grow.

There's real damage behind the numbers, too. Group-IB says deepfake fraud caused $347 million in verified losses in a single quarter, including everything from cloned executives to fake video calls. In one case, the firm helped a bank spot more than 8,000 deepfake-driven fraud attempts over eight months.

Group-IB found that scam call centers were using synthetic voices for first contact, with language models coaching the humans as they go. Malware developers are also starting to test AI-assisted tools for reconnaissance and persistence, with early hints of more autonomous attacks down the line.

"From the frontlines of cybercrime, we see AI giving criminals unprecedented reach," said Anton Ushakov, head of Group-IB's Cybercrime Investigations Unit. "Today it helps scale scams with ease and hyper-personalization at a level never seen before. Tomorrow, autonomous AI could carry out attacks that once required human expertise."

From a defensive point of view, AI removes a lot of the usual clues. When voices, text, and video can all be generated on demand with off-the-shelf software, it becomes much harder to work out who's really behind an attack. Group-IB's view is that this leaves static defenses struggling.

In other words, cybercrime hasn't reinvented itself. It has just automated the old tricks, put them on subscription, and scaled them globally – and as ever, everyone else gets to deal with the mess.


Original Submission

posted by mrpg on Thursday January 29, @08:30PM   Printer-friendly
from the it's-not-a-heist-it's-a-redistribution dept.

One tip led the police to the house in Axel, but the arrested individuals were eventually released after interrogation.

Four suspects were arrested by Zeeland police in the Netherlands after the authorities received a tip that they were involved in the theft of 169 NFTs. According to Dutch newspaper Politie, the three individuals from Axel and one from the neighboring Terneuzen have been interrogated by detectives but have since been released. Nevertheless, the police action also included the seizure of various data carriers and money, as well as three vehicles and the house itself where the raid was conducted.

The stolen NFTs were estimated to be worth 1.4 million Euros (around $1.65 million), which is a sizable sum. However, it is a tiny drop in the ocean of stolen Bitcoin and other crypto, estimated to be worth $17 billion in 2025 alone. We should note that NFTs are not exactly the same as cryptocurrencies, but both run on blockchain technology and can even be stored in the same wallets that hold Bitcoin, Ethereum, and the like.


Original Submission

posted by mrpg on Thursday January 29, @03:40PM   Printer-friendly
from the it's-not-failure,-it's-secure-boot dept.

Arthur T Knackerbracket has processed the following story:

Windows 11 has another serious bug hidden in the January update, and this is a showstopper that means affected PCs fail to boot up.

Neowin reports that Microsoft has acknowledged the bug with a message as flagged up via the Ask Woody forums: "Microsoft has received a limited number of reports of an issue in which devices are failing to boot with stop code 'UNMOUNTABLE_BOOT_VOLUME', after installing the January 2026 Windows security update, released January 13, 2026, and later updates.

"Affected devices show a black screen with the message 'Your device ran into a problem and needs a restart. You can restart.' At this stage, the device cannot complete startup and requires manual recovery steps."

[...] So, the good news is that we're told there's a limited impact here, so not many PCs are hit by the bug according to Microsoft. The company said that the issues pertain to Windows 11 versions 24H2 and 25H2.

The not-so-great news is that it's a nasty bug, and as Microsoft notes, you'll need to go through a manual recovery, meaning using the Windows Recovery Environment (WinRE). That can be used to try and repair the system, returning it to a functional state.


Original Submission

posted by mrpg on Thursday January 29, @10:59AM   Printer-friendly
from the it's-not-fast,-it's-speed dept.

Arthur T Knackerbracket has processed the following story:

The future is analog.

Researchers from the University of California, Irvine have developed a transceiver that works in the 140 GHz range and can transmit data at up to 120 Gbps, or about 15 gigabytes per second. By comparison, the fastest commercially available wireless technologies are theoretically limited to 30 Gbps (Wi-Fi 7) and 5 Gbps (5G mmWave). According to UC Irvine News, these new speeds could match most fiber optic cables used in data centers and other commercial applications, usually around 100 Gbps. The team published their findings in two papers — the “bits-to-antenna” transmitter and the “antenna-to-bits” receiver — in the IEEE Journal of Solid-State Circuits.

“The Federal Communications Commission and 6G standards bodies are looking at the 100-gigahertz spectrum as the new frontier,” lead author Zisong Wang told the university publication. “But at such speeds, conventional transmitters that create signals using digital-to-analog converters are incredibly complex and power-hungry, and face what we call a DAC bottleneck.” The team replaced the DAC with three in-sync sub-transmitters, which only required 230 milliwatts to operate.
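As a quick sanity check on the quoted throughput figures (decimal units, 8 bits per byte; the function name is mine):

```python
def gbps_to_gigabytes_per_s(gbps):
    """Convert gigabits per second to gigabytes per second (8 bits/byte)."""
    return gbps / 8

# 120 Gbps is indeed about 15 gigabytes per second, well above
# Wi-Fi 7's theoretical 30 Gbps (3.75 GB/s) ceiling.
print(gbps_to_gigabytes_per_s(120))
print(gbps_to_gigabytes_per_s(30))
```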


Original Submission

posted by mrpg on Thursday January 29, @06:11AM   Printer-friendly
from the it's-not-life,-it's-development dept.

Red Dwarfs Are Too Dim To Generate Complex Life:

One of the most consequential events—maybe the most consequential one throughout all of Earth's long, 4.5 billion year history—was the Great Oxygenation Event (GOE). When photosynthetic cyanobacteria arose on Earth, they released oxygen as a metabolic byproduct. During the GOE, which began around 2.3 billion years ago, free oxygen began to slowly accumulate in the atmosphere.

It took about 2.5 billion years for enough oxygen to accumulate in the atmosphere for complex life to arise. Complex life has higher energy needs, and aerobic respiration using oxygen provided it. Free oxygen in the atmosphere eventually triggered the Cambrian Explosion, the event responsible for the complex animal life we see around us today.

[...] The question is, do red dwarfs emit enough radiation to power photosynthesis that can trigger a GOE on planets orbiting them?

New research tackles this question. It's titled "Dearth of Photosynthetically Active Radiation Suggests No Complex Life on Late M-Star Exoplanets," and has been submitted to the journal Astrobiology. The authors are Joseph Soliz and William Welsh from the Department of Astronomy at San Diego State University. Welsh also presented the research at the 247th Meeting of the American Astronomical Society, and the paper is currently available at arxiv.org.

"The rise of oxygen in the Earth's atmosphere during the Great Oxidation Event (GOE) occurred about 2.3 billion years ago," the authors write. "There is considerably greater uncertainty for the origin of oxygenic photosynthesis, but it likely occurred significantly earlier, perhaps by 700 million years." That timeline is for a planet receiving energy from a Sun-like star.

[...] 63 billion years is far longer than the current age of the Universe, so the conclusion is clear. There simply hasn't been enough time for oxygen to accumulate on any red dwarf planet and trigger the rise of complex life, as happened on Earth with the GOE.



Original Submission

posted by mrpg on Thursday January 29, @01:30AM   Printer-friendly
from the it's-not-code,-it's-liberty dept.

Generative AI is reshaping software development – and fast:

[...] "We analyzed more than 30 million Python contributions from roughly 160,000 developers on GitHub, the world's largest collaborative programming platform," says Simone Daniotti of CSH and Utrecht University. GitHub records every step of coding – additions, edits, improvements – allowing researchers to track programming work across the globe in real time. Python is one of the most widely used programming languages in the world.

The team used a specially trained AI model to identify whether blocks of code were AI-generated, for instance via ChatGPT or GitHub Copilot.
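The study's detector is a specially trained model; as a loose illustration of the idea only, a crude surface-feature scorer might look like the sketch below. Every feature and weight here is invented for illustration and has nothing to do with the study's actual classifier:

```python
def ai_style_score(code: str) -> float:
    """Toy heuristic: score a Python snippet 0..1 on surface features that
    AI-generated code tends to exhibit (dense comments, docstrings, hints)."""
    lines = [l for l in code.splitlines() if l.strip()]
    if not lines:
        return 0.0
    comment_ratio = sum(l.lstrip().startswith("#") for l in lines) / len(lines)
    has_docstring = 1.0 if '"""' in code else 0.0
    has_type_hints = 1.0 if "->" in code else 0.0
    # Weighted vote; the weights are made up for this sketch.
    return min(1.0, 0.5 * comment_ratio + 0.3 * has_docstring
               + 0.2 * has_type_hints)
```

A real classifier of the kind the study describes would be trained on labeled contributions rather than hand-picked features, but the input/output shape — code block in, likelihood score out — is the same.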

"The results show extremely rapid diffusion," explains Frank Neffke, who leads the Transforming Economies group at CSH. "In the U.S., AI-assisted coding jumped from around 5% in 2022 to nearly 30% in the last quarter of 2024."

At the same time, the study found wide differences across countries. "While the share of AI-supported code is highest in the U.S. at 29%, Germany reaches 23% and France 24%, followed by India at 20%, which has been catching up fast," he says, "while Russia (15%) and China (12%) still lagged behind at the end of our study."

[...] The study shows that the use of generative AI increased programmers' productivity by 3.6% by the end of 2024. "That may sound modest, but at the scale of the global software industry it represents a sizeable gain," says Neffke, who is also a professor at Interdisciplinary Transformation University Austria (IT:U).

The study finds no differences in AI usage between women and men. By contrast, experience levels matter: less experienced programmers use generative AI in 37% of their code, compared to just 27% for experienced programmers. Despite this, the productivity gains the study documents are driven exclusively by experienced users. "Beginners hardly benefit at all," says Daniotti. Generative AI therefore does not automatically level the playing field; it can widen existing gaps.

The study "Who is using AI to code? Global diffusion and impact of Generative AI" by Simone Daniotti, Johannes Wachs, Xiangnan Feng, and Frank Neffke has been published in Science (doi: 10.1126/science.adz9311).


Original Submission