In "The Adolescence of Technology," Dario Amodei argues that humanity is entering a "technological adolescence" due to the rapid approach of "powerful AI"—systems that could soon surpass human intelligence across all fields. While his previous essay, "Machines of Loving Grace," was optimistic about the potential benefits, here Amodei focuses on a "battle plan" for five critical risks:
1. Autonomy: Models developing unpredictable, "misaligned" behaviors.
2. Misuse for Destruction: Lowering barriers for individuals to create biological or cyber weapons.
3. Totalitarianism: Autocrats using AI for absolute surveillance and propaganda.
4. Economic Disruption: Rapid labor displacement and extreme wealth concentration.
5. Indirect Effects: Unforeseen consequences on human purpose and biology.
Amodei advocates for a pragmatic defense involving: Constitutional AI, mechanistic interpretability, and surgical government regulations, such as transparency legislation and chip export controls, to ensure a safe transition to "adulthood" for our species.
Salty facts: takeaways have more salt than labels claim:
Some of the UK's most popular takeaway dishes contain more salt than their labels indicate, with some meals containing more than recommended daily guidelines, new research has shown.
Scientists found 47% of takeaway foods that were analysed in the survey exceeded their declared salt levels, with curries, pasta and pizza dishes often failing to match what their menus claim.
While not all restaurants provided salt levels on their menus, some meals from independent restaurants in Reading contained more than 10g of salt in a single portion. The UK daily recommended salt intake for an adult is 6g.
Perhaps surprisingly, traditional fish and chip shop meals contained relatively low levels of salt, as it is only added after cooking and on request.
The University of Reading research, published today (Wednesday, 21 January) in the journal PLOS One, was carried out to examine the accuracy of menu food labelling and the variation in salt content between similar dishes.
[...] "Food companies have been reducing salt levels in shop-bought foods in recent years, but our research shows that eating out is often a salty affair. Menu labels are supposed to help people make better food choices, but almost half the foods we tested with salt labels contained more salt than declared. The public needs to be aware that menu labels are rough guides at best, not accurate measures."
[...] The research team's key findings include:
- Meat pizzas had the highest salt concentration at 1.6g per 100g.
- Pasta dishes contained the most salt per serving, averaging 7.2g, which is more than a full day's recommended intake in a single meal. One pasta dish contained as much as 11.2g of salt.
- Curry dishes showed the greatest variation, with salt levels ranging from 2.3g to 9.4g per dish.
- Chips from fish and chip shops – where salt is typically only added after cooking and on request – had the lowest salt levels at just 0.2g per serving, compared to chips from other outlets which averaged 1g per serving.
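The per-100g concentrations and per-serving figures above are related by simple arithmetic. A minimal sketch in Python (the 300g pizza portion is an illustrative assumption, not a figure from the study):

```python
# Convert a salt concentration (g per 100 g) into grams per portion
# and compare against the UK adult guideline of 6 g per day.
UK_DAILY_LIMIT_G = 6.0

def salt_per_portion(concentration_g_per_100g, portion_g):
    """Grams of salt in a single portion of the given weight."""
    return concentration_g_per_100g * portion_g / 100.0

# Assumed portion weight for illustration: a 300 g meat pizza
pizza_salt = salt_per_portion(1.6, 300)   # 4.8 g at 1.6 g/100 g
print(f"Pizza portion: {pizza_salt:.1f} g salt "
      f"({pizza_salt / UK_DAILY_LIMIT_G:.0%} of the daily guideline)")

# The study's average pasta serving (7.2 g) already exceeds the guideline
print(f"Average pasta dish: {7.2 / UK_DAILY_LIMIT_G:.0%} of the daily guideline")
```

On these assumed portions, a single pizza serving carries 80% of a day's salt, and the average pasta dish 120%.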
The World Health Organization estimates that excess salt intake contributes to 1.8 million deaths worldwide each year.
Journal Reference: Mavrochefalos, A. I., Dodson, A., & Kuhnle, G. G. C. (2026). Variability in sodium content of takeaway foods: Implications for public health and nutrition policy. PLOS ONE, 21(1), e0339339. https://doi.org/10.1371/journal.pone.0339339
Leaders think their AI deployments are succeeding. The data tells a different story.
Apparently leaders and bosses think that AI is great and is improving things at their companies. Their employees are less certain. Bosses want AI solutions; employees, not so much, since those solutions don't produce the results that their bosses want or think they should.
Executives we surveyed overwhelmingly said their company has a clear AI strategy, that adoption is widespread, and that employees are encouraged to experiment and build their own solutions. The rest of the workforce disagrees.
The more experienced the staff, the less confident they are in the AI solutions. The more you know, the less you trust the snake oil?
Even in populations we'd expect to be ahead - tech companies and language-intensive functions - most AI use remains surface-level.
https://www.sectionai.com/ai/the-ai-proficiency-report
https://fortune.com/2026/01/21/ai-workers-toxic-relationship-trust-confidence-collapses-training-manpower-group/
Elon Musk's X on Tuesday released its source code for the social media platform's feed algorithm:
X's source code release is one of the first ever made by a large social platform, Cryptonews.com reported.
"We know the algorithm is dumb and needs massive improvements, but at least you can see us struggle to make it better in real-time and with transparency. No other social media companies do this," Musk posted in a repost from the platform's engineering team.
His post was in response to the team account's post on Monday, which read: "We have open-sourced our new X algorithm, powered by the same transformer architecture as xAI's Grok model."
[...] "The code reveals a sophisticated system powered by Grok, xAI's open-source transformer. No manual heuristics. No hidden thumb on the scale. The algorithm predicts 15 different user actions and uses 'attention masking' to ensure each post is scored independently, eliminating batch bias. Most interesting? A built-in Author Diversity Scorer prevents any single account from dominating your feed," he continued.
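The "Author Diversity Scorer" mentioned above can be illustrated with a toy re-ranker. This is a hypothetical sketch of the general technique (penalizing repeat authors with an exponential decay), not the actual code from X's repository; the function name and decay parameter are invented for illustration:

```python
def diversify_by_author(posts, decay=0.5):
    """Re-rank (author, score) pairs, multiplying each post's score
    by decay**k, where k is how many higher-ranked posts from the
    same author were already placed. This stops one account from
    dominating the top of the feed."""
    seen = {}       # author -> posts already placed above this one
    result = []
    for author, score in sorted(posts, key=lambda p: -p[1]):
        k = seen.get(author, 0)
        result.append((author, score * decay ** k))
        seen[author] = k + 1
    # Final order reflects the penalized scores
    return sorted(result, key=lambda p: -p[1])

feed = [("alice", 1.0), ("alice", 0.9), ("alice", 0.8), ("bob", 0.7)]
ranked = diversify_by_author(feed)
# bob's post now outranks alice's second and third posts
```

A decay of 0.5 halves an author's second post, quarters the third, and so on, so a strong post from a different author can surface between them.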
"Researchers, competitors, and critics can now verify exactly how content gets promoted or filtered. Facebook won't do this. TikTok won't do this. YouTube won't do this."
[...] The source code is primarily written in Rust and Python, and the model retrieves posts from two sources: accounts that a user follows, and a wider pool of content identified through machine-learning-based discovery, according to technical documentation, Cryptonews.com reported.
[Ed note: Source code available at GitHub]
Arthur T Knackerbracket has processed the following story:
Cybercrime has entered its AI era, with criminals now using weaponized language models and deepfakes as cheap, off-the-shelf infrastructure rather than experimental tools, according to researchers at Group-IB.
In its latest whitepaper, the cybersec biz argues that AI has become the plumbing of modern cybercrime, quietly turning skills that once took time and talent into services that anyone with a credit card and a Telegram account can rent.
This isn't just a passing fad, according to Group-IB's numbers, which show mentions of AI on dark web forums up 371 percent since 2019, with replies rising even faster – almost twelvefold. AI-related threads were everywhere, racking up more than 23,000 new posts and almost 300,000 replies in 2025.
According to Group-IB, AI has done what automation always does: it took something fiddly and made it fast. The stages of an attack that once needed planning and specialist hands can now be pushed through automated workflows and sold on subscription, complete with the sort of pricing and packaging you'd expect from a shady SaaS outfit.
One of the uglier trends in the report is the rise of so-called Dark LLMs – self-hosted language models built for scams and malware rather than polite conversation. Group-IB says several vendors are already selling them for as little as $30 a month, with more than 1,000 users between them. Unlike jailbroken mainstream chatbots, these things are meant to stay out of sight, run behind Tor, and ignore safety rules by design.
Running alongside the Dark LLM market is a booming trade in deepfakes and impersonation tools. Group-IB says complete synthetic identity kits, including AI-generated faces and voices, can now be bought for about $5. Sales spiked sharply in 2024 and kept climbing through 2025, pointing to a market that continues to grow.
There's real damage behind the numbers, too. Group-IB says deepfake fraud caused $347 million in verified losses in a single quarter, including everything from cloned executives to fake video calls. In one case, the firm helped a bank spot more than 8,000 deepfake-driven fraud attempts over eight months.
Group-IB found that scam call centers were using synthetic voices for first contact, with language models coaching the humans as they go. Malware developers are also starting to test AI-assisted tools for reconnaissance and persistence, with early hints of more autonomous attacks down the line.
"From the frontlines of cybercrime, we see AI giving criminals unprecedented reach," said Anton Ushakov, head of Group-IB's Cybercrime Investigations Unit. "Today it helps scale scams with ease and hyper-personalization at a level never seen before. Tomorrow, autonomous AI could carry out attacks that once required human expertise."
From a defensive point of view, AI removes a lot of the usual clues. When voices, text, and video can all be generated on demand with off-the-shelf software, it becomes much harder to work out who's really behind an attack. Group-IB's view is that this leaves static defenses struggling.
In other words, cybercrime hasn't reinvented itself. It has just automated the old tricks, put them on subscription, and scaled them globally – and as ever, everyone else gets to deal with the mess.
One tip led the police to the house in Axel, but the arrested individuals were eventually released after interrogation.
Four suspects were arrested by Zeeland police in the Netherlands after the authorities received a tip that they were involved in the theft of 169 NFTs. According to the Dutch police (Politie), the three individuals from Axel and one from neighboring Terneuzen have been interrogated by detectives but have since been released. Nevertheless, the police action also included the seizure of various data carriers and money, as well as three vehicles and the house where the raid was conducted.
The stolen NFTs were estimated to be worth 1.4 million euros (around $1.65 million), which is indeed a massive amount. However, this is a tiny drop in the ocean of stolen Bitcoin and other crypto, estimated to be worth $17 billion in 2025 alone. We should note that NFTs are not exactly the same as cryptocurrencies, but they both run on blockchain technology and can even be stored in the same wallets that keep Bitcoin, Ethereum, and the like.
Arthur T Knackerbracket has processed the following story:
Windows 11 has another serious bug hidden in the January update, and this is a showstopper that means affected PCs fail to boot up.
Neowin reports that Microsoft has acknowledged the bug with a message as flagged up via the Ask Woody forums: "Microsoft has received a limited number of reports of an issue in which devices are failing to boot with stop code 'UNMOUNTABLE_BOOT_VOLUME', after installing the January 2026 Windows security update, released January 13, 2026, and later updates.
"Affected devices show a black screen with the message 'Your device ran into a problem and needs a restart. You can restart.' At this stage, the device cannot complete startup and requires manual recovery steps."
[...] The good news is that, according to Microsoft, the impact here is limited, so not many PCs are hit by the bug. The company said the issue pertains to Windows 11 versions 24H2 and 25H2.
The not-so-great news is that it's a nasty bug, and as Microsoft notes, you'll need to go through a manual recovery, meaning using the Windows Recovery Environment (WinRE). That can be used to try and repair the system, returning it to a functional state.
Arthur T Knackerbracket has processed the following story:
The future is analog.
Researchers from the University of California, Irvine have developed a transceiver that works in the 140 GHz range and can transmit data at up to 120 Gbps, which is about 15 gigabytes per second. By comparison, the fastest commercially available wireless technologies are theoretically limited to 30 Gbps (Wi-Fi 7) and 5 Gbps (5G mmWave). According to UC Irvine News, these new speeds could match most fiber optic cables used in data centers and other commercial applications, usually around 100 Gbps. The team published their findings in two papers — the “bits-to-antenna” transmitter and the “antenna-to-bits” receiver — in the IEEE Journal of Solid-State Circuits.
“The Federal Communications Commission and 6G standards bodies are looking at the 100-gigahertz spectrum as the new frontier,” lead author Zisong Wang told the university publication. “But at such speeds, conventional transmitters that create signals using digital-to-analog converters are incredibly complex and power-hungry, and face what we call a DAC bottleneck.” The team replaced the DAC with three in-sync sub-transmitters, which only required 230 milliwatts to operate.
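A quick sanity check of the figures above, using 8 bits per byte:

```python
# Convert reported link rates from gigabits/s to gigabytes/s
def gbps_to_gbytes(gbps):
    return gbps / 8.0

print(gbps_to_gbytes(120))  # 15.0 GB/s, matching the article's figure
print(gbps_to_gbytes(30))   # 3.75 GB/s theoretical Wi-Fi 7 ceiling
print(120 / 30)             # 4.0: four times Wi-Fi 7's limit
```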
Red Dwarfs Are Too Dim To Generate Complex Life:
One of the most consequential events—maybe the most consequential one throughout all of Earth's long, 4.5-billion-year history—was the Great Oxygenation Event (GOE). When photosynthetic cyanobacteria arose on Earth, they released oxygen as a metabolic byproduct. During the GOE, which began around 2.3 billion years ago, free oxygen began to slowly accumulate in the atmosphere.
It took about 2.5 billion years for enough oxygen to accumulate in the atmosphere for complex life to arise. Complex life has higher energy needs, and aerobic respiration using oxygen provided it. Free oxygen in the atmosphere eventually triggered the Cambrian Explosion, the event responsible for the complex animal life we see around us today.
[...] The question is, do red dwarfs emit enough radiation to power photosynthesis that can trigger a GOE on planets orbiting them?
New research tackles this question. It's titled "Dearth of Photosynthetically Active Radiation Suggests No Complex Life on Late M-Star Exoplanets," and has been submitted to the journal Astrobiology. The authors are Joseph Soliz and William Welsh from the Department of Astronomy at San Diego State University. Welsh also presented the research at the 247th Meeting of the American Astronomical Society, and the paper is currently available at arxiv.org.
"The rise of oxygen in the Earth's atmosphere during the Great Oxidation Event (GOE) occurred about 2.3 billion years ago," the authors write. "There is considerably greater uncertainty for the origin of oxygenic photosynthesis, but it likely occurred significantly earlier, perhaps by 700 million years." That timeline is for a planet receiving energy from a Sun-like star.
[...] 63 billion years is far longer than the current age of the Universe, so the conclusion is clear. There simply hasn't been enough time for oxygen to accumulate on any red dwarf planet and trigger the rise of complex life, as happened on Earth with the GOE.
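One hedged way to read that 63-billion-year figure: if the time to accumulate oxygen scales roughly inversely with the photosynthetically active radiation (PAR) a planet receives, the implied PAR around a late M dwarf is only a few percent of Earth's. This is an illustrative scaling argument, not the paper's actual model:

```python
# Earth's oxygen build-up took roughly 2.5 billion years under Sun-like PAR.
EARTH_O2_TIMESCALE_GYR = 2.5
M_DWARF_TIMESCALE_GYR  = 63.0   # figure quoted for late M-star planets
UNIVERSE_AGE_GYR       = 13.8

# Assuming the timescale scales as 1/PAR, the implied relative PAR level:
implied_par_fraction = EARTH_O2_TIMESCALE_GYR / M_DWARF_TIMESCALE_GYR
print(f"Implied PAR: ~{implied_par_fraction:.0%} of Earth's")  # ~4%
print(M_DWARF_TIMESCALE_GYR > UNIVERSE_AGE_GYR)  # True: not enough time
```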
Generative AI is reshaping software development – and fast:
[...] "We analyzed more than 30 million Python contributions from roughly 160,000 developers on GitHub, the world's largest collaborative programming platform," says Simone Daniotti of CSH and Utrecht University. GitHub records every step of coding – additions, edits, improvements – allowing researchers to track programming work across the globe in real time. Python is one of the most widely used programming languages in the world.
The team used a specially trained AI model to identify whether blocks of code were AI-generated, for instance via ChatGPT or GitHub Copilot.
"The results show extremely rapid diffusion," explains Frank Neffke, who leads the Transforming Economies group at CSH. "In the U.S., AI-assisted coding jumped from around 5% in 2022 to nearly 30% in the last quarter of 2024."
At the same time, the study found wide differences across countries. "While the share of AI-supported code is highest in the U.S. at 29%, Germany reaches 23% and France 24%, followed by India at 20%, which has been catching up fast," he says, "while Russia (15%) and China (12%) still lagged behind at the end of our study."
[...] The study shows that the use of generative AI increased programmers' productivity by 3.6% by the end of 2024. "That may sound modest, but at the scale of the global software industry it represents a sizeable gain," says Neffke, who is also a professor at Interdisciplinary Transformation University Austria (IT:U).
The study finds no differences in AI usage between women and men. By contrast, experience levels matter: less experienced programmers use generative AI in 37% of their code, compared to just 27% for experienced programmers. Despite this, the productivity gains the study documents are driven exclusively by experienced users. "Beginners hardly benefit at all," says Daniotti. Generative AI therefore does not automatically level the playing field; it can widen existing gaps.
The study "Who is using AI to code? Global diffusion and impact of Generative AI" by Simone Daniotti, Johannes Wachs, Xiangnan Feng, and Frank Neffke has been published in Science (doi: 10.1126/science.adz9311).
For those unaware: digg is attempting a comeback. They opened their beta to the broad internet around January 18th or so. The site looks nice, and there are some rough edges on the software (OAuth wasn't working for me...), but it's mostly functional.

What remains to be seen is: what will this new digg become? When digg left the scene (in the mid-late 2000s, by my reckoning), bots and AI and AI bots and troll farms and AI troll farms and all of that were a tiny fraction of their current influence. Global human internet users in 2007 were estimated at 1.3 billion vs 6 billion today, and mobile usage was just getting started vs its almost total dominance in content consumption now.

There is some debate on digg about whether they are trying to become reddit2, or what... and my input to that debate was along these lines: digg is currently small, and in its current state human moderation is the only thing that makes any sense - users self-moderate through blocks, communities moderate through post and comment censorship (doesn't belong in THIS forum), and the site moderates against griefers - mods all the way down. But as it grows, when feeds start getting multiple new posts per minute, human moderation becomes impractical. Some auto-moderation will inevitably become necessary, and the nature of that auto-moderation is going to need to constantly evolve as the site grows and its user base matures.
Well, apparently I was right, because a few hours later my account appears to have been shadow banned - no explanation, just blocked from posting and my posts deleted. I guess somebody didn't like what I was saying, and "moderated" me away. As outlined above, I think a sitewide ban is a little overboard for the thought police to invoke without warning, but... it's their baby, and I need to spend less time online anyway, so no loss to me.

And digg isn't my core topic for this story anyway... I have also noticed some interesting developments in Amazon reviews. The first page of "my reviews" is always happy to see me - we appreciate the effort you put into your reviews, etc. etc. - but if I dig back a page or two, I start finding "review removed" on some older ones, and when I go to see what I wrote that might have been objectionable, I can't - it's just removed. There's a button there to "submit a new review" but, clicking that, I get the message "we're sorry, this account is not eligible to submit reviews on this product." No active notice from Amazon that this happened, no explanation of why, no statement of the scope of my review ineligibility. It just seems that if "somebody, somewhere" (product sellers are high on my suspect list) decides they don't like your review, it is quietly removed and you are quietly blocked from reviewing their products anymore. Isn't the world a happier place where we all just say nice things that everybody involved wants to hear?

I do remember that one of my reviews that got removed was critical of a particular category of products, all very similarly labeled and described, but when the products arrive you never know from one "brand" to the next quite what you are getting. Some are like car wax, hard until it melts in your hand; some are more water soluble; all are labeled identically with just subtle differences in the packaging artwork.
I might have given it 3/5 stars, probably 4, because it was good car wax - but what if you were expecting more of a hair mousse? The industry would do itself a favor by figuring out how to communicate that to customers buying their products, in my opinion. Well, that opinion doesn't even appear on Amazon anymore.
Something that has developed and matured on social sites quite a bit since the late 2000s is block functions. They're easier for users to use and control, and some sites allow sharing of block lists among users. Of course this brings up obvious echo chamber concerns, but... between an echo chamber and an open field full of state- and corporate-sponsored AI trolls? I'd like a middle ground, but I don't think there's enough human population on the internet to effectively whack-a-mole by hand to keep the trolls in line. You can let the site moderators pick and choose who gets the amplified voices. And to circle back to digg - I haven't dug around about it, but if anybody knows what their monetization plan is, I wouldn't mind hearing speculation, or actual quasi-fact-based reporting, on how they intend to pay for their bandwidth and storage.
As I said, and apparently got banned for saying: some moderation will always be necessary, and as the internet continues to evolve, the best solutions will have to evolve with it. There's never going to be an optimized solution that stays near optimal for more than a few months, at least not on sites that aspire to reddit-, Xitter-, Facebook-, Bluesky-, or digg(?)-sized user bases. As we roll along through 2026, who should be holding the ban hammers, and how often and aggressively should they be wielded? Apparently digg has some auto-moderation that's impractically over-aggressive at the moment; they say they're working on it. More power to 'em - they can work on it without my input from here on out.
Review of studies shows meeting face-to-face has more benefits:
A review of more than 1,000 studies suggests that using technology to communicate with others is better than nothing – but still not as good as face-to-face interactions.
Researchers found that people are less engaged and don't have the same positive emotional responses when they use technology, like video calls or texting, to connect with others, compared to when they meet in person.
The results were clear, said Brad Bushman, co-author of the study and professor of communication at The Ohio State University.
"If there is no other choice than computer-mediated communication, then it is certainly better than nothing," Bushman said. "But if there is a possibility of meeting in person, then using technology instead is a poor substitute."
The study was published online yesterday (Jan. 6, 2026) in the journal Perspectives on Psychological Science.
Lead author Roy Baumeister, professor of psychology at the University of Queensland, said: "Electronic communication is here to stay, so we need to learn how to integrate it into our lives. But if it replaces live interactions, you're going to be missing some important benefits and probably be less fulfilled."
Research has shown the importance of social interactions for psychological and physical health. But the issue for computer-mediated communication is that it is "socializing alone," the researchers said. You are communicating with others, but you're by yourself when you do it. The question becomes, is that important?
[...] A good example of the superiority of in-person communication is laughter, Bushman said. "We found a lot of research that shows real health benefits to laughing out loud, but we couldn't find any health benefits to typing LOL in a text or social media post," he said.
Another key finding was that numerous studies showed that educational outcomes were superior in in-person classes compared to those done online. Some of these studies were conducted during the COVID pandemic, when teachers were forced to teach their students online.
As might be expected, video calls were better than texting for boosting positive emotions, the research showed. Being removed in both time and space makes texting and non-live communication less beneficial for those participating.
Results were mixed regarding negative emotions. Computer-mediated communication may reduce some forms of anxiety.
"Shy people in particular seem to feel better about interacting online, where they can type their thoughts into a chat box, and don't have to call as much attention to themselves," Baumeister said.
But there was also a dark side. Some people are more likely to express negative comments online than they would in person. Inhibitions against saying something harmful are reduced online, results showed.
In general, the research found that group dynamics, including learning, were not as effective online as they were in person.
[...] The benefits of modern technology for communication in some situations are indisputable, according to Bushman. But this review shows that it does come with some costs.
"Humans were shaped by evolution to be highly social," Bushman said. "But many of the benefits of social interactions are lost or reduced when you interact with people who are not present with you."
The researchers noted that concerns about the impact of technology on human communication go way back. Almost a century ago, sociologists were concerned that the telephone would reduce in-person visits with neighbors.
"There is a long history of unconfirmed predictions that various innovations will bring disaster, so one must be skeptical of alarmist projections," the authors wrote in the paper.
"Then again, the early returns are not encouraging."
Journal Reference: Baumeister, R. F., Bibby, M. T., Tice, D. M., & Bushman, B. J. (2026). Socializing While Alone: Loss of Impact and Engagement When Interacting Remotely via Technology. Perspectives on Psychological Science, 0(0). https://doi.org/10.1177/17456916251404368
https://arstechnica.com/ai/2026/01/tsmc-says-ai-demand-is-endless-after-record-q4-earnings/
On Thursday, Taiwan Semiconductor Manufacturing Company (TSMC) reported record fourth-quarter earnings and said it expects AI chip demand to continue for years. During an earnings call, CEO C.C. Wei told investors that while he cannot predict the semiconductor industry's long-term trajectory, he remains bullish on AI.
[...]
"All in all, I believe in my point of view, the AI is real—not only real, it's starting to grow into our daily life. And we believe that is kind of—we call it AI megatrend, we certainly would believe that," Wei said during the call. "So another question is 'can the semiconductor industry be good for three, four, five years in a row?' I'll tell you the truth, I don't know. But I look at the AI, it looks like it's going to be like an endless—I mean, that for many years to come."

TSMC posted net income of NT$505.7 billion (about $16 billion) for the quarter, up 35 percent year over year and above analyst expectations.
[...]
Wei's optimism stands in contrast to months of speculation about whether the AI industry is in a bubble. In November, Google CEO Sundar Pichai warned of "irrationality" in the AI market and said no company would be immune if a potential bubble bursts. OpenAI's Sam Altman acknowledged in August that investors are "overexcited" and that "someone" will lose a "phenomenal amount of money."

But TSMC, which manufactures the chips that power the AI boom, is betting the opposite way, with Wei telling analysts he spoke directly to cloud providers to verify that demand is real before committing to the spending increase.
[...]
The earnings report landed the same day the US and Taiwan finalized a trade agreement that cuts tariffs on Taiwanese goods to 15 percent, down from 20 percent.
Researchers publish first comprehensive structural engineering manual for bamboo:
University of Warwick engineers have led the creation of a significant milestone manual for bamboo engineering, which will drive the low-carbon construction sector.
Bamboo has been used in construction for millennia, yet colonisation and industrialisation have resulted in the replacement of this natural resource by technologies such as steel, concrete, and masonry. This change became more entrenched in the twentieth century with the development of construction codes as means to ensure structures were safe, since none were written for bamboo.
Dr. David Trujillo, Assistant Professor in Humanitarian Engineering, School of Engineering, University of Warwick said: "Bamboo is a fast-growing, strong, inexpensive, and highly sustainable material, and, amongst other things, it is a very effective carbon sink (naturally absorbing CO2 from the atmosphere).
"Unfortunately, the countries that had the expertise in developing construction codes to regulate the design and building of structures, were not those interested in bamboo. For this to change, international collaboration was needed."
The international collaboration between Warwick, Pittsburgh, Arup, INBAR and BASE has since met this challenge and produced the new Institution of Structural Engineers (IStructE) manual providing comprehensive guidance about the design of bamboo structures. It is the first structural engineering manual for bamboo in the world.
[...] This free resource will empower engineers across the tropics and subtropics to adopt bamboo at no cost. With over 1,600 species of bamboo native to every continent except Antarctica and Europe (although numerous species successfully thrive across Europe), this manual has the chance to hugely expand the usage of this bio-based material.
The manual centres on the use of bamboo poles (the stems) as the main structural component of buildings. In these structures, bamboo poles act as beams and columns, though the manual also explains how to use bamboo in a structural system called Composite Bamboo Shear Walls (CBSW). This system is particularly effective for making resilient housing in earthquake- and typhoon-prone locations.
In case you want to start your own building project: Manual for the design of bamboo structures to ISO 22156:2021
Belgium, Denmark, Germany, France, Ireland, Luxembourg, the Netherlands and the United Kingdom have agreed on 300 gigawatts of offshore wind generation capacity in the North Sea by 2050. Current offshore wind capacity in the North Sea is 37 gigawatts. Getting to the equivalent of 300 nuclear power plants, or 8 times the current capacity, will require an investment of a trillion euros.
The governments of the North Sea countries promise investment guarantees to industry: if the wholesale price on the market drops beneath an agreed-upon level, government will fund the shortfall; if the wholesale price exceeds that level, the surplus will go to the governments involved. In exchange, the offshore wind industry and grid operators promise 91,000 additional jobs and a 30 percent price reduction by 2040.
In 2023, the same governments had already agreed on 120 GW by 2030. It turns out that aim was quite a bit overambitious.
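The arithmetic behind the article's comparisons, taking one nuclear plant as roughly 1 GW (the equivalence the text implies):

```python
TARGET_GW  = 300
CURRENT_GW = 37
PLANT_GW   = 1          # assumed output of one nuclear plant, in GW
COST_EUR   = 1e12       # one trillion euros

print(TARGET_GW / PLANT_GW)            # 300 plant-equivalents
print(round(TARGET_GW / CURRENT_GW))   # ~8x current capacity
# Cost per new watt of capacity, spread over the 263 GW still to build:
print(COST_EUR / ((TARGET_GW - CURRENT_GW) * 1e9))  # ~3.8 euros/watt
```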