SoylentNews is people





posted by hubie on Friday June 13, @08:41PM   Printer-friendly
from the Going-For-DOGE-puns dept.

The House Committee on Oversight and Government Reform recently (June 5) held a hearing on, ahem, Artificial Intelligence, and its usage within the federal government.

We stand at the dawn of an intelligent age, a transformative period rivaling the industrial and nuclear eras, where AI—the new electricity, the engine of global change—is redrawing the very architecture of global power. It is clear that the nation that masters and fully adopts this foundational technology will not only lead but also write the rules for this new epoch. The breathtaking adoption of AI, exemplified by ChatGPT's rapid rise, underscores that for the United States, widespread federal adoption and deployment are not merely options but a strategic imperative essential for national competitiveness, national security, and effective governance.

(First witness, Mr. Yll Bajraktari, Competitive Studies Project.)

Today, AI is fundamentally transforming how work gets done across America's $30 trillion economy. AI solves a universal problem for public and private entities by transforming employee experience, providing instant support, reducing the toil of manual and tedious tasks, and allowing employees to focus on activities and jobs that provide significantly more value to the organization, leading to more efficient and effective organizations.

(Second witness, Mr. Bhavin Shah, Moveworks.)

AI has evolved dramatically in just a few years and today Generative AI holds enormous promise in radically improving the delivery of government services. The meteoric rise of the newest form of Generative AI— Agentic AI— offers the alluring opportunity to use AI for task automation, not just generating on-demand content, like ChatGPT and its rival chatbots. With these rapid developments, the government stands to realize massive cost savings and enormous gains in effectiveness in scores of programs while at the same time preserving the integrity of taxpayer dollars.

(Third witness, Ms. Linda Miller, TrackLight.)

Proposals to regulate AI systems are proliferating rapidly, with over 1,000 AI-related bills already introduced just five months into 2025. The vast majority of these are state bills, and many of them propose a very top-down, bureaucratic approach to preemptively constraining algorithmic systems. As these mandates expand, they will significantly raise the cost of deploying advanced AI systems, because complicated, confusing compliance regimes would hamstring developers—especially smaller ones.

Such a restrictive, overlapping regulatory regime would represent a reversal of the policy formula that helped America become the global leader in personal computing, digital technologies, and the internet.

(Fourth witness, Mr. Adam Thierer, R Street Institute.)

Then they made the mistake of calling their final witness, a man named Bruce Schneier. I'll leave you the pleasure of reading the full 31 pages of his testimony here, but I'd like to finish with a couple of money quotes of his, as cited in El Reg:

"You all need to assume that adversaries have copies of all the data DOGE has exfiltrated and has established access into all the networks that DOGE has removed security controls from ... DOGE's affiliates have spread across government ... They are still exfiltrating massive US databases, processing them with AI and offering them to private companies such as Palantir. These actions are causing irreparable harm to the security of our country and the safety of everyone, including everyone in this room, regardless of political affiliation."

Oddly enough, Mr. Schneier was the only witness not quoted, or even mentioned, in the wrap-up of that hearing. Maybe that wrap-up was AI generated?


Original Submission

posted by hubie on Friday June 13, @03:58PM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

United has switched off Starlink service on its United Express regional aircraft following reports of radio interference. According to The Points Guy, Starlink connectivity has been turned off across its fleet "out of an abundance of caution," a move the carrier confirmed in a statement.

As noted by the report, United has installed Starlink on nearly two dozen Embraer E175 aircraft. United announced the rollout on March 7, outlining plans to fit 40+ regional aircraft each month beginning in May through the end of 2025. The installation takes around 8 hours per aircraft, and United eventually plans to roll out Starlink to its entire fleet.

TPG reports that United has received reports of radio interference caused by Starlink, affecting the VHF antennas pilots use to contact air traffic control. As such, the aforementioned E175 aircraft carrying Starlink have been operating offline for the past few days, including a flight Tom's Hardware took on Monday, June 9.

United has issued a statement to TPG noting "Starlink is now installed on about two dozen United regional aircraft. United and Starlink teams are working together to address a small number of reports of static interference during the operation of the Wi-Fi system." United says this is "fairly common" with any new airline Wi-Fi provider, and says it expects the service to be back up and running "soon."

TPG reports that United and Starlink have already identified a solution and are rolling out the fix to affected aircraft. Allegedly, one-third of the affected planes have had the fix applied and are now operating with Starlink restored, with the remaining planes set for reconnection once they've had the fix applied.


Original Submission

posted by hubie on Friday June 13, @11:13AM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

A researcher has exposed a flaw in Google's authentication systems, opening it to a brute-force attack that left users' mobile numbers up for grabs.

The security hole, discovered by a white-hat hacker operating under the handle Brutecat, left the phone numbers of any Google user who'd logged in open to exposure. The issue was a code slip that allowed brute-force attacks against accounts, potentially enabling SIM-swapping attacks.

"This Google exploit I disclosed just requires the email address of the victim and you can get the phone number tied to the account," Brutecat told The Register.

Brutecat found that Google's account recovery process provided partial phone number hints, which could be exploited. By using cloud services and a Google Looker Studio account, the attacker was able to bypass security systems and launch a brute-force attack.

They explained in the post that "after looking through random Google products, I found out that I could create a Looker Studio document, transfer ownership of it to the victim, and the victim's display name would leak on the home page, with 0 interaction required from the victim."

The researcher also found an old-school username recovery form that worked without Javascript, which allowed them to check if a recovery email or phone number was associated with a specific display name using 2 HTTP requests.

After this, they could go "through forgot password flow for that email and get the masked phone."

Finally, a brute-forcing tool they developed, dubbed gpb, would run with the display name and masked phone number to unmask the full number, using real-time libphonenumber validation to filter out invalid numbers before querying Google's API.
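The core trick is cheap to sketch: a masked hint leaves only a few unknown digits, so the candidate space is small enough to brute-force. Below is a minimal, hypothetical Python sketch of that expansion step (the function name and the `#` mask convention are mine, not from gpb); the real tool additionally validated candidates with libphonenumber before querying Google's API.

```python
# A minimal sketch (hypothetical, not Brutecat's actual "gpb" tool) of the
# mask-expansion idea: every hidden digit in a masked phone hint becomes a
# wildcard, and the candidate space is enumerated for later validation.
from itertools import product

def candidates(mask: str) -> list[str]:
    """Expand each '#' wildcard in the mask into digits 0-9, keeping known digits."""
    wildcards = mask.count("#")
    out = []
    for digits in product("0123456789", repeat=wildcards):
        it = iter(digits)
        out.append("".join(next(it) if c == "#" else c for c in mask))
    return out

# Two hidden digits -> only 100 candidates to test against the API.
nums = candidates("##03")
print(len(nums))  # 100
```

With only a handful of unmasked digits hidden per country format, this is why a partial hint plus unthrottled requests is enough to recover a full number.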

[...] Surprisingly, Google didn't consider this a serious flaw, awarding Brutecat $5,000 under its bug bounty scheme.

"Google was pretty receptive and promptly patched the bug," the researcher said. "By depreciating the whole form compared to my other disclosures, this was done much more quickly. That being said, the bounty is pretty low when taking into account the impact of this bug."


Original Submission

posted by janrinok on Friday June 13, @06:29AM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

New imagery encompassing nearly 800,000 galaxies.

The Cosmic Evolution Survey (COSMOS) has just released the “largest look ever into the deep universe.” Even more importantly, it has made the data publicly available and accessible “in an easily searchable format.” Possibly the star attraction from this massive 1.5TB of James Webb Space Telescope (JWST) data is the interactive viewer, where you can gawp at stunning space imagery encompassing nearly 800,000 galaxies. At the same site, you can find the complete set of NIRCam and MIRI mosaics and tiles, plus a full photometric catalog.

The COSMOS-Web program is a NASA-backed project with the support of scientists from the University of California, Santa Barbara (UCSB), and Rochester Institute of Technology (RIT). With this significant data release, the public at large is getting access to the largest view deep into the universe they will have ever seen.

According to the press release announcement, the published survey maps 0.54 square degrees of the sky, or “about the area of three full moons,” with the NIRCam (near infrared imaging), and a 0.2 square degree area with MIRI (mid-infrared imaging).
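The "three full moons" comparison checks out with quick arithmetic (a sketch; the 0.5-degree apparent lunar diameter is a standard round figure, not a number from the press release):

```python
# Back-of-envelope check of the "three full moons" comparison.
import math

survey_area = 0.54                                # NIRCam coverage, square degrees
moon_diameter = 0.5                               # apparent lunar diameter, degrees
moon_area = math.pi * (moon_diameter / 2) ** 2    # ~0.196 square degrees

print(round(survey_area / moon_area, 2))  # 2.75, i.e. "about three full moons"
```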

To help Joe Public make sense of this 1.5TB data deluge, COSMOS-Web has thoughtfully provided a full aperture and model-based photometric catalog. Using this reference, those interested can observe “photometry, structural measurements, redshifts, and physical parameters for nearly 800,000 galaxies.” More excitingly for amateur astrophysics enthusiasts, the researchers claim that the new JWST imaging, combined with previous COSMOS data, “opens many unexplored scientific avenues.”

Before you head on over to the linked resources, it might be useful to familiarize yourself with some of the terms and units used by COSMOS-Web. If you want to look more closely at the JWST NIRCam mosaics, for example, you will see that the newly surveyed area is mapped into 20 zones with reference codes. Each of the mosaics is available in four NIRCam filters (F115W, F150W, F277W, F444W). In terms of scale, mosaics are available at both 30mas and 60mas pixel scales. ‘Mas’ is short for milliarcseconds, a unit of angular measurement commonly used in astronomy.

Both mosaics (created by stitching together multiple tiles), and tiles (individual images, as captured by the telescope) are available for download and study. For example, a single 30mas pixel scale mosaic from NIRCam might require a download of up to 174GB, while the individual tiles are a ‘mere’ 7-10GB (compressed). You would also need specialized astronomical software to open these FITS data maps, but there are many options available, including some free and open-source software.

The COSMOS project has made use of most of the major telescopes on Earth and in space. It began with its use of the Hubble Space Telescope to cover what has now become known as the COSMOS field, a 2-square-degree field which appears to cover approximately 2 million galaxies. The initial Hubble survey took 640 orbits of the Earth. Ultimately, it is hoped that the research team will be able to study the formation and evolution of galaxies across cosmic time.


Original Submission

posted by janrinok on Friday June 13, @01:43AM   Printer-friendly

Study shows making hydrogen with soda cans and seawater is scalable and sustainable:

An MIT study shows that making hydrogen with aluminum soda cans and seawater is both scalable and sustainable.

Hydrogen has the potential to be a climate-friendly fuel since it doesn't release carbon dioxide when used as an energy source. Currently, however, most methods for producing hydrogen involve fossil fuels, making hydrogen less of a "green" fuel over its entire life cycle.

A new process developed by MIT engineers could significantly shrink the carbon footprint associated with making hydrogen.

Last year, the team reported that they could produce hydrogen gas by combining seawater, recycled soda cans, and caffeine. The question then was whether the benchtop process could be applied at an industrial scale, and at what environmental cost.

Now, the researchers have carried out a "cradle-to-grave" life cycle assessment, taking into account every step in the process at an industrial scale. For instance, the team calculated the carbon emissions associated with acquiring and processing aluminum, reacting it with seawater to produce hydrogen, and transporting the fuel to gas stations, where drivers could tap into hydrogen tanks to power engines or fuel cell cars. They found that, from end to end, the new process could generate a fraction of the carbon emissions that is associated with conventional hydrogen production.

In a study appearing today in Cell Reports Sustainability, the team reports that for every kilogram of hydrogen produced, the process would generate 1.45 kilograms of carbon dioxide over its entire life cycle. In comparison, fossil-fuel-based processes emit 11 kilograms of carbon dioxide per kilogram of hydrogen generated.
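The quoted figures work out to roughly a 7.6-fold cut in life-cycle emissions; a quick sanity check of the numbers above:

```python
# Compare the paper's life-cycle emissions figures for the two processes.
aluminum_process = 1.45   # kg CO2 per kg H2, new aluminum/seawater process
fossil_process = 11.0     # kg CO2 per kg H2, conventional fossil-fuel process

ratio = fossil_process / aluminum_process
reduction_pct = (1 - aluminum_process / fossil_process) * 100
print(round(ratio, 1), round(reduction_pct, 1))  # 7.6 86.8
```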

Now the question is how to avoid a Hindenburg every now and then.


Original Submission

posted by janrinok on Thursday June 12, @09:01PM   Printer-friendly
from the sad-king dept.

ChatGPT might have many strengths and claims of "intelligence". But in a recent game of chess it was utterly wrecked (their word, not mine) by an Atari 2600 and its simple little chess program. So all the might of ChatGPT, applied to chess, was wrecked by a scrappy little game console that is almost 50 years old.

So there are things that ChatGPT apparently shouldn't do. Like playing chess. If anything, this might show its absolute lack of critical thinking or thinking ahead. Instead it's a regurgitation engine for text blobs. I guess you can't just conjure up a good game of chess from the Internet and apply it ...

The matchup seems almost comical when you consider the hardware involved. The Atari 2600 was powered by a MOS Technology 6507 processor running at just 1.19 MHz. To put that in perspective, your smartphone is literally thousands of times more powerful. The chess engine in Atari Chess only thinks one to two moves ahead – a far cry from the sophisticated AI systems we're used to today.

The most telling part? ChatGPT was playing on the beginner difficulty level. This wasn't even the game's hardest setting – it was designed for people just learning to play chess.

https://www.theregister.com/2025/06/09/atari_vs_chatgpt_chess/
https://futurism.com/atari-beats-chatgpt-chess
https://techstory.in/chatgpt-absolutely-wrecked-by-atari-2600-in-beginner-chess/


Original Submission

posted by janrinok on Thursday June 12, @04:16PM   Printer-friendly
from the organic-data dept.

UNFI, North America's largest grocery distributor, halted deliveries after a cyberattack disrupted operations for 30,000 retail locations:

United Natural Foods Inc. (UNFI), North America's largest grocery distributor and the primary supplier for Whole Foods Market, has been forced to halt deliveries and take systems offline after a crippling cyberattack. The breach, discovered in early June, has disrupted operations across its network of 30,000 retail locations, raising alarms about the vulnerability of the nation's food supply chain to digital threats.

The Rhode Island-based company confirmed in a June 9 regulatory filing that unauthorized access to its IT systems triggered emergency protocols, including shutting down critical infrastructure. "The incident has caused, and is expected to continue to cause, temporary disruptions to the Company's business operations," UNFI stated, adding that it is working with law enforcement and cybersecurity experts to restore functionality.

UNFI's outage has left grocery retailers scrambling. Steve Schwartz, director of sales for New York's Morton Williams chain, told The New York Post, "It's bringing the company to a standstill with no orders generated and no orders coming in." The chain relies on UNFI for staples like dairy products and bottled waters, forcing it to seek alternative suppliers. Smaller businesses, like bakeries dependent on UNFI deliveries, face even steeper challenges.

[...] UNFI insists it has implemented "temporary workarounds" to mitigate customer disruptions, but the timeline for full recovery remains unclear. The company's stock fell 8.5% following the announcement, reflecting investor unease.

Also at CNN, TechCrunch and Bloomberg.


Original Submission

posted by janrinok on Thursday June 12, @11:31AM   Printer-friendly
from the Linguistics dept.

From https://www.maginative.com/article/with-dolphingemma-google-is-trying-to-decode-dolphin-language-using-ai/

Google, in collaboration with Georgia Tech and the Wild Dolphin Project, has announced DolphinGemma, an AI model designed to analyze and generate dolphin vocalizations. With about 400 million parameters, the model is compact enough to run on Google Pixel phones used in ocean fieldwork, allowing researchers to process dolphin sounds in real-time.

DolphinGemma builds on Google's lightweight Gemma model family, optimized for on-device use. It was trained on an extensive, labeled dataset collected over four decades by the Wild Dolphin Project — the longest-running underwater dolphin research initiative. These audio and video records capture generations of Atlantic spotted dolphins in their natural habitat, complete with behavioral context and individual dolphin identities.

The goal is ambitious: to detect the structure and potential meaning in dolphin sounds — including signature whistles used between mothers and calves, or the aggressive "squawks" exchanged during disputes. DolphinGemma functions like a language model for dolphins, predicting likely vocalizations based on prior sequences, helping researchers uncover patterns and hidden rules in their communication.

And here's the DolphinGemma site.

Will this LLM generate AI spam for dolphins? And is there any way we can know what it's saying?

Additional discussion on the matter at The Guardian: We're close to translating animal languages – what happens then?


Original Submission

Processed by jelizondo

posted by hubie on Thursday June 12, @06:45AM   Printer-friendly

https://www.righto.com/2017/10/the-xerox-alto-smalltalk-and-rewriting.html

We succeeded in running the Smalltalk-76 language on our vintage Xerox Alto; this blog post gives a quick overview of the Smalltalk environment. One unusual feature of Smalltalk is you can view and modify the system's code while the system is running. I demonstrate this by modifying the scrollbar code on a running system.

Smalltalk is a highly-influential programming language and environment that introduced the term "object-oriented programming" and was the ancestor of modern object-oriented languages. The Alto's Smalltalk environment is also notable for its creation of the graphical user interface with the desktop metaphor, icons, scrollbars, overlapping windows, popup menus and so forth. When Steve Jobs famously visited Xerox PARC, the Smalltalk GUI inspired him on how the Lisa and Macintosh should work.


Original Submission

posted by hubie on Thursday June 12, @01:56AM   Printer-friendly
from the Stand-Up-For-Science dept.

We regard as "scientific" a method based on deep analysis of facts, theories, and views, presupposing unprejudiced, unfearing open discussion and conclusions.

(Andrei Sakharov, Thoughts on Peace, Progress and Intellectual Freedom, 1968.)

At the time of writing, a couple hundred scientists at the National Institutes of Health (NIH) have signed a letter of dissent towards their management, dubbed the Bethesda Declaration. It opens thus:

Dear Dr. Bhattacharya,

For staff across the National Institutes of Health (NIH), we dissent to Administration policies that undermine the NIH mission, waste public resources, and harm the health of Americans and people across the globe. Keeping NIH at the forefront of biomedical research requires our stalwart commitment to continuous improvement. But the life-and-death nature of our work demands that changes be thoughtful and vetted. We are compelled to speak up when our leadership prioritizes political momentum over human safety and faithful stewardship of public resources.

You too can sign the letter, along with the 2,331 scientists and IT specialists who have already done so, here.

Since January 20, the new administration has cancelled 2,100 NIH research grants totalling around $9.5bn, plus $2.6bn in contracts.


Original Submission

posted by hubie on Wednesday June 11, @09:11PM   Printer-friendly

New Way to Covertly Track Android Users

Researchers have discovered a new way to covertly track Android users. Both Meta and Yandex were using it, but have suddenly stopped now that they have been caught.

The details are interesting, and worth reading in detail:

Tracking code that Meta and Russia-based Yandex embed into millions of websites is de-anonymizing visitors by abusing legitimate Internet protocols, causing Chrome and other browsers to surreptitiously send unique identifiers to native apps installed on a device, researchers have discovered. Google says it's investigating the abuse, which allows Meta and Yandex to convert ephemeral web identifiers into persistent mobile app user identities.

        The covert tracking, implemented in the Meta Pixel and Yandex Metrica trackers, allows Meta and Yandex to bypass core security and privacy protections provided by both the Android operating system and browsers that run on it. Android sandboxing, for instance, isolates processes to prevent them from interacting with the OS and any other app installed on the device, cutting off access to sensitive data or privileged system resources. Defenses such as state partitioning and storage partitioning, which are built into all major browsers, store site cookies and other data associated with a website in containers that are unique to every top-level website domain to ensure they're off-limits for every other site.

-- Links in article:

https://localmess.github.io/
https://www.facebook.com/business/tools/meta-pixel/
https://ads.yandex/metrica
https://source.android.com/docs/security/app-sandbox
https://developer.mozilla.org/en-US/docs/Web/Privacy/Guides/State_Partitioning
https://privacysandbox.google.com/cookies/storage-partitioning
https://www.washingtonpost.com/technology/2025/06/06/meta-privacy-facebook-instagram/

-- See Also:

- Meta and Yandex are de-anonymizing Android users' web browsing identifiers
https://arstechnica.com/security/2025/06/meta-and-yandex-are-de-anonymizing-android-users-web-browsing-identifiers/


Original Submission

posted by hubie on Wednesday June 11, @04:26PM   Printer-friendly

OpenAI defends privacy of hundreds of millions of ChatGPT users:

OpenAI is now fighting a court order to preserve all ChatGPT user logs—including deleted chats and sensitive chats logged through its API business offering—after news organizations suing over copyright claims accused the AI company of destroying evidence.

"Before OpenAI had an opportunity to respond to those unfounded accusations, the court ordered OpenAI to 'preserve and segregate all output log data that would otherwise be deleted on a going forward basis until further order of the Court (in essence, the output log data that OpenAI has been destroying)," OpenAI explained in a court filing demanding oral arguments in a bid to block the controversial order.

In the filing, OpenAI alleged that the court rushed the order based only on a hunch raised by The New York Times and other news plaintiffs. And now, without "any just cause," OpenAI argued, the order "continues to prevent OpenAI from respecting its users' privacy decisions." That risk extended to users of ChatGPT Free, Plus, and Pro, as well as users of OpenAI's application programming interface (API), OpenAI said.

The court order came after news organizations expressed concern that people using ChatGPT to skirt paywalls "might be more likely to 'delete all [their] searches' to cover their tracks," OpenAI explained. Evidence to support that claim, news plaintiffs argued, was missing from the record because so far, OpenAI had only shared samples of chat logs that users had agreed that the company could retain. Sharing the news plaintiffs' concerns, the judge, Ona Wang, ultimately agreed that OpenAI likely would never stop deleting that alleged evidence absent a court order, granting news plaintiffs' request to preserve all chats.

OpenAI argued the May 13 order was premature and should be vacated, until, "at a minimum," news organizations can establish a substantial need for OpenAI to preserve all chat logs. They warned that the privacy of hundreds of millions of ChatGPT users globally is at risk every day that the "sweeping, unprecedented" order continues to be enforced.

"As a result, OpenAI is forced to jettison its commitment to allow users to control when and how their ChatGPT conversation data is used, and whether it is retained," OpenAI argued.

Meanwhile, there is no evidence beyond speculation yet supporting claims that "OpenAI had intentionally deleted data," OpenAI alleged. And supposedly there is not "a single piece of evidence supporting" claims that copyright-infringing ChatGPT users are more likely to delete their chats.

"OpenAI did not 'destroy' any data, and certainly did not delete any data in response to litigation events," OpenAI argued. "The Order appears to have incorrectly assumed the contrary."

At a conference in January, Wang raised a hypothetical in line with her thinking on the subsequent order. She asked OpenAI's legal team to consider a ChatGPT user who "found some way to get around the pay wall" and "was getting The New York Times content somehow as the output." If that user "then hears about this case and says, 'Oh, whoa, you know I'm going to ask them to delete all of my searches and not retain any of my searches going forward,'" the judge asked, wouldn't that be "directly the problem" that the order would address?

[...] Before the order was in place mid-May, OpenAI only retained "chat history" for users of ChatGPT Free, Plus, and Pro who did not opt out of data retention. But now, OpenAI has been forced to preserve chat history even when users "elect to not retain particular conversations by manually deleting specific conversations or by starting a 'Temporary Chat,' which disappears once closed," OpenAI said. Previously, users could also request to "delete their OpenAI accounts entirely, including all prior conversation history," which was then purged within 30 days.

While OpenAI rejects claims that ordinary users use ChatGPT to access news articles, the company noted that including OpenAI's business customers in the order made "even less sense," since API conversation data "is subject to standard retention policies." That means API customers couldn't delete all their searches based on their customers' activity, which is the supposed basis for requiring OpenAI to retain sensitive data.

"The court nevertheless required OpenAI to continue preserving API Conversation Data as well," OpenAI argued, in support of lifting the order on the API chat logs.

[...] It's unclear if OpenAI will be able to get the judge to waver if oral arguments are scheduled.

Wang previously justified the broad order partly due to the news organizations' claim that "the volume of deleted conversations is significant." She suggested that OpenAI could have taken steps to anonymize the chat logs but chose not to, only making an argument for why it "would not" be able to segregate data, rather than explaining why it "can't."


Original Submission

posted by hubie on Wednesday June 11, @11:40AM   Printer-friendly

Do you think Internet SEARCH has gone sucky-sucky-so-so? Can you imagine a better experience? Do you have some coding (dis)ability, perhaps even friends-with-similar-benefits?

Then you -- yes, you -- might be interested in a project a bunch of European research institutions have been working on for the past two years, and now -- June 6 -- have released to the public.

The project -- imaginatively named the Open Web Search Initiative -- offers all elements of a modern-day search engine in convenient open source packages, along with 6.61 billion URLs, 923 TiB total, and 1 TiB of daily crawled data. The only thing left for you to do is to download a partial index of all that data to your own server(s) and develop your own custom software on top of that. Then ...

  1. Off to some VC millions
  2. ???
  3. Internet billions!!!

Please do return a percent of your revenue to this site though -- those private massages for the editor do not come cheaply, you know -- and an additional percent for this sub's author. Thank you's!

(Postscript: in case you're looking for funding as an open source developer; also, there's a free event on June 19-20 in Brussels.)


Original Submission

posted by janrinok on Wednesday June 11, @06:58AM   Printer-friendly

'We're definitely on the back foot': U.S. risks losing fusion energy race to China, industry leaders warn:

The race to lead in artificial intelligence isn't the only event in which the U.S. and China are competing for dominance. The pursuit of fusion — the "Holy Grail" of clean energy — is also pitting the superpowers against each other, and American tech leaders worry China could surge ahead.

At a Technology Alliance conference on Tuesday, Washington state companies building commercial fusion technologies raised concerns about China's strategy to pour resources into fusion.

"The U.S. is not committed to fusion. China is, by orders of magnitude," said Ben Levitt, the head of R&D for Zap Energy, speaking on a fusion panel at the Seattle Investor Summit+Showcase.

While the U.S. government spent approximately $800 million a year on fusion efforts during the Biden administration, China is investing more than twice that annually, IEEE Spectrum and others report. The Trump administration has taken action supporting nuclear fission, which powers today's nuclear reactors, but has not shown the same interest in fusion. The sector has become increasingly reliant on venture capital to fund its progress.

China is also focused on training fusion physicists and engineers, while President Trump is slashing funding for scientific research.

Fusion is so highly sought after given its potential to provide nearly limitless, carbon-free power, which could be critical to meet growing energy demands from AI applications and the global push to decarbonize transportation, the electrical grid, heating and cooling, industrial applications and elsewhere.

"The U.S. started with a very good hand in fusion and has played it extremely poorly," Levitt said. "So, yeah, we're definitely on the back foot."

The conference panel also included Brian Riordan, co-founder and chief operating officer of Avalanche Energy, and Anthony Pancotti, co-founder and head of R&D for Helion Energy.

Riordan argued that while China appears to be making strides in the race, what matters even more is who develops the most affordable technology.

Physicists for decades have pursued fusion energy. But replicating the reactions that power the Sun and stars is massively challenging and requires technologies that can generate super high pressure and temperatures of 100 million degrees Celsius, and sustain those conditions — plus efficiently capture the energy that fusion produces.

In December 2022, the U.S. National Ignition Facility (NIF) at Lawrence Livermore National Laboratory hit a key milestone in fusion research, demonstrating that fusion reactions here on Earth could release more power than required to produce them.

Images published in January revealed that China appears to be building a fusion research facility modeled on NIF — but even larger. Others suggest the site could be a giant Z-pinch machine — similar to the technology being pursued by Zap.

Years ago, a Chinese website posted a graphic of a fusion device that bore a troubling resemblance to Helion's technology, the company has said.

"We have seen copycats in China already, and it is terrifying," Pancotti said on Tuesday. "They can mobilize people and money at a scale that is beyond even what venture capital can do in this country. And so I think there's real concern there, and there's real concern around supply chain, too."

Added Levitt: "I wouldn't be surprised if every single one of our [fusion] concepts has a city designated to it in China."

When it comes to world-ending tech, I'm not sure I want it to be a race.


Original Submission

posted by janrinok on Wednesday June 11, @02:14AM   Printer-friendly

https://distrowatch.com/dwres.php?resource=showheadline&story=20007

The Ubuntu team is following Fedora's example and dropping GNOME's X11 session in the distribution's next version. The announcement for the change reads, in part:

"The login screen (powered by GDM) will no longer offer the Ubuntu on Xorg option. All sessions based on GNOME Shell and Mutter are now Wayland-only and users who rely on X11-specific behaviors will not be able to use the GNOME desktop environment on Xorg. We understand that some users still depend on Xorg's implementation of X11; for example, in remote desktop setups, or highly specialized workflows. If you require Xorg specifically, you can install and use a non-GNOME desktop environment. Xorg itself is not going away, only GNOME's support for Xorg."


Original Submission