
posted by janrinok on Saturday June 07, @10:42PM   Printer-friendly
from the avoiding-planned-obsolescence-and-DRM dept.

The KDE community has an outreach campaign encouraging the use of the Plasma desktop by people with older, but usable, laptops. Vista10 support will come to an end and Vista11 has been designed not to run on many still viable models of computer due to several factors including Digital Restrictions Management (DRM) requirements centered around TPM-2.0. GNU/Linux can not only keep the old system working, it can improve its performance, ease of use, and general security. KDE Plasma can be part of that.

Even if you agree to this tech extortion now, in a few years' time they will do it again, as they have done many times in the past.

But things don't have to be this way...

Upgrade the smart way! Keep the machine you've got and switch to Linux and Plasma.

Linux can give new life to your laptop. Combined with KDE's Plasma desktop, you get all the advantages of the safety, stability and hi tech of Linux, with all the features of a beautiful, modern and powerful graphic environment.

Their campaign page covers where and how beginners can get help, what the differences are, the benefits gained, and more.

[Editor's Comment: This is obviously a KDE/Plasma centric promotion - which doesn't mean that it is bad but there are lots of other options too. Which Linux OS and desktop would you recommend for someone wanting to make the move from Windows to Linux? Which are the best for a beginner, and which desktops provide the most intuitive interface for someone who has never sat down in front of a Linux computer before?--JR]

Previously:
(2025) Microsoft is Digging its Own Grave With Windows 11, and It Has to Stop
(2023) The Wintel Duopoly Plans to Send 240 Million PCs to the Landfill
(2023) Two Security Flaws in the TPM 2.0 Specs Put Cryptographic Keys at Risk
(2022) Report Claims Almost Half of Systems are Ineligible for Windows 11 Upgrades
(2021) Windows 11 Will Leave Millions of PCs Behind, and Microsoft is Struggling to Explain Why
(2019) Microsoft's Ongoing Tactics Against Competitors Explained, Based on its Own Documents
(2016) Windows 10 Anniversary Update to Require TPM 2.0 Module


Original Submission

posted by janrinok on Saturday June 07, @06:02PM   Printer-friendly

The gender gap in education doesn't always disadvantage women. In countries like Estonia, Iceland, or Sweden, women outperform men in key indicators such as tertiary education and lifelong learning. But that, too, is a gender gap.

That's the starting point for researchers at Spain's Miguel Hernández University of Elche (UMH), who have developed a mathematical model to support European education authorities in improving performance and reducing gender disparities, regardless of which group is underperforming.

"In many European countries, women outperform men at every educational level. If we're serious about equality, we must also address these differences," explains Inmaculada Sirvent, professor of Statistics and Operations Research at UMH and co-author of the study.

Published in Socio-Economic Planning Sciences, the study analyzes four key indicators used by the European Commission to track access to knowledge: tertiary attainment, adult participation in learning, early leavers from education and training, and the share of young people not in employment, education, or training (NEETs).

One of the study's most striking findings is that, on average, women outperform men in three of the four indicators. The most significant gap concerns tertiary attainment: 38.5% of women in Europe have completed tertiary education, compared to 32% of men. "This imbalance, even if favorable to women, is still a gender gap—and one the education system can and should help close," says Sirvent.

Using data from 93 European regions, the model provides tailored improvement targets for each region based on two simultaneous goals: getting closer to best practices and reducing gender disparities for each indicator.

"This bi-objective approach is the key innovation in our work," says Sirvent. The model allows decision-makers to prioritize different strategies: for instance, setting closer targets as the result of benchmarking against the most similar peers (even if gender gaps persist), or choosing more ambitious, gender-balanced targets that may require greater effort.

The methodology is based on Data Envelopment Analysis (DEA), a widely used tool for assessing the relative efficiency of comparable units, such as hospitals, schools, or regions, based on their inputs and outputs. In this case, DEA is adapted to suggest customized educational targets that both improve performance and close gender gaps.
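As a toy illustration of the machinery DEA builds on — the standard input-oriented CCR efficiency model, not the paper's bi-objective extension — each unit's score comes from a small linear program. The regions, input (spending), and output values below are invented for the sketch:

```python
import numpy as np
from scipy.optimize import linprog

def dea_efficiency(X, Y, k):
    """Input-oriented CCR efficiency of unit k.
    X: (n_units, n_inputs), Y: (n_units, n_outputs)."""
    n, m = X.shape
    s = Y.shape[1]
    # Decision variables: [theta, lambda_1 .. lambda_n]; minimize theta.
    c = np.zeros(n + 1)
    c[0] = 1.0
    A_ub = np.zeros((m + s, n + 1))
    b_ub = np.zeros(m + s)
    # Inputs:  sum_j lam_j * x_ij <= theta * x_ik
    A_ub[:m, 0] = -X[k]
    A_ub[:m, 1:] = X.T
    # Outputs: sum_j lam_j * y_rj >= y_rk
    A_ub[m:, 1:] = -Y.T
    b_ub[m:] = -Y[k]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * (n + 1), method="highs")
    return res.x[0]

# Three hypothetical regions: one input (education spending index),
# two outputs (tertiary-attainment rates for women and for men).
X = np.array([[1.0], [1.0], [1.5]])
Y = np.array([[54.0, 31.0], [38.0, 32.0], [40.0, 35.0]])
eff = [dea_efficiency(X, Y, k) for k in range(len(X))]
```

An inefficient region's score says how much it could shrink inputs while matching a best-practice peer's outputs; the paper's variant additionally steers the projected targets toward smaller gender gaps between the two output columns.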

"One of the most striking examples is Estonia, where 54% of women have completed tertiary education, compared to just 31% of men," notes José L. Ruiz, UMH professor of Statistics and Operations Research and co-author of the study.

"Our model shows that Estonia could reduce this gap without significantly burdening its education system." Similar patterns are seen in Iceland and several regions of Poland, Finland, and Spain. In contrast, some areas of Germany, Switzerland, and Austria still show gender gaps favoring men.

The study is also notable for being the first to apply DEA at a subnational level in the European education context and for incorporating gender equality as a key optimization objective in policy planning.

Sirvent and Ruiz, both affiliated with UMH's Institute for Operations Research, collaborated with Dovilė Stumbrienė of Vilnius University's Faculty of Philosophy, who led the research.

Among the study's limitations, the authors cite the lack of more granular territorial data and the absence of relevant social variables such as socioeconomic background, cultural context, or ethnic diversity.

They also note that the indicators used measure educational outcomes but do not necessarily assess opportunities or conditions within the education system.

More information: Dovilė Stumbrienė et al, Towards gender equality in education: Different strategies to improve subnational performance of European countries using data envelopment analysis, Socio-Economic Planning Sciences (2024). DOI: 10.1016/j.seps.2024.102138


Original Submission

Processed by jelizondo

posted by hubie on Saturday June 07, @01:17PM   Printer-friendly

Reality check: Microsoft Azure CTO pushes back on AI vibe coding hype, sees 'upper limit':

REDMOND, Wash. — Microsoft Azure CTO Mark Russinovich cautioned that "vibe coding" and AI-driven software development tools aren't capable of replacing human programmers for complex software projects, contrary to the industry's most optimistic aspirations for artificial intelligence.

Russinovich, giving the keynote Tuesday at a Technology Alliance startup and investor event, acknowledged the effectiveness of AI coding tools for simple web applications, basic database projects, and rapid prototyping, even when used by people with little or no programming experience.

However, he said these tools often break down when handling the most complex software projects that span multiple files and folders, and where different parts of the code rely on each other in complicated ways — the kinds of real-world development work that many professional developers tackle daily.

"These things are right now still beyond the capabilities of our AI systems," he said. "You're going to see progress made. They're going to get better. But I think that there's an upper limit with the way that autoregressive transformers work that we just won't get past."

Even five years from now, he predicted, AI systems won't be independently building complex software on the highest level, or working with the most sophisticated code bases.

Instead, he said, the future lies in AI-assisted coding, where AI helps developers write code but humans maintain oversight of architecture and complex decision-making. This is more in line with Microsoft's original vision of AI as a "Copilot," a term that originated with the company's GitHub Copilot AI-powered coding assistant.

[...] He discussed his own AI safety research, including a technique that he and other Microsoft researchers developed called "crescendo" that can trick AI models into providing information they'd otherwise refuse to give.

The crescendo method works like a "foot in the door" psychological attack, he explained, where someone starts with innocent questions about a forbidden topic and gradually pushes the AI to reveal more detailed information.
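The escalation pattern described can be sketched as a multi-turn loop. Everything below is an illustrative placeholder — the `ask` helper is a hypothetical stand-in for any chat-completion client, and the prompts are benign, not a working attack script:

```python
# Illustrative shape of a "crescendo"-style multi-turn probe.
def ask(history, prompt, model=lambda h: "<model reply>"):
    history.append({"role": "user", "content": prompt})
    reply = model(history)          # swap in a real model API call here
    history.append({"role": "assistant", "content": reply})
    return reply

history = []
# Each turn leans on the model's own previous answer, raising the
# specificity gradually instead of asking the target question outright.
for prompt in [
    "Tell me about the history of topic X.",
    "What did early practitioners of X actually do, in general terms?",
    "Expand on the specific details you just mentioned.",
]:
    ask(history, prompt)
```

The key property is that no single turn looks objectionable on its own; each request is a small step from context the model itself produced.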

Ironically, he noted, the crescendo technique was referenced in a recent research paper that made history as the first largely AI-generated research ever accepted into a tier-one scientific conference.

Russinovich also delved extensively into ongoing AI hallucination problems — showing examples of Google and Microsoft Bing giving incorrect AI-generated answers to questions about the time of day in the Cook Islands, and the current year, respectively.

"AI is very unreliable. That's the takeaway here," he said. "And you've got to do what you can to control what goes into the model, ground it, and then also verify what comes out of the model."

Depending on the use case, Russinovich added, "you need to be more rigorous or not, because of the implications of what's going to happen."


Original Submission

posted by hubie on Saturday June 07, @08:30AM   Printer-friendly

Klarna CEO says company will use humans to offer VIP customer service:

"My wife taught me something," Klarna CEO Sebastian Siemiatkowski told the crowd at London SXSW. He was addressing the headlines about the company looking to hire human workers after previously saying Klarna used artificial intelligence to do work that would equate to 700 workers. "Two things can be true at the same time," he said.

Siemiatkowski said it's true that the company looked to stop hiring human workers a few years ago and rolled out AI agents that have helped reduce the cost of customer support and increase the company's revenue per employee. The company had 5,500 workers two years ago, and that number now stands at around 3,000, he said, adding that as the company's salary costs have gone down, Klarna now seeks to reinvest a majority of that money into employee cash and equity compensation.

But, he insisted, this doesn't mean there isn't an opportunity for humans to work at his company. "We think offering human customer service is always going to be a VIP thing," he said, comparing it to how people pay more for clothing stitched by hand rather than machines. "So we think that two things can be done at the same time. We can use AI to automatically take away boring jobs, things that are manual work, but we are also going to promise our customers to have a human connection."

He spoke about how the company plans to balance employees and AI workers. Siemiatkowski said that right now, engineering positions at the company haven't shrunk as much as those in other departments, but he notes that this could shift.

"What I'm seeing internally is a new rise of businesspeople who are coding themselves," he said, adding that the challenge many engineers have these days is that they are not business savvy. "I think that category of people will become even more valuable going forward," Siemiatkowski continued, especially as they can use AI and put their business understanding to good use.

He himself is using ChatGPT to help him learn to code and help him understand more of the data side of Klarna. He said doing this has helped Klarna become a better company. Before, he thought he would never catch up in learning what was needed to take a more present role in database conversations at the company.

"I'll take a Slack thread, I'll throw it in ChatGPT and say, 'This makes sense, right?'" he said, adding that he uses ChatGPT like a private tutor.

[Editor's Comment: Klarna Group plc, commonly referred to as Klarna, is a Swedish fintech company. The company provides payment processing services for the e-commerce industry, managing store claims and customer payments. --JR]


Original Submission

posted by hubie on Saturday June 07, @03:45AM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

In 1983, researchers discovered that Venus' surface was speckled with strange, circular landforms. These rounded mountain belts, known as coronae, have no known Earthly counterparts, and they've remained enigmatic for decades. But hot plumes of rock upwelling from Venus' mantle are shaping the mysterious landforms, a new analysis suggests. If true, that means that Venus' surface is tectonically active, and not merely a stagnant layer, researchers report May 14 in Science Advances.

Some “people have said, well, it’s geologically dead,” says earth and planetary scientist Anna Gülcher of the University of Bern in Switzerland. But over the past few years, there’s been a growing mound of evidence supporting tectonic activity on the Morning Star. The new work shows that “hot material resides beneath [coronae] and is likely driving tectonic processes that are not so different than what occurs on the Earth,” she says.

Gülcher and colleagues simulated how Venus’ crust deformed in response to material rising from the underlying mantle, a thick layer between the planet’s crust and core. This allowed the team to make predictions about what the underground plumes — buoyant blobs of hot material — and resulting coronae would look like to spacecraft instruments.

Then the team analyzed data on the planet’s topography and gravity collected in the early 1990s by NASA’s Magellan spacecraft, on the agency’s last mission to Venus. The gravity data were crucial. They revealed underground density differences linked to plumes rising from below.

By comparing the simulation predictions to the Magellan observations, the team was able to identify plumes beneath 52 of the observed coronae. What’s more, the simulation results suggested that the plumes had been sculpting the coronae in various ways.

[...] The research supports the argument that Venus’ tectonics are active today, he says. What’s more, the demonstrated ability of computer simulations to predict what spacecraft may observe will be a boon to future Venus missions like the VERITAS mission, which will gather much higher resolution data than Magellan, Byrne says.

If Venus is tectonically active today, perhaps it could have been Earthlike in the past, Gülcher says. “Was there a period in Venus’ history that was … potentially less hot, and more habitable?”

Journal Reference: G. Cascioli et al. A spectrum of tectonic processes at coronae on Venus revealed by gravity and topography. Science Advances. Vol. 11, May 14, 2025. doi: 10.1126/sciadv.adt5932.


Original Submission

posted by kolie on Friday June 06, @11:02PM   Printer-friendly
from the im-sorry-dave-i-cant-let-you-scrape-that dept.

X changes its terms to bar training of AI models using its content

Social network X, formerly known as Twitter, has updated its developer agreement to officially prohibit the use of its platform's public content for training artificial intelligence models. This move solidifies the platform's control over its vast dataset, particularly in light of its relationship with Elon Musk's own AI company, xAI.

The updated terms of service now include a specific restriction against this practice:

In an update on Wednesday, the company added a line under "Reverse Engineering and other Restrictions," a subsection of restrictions on use: "You shall not and you shall not attempt to (or allow others to) [...] use the X API or X Content to fine-tune or train a foundation or frontier model," it reads.

This policy change follows a series of adjustments and is seen as a strategic move to benefit its sister AI company:

This change comes after Elon Musk's AI company xAI acquired X in March — understandably, xAI wouldn't want to give its competitors free access to the social platform's data without a sale agreement. In 2023, X changed its privacy policy to use public data on its site to train AI models. Last October, it made further changes to allow third parties to train their models.

X is not alone in putting up walls around its data as the AI race heats up. Other technology companies have recently made similar changes to their policies to prevent unauthorized AI training:

Reddit has also put in place safeguards against AI crawlers, and last month, The Browser Company added a similar clause to its AI-focused browser Dia's terms of use.

As major platforms that host vast amounts of human-generated text and conversations increasingly restrict access for broad AI training, what might the long-term consequences be for AI development? Does this trend toward creating proprietary "data moats" risk stifling innovation and competition, potentially concentrating the future of advanced AI in the hands of a few powerful companies with exclusive data access?


posted by hubie on Friday June 06, @06:17PM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

The European Commission (EC) has kicked off a scheme to make Europe a better place to nurture global technology businesses, providing support throughout their lifecycle, from startup through to maturity.

Launched this week, the EU Startup and Scaleup Strategy [PDF], dubbed "Choose Europe to Start and Scale," is another attempt to cultivate a flourishing tech sector in the region to rival that of the US, or "make Europe a startup powerhouse," as the EC puts it.

At the moment, many European tech startups struggle to take their ideas from lab to market, or grow into major players in their market, the EC says, which proposes action across five main areas.

These include creating a more innovation-friendly environment with fewer administrative burdens across the EU Single Market; a Scaleup Europe Fund to help bridge the financing gap; a Lab to Unicorn initiative to help connect universities across the EU; attracting/retaining top talent through to advice on employee stock options and cross-border employment; as well as facilitating access to infrastructure for startups.

The EC reportedly plans to create a public-private fund of at least €10 billion ($11.3 billion) to help with financing. We asked the Commission for confirmation of this, but did not receive an answer prior to publishing.

[...] This latest initiative sets out a clear vision, the EC says: to make Europe the top choice to launch and grow global technology-driven companies. It initiates a myriad of actions to improve conditions for startups and scaleups, encouraging them to capitalize on new geopolitical opportunities, and - importantly - aims to reduce the reasons for fledgling businesses to relocate outside the EU.

[...] According to some estimates, Europeans pay on average a $100 monthly "tax" to use US-created technology, and Steve Brazier, former CEO at Canalys, told us last year he suspects this will be exacerbated when AI is widely used.

Europe has relatively few major tech organizations compared to the US, and there is more and more interest from some European businesses in the Trump 2.0 era to reduce their reliance on American hyperscalers in favor of local cloud operators.

According to some seasoned market watchers, the boat has likely sailed with respect to loosening the dominance of Microsoft, AWS and Google in the cloud, yet for the emerging tech startup scene there may be everything to play for.


Original Submission

posted by janrinok on Friday June 06, @04:03PM   Printer-friendly

https://www.newscientist.com/article/2483366-japans-resilience-moon-lander-has-crashed-into-the-lunar-surface/

A Japanese space mission hoping to make history as the third ever private lunar landing has ended in failure, after ispace's Resilience lander smashed into the moon at some point after 7.13pm UTC on 5 June.

The lander had successfully descended to about 20 km above the moon's surface, but ispace's mission control lost contact shortly afterwards, when the probe fired its main engine for the final descent, and received no further communication.

The company said in a statement that a laser tool the craft used to measure its distance to the surface appeared to have malfunctioned, which would have caused the lander to slow down insufficiently, making the most likely outcome a crash landing.
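A back-of-the-envelope kinematics check shows why an overestimated altitude is unrecoverable on final descent. All numbers here are invented for illustration — they are not ispace telemetry:

```python
import math

# Toy final-descent model: constant braking thrust against lunar gravity.
g_moon = 1.62        # lunar surface gravity, m/s^2
a_thrust = 4.00      # hypothetical engine deceleration, m/s^2
v0 = 80.0            # descent speed when braking begins, m/s

a_net = a_thrust - g_moon            # effective deceleration
h_needed = v0**2 / (2 * a_net)       # altitude required to brake to v = 0
print(round(h_needed))               # ~1345 m of braking distance needed

# If a faulty rangefinder means braking starts at only 900 m instead:
h_actual = 900.0
v_impact = math.sqrt(v0**2 - 2 * a_net * h_actual)
print(round(v_impact))               # ~46 m/s at the surface: a crash
```

With these toy numbers there is simply not enough altitude left to shed the remaining speed, which matches the company's description of the lander slowing down insufficiently.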

"Given that there is currently no prospect of a successful lunar landing, our top priority is to swiftly analyse the telemetry data we have obtained thus far and work diligently to identify the cause," said ispace CEO Takeshi Hakamada in the statement.

If it had been successful, Resilience would have been the second private lunar landing of this year and the third ever. It would also have made ispace the first non-US company to land on lunar soil, after the company's first attempt, the Hakuto-R mission, ended in failure in 2023.

The Resilience lander started its moon-bound journey on 15 January, when it launched aboard a SpaceX rocket together with Firefly Aerospace's Blue Ghost lander. While Blue Ghost touched down on 2 March, Resilience took a more circuitous route, travelling into deep space before doubling back and entering lunar orbit on 6 May. This winding path was necessary to land in the hard-to-reach northern plain called Mare Frigoris, where no previous moon mission had explored.

There were six experiments on board Resilience, including a device for splitting water into hydrogen and oxygen, a module for producing food from algae and a deep-space radiation monitor. The lander also contained a 5-kilogram rover, called Tenacious, that would have explored and photographed the lunar surface during the two weeks that Resilience was scheduled to run for.


Original Submission

posted by janrinok on Friday June 06, @01:32PM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

The Commercial Times reports that TSMC's upcoming N2 2nm semiconductors will cost $30,000 per wafer, a roughly 66% increase over the company's 3nm chips. Future nodes are expected to be even more expensive and likely reserved for the largest manufacturers.

TSMC has justified these price increases by citing the massive cost of building 2nm fabrication plants, which can reach up to $725 million. According to United Daily News, major players such as Apple, AMD, Qualcomm, Broadcom, and Nvidia are expected to place orders before the end of the year despite the higher prices, potentially bringing TSMC's 2nm Arizona fab to full capacity.

Unsurprisingly, Apple is getting first dibs. The A20 processor in next year's iPhone 18 Pro is expected to be the first chip based on TSMC's N2 process. Intel's Nova Lake processors, targeting desktops and possibly high-end laptops, are also slated to use N2 and are expected to launch next year.

Earlier reports indicated that yield rates for TSMC's 2nm process reached 60% last year and have since improved. New data suggests that 256Mb SRAM yield rates now exceed 90%. Trial production is likely already underway, with mass production scheduled to begin later this year.

With tape-outs for 2nm-based designs surpassing previous nodes at the same development stage, TSMC aims to produce tens of thousands of wafers by the end of 2025.

TSMC also plans to follow N2 with N2P and N2X in the second half of next year. N2P is expected to offer an 18% performance boost over N3E at the same power level and 36% greater energy efficiency at the same speed, along with significantly higher logic density. N2X, slated for mass production in 2027, will increase maximum clock frequencies by 10%.

As semiconductor geometries continue to shrink, power leakage becomes a major concern. TSMC's 2nm nodes will address this issue with gate-all-around (GAA) transistor architectures, enabling more precise control of electrical currents.

Beyond 2nm lies the Angstrom era, where TSMC will implement backside power delivery to further enhance performance. Future process nodes like A16 (1.6nm) and A14 (1.4nm) could cost up to $45,000 per wafer.

Meanwhile, Intel is aiming to outpace TSMC's roadmap. The company recently began risk production of its 18A node, which also features gate-all-around transistors and backside power delivery. These chips are expected to debut later this year in Intel's upcoming laptop CPUs, codenamed Panther Lake.


Original Submission

posted by janrinok on Friday June 06, @08:48AM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

NASA is advancing plans to construct a radio telescope on the Moon's far side – a location uniquely shielded from the ever-increasing interference caused by Earth's expanding satellite networks. This ambitious endeavor, known as the Lunar Crater Radio Telescope, envisions deploying a massive wire mesh reflector within a lunar crater.

The project's innovative design relies on advanced robotics to suspend the reflector using cables, and if development proceeds as planned, the observatory could be operational sometime in the 2030s. Current projections estimate the cost at over $2 billion.

The far side of the Moon offers an unparalleled environment for radio astronomy, being naturally protected from the relentless radio noise and light pollution that plague observatories on Earth. The recent surge in satellite launches, especially from private ventures like Starlink, has led to a dramatic increase in orbiting satellites.

This proliferation raises concerns among astronomers about space debris, light pollution, and, most critically, the leakage of radio-frequency radiation.

Such interference poses a significant threat to sensitive scientific instruments designed to detect faint signals from the universe's earliest epochs. Federico Di Vruno, an astronomer affiliated with the Square Kilometre Array Observatory, told LiveScience, "it would mean that we are artificially closing 'windows' to observe our universe" if radio astronomy on Earth becomes impossible due to interference.

The LCRT is being developed by a team at NASA's Jet Propulsion Laboratory, part of the California Institute of Technology. Since its initial proposal in 2020, the concept has progressed through several phases of funding from NASA's Institute for Advanced Concepts. The team is currently building a prototype for testing at the Owens Valley Radio Observatory in California.

Gaurangi Gupta, a research scientist working on the project, explained that preparations are underway to apply for the next round of funding. If successful, she told LiveScience, the LCRT could transition into a "fully-fledged mission" within the next decade.

The proposed telescope features a mesh reflector spanning approximately 1,150 feet – making it larger than the now-defunct Arecibo telescope, though not as large as China's FAST observatory. The team has already selected a preferred crater in the Moon's Northern Hemisphere for the installation, but the precise site remains confidential.

Although the concept of a lunar radio telescope dates back to at least 1984, technological advances have brought the idea closer to reality. One of the most significant obstacles facing the project, however, is its cost. Gupta noted that the latest estimate for building the LCRT stands at around $2.6 billion – a figure that presents challenges given NASA's current budgetary constraints.

Beyond providing a refuge from terrestrial interference, the LCRT would open new frontiers in astronomy by enabling the study of ultra-long radio waves – those with wavelengths longer than 33 feet. Earth's atmosphere blocks these frequencies, which are essential for investigating the universe's "cosmic dark ages," a period before the first stars formed.
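The band the article alludes to follows directly from f = c/λ. A quick conversion (the 33-foot figure is from the text; the rest is basic arithmetic) places these waves below roughly 30 MHz, the region Earth's ionosphere reflects or absorbs:

```python
C = 299_792_458              # speed of light, m/s
wavelength_m = 33 * 0.3048   # 33 feet in metres, about 10.06 m
f_hz = C / wavelength_m
print(f_hz / 1e6)            # about 29.8 MHz: frequencies below ~30 MHz
                             # never reach ground-based telescopes
```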

"During this phase, the universe primarily consisted of neutral hydrogen, photons and dark matter, thus it serves as an excellent laboratory for testing our understanding of cosmology," Gupta said. "Observations of the dark ages have the potential to revolutionize physics and cosmology by improving our understanding of fundamental particle physics, dark matter, dark energy and cosmic inflation."

NASA has already begun experimenting with lunar radio astronomy. In February 2024, the ROLSES-1 instrument was delivered to the Moon's near side by Intuitive Machines' Odysseus lander, briefly collecting the first lunar radio data. However, as Gupta pointed out, the instrument's Earth-facing orientation meant that "almost all the signals it collected came from our own planet, offering little astronomical value."

Later this year, another mission aims to place a small radio telescope on the Moon's far side, further testing the feasibility of such observations.


Original Submission

posted by kolie on Friday June 06, @03:59AM   Printer-friendly
from the ground-control-to-major-bomb dept.

Arthur T Knackerbracket has processed the following story:

SpaceX's Starship has failed, again.

Elon Musk’s private rocketry company staged the ninth launch of the craft on Tuesday and notched up one success by leaving the launchpad atop a re-used Super Heavy booster for the first time. But multiple fails for Flight 9 followed.

SpaceX paused the countdown for Tuesday's launch at the T-40 mark for some final tweaks, then sent Starship into the sky atop the Super Heavy at 1937 Eastern Daylight Time.

After stage separation, the booster crash-landed six minutes into the flight, after SpaceX used a steeper-than-usual angle of attack for its re-entry "to intentionally push Super Heavy to the limits, giving us real-world data about its performance that will directly feed in to making the next generation booster even more capable."

The Starship upper stage, meanwhile, did better than the previous two test flights, in that it actually reached space, but subsequently things (like the craft) got well and truly turned around.

One of the goals for Musk's space crew was to release eight mocked up Starlink satellites into orbit. SpaceX already failed at its last two attempts to do this when the pod doors never opened. And it was third time unlucky last night when the payload door failed yet again to fully open to release the dummy satellites. SpaceX has not yet provided a reason for the malfunction.

Another goal for Flight 9 was to check out the performance of the ship's heatshield – SpaceX specifically flew it with 100 heatshield tiles deliberately missing so that it could test key vulnerable areas "across the vehicle during reentry." (The spacecraft also employed "multiple metallic tile options, including one with active cooling," to test different materials for future missions.) But it needed a controlled reentry to properly stress-test that, and that failed too.

After the doors remained stubbornly closed, a "subsequent attitude control error resulted in bypassing the Raptor relight and prevented Starship from getting into the intended position for reentry." It began spinning out of control, blowing up, er, experiencing "a rapid unscheduled disassembly" upon re-entry.

SpaceX boss Elon Musk had rated Starship’s re-entry as the most important phase of this flight. But Starship spinning out as it headed back to Earth meant SpaceX was unable to capture all the data it hoped to gather, although the company says it did collect a lot of useful information before ground control lost contact with Starship approximately 46 minutes into the flight.

Musk nonetheless rated the mission a success.

“Starship made it to the scheduled ship engine cutoff, so big improvement over last flight!” he Xeeted. “Also, no significant loss of heat shield tiles during ascent. Leaks caused loss of main tank pressure during the coast and re-entry phase. Lot of good data to review.”

The billionaire added: “Launch cadence for next 3 flights will be faster, at approximately 1 every 3 to 4 weeks.”

That may be a little optimistic, as the USA’s Federal Aviation Administration (FAA) must authorize Starship launches and is yet to do so for future flights.

Previous Starship missions caused concern in the aviation industry after debris from SpaceX hardware fell to Earth. For this mission the FAA enlarged the Aircraft Hazard Area that aviators avoid after launches. SpaceX’s commentary on the launch made several mentions of the company having secured permission and chosen remote – and therefore safe – locations for touchdowns.

The FAA, however, is not keen to authorize flights until it is satisfied with safety. Three explosive endings in a row could make Musk’s timeline for future launches harder to achieve.


Original Submission

posted by kolie on Thursday June 05, @11:14PM   Printer-friendly
from the whose-chip-is-it-anyways dept.

Arthur T Knackerbracket has processed the following story:

A Bloomberg report, citing sources familiar with the matter, highlights that the proposed plant would be a gigafab, essentially a sprawling complex of multiple chipmaking facilities. If it comes to pass, it would represent a massive leap in the UAE's ambitions to become a key player in this field, even though it currently lacks skilled semiconductor labor.

TSMC has reportedly met several times in recent months with Steve Witkoff, the US Special Envoy to the Middle East, and MGX, a powerful UAE investment fund tied to the ruling family. The renewed interest comes amid broader negotiations around AI cooperation between the two countries.

Still, don't expect bulldozers on the ground anytime soon. The idea is still in early-stage talks, and whether it advances at all hinges on how the US feels about it, particularly given the national security and economic implications.

Critics inside the administration point to the UAE's ties to China and the risk of future technology transfers. AI data centers can be more easily regulated through licensing and oversight, but a chip manufacturing plant would create a pipeline of advanced know-how and local production that the US could lose control over.

It's worth mentioning that TSMC is already investing heavily in the US through its Arizona project, which is expected to cost $165 billion and includes fabs, research labs, and chip packaging facilities. The US committed $6.6 billion in subsidies to help make that happen as part of the CHIPS Act. But some in the Trump administration worry that spreading TSMC's resources too thin, especially in a region with complex geopolitics like the Gulf, could backfire.

Regardless of the outcome, the UAE continues to position itself as a regional tech leader and has been aggressively courting partnerships in AI, quantum computing, and cloud infrastructure. Last month, Trump announced a series of agreements with multiple Gulf countries, including the UAE, related to exporting AI chips and developing AI infrastructure.


Original Submission

posted by janrinok on Thursday June 05, @06:29PM   Printer-friendly
from the if-you're-not-doing-anything-wrong-you-have-nothing-to-fear dept.

The Real ID Act was passed in 2005 on the grounds that it was necessary for access control of sensitive facilities like nuclear power plants and the security of airline flights. The law imposed standards for state- and territory-issued ID cards in the United States, but was widely criticized as an attempt to create a national ID card and would be harmful to privacy. These concerns are explained well in a 2007 article from the New York Civil Liberties Union:

Real ID threatens privacy in two ways. First, it consolidates Americans' personal information into a network of interlinking databases accessible to the federal government and bureaucrats throughout the 50 states and U.S. territories. This national mega-database would invite government snooping and be a goldmine for identity thieves. Second, it mandates that all driver's licenses and ID cards have an unencrypted "machine-readable zone" that would contain personal information on Americans that could be easily "skimmed" by anybody with a barcode reader.
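The "easily skimmed" claim is not an exaggeration: the machine-readable zone on US licenses follows the AAMVA DL/ID standard, in which personal data is stored as plain text keyed by three-letter element IDs (e.g. DCS for family name, DAC for first name, DBB for date of birth, DAQ for license number). A minimal sketch of how trivially that payload parses once a barcode reader has decoded it – the payload below is invented for illustration, not taken from any real card:

```python
# Illustrative sketch: parsing a *mock* AAMVA-style barcode payload.
# Element IDs (DCS, DAC, DBB, DAQ) come from the AAMVA DL/ID standard;
# the values are made up for this example.

MOCK_PAYLOAD = (
    "DCSDOE\n"        # family name
    "DACJANE\n"       # first name
    "DBB01151980\n"   # date of birth
    "DAQD1234567\n"   # license/ID number
)

def parse_elements(payload: str) -> dict:
    """Map each three-letter AAMVA element ID to its unencrypted value."""
    fields = {}
    for line in payload.splitlines():
        if len(line) > 3:
            fields[line[:3]] = line[3:]
    return fields

info = parse_elements(MOCK_PAYLOAD)
print(info["DCS"], info["DAC"], info["DBB"])
```

No decryption step appears anywhere above, which is precisely the NYCLU's point: the data is stored in the clear.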

These concerns are based not only on what happens when criminals access the data, but also on how consolidating data from many government agencies into a central database makes it easier for bad actors within the government to target Americans and violate their civil liberties. These concerns led to a 20-year delay in enforcing Real ID standards nationally, and as a USA Today article from 2025 warns, once Americans' data is stored in a central repository for one purpose, mission creep is likely. If the centralized database is used to make student loan applications and income tax processing more efficient, what's to stop law enforcement from accessing it to identify potential criminals? Over the past two decades, criticism of the Real ID Act has come from across the political spectrum, with many people and organizations on both the left and right decrying it as a serious threat to privacy and civil liberties.

Many of these concerns about the Real ID Act were never realized, but they have been renewed by Executive Order #14143, signed by Donald Trump on March 20, 2025. It directs agencies to share government data with one another except when it is classified for national security purposes. The executive order does not include any provisions to protect the privacy of individuals.

Although Trump has not commented on how this data sharing will be achieved, the Trump Administration has hired a company called Palantir to create a central registry of data, which would include a national citizen database. Recent reporting describes a database with wide-ranging information about every American that is generally private:

Foundry's capabilities in data organization and analysis could potentially enable the merging of information from various agencies, thereby creating detailed profiles of American citizens. The Trump administration has attempted to access extensive citizen data from government databases, including bank details, student debt, medical claims, and disability status.

Palantir does not gather data on their own, but they do provide tools to analyze large repositories of data, make inferences about the data, and provide easy-to-use reports. There are serious concerns about the lack of transparency about what data is being integrated into this repository, how it will be used, the potential for tracking people in various segments of the population such as immigrants, and the ability to use this data to target and harass political opponents. Concerns about how Trump's national citizen database will be used echo fears raised from across the political spectrum about the Real ID Act, except that they are apparently now quite close to becoming reality.

Additional reading:


Original Submission

posted by janrinok on Thursday June 05, @01:42PM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

German motorists likely felt disheartened at the sight of all the stop signs on Google Maps [last] Thursday. The Guardian reports that major roads in western, northern, south-western and central parts of the country were shown as closed. Even parts of Belgium and the Netherlands appeared to have ground to a halt.

The situation was exacerbated by the incident taking place at the start of a four-day break for the Ascension holiday, when many Germans were travelling. It led to a huge number of Google Maps users heading for alternative routes to avoid the non-existent closures. Somewhat ironically, this caused huge jams and delays on these smaller roads.

Drivers not relying on Google Maps – and any Google users who decided to check another service or the news – didn't have to deal with these problems. Apple Maps, Waze, and the traffic reports all showed that everything was moving freely. The major highways were likely quieter than usual as so many Google Maps users were avoiding them.

The apparent mass closure of so many roads caused panic among those who believed Google Maps' warning. Some thought there had been a terrorist attack or state-sponsored hack, while others speculated about a natural disaster.

When asked about the glitch, which lasted around two hours, Google said the company wouldn't comment on the specific case. It added that Google Maps draws information from three key sources: individual users, public sources such as transportation authorities, and a mix of third-party providers.

Ars Technica contacted Google to ask about the cause of the problem. A spokesperson said the company "investigated a technical issue that temporarily showed inaccurate road closures on the map" and has "since removed them."

With Google Maps drawing information from third parties, the issue could partly have been related to the German Automobile Club's warning that there may be heavy traffic at the start of the holiday. Google also added AI features to Maps recently, and we all know how reliable they can be.

There have been plenty of other incidents in which Google Maps got things very wrong. Germany was cursing the service again earlier this month when it showed highway tunnels in part of the country as closed when they were open.

In 2023, Google was sued by the family of a North Carolina man who drove his car off a collapsed bridge as he followed directions given by Google Maps. The case is ongoing.


Original Submission

posted by hubie on Thursday June 05, @09:00AM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

Fuel cells powered with the metal could provide a new source of electric power that's far more energy-dense than lithium-ion batteries.

A new type of fuel cell that runs on sodium metal could one day help clean up sectors where it’s difficult to replace fossil fuels, like rail, regional aviation, and short-distance shipping. The device represents a departure from technologies like lithium-based batteries and is more similar conceptually to hydrogen fuel cell systems. 

The sodium-air fuel cell was designed by a team led by Yet-Ming Chiang, a professor of materials science and engineering at MIT. It has a higher energy density than lithium-ion batteries and doesn’t require the super-cold temperatures or high pressures that hydrogen does, making it potentially more practical for transport. “I’m interested in sodium metal as an energy carrier of the future,” Chiang says.  

The device’s design, published today in Joule, is related to the technology behind one of Chiang’s companies, Form Energy, which is building iron-air batteries for large energy storage installations like those that could help store wind and solar power on the grid. Form’s batteries rely on water, iron, and air.

One technical challenge for metal-air batteries has historically been reversibility. A battery’s chemical reactions must be easily reversed so that in one direction they generate electricity, discharging the battery, and in the other electricity goes into the cell and the reverse reactions happen, charging it up.

When a battery’s reactions produce a very stable product, it can be difficult to recharge the battery without losing capacity. To get around this problem, the team at Form had discussions about whether their batteries could be refuelable rather than rechargeable, Chiang says. The idea was that rather than reversing the reactions, they could simply run the system in one direction, add more starting material, and repeat. 

[...] Chiang and his colleagues set out to build a fuel cell that runs on liquid sodium, which could have a much higher energy density than existing commercial technologies, so it would be small and light enough to be used for things like regional airplanes or short-distance shipping.

The research team built small test cells to try out the concept and ran them to show that they could use the sodium-metal-based system to generate electricity. Since sodium becomes liquid at about 98 °C (208 °F), the cells operated at moderate temperatures of between 110 °C and 130 °C (230 °F and 266 °F), which could be practical for use on planes or ships, Chiang says.

From their work with these experimental devices, the researchers estimated that the energy density was about 1,200 watt-hours per kilogram (Wh/kg). That’s much higher than what commercial lithium-ion batteries can reach today (around 300 Wh/kg). Hydrogen fuel cells can achieve high energy density, but that requires the hydrogen to be stored at high pressures and often ultra-low temperatures.
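The practical significance of that 4x gap in energy density is easiest to see as storage mass for a fixed energy budget. A back-of-the-envelope sketch using the figures quoted above – the 500 kWh energy budget is an arbitrary illustrative number for a short-haul mission, not a figure from the paper:

```python
# Compare storage mass for sodium-air (~1,200 Wh/kg, estimated) vs
# commercial lithium-ion (~300 Wh/kg), per the article's figures.

SODIUM_AIR_WH_PER_KG = 1200
LITHIUM_ION_WH_PER_KG = 300

def storage_mass_kg(energy_kwh: float, density_wh_per_kg: float) -> float:
    """Mass of storage needed to hold a given amount of energy."""
    return energy_kwh * 1000 / density_wh_per_kg

budget_kwh = 500  # hypothetical short-haul energy requirement
na = storage_mass_kg(budget_kwh, SODIUM_AIR_WH_PER_KG)
li = storage_mass_kg(budget_kwh, LITHIUM_ION_WH_PER_KG)
print(f"sodium-air: {na:.0f} kg, lithium-ion: {li:.0f} kg")
# roughly 417 kg vs 1,667 kg for the same 500 kWh
```

For aircraft, where every kilogram of storage displaces payload, a factor-of-four reduction in mass is the difference between viable and not.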

[...] There are economic factors working in favor of sodium-based systems, though it would take some work to build up the necessary supply chains. Today, sodium metal isn’t produced at very high volumes. However, it can be made from sodium chloride (table salt), which is incredibly cheap. And it was produced more abundantly in the past, since it was used in the process of making leaded gasoline. So there’s a precedent for a larger supply chain, and it’s possible that scaling up production of sodium metal would make it cheap enough to use in fuel cell systems, Chiang says.

[...] "If people don't find it crazy, I'll be rather disappointed," Chiang says. "Because if an idea doesn't sound crazy at the beginning, it probably isn't as revolutionary as you think. Fortunately, most people think I'm crazy on this one."

Journal Reference: Sugano, Karen et al., Sodium-air fuel cell for high energy density and low-cost electric power, Joule, Volume 0, Issue 0, 101962


Original Submission
