"We're excited that Northrop is ready to deliver this incredibly beneficial increase in capacity" :
What happens when you use a SpaceX Falcon 9 rocket to launch Northrop Grumman's Cygnus supply ship? A record-setting resupply mission to the International Space Station.
The first flight of Northrop's upgraded Cygnus spacecraft, called Cygnus XL, is on its way to the international research lab after launching Sunday evening from Cape Canaveral Space Force Station, Florida. This mission, known as NG-23, is set to arrive at the ISS early Wednesday with 10,827 pounds (4,911 kilograms) of cargo to sustain the lab and its seven-person crew.
By a sizable margin, this is the heaviest cargo load transported to the ISS by a commercial resupply mission. NASA astronaut Jonny Kim will use the space station's Canadian-built robotic arm to capture the cargo ship on Wednesday, then place it on an attachment port for crew members to open hatches and start unpacking the goodies inside.
The Cygnus XL spacecraft looks a lot like Northrop's previous missions to the station. It has a service module manufactured at the company's factory in Northern Virginia. This segment of the spacecraft provides power, propulsion, and other necessities to keep Cygnus operating in orbit.
The most prominent features of the Cygnus cargo freighter are its circular, fan-like solar arrays and an aluminum cylinder called the pressurized cargo module that bears some resemblance to a keg of beer. This is the element that distinguishes the Cygnus XL from earlier versions of the Cygnus supply ship.
The cargo module is 5.2 feet (1.6 meters) longer on the Cygnus XL. The full spacecraft is roughly the size of two Apollo command modules, according to Ryan Tintner, vice president of civil space systems at Northrop Grumman. Put another way, the volume of the cargo section is equivalent to two-and-a-half minivans.
"The most notable thing on this mission is we are debuting the Cygnus XL configuration of the spacecraft," Tintner said. "It's got 33 percent more capacity than the prior Cygnus spacecraft had. Obviously, more may sound like better, but it's really critical because we can deliver significantly more science, as well as we're able to deliver a lot more cargo per launch, really trying to drive down the cost per kilogram to NASA."
[...] Northrop Grumman would have preferred to launch this mission on its own rocket, the Antares, but that's no longer possible. Russia's invasion of Ukraine in 2022 pitted two of the most critical suppliers for Northrop's Antares rocket against one another. Shipments of Russian-made engines and Ukrainian-built boosters to Northrop dried up after the outbreak of war, and the last Antares rocket using critical foreign parts took off in August 2023.
[...] Now, Northrop is partnering with Firefly Aerospace on a new rocket, the Antares 330, using a new US-made booster stage and engines. It won't be ready to fly until late 2026, at the earliest, somewhat later than Northrop officials originally hoped. Tintner confirmed Friday that Northrop has purchased a fourth Falcon 9 launch from SpaceX for the next Cygnus cargo mission in the first half of next year, in a bid to bridge the gap until the debut of the Antares 330 rocket.
[...] But there's a notable benefit to launching Cygnus missions on SpaceX's workhorse rocket. The Falcon 9 can loft heavier payloads than the old version of the Antares rocket, allowing NASA to take full advantage of the additional volume on the Cygnus XL. The combination of the Falcon 9 and Cygnus XL can deliver more cargo to the ISS than SpaceX's own Dragon cargo ship.
China rules that Nvidia violated its antitrust laws:
A Chinese regulator has found Nvidia violated the country's antitrust law, in a preliminary finding against the world's most valuable chipmaker.
Nvidia had failed to fully comply with provisions outlined when it acquired Mellanox Technologies, an Israeli-US supplier of networking products, China's State Administration for Market Regulation (SAMR) said on Monday. Beijing conditionally approved the US chipmaker's acquisition of Mellanox in 2020.
Monday's statement came as US and Chinese officials prepared for more talks in Madrid over trade, with a tariff truce between the world's two largest economies set to expire in November.
SAMR reached its conclusion weeks before Monday's announcement, according to two people with knowledge of the matter, who added that the regulator released the statement now to give China greater leverage in the trade talks.
The regulator started the anti-monopoly investigation in December, a week after the US unveiled tougher export controls on advanced high-bandwidth memory chips and chipmaking equipment to the country.
[...] The preliminary findings against the chipmaker could result in fines of between 1 percent and 10 percent of the company's previous year's sales. Regulators can also force the company to change business practices that are considered in violation of antitrust laws.
Over recent years, Nvidia has become a global market leader in artificial intelligence chips, with its graphics processing units becoming crucial in developing leading AI models.
That has also meant that Nvidia has increasingly been caught up in the trade tensions between Washington and Beijing.
[...] Nvidia chief Jensen Huang, who has made frequent visits to China in a signal of his commitment to a crucial overseas market, has previously criticized the US curbs as a "failure" that has spurred Chinese rivals to accelerate development of their own products.
Chinese giant Tencent announces domestic AI chip push:
Tencent says it has "fully adapted" its AI computing infrastructure to support Chinese-designed processors, in a move that shifts one of the country's biggest buyers of Nvidia chips closer to home-grown hardware, as reported by SCMP. The announcement came at the company's Global Digital Ecosystem Summit on September 16, where Tencent Cloud president Qiu Yuepeng confirmed the firm is now using "mainstream domestic chips" and building infrastructure around them.
While Tencent stopped short of naming the specific silicon in use, the phrasing suggests that production deployments are involved, not just experimentation. Senior executive vice-president Dowson Tong Tao-sang added that the company is working with "multiple domestic chip companies" to apply "the most suitable hardware" to each scenario, and that long-term strategic investment will focus on optimizing hardware-software co-design to lower the cost of compute.
Tencent's announcement comes just a day after China's State Administration for Market Regulation said Nvidia had violated antitrust rules and the terms under which its acquisition of Mellanox Technologies was conditionally approved in 2020. The regulator did not elaborate but confirmed the investigation remains active. This adds another layer of uncertainty for U.S. firms selling into China's cloud and AI sectors, which are already under tight export restrictions from Washington.
For Tencent, the company now has to factor in both geopolitics and supply continuity into its decision-making. Company president Martin Lau Chi-ping said in August that Tencent already has enough training chips in stock and "many options" for inference, suggesting that the firm has already diversified procurement. But adapting software to support non-Nvidia architectures is a deeper shift that Tencent appears to be leaning into, mirroring earlier signals from AI start-up DeepSeek, which said in August its V3.1 model was tuned for the next wave of domestic accelerators.
The most likely candidate for those deployments is Huawei's Ascend platform, which has already been adopted at scale by ByteDance and is supported by an increasingly mature stack built around the MindSpore framework. But whether Ascend or other domestic chips can sustain large-scale training remains an open question, especially as U.S. officials estimate that Huawei will only be able to produce around 200,000 AI chips next year.
There's no such thing as free laundry:
An unknown hacker has broken into smart washing machines that accept digital payments, leaving over a thousand students without laundry service in Amsterdam. According to Dutch publication Folia, the hacker(s) disabled the payment system on the appliances, so students based in the Spinozacampus housing complex could get their clothes cleaned for free. However, this did not last long, as Duwo, the management company behind the service, didn't want to get stuck with the unpaid laundry bills.
"Because we purchase the machines ourselves, we need the income to be able to continue offering laundry services to our residents at affordable prices," its spokesperson told Folia. While it might seem like a small amount for a company to shoulder the cost of laundry, it could soon add up if it's going to shoulder the cost of providing clean clothes to Spinozacampus' 1,250 residents.
The hack was first discovered in mid-July, but it wasn't until recently that the company disabled the machines. Although there are still 10 other analog washing machines students can use, Folia reports that these are almost always out of order, with one student claiming that only one machine works for all the students. This has even got them worried about the risk of a lice outbreak because of the limited availability of laundry machines, digital or otherwise. Thankfully, there's another residence building a little over 200 meters or 650 feet away that has more washing machines, allowing for shorter queues.
In the meantime, Duwo is slowly switching back to non-digital appliances, with the company expecting to receive five analog washing machines in a few days. It's also reported that other buildings and housing associations are moving away from IoT washing machines.
As for the hacker, they could face up to a year in prison if they are caught, with the sentence going up to six years if it is proven that they did it for monetary gain. Nevertheless, ethical hacker Sijmen Ruwhof told Folia that finding the culprit is costly and time-consuming, so it might not be worth it for Duwo to follow up on the case. And although Ruwhof suspects a professional hacker is behind the washing machine attack, he also conceded that there are a lot of bright students on campus who would be capable of executing the breach.
"There are lots of bright minds on that campus who also know how to program. It gives you a huge kick when you hack into a washing machine like that," Sijmen said. He also added, "If I were a student and saw those digital washing machines, as a hacker, I would be getting the itch, too."
A stealthy military radio that hides communications in background noise is extremely difficult to jam or locate, meaning that it could allow drone pilots to operate without detection.
Electronic warfare has entered an intense new phase as drones increasingly dominate the battlefield. In the ongoing war between Russia and Ukraine, both sides use jammers to block drone-control signals. They also trace radio signals to target enemy drone operators with artillery strikes.
Now, US-based start-up Rampart Communications has designed a radio with two levels of protection. Its StrataWave radio encrypts the signal and spreads it across the radio spectrum rather than broadcasting on a single frequency, making its emissions quieter and far harder to detect.
Similar techniques have been used before, but StrataWave goes an extra step. While spreading the signal across the radio spectrum makes it harder to intercept, it doesn't hide the fact that a radio broadcast is taking place. To do that, StrataWave scrambles the entire broadcast to hide the very presence of a radio signal in background noise.
The first level of protection is like writing a letter in code and tearing it into large pieces – even if an adversary can't read your letter, they can at least see you have written one. The second level is more like grinding the letter to dust.
"Without the correct encryption key and algorithm, the signal will appear as noise to any other receiver," says Aaron Correa at Rampart.
[...] Electronic warfare is a game of cat and mouse, with every development met by a new counter. In Ukraine, drones are updated every few weeks to stay ahead of jammers. Rampart says adversaries will effectively be starting from scratch when trying to detect or jam StrataWave.
Thomas Withington, an electronic warfare expert at the Royal United Services Institute (RUSI), a defence think tank in the UK, says this is unlikely to be the final move in the game of radios versus jammers. "Radio frequency engineers will tell you that every new system works brilliantly – until it doesn't," he says.
Withington notes that cognitive radio systems using AI and large amounts of data are getting ever better at finding hidden signals in noise. But it may take a while to crack StrataWave. "This type of system will certainly give you a temporary advantage, and that may be all you need," he says.
Famed developer Poul-Henning Kamp (phk) has posted an update on the status and future of the project currently known as Varnish Cache. And, after 20 years of being a go-to component in WWW infrastructure, it will change its name to The Vinyl Cache Project, with version 8.0.0 being the last under the old name. The software project will be shepherded under the new name by a Danish association formed for that specific purpose.
We will instead form a voluntary association, a "Forening", under the laws of Denmark, with bylaws that set out what the goal is (develop, maintain and distribute the software), who gets to make the decisions (a governing board appointed by the members), who can become members (anybody but subject to approval by the members) and that the association cannot ever hold or handle any money.
The commented bylaws of the association will be ratified by the founders and made public this autumn, and the first general assembly will be on Monday February 23rd 2026 - hopefully with many membership applications to approve - more about that when we publish the bylaws.
We will also, at the same time, reluctantly change the name of the project.
Varnish Cache is a very fast web application accelerator, or caching HTTP reverse proxy. It runs in front of web servers and the output of the server is cached there, subject to specific caching criteria, to save overloading the back end.
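The core job of a caching reverse proxy like Varnish can be reduced to: answer repeat requests from memory instead of re-querying the backend, evicting entries once they are stale. The sketch below is a deliberately tiny illustration of that TTL-based idea (all names hypothetical; it bears no resemblance to Varnish's actual engine or its VCL configuration language):

```python
import time

class MiniCache:
    """Toy TTL cache illustrating what a caching HTTP reverse proxy does:
    serve repeat requests from memory so the backend isn't overloaded."""

    def __init__(self, ttl):
        self.ttl = ttl            # seconds an entry stays fresh
        self.store = {}           # url -> (body, fetch_time)
        self.backend_hits = 0     # how often we had to bother the backend

    def fetch_backend(self, url):
        # Stand-in for a real HTTP request to the origin server.
        self.backend_hits += 1
        return f"response for {url}"

    def get(self, url):
        now = time.monotonic()
        entry = self.store.get(url)
        if entry and now - entry[1] < self.ttl:
            return entry[0]                   # cache hit: backend untouched
        body = self.fetch_backend(url)        # cache miss or stale entry
        self.store[url] = (body, now)
        return body
```

Varnish layers much more on top of this: per-request caching policy written in VCL, grace periods for serving stale content, purging, and so on.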
Previously:
(2022) The New Yorker on NTP Software Maintenance
(2022) PHK on Surveillance Which Is Too Cheap to Meter
(2018) Transparency Versus Liability in Hardware
FAA found factory violations, says Boeing sought approval for unairworthy planes:
The Federal Aviation Administration on Friday proposed fines of $3.1 million against Boeing for various safety violations related to the January 2024 door plug blowout and what the FAA called "interference with safety officials' independence."
An FAA statement said the proposed fine covers "safety violations that occurred from September 2023 through February 2024," and is the "maximum statutory civil penalty authority consistent with law." Boeing, which reported $22.7 billion in revenue and a net loss of $612 million last quarter, has 30 days to file a response with the agency.
"The FAA identified hundreds of quality system violations at Boeing's 737 factory in Renton, Washington, and Boeing subcontractor Spirit AeroSystems' 737 factory in Wichita, Kansas. Additionally, Boeing presented two unairworthy aircraft to the FAA for airworthiness certificates and failed to follow its quality system," the FAA said.
The FAA said that a Boeing safety official faced pressure to sign off on an aircraft that did not meet standards. The employee is part of the Boeing ODA [Organization Designation Authorization] unit that performs functions the FAA delegated to the company.
The FAA said it "found that a non-ODA Boeing employee pressured a Boeing ODA unit member to sign off on a Boeing 737-MAX airplane so Boeing could meet its delivery schedule, even though the ODA member determined the aircraft did not comply with applicable standards." Boeing's ODA process has faced criticism for years. A 2021 Inspector General report found that "the Boeing ODA process and structure do not ensure ODA personnel are adequately independent."
[...] Separately, Boeing in July 2024 agreed to plead guilty to a criminal charge of defrauding the FAA and pay a $243.6 million fine for violating a 2021 deferred prosecution agreement with the government. However, the Boeing plea deal was rejected by a federal judge in December 2024.
The 2021 deferred prosecution agreement was spurred by 737 Max crashes in 2018 and 2019 in Indonesia and Ethiopia that killed a combined 346 people. In May 2024, the Justice Department said it determined that Boeing violated the deferred prosecution agreement "by failing to design, implement, and enforce a compliance and ethics program to prevent and detect violations of the US fraud laws throughout its operations."
With the Trump administration reconsidering Biden-era decisions, Boeing reportedly asked the government for more lenient treatment. In May, the DOJ announced a deal with Boeing in which the company would avoid prosecution. The non-prosecution agreement says Boeing must pay the $243.6 million fine and invest at least $455 million in its compliance and safety programs, the same terms agreed to during the Biden administration.
Although Boeing "had inadequate anti-fraud controls and an inadequate antifraud compliance program," it took steps "to enhance its compliance program through structural and leadership changes, including but not limited to steps to enhance the independence, capability, and effectiveness of its compliance program," the agreement said.
The government moved to dismiss the case based on the agreement. The motion is still pending, and families of the crash victims urged the court to reject it.
When it comes to US AI rules, there's too many cooks:
The US government wants AI in every corner of government, but the unstoppable force of new tech is running into the immovable object of bureaucracy - a growing mass of AI rules.
It's been well established in the first year of Trump's second presidency that AI is a priority for the administration. Even prior to Trump taking office, government generative AI use cases had surged, growing ninefold between 2023 and 2024. In recent months, agencies have cut numerous deals with most leading AI companies under the General Services Administration's Trump-driven OneGov contracting strategy. These agreements give federal agencies access to leading AI models for $1 or less per agency for the first year, suggesting that the Trump team is keen on acting fast.
But given the nature of government, it's not surprising to hear from the auditors at the Government Accountability Office (GAO) that agencies face a large, fragmented set of AI requirements.
According to a report published on Tuesday, the GAO identified 94 separate "AI-related government-wide requirements" that agencies have to adhere to - and those rules aren't centralized under a single management body.
AI rules and requirements come from ten separate executive-branch oversight and advisory groups, including the Office of Management and Budget, Office of Science and Technology Policy, Department of Commerce, GSA, and National Science Foundation. Those groups help set and police requirements drawn from five AI-related laws, six executive orders, and three guidance documents, the GAO said.
In short, there are a lot of rules and regulations surrounding federal government AI use to account for, making for a tricky - and possibly shifting - deployment path for agencies to navigate.
The GAO declined to take a stance on whether there were too many AI regulations for federal agencies to account for, or whether a central AI regulation for federal use is necessary to streamline operations, but its own prior work suggests that there's a rather severe AI regulatory burden placed on agencies.
It noted that agencies were struggling with GenAI deployment. Rather than chalk it up to chasing use cases that don't exist, those agencies pointed to familiar hurdles - a lack of computing resources, concerns over bias and hallucinations, and - you guessed it - too many rules.
Of the 12 agencies the GAO spoke with about AI implementation for that report, 10 said that existing federal AI policy either didn't account for all the obstacles an agency could face when implementing AI or, conversely, that federal AI policy "could present obstacles to the adoption of generative AI."
"AI is rapidly growing and holds substantial promise for improving the government," GAO's director of IT and cybersecurity Kevin Walsh told The Register in an email. "But AI technologies also pose risks."
Walsh told us that AI can substantially improve operations at federal agencies, but can also be misused to enable cyberattacks, commit fraud, and deanonymize data.
"The rules that govern AI will be critical in our attempts to ensure AI is used for good," Walsh told us while declining to take a position on whether the current fragmented, cumbersome AI regulatory regime was an appropriate one.
That said, the GAO has been making recommendations to improve federal AI oversight as far back as four years ago, when it first published a "framework to help managers ensure accountability and the responsible use of AI in government programs and processes," according to its latest report. In 2023, it issued a second report on how federal agencies were complying with AI rules, but few of the compliance recommendations it made have been acted upon as of yet.
"We made 35 recommendations to 19 agencies ... to fully implement federal AI requirements," the GAO said. "As of July 2025, three agencies had implemented four recommendations."
Google cut managers by 35%: Inside Pichai's layoffs overhaul:
Google has cut 35% of its managers, focusing on those leading teams with fewer than three people. The move, announced during an all-hands meeting on August 27, 2025, has jolted workers across the globe. The management layoffs are part of CEO Sundar Pichai's push for efficiency, reshaping the company's hierarchy amid ongoing restructuring and a drive toward leaner operations.
The job cuts will also help the giant double down on AI and cost efficiency.
The recent Google layoffs target roles seen as unnecessary, particularly managers overseeing small teams. Brian Welle, Google's VP of People Analytics and Performance, shared the details: "We now have 35% fewer managers, with fewer direct reports than a year ago."
Welle added that Google aims to reduce its leadership ranks (managers, directors, and vice presidents) to a smaller share of the workforce over time. So why did Google fire managers?
The layoffs haven't simply eliminated manager roles. Many affected managers have been shifted to individual contributor positions, retaining their expertise within the company.
Pichai has been clear about the reasoning behind these Google layoffs in 2025. "We need to be more efficient as we grow, so we don't just throw more people at every problem," he said during the meeting. The CEO's approach marks a significant shift from Google's past, where rapid hiring fueled growth.
These Google layoffs build on earlier job cuts. This includes the 6% workforce reductions at Google in 2023, and targeted layoffs in teams like Android and Pixel. With a nod to rival Meta's policies, Pichai jokingly remarked, "Maybe I should try running the company with all of Meta's policies," but clarified that Google's existing leave options are sufficient.
To soften the blow, Google introduced a Voluntary Exit Program (VEP) in January 2025 for U.S. employees in areas like search, marketing, hardware, and people operations. Fiona Cicconi, Google's chief people officer, called the VEP a success. "It's been quite effective," she said, with 3% to 5% of eligible employees taking the offer, often for personal reasons like family or breaks from work.
Pichai praised the program's flexibility, "I'm glad it's worked out well, it gives people agency."
Google layoffs in the past year aimed to make decision-making faster and foster innovation by reducing management layers. This move, however, comes with a slew of risks. Google firing small team managers could weaken mentorship for junior employees or overload remaining managers.
Alphabet's CFO, Anat Ashkenazi, hinted last October that cost-cutting needs to go "a little further," suggesting more changes may come soon.
Employee reactions on Google firing managers overseeing small teams have been mixed. One anonymous worker told The HR Digest that the Google layoffs simply show the fragility of middle-class jobs in the era of AI.
Google's manager cuts aren't the first time a company has prioritized efficiency over expansion. For other companies, the layoffs offer more than a case study in balancing agility and stability. Pichai-led layoffs at Google may set a new standard for Silicon Valley giants, but they also raise questions about employee morale.
Meta made 2023 its "Year of Efficiency". German pharmaceutical giant Bayer slashed layers of management, blaming hierarchy for corporate sluggishness. And Elon Musk? He's casually swinging the axe across the US government under the banner of his Department of Government Efficiency (DOGE).
The message is clear: middle management is out. Companies and governments are convinced they can run leaner, faster, and better without it.
It sounds bold. It sounds modern. It sounds like progress.
Except we've seen this reckless cost-cutting experiment before.
Since the 1980s, companies have recycled the same tired playbook: slash middle management under the banner of "rightsizing", "downsizing", or "restructuring".
It's the corporate equivalent of a fad diet – dramatic, headline-grabbing, and usually disastrous in the long term. But in tough economic times, chief financial officers start eyeing the biggest expense on the balance sheet: labour costs.
And middle management? An easy target.
But here's the problem: when you strip out middle management, you don't get a high-performance, self-sufficient workforce – you get chaos.
Google learnt this the hard way. So did Zappos.
Google's Project Oxygen initially removed middle managers – only to bring them back. Zappos' Holacracy experiment, which promised "no job titles, decentralised self-management", was quietly rolled back – it didn't work either. Why? Because people flounder without structure. Without regular feedback, motivation, and career development, employees weren't empowered – they felt lost and quickly disengaged.
And yet, here we go again. According to Live Data Technologies, layoffs in the US are hitting middle management harder than ever. In 2023, nearly a third of all layoffs were managers. In 2024, that number surged to almost half.
But today's layoffs are different from past waves – because this time, AI is in the mix.
AI is already replacing some traditional middle management tasks – administration, workflow management, workload balancing, resource allocation, and reporting. If that's all your middle managers do, let's be blunt: bring in the robots. But if you think that's all middle managers do, or can do, then you've missed the lessons of those who went before you.
The best middle managers – the ones I call B-suite leaders – aren't just pushing paper. They're driving engagement, fostering development, and making sure company strategy actually turns into outcomes.
B-suite leadership is about three core capabilities:
- Controlling the pace of work.
- Using the space to think.
- Making the case with influence.
Right now, most middle managers are bogged down in controlling the pace of work at the expense of everything else. But their real, high-impact responsibilities – motivating and developing people, giving feedback, organising collaborations, resolving conflicts, thinking strategically, influencing decisions, and designing solutions – these are what keep businesses running.
Let AI take over the admin. But cut middle managers entirely, and you cut out the leadership that AI can't replace.
The lesson from Google and Zappos? You can do without bad middle managers, but you cannot do without good ones. And understanding that distinction is crucial.
Yes, the temptation to cut middle management is strong, especially in tough times. But instead of falling for the siren call of the "Great Flattening" or "Great Unbossing", leaders should focus on rebossing – building the next generation of B-suite leaders who can do what AI and automation cannot.
The future isn't about unbossing. It's about rebossing – developing the human leadership that technology will never replace.
Real-Time Observation of Magnet Switching in a Single Atom:
Nuclear spins stay magnetically stable because they're great at ignoring their surroundings. But to read or change their state, they need just a little interaction with the outside world. That's why knowing and controlling their atomic neighborhood is crucial for quantum tech.
Until now, we could read single nuclear spins, but their environments were a mystery. Enter scanning tunneling microscopy (STM) plus electron spin resonance (ESR): a powerful duo that lets scientists zoom in and listen to nuclear spins at the atomic level, thanks to hyperfine interactions.
In a breakthrough from Delft University, scientists used an STM to spy on a single titanium atom's nuclear spin, like catching its magnetic heartbeat in real time. By tapping into the atom's electrons, they watched the spin flip back and forth, live.
The twist? That tiny spin stayed stable for several seconds, an eternity in quantum terms. This opens the door to better control over atomic-scale bits, nudging us closer to ultra-precise quantum technologies.
A new way to control atomic nuclei as "qubits"
A scanning tunneling microscope (STM) is like a super-sharp needle that can "feel" individual atoms on a surface and create images with incredible detail. But what it actually senses are the electrons swirling around the atom's nucleus.
Both electrons and the nucleus act like tiny magnets, each with a property called spin. Scientists figured out how to detect the spin of a single electron using an STM about ten years ago.
Now, a team at TU Delft, led by Professor Sander Otte, asked a bold question: Can we go deeper and read the spin of the nucleus itself, in real time?
Otte explains, "The general idea had been demonstrated a few years ago, making use of the so-called hyperfine interaction between electron and nuclear spins. However, these early measurements were too slow to capture the motion of the nuclear spin over time."
Evert Stolte, first author of the study, said, "We were able to show that this switching corresponds to the nuclear spin flipping from one quantum state to another, and back again."
They found that the nuclear spin in the atom stays stable for about five seconds before flipping, much longer than most quantum systems. In comparison, the electron spin in the same atom lasts only about 100 nanoseconds, which is millions of times shorter.
Because the researchers could measure the nuclear spin faster than it changed, and without disturbing it, they achieved what's called single-shot readout. This means they could catch the spin's state in one go, like snapping a photo before it moves.
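The "measure faster than it flips" condition behind single-shot readout can be illustrated with a toy random-telegraph simulation: if the spin's mean lifetime is around five seconds and each readout takes around ten milliseconds, nearly every sample lands well inside a stable dwell, so one measurement reveals the state. The numbers and names below are illustrative choices, not the paper's actual experimental parameters:

```python
import random

def simulate_telegraph(lifetime_s, sample_dt_s, n_samples, seed=0):
    """Two-state spin that flips with probability dt/T1 at each sample step,
    producing a random telegraph signal like a repeatedly read-out spin."""
    rng = random.Random(seed)
    p_flip = sample_dt_s / lifetime_s   # chance of a flip between samples
    state, trace = 0, []
    for _ in range(n_samples):
        if rng.random() < p_flip:
            state ^= 1                  # spin flips to the other state
        trace.append(state)
    return trace

# ~100 seconds of simulated data at a 10 ms sampling interval:
trace = simulate_telegraph(lifetime_s=5.0, sample_dt_s=0.01, n_samples=10_000)
flips = sum(a != b for a, b in zip(trace, trace[1:]))
# Only a handful of flips occur among thousands of samples, so each
# individual sample faithfully reports the current state: single-shot readout.
```

Reverse the ratio (lifetime shorter than the sampling interval, as with the ~100 ns electron spin) and the trace flips almost every sample, so no single measurement could be trusted.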
This breakthrough makes it possible to control nuclear spins more precisely, opening up new experiments. In the long run, it could help build powerful tools for quantum simulation and atomic-scale sensing.
Journal Reference:
Stolte, Evert W., Lee, Jinwon, Vennema, Hester G., et al. Single-shot readout of the nuclear spin of an on-surface atom [open], Nature Communications (DOI: 10.1038/s41467-025-63232-5)
Pentagon begins deploying new satellite network to link sensors with shooters:
The first 21 satellites in a constellation that could become a cornerstone for the Pentagon's Golden Dome missile defense shield successfully launched from California Wednesday aboard a SpaceX Falcon 9 rocket.
The Falcon 9 took off from Vandenberg Space Force Base, California, at 7:12 am PDT (10:12 am EDT; 14:12 UTC) and flew south over the Pacific Ocean toward a polar orbit before releasing the 21 military-owned satellites to begin several weeks of activations and checkouts.
These 21 satellites will boost themselves to a final orbit at an altitude of roughly 600 miles (1,000 kilometers). The Pentagon plans to launch 133 more satellites over the next nine months to complete the build-out of the Space Development Agency's first-generation, or Tranche 1, constellation of missile tracking and data relay satellites.
"We had a great launch today for the Space Development Agency, putting this array of space vehicles into orbit in support of their revolutionary new architecture," said Col. Ryan Hiserote, system program director for the Space Force's assured access to space launch execution division.
Military officials have worked for six years to reach this moment. The Space Development Agency (SDA) was established during the first Trump administration, which made plans for an initial set of demonstration satellites that launched a couple of years ago. In 2022, the Pentagon awarded contracts for the first 154 operational spacecraft. The first batch of 21 data relay satellites built by Colorado-based York Space Systems is what went up Wednesday.
"Back in 2019, when the SDA was stood up, it was to do two things. One was to make sure that we can do beyond line of sight targeting, and the other was to pace the threat, the emerging threat, in the missile warning and missile tracking domain. That's what the focus has been," said GP Sandhoo, the SDA's acting director.
Historically, the military communications and missile warning networks have used a handful of large, expensive satellites in geosynchronous orbit some 22,000 miles (36,000 kilometers) above the Earth. This architecture was devised during the Cold War, and is optimized for nuclear conflict and intercontinental ballistic missiles.
For example, the military's ultra-hardened Advanced Extremely High Frequency satellites in geosynchronous orbit are designed to operate through an electromagnetic pulse and nuclear scintillation. The Space Force's missile warning satellites are also in geosynchronous orbit, with infrared sensors tuned to detect the heat plume of a missile launch.
The problem? Those satellites cost more than $1 billion a pop. They're also vulnerable to attack from a foreign adversary. Pentagon officials say the SDA's satellite constellation, officially called the Proliferated Warfighter Space Architecture, is tailored to detect and track more modern threats, such as smaller missiles and hypersonic weapons carrying conventional warheads. It's easier for these missiles to evade the eyes of older early warning satellites.
What's more, the SDA's fleet in low-Earth orbit will have numerous satellites. Losing one or several to an attack would not degrade the constellation's overall capability. The SDA's new relay satellites cost between $14 million and $15 million each, according to Sandhoo, and the first tranche of 154 operational satellites totals approximately $3.1 billion.
These satellites will not only detect and track ballistic and hypersonic missile launches. They will also transmit signals between US forces using an existing encrypted tactical data link network known as Link 16. This UHF system is used by NATO and other US allies to allow military aircraft, ships, and land forces to share tactical information through text messages, pictures, data, and voice communication in near real-time, according to the SDA's website.
Up to now, Link 16 radios have been ubiquitous on fighter jets, helicopters, naval vessels, and missile batteries. But they had a severe limitation: Link 16 could only close a radio link with a clear line of sight. The Space Development Agency's satellites will change that, providing direct-to-weapon connectivity from sensors to shooters on Earth's surface, in the air, and in space.
The relay satellites, which the SDA calls the transport layer, are also equipped with Ka-band and laser communication terminals for higher bandwidth connectivity.
"What the transport layer does is it extends beyond the line of sight," Sandhoo said. "Now, you're able to talk not only to within (a) couple of miles with your Link 16 radios, (but) we can use space to, let's say, go from Hawaii out to Guam using those tactical radios, using a space layer."
Another batch of SDA relay satellites will launch next month, and more will head to space in November. In all, it will take 10 launches to fully deploy the SDA's Tranche 1 constellation. Six of those missions will carry data relay satellites, and four will carry satellites with sensors to detect and track missile launches. The Pentagon selected several contractors to build the satellites, so the military is not reliant on a single company. The builders of the SDA's operational satellites include York, Lockheed Martin, Northrop Grumman, and L3Harris.
"We will increase coverage as we get the rest of those launches on orbit," said Michael Eppolito, the SDA's acting deputy director.
The satellites will connect with one another using inter-satellite laser links, creating a mesh network with sufficient range to provide regional communications, missile warning, and targeting coverage over the Western Pacific beginning in 2027. US Indo-Pacific Command, which oversees military operations in this region, is slated to become the first combatant command to take up use of the SDA's satellite constellation.
This is not incidental. US officials see China as the nation's primary strategic threat, and Indo-Pacific Command would be on the front lines of any future conflict between Chinese and US forces. The SDA has contracts in place for more than 270 second-generation, or Tranche 2, satellites to further expand the network's reach. There's also a third generation in the works, but the Pentagon has paused part of the SDA's Tranche 3 program to evaluate other architectures, including one offered by SpaceX.
Teaching tactical operators to use the new capabilities offered by the SDA's satellite fleet could be just as challenging as building the network itself. To do this, the Pentagon plans to put soldiers, sailors, airmen, and marines through "warfighter immersion" training beginning next year. This training will allow US forces to "get used to using space from this construct," Sandhoo said.
"This is different than how it has been done in the past," Sandhoo said. "This is the first time we'll have a space layer actually fully integrated into our warfighting operations."
The SDA's satellite architecture is a harbinger for what's to come with the Pentagon's Golden Dome system, a missile defense shield for the US homeland proposed by President Donald Trump in an executive order in January. Congress authorized a down payment on Golden Dome in July, the first piece of funding for what the White House says will cost $175 billion over the next three years.
Golden Dome, as currently envisioned, will require thousands of satellites in low-Earth orbit to track missile launches and space-based interceptors to attempt to shoot them down. The Trump administration hasn't said how much of the shield might be deployed by the end of 2028, or what the entire system might eventually cost.
But the capabilities of the SDA's satellites will lay the foundation for any regional or national missile defense shield. Therefore, it seems likely that the military will incorporate the SDA network into Golden Dome, which, at least at first, is likely to consist of technologies already in space or nearing launch. Apart from the Space Development Agency's architecture in low-Earth orbit (LEO), the Space Force was already developing a new generation of missile warning satellites to replace aging platforms in geosynchronous orbit (GEO), plus a fleet of missile warning satellites to fly at a midrange altitude between LEO and GEO.
Air Force Gen. Gregory Guillot, commander of US Northern Command, said in April that Golden Dome "for the first time integrates multiple layers into one system that allows us to detect, track, and defeat multiple types of threats that affect us in different domains.
"So, while a lot of the components and the requirements were there in the past, this is the first time that it's all tied together in one system," he said.
Solar pacifiers: Influence of the planets may subdue solar activity:
Our Sun is about five times less magnetically active than other sunlike stars – effectively a special case. The reason for this could reside in the planets in our solar system, say researchers at the Helmholtz-Zentrum Dresden-Rossendorf (HZDR). In the last ten years, they have developed a model that derives virtually all the Sun's known activity cycles from the cyclical influence of the planets' tidal forces. Now they have also been able to demonstrate that this external synchronization automatically curbs solar activity (DOI: 10.1007/s11207-025-02521-0).
At the moment, the Sun is nearing the peak of activity it reaches roughly every eleven years. That is why we on Earth are observing more polar lights and solar storms, as well as turbulent space weather in general, which affects everything from satellites in space to technological infrastructure on Earth. Despite this, the strongest radiation eruptions from our Sun are 10 to 100 times weaker than those of comparable sunlike stars. This relatively quiet environment could be an important precondition for Earth being habitable. Not least for this reason, solar physicists want to understand what precisely drives solar activity.
It is known that solar activity has many patterns – both shorter and longer periodic fluctuations that range from a few hundred days to several thousand years. But researchers have very different ways of explaining the underlying physical mechanisms. The model developed by the team led by Frank Stefani at HZDR's Institute of Fluid Dynamics views the planets as pacemakers: on this understanding, approximately every eleven years, Venus, Earth and Jupiter focus their combined tidal forces on the Sun. Via a complex physical mechanism, each time they give the Sun's inner magnetic drive a little nudge. In combination with the rosette-shaped orbital motion of the Sun, this leads to overlapping periodic fluctuations of varying lengths – exactly as observed in the Sun.
"All the solar cycles identified are a logical consequence of our model; its explanatory power and internal consistency are really astounding. Each time we have refined our model we have discovered additional correlations with the periods observed," says Stefani. In the work now published, the new piece is the QBO – Quasi-Biennial Oscillation – a fluctuation with a period of roughly two years seen in various aspects of solar activity. The special point here is that in Stefani's model, the QBO can not only be assigned a precise period, but also automatically leads to subdued solar activity.
Up to now, solar data have typically indicated QBO periods of 1.5 to 1.8 years. In earlier work, some researchers had suggested a connection between the QBO and so-called Ground Level Enhancement events. These are sporadic occurrences during which energy-rich solar particles trigger a sudden increase in cosmic radiation on the Earth's surface. "A study conducted in 2018 shows that radiation events measured close to the ground occurred more in the positive phase of an oscillation with a period of 1.73 years. Contrary to the usual assumption that these solar particle eruptions are random phenomena, this observation indicates a fundamental, cyclical process," says Stefani. This is why he and his colleagues revisited the chronology once again. They discovered the greatest correlation for a period of 1.724 years. "This value is remarkably close to the value of 1.723 years which occurs in our model as a completely natural activity cycle," says Stefani. "We assume that it is QBO."
While the Sun's magnetic field oscillates between minimum and maximum over a period of eleven years, the QBO imposes an additional short-period pattern on the field strength. This subdues the field strength overall because the Sun's magnetic field does not maintain its maximal value for as long. A frequency diagram reveals two peaks: one at maximum field strength and another where the QBO swings back. This effect is known as the bimodality of the solar magnetic field. In Stefani's model, the two peaks reduce the average strength of the solar magnetic field – a logical consequence of the QBO.
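The dwell-time argument can be illustrated numerically. In this toy Python sketch (arbitrary amplitudes and a simple multiplicative modulation chosen for illustration, not the HZDR model), superimposing a ~1.7-year oscillation on an eleven-year cycle sharply reduces the fraction of time the field spends near its maximum:

```python
import math

# Toy illustration (not the HZDR model): an eleven-year activity cycle,
# with and without a multiplicative ~1.7-year QBO-like modulation.
# We measure the fraction of time the field strength exceeds 90% of its peak.
def peak_dwell_fraction(qbo_amp, years=1000.0, dt=0.001):
    n = int(years / dt)
    b = [abs(math.sin(2 * math.pi * i * dt / 11.0))
         * (1.0 + qbo_amp * math.sin(2 * math.pi * i * dt / 1.723))
         for i in range(n)]
    b_max = max(b)
    return sum(v > 0.9 * b_max for v in b) / n

print(f"without QBO: {peak_dwell_fraction(0.0):.3f}")
print(f"with QBO:    {peak_dwell_fraction(0.3):.3f}")
```

Because the two periods rarely peak together, the modulated field spends far less time pinned near its maximum, which is the qualitative mechanism behind the subdued activity described above.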
"This effect is so important because the Sun is most active during the highest field strengths. This is when the most intense events occur, with huge geomagnetic storms like the Carrington event of 1859, when polar lights could be seen even in Rome and Havana, and high voltages damaged telegraph lines. If the Sun's magnetic field remains at lower field strengths for a significantly longer period of time, however, this reduces the likelihood of very violent events," Stefani explains.
Publication:
F. Stefani, G. M. Horstmann, G. Mamatsashvili, T. Weier, Adding Further Pieces to the Synchronization Puzzle: QBO, Bimodality, and Phase Jumps, in Solar Physics, 2025 (DOI: 10.1007/s11207-025-02521-0)
New Apple-funded program teaches manufacturing to US firms:
Apple has announced that it is opening its first Apple Manufacturing Academy in Detroit, providing a program of advanced manufacturing skills for US workers.
If you really want to bring manufacturing jobs back to the US, you need to train people rather than impose tariffs. As part of its existing commitment to investing $500 billion in US businesses, Apple is launching an Apple Manufacturing Academy that will open with a two-day program on August 19, 2025.
"We're thrilled to welcome companies from across the country to the Apple Manufacturing Academy starting next month," said Sabih Khan, Apple's new chief operating officer in a statement. "Apple works with suppliers in all 50 states because we know advanced manufacturing is vital to American innovation and leadership."
"With this new programming," he continued, "we're thrilled to help even more businesses implement smart manufacturing so they can unlock amazing opportunities for their companies and our country."
Running in partnership with Michigan State University, the new academy will follow broadly the same structure as existing Developer Academies, such as the one already in Detroit. It will host small and medium-sized businesses from across the US, and teach manufacturing and technology skills including:
- Machine learning and deep learning in manufacturing
- Automation in the product manufacturing industry
- Leveraging data to improve quality
- Applying digital technologies to operations
The sessions will initially consist of in-person workshops with Apple staff. Later in 2025, Apple says a virtual program will be added, covering topics such as project management.
Firms interested in applying can register for the first academy on Michigan State University's official site.
While this academy is newly announced, it's part of the long-standing $500 billion program that Apple is announcing piecemeal. The most recent addition is an investment in Texas-based firm MP Materials to increase Apple's use of US-made rare earth magnets.
The newly developed concept uses liquid uranium to heat rocket propellant:
Engineers from Ohio State University are developing a new way to power rocket engines, using liquid uranium for a faster, more efficient form of nuclear propulsion that could deliver round trips to Mars within a single year.
NASA and its private partners have their eyes set on the Moon and Mars, aiming to establish a regular human presence on distant celestial bodies. The future of space travel depends on building rocket engines that can propel vehicles farther into space and do it faster. Nuclear thermal propulsion is currently at the forefront of new engine technologies aiming to significantly reduce travel time while allowing for heavier payloads.
Nuclear propulsion uses a nuclear reactor to heat a liquid propellant to extremely high temperatures, turning it into a gas that's expelled through a nozzle and used to generate thrust. The newly developed engine concept, called the centrifugal nuclear thermal rocket (CNTR), uses liquid uranium to heat rocket propellant directly. In doing so, the engine promises more efficiency than traditional chemical rockets, as well as other nuclear propulsion engines, according to new research published in Acta Astronautica.
If it proves successful, CNTR could allow future vehicles to travel farther using less fuel. Traditional chemical engines achieve a specific impulse – a measure of how efficiently an engine converts propellant into thrust – of about 450 seconds. Nuclear propulsion engines can reach around 900 seconds, and the CNTR may push that number even higher.
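The payoff of doubling specific impulse follows from the Tsiolkovsky rocket equation. A back-of-the-envelope Python calculation (the 6 km/s delta-v figure is an illustrative round number, not a mission design) shows how the required propellant fraction shrinks:

```python
import math

G0 = 9.80665  # standard gravity, m/s^2

def propellant_fraction(delta_v, isp):
    """Propellant mass as a fraction of initial vehicle mass, from the
    Tsiolkovsky rocket equation: delta_v = Isp * g0 * ln(m0 / mf)."""
    return 1.0 - math.exp(-delta_v / (isp * G0))

dv = 6000.0  # m/s, an illustrative round figure for an interplanetary burn
for isp in (450, 900):
    frac = propellant_fraction(dv, isp)
    print(f"Isp {isp:4d} s -> {frac:.0%} of initial mass is propellant")
```

At Isp 450 s roughly three-quarters of the vehicle's initial mass must be propellant for that burn; at 900 s it drops to about half, freeing the difference for payload or enabling faster trajectories.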
"You could have a safe one-way trip to Mars in six months, for example, as opposed to doing the same mission in a year," Spencer Christian, a PhD student at Ohio State and leader of CNTR's prototype construction, said in a statement. "Depending on how well it works, the prototype CNTR engine is pushing us towards the future."
Beyond faster routes, CNTR could also use different types of propellant, such as ammonia, methane, hydrazine, or propane, that can be found on asteroids or other objects in space.
The concept is still in its infancy, and a few engineering challenges remain before CNTR can fly missions to Mars. Engineers are working to ensure that startup, shutdown, and operation of the engine don't cause instabilities, while also finding ways to minimize the loss of liquid uranium.
"We have a very good understanding of the physics of our design, but there are still technical challenges that we need to overcome," Dean Wang, associate professor of mechanical and aerospace engineering at Ohio State and senior member of the CNTR project, said in a statement. "We need to keep space nuclear propulsion as a consistent priority in the future, so that technology can have time to mature."
The PowerShell script should work with any version of Windows 11:
NTDEV has introduced the Nano11 Builder for Windows 11, a tool that allows Windows 11 testers and tinkerers to pare Microsoft's latest OS down to the bare minimum. With this new release, the developer has significantly pushed the boundaries of their prior Tiny11 releases.
Nano11's extreme pruning of the official installer disk image from Microsoft produces a new ISO "up to 3.5 times smaller" than the original. The standard Windows 11 ISO shown in the Nano11 demo was 7.04GB; after the PowerShell script had done its work, it shrank to a 2.29GB ISO.
Furthermore, the completed Windows 11 install can scrape in as low as 2.8GB if you use Windows 11 LTSC as the source ISO.
Before we say any more about Nano11, please be warned that it is described as "an extreme experimental script designed for creating a quick and dirty development testbed." Its developer suggests it could also be useful for Windows 11 VM (virtual machine) testing, but it isn't designed for installing a compact Windows 11 for your daily workhorse.
[...] Some of the social media postings suggest that, when following in NTDEV's Nano11 footsteps, you will end up with as little as a 2.8GB Windows 11 install footprint. However, this will depend on the 'flavor' of Windows 11 you start with, and there is also a little bit more work to be done to achieve the minimum size.
After installation, the example Nano11 install actually uses up 11.0GB of the 20GB virtual disk in the VM. It is only after NTDEV runs the 'Compact' command on the C: drive using LZX compression and then deletes the virtual memory page file that we see the installation reduced to around the 3.2GB level.
"The nano11 script used on a Win11 LTSC image leads to an even better result! This will be perfect for when you need an ISO that can be installed in like 5 minutes and without any (and I mean it) fluff." – NTDEV, September 9, 2025
Also see: Tiny11 Builder Update Lets Users Strip Copilot and Other Bloat From Windows 11