


When transferring multiple 100+ MB files between computers or devices, I typically use:

  • USB memory stick, SD card, or similar
  • External hard drive
  • Optical media (CD/DVD/Blu-ray)
  • Network app (rsync, scp, etc.)
  • Network file system (nfs, samba, etc.)
  • The "cloud" (Dropbox, Cloud, Google Drive, etc.)
  • Email
  • Other (specify in comments)

[ Results | Polls ]
Comments:71 | Votes:120

posted by jelizondo on Tuesday September 16, @06:52PM   Printer-friendly

Google cut managers by 35%: Inside Pichai's layoffs overhaul:

Google has cut 35% of its managers, focusing on those leading teams with fewer than three people. The move, announced during an all-hands meeting on August 27, 2025, has jolted workers across the globe. The management layoffs are part of CEO Sundar Pichai's efficiency drive, reshaping the tech darling's hierarchy amid ongoing restructuring and a push for leaner operations.

What's interesting is that the job cuts will once again help the giant double down on AI and cost efficiency.

The recent Google layoffs target roles seen as unnecessary, particularly managers overseeing small teams. Brian Welle, Google's VP of People Analytics and Performance, shared the details: "We now have 35% fewer managers, with fewer direct reports than a year ago."

Welle added that Google aims to reduce its leadership ranks, i.e. managers, directors, and vice presidents, to a smaller share of the workforce over time. So, why did Google fire managers?

The ongoing layoffs at Google won't just cut managers' roles. Many affected managers have now been shifted to individual contributor roles, thereby retaining their expertise within the company.

Pichai has been clear about the reasoning behind these Google layoffs in 2025. "We need to be more efficient as we grow, so we don't just throw more people at every problem," he said during the meeting. The CEO's approach marks a significant shift from Google's past, where rapid hiring fueled growth.

These Google layoffs build on earlier job cuts. This includes the 6% workforce reductions at Google in 2023, and targeted layoffs in teams like Android and Pixel. With a nod to rival Meta's policies, Pichai jokingly remarked, "Maybe I should try running the company with all of Meta's policies," but clarified that Google's existing leave options are sufficient.

To soften the blow in the aftermath of the layoffs, the giant introduced a Voluntary Exit Program (VEP) in January 2025 for U.S. employees in areas like search, marketing, hardware, and people operations. Fiona Cicconi, Google's chief people officer, called the VEP a success. "It's been quite effective," she said, with 3% to 5% of eligible employees taking the offer, often for personal reasons like family or breaks from work.

Pichai praised the program's flexibility, "I'm glad it's worked out well, it gives people agency."

Google layoffs in the past year aimed to make decision-making faster and foster innovation by reducing management layers. This move, however, comes with a slew of risks. Google firing small team managers could weaken mentorship for junior employees or overload remaining managers.

Alphabet's CFO, Anat Ashkenazi, hinted last October that cost-cutting needs to go "a little further," suggesting more changes may come soon.

Employee reactions to Google firing managers overseeing small teams have been mixed. One anonymous worker told The HR Digest that the layoffs simply show the fragility of middle-class jobs in the era of AI.

Google's manager firings aren't the first time a company has prioritized efficiency over expansion. For companies, the recent layoffs offer more than a case study on the balance between agility and stability. The Pichai-led layoffs may set a new standard for Silicon Valley giants, but they also raise several questions about employee morale.

Meta made 2023 its "Year of Efficiency". German pharmaceutical giant Bayer slashed layers of management, blaming hierarchy for corporate sluggishness. And Elon Musk? He's casually swinging the axe across the US government under the banner of his Department of Government Efficiency (DOGE).

The message is clear: middle management is out. Companies and governments are convinced they can run leaner, faster, and better without it.

It sounds bold. It sounds modern. It sounds like progress.

Except we've seen this reckless cost-cutting experiment before.

Since the 1980s, companies have recycled the same tired playbook: slash middle management under the banner of "rightsizing", "downsizing", or "restructuring".

It's the corporate equivalent of a fad diet – dramatic, headline-grabbing, and usually disastrous in the long term. But in tough economic times, chief financial officers start eyeing the biggest expense on the balance sheet: labour costs.

And middle management? An easy target.

But here's the problem: when you strip out middle management, you don't get a high-performance, self-sufficient workforce – you get chaos.

Google learnt this the hard way. So did Zappos.

Google's Project Oxygen initially removed middle managers – only to bring them back. Zappos' Holacracy experiment, which promised "no job titles, decentralised self-management", was quietly rolled back – it didn't work either. Why? Because people flounder without structure. Without regular feedback, motivation, and career development, employees weren't empowered – they felt lost and quickly disengaged.

And yet, here we go again. According to Live Data Technologies, layoffs in the US are hitting middle management harder than ever. In 2023, nearly a third of all layoffs were managers. And in 2024? That number surged to almost half.

But today's layoffs are different from past waves – because this time, AI is in the mix.

AI is already replacing some traditional middle management tasks – administration, workflow management, workload balancing, resource allocation, and reporting. If that's all your middle managers do, let's be blunt: bring in the robots. But if you think that's all middle managers do, or can do, then you've missed the lessons of those who went before you.

The best middle managers – the ones I call B-suite leaders – aren't just pushing paper. They're driving engagement, fostering development, and making sure company strategy actually turns into outcomes.

B-suite leadership is about three core capabilities:

  • Controlling the pace of work.
  • Using the space to think.
  • Making the case with influence.

Right now, most middle managers are bogged down in controlling the pace of work at the expense of everything else. But their real, high-impact responsibilities – motivating and developing people, giving feedback, organising collaborations, resolving conflicts, thinking strategically, influencing decisions, and designing solutions – these are what keep businesses running.

Let AI take over the admin. But cut middle managers entirely, and you cut out the leadership that AI can't replace.

The lesson from Google and Zappos? You can do without bad middle managers, but you cannot do without good ones. And understanding that distinction is crucial.

Yes, the temptation to cut middle management is strong, especially in tough times. But instead of falling for the siren call of the "Great Flattening" or "Great Unbossing", leaders should focus on rebossing – building the next generation of B-suite leaders who can do what AI and automation cannot.

The future isn't about unbossing. It's about rebossing – developing the human leadership that technology will never replace.


Original Submission

posted by janrinok on Tuesday September 16, @03:20PM   Printer-friendly

News is breaking that Robert Redford has died at his home at the age of 89.

posted by janrinok on Tuesday September 16, @02:10PM   Printer-friendly

Real-Time Observation of Magnet Switching in a Single Atom:

Nuclear spins stay magnetically stable because they're great at ignoring their surroundings. But to read or change their state, they need just a little interaction with the outside world. That's why knowing and controlling their atomic neighborhood is crucial for quantum tech.

Until now, we could read single nuclear spins, but their environments were a mystery. Enter scanning tunneling microscopy (STM) plus electron spin resonance (ESR): a powerful duo that lets scientists zoom in and listen to nuclear spins at the atomic level, thanks to hyperfine interactions.

In a breakthrough from Delft University, scientists used an STM to spy on a single titanium atom's nuclear spin, like catching its magnetic heartbeat in real time. By tapping into the atom's electrons, they watched the spin flip back and forth, live.

The twist? That tiny spin stayed stable for several seconds, an eternity in quantum terms. This opens the door to better control over atomic-scale bits, nudging us closer to ultra-precise quantum technologies.

A new way to control atomic nuclei as "qubits"

A scanning tunneling microscope (STM) is like a super-sharp needle that can "feel" individual atoms on a surface and create images with incredible detail. But what it actually senses are the electrons swirling around the atom's nucleus.

Both electrons and the nucleus act like tiny magnets, each with a property called spin. Scientists figured out how to detect the spin of a single electron using an STM about ten years ago.

Now, a team at TU Delft, led by Professor Sander Otte, asked a bold question: Can we go deeper and read the spin of the nucleus itself, in real time?

Otte explains, "The general idea had been demonstrated a few years ago, making use of the so-called hyperfine interaction between electron and nuclear spins. However, these early measurements were too slow to capture the motion of the nuclear spin over time."

Evert Stolte, first author of the study, said, "We were able to show that this switching corresponds to the nuclear spin flipping from one quantum state to another, and back again."

They found that the nuclear spin in the atom stays stable for about five seconds before flipping, much longer than most quantum systems. In comparison, the electron spin in the same atom lasts only about 100 nanoseconds, which is millions of times shorter.

Because the researchers could measure the nuclear spin faster than it changed, and without disturbing it, they achieved what's called single-shot readout. This means they could catch the spin's state in one go, like snapping a photo before it moves.
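As a toy illustration only (not the Delft group's actual analysis), the idea of "measuring faster than it changes" can be sketched as a random telegraph signal: a two-state spin with an assumed 5-second mean dwell time, sampled every 0.1 seconds, from which the dwell time can be recovered. All numbers here are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(42)

T_DWELL = 5.0   # assumed mean dwell time between flips, seconds
DT = 0.1        # sampling interval: much shorter than T_DWELL

# Generate exponentially distributed dwell times and build the
# two-state trace (+1/-1), sampled every DT seconds.
dwells = rng.exponential(T_DWELL, size=2000)
edges = np.cumsum(dwells)                          # times at which the spin flips
t = np.arange(0.0, edges[-1], DT)
state = 1 - 2 * (np.searchsorted(edges, t) % 2)    # alternates between +1 and -1

# "Readout": detect flips in the sampled trace and estimate the dwell time.
flip_idx = np.flatnonzero(np.diff(state) != 0)
est_dwell = np.mean(np.diff(flip_idx)) * DT

print(f"estimated mean dwell time: {est_dwell:.2f} s")
```

Because the sampling interval is tiny compared with the dwell time, each state is caught "in one go" before it flips; if the sampling were slower than the switching, the flips would blur together and the estimate would fail.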

This breakthrough makes it possible to control nuclear spins more precisely, opening up new experiments. In the long run, it could help build powerful tools for quantum simulation and atomic-scale sensing.

Journal Reference:
Stolte, Evert W., Lee, Jinwon, Vennema, Hester G., et al. Single-shot readout of the nuclear spin of an on-surface atom [open], Nature Communications (DOI: 10.1038/s41467-025-63232-5)


Original Submission

posted by janrinok on Tuesday September 16, @09:24AM   Printer-friendly

Pentagon begins deploying new satellite network to link sensors with shooters:

The first 21 satellites in a constellation that could become a cornerstone for the Pentagon's Golden Dome missile defense shield successfully launched from California Wednesday aboard a SpaceX Falcon 9 rocket.

The Falcon 9 took off from Vandenberg Space Force Base, California, at 7:12 am PDT (10:12 am EDT; 14:12 UTC) and headed south over the Pacific Ocean toward a polar orbit before releasing the 21 military-owned satellites to begin several weeks of activations and checkouts.

These 21 satellites will boost themselves to a final orbit at an altitude of roughly 600 miles (1,000 kilometers). The Pentagon plans to launch 133 more satellites over the next nine months to complete the build-out of the Space Development Agency's first-generation, or Tranche 1, constellation of missile tracking and data relay satellites.

"We had a great launch today for the Space Development Agency, putting this array of space vehicles into orbit in support of their revolutionary new architecture," said Col. Ryan Hiserote, system program director for the Space Force's assured access to space launch execution division.

Military officials have worked for six years to reach this moment. The Space Development Agency (SDA) was established during the first Trump administration, which made plans for an initial set of demonstration satellites that launched a couple of years ago. In 2022, the Pentagon awarded contracts for the first 154 operational spacecraft. The first batch of 21 data relay satellites built by Colorado-based York Space Systems is what went up Wednesday.

"Back in 2019, when the SDA was stood up, it was to do two things. One was to make sure that we can do beyond line of sight targeting, and the other was to pace the threat, the emerging threat, in the missile warning and missile tracking domain. That's what the focus has been," said GP Sandhoo, the SDA's acting director.

Historically, the military communications and missile warning networks have used a handful of large, expensive satellites in geosynchronous orbit some 22,000 miles (36,000 kilometers) above the Earth. This architecture was devised during the Cold War, and is optimized for nuclear conflict and intercontinental ballistic missiles.

For example, the military's ultra-hardened Advanced Extremely High Frequency satellites in geosynchronous orbit are designed to operate through an electromagnetic pulse and nuclear scintillation. The Space Force's missile warning satellites are also in geosynchronous orbit, with infrared sensors tuned to detect the heat plume of a missile launch.

The problem? Those satellites cost more than $1 billion a pop. They're also vulnerable to attack from a foreign adversary. Pentagon officials say the SDA's satellite constellation, officially called the Proliferated Warfighter Space Architecture, is tailored to detect and track more modern threats, such as smaller missiles and hypersonic weapons carrying conventional warheads. It's easier for these missiles to evade the eyes of older early warning satellites.

What's more, the SDA's fleet in low-Earth orbit will have numerous satellites, so losing one or several to an attack would not degrade the constellation's overall capability. The SDA's new relay satellites cost between $14 million and $15 million each, according to Sandhoo. The first tranche of 154 operational satellites costs approximately $3.1 billion in total.

These satellites will not only detect and track ballistic and hypersonic missile launches. They will also transmit signals between US forces using an existing encrypted tactical data link network known as Link 16. This UHF system is used by NATO and other US allies to allow military aircraft, ships, and land forces to share tactical information through text messages, pictures, data, and voice communication in near real-time, according to the SDA's website.

Up to now, Link 16 radios have been ubiquitous on fighter jets, helicopters, naval vessels, and missile batteries. But they have a severe limitation: Link 16 can only close a radio link with a clear line of sight. The Space Development Agency's satellites will change that, providing direct-to-weapon connectivity from sensors to shooters on Earth's surface, in the air, and in space.

The relay satellites, which the SDA calls the transport layer, are also equipped with Ka-band and laser communication terminals for higher bandwidth connectivity.

"What the transport layer does is it extends beyond the line of sight," Sandhoo said. "Now, you're able to talk not only to within couple of miles with your Link 16 radios, (but) we can use space to, let's say, go from Hawaii out to Guam using those tactical radios, using a space layer."

Another batch of SDA relay satellites will launch next month, and more will head to space in November. In all, it will take 10 launches to fully deploy the SDA's Tranche 1 constellation. Six of those missions will carry data relay satellites, and four will carry satellites with sensors to detect and track missile launches. The Pentagon selected several contractors to build the satellites, so the military is not reliant on a single company. The builders of the SDA's operational satellites include York, Lockheed Martin, Northrop Grumman, and L3Harris.

"We will increase coverage as we get the rest of those launches on orbit," said Michael Eppolito, the SDA's acting deputy director.

The satellites will connect with one another using inter-satellite laser links, creating a mesh network with sufficient range to provide regional communications, missile warning, and targeting coverage over the Western Pacific beginning in 2027. US Indo-Pacific Command, which oversees military operations in this region, is slated to become the first combatant command to take up use of the SDA's satellite constellation.

This is not incidental. US officials see China as the nation's primary strategic threat, and Indo-Pacific Command would be on the front lines of any future conflict between Chinese and US forces. The SDA has contracts in place for more than 270 second-generation, or Tranche 2 satellites, to further expand the network's reach. There's also a third generation in the works, but the Pentagon has paused part of the SDA's Tranche 3 program to evaluate other architectures, including one offered by SpaceX.

Teaching tactical operators to use the new capabilities offered by the SDA's satellite fleet could be just as challenging as building the network itself. To do this, the Pentagon plans to put soldiers, sailors, airmen, and marines through "warfighter immersion" training beginning next year. This training will allow US forces to "get used to using space from this construct," Sandhoo said.

"This is different than how it has been done in the past," Sandhoo said. "This is the first time we'll have a space layer actually fully integrated into our warfighting operations."

The SDA's satellite architecture is a harbinger for what's to come with the Pentagon's Golden Dome system, a missile defense shield for the US homeland proposed by President Donald Trump in an executive order in January. Congress authorized a down payment on Golden Dome in July, the first piece of funding for what the White House says will cost $175 billion over the next three years.

Golden Dome, as currently envisioned, will require thousands of satellites in low-Earth orbit to track missile launches and space-based interceptors to attempt to shoot them down. The Trump administration hasn't said how much of the shield might be deployed by the end of 2028, or what the entire system might eventually cost.

But the capabilities of the SDA's satellites will lay the foundation for any regional or national missile defense shield. Therefore, it seems likely that the military will incorporate the SDA network into Golden Dome, which at least at first, is likely to consist of technologies already in space or nearing launch. Apart from the Space Development Agency's architecture in low-Earth orbit (LEO), the Space Force was already developing a new generation of missile warning satellites to replace aging platforms in geosynchronous orbit (GEO), plus a fleet of missile warning satellites to fly at a midrange altitude between LEO and GEO.

Air Force Gen. Gregory Guillot, commander of US Northern Command, said in April that Golden Dome "for the first time integrates multiple layers into one system that allows us to detect, track, and defeat multiple types of threats that affect us in different domains.

"So, while a lot of the components and the requirements were there in the past, this is the first time that it's all tied together in one system," he said.


Original Submission

posted by janrinok on Tuesday September 16, @04:38AM   Printer-friendly
from the why-our-sun-is-tame dept.

Solar pacifiers: Influence of the planets may subdue solar activity:

Our Sun is about five times less magnetically active than other sunlike stars – effectively a special case. The reason for this could reside in the planets in our solar system, say researchers at the Helmholtz-Zentrum Dresden-Rossendorf (HZDR). In the last ten years, they have developed a model that derives virtually all the Sun's known activity cycles from the cyclical influence of the planets' tidal forces. Now they have also been able to demonstrate that this external synchronization automatically curbs solar activity (DOI: 10.1007/s11207-025-02521-0).

At the moment, the Sun is actually reaching a maximum level of activity only seen roughly every eleven years. That is the reason why we on Earth observe more polar lights and solar storms as well as turbulent space weather in general. This has an impact on satellites in space right down to technological infrastructure on Earth. Despite this, in comparison with other sunlike stars, the strongest radiation eruptions from our Sun are 10 to 100 times weaker. This relatively quiet environment could be an important precondition for Earth being habitable. Not least for this reason, solar physicists want to understand what precisely drives solar activity.

It is known that solar activity has many patterns – both shorter and longer periodic fluctuations that range from a few hundred days to several thousand years. But researchers have very different ways of explaining the underlying physical mechanisms. The model developed by the team led by Frank Stefani at HZDR's Institute of Fluid Dynamics views the planets as pacemakers: on this understanding, approximately every eleven years, Venus, Earth and Jupiter focus their combined tidal forces on the Sun. Via a complex physical mechanism, each time they give the Sun's inner magnetic drive a little nudge. In combination with the rosette-shaped orbital motion of the Sun, this leads to overlapping periodic fluctuations of varying lengths – exactly as observed in the Sun.

"All the solar cycles identified are a logical consequence of our model; its explanatory power and internal consistency are really astounding. Each time we have refined our model we have discovered additional correlations with the periods observed," says Stefani. In the work now published, this is QBO – Quasi Biennial Oscillation – a roughly bi-annual fluctuation in various aspects of solar activity. The special point here is that in Stefani's model, QBO cannot only be assigned to a precise period, but it also automatically leads to subdued solar activity.

Up to now, solar data have usually reported on QBO periods of 1.5 to 1.8 years. In earlier work, some researchers had suggested a connection between QBO and so-called Ground Level Enhancement events. They are sporadic occurrences during which energy-rich solar particles trigger a sudden increase in cosmic radiation on the Earth's surface. "A study conducted in 2018 shows that radiation events measured close to the ground occurred more in the positive phase of an oscillation with a period of 1.73 years. Contrary to the usual assumption that these solar particle eruptions are random phenomena, this observation indicates a fundamental, cyclical process," says Stefani. This is why he and his colleagues revisited the chronology once again. They discovered the greatest correlation for a period of 1.724 years. "This value is remarkably close to the value of 1.723 years which occurs in our model as a completely natural activity cycle," says Stefani. "We assume that it is QBO."

While the Sun's magnetic field oscillates between minimum and maximum over a period of eleven years, QBO imposes an additional short-period pattern on the field strength. This subdues the field strength overall because the Sun's magnetic field does not maintain its maximal value for so long. A frequency diagram reveals two peaks: one at maximum field strength and the other when QBO swings back. This effect is known as bimodality of the solar magnetic field. In Stefani's model, the two peaks cause the average strength of the solar magnetic field to be reduced – a logical consequence of QBO.
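The bimodality argument can be illustrated with a hedged numerical sketch (illustrative amplitudes, not the HZDR model): modulate a peak field strength with a small sinusoidal "QBO" term at the 1.723-year period from the article and histogram the result. A sinusoid lingers near its turning points, so the distribution of field strengths shows two peaks, and the time average sits well below the modulated maximum.

```python
import numpy as np

B_MAX = 1.0          # peak field strength near solar maximum (arbitrary units)
QBO_AMP = 0.2        # assumed modulation depth of the QBO (illustrative)
QBO_PERIOD = 1.723   # years, the period reported in the article

# Field strength near solar maximum with the QBO superimposed.
t = np.linspace(0.0, 100.0, 200_000)  # years
field = B_MAX * (1.0 + QBO_AMP * np.sin(2 * np.pi * t / QBO_PERIOD))

counts, bin_edges = np.histogram(field, bins=21)

# The outermost bins (near B_MAX*(1-QBO_AMP) and B_MAX*(1+QBO_AMP))
# dominate the histogram: the two peaks of the bimodal distribution.
print("bimodal:", counts[0] > counts[10] and counts[-1] > counts[10])
print(f"mean field {field.mean():.3f} vs modulated max {field.max():.3f}")
```

The lower average follows directly: the field spends much of its time near the lower QBO turning point instead of holding its maximal value.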

"This effect is so important because the Sun is most active during the highest field strengths. This is when the most intense events occur with huge geomagnetic storms like the Carrington event of 1859 when polar lights could even be seen in Rome and Havanna, and high voltages damaged telegraph lines. If the Sun's magnetic field remains at lower field strengths for a significantly longer period of time, however, this reduces the likelihood of very violent events," Stefani explains.

Publication:
F. Stefani, G. M. Horstmann, G. Mamatsashvili, T. Weier, Adding Further Pieces to the Synchronization Puzzle: QBO, Bimodality, and Phase Jumps, in Solar Physics, 2025 (DOI: 10.1007/s11207-025-02521-0)



Original Submission

posted by hubie on Monday September 15, @11:54PM   Printer-friendly
from the no-actual-manufacturing-jobs dept.

New Apple-funded program teaches manufacturing to US firms:

Apple has announced that it is opening its first Apple Manufacturing Academy in Detroit, providing a program of advanced manufacturing skills for US workers.

If you really want to bring manufacturing jobs back to the US, you need to train people rather than impose tariffs. As part of its existing commitment to investing $500 billion in US businesses, Apple is launching an Apple Manufacturing Academy that will open with a two-day program on August 19, 2025.

"We're thrilled to welcome companies from across the country to the Apple Manufacturing Academy starting next month," said Sabih Khan, Apple's new chief operating officer in a statement. "Apple works with suppliers in all 50 states because we know advanced manufacturing is vital to American innovation and leadership."

"With this new programming," he continued, "we're thrilled to help even more businesses implement smart manufacturing so they can unlock amazing opportunities for their companies and our country."

Running in partnership with Michigan State University, the new academy will broadly follow the same structure as existing Developer Academies, such as the one already in Detroit. It will host small and medium-sized businesses from across the US, and teach manufacturing and technology skills including:

  • Machine learning and deep learning in manufacturing
  • Automation in the product manufacturing industry
  • Leveraging data to improve quality
  • Applying digital technologies to operations

The sessions will initially consist of in-person workshops with Apple staff. Later in 2025, Apple says a virtual program will be added, specifically for issues such as project management.

Firms interested in applying can register for the first academy on Michigan State University's official site.

While this academy is newly announced, it's part of the long-standing $500 billion program that Apple is announcing piecemeal. The most recent addition to it is the investment into Texas-based firm MP Materials on a project to increase Apple's use of US-made rare earth magnets.



Original Submission

posted by hubie on Monday September 15, @07:12PM   Printer-friendly

The newly developed concept uses liquid uranium to heat rocket propellant:

Engineers from Ohio State University are developing a new way to power rocket engines, using liquid uranium for a faster, more efficient form of nuclear propulsion that could deliver round trips to Mars within a single year.

NASA and its private partners have their eyes set on the Moon and Mars, aiming to establish a regular human presence on distant celestial bodies. The future of space travel depends on building rocket engines that can propel vehicles farther into space and do it faster. Nuclear thermal propulsion is currently at the forefront of new engine technologies aiming to significantly reduce travel time while allowing for heavier payloads.

Nuclear propulsion uses a nuclear reactor to heat a liquid propellant to extremely high temperatures, turning it into a gas that's expelled through a nozzle and used to generate thrust. The newly developed engine concept, called the centrifugal nuclear thermal rocket (CNTR), uses liquid uranium to heat rocket propellant directly. In doing so, the engine promises more efficiency than traditional chemical rockets, as well as other nuclear propulsion engines, according to new research published in Acta Astronautica.

If it proves successful, CNTR could allow future vehicles to travel farther using less fuel. Traditional chemical engines achieve a specific impulse (a measure of how much thrust an engine produces from a given amount of propellant) of about 450 seconds. Nuclear propulsion engines can reach around 900 seconds, with the CNTR possibly pushing that number even higher.
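To see what doubling specific impulse buys, here is a quick sketch using the standard Tsiolkovsky rocket equation. The mass ratio below is an illustrative assumption, not a mission figure from the article:

```python
import math

G0 = 9.80665  # standard gravity, m/s^2

def delta_v(isp_s: float, mass_ratio: float) -> float:
    """Tsiolkovsky rocket equation: delta-v from specific impulse (seconds)
    and the ratio of initial (wet) to final (dry) vehicle mass."""
    return isp_s * G0 * math.log(mass_ratio)

MASS_RATIO = 3.0  # assumed wet/dry mass ratio, same for both engines

dv_chemical = delta_v(450.0, MASS_RATIO)  # typical chemical engine
dv_nuclear = delta_v(900.0, MASS_RATIO)   # nuclear thermal, per the article

print(f"chemical: {dv_chemical / 1000:.1f} km/s")
print(f"nuclear:  {dv_nuclear / 1000:.1f} km/s")
```

For a fixed mass ratio, doubling the specific impulse doubles the achievable delta-v, which is what makes the shorter Mars transit times plausible.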

"You could have a safe one-way trip to Mars in six months, for example, as opposed to doing the same mission in a year," Spencer Christian, a PhD student at Ohio State and leader of CNTR's prototype construction, said in a statement. "Depending on how well it works, the prototype CNTR engine is pushing us towards the future."

CNTR promises faster routes, but it could also use different types of propellant, like ammonia, methane, hydrazine, or propane, that can be found in asteroids or other objects in space.

The concept is still in its infancy, and a few engineering challenges remain before CNTR can fly missions to Mars. Engineers are working to ensure that startup, shutdown, and operation of the engine don't cause instabilities, while also finding ways to minimize the loss of liquid uranium.

"We have a very good understanding of the physics of our design, but there are still technical challenges that we need to overcome," Dean Wang, associate professor of mechanical and aerospace engineering at Ohio State and senior member of the CNTR project, said in a statement. "We need to keep space nuclear propulsion as a consistent priority in the future, so that technology can have time to mature."


Original Submission

posted by hubie on Monday September 15, @02:29PM   Printer-friendly
from the that's-only-1900-floppies-for-the-install dept.

The PowerShell script should work with any version of Windows 11:

NTDEV has introduced the Nano11 Builder for Windows 11. This is a tool that allows Windows 11 testers and tinkerers to pare down Microsoft's latest OS into the bare minimum. With this new release, the developer has significantly pushed the boundaries of their prior Tiny11 releases.

Nano11's extreme pruning of the official installer disk image from Microsoft produces a new ISO "up to 3.5 times smaller" than the original. The example Windows 11 standard ISO shown in the Nano11 demo was 7.04GB; once the PowerShell script has done its stuff, you end up with a 2.29GB ISO.

Furthermore, the completed Windows 11 install can scrape in as low as 2.8GB if you use Windows 11 LTSC as the source ISO.

Before we talk any more about Nano11, please be warned that it is described as "an extreme experimental script designed for creating a quick and dirty development testbed." It could also be useful for Windows 11 VM (virtual machine) tests, suggests its developer. But it isn't designed for installing a compact Windows 11 as your daily workhorse.

[...] Some of the social media postings suggest that, when following in NTDEV's Nano11 footsteps, you will end up with as little as a 2.8GB Windows 11 install footprint. However, this will depend on the 'flavor' of Windows 11 you start with, and there is also a little bit more work to be done to achieve the minimum size.

After installation, the example Nano11 install actually uses up 11.0GB of the 20GB virtual disk in the VM. It is only after NTDEV runs the 'Compact' command on the C: drive using LZX compression and then deletes the virtual memory page file that we see the installation reduced to around the 3.2GB level.

"The nano11 script used on a Win11 LTSC image leads to an even better result! This will be perfect for when you need an ISO that can be installed in like 5 minutes and without any (and I mean it) fluff" (September 9, 2025)

Also see: Tiny11 Builder Update Lets Users Strip Copilot and Other Bloat From Windows 11


Original Submission

posted by hubie on Monday September 15, @09:46AM   Printer-friendly
from the grok-is-this-real? dept.

Google Is Telling People DOGE Never Existed:

Here's a Mandela effect event that you probably thought was real: The Department of Government Efficiency, the pseudo-agency run by Elon Musk to cut "fraud, waste, and abuse" from federal operations, didn't actually exist. At least, that is what Google's AI Overview response will tell you if you search certain content related to DOGE's operations.

A Bluesky user who goes by iucounu first pointed out this mistake in Google's comprehension skills, finding that querying the search engine about the number of deaths caused by DOGE's cutting of essential programs results in a response that claims the agency is "fictional" and from "a political satire or conspiracy theory." Gizmodo was able to recreate these results:

According to Google, "There is no actual government department named DOGE, and the term is used in critical or satirical contexts to refer to policies or actions taken by the Trump administration." The results expand on this later, stating, "It is crucial to understand that there is no actual government entity named DOGE, and the discussion around it is part of political discourse or satire, not a factual government action."

There are certainly outlets and people who have suggested that DOGE is fake, either in that it does nothing to accomplish its stated mission or actually is not a real agency established by the federal government (though it certainly functions as one). But the AI Overview does not cite any source that suggests this.

The closest it gets to a source outright saying DOGE doesn't exist is a link to the Democrats' House Committee on the Budget, which has a page titled "The So-Called 'DOGE,'" but even that offers a pretty clear statement that DOGE is not some mass delusion: "DOGE is an organization in the Executive Office of the President. It is not a cabinet-level agency with Senate-approved leadership and has no statutory authority to alter Congressionally appropriated funds." The other sources, places like Lawfare and the Center on Budget and Policy Priorities, don't even come close to suggesting the agency is a satire.

So what gives? Google didn't offer any explanation when contacted, though a spokesperson for the company did tell Gizmodo, "This AI Overview is clearly incorrect. It violated our policies around civic information, and we are taking action to address the issue."


Original Submission

posted by jelizondo on Monday September 15, @05:05AM   Printer-friendly
from the cybersecurity-elephant-in-the-room dept.

Apparently, China's dominance of the supply chain means that it's also seen as the principal source of cybersecurity risk:

"We know that foreign, hostile actors see Australia's energy system as a good target," Home Affairs assistant secretary for cyber security Sophie Pearce told the small, afternoon-on-the-last-day audience.

"We know that cyber vector is the most likely means of disrupting our energy ecosystem, and I think that the energy transition raises the stakes even further. Where we're reliant on foreign investment and foreign supply chains, lots of opportunity there, obviously.

"When there's a dependency on jurisdictions that might require or can compel access to data or access to systems, that increases the risks."

[...] Pearce Courtney handles cyber coordination for energy markets at AEMO, and while he says it's maintaining visibility over the whole structure that keeps the organisation "up at night", technology concentration risk is on the radar.

[...] "In terms of the technology and the devices and where we're buying our supply chain. That's probably the other challenge that doesn't keep us up at night, that's a significant, complex challenge."

China controls 80 per cent of the global supply chain for all the manufacturing stages of solar panels, according to an International Energy Agency (IEA) report from 2022. A similar study from 2024 shows China has almost 85 per cent of global battery cell production capacity.

[Editor's Comment: Title corrected to more accurately reflect the summary contents--JR 2025-09-15 05:55]


Original Submission

posted by jelizondo on Monday September 15, @12:19AM   Printer-friendly

Newly released video at House UFO hearing appears to show U.S. missile striking and bouncing off orb:

A newly released video captured by a U.S. reaper drone shows a glowing orb off the coast of Yemen. Then in the video, a Hellfire missile suddenly struck the unidentified object and bounced off it.

Rep. Eric Burlison, a Republican from Missouri, shared the video at a House Oversight hearing on Tuesday on what the military calls "Unidentified Aerial Phenomena," better known as UFOs.

The video, dated Oct. 30, 2024, was provided by a whistleblower; when slowed down, it shows the missile continuing on its own path after striking the orb.

A recent government report revealed that it had received more than 750 new UAP sightings between May 2023 and June 2024, leaving lawmakers digging into the mystery and national security concerns posed by the objects.

"We've never seen a Hellfire missile hit a target and bounce off," said Lue Elizondo, a former senior intelligence official with the Pentagon.

"When a hellfire makes a hit, a kinetic strike on something solid, there's usually not much left of whatever it is it's hitting," Elizondo said. "It's very, very destructive. But in the video ... what seems to happen is that the missile is either redirected, or in some case, perhaps glances off the object and continues on its way."

What was not shown in the video is a second reaper drone that launched the missile.

Details remain unclear, including what the mission was.

The U.S. military was conducting regular air strikes against Houthi targets that posed a threat to the U.S. Navy and commercial vessels.

Pentagon officials told CBS News they have no comment.

The Defense Department in 2023 launched a website for declassified UAP information, following a House Oversight Committee hearing earlier that year that featured testimony from a former military intelligence officer and two former fighter pilots with first-hand experience of the mysterious objects.

At a House Oversight Committee hearing on unidentified anomalous phenomena (UAPs) Sept. 9, Rep. Eric Burlison (R-MO) shared drone footage from October 2024, which shows the "hit" [3:26 --JE]


Original Submission

posted by jelizondo on Sunday September 14, @07:39PM   Printer-friendly

Scientists Stunned as Tiny Algae Keep Moving Inside Arctic Ice:

Scientists know that microbial life can survive under some extreme conditions—including, hopefully, harsh Martian weather. But new research suggests that one particular microbe, an algal species found in Arctic ice, isn't as immobile as previously believed. These diatoms are surprisingly active, gliding across—and even within—their frigid stomping grounds.

In a Proceedings of the National Academy of Sciences paper published September 9, researchers explained that ice diatoms—single-celled algae with glassy outer walls—actively dance around in the ice. This feisty activity challenges assumptions that microbes living in extreme environments, or extremophiles, are barely getting by. If anything, these algae evolved to thrive despite the extreme conditions. The remarkable mobility of these microbes also hints at an unexpected role they may play in sustaining Arctic ecology.

"This is not 1980s-movie cryobiology," said Manu Prakash, the study's senior author and a bioengineer at Stanford University, in a statement. "The diatoms are as active as we can imagine until temperatures drop all the way down to -15 C [5 degrees Fahrenheit], which is super surprising."

That temperature is the lowest ever for a eukaryotic cell like the diatom, the researchers claim. Surprisingly, diatoms of the same species from a much warmer environment didn't demonstrate the same skating behavior as the ice diatoms. This implies that the extreme life of Arctic diatoms birthed an "evolutionary advantage," they added.

For the study, the researchers collected ice cores from 12 stations across the Arctic in 2023. They conducted an initial analysis of the cores using on-ship microscopes, creating a comprehensive image of the tiny society inside the ice.

To get a clearer picture of how and why these diatoms were skating, the team sought to replicate the conditions of the ice core inside the lab. They prepared a Petri dish with thin layers of frozen freshwater and very cold saltwater. The team even donated strands of their hair to mimic the microfluidic channels in Arctic ice, which expel salt from the frozen matrix.

As they expected, the diatoms happily glided through the Petri dish, using the hair strands as "highways" during their routines. Further analysis allowed the researchers to track and pinpoint how the microbes accomplished their icy trick.

"There's a polymer, kind of like snail mucus, that they secrete that adheres to the surface, like a rope with an anchor," explained Qing Zhang, study lead author and a postdoctoral student at Stanford, in the same release. "And then they pull on that 'rope,' and that gives them the force to move forward."

If we're talking numbers, algae may be among the most abundant living organisms in the Arctic. To put that into perspective, Arctic waters appear "absolute pitch green" in drone footage purely because of algae, explained Prakash.

The researchers have yet to identify the significance of the diatoms' gliding behavior. However, knowing that they're far more active than we believed could mean that the tiny skaters unknowingly contribute to how resources are cycled in the Arctic.

"In some sense, it makes you realize this is not just a tiny little thing; this is a significant portion of the food chain and controls what's happening under ice," Prakash added.

That's a significant departure from what we often think of them as—a major food source for other, bigger creatures. But if true, it would help scientists gather new insights into the hard-to-probe environment of the Arctic, especially as climate change threatens its very existence. The timing of this result shows that, to understand what's beyond Earth, we first need to protect and safely observe what's already here.

Journal Reference:
DOI: Ice gliding diatoms establish record-low temperature limits for motility in a eukaryotic cell


Original Submission

posted by hubie on Sunday September 14, @02:49PM   Printer-friendly

Researchers investigated giant prehistoric trash piles to reveal where animal remains came from:

You can learn a lot about people by studying their trash, including populations that lived thousands of years ago.

In what the team calls the "largest study of its kind," researchers applied this principle to Britain's iconic middens, or giant prehistoric trash (excuse me, rubbish) piles. Their analysis revealed that at the end of the Bronze Age (2,300 to 800 BCE), people—and their animals—traveled from afar to feast together.

"At a time of climatic and economic instability, people in southern Britain turned to feasting—there was perhaps a feasting age between the Bronze and Iron Age," Richard Madgwick, an archaeologist at Cardiff University and co-author of the study published yesterday in the journal iScience, said in a university statement. "These events are powerful for building and consolidating relationships both within and between communities, today and in the past."

Madgwick and his colleagues investigated material from six middens in Wiltshire and the Thames Valley via isotope analysis, a technique archaeologists use to link animal remains to the unique chemical make-up of a particular geographic area. The technique reveals where the animals were raised, allowing the researchers to see how far people traveled to join these feasts.

"The scale of these accumulations of debris and their wide catchment is astonishing and points to communal consumption and social mobilisation on a scale that is arguably unparalleled in British prehistory," Madgwick added.

[...] "Our findings show each midden had a distinct make up of animal remains, with some full of locally raised sheep and others with pigs or cattle from far and wide," said Carmen Esposito, lead author of the study and an archaeologist at the University of Bologna. "We believe this demonstrates that each midden was a lynchpin in the landscape, key to sustaining specific regional economies, expressing identities and sustaining relations between communities during this turbulent period, when the value of bronze dropped and people turned to farming instead."

A number of these prehistoric trash heaps, which resulted from potentially the largest feasts in Britain until the Middle Ages (that would mean they even outdid the Romans), were eventually incorporated into the landscape as small hills.

"Overall, the research points to the dynamic networks that were anchored on feasting events during this period and the different, perhaps complementary, roles that each midden had at the Bronze Age-Iron Age transition," Madgwick concluded.

Since previous research indicates that Late Neolithic (2,800 BCE to 2,400 BCE) communities in Britain were also organizing feasts that attracted guests—and their pigs—from far and wide, I think it's fair to say that prehistoric British people were throwing successful ragers across 2,000 years.

Journal Reference: Esposito, Carmen et al. Diverse feasting networks at the end of the Bronze Age in Britain (c. 900-500 BCE) evidenced by multi-isotope analysis [OPEN], iScience, Volume 0, Issue 0, 113271 https://doi.org/10.1016/j.isci.2025.113271


Original Submission

posted by hubie on Sunday September 14, @10:00AM   Printer-friendly

Arbitrarily inflated lock-in-tastic fees curbed as movement charges must be cost-linked:

Most of the provisions of the EU Data Act will officially come into force from the end of this week, requiring cloud providers to make it easier for customers to move their data, but some of the big players are keener than others.

The European Data Act is an ambitious attempt by the European Commission to galvanize the market for digital services by opening up access to data. But it also contains provisions to permit customers to move seamlessly between different cloud operators and combine data services from different providers in a so-called multi-cloud strategy.

Cloud users have often complained about the fees that operators charge whenever data is transferred outside of their networks. Investigations by regulators such as the UK's Competition and Markets Authority (CMA) have led the big three platforms – AWS, Microsoft's Azure and Google Cloud – to all waive egress fees, but only for users quitting their platforms.

While the Data Act doesn't rule out vendors charging data transfer fees, it does expect cloud firms to pass on costs to customers rather than charging arbitrary or excessive payments.

Google is keen to publicize that it is going further than this and offering data movement at no cost for customers in both the European Union and the United Kingdom via a newly announced Data Transfer Essentials service.

There's a catch, of course – Google makes it clear that its service is designed for cost-optimized data transfer between two services of a customer organization that happen to be running on different cloud platforms.

In other words, it is for traffic that would effectively be considered internal to the customer organization and not for transfers to third parties. Google warns that if one of its audits uncovers that the service is being misused in this way, the traffic will be billed as regular internet traffic.

Microsoft is offering at-cost transfer for customers and cloud service partners in the EU shifting data to another provider, but there are also strings attached. Customers must create an Azure Support request for the transfer, specifying where the data is to be moved, and it must also be to a service operated by the same customer, not to endpoints belonging to different customers.

We understand that AWS specifies that EU customers "request reduced data transfer rates for eligible use cases under the European Data Act," requiring them to contact customer support for further information. We asked AWS for clarification.

Google claims that its move demonstrates its commitment to fostering an open and fair cloud market in Europe.

This might have something to do with it being a bit of an underdog here, making up about 10 percent of the European cloud market, while AWS is estimated to take 32 percent, and Azure another 23 percent.

"The original promise of the cloud is one that is open, elastic, and free from artificial lock-ins. Google Cloud continues to embrace this openness and the ability for customers to choose the cloud service provider that works best for their workload needs," said the Google Cloud's senior director for global risk and compliance, Jeanette Manfra.


Original Submission

posted by hubie on Sunday September 14, @05:14AM   Printer-friendly

Pluralistic: Fingerspitzengefühl (08 Sep 2025) – Pluralistic: Daily links from Cory Doctorow:

This was the plan: America would stop making things and instead make recipes, the "IP" that could be sent to other countries to turn into actual stuff, in distant lands without the pesky environmental and labor rules that forced businesses to accept reduced profits because they weren't allowed to maim their workers and poison the land, air and water.

This was quite a switch! At the founding of the American republic, the US refused to extend patent protection to foreign inventors. The inventions of foreigners would be fair game for Americans, who could follow their recipes without paying a cent, and so improve the productivity of the new nation without paying rent to old empires over the sea.

It was only once America found itself exporting as much as it imported that it saw fit to recognize the prerogatives of foreign inventors, as part of reciprocal agreements that required foreigners to seek permission and pay royalties to American patent-holders.

But by the end of the 20th Century, America's ruling class was no longer interested in exporting things; they wanted to export ideas, and receive things in return. You can see why: America has a limited supply of things, but there's an infinite supply of ideas (in theory, anyway).

There was one problem: why wouldn't the poor-but-striving nations abroad copy the American Method for successful industrialization? If ignoring Europeans' patents allowed America to become the richest and most powerful nation in the world, why wouldn't, say, China just copy all that American "IP"? If seizing foreigners' inventions without permission was good enough for Thomas Jefferson, why not Jiang Zemin?

America solved this problem with the promise of "free trade." The World Trade Organization divided the world into two blocs: countries that could trade with one another without paying tariffs, and the rabble without, who had to navigate a complex O(n^2) problem of different tariff schedules between every pair of nations.
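The O(n^2) framing can be made concrete: outside a common bloc, every pair of trading nations needs its own tariff schedule, so the count grows quadratically with the number of countries. A minimal sketch (the member counts below are illustrative, not from the article):

```python
def bilateral_schedules(n: int) -> int:
    """Distinct country pairs, each needing its own tariff schedule: n choose 2."""
    return n * (n - 1) // 2

# Inside the bloc, one shared tariff regime suffices; outside it,
# the pairwise schedules pile up quadratically.
for n in (10, 50, 160):
    print(f"{n} countries -> {bilateral_schedules(n)} bilateral schedules")
```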

To join the WTO club, countries had to sign up to a side-treaty called the Trade-Related Aspects of Intellectual Property Rights (TRIPS). Under the TRIPS, the Jeffersonian plan for industrialization (taking foreigners' ideas without permission) was declared a one-off, a scheme only the US got to try and no other country could benefit from. For China to join the WTO and gain tariff-free access to the world's markets, it would have to agree to respect foreign patents, copyrights, trademarks and other "IP."

We know the story of what followed over the next quarter-century: China became the world's factory, and became so structurally important that even if it violated its obligations under the TRIPS, "stealing the IP" of rich nations, no one could afford to close their borders to Chinese imports, because every country except China had forgotten how to make things.

But this isn't the whole story – it's not even the most important part of it. In his new book Breakneck, Dan Wang (a Chinese-born Canadian who has lived extensively in Silicon Valley and in China) devotes a key chapter to "process knowledge":

https://danwang.co/breakneck/

What's "process knowledge"? It's all the intangible knowledge that workers acquire as they produce goods, combined with the knowledge that their managers acquire from overseeing that labor. The Germans call it "Fingerspitzengefühl" ("fingertip-feeling"), like the sense of having a ball balanced on your fingertips, and knowing exactly which way it will tip as you tilt your hand this way or that.

[...] Process knowledge is everything from "Here's how to decant feedstock into this gadget so it doesn't jam," to "here's how to adjust the flow of this precursor on humid days to account for the changes in viscosity" to "if you can't get the normal tech to show up and calibrate the part, here's the phone number of the guy who retired last year and will do it for time-and-a-half."

It can also be decidedly high-tech. A couple years ago, the legendary hardware hacker Andrew "bunnie" Huang explained to me his skepticism about the CHIPS Act's goal of onshoring the most advanced (4-5nm) chips.

[...] This process is so esoteric, and has so many figurative and literal moving parts, that it needs to be closely overseen and continuously adjusted by someone with a PhD in electrical engineering. That overseer needs to wear a clean-room suit, and they have to work an eight-hour shift without a bathroom, food or water break (because getting out of the suit means going through an airlock means shutting down the system means long delays and wastage).

That PhD EENG is making $50k/year. Bunnie's topline explanation for the likely failure of the CHIPS Act is that this is a process that could only be successfully executed in a country "with an amazing educational system and a terrible passport." For bunnie, the extensive educational subsidies that produced Taiwan's legion of skilled electrical engineers and the global system that denied them the opportunity to emigrate to higher-wage zones were the root of the country's global dominance in advanced chip manufacture.

I have no doubt that this is true, but I think it's incomplete. What bunnie is describing isn't merely the expertise imparted by attaining a PhD in electrical engineering – it's the process knowledge built up by generations of chip experts who debugged generations of systems that preceded the current tin-vaporizing Rube Goldberg machines.

[...] Wang evocatively describes how China built up its process knowledge over the WTO years, starting with simple assembly of complex components made abroad, then progressing to making those components, then to coming up with novel ways to reconfigure them ("a drone is a cellphone with propellers"). He explains how the vicious cycle of losing process knowledge accelerated the decline of manufacturing in the west: every time a factory goes to China, US manufacturers that had been in its supply chain lose process knowledge. You can no longer call up that former supplier and brainstorm solutions to tricky production snags, which means that other factories in the supply chain suffer, and they, too, get offshored to China.

America's vicious cycle was China's virtuous cycle. The process knowledge that drained out of America accumulated in China. Years of experience solving problems in earlier versions of new equipment and processes gives workers a conceptual framework to debug the current version – they know about the raw mechanisms subsumed in abstraction layers and sealed packages and can visualize what's going on inside those black boxes.

[...] But here's the thing: while "IP" can be bought and sold by the capital classes, process knowledge is inseparably vested in the minds and muscle-memory of their workers. People who own the instructions are constitutionally prone to assuming that making the recipe is the important part, while following the recipe is donkey-work you can assign to any freestanding oaf who can take instruction.

[...] The exaltation of "IP" over process knowledge is part of the ancient practice of bosses denigrating their workers' contribution to the bottom line. It's key to the myth that workers can be replaced by AI: an AI can consume all the "IP" produced by workers, but it doesn't have their process knowledge. It can't, because process knowledge is embodied and enmeshed, it is relational and physical. It doesn't appear in training data.

In other words, elevating "IP" over process knowledge is a form of class war. And now that the world's store of process knowledge has been sent to the global south, the class war has gone racial. Think of how Howard Dean – now a paid shill for the pharma lobby – peddled the racist lie that there was no point in dropping patent protections for the covid vaccines, because brown people in poor countries were too stupid to make advanced vaccines:

The truth is that the world's largest vaccine factories are to be found in the global south, particularly India, and these factories sit at the center of a vast web of process knowledge, embedded in relationships and built up with hard-won problem-solving.

Bosses would love it if process knowledge didn't matter, because then workers could finally be tamed by industry. We could just move the "IP" around to the highest bidders with the cheapest workforces. But Wang's book makes a forceful argument that it's easier to build up a powerful, resilient society based on process knowledge than it is to do so with IP. What good is a bunch of really cool recipes if no one can follow them?

I think that bosses are, psychoanalytically speaking, haunted by the idea that their workers own the process knowledge that is at the heart of their profits. That's why bosses are so obsessed with noncompete "agreements." If you can't own your workers' expertise, then you must own your workers. Any time a debate breaks out over noncompetes, a boss will say something like, "My intellectual property walks out the door of my shop every day at 5PM." They're wrong: the intellectual property is safely stored on the company's hard drives – it's the process knowledge that walks out the door.


Original Submission