


posted by jelizondo on Wednesday October 29, @09:19PM   Printer-friendly
from the thats-a-lot-of-revolutions dept.

https://arstechnica.com/space/2025/10/25-years-one-website-iss-in-real-time-captures-quarter-century-on-space-station/

With the milestone just days away, you are likely to hear this week that there has now been a continuous human presence on the International Space Station (ISS) for the past 25 years. But what does that quarter of a century actually encompass?

[...]

Fortunately, the astronauts and cosmonauts on the space station have devoted some of their work time and a lot of their free time to taking photos, filming videos, and calling down to Earth. Much of that data has been made available to the public, but in separate repositories, with no real way to correlate or connect it with the timeline on which it was all created.

That is, not until now. Two NASA contractors, working only during their off hours, have built a portal into all of those resources to uniquely represent the 25-year history of ISS occupancy.

ISS in Real Time, by Ben Feist and David Charney, went live on Monday (October 27), ahead of the November 2 anniversary.


Original Submission

posted by jelizondo on Wednesday October 29, @04:32PM   Printer-friendly

ASML launches revolutionary lithography scanner for advanced 3D chip packaging:

Last week, ASML introduced the Twinscan XT:260 lithography scanner, the industry's first scanner that has been designed from the ground up for advanced 3D packaging, marking a new era in fab tools.

Advanced packaging technologies like TSMC's Chip-on-Wafer-on-Substrate (CoWoS) are crucially important to achieve the performance scaling necessary to develop artificial intelligence and to evolve supercomputers.

Advanced packaging relies on deposition, etching, lithography, and metrology/inspection tools to make sophisticated chips. But while using these front-end tools is efficient for many steps, they are overengineered for some, and insufficient for others.

"In line with our plans to support our customers in the 3D integration space, we shipped ASML's first product serving Advanced Packaging, the Twinscan XT:260, an i-line scanner offering up to 4x productivity compared to existing solutions.", ASML posted in its Q3 2025 financial results.

ASML's Twinscan XT:260 is an i-line (365 nm) step-and-scan lithography system that processes 300-mm wafers and weds the precision of previous-generation front-end lithography tools with the productivity and flexibility of back-end tools. ASML claims this provides four times higher productivity compared to 'competing steppers' used for advanced packaging. The company never named the exact competing product, but Canon's FPA-5520iV is a good bet.

The key advantage of the tool compared to some of the existing machines used for advanced packaging is that it supports high-dose exposures (340 mJ is mentioned, though dose is usually tunable) and a 52 mm × 66 mm image field, enabling the tool to process up to 3,432 mm² interposers (4X EUV reticle size) without field stitching, which reduces complexity and speeds up the production cycle. To be fair, it should be noted that Canon's FPA-5520iV LF2 Option supports a 52 mm × 68 mm image field, but that machine is a stepper, not a scanner.
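
As a quick sanity check on those field-size figures, the numbers are self-consistent. The snippet below is our own back-of-the-envelope arithmetic, not from the article; it only assumes the standard full-field reticle exposure field of 26 mm × 33 mm:

    # Back-of-the-envelope check of the image-field figures quoted above.
    # Assumption: the standard full-field (EUV) reticle field is 26 mm x 33 mm.
    standard_field = 26 * 33   # 858 mm^2, one reticle field
    xt260_field = 52 * 66      # 3432 mm^2, the XT:260 image field (both dimensions doubled)
    print(xt260_field, xt260_field / standard_field)   # 3432 mm^2, exactly 4x the reticle size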

The system delivers 400 nm resolution, a 35 nm overlay, and offers a large depth of focus (11 µm at 1 µm CD) to enable accurate patterning for redistribution layers (RDLs), through-silicon vias (TSVs), and hybrid bonding structures used by modern packaging methods to integrate multi-chiplet designs. The unit also boasts 775 µm through-silicon alignment capabilities, making it particularly well suited for bonded or non-planar wafers, which are common in 3D stacking.

The Twinscan XT:260 relies on ASML's dual-stage platform, so it can expose one wafer while simultaneously aligning the next, which significantly increases its performance. Speaking of performance, the machine can process up to 270 wafers per hour (at a 340 mJ dose) and handle thick (0.775 mm – 1.7 mm) or warped (1 mm) wafers.

With a 400 nm resolution, 35 nm overlay, and the ability to handle thick or warped wafers (up to 1.7 mm), the XT:260 is optimized for technologies like Intel's Foveros, TSMC's CoWoS and System-on-Integrated-Chips (SoIC), as well as other high-density die-stacking or interposer technologies that require precise alignment through silicon.

The Twinscan XT:260 is positioned below the Twinscan XT:400M, the company's most basic i-line scanner used for chipmaking on mature nodes, which may be overkill for chip packaging on advanced nodes for now. Keep in mind that there are plenty of ASML PAS 5500 i-line steppers in use for 'more-than-Moore' applications, which is the company's roundabout way of referring to advanced packaging.

Compared with Canon's FPA-5520iV and the long-standing PAS 5500 i-line steppers, which have been widely used for CoWoS and fan-out packaging, the XT:260 represents a major leap in both productivity and precision. While the aforementioned tools rely on step-and-repeat exposure with limited throughput and field size (PAS 5500 only), the XT:260 introduces a scanner architecture with continuous wafer movement, advanced alignment optics, and automation suitable for high-volume 3D integration, which will be particularly important given that demand for advanced packaging is increasing.

ASML's Twinscan XT:260 is the industry's first lithography scanner designed specifically for advanced packaging. It's not the only lithography scanner aimed at advanced packaging, though. The semiconductor industry has a choice: Use new tools like the Twinscan XT:260, or repurpose existing tools designed for front-end chip manufacturing for advanced packaging. If ASML's estimations are correct, using this specific tool will be technically beneficial, but may come at significant expense.

Previous-generation front-end lithography, etch, and deposition tools offer sub-micron precision, but need ultra-clean processing environments that ensure tight overlay and defect control. This is because they produce thousands of interconnects linking chiplets and HBM stacks in 2.5D, and eventually 3D System-in-Packages (SiPs).

However, these front-end tools are far more expensive, in terms of both purchase price and total cost of ownership, than what is typically required for back-end packaging steps. Hence, using them in packaging lines drives up cost and limits output. One advantage of using them is that developers, process control engineers, and technicians at Intel and TSMC are familiar with those tools, which almost certainly guarantees good yields and fast ramp-ups. But that familiarity comes at a high cost and a relatively long production cycle.

With tools like ASML's Twinscan XT:260, wafer-level stages that demand extreme precision — TSV formation, RDL patterning, and hybrid bonding — will get faster and therefore cheaper. This will set the stage for the broader adoption of advanced packaging technologies in several years. It'll likely take some time for chipmakers like Intel, Samsung, and TSMC, or OSAT companies like ASE, Amkor, and JCET, to integrate the lithography system into their process technologies and flows.

Many more tools designed specifically for advanced packaging are still to come. Advanced packaging techniques continue to rely on 'classic' back-end tools for underfill, molding, ball attach, and many other operations. This hybrid flow balances cost with accuracy: front-end-grade tools are used where micron (or even nanometer) alignment matters, and cheaper back-end tools handle the rest.

As advanced packaging facilities adopt front-end tools (with the costs that come with them), the boundary between foundries and OSATs is blurring. TSMC's CoWoS and SoIC facilities are filled with wafer fab equipment from ASML, Applied Materials, Canon, KLA, Lam Research, and Tokyo Electron, and usually cost north of $3 billion, roughly the cost of an entire chip fab in the early 2010s. This trend is set to continue as more WFE makers roll out tools specifically tailored for advanced packaging in the coming quarters and years.


Original Submission

posted by jelizondo on Wednesday October 29, @11:34AM   Printer-friendly
from the dystopian-science dept.

https://arstechnica.com/cars/2025/10/an-autonomous-car-for-consumers-lucid-says-its-happening/

Is it possible to be a CEO in 2025 and not catch a case of AI fever? The latest company to catch this particular cold is Lucid, the Saudi-backed electric vehicle startup. Today, it announced a new collaboration with Nvidia to use the latter's hardware and software, with the aim of creating an autonomous vehicle for consumers. Oh, and the AI will apparently design Lucid's production lines.

Formed by refugees from Tesla who saw a chance to improve on their past work, Lucid has already built the most efficient EV on sale in North America.
[...]
"We've already set the benchmark in core EV attributes with proprietary technology that results in unmatched range, efficiency, space, performance, and handling," said interim CEO Marc Winterhoff. "Now, we're taking the next step by combining cutting-edge AI with Lucid's engineering excellence to deliver the smartest and safest autonomous vehicles on the road. Partnering with Nvidia, we're proud to continue powering American innovation leadership in the global quest for autonomous mobility," Winterhoff said.
[...]
Car buyers are starting to cotton on to driver assists like General Motors' Super Cruise, which about 40 percent of customers choose to pay for after the three-year free trial ends, and Lucid must be hoping that offering a far more advanced system, which won't require the human to pay any attention while it is engaged, will help it earn plenty of money.
[...]
Nvidia's industrial platform will let Lucid create its production lines digitally first before committing them to actual hardware. "By modeling autonomous systems, Lucid can optimize robot path planning, improve safety, and shorten commissioning time," Lucid said.


Original Submission

posted by hubie on Wednesday October 29, @06:52AM   Printer-friendly
from the paging-Mr-Pot-call-from-Mr-Kettle dept.

The Australian Government wants AI companies to pay copyright fees, in a move that may be more about getting a piece of the billions invested in AI. ARIA chief executive Annabelle Herd of the Copyright and AI Reference Group (CAIRG) has called the recommendation for a text and data mining exception "a radical change" that has been "put forward with very little evidence".

"Artificial Intelligence presents significant opportunities for Australia and our economy, however it's important that Australian creatives benefit from these opportunities too," Attorney-General Michelle Rowland said.

"Australian creatives are not only world class, but they are also the lifeblood of Australian culture, and we must ensure the right legal protections are in place."

[...] It is a difficult space for governments to regulate as they try to embrace the promised economic boons of AI without cumbersome red tape, while also putting up guardrails.

In the lead-up to Labor's economic reform roundtable in August, the Productivity Commission urged against heavy-handed regulation of AI, warning it could smother opportunities.

Among its recommendations was a text and data mining exception – a call that sparked furore.

But Ms Rowland vowed the government would not "weaken copyright protections when it comes to AI".

"The tech industry and the creative sector must now come together and find sensible and workable solutions to support innovation while ensuring creators are compensated," she said.

"The government will support these next steps through the renewed focus tasked to the Copyright and AI reference group."


Original Submission

posted by hubie on Wednesday October 29, @02:03AM   Printer-friendly
from the AI-will-be-watching-you dept.

OpenAI has acquired Software Applications Incorporated (SAI), perhaps best known for the core team that produced what became Shortcuts on Apple platforms. More recently, the team has been working on Sky, a context-aware AI interface layer on top of macOS. The financial terms of the acquisition have not been publicly disclosed.

"AI progress isn't only about advancing intelligence—it's about unlocking it through interfaces that understand context, adapt to your intent, and work seamlessly," an OpenAI rep wrote in the company's blog post about the acquisition. The post goes on to specify that OpenAI plans to "bring Sky's deep macOS integration and product craft into ChatGPT, and all members of the team will join OpenAI."

...Sky, which leverages Apple APIs and accessibility features to provide context about what's on screen to a large language model; the LLM takes plain language user commands and executes them across multiple applications. At its best, the tool aimed to be a bit like Shortcuts, but with no setup, generating workflows on the fly based on user prompts.

It bears some resemblance to features of Atlas, the ChatGPT-driven web browser that OpenAI launched earlier this week, and this acquisition piles on even more evidence that OpenAI has ambitions beyond a question-and-answer chatbot.

OpenAI can use the SAI team's knowledge of the macOS platform to develop new ways for ChatGPT not just to make suggestions about users' macOS environments, but to work on them directly and agentically.


Original Submission

posted by hubie on Tuesday October 28, @09:15PM   Printer-friendly

AI arms dealer relies on Taiwanese advanced packaging plants for top-specced GPUs:

US manufacturing of Nvidia GPUs is underway and CEO Jensen Huang is celebrating the first Blackwell wafer to come out of TSMC's Arizona chip factory. However, to be part of a complete product, those chips may need to visit Taiwan.

Nvidia first announced plans to produce chips at Fab21 just six months ago.

Speaking during an event in Phoenix on Friday, Huang lauded TSMC's manufacturing prowess while pandering to US President Donald Trump's America First agenda.

"This is the vision of President Trump of reindustrialization — to bring back manufacturing to America, to create jobs, of course, but also this is the single most vital manufacturing industry and the most important technology industry in the world," he said.

But while the silicon may be homegrown, Nvidia remains reliant on Taiwanese packaging plants to turn those wafers into its most powerful and highest-demand GPUs.

Modern GPUs are composed of multiple compute and memory dies. The company's Blackwell family of datacenter chips features two reticle-sized compute dies along with eight stacks of HBM3e memory, all stitched together using TSMC's CoWoS packaging tech.

Up to this point, all of TSMC's packaging facilities have been located in Taiwan. Amkor, an outsourced semiconductor assembly and test services (OSAT) provider, is working on building an advanced packaging plant in the US capable of stitching together silicon dies using TSMC's chip-on-wafer-on-substrate (CoWoS) tech. But until it's done – expected in 2027 or 2028 – the next stop for Nvidia's wafers will likely be Taiwan.

During TSMC's Q3 earnings call last week, CEO C.C. Wei confirmed the Amkor plan was moving forward, but the site was only now breaking ground.

It's worth noting that, while Nvidia's most potent accelerators rely on CoWoS, not all of its Blackwell chips do. The RTX Pro 6000, a 96GB workstation and server card aimed at AI inference, data visualization, and digital twins, features a single GPU die fed by GDDR7 memory rather than HBM3e. This means Nvidia doesn't need CoWoS to produce the chip. The same is true for much of Nvidia's RTX family of gaming cards.

Long-term, Nvidia isn't limited to TSMC or Amkor for packaging either. Nvidia has already announced plans to produce GPU tiles built by TSMC for Intel client processors that will presumably make use of the x86 giant's EMIB and/or Foveros advanced packaging technologies.

Nvidia hasn't said which are the first Blackwell wafers to roll off Fab21's production line. El Reg has reached out for clarification; we'll let you know what we hear back.


Original Submission

posted by hubie on Tuesday October 28, @04:28PM   Printer-friendly

Plus spy helping spy: Typhoons teaming up:

Security researchers now say more Chinese crews - likely including Salt Typhoon - than previously believed exploited a critical Microsoft SharePoint vulnerability, and used the flaw to target government agencies, telecommunications providers, a university, and a finance company across multiple continents.

Threat intel analysts at Broadcom-owned Symantec and Carbon Black uncovered additional victims and malware tools the intruders used, and published those and other details about the attacks in a Wednesday report.

In July, Microsoft patched the so-called ToolShell vulnerability (CVE-2025-53770), a critical remote code execution bug in on-premises SharePoint servers. But before Redmond fixed the flaw, Chinese attackers found and exploited it as a zero-day, compromising more than 400 organizations, including the US Energy Department.

Trend Micro's research team says they've uncovered additional evidence of China-aligned groups, specifically Salt Typhoon and its Beijing botnet-building brethren Flax Typhoon, collaborating in "what looks like a single cyber campaign at first sight."

In these attacks, Salt Typhoon (aka Earth Estries, FamousSparrow) performs the initial break-in, then hands the compromised org over to Flax Typhoon (aka Earth Naga).

"This phenomenon, which we have termed 'Premier Pass,' represents a new level of coordination in cyber campaigns, particularly among China-aligned APT actors," the Trend researchers said.

At the time, Microsoft attributed the break-ins to three China-based groups. These included two government-backed groups: Linen Typhoon (aka Emissary Panda, APT27), which typically steals intellectual property, and Violet Typhoon (aka Zirconium, Judgment Panda, APT31), which focuses on espionage and targets former government and military personnel and other high-value individuals.

Microsoft also accused a suspected China-based criminal org, Storm-2603, of exploiting the bug to infect victims with Warlock ransomware.

It now appears other Beijing crews – including Salt Typhoon, which famously hacked America's major telecommunications firms and stole information belonging to nearly every American – also joined in the attacks.


Original Submission

posted by hubie on Tuesday October 28, @11:45AM   Printer-friendly

In a study published last month (https://www.nature.com/articles/s41562-025-02297-0), researchers analyzed the internal sentence representations of both humans and LLMs. It turns out that humans and LLMs use similar tree structures. Quote from their conclusions: "The results also add to the literature showing that the human brain and LLM, albeit fundamentally different in terms of the implementation, can have aligned internal representations of language."

Originally seen on techxplore https://techxplore.com/news/2025-10-humans-llms-sentences-similarly.html:

A growing number of behavioral science and psychology studies have thus started comparing the performance of humans to that of LLMs on specific tasks, in the hope of shedding new light on the cognitive processes involved in the encoding and decoding of language. As humans and LLMs are inherently different, however, designing tasks that realistically probe how both represent language can be challenging.

Researchers at Zhejiang University have recently designed a new task for studying sentence representation and tested both LLMs and humans on it. Their results, published in Nature Human Behaviour, show that when asked to shorten a sentence, humans and LLMs tend to delete the same words, hinting at commonalities in their representation of sentences.

"Understanding how sentences are represented in the human brain, as well as in large language models (LLMs), poses a substantial challenge for cognitive science," wrote Wei Liu, Ming Xiang, and Nai Ding in their paper. "We develop a one-shot learning task to investigate whether humans and LLMs encode tree-structured constituents within sentences."

[...] Interestingly, the researchers' findings suggest that the internal sentence representations of LLMs are aligned with linguistic theory. In the task they designed, both humans and ChatGPT tended to delete full constituents (i.e., coherent grammatical units) as opposed to random word sequences. Moreover, the word strings they deleted appeared to vary based on the language they were completing the task in (i.e., Chinese or English), following language-specific rules.
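
To make the "full constituents versus random word strings" distinction concrete, here is a minimal, purely illustrative sketch (a toy parse and helper functions of our own, not the authors' code or data) of how one can check whether a deleted word span lines up with a node of a sentence's constituency tree:

    # Toy constituency parse of "the cat sat on the mat" as nested tuples: (label, children...)
    TREE = ("S",
            ("NP", ("DT", "the"), ("NN", "cat")),
            ("VP", ("VBD", "sat"),
                   ("PP", ("IN", "on"),
                          ("NP", ("DT", "the"), ("NN", "mat")))))

    def constituent_spans(node, start=0, spans=None):
        """Collect (start, end) leaf-index spans for every node in the tree."""
        if spans is None:
            spans = set()
        if isinstance(node[1], str):            # pre-terminal like ("DT", "the")
            spans.add((start, start + 1))
            return start + 1, spans
        pos = start
        for child in node[1:]:
            pos, spans = constituent_spans(child, pos, spans)
        spans.add((start, pos))
        return pos, spans

    _, SPANS = constituent_spans(TREE)

    def is_constituent_deletion(deleted_span):
        """A deletion 'respects' the tree if its word span is exactly one constituent."""
        return deleted_span in SPANS

    print(is_constituent_deletion((3, 6)))   # True:  "on the mat" is a full PP
    print(is_constituent_deletion((2, 5)))   # False: "sat on the" crosses constituent boundaries

In the paper's terms, the reported finding is that human and LLM deletions cluster heavily on spans of the first kind rather than the second.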

"The results cannot be explained by models that rely only on word properties and word positions," wrote the authors. "Crucially, based on word strings deleted by either humans or LLMs, the underlying constituency tree structure can be successfully reconstructed."

Overall, the team's results suggest that when processing language, both humans and LLMs are guided by latent syntactic representations, specifically tree-structured sentence representations. Future studies could build on this recent work to further investigate the language representation patterns of LLMs and humans, either using adapted versions of the team's word deletion task or entirely new paradigms.

Journal Reference: Liu, W., Xiang, M. & Ding, N. Active use of latent tree-structured sentence representation in humans and large language models. Nat Hum Behav (2025). https://doi.org/10.1038/s41562-025-02297-0


Original Submission

posted by hubie on Tuesday October 28, @07:04AM   Printer-friendly
from the the-collapse-of-democracy dept.

Trump Eyes Government Control of Quantum Computing Firms With Intel-Like Deals

Donald Trump is eyeing taking equity stakes in quantum computing firms in exchange for federal funding, The Wall Street Journal reported.

At least five companies are weighing whether allowing the government to become a shareholder would be worth it to snag funding that the Trump administration has "earmarked for promising technology companies," sources familiar with the potential deals told the WSJ.

IonQ, Rigetti Computing, and D-Wave Quantum are currently in talks with the government over potential funding agreements, with minimum awards of $10 million each, some sources said. Quantum Computing Inc. and Atom Computing are reportedly "considering similar arrangements," as are other companies in the sector, which is viewed as critical for scientific advancements and next-generation technologies.

No deals have been completed yet, sources said, and terms could change as quantum-computing firms weigh the potential risks of government influence over their operations.


Original Submission

posted by hubie on Tuesday October 28, @02:18AM   Printer-friendly

ESA astronauts take to helicopters for Moon landing training:

European Space Agency (ESA) astronauts have completed a helicopter training course to prepare them for upcoming lunar landings.

The astronauts in question include Alexander Gerst, Matthias Maurer, Samantha Cristoforetti, and Thomas Pesquet.

The course consisted of one week of simulator instruction followed by two weeks of practical flying in Airbus EC135 helicopters. ESA said: "Helicopter training offers a realistic analogue for the dynamics of planetary landings, requiring capabilities such as vertical take-off and landing, terrain-based decision-making, and high levels of coordination and situational awareness."

The Apollo astronauts also honed their Moon landing skills using helicopters, although with occasional catastrophic consequences. On January 23, 1971, the Bell 47G helicopter flown by Apollo 14 backup commander Gene Cernan crashed into the Indian River lagoon near Malabar, Florida. An accident investigation board, headed by Apollo 13 commander Jim Lovell, pinned much of the blame on Cernan. He'd found the altitude difficult to judge when skimming the surface of the water and accidentally ditched the helicopter. The incident didn't stop Cernan from being the last person on the Moon on the Apollo 17 mission.

A better real-world simulator was the Lunar Landing Training Vehicle (LLTV), which featured a vertically mounted turbofan engine capable of lifting the machine – nicknamed "the flying bedstead" – to simulate the reduced lunar gravity. Astronauts spoke highly of it. Apollo 11 commander Neil Armstrong called it a "most valuable training experience." He was almost killed by its predecessor, the Lunar Landing Research Vehicle (LLRV), in 1968.

Cernan said: "Although there is nothing quite like the real thing, flying the LLTV had been a step toward realism from 'flying' the stationary simulators.

"In the LLTV you had your butt strapped to a machine that you had to land safely or you didn't make it."

ESA has yet to strap its astronauts into something as potentially hazardous as the LLTV. However, the helicopter raises some interesting questions – what does ESA expect its astronauts to use for a lunar landing?

Landing the towering Starship manually would be a challenge, while the other Human Landing System (HLS) contender from Blue Origin won't be ready until Artemis V.

Andreas Mogensen, ESA's Human Exploration Group Leader, told The Register that the helicopter training is an introductory course that will give ESA astronauts the skills and knowledge to participate in advanced helicopter courses, like NASA's HAATS, which is a requirement for participating in Artemis lunar landing missions.

The purpose of HAATS and similar courses is to train astronauts in vertical landing profiles and to recognize the visual and optical illusions that can arise from a visual environment characterized by mono-colours and stark shadows. Helicopter pilots are well-aware of these illusions, especially when flying in snow and mountain environments. The goal is thus to equip astronauts with the skills to visually monitor the descent and judge obstacles and risks, regardless of the actual vehicle used to land on the moon.


Original Submission

posted by janrinok on Monday October 27, @09:28PM   Printer-friendly
from the must-be-Thursday dept.

An approach it calls "quantum echoes" takes 13,000 times longer on a supercomputer

[...] Today, Google and a large collection of academic collaborators are publishing a paper describing a computational approach that demonstrates a quantum advantage compared to current algorithms—and may actually help us achieve something useful.

Google's latest effort centers on something it's calling "quantum echoes." The approach could be described as a series of operations on the hardware qubits that make up its machine. These qubits hold a single bit of quantum information in a superposition between two values, with probabilities of finding the qubit in one value or the other when it's measured. Each qubit is entangled with its neighbors, allowing its probability to influence those of all the qubits around it. The operations that allow computation, called gates, are ways of manipulating these probabilities. Most current hardware, including Google's, performs manipulations on one or two qubits at a time (termed one- and two-qubit gates, respectively).

[...] So how do you turn quantum echoes into an algorithm? On its own, a single "echo" can't tell you much about the system—the probabilities ensure that any two runs might show different behaviors. But if you repeat the operations multiple times, you can begin to understand the details of this quantum interference. And performing the operations on a quantum computer ensures that it's easy to simply rerun the operations with different random one-qubit gates and get many instances of the initial and final states—and thus a sense of the probability distributions involved.
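
As a loose classical analogy (our illustration, not Google's algorithm), the point of re-running the circuit with fresh random single-qubit gates is that the signal lives in the ensemble of outcomes rather than in any single run, much as repeated random trials are needed before a probability distribution becomes visible:

    # Illustration only: a distribution emerges from many repeated randomized runs,
    # not from any single run.
    import random
    from collections import Counter

    def one_run():
        # stand-in for one randomized forward/backward "echo"; returns a single outcome
        return sum(random.choice([-1, 1]) for _ in range(10))

    counts = Counter(one_run() for _ in range(10_000))
    print(sorted(counts.items()))   # only the ensemble of outcomes carries the structure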

This is also where Google's quantum advantage comes from. Everyone involved agrees that the precise behavior of a quantum echo of moderate complexity can be modeled using any leading supercomputer. But doing so is very time-consuming, so repeating those simulations a few times becomes unrealistic. The paper estimates that a measurement that took its quantum computer 2.1 hours to perform would take the Frontier supercomputer approximately 3.2 years. Unless someone devises a far better classical algorithm than what we have today, this represents a pretty solid quantum advantage.
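
The "13,000 times longer" figure in the headline follows directly from those two numbers:

    # 2.1 hours on the quantum processor vs. ~3.2 years on Frontier (figures from the paper)
    quantum_hours = 2.1
    frontier_hours = 3.2 * 365.25 * 24      # roughly 28,000 hours
    print(frontier_hours / quantum_hours)   # ~13,400, i.e. the "13,000 times" in the headline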

But is it a useful algorithm? The repeated sampling can act a bit like the Monte Carlo sampling done to explore the behavior of a wide variety of physical systems. Typically, however, we don't view algorithms as modeling the behavior of the underlying hardware they're being run on; instead, they're meant to model some other physical system we're interested in. That's where Google's announcement stands apart from its earlier work—the company believes it has identified an interesting real-world physical system with behaviors that the quantum echoes can help us understand.

[...] For now, the team stuck to demonstrations on very simple molecules, making this work mostly a proof of concept. But the researchers are optimistic that there are many ways the system could be used to extract structural information from molecules at distances that are currently unobtainable using NMR. They list a lot of potential upsides that should be explored in the discussion of the paper, and there are plenty of smart people who would love to find new ways of using their NMR machines, so the field is likely to figure out pretty quickly which of these approaches turns out to be practically useful.

The fact that the demonstrations were done with small molecules, however, means that the modeling run on the quantum computer could also have been done on classical hardware (it only required 15 hardware qubits). So Google is claiming both quantum advantage and quantum utility, but not at the same time. The sorts of complex, long-distance interactions that would be out of range of classical simulation are still a bit beyond the reach of the current quantum hardware. O'Brien estimated that the hardware's fidelity would have to improve by a factor of three or four to model molecules that are beyond classical simulation.

The quantum advantage issue should also be seen as a work in progress. Google has collaborated with enough researchers at enough institutions that there's unlikely to be a major improvement in algorithms that could allow classical computers to catch up. Until the community as a whole has some time to digest the announcement, though, we shouldn't take that as a given.

The other issue is verifiability. Some quantum algorithms will produce results that can be easily verified on classical hardware—situations where it's hard to calculate the right result but easy to confirm a correct answer. Quantum echoes isn't one of those, so we'll need another quantum computer to verify the behavior Google has described.

Journal: "Observation of constructive interference at the edge of quantum ergodicity", Nature, 2025. DOI: 10.1038/s41586-025-09526-6


Original Submission

posted by janrinok on Monday October 27, @06:59PM   Printer-friendly
from the Something-completely-different dept.

No tech, no snark, no politics, just good pictures

Your day is about to get a lot better! After so much anticipation, the Nikon Comedy Wildlife Awards entry finalists have finally been revealed, and they are great. They're hilarious. Witty. Dynamic. And they're inspiring us to pick up the camera, too.

Today, we're featuring the finalist photos in all their glory, so scroll down to add a bit of humor and sunshine to your life. If anyone you know needs their spirits picked up, be sure to send them this way.


Original Submission

posted by janrinok on Monday October 27, @04:41PM   Printer-friendly

TechCrunch

New AI-powered web browsers such as OpenAI's ChatGPT Atlas and Perplexity's Comet are trying to unseat Google Chrome as the front door to the internet for billions of users. A key selling point of these products is their web-browsing AI agents, which promise to complete tasks on a user's behalf by clicking around on websites and filling out forms.

But consumers may not be aware of the major risks to user privacy that come along with agentic browsing, a problem that the entire tech industry is trying to grapple with.

Cybersecurity experts who spoke to TechCrunch say AI browser agents pose a larger risk to user privacy compared to traditional browsers. They say consumers should consider how much access they give web browsing AI agents, and whether the purported benefits outweigh the risks.

[...] There are a few practical ways users can protect themselves while using AI browsers. Rachel Tobac, CEO of the security awareness training firm SocialProof Security, tells TechCrunch that user credentials for AI browsers are likely to become a new target for attackers. She says users should ensure they're using unique passwords and multi-factor authentication for these accounts to protect them.

Tobac also recommends that users consider limiting what these early versions of ChatGPT Atlas and Comet can access, and siloing them from sensitive accounts related to banking, health, and personal information. Security around these tools will likely improve as they mature, and Tobac recommends waiting before giving them broad control.

Based on these concerns, would you use such browsers?


Original Submission

posted by janrinok on Monday October 27, @11:55AM   Printer-friendly
from the yes-I-am-working-why-do-you-ask dept.

An Anonymous Coward has submitted the following:

A December update to Microsoft Teams will reportedly add a location-tracking feature that is disabled by default. When enabled, it will allow bosses to tell whether an employee is in the office or working from home and set their status accordingly. It will also be able to tell if the user is not at their normal home logon location, providing employers with evidence of the user's whereabouts. Workers who have been taking mini holidays while claiming to be working from home may be affected by this new feature.

The idea of the new feature is to eliminate confusion for bosses about where a worker is within the building and to see if they are working remotely.

But those who work from home argue it is an invasion of privacy.

"Micro management at peak? All online work doesn't need you to be in the office, we can do it from home," one X user said.

"Why is this needed?" another added.

Almost half of Gen Z workers surveyed (44 per cent) revealed last year that they took a secret trip, with most giving their workplace the impression they were working normal hours and using a virtual background in meetings to trick their employer.

Ella Maree, 26, started hush-tripping after Covid when her corporate workplace adopted a 3:2 work week, which meant she could work from home on Mondays and Fridays.

"Since travel options were limited, hush trips became my go-to choice," she said.

"I flew out Thursday evening and worked by the hotel pool, restaurant and room on Friday. I maintained the same level of productivity as if I were physically in the office or working from home, so really, a win-win situation.

"Most of my office work from home Friday, so really, I'm just making the most of our remote work flexibility."

Ms Maree insisted her boss "wouldn't mind" given workplaces are mostly connected online and that she was always getting her work done.

How many Soylentils still have the ability to WFH, either full-time or part-time? I thought one of the attractions of WFH is the ability to work when the hours suit you and not the standard 9-5 (for non-Usians). Would you consider working from a different location a breach of your contract?


Original Submission

posted by mrpg on Monday October 27, @07:11AM   Printer-friendly
from the nice dept.

Alibaba Cloud says it cut Nvidia AI GPU use by 82% with new pooling system:

Alibaba Cloud claims its new Aegaeon pooling system reduced the number of Nvidia GPUs required to serve large language models by 82% during a multi-month beta test inside its Model Studio marketplace. The result, published in a peer-reviewed paper presented at the 2025 ACM Symposium on Operating Systems Principles (SOSP) in Seoul, suggests that cloud providers may be able to extract significantly more inference capacity from existing silicon, especially in constrained markets like China, where the supply of Nvidia's latest H20s remains limited.

Unlike training-time breakthroughs that chase model quality or speed, Aegaeon is an inference-time scheduler designed to maximize GPU utilization across many models with bursty or unpredictable demand. Instead of pinning one accelerator to one model, Aegaeon virtualizes GPU access at the token level, allowing it to schedule tiny slices of work across a shared pool. This means one H20 could serve several different models simultaneously, with system-wide “goodput” — a measure of effective output — rising by as much as nine times compared to older serverless systems.
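
A toy sketch of the scheduling idea (purely illustrative; the names and structure below are ours, not Alibaba's Aegaeon code): instead of dedicating a GPU to one model, a token-level scheduler interleaves single decode steps from requests that target different models on the same device, so bursty or idle models no longer strand hardware. The real system would also have to swap model weights and KV caches between slices, which is presumably where most of the engineering effort goes.

    from collections import deque

    class Request:
        def __init__(self, model, max_new_tokens):
            self.model = model                  # which LLM this request is for
            self.generated = 0
            self.max_new_tokens = max_new_tokens

    def decode_one_token(request):
        # stand-in for a single forward pass of request.model on the shared GPU
        request.generated += 1

    def token_level_scheduler(requests):
        """Round-robin one decode step at a time across requests for different models."""
        queue = deque(requests)
        while queue:
            req = queue.popleft()
            decode_one_token(req)               # one token-sized slice of GPU work
            if req.generated < req.max_new_tokens:
                queue.append(req)               # re-queue until the request is finished

    reqs = [Request("model-a-72b", 3), Request("model-b-7b", 2), Request("model-c-7b", 1)]
    token_level_scheduler(reqs)
    print([(r.model, r.generated) for r in reqs])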

The system was tested in production over several months, according to the paper, which lists authors from both Peking University and Alibaba’s infrastructure division, including CTO Jingren Zhou. During that window, the number of GPUs needed to support dozens of different LLMs — ranging in size up to 72 billion parameters — fell from 1,192 to just 213.
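
For reference, the 82% figure is simply the relative drop in GPU count over that window:

    before, after = 1192, 213
    print(f"{(before - after) / before:.1%}")   # 82.1%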


Original Submission