An approach it calls "quantum echoes" takes 13,000 times longer on a supercomputer
[...] Today, Google and a large collection of academic collaborators are publishing a paper describing a computational approach that demonstrates a quantum advantage compared to current algorithms—and may actually help us achieve something useful.
Google's latest effort centers on something it's calling "quantum echoes." The approach can be described as a series of operations on the hardware qubits that make up its machine. These qubits each hold a single bit of quantum information in a superposition between two values, with some probability of finding the qubit in one value or the other when it's measured. Each qubit is entangled with its neighbors, allowing its probability to influence those of all the qubits around it. The operations that allow computation, called gates, are ways of manipulating these probabilities. Most current hardware, including Google's, performs manipulations on one or two qubits at a time (termed one- and two-qubit gates, respectively).
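As a rough illustration (not Google's actual gate set), here is what that looks like in the state-vector picture: a one-qubit gate puts a qubit into superposition, and a two-qubit gate entangles it with a neighbor so their measurement outcomes become correlated.

# Minimal sketch (illustrative only, not Google's gates): one- and two-qubit
# gates as unitary matrices acting on a two-qubit state vector.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # one-qubit gate: Hadamard
CNOT = np.array([[1, 0, 0, 0],                  # two-qubit gate: controlled-NOT
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

state = np.array([1, 0, 0, 0], dtype=complex)   # two qubits, both |0>
state = np.kron(H, np.eye(2)) @ state           # put the first qubit in superposition
state = CNOT @ state                            # entangle it with the second qubit

probs = np.abs(state) ** 2                      # probabilities of each measurement outcome
print(dict(zip(["00", "01", "10", "11"], probs.round(3))))
# {'00': 0.5, '01': 0.0, '10': 0.0, '11': 0.5} -- the two qubits are now correlated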
[...] So how do you turn quantum echoes into an algorithm? On its own, a single "echo" can't tell you much about the system—the probabilities ensure that any two runs might show different behaviors. But if you repeat the operations multiple times, you can begin to understand the details of this quantum interference. And performing the operations on a quantum computer ensures that it's easy to simply rerun the operations with different random one-qubit gates and get many instances of the initial and final states—and thus a sense of the probability distributions involved.
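A toy classical simulation sketches the shape of that protocol, using a made-up forward circuit and perturbation rather than Google's actual circuits: run a circuit forward, disturb one qubit, run the circuit in reverse, and repeat with random one-qubit rotations to sample how strongly the state "echoes" back.

# Toy sketch of the echo protocol's shape (assumptions: a random 4-qubit circuit
# stands in for the real one; the perturbation is a single bit flip).
import numpy as np

rng = np.random.default_rng(0)
n = 4                                    # toy system: 4 qubits
dim = 2 ** n

def random_unitary(d):
    """Haar-ish random unitary via QR decomposition of a Gaussian matrix."""
    z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

U = random_unitary(dim)                  # stand-in for the forward circuit
X = np.array([[0, 1], [1, 0]])           # perturbation: flip one qubit
perturb = np.kron(X, np.eye(dim // 2))   # acts on the first qubit only

echoes = []
for _ in range(200):                     # many runs with different random one-qubit gates
    theta = rng.uniform(0, 2 * np.pi)
    rz = np.diag([1, np.exp(1j * theta)])        # random one-qubit phase gate
    pre = np.kron(rz, np.eye(dim // 2))
    psi0 = np.zeros(dim, dtype=complex); psi0[0] = 1
    psi = U.conj().T @ perturb @ U @ pre @ psi0  # forward, perturb, then reverse
    echoes.append(abs(np.vdot(pre @ psi0, psi)) ** 2)

print(np.mean(echoes))                   # average echo strength over the samples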
This is also where Google's quantum advantage comes from. Everyone involved agrees that the precise behavior of a quantum echo of moderate complexity can be modeled using any leading supercomputer. But doing so is very time-consuming, so repeating those simulations a few times becomes unrealistic. The paper estimates that a measurement that took its quantum computer 2.1 hours to perform would take the Frontier supercomputer approximately 3.2 years. Unless someone devises a far better classical algorithm than what we have today, this represents a pretty solid quantum advantage.
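For reference, the headline factor of roughly 13,000 follows directly from those two figures:

# Back-of-envelope check of the headline speedup, using the paper's reported figures:
quantum_hours = 2.1                        # quantum computer runtime
classical_hours = 3.2 * 365.25 * 24        # ~3.2 years on Frontier, in hours
print(round(classical_hours / quantum_hours))   # ~13,000x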
But is it a useful algorithm? The repeated sampling can act a bit like the Monte Carlo sampling done to explore the behavior of a wide variety of physical systems. Typically, however, we don't view algorithms as modeling the behavior of the underlying hardware they're being run on; instead, they're meant to model some other physical system we're interested in. That's where Google's announcement stands apart from its earlier work—the company believes it has identified an interesting real-world physical system with behaviors that the quantum echoes can help us understand.
[...] For now, the team stuck to demonstrations on very simple molecules, making this work mostly a proof of concept. But the researchers are optimistic that there are many ways the system could be used to extract structural information from molecules at distances that are currently out of reach for NMR. The paper's discussion lists a lot of potential upsides that should be explored, and there are plenty of smart people who would love to find new ways of using their NMR machines, so the field is likely to figure out pretty quickly which of these approaches turns out to be practically useful.
The fact that the demonstrations were done with small molecules, however, means that the modeling run on the quantum computer could also have been done on classical hardware (it only required 15 hardware qubits). So Google is claiming both quantum advantage and quantum utility, but not at the same time. The sorts of complex, long-distance interactions that would be out of range of classical simulation are still a bit beyond the reach of the current quantum hardware. O'Brien estimated that the hardware's fidelity would have to improve by a factor of three or four to model molecules that are beyond classical simulation.
The quantum advantage issue should also be seen as a work in progress. Google has collaborated with enough researchers at enough institutions that there's unlikely to be a major improvement in algorithms that could allow classical computers to catch up. Until the community as a whole has some time to digest the announcement, though, we shouldn't take that as a given.
The other issue is verifiability. Some quantum algorithms will produce results that can be easily verified on classical hardware—situations where it's hard to calculate the right result but easy to confirm a correct answer. Quantum echoes isn't one of those, so we'll need another quantum computer to verify the behavior Google has described.
Journal: "Observation of constructive interference at the edge of quantum ergodicity", Nature, 2025. DOI: 10.1038/s41586-025-09526-6
(Score: 2) by stormreaver on Tuesday October 28, @12:03AM
I would not be the least bit surprised if someone soon publishes an article describing how they outperformed this using a regular computer, or showing that Google has once again misrepresented its results.
(Score: 3, Touché) by JoeMerchant on Tuesday October 28, @12:11AM (17 children)
You input a command, and you've got an 80% chance of it coming back like you asked for.
Now, build a system around that that detects the 20% errors, corrects, tries again... and remember: this error correcting logic also only works as expected 80% of the time...
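A quick back-of-envelope simulation, using the comment's 80% figures rather than anything from the paper, shows what that buys you: most accepted results are right, but a meaningful fraction still slips through.

# Toy numbers from the comment (not from the paper): an operation that is right
# 80% of the time, checked by a verifier that is itself only right 80% of the time.
# What fraction of *accepted* results are actually correct?
import random

random.seed(0)
P_OP, P_CHECK = 0.8, 0.8
accepted = correct_and_accepted = 0
for _ in range(1_000_000):
    op_ok = random.random() < P_OP                 # did the operation succeed?
    check_ok = random.random() < P_CHECK           # does the checker judge correctly?
    says_pass = op_ok if check_ok else not op_ok   # a wrong checker inverts the verdict
    if says_pass:
        accepted += 1
        correct_and_accepted += op_ok
print(correct_and_accepted / accepted)   # ~0.94: better than 0.8, but far from certainty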
🌻🌻🌻 [google.com]
(Score: 4, Interesting) by anubi on Tuesday October 28, @02:27AM (2 children)
I still try to understand this
My old analog computer could run certain multi variable nonlinear differential equations with dynamic input far, far, far faster than the "big iron" at my university...but it was often in error due to various "sensitive" spots such as dividing by zero, and DC drifts, even when freshly tweaked, due to component sensitivity to temperature. Capacitive dielectric soakage also presented a challenge.
I still often use analog designs as front-ends to digitizers to take advantage of the analog continuous sampling over the digital sampling window. Dual-slope and sigma-delta are a couple of implementations of analog-digital hybrid.
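For the curious, a rough sketch of the sigma-delta idea (illustrative only, not any particular ADC): an integrator plus a one-bit comparator, with the average of the output bitstream tracking the slowly varying analog input.

# First-order sigma-delta modulator sketch: integrate the difference between the
# input and the fed-back output bit, quantize to one bit, then low-pass filter
# the bitstream to recover the input.
import numpy as np

def sigma_delta(signal):
    integrator, bits = 0.0, []
    for x in signal:                       # x assumed in the range [-1, 1]
        integrator += x - (1.0 if bits and bits[-1] else -1.0)
        bits.append(integrator >= 0.0)     # 1-bit quantizer
    return np.array(bits, dtype=float) * 2 - 1   # map {0,1} -> {-1,+1}

t = np.linspace(0, 1, 10_000)
analog_in = 0.5 * np.sin(2 * np.pi * 3 * t)      # slow sine, well below the bit rate
bitstream = sigma_delta(analog_in)
recovered = np.convolve(bitstream, np.ones(200) / 200, mode="same")  # crude decimation filter
print(float(np.corrcoef(analog_in, recovered)[0, 1]))   # close to 1.0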
Quantum may be fast, but using the wrong tool for the wrong job is quite inefficient. I would not want to balance my checkbook with an analog computer, and its memory drifts too much to remain accurate for more than a few seconds. Maybe minutes with really good integrator capacitors and ultra low input offset current analog instrumentation amplifiers. Long enough to get an analog pen-plot out.
I am quite confident that their reported speeds are realistic, but, like my analog subsystems, is the output sufficiently precise to be useful?
"Prove all things; hold fast that which is good." [KJV: I Thessalonians 5:21]
(Score: 4, Touché) by shrewdsheep on Tuesday October 28, @09:50AM (1 child)
That's the chance you take when you turn on any Windows computer.
(Score: 2, Informative) by anubi on Tuesday October 28, @10:12AM
My biggest problem with the later windows is:
" Will this thing still work like it did yesterday? "
" Whats going on with all that internet gossip?
Are they talking about me behind my back? "
" Make sure I am not agreeing to something!
I don't have time to read all that legalese! "
And...
" Oh shit!!! Another gawddam dialog box!
what does this one want? "
"Prove all things; hold fast that which is good." [KJV: I Thessalonians 5:21]
(Score: 1) by khallow on Tuesday October 28, @02:27AM (12 children)
(Score: 5, Interesting) by JoeMerchant on Tuesday October 28, @03:09AM (11 children)
You know, a year ago when the hallucination rate was just under 50%, the stack of turtles was infinitely deep.
Where it stands now, you can give it a 1000 line chunk of problem and it gets that right 80 to 95% of the time. As long as you can organize the problem chunks small enough, and define their pass/fail criteria clearly enough, the turtle stacks are almost short enough to not fall over every time.
What I'm seeing is: when I apply the software quality / mature processes that have been developed over the past 30ish years to the AI code-generation process, I'm building significant systems that actually work within a week or two, just me and a $50/week AI partner. Those same systems would take months to finish in my corporate den of a half dozen meatbags.
🌻🌻🌻 [google.com]
(Score: 2, Informative) by anubi on Tuesday October 28, @10:47AM (2 children)
I have been experimenting with Claude and GPT to write some x86 assembly and C++ compatible with my legacy Borland compilers. Mostly TSRs, device drivers, and legacy DOS to Arduino I2C commlinks (bit-banging the registers of the PC legacy parallel port to mimic my Arduino I2C interfaces and bit ports).
Yes, I still use GWBasic a lot, as it's an interpreted language and easy to make a lot of small changes when dealing with register setups and binary algorithms of interface chips.
So far it's been helpful, but I have outgrown the free version and need more work on my prompt engineering skills.
I used to work with guys who knew all this stuff, but since the late '80s, management got us all compartmentalized / standalone / "individual responsibility," and it became glaringly obvious to me what "teamwork" was all about once it was gone. But their "merit ranking" was now much easier, since the people who knew how to do something were now doing it instead of showing everyone else how it's done. A lot of very inefficient learn-by-trial-and-error, the hard way, replaced simple queries under the new leadership-skill-based management paradigms.
"Prove all things; hold fast that which is good." [KJV: I Thessalonians 5:21]
(Score: 2) by JoeMerchant on Tuesday October 28, @10:12PM (1 child)
Yeah, sad how that goes.
The intertubes should be relatively full of GWBasic and similar stuff, if you know where to dig, so claude has a fighting chance.
As for outgrowing "free" - yeah, that happened for me in early July, I paid the $20 per month for 3 months, then I got serious about re-implementing my beast of a side project about a week and a half ago and I went in for the full $200 subscription. Paid mode really is better... at the $200 level I can bang away as much as I want and just barely use up the weekly allowance. You can start trying interesting things without worrying about "spending all your tokens". I don't think the $100 subscription would work for me, I blow past 25% of my weekly allowance in just a couple of days, and the $100 plan is reputed to be 1/4th the available resources.
When I'm past my side project binge, I'll definitely scale back to $20 or free. Work finally got me an "approved for use with company code" Cursor API key, of course our IT are screwing it up here and there with net-nanny filters randomly killing the connection - something that never happens on my home claude code work. But, they want us to work "through the system" not around it, and I'm paid the same every 2 weeks - not by merit or productivity - so...
🌻🌻🌻 [google.com]
(Score: 2, Informative) by anubi on Wednesday October 29, @03:03AM
They used to run the major aerospace company I joined that way. We had dozens of people on the payroll, many well past retirement age... because they knew many useful things and provided the rest of us their insight into how to approach what we needed to do.
The company was purchased by a financial conglomerate that made many things. On the day of switchover, the employees wore black arm bands.
We knew.
It was much like being told a loved one had cancer.
A terminal disease. And we had it.
Same as in today's educational system...what is it? A dozen or so administrative support people for every teacher? And all ours did was impede us from getting anything done, mandating shortcuts that invariably sped up delivery milestones but generated massive problems for our customer. Pleasing our customer became secondary to litigation skills. Our reputation was soon shot beyond repair. But a few people made a lot of money in the melee.
The rest of us had to find new jobs.
"Prove all things; hold fast that which is good." [KJV: I Thessalonians 5:21]
(Score: 5, Insightful) by canopic jug on Tuesday October 28, @01:21PM (7 children)
Where it stands now, you can give it a 1000 line chunk of problem and it gets that right 80 to 95% of the time.
That's true only if you are fine with stolen code. The thing LLMs do best is strip licensing and attribution from Free and Open Source Software project repositories. Since the output of LLMs cannot be copyrighted, the project the LLM ripped the code from gets no credit, because the attribution has been stripped, and the developer can't work with the upstream, because the upstream is now unknown. Contribution to upstream development is thus impossible after code has passed through an LLM, even where the developer has both the will and the skill to contribute. So the new generations of developers are cut off from projects they could work with or contribute to, and vice versa. That separation holds regardless of how much the developers might want to help: they just plain can't find which project(s) the code came from, so they can't make contact. There have been some excellent, detailed blog posts on that conundrum lately.
Be that as it may, the result harms Free and Open Source Software by cutting projects off from the supply of new developers.
Furthermore, vibe coding turns development tasks into the drudgery of debugging legacy code written by an inept coder who had no understanding of what they were really doing or what the real goals were. Yes, LLM output is by definition legacy code: it is code which exists but no one actually knows how or why it works or what it really does.
Then as you already know, the first 80% to 95% of the coding is the easy part. The hard part is the other 80% to 95% of the coding.
Money is not free speech. Elections should not be auctions.
(Score: 4, Insightful) by aafcac on Tuesday October 28, @08:51PM (3 children)
There's also the issue that since it doesn't understand the code, and the person asking it to write the code probably doesn't understand the code, there's no telling what sort of issues there are in the code. And, in such a situation, the person requesting the code likely has no idea how to audit the code or find those issues. I wouldn't be surprised if bad actors start to deliberately open source questionable code with hidden issues just to pollute the LLMs' data set.
Personally, I think that if somebody needs code and can't be bothered to either program it themselves or get somebody else to do it, the next best option is one of those no-code tools. Sure, you're sort of coding it, but the barrier to entry on those tools is rather low and you mostly just need the ability to work out what needs to be done and let the tool help guide how it's actually done. Things like MIT's App Inventor can be rather helpful for just about all situations where AI programming is at all acceptable.
(Score: 3, Interesting) by JoeMerchant on Tuesday October 28, @11:30PM (2 children)
> there's no telling what sort of issues there are in the code.
TDD helps a lot with this, but there's no substitute for code review.
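For instance, a minimal test-first sketch (the function and its behavior here are invented purely for illustration): the pass/fail criterion exists before the implementation, so whatever the AI writes has to clear it.

# Test-first sketch: the test pins down the expected behavior; the body below
# is the sort of thing you would then have the AI (or yourself) fill in.
def dedupe_preserving_order(items):
    """Target implementation -- initially left for the AI to write."""
    seen = set()
    return [x for x in items if not (x in seen or seen.add(x))]

def test_dedupe_preserving_order():
    assert dedupe_preserving_order([3, 1, 3, 2, 1]) == [3, 1, 2]
    assert dedupe_preserving_order([]) == []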
If you don't keep adequate tabs on what the AI is coding for you along the way, you get what you deserve.
But you have to consider baseline reality: shops all over the world hire developers with a six month turnover, and nobody reviews anything... at least AI doesn't have an axe to grind about how unfair its life is.
🌻🌻🌻 [google.com]
(Score: 3, Insightful) by aafcac on Wednesday October 29, @12:11AM (1 child)
That speaks more to the incompetence of the managers and horrible conditions than anything else. Replacing the developers with AI is just going to make that all worse.
(Score: 3, Informative) by JoeMerchant on Wednesday October 29, @12:39AM
>That speaks more to the incompetence of the managers
I agree wholeheartedly.
The glimmer of hope I'm seeing in all of this is: managers who do a good job managing AI are likely going to start outperforming bad software managers of all kinds of coders by a very very wide margin.
IMO, this week at least, if you don't know how to manage software development you're not going to get much value out of AI code.
On the other hand, I feel like a lot of the stuff I'm doing is going to get baked into the AI coding tools within the next six months. Right now, I'm spending half my time "teaching" the AI to chunk problems into manageable bits, plan execution in manageable little chunk phases, write a formal plan detailing those chunks before starting to write code, etc. etc. The interesting part to me is: I mostly did this by instructing the AI to research current best practices for context management and write development workflows for itself to follow, and the results are shockingly powerful so far.
🌻🌻🌻 [google.com]
(Score: 3, Informative) by JoeMerchant on Tuesday October 28, @10:18PM (2 children)
>That's true only if you are fine with stolen code. The thing that the LLMs do best is strip licensing and attribution from Free and Open Source Software project repositories.
I'll have to disagree there. I'm sure they do a bit, just as I do when I'm researching a problem. I watched claude explicitly copy the axum tower mode pattern from the axum official demo code - that's hardly theft, they're providing it for you. What I had claude doing with that tower mode wasn't anything you're likely to find in open source, and what I'm doing behind that API you definitely won't find in open source. Bits and pieces of it, sure, but you might try reading Melancholy Elephants before you get too outraged...
As for copying example code snippets and rearranging them, that's what I've been doing my whole life - ever since copying examples out of Compute! magazine in the 1980s. I did it heavily from the Qt Demo apps in the 2006-7 timeframe, until I finally developed my own preferred patterns from them, then I copied my own stuff relentlessly whenever it applied.
If you're sitting down at your code editor and writing every single word of your code "blind" without copying good examples of how those words are strung together for the desired effect, you're doing it wrong, re-inventing the wheel, and kicking the giants in the shins instead of getting the lift they offer freely.
🌻🌻🌻 [google.com]
(Score: 2) by canopic jug on Wednesday October 29, @04:24AM (1 child)
As for copying example code snippets and rearranging them, that's what I've been doing my whole life - ever since copying examples out of Compute! magazine in the 1980s. I did it heavily from the Qt Demo apps in the 2006-7 timeframe, until I finally developed my own preferred patterns from them, then I copied my own stuff relentlessly whenever it applied.
Yes, and as a result you know the provenance of the code and which project to feed improvements and bug reports back to, should you choose to do so. But the choice is yours, negative or affirmative. That is completely unlike working with LLM output which is by and large stripped of that information, so the choice has been made for you: negative.
Money is not free speech. Elections should not be auctions.
(Score: 2) by JoeMerchant on Wednesday October 29, @01:10PM
>you know the provenance of the code and which project to feed improvements and bug reports back to, should you choose to do so. But the choice is yours, negative or affirmative. That is completely unlike working with LLM output which is by and large stripped of that information, so the choice has been made for you: negative
I'm finding that AI implementations are even more sensitive to "chunk size" considerations than meatbag projects. Meatbags tend to extremes: they will write something line by line, or they will lift a whole project or a whole module of a project and tweak it to taste. When lifting a whole project, that's when bug reports are most helpful for all involved.
AI, on the other hand, at least as I have been using it, tends to pull smaller chunks from "the corpus of available examples," which are much less likely to lead to useful bug findings. If you're an "affirmative" choice developer, which I am maybe once every few years when I find something of potentially high value, you can instruct your AI workflows to report those bugs that it finds in the process of development, but if the AI is working at a smallish chunk size it's going to lack a lot of context about whether the "bugs" it finds are really important to the larger projects it is finding them in.
I have been asking a number of "can Cursor do X" questions lately, and depending on the answer sometimes we end up with "no, but you should suggest that to the cursor team on their forums" - and, depending on which tool I'm using for the conversation, it can go as far as drafting the feature request and finding the spot on the forum where I can best post it.
Now, if you're like most LLM users (and most programmers) "Ain't nobody got time for that!" so, yeah, they will just roll on in the negative column, ignoring whatever the GPL or other license agreements require.
🌻🌻🌻 [google.com]
(Score: 4, Interesting) by RamiK on Tuesday October 28, @12:58PM
Glitches in on-chip caches are so common that there are papers discussing how to reduce the latency incurred by the error correction: https://passat.crhc.illinois.edu/hpca_15_cam.pdf [illinois.edu]
DDR5 SDRAM and LPDDR5 similarly incorporate ECC on-die to account for built-in error rates.
We cache miss... We speculate... We use sensors and user input... In real time too...
It's not a new problem, and the worst-case scenario is probably just some computation redundancy (2-out-of-3? 3 or 4 out of 5?) that would still keep the costs well below classical circuits.
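A quick sketch of how that 2-out-of-3 voting scales, with made-up per-unit error rates just to show the trend:

# Majority-vote redundancy: probability the voted result is wrong, i.e. that at
# least k of n independent units fail, for an assumed per-unit error rate p.
import math

def majority_error(p, n=3, k=2):
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

for p in (0.1, 0.01, 0.001):
    print(p, majority_error(p))   # e.g. p=0.001 -> ~3e-6: errors drop roughly quadratically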
compiling...