
posted by janrinok on Thursday May 24 2018, @02:32AM   Printer-friendly
from the well,-that-cleared-that-up dept.

A team of physicists from ICTP-Trieste and IQOQI-Innsbruck has come up with a surprisingly simple idea to investigate quantum entanglement of many particles. Instead of digging deep into the properties of quantum wave functions - which are notoriously hard to experimentally access - they propose to realize physical systems governed by the corresponding entanglement Hamiltonians. By doing so, entanglement properties of the original problem of interest become accessible via well-established tools. This radically new approach could help to improve understanding of quantum matter and open the way to new quantum technologies.

Quantum entanglement forms the heart of the second quantum revolution: it is a key characteristic used to understand forms of quantum matter, and a key resource for present and future quantum technologies.

Physically, entangled particles cannot be described as individual particles with defined states, but only as a single system. Even when the particles are separated by a large distance, changes in one particle also instantaneously affect the other particle(s). The entanglement of individual particles - whether photons, atoms or molecules - is part of everyday life in the laboratory today.

The physicists turn the concept of quantum simulation upside down by no longer simulating a certain physical system in the quantum simulator, but directly simulating its entanglement Hamiltonian operator, whose spectrum of excitations immediately relates to the entanglement spectrum.

"Instead of simulating a specific quantum problem in the laboratory and then trying to measure the entanglement properties, we propose simply turning the tables and directly realizing the corresponding entanglement Hamiltonian, which gives immediate and simple access to entanglement properties, such as the entanglement spectrum," explains Marcello Dalmonte. "Probing this operator in the lab is conceptually and practically as easy as probing conventional many-body spectra, a well-established lab routine." Furthermore, there are hardly any limits to this method with regard to the size of the quantum system.
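A brief aside on terminology, not stated in the summary itself but standard in the literature: the entanglement Hamiltonian \(\tilde{H}_A\) of a subsystem \(A\) is defined through the reduced density matrix obtained by tracing out the rest of the system \(B\):

```latex
\rho_A = \mathrm{Tr}_B \, \rho = \frac{e^{-\tilde{H}_A}}{Z},
\qquad Z = \mathrm{Tr}\, e^{-\tilde{H}_A}
```

The "entanglement spectrum" is then simply the eigenvalue spectrum of \(\tilde{H}_A\), so a physical system whose ordinary Hamiltonian equals \(\tilde{H}_A\) exposes that spectrum to standard spectroscopic measurement.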

This could also allow the investigation of entanglement spectra in many-particle systems, a problem that is notoriously challenging to address with classical computers. Dalmonte, Vermersch and Zoller describe the radically new method in a current paper in Nature Physics and demonstrate its concrete realization on a number of experimental platforms, such as atomic systems, trapped ions and also solid-state systems based on superconducting quantum bits.


Original Submission

 
  • (Score: 1, Interesting) by Anonymous Coward on Thursday May 24 2018, @10:48AM

    (NOTE: I'm using stuff here to denote mass and energy since AFAIK we never actually came up with a word for it despite mass and energy being equivalent. I'm also using particle to mean anything not a wave and not empty space.)

    What we're finding more and more often is that everything is already entangled.

    This makes sense if you look at the pre-big bang as a point wherein all the stuff of the universe was sharing a single quantum state, i.e. a shared or universal wave function.

    The big bang itself would then have been the initial configuration prior to decoherence, a process that continues to this very day.

    You, me, every event that has ever occurred or ever will occur: all of it is the universe making progress toward solving its universal wave function by breaking it down into smaller and smaller components, much like a programmer breaks a large program down into smaller and finer-grained functions until all the outliers and edge cases are resolved.

    In this case, time's arrow appears to be driven by the act of collapsing, i.e. solving, the wave functions on whatever substrate we're being computed on.

    In this viewpoint, the reason time slows in the presence of gravity derives from the fact that more stuff is observing the state of other stuff as it evolves; because that stuff is closer together, there is more work for whatever substrate we're running on to do. If you've ever experienced lag in a video game as you pan across an area with a large number of animated objects, such as players, this is pretty much the same principle.

    It could be that the entirety of our universe will eventually be found to be a cellular automaton with fundamentally simple rules and a singular number (seed) or handful of numbers (fundamental constants) that are being solved against by evolving from an initial state. Stephen Wolfram posits this in several of his works... http://www.stephenwolfram.com/publications/academic/?cat=cellular-automata [stephenwolfram.com]
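    To make the "simple rules plus a seed" idea concrete: Wolfram's elementary cellular automata are exactly that kind of system. A single rule number (0-255) and one initial row fully determine the entire evolution. A minimal sketch in Python (rule 110 is chosen here purely for illustration; it is the rule famously shown to be Turing-complete):

```python
# Elementary cellular automaton: one rule number (0-255) plus one seed row
# fully determine the evolution. Simple rules, complex behavior.

def step(cells, rule):
    """Advance one generation; each cell's new state is the bit of `rule`
    indexed by its 3-cell neighborhood (with wraparound at the edges)."""
    n = len(cells)
    return [
        (rule >> ((cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def evolve(seed, rule, generations):
    """Return the list of rows produced from `seed`, including the seed."""
    rows = [seed]
    for _ in range(generations):
        rows.append(step(rows[-1], rule))
    return rows

if __name__ == "__main__":
    seed = [0] * 31
    seed[15] = 1                        # a single "on" cell as the seed
    for row in evolve(seed, 110, 15):
        print("".join("#" if c else "." for c in row))
```

    The whole history is recoverable from two integers, the rule and the seed, which is the flavor of "a singular number being solved against by evolving from an initial state" described above.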

    I don't know that I buy into Simulation Theory arguments like that. Obviously our universe is computational in nature; this is evidenced by the fact that we are able to build computers in this universe and, lo and behold, they compute correctly (barring the stray cosmic ray here and there).

    But the fact that certain configurations/states of matter can and do compute, and that certain configurations/states of matter can be and are conscious (you and me, for example), does not imply that there is de facto a simulation running. Simulation is computation, but it is computation with a purpose. Purpose implies a designer, and therefore, to my mind, Simulation Theory is merely appealing to a universal engineer or programmer, i.e. God.

    Yet what purpose does the rock you look down at as you walk down the road serve? And yet a rock can be viewed as a configuration of matter, currently computing a state you're calling "rock". However this is only because you've observed it to be a rock.

    Thousands of years ago that same rock may have been part of a larger configuration you would have called "mountain".
    It has decohered from the mountain state to the rock state and given several thousand to several million years it will likely decohere to dust.

    This is all ivory tower academic stuff of course.

    There is no difference between saying a configuration of atoms IS a rock and a configuration of atoms is a computer, computing something your senses interpret as rock. You yourself are a collection of atoms that are computing your body and your conscious mind.

    You are a state or configuration of matter that both computes (quickly what is 41+1?) and is conscious (answer was 42). But really you're only conscious because disparate regions of your brain are aware of the state of other regions of the brain. These regions are fed electrical impulses by your senses and then those regions synthesize this information into what you're observing as conscious experience. Your conscious experience is the result of a standing wave of energy in the collection of atoms that makes up your brain.

    However in the realm of quantum mechanics there is no difference between a conscious collection of atoms performing an observation and a rock performing an observation.

    Does a rock really observe? Yes, it does: each photon that strikes the rock collapses the wave function of both the photon and the rock. In the process, the photon, which is a messenger packet, excites the electrons in the atoms of the rock, the same way each photon striking your retina excites the atoms therein.

    The only major difference is that in the rock, the photon excites the atoms, and because of the rock's particular configuration those atoms radiate that excitation energy as heat. Your retina, instead of only heating up, also releases an electrical impulse. This electrical impulse tells your brain "photon @ wavelength just arrived", and this happens rapidly enough that your brain builds up a picture, impulse by impulse.

    Yet to the laws of physics, the wave state collapsed and time moved forward as the universal wave function collapsed just a little more, a wave function that you are still part of.

    Returning to my original point, everything started out entangled and it's the act of decoherence via observation that is moving the arrow of time forward. There is no reason to assume that there is a simulation at play. It could just be that the universe itself is the result of a computation occurring on a substrate capable of performing these kinds of computations.

    Being a programmer, I personally use a computational model in my mind, a mental exercise to try to frame things when I read about QM, and it looks like this.

    Imagine for a moment you need to model billions of particles interacting. Each particle contains at a minimum 4 values that are constantly evolving and a handful of constants that are supposed to be set from the beginning of the computation, but which are dependent on a value that can only be known in the future and you don't know in advance which of the linked values will be known first.

    The most computationally efficient way to model this would be a linked list containing a series of structs. The structs would contain x, y, z, representing position in 3D space, and t, representing a temporal component, giving a four-component spacetime vector. In addition, they could contain values which are "carried but unknown", such as spin, charge, etc. These extra values are the ones that need to be updated instantly, so you set them to simple pointers to the structs representing the entangled particles they depend on.

    Entanglement is the process of placing these structs adjacently in what amounts to a linked list. Then, as you iterate over the "non-entangled" values, as soon as you have the answer for the "future" value of an entangled constant, i.e. an observation has occurred, you merely jump to the dependent neighbors on the linked list and update the observed values and the t value, replacing the pointer with a value.

    This produces an evolving structure that is identical to decoherence. If you happened to place pointers to each particle struct into a t,x,y,z array, you could move objects around in the array, computing neighbors until you find an empty group, simply by adding pointers to the array, i.e. you maintain a history. In the meantime you can instantiate coherence between any co-dependents by setting the unknown value to a pointer to the "entangled neighbor", while the co-dependents remain in the correct proximity in the array.
    This pointer stays as the value of the constant until a solution for one of the co-dependents is known, kicking off an update cycle across the linked list.
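    A toy sketch of the scheme above, in Python rather than C-style structs, with object references playing the role of the pointers. All the names here are mine and purely illustrative; the "anti-correlated spin" convention is just one concrete choice of what resolving an entangled pair could mean:

```python
# Toy model of the scheme above: each particle is a struct whose
# "carried but unknown" values (spin here) may hold a pointer
# (an object reference) to an entangled partner instead of a value.

class Particle:
    def __init__(self, x, y, z, t):
        self.x, self.y, self.z, self.t = x, y, z, t  # 4-space position
        self.spin = None    # unknown until observed, or a Particle reference
        self.next = None    # linked-list neighbor

def entangle(a, b):
    """Place the structs adjacently and point each unknown at the other."""
    a.next, b.next = b, a
    a.spin, b.spin = b, a   # each spin is a pointer to the partner

def observe(p, value):
    """An observation fixes p's value, then walks the linked list,
    replacing partner pointers with the (anti-correlated) value and
    advancing t: the "update cycle", i.e. the decoherence step."""
    p.spin = value
    p.t += 1
    q = p.next
    while q is not None and isinstance(q.spin, Particle):
        q.spin = -value     # replace the pointer with a concrete value
        q.t += 1
        q = q.next

a = Particle(0, 0, 0, 0)
b = Particle(1, 0, 0, 0)
entangle(a, b)
observe(a, 1)               # "solving" a also resolves b in the same pass
print(a.spin, b.spin)       # prints: 1 -1
```

    Note how the update is "instantaneous" in exactly the sense described: no signal travels between a and b; resolving one side simply overwrites the pointer on the other side during the same update cycle.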

    Concepts such as position in 4-space are of course particle dependent, and even t does not need to be the same, so long as particles with close enough values on all of x, y, z, t can have their interactions computed correctly.

    Just sayin, it's an idea.
