
posted by Woods on Friday September 05 2014, @04:08PM
from the shedding-light-on-awful-puns dept.

A just-released paper, covered in the article Researcher advances a new model for a cosmological enigma—dark matter, offers a novel model of dark matter, dubbed "flavor-mixed multicomponent dark matter." From the article:

"Dark matter has not yet been detected in a lab. We infer about it from astronomical observations," said Mikhail Medvedev, professor of physics and astronomy at the University of Kansas, who has just published breakthrough research on dark matter that merited the cover of Physical Review Letters, the world's most prestigious journal of physics research.

Medvedev's theory rests on the behavior of elementary particles that have been observed or hypothesized. According to today's prevalent Standard Model theory of particle physics, elementary particles—categorized as varieties of quarks, leptons and gauge bosons—are the building blocks of an atom. The properties, or "flavors," of quarks and leptons are prone to change back and forth, because they can combine with each other in a phenomenon called flavor-mixing.

"In everyday life we've become used to the fact that each and every particle or an atom has a certain mass," Medvedev said. "A flavor-mixed particle is weird—it has several masses simultaneously—and this leads to fascinating and unusual effects."

Medvedev said that dark matter may interact with normal matter extremely weakly, which is why it hasn't been revealed already in numerous ongoing direct detection experiments around the world. So physicists have devised a working model of completely collisionless (noninteracting), cold (that is, having very low thermal velocities) dark matter with a cosmological constant (the perplexing energy density found in the void of outer space), which they term the "Lambda-CDM model."

But the model hasn't always agreed with observational data; Medvedev's paper addresses the theory's long-standing and troublesome puzzles.

"Our results demonstrated that the flavor-mixed, two-component dark matter model resolved all the most pressing Lambda-CDM problems simultaneously," said the KU researcher.

The original of the article seems to be on the University of Kansas web site. According to it, Cold Dark Matter is often called Lambda-CDM:

Cold Dark Matter is an abbreviation for Lambda-Cold Dark Matter. Cold Dark Matter represents the current concordance model of Big Bang cosmology that explains cosmic microwave background observations, as well as large scale structure observations and supernovae observations of the accelerating expansion of the universe. Cold Dark Matter is the simplest known model that is in general agreement with observed phenomena.

I am fascinated by stories on cosmology but have only a passing understanding of the material; is there a physicist or cosmologist around who would like to weigh in and shed some light on the subject?

  • (Score: 2, Interesting) by boristhespider on Friday September 05 2014, @07:06PM

    by boristhespider (4048) on Friday September 05 2014, @07:06PM (#89935)

    I can't really comment on the paper until I've read it (and perhaps not even then, since particle physics isn't my strong suit), but I'd find a different summary site than one that says "Cold Dark Matter is an abbreviation for Lambda-Cold Dark Matter" - that's just bizarre. No it isn't; cold dark matter is a hypothesised and poorly understood phenomenon in which galaxies and galaxy clusters act as if they're embedded in halos of massive, non-relativistic particles that interact only through gravity. The Lambda is the cosmological constant, which is required when one jumps another scale and considers the full cosmology.

    We need the cold dark matter to allow normal, luminous matter to clump together enough; otherwise it is impossible, given our current ideas about gravity, to get "small-scale" structures. We have a hard limit on luminous matter from big bang nucleosynthesis: the universe can be composed of, at most, around 5% of the critical density in the usual standard model particles, or we'd see very different abundances of hydrogen, helium, lithium and trace heavier elements. We would then be stuck with either a universe of very low density, which it transpires evolves far too quickly for structures on galactic scales to evolve (and gives a very different distribution of temperature fluctuations on the CMB, to boot), or, given that other evidence implies the universe is damn near *at* critical density, some other form of clustering matter.

    For a long time the CDM universe (which is basically an Einstein-de Sitter universe -- one composed of pure dust -- since the amount of radiation is now pretty minimal) was the standard model, but in the 90s it began to become obvious that this wasn't enough. To get the right amount of clustering we can't have 95% dark matter and, besides, that predicts a universe that's quite significantly younger than some of the older stars hanging around. That's a problem, since we had mounting evidence that the universe is basically at the critical density. Adding a *non*-clustering component, though, balances out the over-clumping in the 95% CDM universe. And there's a candidate available - the cosmological constant, which can be arbitrarily added into the equations. (In classical GR, it's nothing more than an integration constant, although it can semi-classically be identified with the energy of the vacuum.) It turned out that a universe with roughly 75% in the cosmological constant, Lambda, fitted models of the growth of structures, fitted observations, and fitted the CMB, while providing a universe old enough for a consistent picture of even stellar formation to emerge. This is still basically the model we have today, except that the numbers have been tweaked a bit and we're at roughly 4% in normal matter, 26% in CDM, and 70% in a cosmological constant.
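    In the usual density-parameter notation, that budget reads (for a spatially flat universe; the exact numbers shift a little with each dataset):

    \Omega_b + \Omega_{\rm CDM} + \Omega_\Lambda \approx 0.04 + 0.26 + 0.70 = 1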

    Out of interest, there used to be a lot of attention paid to warm dark matter (WDM) models, but again structure formation kills them. The universe cannot be composed of warm dark matter such as light (or massless) neutrinos, because you don't get anything like enough structure on small scales. It all gets washed out because the light particles are moving way too fast to clump into clusters as small as the ones we see. You *can* get massive neutrino clusters but these are on scales edging towards the gigaparsec. That doesn't rule out a mix of warm and cold dark matters, of course, but given the quality of data we've got (even now, and the datasets are fantastic compared to those we had in the 90s or even when I started my PhD in 2002) we wouldn't gain much studying a two-fluid CDM except a whacking great increase in our error bars.

    Also out of interest, "Dark Energy" is a name given to the generalisation of the LCDM universe. If one accepts inflationary models where a scalar field in the early universe evolved slowly along an almost-flat potential (which for technical reasons means the field acts almost exactly as a cosmological constant) then one can also propose similar models in the late universe. This is "quintessence", sometimes written as phi CDM since the field is usually written with a phi.

    Or if one prefers modifying gravity, a popular model a decade back - until it was found to be brutally unstable and impossible to deal with even semi-classically - was the 1/R model. R is known as the "Ricci scalar" and is a way of describing the local curvature of spacetime, and it also happens to be the Lagrangian density of GR. (That means that if we take the Ricci scalar, and integrate it across all of space and time, we get the "action". Imposing that that action has to take its minimal value gives constraints on R, which turn out to be the Einstein field equations. This procedure is familiar all the way back to the mid 19th century, and is Hamilton's principle; the easiest analogue is in classical mechanics where the Lagrangian of a point particle is given by L = (1/2)mv^2 - V where V is the potential it moves in.)

    So a first step to modifying gravity would be to tack on terms in the action dependent on R. R + a/R is a nice candidate for cosmology, since at "high" curvature the second term is basically zero, and we have simply R, which is normal GR. At low curvature, though, as in the present day universe, the *first* term is basically zero and we get a theory of gravity given by 1/R, which accelerates. Neat.

    [Even neater is that R + bR^2 does the opposite thing -- when the curvature is "low" relative to b, R dominates and we have normal GR. When it's "high", as in the early universe, gravity suddenly becomes governed by bR^2 and, for technical reasons, *also* accelerates. Even more beautifully, it accelerates exactly as inflation, except with a very low background of gravitational waves. R^2 inflation, first proposed by Starobinsky in 78, I think, a good few years before Guth's Higgs inflation, is actually the most favoured by present data, but since many models lie within one standard deviation of the best fit, that's not a very strong statement.]
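    To put the action-principle bit in symbols (standard conventions; signs and the factor of 16 pi G vary between textbooks), the gravitational action for these f(R) models is

    S = \frac{1}{16\pi G} \int d^4x \, \sqrt{-g} \, f(R), \qquad \delta S = 0 \;\Rightarrow\; \text{the field equations}

    with f(R) = R giving ordinary GR, f(R) = R + a/R the late-time-accelerating model, and f(R) = R + bR^2 Starobinsky-type inflation. The point-particle analogue is Hamilton's principle applied to L = (1/2)mv^2 - V(x), i.e. \delta \int L \, dt = 0.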

    • (Score: 2, Informative) by boristhespider on Friday September 05 2014, @07:11PM

      by boristhespider (4048) on Friday September 05 2014, @07:11PM (#89939)

      Also, for anyone interested, the paper is here: http://arxiv.org/abs/1305.1307 [arxiv.org]

      • (Score: 2) by cosurgi on Friday September 05 2014, @09:36PM

        by cosurgi (272) on Friday September 05 2014, @09:36PM (#89979) Journal

        Great! Thanks for the link.

        This is the type of story that I love here on SN. Maybe this research will be a breakthrough.

        --
        #
        #\ @ ? [adom.de] Colonize Mars [kozicki.pl]
        #
        • (Score: 1) by boristhespider on Friday September 05 2014, @10:31PM

          by boristhespider (4048) on Friday September 05 2014, @10:31PM (#90001)

          I've skimmed the paper now and while it's obviously fairly preliminary, it's really quite interesting -- firstly, so far as I can tell with my lacklustre particle physics he's got a reasonably well-motivated dark matter, and secondly where in my original wall of text I stated "we wouldn't gain much studying a two-fluid CDM except a whacking great increase in our error bars", his model is intrinsically a two-fluid CDM and it certainly produces a much nicer "maximum circular velocity function" (basically quantifying the galaxy rotation curves). This is by no means a parameter estimation, so there's no way of telling what would happen to the constraints on model parameters -- generically, adding more parameters to a model will increase the error bars unless the fit to the data is staggering -- but it does look promising. The caution I'd sound is chiefly based on the natural limitations of his study: he didn't run very large simulations and it's in an extremely small box (50 Mpc/h on a side, so roughly 75 Mpc) so while it's reasonable for looking at the formation of galaxy clusters it's entirely inapplicable to cosmology as a whole; and he chose a cosmological model without standard matter at all, so *all* the gravitating matter is formed of his two-component CDM. This isn't a reason to discount the study, of course -- far from it; his results suggest it would be very well worth examining on larger scales, and initial runs assuming pure CDM universes are not at all infrequent -- but I'd be a bit wary of the more grandiose claims in the phys.org article, which repeats the KU press release almost verbatim.

          What I'd really like to see from this model are four things: a full linear cosmological analysis; a model comparison using standard datasets (the combination of 9-year WMAP, 1-year Planck or, soonish, the Planck second release, the most recent SDSS release, WiggleZ, recent supernovae datasets, etc.); a model comparison using galactic- and cluster-scale measurements; and an N-body simulation in a box of ~Gpc in size which could give results applicable for cosmology, this time including gas physics.

          The linear cosmological analysis is basically the theory of small fluctuations around the smooth cosmological background - it's vital for "precision" cosmology, and is the central tool for predicting the power spectra of the perturbations of the microwave background and of their polarisation, and for predicting the statistical distribution of galaxy clusters on large scales. Practically everything we say with confidence about the universe comes from linear cosmological perturbation theory, and it's impossible to make any strong claims about a theory without having done so.

          WMAP and Planck are the major CMB datasets: WMAP, from NASA, revolutionised cosmology in 2003 when the first analysis came through and whole swathes of theory were discredited overnight, while Planck is the successor technology which produced jaw-droppingly good observations of the CMB temperature fluctuations and will hopefully do the same with at least some of the polarisation signatures in the next data release in a few months. SDSS is an ongoing collaboration with the aim of cataloguing every galaxy in the "nearby" universe -- stretching out to unimaginable distances -- and is now in something like its 15th year. WiggleZ is a project with similar aims and different coverage on the sky. (The near future will also include the likes of ESA's Euclid and, a bit more hopefully and a bit more distant, SKA which if it actually works as hoped will be genuinely stunning.)

          Model comparison is a way of comparing two different models of a dataset using Bayesian analysis -- it's very similar to parameter estimation, turned up to eleven. It can never tell you if a model is correct, but it can tell you the extent to which one model fits the data significantly better than another. Without a model comparison we're basically judging results on a posterior basis -- and I don't mean looking at bums, I mean biasing the selection given the knowledge of the theory and its results.
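          In symbols (this is textbook Bayesian statistics, nothing specific to his paper): for two models M_1 and M_2 and data D, the comparison runs through the Bayes factor, the ratio of the evidences, where each evidence is the likelihood integrated over that model's parameter space:

          K = \frac{P(D \mid M_1)}{P(D \mid M_2)}, \qquad P(D \mid M) = \int P(D \mid \theta, M) \, P(\theta \mid M) \, d\theta

          The evidence automatically penalises extra parameters (the Occam factor), which is why adding components to a model tends to blow up the error bars unless the fit improves dramatically.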

          It would also be lovely to see detailed weak lensing calculations, but since his mass difference between the two dark matters isn't very large they're probably not very different from standard CDM - still more data, though.

          Again, this is not a criticism of his current paper, since he had the very specific aim of cluster- and galaxy-scale dynamics - it's more a wish-list of things for people to start looking at for the future. Personally, I think the model is well enough motivated that it's worth us spending time on, and I don't say that lightly about dark matter models.

          • (Score: 1) by art guerrilla on Saturday September 06 2014, @02:54AM

            by art guerrilla (3082) on Saturday September 06 2014, @02:54AM (#90083)

            i always thought spiders were smart...

          • (Score: 2) by HiThere on Saturday September 06 2014, @07:14PM

            by HiThere (866) on Saturday September 06 2014, @07:14PM (#90295) Journal

            I have essentially no knowledge in this area, but just based on the past history of people exploring new areas I would expect that it's a lot more complex than two components. Saying that Dark Matter is two components may well be like saying the baryonic universe is built out of Hydrogen and Helium. There are other important constituents, even though they are minor as a percentage of volume.

            --
            Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
            • (Score: 1) by boristhespider on Sunday September 07 2014, @10:39AM

              by boristhespider (4048) on Sunday September 07 2014, @10:39AM (#90458)

              Absolutely. There are a variety of ways we can get the behaviour of a dark matter: particulate dark matter, scalar fields, modifications to gravity, consequences of modifications to gravity such as massive matter on branes near to us (a setup with the additional benefit of introducing an effective scalar field, the dilaton), a more careful consideration of the gravitational field of a galaxy, a more careful consideration of how average behaviours actually work in metric theories of gravity, and a more careful analysis of the paths that light travels on.

              Particulate dark matter is clearly the most popular, and is usually taken to be a single species of almost collisionless, massive particle. Indeed, we already know of a particle that acts, to some degree, as a warm dark matter: neutrinos. Neutrinos have mass, and are abundant throughout the universe, and so they are a dark matter even if this is not often appreciated. That's largely because neutrinos cannot be "the" dark matter, since as I noted above they would wash out structure on small scales, and by "small" I mean "galaxy cluster". A second popular candidate is a lightest supersymmetric particle; if there is supersymmetry (which I have my doubts on, but it's still always possible) then there is a stable particle that is lower in mass than the other supersymmetric particles. Other supersymmetric particles will tend to decay into the LSP, but this cannot decay into normal matter and neither does it interact in any normal way other than gravitationally: it's a dark matter. (Candidates include axinos and neutralinos.) But there's no a priori reason to limit ourselves to one, two, or thirty particulate dark matters.

              Or we can have scalar fields. These would normally be invoked in the current universe to provide a dark energy but there's no reason at all not to employ one as a dark matter - if we're going to accept a field with an absurdly flat potential in the present universe, it's surely less of a stretch to accept one with a less finely-tuned potential that makes it virtually pressureless.

              Or we can modify gravity in one way or another, with famous examples being the likes of MOND and its relativistic generalisations. Many ways of modifying gravity yield a theory that can be transferred into the "Einstein frame", in which the equations are those of normal general relativity, plus an effective matter which is typically then described as a scalar field or a combination of scalar and vector fields, so in some ways this is linked to the above. For instance, relativistic generalisations of MOND tend to be scalar/vector/tensor theories of gravity; and the differently-motivated bimetric theories are also basically scalar/vector/tensor theories. To be honest, on galaxy scales MOND is probably the most successful and simple approach to dark matter, since with a single, universal parameter it fits practically every observed galaxy - a success that standard dark matter can't reproduce. On the other hand, on cluster scales MOND is laughably pathetic, so I wouldn't take this as more than indicative that *something* is going on that looks impressively like a bottoming out of acceleration in some regimes.

              Or other ways of modifying gravity emerge from attempts at string theory - and, in particular, from M theory. They've lost popularity somewhat in the last decade but for a while braneworld theories were all the rage. In M theory one can have surfaces of various dimensionality strung through the 10+1d "spacetime". In particular, one can have 3 dimensional surfaces. The ends of "open" strings snap onto these surfaces and are exhibited as particles, while "closed" strings pass through, and act as gravitons. If we then have two such surfaces suspended near to one another, particles will react to each other gravitationally, but in no other way - dark matter.

              On the other hand, studies of gravitational dynamics, let alone cluster dynamics, are typically done assuming effectively Newtonian physics (which is why MOND is MOND, not MOED). This assumes a flat, Euclidean spacetime -- but the spacetime around a galaxy is not Euclidean. It's a horrific mess of about 10^9 to 10^12 spacetimes, each of which individually is something near to Schwarzschild or Kerr-Newman. A better description of the spacetime around a spiral galaxy might be something more or less cylindrical, and charged, given that the galaxy sustains a magnetic field. Preliminary and unconvincing studies of this typically find that the need for dark matter can be reduced by something between 1-15%.

              But this also raises the next suggestion, that we're simply trying to apply local theories (such as relativity) to large systems. We don't do that in other spheres of physics. There's a reason we study the gas in a room with temperatures and entropies and specific heats -- thermodynamics is a theory that emerges from a statistical mechanical mapping of the local physics up to a collection of around 10^23-10^24 atoms. We do this because solving the dynamics of 10^24 particles would take about the age of the universe to do properly. But as I mentioned, galaxies are collections of around 10^9-10^12 "atoms", and we're attempting to describe them as if we already know the mean field. And we most certainly do not. There are two ways of solving this, neither of which works: we can take spatial averages (a procedure riddled with problems, and, technically, impossible if there's a single gravitational lens in a galaxy), or we can take statistical averages (a procedure riddled with even more problems, not least of which is that statistical mechanics maps the local Hamiltonians of the atoms onto the free energy of the system... and there *is* no Hamiltonian for GR; it's constrained to be zero from the start). Crude approaches to the spatial averaging that ignore this issue typically find things that operate as a dark matter (or, on cosmological scales, sometimes as a curvature.)

              Or we could look a lot more carefully at the influence of gravity on the propagation of light. For dark matter I'm not aware of anyone who's managed to show much from this, but that it can be significant can be seen on cosmological scales, where one can take as a toy model of the universe a Lemaitre-Tolman-Bondi solution, which is isotropic about us but radially inhomogeneous. Tuning this, and not actually all that carefully, one can recover signals that look like dark energy -- but in a universe filled with nothing but dark matter. LTBs cannot be a good model for our universe because while we can mimic the *local* supernova observations of dark energy, things go to pot as soon as we look further away, particularly at the CMB. But the point isn't to suggest LTBs are a good model for the universe, but rather that even very simple changes in the setup of a gravitational system can strongly bias your observations, and since every observation we make is of light we should probably understand what's happened to it along its path.

              Most likely, the answer is a combination of all of the above. I've often argued against particulate solutions for dark matter, but ultimately I've got no genuine reasons to assume that it doesn't exist, and no reason to assume we don't have more than one unknown type. I also know our theory of gravity is wrong, that simple modifications to gravity work beautifully well on galactic scales, and that we don't even understand how to apply gravity on such scales, and nor do we understand how to recover a mean field, or the backreaction of individual stars as they propagate through it. The only problem is that we can't even properly approach some of these, and even if we could we'd be throwing so many parameters into the mix that we'd have absolutely no constraining power at all. But the flipside of that is that at some point in the next few years someone is going to make a lot of noise about "the" dark matter, and pinning down "the" dark matter, and it might be decades before we can demonstrate that actually it's nothing like abundant enough to be "the" dark matter and there must be other contributors - and that "the" dark matter is actually three, or six, or twelve related particles anyway...

    • (Score: 2) by martyb on Saturday September 06 2014, @11:20AM

      by martyb (76) Subscriber Badge on Saturday September 06 2014, @11:20AM (#90157) Journal

      Thank-you! *THIS* is what I come here for! Though I would not pretend to understand the details of what you posted, I was able to get the gist of the general concepts. Much more than the "press release". It is apparent that you know far more than you are letting on here (and in prior comments I've seen you post).

      Further, your explanation of the differences in behavior when combining the terms (R + a/R) vs (R + bR^2) was a real eye-opener. My studies emphasized how to derive and apply formulations, but never made it manifest in the way you just did, here. (I'm still, many years out, trying to understand the applications of Del and Grad and Eigenvectors and Eigenvalues; though it's been so long since I was exposed to them, I'd probably need a refresher course just to understand them, now. Oh well.)

      But the point I was trying to make — Just. Plain. Wow! =)

      --
      Wit is intellect, dancing.
      • (Score: 2, Insightful) by boristhespider on Saturday September 06 2014, @12:40PM

        by boristhespider (4048) on Saturday September 06 2014, @12:40PM (#90166)

        Thank you - it's always hard to tell if you're pitching over people's heads, or so low it's patronising, and hard to tell when you're going on for far too long, so it's nice to know it's appreciated :) Also, my cosmology and relativity are strong, but my particle physics is definitely very weak - I've barely done any in 12 years. I've got a few textbooks lying in my new house that I'm going to spend a few weeks reading through in the evenings.

        If you're getting into modified actions for metric-based theories of gravity you definitely need a refresher :) The good news is that once you adjust to thinking of things as tensors, Laplacians become far more straightforward. A mixed second derivative is just written as (d/dx^a)(d/dx^b)A(x) where a and b can take any of the allowed values (1, 2, 3 in normal space, 0, 1, 2, 3 or 1, 2, 3, 4 in spacetime). This is more concisely written (d_a)(d_b)A(x), and the Laplacian is just setting a=b and summing, so (d_1)(d_1)A + (d_2)(d_2)A + (d_3)(d_3)A, which in the Einstein notation becomes, pleasingly, d^ad_a A. And that's a Laplacian. In non-flat spacetimes, it's replaced by covariant derivatives, but that's just corrections coming from translating vectors around on the surface.
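        In LaTeX notation, that whole chain is just (the standard Einstein-summation convention, nothing exotic):

        \partial_a \partial_b A \equiv \frac{\partial^2 A}{\partial x^a \partial x^b}, \qquad \nabla^2 A = \partial^a \partial_a A = \partial_1 \partial_1 A + \partial_2 \partial_2 A + \partial_3 \partial_3 A

        In curved spacetime the partials \partial_a get promoted to covariant derivatives \nabla_a, and the same contraction \nabla^a \nabla_a A gives the curved-space Laplacian (or the d'Alembertian, if the index runs over time as well).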

        To be honest, if you can handle partial differentiation then relativity is straightforward - replete with tedious calculations, perhaps, but straightforward.

        • (Score: 0) by Anonymous Coward on Monday September 08 2014, @03:07AM

          by Anonymous Coward on Monday September 08 2014, @03:07AM (#90613)

          Thank you - it's always hard to tell if you're pitching over people's heads, or so low it's patronising, and hard to tell when you're going on for far too long, so it's nice to know it's appreciated :)

          You're welcome!

          As for where you are pitching, well, it IS difficult to know your "audience" here; as an editor, I struggle with this pretty regularly. I appreciate your candor and your interest in trying to find the right level of detail!

          You mentioned your particle physics is very weak. Speaking only for myself, pretty much all my physics and maths has atrophied. My math background went from Calculus, through Introduction to Differential Equations, Linear Algebra, Advanced Calculus, and lastly Calculus over the complex number plane. At the time, it was just a firehose of symbol systems in which I mastered each well enough that I could proceed to the next. I truly struggled through Linear Algebra and Intro to DIff EQ. Physics was the basic stuff of Newtonian Mechanics, Optics and Electromagnetism, and there was maybe half of one class that touched on relativity. I did take several courses on Astronomy which piqued my interest in cosmology. I've read many articles about various astronomical discoveries, since those classes well over three decades ago. So, I would not mind at all if you wrote at a lower level. =)

          BTW, please accept my belated thanks for your masterful comments a while back on, IIRC, the wormhole! I didn't even know there were different kinds of black holes, so learning of that, and of the possible results of the different varieties colliding/merging was a real eye-opener and utterly fascinating!

          TL;DR - thank-you, please keep the comments coming, and I would not at all mind if you patronized a bit. =)

          • (Score: 1) by boristhespider on Monday September 08 2014, @06:58PM

            by boristhespider (4048) on Monday September 08 2014, @06:58PM (#90917)

            I remember posting about different kinds of black holes -- what happens if you fire a single electron (or even just a neutrino, off-axis) into a Schwarzschild hole is an old obsession of mine -- but for some reason I can't find it, which makes me wonder if it was on the other site...

            When there are cosmology or relativity articles I feel I can comment on I'll continue to do so, don't worry :) I'm out of the professional field myself now - full-time software development - but cosmology was my bread and butter for twelve years, coming on top of four years at university, so it would be weird if I suddenly lost all interest.

            The next biggie will be the next Planck data release, which will hopefully be along late this autumn/early winter. I do know some people in Planck but they're staying extremely close-mouthed about even when the release will come other than "later this year", let alone any results. My hopes are power spectra for the "E mode" polarisation (basically patterns that are circular, like crosses and squares), cosmic-variance limited up to quite small scales. Cosmic variance is an intrinsic error we can never beat because we're dealing with a statistical quantity, but only have a limited number of samples -- because we've only got one universe. Your observation is cosmic variance limited if the observational errors are smaller than cosmic variance, and the temperature measurements have been cosmic variance limited up to reasonably small scales since NASA's WMAP first came out, but the first Planck data release was mind-blowing. Getting the E mode spectrum with the quality that WMAP gave us the temperature would be fantastic. I suspect Planck will also significantly tighten bounds on the "B mode" polarisation (patterns like Catherine wheels, and the quantity that BICEP2 claimed, very over-strongly, that they had detected in tension with every other dataset and theoretical prejudice), or even get a detection, which would be great. Of course, that detection would immediately lead to what we saw with BICEP2, and people immediately and uncritically claiming it to be from inflation, but everything has to start somewhere...
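            In case anyone wants the actual number: for a Gaussian field measured on the full sky there are only 2l+1 independent modes at each multipole l, so cosmic variance sets a floor on the fractional error of the power spectrum,

            \frac{\Delta C_\ell}{C_\ell} = \sqrt{\frac{2}{2\ell + 1}}

            (the standard full-sky expression; partial sky coverage inflates it further). No instrument, however good, can beat that.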