Columbia Researchers Provide New Evidence on the Reliability of Climate Modeling
The Hadley circulation, or Hadley cell -- a worldwide tropical atmospheric circulation pattern driven by uneven solar heating at different latitudes around the equator -- causes air near the equator to rise to about 10-15 kilometers, flow poleward (toward the North Pole north of the equator, toward the South Pole south of it), descend in the subtropics, and then flow back toward the equator along the Earth's surface. This circulation is widely studied by climate scientists because it controls precipitation in the subtropics and also creates a region called the intertropical convergence zone, producing a band of major, high-precipitation storms.
[...] Historically, climate models have shown a progressive weakening of the Hadley cell in the Northern Hemisphere. Over the past four decades, however, reanalyses -- which combine models with observational and satellite data -- have shown just the opposite: a strengthening of the Hadley circulation in the Northern Hemisphere.
[...] The difference in trends between models and reanalyses poses a problem that goes far beyond whether the Hadley cell is going to weaken or strengthen; the inconsistency itself is a major concern for scientists. Reanalyses are used to validate the reliability of climate models -- if the two disagree, then either the models or the reanalyses are flawed.
[...] To determine which was correct -- the models or the reanalyses -- the researchers had to compare the two using a purely observational metric, untainted by any model or simulation. In this case, precipitation served as an observational proxy for latent heating, since precipitation is equal to the net latent heating in the atmospheric column. This observational data revealed that the artifact, or flaw, lies in the reanalyses -- supporting the conclusion that the model projections for the future climate are, in fact, correct.
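The logic of that comparison can be sketched in a few lines. This is a minimal, hypothetical illustration -- the time series below are synthetic stand-ins, not the study's data (real analyses use quantities like the mass stream function and observed precipitation records). The idea is simply that when two systems disagree on the sign of a trend, an independent observational proxy can break the tie:

```python
import numpy as np

def trend_sign(series):
    """Return the sign of the least-squares linear trend of an annual series."""
    years = np.arange(len(series))
    slope = np.polyfit(years, series, 1)[0]
    return "strengthening" if slope > 0 else "weakening"

# Synthetic 40-year Hadley-cell strength anomalies (arbitrary units),
# constructed so the model weakens and the reanalysis strengthens,
# mirroring the disagreement described in the article.
rng = np.random.default_rng(0)
years = np.arange(40)
model      = -0.02 * years + rng.normal(0, 0.1, 40)  # models: weakening
reanalysis =  0.02 * years + rng.normal(0, 0.1, 40)  # reanalyses: strengthening
obs_proxy  = -0.02 * years + rng.normal(0, 0.1, 40)  # precipitation-based proxy

print("model:     ", trend_sign(model))
print("reanalysis:", trend_sign(reanalysis))
print("obs proxy: ", trend_sign(obs_proxy))
# The proxy's trend sign agrees with the model here, implicating the reanalysis.
```

The key design point is that the tie-breaker (`obs_proxy`) must be independent of both systems under test; precipitation qualifies because it is measured directly rather than produced by a model or a reanalysis.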
The paper's findings support previous conclusions drawn from a variety of models -- the Hadley circulation is weakening.
(Score: 3, Insightful) by DeathMonkey on Friday June 28 2019, @04:58PM (3 children)
They've got two predictions and they're picking the one that matches observed reality.
You're trying really hard to be confused by a very simple thing.
(Score: 2) by slinches on Friday June 28 2019, @05:20PM (2 children)
How useful is a model that only matches reality half the time on a question with a binary answer?
(Score: 3, Informative) by DeathMonkey on Friday June 28 2019, @05:37PM (1 child)
How useful is someone who refuses to understand the concept he is discussing?
There are two DIFFERENT models. One was correct. The other wasn't.
They fixed the one that wasn't.
This is not that complicated!
(Score: 3, Interesting) by slinches on Friday June 28 2019, @06:20PM
If these models don't agree on predictions until they can be confirmed with actual data and the wrong one fixed, then they aren't predictive yet. Maybe eventually they will be, but in this case two models predicted different answers, so maybe one is correct and the other wrong. It's equally plausible that both models arrived at their answers by chance, and the one that was "correct" is no more accurate than the "wrong" one. The problem comes down to which model we should believe the next time they conflict (without data to verify it), and whether we can trust the results even if they agree. Being able to predict the past helps to build a model, but unless they can deliver results which are shown to match reality nearly all of the time without needing to be fixed, they are not reliable enough for more than academic study.