Columbia Researchers Provide New Evidence on the Reliability of Climate Modeling
The Hadley circulation, or Hadley cell -- a worldwide tropical atmospheric circulation pattern driven by uneven solar heating at different latitudes around the equator -- causes air near the equator to rise to about 10-15 kilometers, flow poleward (toward the North Pole in the Northern Hemisphere, toward the South Pole in the Southern Hemisphere), descend in the subtropics, and then flow back toward the equator along the Earth's surface. This circulation is widely studied by climate scientists because it controls precipitation in the subtropics and also creates the intertropical convergence zone, a region that produces a band of major, heavily precipitating storms.
[...] Historically, climate models have shown a progressive weakening of the Hadley cell in the Northern Hemisphere. Over the past four decades, however, reanalyses -- which combine models with observational and satellite data -- have shown just the opposite: a strengthening of the Hadley circulation in the Northern Hemisphere.
[...] The difference in trends between models and reanalyses poses a problem that goes far beyond whether the Hadley cell is going to weaken or strengthen; the inconsistency itself is a major concern for scientists. Reanalyses are used to validate the reliability of climate models -- if the two disagree, that means that either the models or reanalyses are flawed.
[...] To determine which was correct -- the models or the reanalyses -- they had to compare the two using a purely observational metric, untainted by any model or simulation. In this case, precipitation served as an observational proxy for latent heating, since it is equal to the net latent heating in the atmospheric column. This observational data revealed that the artifact, or flaw, is in the reanalyses -- confirming that the model projections for the future climate are, in fact, correct.
The paper's findings support previous conclusions drawn from a variety of models -- the Hadley circulation is weakening.
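As a side note on how precipitation can stand in for latent heating: here is a minimal sketch of the column energy balance usually assumed for this kind of proxy (the notation is illustrative, not taken from the paper). If condensed water falls out of the column rather than being stored or re-evaporated aloft, the column-integrated latent heating is approximately the surface precipitation rate times the latent heat of vaporization:

$$\langle Q_{\mathrm{lat}} \rangle \;\approx\; L_v \, P, \qquad L_v \approx 2.5 \times 10^{6}\ \mathrm{J\,kg^{-1}},$$

where $\langle \cdot \rangle$ denotes the column integral and $P$ is the surface precipitation rate (in kg m^-2 s^-1). Measuring rainfall therefore measures net column latent heating, with no model in the loop.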
(Score: 3, Insightful) by Runaway1956 on Friday June 28 2019, @02:03PM (9 children)
I'm kinda baffled by this seemingly circuitous logic.
"This observational data revealed that the artifact, or flaw, is in the reanalyses -- confirming that the model projections for the future climate are, in fact, correct.
So, these guys have a new model. One team runs the model, and comes to one conclusion. Next team up reaches another conclusion. So, someone makes a few observations, and that "confirms" that one or the other team is correct? How about, just maybe, instead of "confirming" the more palatable conclusion, we take the model apart, examine it closely. Then examine methodology in the use of the model, examine the data put into it each time, and try to determine WHY the two teams came up with different conclusions?
I like that word "model". Let's say that I'm into models of ships, since I'm Navy. I've built a 1/72 scale model of some ship or other, I test it, and it sinks. I rebuild, test, and it doesn't sink on the first test, then sinks on the second. My first question is going to be along the lines of, "What did I screw up?" In short, my model is FUBAR, so it's time to go back to the drawing board.
I'm NOT going to look around me, observe that the leaves on the trees are turning color, and decide that the model is good, it just won't float in the autumn.
“I have become friends with many school shooters” - Tampon Tim Walz
(Score: 2, Insightful) by Anonymous Coward on Friday June 28 2019, @02:53PM (8 children)
I think that's what the term "reanalyses" means. Just sayin'.
(Score: 3, Insightful) by Runaway1956 on Friday June 28 2019, @04:15PM (7 children)
That is what baffles me. They're not re-analyzing the model. Instead, they are picking which result they like better. Seems a bit like flipping a coin 100 times, and picking which results you like. Discard all the tails, and keep all the heads.
“I have become friends with many school shooters” - Tampon Tim Walz
(Score: 2) by istartedi on Friday June 28 2019, @04:23PM (4 children)
Yes, it's a bit too much like picking 8 stocks and publishing 256 web sites that recommend all combinations of buy/sell. Then a year later you shut down all but the top 10 and tout the top-10 advisors' stock-picking abilities.
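To make the analogy concrete, here's a throwaway simulation (entirely invented, nothing from TFA): enumerate all 2^8 = 256 buy/sell combinations on 8 stocks, score them against random returns, and keep only the "top 10" sites. The survivors look skilled even though every call was fixed in advance.

```python
import itertools
import random

random.seed(0)
STOCKS = 8
returns = [random.gauss(0, 0.2) for _ in range(STOCKS)]  # invented yearly returns

# One "advisor site" per buy/sell combination: +1 = buy, -1 = sell.
sites = list(itertools.product([1, -1], repeat=STOCKS))
assert len(sites) == 256

def site_score(picks):
    # How well this site's calls happened to line up with the random outcomes.
    return sum(p * r for p, r in zip(picks, returns))

ranked = sorted(sites, key=site_score, reverse=True)
for picks in ranked[:10]:
    print(picks, round(site_score(picks), 3))
# The surviving "advisors" look prescient, but every combination existed in
# advance -- pure selection, no skill.
```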
(Score: 3, Insightful) by DeathMonkey on Friday June 28 2019, @04:58PM (3 children)
They've got two predictions and they're picking the one that matches observed reality.
You're trying really hard to be confused by a very simple thing.
(Score: 2) by slinches on Friday June 28 2019, @05:20PM (2 children)
How useful is a model that only matches reality half the time on a question with a binary answer?
(Score: 3, Informative) by DeathMonkey on Friday June 28 2019, @05:37PM (1 child)
How useful is someone who refuses to understand the concept he is discussing?
There are two DIFFERENT models. One was correct. The other wasn't.
They fixed the one that wasn't.
This is not that complicated!
(Score: 3, Interesting) by slinches on Friday June 28 2019, @06:20PM
If these models don't agree on predictions until someone can confirm with actual data and fix the wrong one, then they aren't predictive yet. Maybe eventually they will be, but in this case two models predicted different answers, so maybe one is correct and the other wrong. It's equally plausible that both arrived at their answers by chance and the one that was "correct" is no more accurate than the "wrong" one. The problem comes down to which model we should believe the next time they conflict (without data to verify it), and whether we can trust the results even if they agree. Being able to predict the past helps to build a model, but unless the models can deliver results shown to match reality nearly all of the time without needing to be fixed, they are not reliable enough for more than academic study.
(Score: 5, Informative) by DeathMonkey on Friday June 28 2019, @04:53PM
They have a model that predicts things as they move forward.
Reanalysis has the same output as the model but is basically moving back in time based on observational data.
Neither one says anything about the Hadley cell directly.
Generally speaking, they match up.
However, if you use the model to make predictions about the Hadley cell it says one thing.
If you use the reanalysis, it says something else.
They went out and directly measured the Hadley cell and it turns out the model was correct.
That's weird, they say, as scientists often do before they discover something cool.
They then tried to figure out why the reanalysis prediction was wrong and found a flaw in the reanalysis methodology.
So now, we have a model that was already really good.
And, we have an improved reanalysis methodology, too.
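If it helps, here's a toy version of that cross-check (all numbers invented; this is not the paper's data or method): compute the sign of the trend in some Hadley-strength index from the model, the reanalysis, and an observational proxy, and see which of the first two the proxy sides with.

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1979, 2019)

# Invented time series: the "model" and the observational "proxy" weaken
# slightly over time, the "reanalysis" strengthens. Units are arbitrary.
model      = 1.0 - 0.004 * (years - 1979) + rng.normal(0, 0.02, years.size)
reanalysis = 1.0 + 0.004 * (years - 1979) + rng.normal(0, 0.02, years.size)
proxy      = 1.0 - 0.004 * (years - 1979) + rng.normal(0, 0.03, years.size)

def trend(series):
    # Least-squares slope per year.
    return np.polyfit(years, series, 1)[0]

for name, series in [("model", model), ("reanalysis", reanalysis), ("proxy", proxy)]:
    print(f"{name:10s} trend: {trend(series):+.4f} per year")

# Whichever of model/reanalysis matches the proxy's sign is the one the
# observations support -- the other is the one to go debug.
```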
(Score: 0) by Anonymous Coward on Saturday June 29 2019, @01:43AM
You didn't read TFA.
Models were developed. They worked pretty well!
New models were developed. They worked pretty well too! But... they reached a different conclusion.
Researchers chose a physical phenomenon which is tightly coupled to some core aspects of the system and easy to measure, and checked both models with it, and found that the old models got it right and the new models got it wrong. So they concluded the old ones are better.
There's no coin flipping. There are high-dimensional curve fits that overlap for much of the historic data, and infinitely many other solutions (containing higher and higher frequencies). For a car analogy, it's like seeing that a jerk is trying to hit you and choosing between gas, brake, and steering. There might be multiple possible solutions, but when you shoulder-check and see a semi coming fast, you rule out swerving into that lane.
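The "overlapping curve fits" point is easy to see in a toy example (nothing here comes from the paper): fit the same noisy historical series with a low-degree and a high-degree polynomial. Both match the record they were fit to, but they will typically disagree once extrapolated, which is exactly why an independent observation is needed to pick between them.

```python
import numpy as np

rng = np.random.default_rng(0)
x_hist = np.linspace(0, 10, 40)                    # "historical" period
y_hist = 0.3 * x_hist + rng.normal(0, 0.2, x_hist.size)

low  = np.polynomial.Polynomial.fit(x_hist, y_hist, deg=1)   # simple fit
high = np.polynomial.Polynomial.fit(x_hist, y_hist, deg=9)   # wigglier fit

# Both describe the data they were fit to about equally well...
rmse = lambda f: float(np.sqrt(np.mean((f(x_hist) - y_hist) ** 2)))
print("history RMSE:", round(rmse(low), 3), round(rmse(high), 3))

# ...but typically diverge when pushed outside the historical range.
x_future = np.array([12.0, 15.0])
print("low-degree  extrapolation:", np.round(low(x_future), 2))
print("high-degree extrapolation:", np.round(high(x_future), 2))
```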