
SoylentNews is people

posted by chromas on Friday June 28 2019, @08:45AM   Printer-friendly
from the does-it-go-round-in-circles? dept.

Columbia Researchers Provide New Evidence on the Reliability of Climate Modeling

The Hadley circulation, or Hadley cell -- a worldwide tropical atmospheric circulation pattern that occurs due to uneven solar heating at different latitudes surrounding the equator -- causes air around the equator to rise to about 10-15 kilometers, flow poleward (toward the North Pole above the equator, the South Pole below the equator), descend in the subtropics, and then flow back to the equator along the Earth's surface. This circulation is widely studied by climate scientists because it controls precipitation in the subtropics and also creates a region called the intertropical convergence zone, producing a band of major, highly-precipitative storms.

[...] Historically, climate models have shown a progressive weakening of the Hadley cell in the Northern Hemisphere. Over the past four decades reanalyses, which combine models with observational and satellite data, have shown just the opposite -- a strengthening of the Hadley circulation in the Northern Hemisphere.

[...] The difference in trends between models and reanalyses poses a problem that goes far beyond whether the Hadley cell is going to weaken or strengthen; the inconsistency itself is a major concern for scientists. Reanalyses are used to validate the reliability of climate models -- if the two disagree, that means that either the models or reanalyses are flawed.

[...] To understand which data was correct -- the models or the reanalyses -- they had to compare the systems using a purely observational metric, untainted by any model or simulation. In this case, precipitation served as an observational proxy for latent heating since it is equal to the net latent heating in the atmospheric column. This observational data revealed that the artifact, or flaw, is in the reanalyses -- confirming that the model projections for the future climate are, in fact, correct.

The paper's findings support previous conclusions drawn from a variety of models -- the Hadley circulation is weakening.
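For readers wondering why precipitation can serve as a proxy for latent heating: condensing rain releases a fixed amount of energy per kilogram of water, so a rain rate converts directly into a column-integrated heating rate. A back-of-envelope sketch, using the textbook value L_v ≈ 2.5e6 J/kg (the 10 mm/day rate is an illustrative number, not from the paper):

```python
# Why precipitation proxies column latent heating: condensing 1 kg of
# water releases roughly L_v joules, so rain rate maps onto heating rate.
L_V = 2.5e6  # latent heat of vaporization, J/kg (standard textbook value)

def latent_heating(precip_mm_per_day):
    """Column-integrated latent heating (W/m^2) implied by a rain rate.

    1 mm/day of rain deposits 1 kg of water per m^2 per day.
    """
    kg_per_m2_per_s = precip_mm_per_day / 86400.0  # seconds in a day
    return L_V * kg_per_m2_per_s

# A tropical rain rate of ~10 mm/day implies ~290 W/m^2 of column heating.
print(round(latent_heating(10.0)))  # 289
```

This is why a purely observed rain field can arbitrate between models and reanalyses: it pins down the heating that drives the circulation without going through either system.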


Original Submission

This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 0) by Anonymous Coward on Friday June 28 2019, @09:41AM (6 children)

    by Anonymous Coward on Friday June 28 2019, @09:41AM (#860883)

    Super wordy article - how many of those phrases could've been half the length? - but very interesting.

    Key quote:

    “One of the largest climatic signals associated with global warming is the drying of the subtropics, a region that already receives little rainfall,” he explained. “The Hadley cell is an important control on subtropical precipitation. Hence, any changes in the strength of the Hadley cell will result in a change in precipitation in that region. This is why it is important to determine if, as a consequence of anthropogenic emission, the Hadley cell will speed up or slow down in the coming decades.”

    • (Score: 1, Insightful) by Anonymous Coward on Friday June 28 2019, @10:16AM (1 child)

      by Anonymous Coward on Friday June 28 2019, @10:16AM (#860888)

      Can you tell from this wordstream whether it is a strengthening or a weakening of circulation that increases "precipitation" (aka rainfall)? Can you tell whether the "precipitation [that] served as an observational proxy for latent heating" increased or decreased? Not a word about the only thing that has a practical meaning. None.
      None in the abstract either, and the article itself is behind a paywall. Nice.

    • (Score: 2) by Bot on Friday June 28 2019, @04:20PM (3 children)

      by Bot (3902) on Friday June 28 2019, @04:20PM (#860969) Journal

      I can summarize it for you.
      After decades of adjusting models to match observation, in one case observation was proven inaccurate. Therefore models are cool. I mean, hot.

      --
      Account abandoned.
      • (Score: 4, Touché) by DeathMonkey on Friday June 28 2019, @05:34PM (2 children)

        by DeathMonkey (1380) on Friday June 28 2019, @05:34PM (#861011) Journal

        After decades of adjusting models to match observation

        Also known as SCIENCE.

        • (Score: 0, Touché) by Anonymous Coward on Friday June 28 2019, @06:42PM

          by Anonymous Coward on Friday June 28 2019, @06:42PM (#861043)

          Science: The observation, identification, description, experimental investigation, and theoretical explanation of phenomena.

          The problem is, in what is being called "science", theoretical explanation is never wrong. We just adjust the numbers and our predictions are now magically right.
          We have gotten away from actual science, from the scientific method, and from logic - and we've moved to politically driven insanity and a form of religious zealotry that is far worse than the fictional straw-man caricature of all the things we hate about religion.

        • (Score: 2) by Bot on Sunday June 30 2019, @10:22AM

          by Bot (3902) on Sunday June 30 2019, @10:22AM (#861580) Journal

          LOL only an economist would say adjusting models to match observation is "science".

          --
          Account abandoned.
  • (Score: 0) by Anonymous Coward on Friday June 28 2019, @01:06PM (3 children)

    by Anonymous Coward on Friday June 28 2019, @01:06PM (#860912)

    So one model predicted a positive trend, the other predicted a negative. Both had 50% chance of being right. So who gives a shit until they make a more precise prediction that would actually be impressive if correct?

    • (Score: 2) by DeathMonkey on Friday June 28 2019, @05:00PM (2 children)

      by DeathMonkey (1380) on Friday June 28 2019, @05:00PM (#860995) Journal

      The people who are choosing a model to use probably care about which one is correct.

      • (Score: 0) by Anonymous Coward on Friday June 28 2019, @06:09PM (1 child)

        by Anonymous Coward on Friday June 28 2019, @06:09PM (#861028)

        They're choosing a model based on coin-flip odds?

        • (Score: 2) by Reziac on Saturday June 29 2019, @03:34AM

          by Reziac (2489) on Saturday June 29 2019, @03:34AM (#861226) Homepage

          Perhaps, but more likely on which one is predicted to attract research grants.

          --
          And there is no Alkibiades to come back and save us from ourselves.
  • (Score: 2, Interesting) by Anonymous Coward on Friday June 28 2019, @01:47PM (7 children)

    by Anonymous Coward on Friday June 28 2019, @01:47PM (#860918)

    Apparently not.

    Title of front page post: New Evidence on the Reliability of Climate Modeling

    From TFS:

    confirming that the model projections for the future climate are, in fact, correct.

    Not sure what the confusion is here.

    But I'll explain, and I'll use small words so you'll be sure to understand, you warthog-faced buffoons:

    There is new evidence (that means data/information that we didn't have before) concerning how reliable climate models may be (that means whether or not such models get it right or not).

    Are you following along so far? Maybe you should read that last sentence again to make sure you understand. It's okay. I'll wait.

    Okay. The new evidence is "confirming that the model projections for the future climate are, in fact, correct." This means that yes, the projections (that means estimates about the future) are correct (that means they're right).

    Are we clear now, or do we need to dumb this down from fourth grade to second?

    • (Score: -1, Offtopic) by Anonymous Coward on Friday June 28 2019, @02:59PM (1 child)

      by Anonymous Coward on Friday June 28 2019, @02:59PM (#860940)

      As your brain is demonstrably too small to handle "fourth grade" small words, do try again with monosyllables. Grunts and yelps shall be within your capability and a closer fit to your type of thought process.

      • (Score: -1, Troll) by Anonymous Coward on Friday June 28 2019, @05:09PM

        by Anonymous Coward on Friday June 28 2019, @05:09PM (#861000)

        Your mom didn't think so.

        She was so pleased, she even introduced me to your girlfriend. Okay. That's a lie. We both know you don't have any sexual partners for whom you don't pay cash up front. At least that's what your mom said as I was getting dressed.

        Yeah, this is about your level, isn't it?

    • (Score: 0) by Anonymous Coward on Friday June 28 2019, @06:36PM (3 children)

      by Anonymous Coward on Friday June 28 2019, @06:36PM (#861042)

      Who would possibly mod this interesting? It's like a crazy person ranting at no one on the street.

      • (Score: 0) by Anonymous Coward on Saturday June 29 2019, @04:45AM (2 children)

        by Anonymous Coward on Saturday June 29 2019, @04:45AM (#861244)

        Keep your filthy mitts off my cheese steak, you cad! You poltroon!

        I've painted that cheese steak 400 times with hot-buttered motor oil and took it with me to meet the Queen!

        Seven horny stalagmites made me rich! So I bought the CIA and they will bury you.

        Believe in the great avocado, do you? I eat your avocado and raise you a chimichanga!
        ===================
        The above is like a crazy person ranting in the street.

        My initial post was snark mixed with disdain [youtube.com] for morons who (assuming they speak the language) can't understand perfectly readable English.

        See the difference now, friend?

        • (Score: 0) by Anonymous Coward on Saturday June 29 2019, @05:46AM (1 child)

          by Anonymous Coward on Saturday June 29 2019, @05:46AM (#861259)

          No, not really. Are you sure you're not a crazy person?

          • (Score: 0) by Anonymous Coward on Saturday June 29 2019, @06:37AM

            by Anonymous Coward on Saturday June 29 2019, @06:37AM (#861268)

            Are you sure you're not a crazy person?

            That's a good question. A sane person to an insane society must appear insane. [milwaukeeindependent.com]

            That ought to muddy the waters a bit, eh?

    • (Score: 2) by Bot on Sunday June 30 2019, @10:26AM

      by Bot (3902) on Sunday June 30 2019, @10:26AM (#861581) Journal

      "The projections are correct" is a very nebulous concept, barring a time machine. Not that I care.

      --
      Account abandoned.
  • (Score: 3, Insightful) by Runaway1956 on Friday June 28 2019, @02:03PM (9 children)

    by Runaway1956 (2926) Subscriber Badge on Friday June 28 2019, @02:03PM (#860920) Journal

    I'm kinda baffled by this seemingly circuitous logic.

    "This observational data revealed that the artifact, or flaw, is in the reanalyses -- confirming that the model projections for the future climate are, in fact, correct."

    So, these guys have a new model. One team runs the model, and comes to one conclusion. Next team up reaches another conclusion. So, someone makes a few observations, and that "confirms" that one or the other team is correct? How about, just maybe, instead of "confirming" the more palatable conclusion, we take the model apart, examine it closely. Then examine methodology in the use of the model, examine the data put into it each time, and try to determine WHY the two teams came up with different conclusions?

    I like that word "model". Let's say that I'm into models of ships, since I'm Navy. I've built a 1/72 scale model of some ship or other, I test it, and it sinks. I rebuild, test, and it doesn't sink on the first test, then sinks on the second. My first question is going to be along the lines of, "What did I screw up?" In short, my model is FUBAR, so it's time to go back to the drawing board.

    I'm NOT going to look around me, observe that the leaves on the trees are turning color, and decide that the model is good, it just won't float in the autumn.

    • (Score: 2, Insightful) by Anonymous Coward on Friday June 28 2019, @02:53PM (8 children)

      by Anonymous Coward on Friday June 28 2019, @02:53PM (#860939)

      So, these guys have a new model. One team runs the model, and comes to one conclusion. Next team up reaches another conclusion. So, someone makes a few observations, and that "confirms" that one or the other team is correct? How about, just maybe, instead of "confirming" the more palatable conclusion, we take the model apart, examine it closely. Then examine methodology in the use of the model, examine the data put into it each time, and try to determine WHY the two teams came up with different conclusions?

      I think that's what the term "reanalyses" means. Just sayin'.

      • (Score: 3, Insightful) by Runaway1956 on Friday June 28 2019, @04:15PM (7 children)

        by Runaway1956 (2926) Subscriber Badge on Friday June 28 2019, @04:15PM (#860965) Journal

        That is what baffles me. They're not re-analyzing the model. Instead, they are picking which result they like better. Seems a bit like flipping a coin 100 times, and picking which results you like. Discard all the tails, and keep all the heads.

        • (Score: 2) by istartedi on Friday June 28 2019, @04:23PM (4 children)

          by istartedi (123) on Friday June 28 2019, @04:23PM (#860970) Journal

          Yes, it's a bit too much like picking 8 stocks and publishing 256 web sites that recommend all combinations of buy/sell. Then a year later you shut down all but the top 10, and tout the top-10 advisors' stock-picking abilities.

          --
          Appended to the end of comments you post. Max: 120 chars.
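The newsletter trick istartedi describes is easy to demonstrate: publish every possible buy/sell combination for 8 binary outcomes, and exactly one of the 256 will look clairvoyant no matter what the market actually does. A minimal sketch (all names and numbers are illustrative):

```python
import itertools
import random

# Toy version of the 8-stock / 256-newsletter trick: mail every possible
# up/down call, then "discover" the one newsletter that went 8-for-8.
random.seed(1)
N = 8
outcome = tuple(random.choice(("up", "down")) for _ in range(N))

# All 2^8 = 256 possible sets of calls.
newsletters = list(itertools.product(("up", "down"), repeat=N))
perfect = [n for n in newsletters if n == outcome]

# Exactly one newsletter is "perfect" by construction, whatever happens.
print(len(newsletters), len(perfect))  # 256 1
```

The point of the analogy: picking the survivor after the fact tells you nothing about skill, which is why the commenters above argue over whether checking a model against observations is science or cherry-picking.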
          • (Score: 3, Insightful) by DeathMonkey on Friday June 28 2019, @04:58PM (3 children)

            by DeathMonkey (1380) on Friday June 28 2019, @04:58PM (#860994) Journal

            They've got two predictions and they're picking the one that matches observed reality.

            You're trying really hard to be confused by a very simple thing.

            • (Score: 2) by slinches on Friday June 28 2019, @05:20PM (2 children)

              by slinches (5049) on Friday June 28 2019, @05:20PM (#861004)

              How useful is a model that only matches reality half the time on a question with a binary answer?

              • (Score: 3, Informative) by DeathMonkey on Friday June 28 2019, @05:37PM (1 child)

                by DeathMonkey (1380) on Friday June 28 2019, @05:37PM (#861012) Journal

                How useful is someone who refuses to understand the concept he is discussing?

                There are two DIFFERENT models. One was correct. The other wasn't.

                They fixed the one that wasn't.

                This is not that complicated!

                • (Score: 3, Interesting) by slinches on Friday June 28 2019, @06:20PM

                  by slinches (5049) on Friday June 28 2019, @06:20PM (#861034)

                  If these models don't agree on predictions until they can confirm with actual data and fix the wrong one, then they aren't predictive yet. Maybe eventually they will be, but in this case two models predicted different answers, so maybe one is correct and the other wrong. It's equally plausible that both models arrived at their answer by chance and the one that was "correct" is no more accurate than the "wrong" one. The problem comes down to which model we should believe when they conflict next time (without data to verify it), and whether we can trust the results even if they agree. Being able to predict the past helps to build a model, but unless they can deliver results which are shown to match reality nearly all of the time without needing to be fixed, they are not reliable enough for more than academic study.

        • (Score: 5, Informative) by DeathMonkey on Friday June 28 2019, @04:53PM

          by DeathMonkey (1380) on Friday June 28 2019, @04:53PM (#860992) Journal

          They have a model that predicts things as they move forward.
          Reanalysis has the same output as the model but is basically moving back in time based on observational data.

          Neither one says anything about the Hadley cell directly.
          Generally speaking, they match up.

          However, if you use the model to make predictions about the Hadley cell it says one thing.
          If you use the reanalysis, it says something else.

          They went out and directly measured the Hadley cell and it turns out the model was correct.
          That's weird, they say, as scientists often do before they discover something cool.

          They then tried to figure out why the reanalysis prediction was wrong and found a flaw in the reanalysis methodology.

          So now, we have a model that was already really good.
          And, we have an improved reanalysis methodology, too.
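The arbitration logic in the comment above can be sketched as a toy calculation. Every number and name here is invented purely for illustration; it only shows the shape of the argument: two estimates disagree on a trend, and an independent observational proxy picks the winner:

```python
import random

# Toy arbitration: a forward model and a reanalysis agree on most things
# but disagree on one derived trend. Independent observations decide.
random.seed(0)

true_trend = -0.02         # "reality": the circulation weakens (made up)
model_trend = -0.018       # model says: weakening (made up)
reanalysis_trend = 0.015   # reanalysis says: strengthening, an artifact

# Noisy direct measurements of reality (the precipitation-proxy role).
obs = [true_trend + random.gauss(0, 0.005) for _ in range(100)]
obs_mean = sum(obs) / len(obs)

model_error = abs(model_trend - obs_mean)
reanalysis_error = abs(reanalysis_trend - obs_mean)
print("observations side with:",
      "model" if model_error < reanalysis_error else "reanalysis")
```

With enough independent observations the noise averages out, and whichever estimate carries a systematic artifact is the one exposed - which is the paper's move, with precipitation playing the role of `obs`.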

        • (Score: 0) by Anonymous Coward on Saturday June 29 2019, @01:43AM

          by Anonymous Coward on Saturday June 29 2019, @01:43AM (#861198)

          You didn't read TFA.

          Models were developed. They worked pretty well!

          New models were developed. They worked pretty well too! But... they reached a different conclusion.

          Researchers chose a physical phenomenon which is tightly coupled to some core aspects of the system and easy to measure, and checked both models with it, and found that the old models got it right and the new models got it wrong. So they concluded the old ones are better.

          There's no coinflipping. There's high dimensional curve fits that overlap for much of the historic data. There are infinite other (higher and higher frequency containing) solutions. For a car analogy, it's like seeing a jerk is trying to hit you, and choosing between gas and brake and steer. There might be multiple possible solutions but when you shoulder check and see a semi coming fast you rule out the swerve into that lane.
