Journal by khallow
I ran across a recent study ("Knowledge overconfidence is associated with anti-consensus views on controversial scientific issues", published July 2022) with some interesting results. The study asked subjects to rate their opposition to a scientific claim that is generally held to be true (a "consensus"), then asked them to evaluate their own knowledge of the area, and finally tested their actual knowledge of it. This yields a three-value data set per subject: "opposition", "subjective knowledge", and "objective knowledge". The opposition questions are listed in the study itself.

For example, one on GM foods:

"Consuming foods with ingredients derived from GM crops is no riskier than consuming foods modified by conventional plant improvement techniques."

The primary conclusion is that for a number of claims generally held to be true by consensus, opposition shows interesting correlations: it correlates negatively with objective knowledge (what the final test indicated the subject actually knew about the field) and positively with subjective knowledge (what the subject thought they knew). Those who were most opposed tended to exhibit a large gap between what they knew and what they thought they knew.
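
A minimal sketch of how such correlations could be computed from the three per-subject scores. The data below is fabricated toy data just to exercise the computation, and the variable names are mine, not the study's:

```python
# Sketch: computing study-style correlations from per-subject scores.
# All data here is fabricated toy data; only the computation is the point.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n = 500

objective = rng.normal(size=n)                     # scored knowledge test
subjective = 0.4 * objective + rng.normal(size=n)  # self-rated knowledge
opposition = 0.5 * subjective - 0.5 * objective + rng.normal(size=n)

r_obj, _ = pearsonr(opposition, objective)    # expected: negative
r_subj, _ = pearsonr(opposition, subjective)  # expected: positive
gap = subjective - objective                  # "thought they knew" minus "knew"
r_gap, _ = pearsonr(opposition, gap)

print(f"opposition vs objective knowledge:  r = {r_obj:+.2f}")
print(f"opposition vs subjective knowledge: r = {r_subj:+.2f}")
print(f"opposition vs knowledge gap:        r = {r_gap:+.2f}")
```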

Here's the list of subjects and then I'll get to the punch line:

  • GM foods
  • Vaccination
  • Homeopathic medicine
  • Nuclear power
  • Climate change
  • Big bang
  • Evolution

Which one wasn't like the others?

Climate change!

The question was in the same vein as the rest:

"Most of the warming of Earth’s average global temperature over the second half of the 20th century has been caused by human activities."

Unlike every other field listed in this research, there was a slight positive correlation between opposition to the claim and objective knowledge of the subject (see figure 2).

What other consensus viewpoints are out there where agreement with the consensus correlates with greater ignorance of the subject? Economics, maybe?

  • (Score: 1) by khallow (3766) Subscriber Badge on Wednesday August 24 2022, @11:01AM (#1268209) Journal
    On that last point, we have some interesting features. First, a graph titled "Forecast evaluation for models run in 2004" which doesn't mention a source or what adjustments were made. Second, the final bit makes a claim that 17 models were evaluated for accuracy. What they neglected to tell you is that the actual number is far greater than 17. In the actual research [wiley.com], look at figure 2 on page 5. Note the labels "IPCC FAR, IPCC SAR, and IPCC TAR". Those refer to assessment reports by the IPCC (FAR - First Assessment Report, etc). Those are aggregations of dozens of models each not single models. (Elsewhere in the report was a mention of "AR4" which is the fourth Assessment Report, but it doesn't show up in this particular figure.) From page 3:

    Starting in the mid‐1990s, climate modeling efforts were primarily undertaken in conjunction with the IPCC process (and later, the Coupled Model Intercomparison Projects, CMIPs), and model projections were taken from models featured in the IPCC FAR (1990), Second Assessment Report (SAR‐IPCC, 1996), Third Assessment Report (TAR‐IPCC, 2001), and Fourth Assessment Report (AR4‐IPCC, 2007).

    So of the 17 model projections, the last four or so are actually massive aggregations of models. And the report found problems with the IPCC aggregates. On page 5:

    However, the remaining eight models—RS71, H81 Scenario 1, H88 Scenarios A, B, and C, FAR, MS93, and TAR—had projected forcings significantly stronger or weaker than observed (Figure 1).

    For IPCC FAR and IPCC TAR (note they are referred to as "models"), it was stronger, a typical symptom of "running hot" bias. Notice also the use of an "implied TCR" metric throughout the paper to ignore these differences.

    We compared observations to climate model projections over the model projection period using two approaches: change in temperature versus time and change in temperature versus change in radiative forcing (“implied TCR”). We use an implied TCR metric to provide a meaningful model‐observation comparison even in the presence of forcing differences. Implied TCR is calculated by regressing temperature change against radiative forcing for both models and observations, and multiplying the resulting values by the forcing associated with doubled atmospheric CO2 concentrations, F2x (following Otto et al., 2013):
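
    The equation itself didn't survive the copy-paste, but from that description it is just the regression slope scaled by the doubled-CO2 forcing: implied TCR = F2x × (ΔT/ΔF). Here's a quick sketch of why that normalization hides forcing errors; the numbers are made up, and F2x = 3.7 W/m² is my assumption, not a value taken from the paper:

    ```python
    # Sketch of the "implied TCR" calculation as described in the quoted
    # passage (my reading of it, not the paper's actual code).
    import numpy as np

    F2X = 3.7  # W/m^2 for doubled CO2; commonly cited value, assumed here

    def implied_tcr(forcing, temperature):
        """Regress temperature change on radiative forcing, then scale
        the slope by the doubled-CO2 forcing F2x."""
        slope, _intercept = np.polyfit(forcing, temperature, 1)  # K per W/m^2
        return slope * F2X                                       # K per CO2 doubling

    # Toy example: a model whose forcing ran 50% stronger than observed
    # warms 50% faster in temperature-vs-time, yet scores identically on
    # implied TCR, because the slope normalizes the forcing away.
    forcing_obs = np.linspace(0.0, 1.5, 50)   # W/m^2, made-up ramp
    temp_obs = 0.5 * forcing_obs              # K, toy response

    forcing_model = 1.5 * forcing_obs         # model "running hot" on forcing
    temp_model = 0.5 * forcing_model          # proportionally more warming

    print(f"implied TCR (obs):   {implied_tcr(forcing_obs, temp_obs):.2f} K")
    print(f"implied TCR (model): {implied_tcr(forcing_model, temp_model):.2f} K")
    # Both print ~1.85 K: the model's too-strong forcing washes out entirely.
    ```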

    A typical policy question is: if you emit a certain amount of CO2 equivalent, what global temperature increase do you get? By going to a purely radiative-forcing viewpoint, they ignore both the increased removal of greenhouse gases from the atmosphere and the positive feedback mechanisms that are supposed to produce higher temperatures over the long term. But that's a huge part of the policy decision!

    To be blunt, your link was significantly dishonest. It purported to compare 17 "models", but it's actually comparing hundreds, with at least two large aggregates showing the consistent bias problems I discussed earlier, and it emphasizes metrics that hide those bias issues.