
posted by janrinok on Tuesday February 13 2018, @03:17AM
from the I-guess-so dept.

An increasing number of businesses invest in advanced technologies that can help them forecast the future of their workforce and gain a competitive advantage. Many analysts and professional practitioners believe that, with enough data, algorithms embedded in People Analytics (PA) applications can predict all aspects of employee behavior: from productivity, to engagement, to interactions and emotional states.

Predictive analytics powered by algorithms are designed to help managers make decisions that favourably impact the bottom line. The global market for this technology is expected to grow from US$3.9 billion in 2016 to US$14.9 billion by 2023.

Despite the promise, predictive algorithms are as mythical as the crystal ball of ancient times.

[...] To manage effectively and develop their knowledge of current and likely organisational events, managers need to learn to build and trust their instinctual awareness of emerging processes rather than rely on algorithmic promises that cannot be realised. The key to effective decision-making is not algorithmic calculations but intuition.

https://theconversation.com/predictive-algorithms-are-no-better-at-telling-the-future-than-a-crystal-ball-91329

What do you people think about predictive algorithms? Mumbo jumbo or ??


Original Submission

 
  • (Score: 1) by khallow (3766) Subscriber Badge on Wednesday February 14 2018, @06:42PM (#637739)

    Note that the only attempts to evaluate such predictions rapidly devolved into apologism.

    I'm noticing nothing of the kind, but if you feel better imagining such things, who am I to deny your pleasure?

    Fortunately, that is something we can correct. Consider this example [carbonbrief.org]. At numerous points, the article speaks of the long-term temperature sensitivity to a doubling of CO2 ("climate sensitivity").

    As with Sawyer, Broecker used an equilibrium climate sensitivity of 2.4C per doubling of CO2. Broecker assumed that the Earth instantly warms up to match atmospheric CO2, while modern models account for the lag between how quickly the atmosphere and oceans warm up. (The slower heat uptake by the oceans is often referred to as the “thermal inertia” of the climate system.)

    NASA’s Dr James Hansen and colleagues published a paper in 1981 that also used a simple energy balance model to project future warming, but accounted for thermal inertia due to ocean heat uptake. They assumed a climate sensitivity of 2.8C per doubling CO2, but also looked at a range of 1.4-5.6C per doubling.

    The FAR gave a best estimate of climate sensitivity as 2.5C warming for doubled CO2, with a range of 1.5-4.5C. These estimates are applied to the BAU scenario in the figure below, with the thick black line representing the best estimate and the thin dashed black lines representing the high and low end of the climate sensitivity range.

    Throughout the article there are all sorts of rationalizations for why the models are in error. This leads to the conclusion:

    Climate models published since 1973 have generally been quite skillful in projecting future warming. While some were too low and some too high, they all show outcomes reasonably close to what has actually occurred, especially when discrepancies between predicted and actual CO2 concentrations and other climate forcings are taken into account.

    What is skillfully ignored here is that the actual heating has been significantly less than the quoted "climate sensitivity" figures would imply. For example, from 1970 to 2017, the various measures of global mean temperature went up about 0.65 C (as shown in that link, it has since gone up another 0.2 C). Eyeballing this figure [www.ipcc.ch], total anthropogenic contributions (in CO2-equivalent emissions) went from 27 GtC in 1970 to 49 GtC in 2010. Superficially, that would correspond to 0.75-1 C per doubling of CO2 equivalent (roughly 0.86 of a doubling against a 0.65 to 0.85 C increase in temperature). This is the great problem that has resulted in the quest for the "missing heat".
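
    A quick back-of-the-envelope check of that arithmetic, as a Python sketch (the 27 and 49 GtC figures and the 0.65-0.85 C temperature range are taken straight from the numbers above; warming divided by the effective number of doublings is the usual simplification):

import math

# Figures quoted above: CO2-equivalent anthropogenic contributions rose from
# roughly 27 GtC (1970) to 49 GtC (2010), and the comment compares that with
# a global mean temperature rise somewhere between 0.65 C and 0.85 C.
emissions_1970 = 27.0
emissions_2010 = 49.0
delta_t_range = (0.65, 0.85)  # degrees C

# Fraction of a doubling represented by that increase in contributions.
doublings = math.log2(emissions_2010 / emissions_1970)
print(f"fraction of a doubling: {doublings:.2f}")  # ~0.86

# Implied sensitivity = observed warming per effective doubling.
for dt in delta_t_range:
    print(f"dT = {dt} C  ->  {dt / doublings:.2f} C per doubling")
# prints roughly 0.76 and 0.99, i.e. the 0.75-1 C per doubling range above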

    But of course one would also need to include cooling agents such as stratospheric sulfur dioxide and particles like soot. Accounting for those tends to get you somewhere near [www.ipcc.ch] the current estimate of radiative forcing from CO2, which I strongly suspect is why that parameter is still considered in a vacuum, as in the above article.

    If we take mean CO2 concentrations for 1970 and 2017 (325 ppm in 1970 and my estimate of 406 ppm for 2017), we get 0.32 of an effective doubling (which works out to 2.65 C per doubling using the high-end 2017 temperature; a similar calculation yields 2.5 C in 2010). Sure looks nice, until you remember the roughly one-third (and growing) portion of global warming that doesn't come from CO2.
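
    The same sketch with the CO2 concentrations alone (325 ppm, 406 ppm, and the 0.85 C high-end rise are the values used above; reproducing the 2010 figure would need a 2010 CO2 concentration, which isn't stated here, so I leave it out):

import math

co2_1970 = 325.0          # ppm, as stated above
co2_2017 = 406.0          # ppm, the 2017 estimate used above
delta_t_high_2017 = 0.85  # degrees C, high-end temperature rise since 1970

doublings = math.log2(co2_2017 / co2_1970)     # ~0.32 of a doubling
sensitivity = delta_t_high_2017 / doublings    # ~2.65 C per doubling
print(f"{doublings:.2f} doublings -> {sensitivity:.2f} C per doubling")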

    What gets interesting is that when I look for independent confirmation of the global mean temperature graphs contained in the first article, I find they tend to be fairly far off. For example, this link [cet.edu], which allegedly shows NASA GISS data, shows a bit under a 0.6 C climb in temperature between 1970 and 2010. I estimate that's almost 10% less than the apologist article shows for the same GISS data. That alone drops the implied temperature sensitivity through 2010 by the same amount (to roughly 2.3 C per doubling, I estimate). Again, this is just with CO2, ignoring the far greater rate at which the non-CO2 sources are growing.
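
    That adjustment is just a linear rescaling, since the doubling fraction doesn't change (a sketch only; the ~0.65 C article value for 1970-2010 is inferred from the "almost 10% less" comparison above, not stated directly):

# Sensitivity is warming divided by doublings, so for a fixed doubling
# fraction it scales linearly with the temperature change used.
sensitivity_2010 = 2.5  # C per doubling, from the CO2-only calculation above
dt_article = 0.65       # C over 1970-2010, implied by the "almost 10% less" remark
dt_cet = 0.60           # C over 1970-2010, per the cet.edu plot

adjusted = sensitivity_2010 * (dt_cet / dt_article)
print(f"adjusted sensitivity: {adjusted:.2f} C per doubling")  # ~2.31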

    Another indication that something is up comes from the models' accuracy over past time periods. From the fourth IPCC assessment report onward, the models attempt to simulate small-scale variation as well. For example, all the subsequent model aggregations pick up the cooling that happened around 1992, during an El Nino [noaa.gov] year, but then completely fail to model any such variation once they get to the future.

    If that variation were genuinely being predicted rather than fitted after the fact, one should see similar inaccuracies in the past, because the physics and the models didn't change. That's how it works for weather models, for example: the same chaos that makes it so hard to model a few days into the future also makes it hard to model a few days into the past.

    When all that mattered was that the models predicted extensive warming in the future, they were significantly off in the near term. Notice that they suddenly became more accurate starting in 2007, once criticism of past models surfaced, yet the future predictions of extreme global warming remain unchanged. Climatology is one of those fields where one can be presented with contradictory evidence and yet, beyond some superficial changes, not bother to fix the underlying models that led to the error.