An increasing number of businesses invest in advanced technologies that can help them forecast the future of their workforce and gain a competitive advantage. Many analysts and professional practitioners believe that, with enough data, algorithms embedded in People Analytics (PA) applications can predict all aspects of employee behavior: from productivity, to engagement, to interactions and emotional states.
Predictive analytics powered by algorithms are designed to help managers make decisions that favourably impact the bottom line. The global market for this technology is expected to grow from US$3.9 billion in 2016 to US$14.9 billion by 2023.
Despite the promise, predictive algorithms are as mythical as the crystal ball of ancient times.
[...] To manage effectively and develop their knowledge of current and likely organisational events, managers need to learn to build and trust their instinctual awareness of emerging processes rather than rely on algorithmic promises that cannot be realised. The key to effective decision-making is not algorithmic calculations but intuition.
What do you people think about predictive algorithms? Mumbo jumbo, or something more?
(Score: 1) by khallow on Wednesday February 14 2018, @06:42PM
Fortunately, that is something we can correct. Consider this example [carbonbrief.org]. At numerous points, the article speaks of the long term temperature sensitivity of a doubling of CO2 ("climate sensitivity").
Throughout the article there are all sorts of rationalizations for why the models are in error, which the article uses to reach its conclusion.
What is skillfully ignored here is that the actual heating has been significantly less than the claimed "climate sensitivity". For example, from 1970 to 2017, the various measures of global mean temperature went up about 0.65 C (as shown in that link, it has since gone up another 0.2 C). Eyeballing figure [www.ipcc.ch], total anthropogenic contributions (in equivalent of CO2 emissions) went from 27 GtC in 1970 to 49 GtC in 2010. Superficially, that would correspond to 0.75-1 C per doubling of CO2 equivalent (roughly 0.86 of a doubling against a 0.65 to 0.85 C increase in temperature). This is the great problem that has resulted in the quest for the "missing heat".
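That back-of-the-envelope arithmetic is easy to check. A quick sketch in Python, using the numbers above (27 and 49 GtC of CO2-equivalent emissions, and a 0.65 to 0.85 C observed rise — my reading of those figures, not the article's):

```python
import math

# Fraction of a doubling represented by going from 27 to 49 GtC
# of CO2-equivalent emissions (assumption: log2 of the ratio is
# the right measure, as for radiative forcing from concentrations).
doublings = math.log2(49 / 27)
print(f"doublings: {doublings:.2f}")  # ~0.86

# Implied warming per doubling for the low and high temperature estimates.
for delta_t in (0.65, 0.85):
    print(f"{delta_t} C rise -> {delta_t / doublings:.2f} C per doubling")
```

That reproduces the 0.75-1 C per doubling range quoted above.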
But of course one would need to include globally cooling stratospheric gases and particles like sulfur dioxide and soot. That tends to get you somewhere near [www.ipcc.ch] the current estimate of radiative forcing from CO2, which I strongly suspect is why that parameter is still considered in a vacuum, as in the above article.
If we take mean CO2 concentrations for 1970 and 2017 (325 ppm in 1970 and my estimate of 406 ppm for 2017), we get an effective 0.32 of a doubling (that works out to 2.65 C per doubling for the high-end 0.85 C temperature increase through 2017; a similar calculation yields 2.5 C for 2010). Sure looks nice, until you remember the third and growing portion of global warming that doesn't come from CO2.
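Same sanity check for the CO2-only version, assuming the concentrations I used above (325 ppm for 1970, my 406 ppm estimate for 2017, and the 0.85 C high-end warming):

```python
import math

# Effective fraction of a CO2 doubling between 1970 and 2017.
doublings = math.log2(406 / 325)
print(f"effective doublings: {doublings:.2f}")  # ~0.32

# Implied sensitivity for the high-end 0.85 C warming estimate.
print(f"implied sensitivity: {0.85 / doublings:.2f} C per doubling")  # ~2.65
```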
Things get interesting when I look for independent confirmation of the global mean temperature graphs in the first article: they tend to be fairly far off. For example, this link [cet.edu], which allegedly shows NASA GISS data, shows a bit under a 0.6 C climb in temperature between 1970 and 2010. I estimate that's almost 10% less than the apologist article shows for the same GISS data. That alone drops the temperature sensitivity through 2010 by the same proportion (to roughly 2.3 C per doubling, I estimate). Again, this is just with CO2, ignoring the far greater rate at which non-CO2 sources are growing.
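The "same proportion" claim is just linear scaling, which is worth making explicit (assumption on my part: the inferred C-per-doubling scales linearly with the observed temperature rise):

```python
# Earlier CO2-only estimate of sensitivity through 2010, in C per doubling.
sensitivity_2010 = 2.5

# If the independent GISS record shows ~0.6 C instead of ~0.65 C of warming
# over 1970-2010, the implied sensitivity shrinks by the same ratio.
scale = 0.6 / 0.65
print(f"scaled sensitivity: {sensitivity_2010 * scale:.1f} C per doubling")  # ~2.3
```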
Another indication something is up comes from the accuracy of the models over past time periods. From the fourth IPCC assessment report onward, the models attempt to simulate small-scale variation as well. For example, all the subsequent aggregations of models pick up the cooling that happened around 1992, during an El Nino [noaa.gov] year, and then completely fail to model any such variation once they reach the future.
One should see similar inaccuracies in the past because the physics and models didn't change. That's how it would work for weather models, for example. The same chaos that makes it so hard to model a few days into the future also makes it hard to model a few days into the past.
Back when all that mattered was that the models predicted extensive warming, they were significantly off even in the near future. Notice that they suddenly became more accurate starting in 2007, once criticism of past models surfaced, yet the predictions of extreme future global warming remained unchanged. Climatology is one of those fields where one can be presented with contradictory evidence and still not bother to change the underlying models that led to the error, beyond some superficial adjustments.