(CNN)This week, a trillion-ton hunk of ice broke off Antarctica.
You probably know that. It was all over the Internet.
Among the details that have been repeated ad nauseam: The iceberg is nearly the size of Delaware, which prompted some fun musing on Twitter about where exactly Delaware is and how anyone is supposed to approximate the square footage of that US state. The ice, which has been named A68, represents more than 12% of the Larsen C ice shelf, a sliver on the Antarctic Peninsula. And most important: None of this has anything to do with man-made climate change.
The problem: That last detail -- the climate one -- is misleading at best.
At worst, it's wrong.
Some scientists think this has a lot to do with global warming.
I spent most of Thursday on the phone with scientists, talking to them about the huge iceberg off Antarctica and what it means. Here are my five takeaways.
http://www.cnn.com/2017/07/14/world/sutter-iceberg-antarctica-climate-change/index.html
[Warning: CNN autoplay video - Ed]
(Score: 2) by maxwell demon on Monday July 17 2017, @06:49PM (5 children)
Not instead. In addition. If you measure the physical properties of the coin, you can say that it is not entirely symmetric, but you will not convince those who deny that the asymmetry actually leads to a bias in heads vs. tails. After all, coin tossing is a complex process, and lots of things, like air currents in the room, influence it; how can you say what this asymmetry really does to the probabilities? Sure, you can run simulations, but then what about the effects you neglected because your computer is only so powerful?
The Tao of math: The numbers you can count are not the real numbers.
(Score: 0) by Anonymous Coward on Monday July 17 2017, @08:00PM (4 children)
Yes, measure the physical properties of the coin and use them to predict that the coin should come up heads x% of the time. Then, when you test the coin by flipping it, the observed frequency should be close to the value you predicted (rather than the NHST approach of testing against "50% heads").
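A minimal sketch of that approach (the 0.52 prediction is made up for illustration, and the flips are simulated; in reality the predicted bias would come from actual physical measurements of the coin):

```python
import random

def flip_coin(p_heads, n, rng):
    """Simulate n tosses of a coin that lands heads with probability p_heads."""
    return sum(rng.random() < p_heads for _ in range(n))

rng = random.Random(42)
predicted = 0.52   # hypothetical bias predicted from a physical model of the coin
n = 10_000
observed = flip_coin(predicted, n, rng) / n

# The check is against the model's prediction, not against the null value 0.5.
print(f"predicted {predicted}, observed {observed:.4f}")
```

The point being that the model makes a quantitative prediction, and the tosses either confirm it or they don't.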
(Score: 1) by khallow on Tuesday July 18 2017, @02:04AM (3 children)
We're ignoring that NHST works here without the need to build a model of the coin, which may be costly and can still miss biases (particularly if those biases are designed to evade your measurements of the coin's physical properties). And what does "test the coin by flipping it should be close to this value you have predicted" mean? NHST in disguised form.
NHST has its place. Here, low cost testing of numerous supposedly unbiased, identical, independent observations is one of its more useful roles.
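For comparison, the low-cost test described here can be sketched as an exact two-sided binomial test against the null "p = 0.5" (the 60-heads-in-100-tosses figure below is an invented example, not from the thread):

```python
from math import comb

def binom_pmf(k, n, p=0.5):
    """Probability of exactly k heads in n tosses of a coin with P(heads) = p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def nhst_p_value(heads, n):
    """Two-sided exact binomial test of the null 'p = 0.5':
    sum the probabilities of all outcomes at least as extreme as the one seen."""
    seen = binom_pmf(heads, n)
    return sum(binom_pmf(k, n) for k in range(n + 1)
               if binom_pmf(k, n) <= seen)

# 60 heads in 100 tosses: p-value ends up just above the usual 5% cutoff.
p = nhst_p_value(60, 100)
print(f"p-value: {p:.4f}")
```

Note all this needs is a pile of tosses: no model of the coin at all.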
(Score: 0) by Anonymous Coward on Tuesday July 18 2017, @03:32AM (2 children)
No, it is not the same at all. In my suggested case we build a physical model of the coin and say "the coin is biased by this much"; if the model then predicts the correct amount of bias (with "enough" precision and accuracy), we can trust the model's claim that the coin is actually biased.
In the NHST case we assume "the coin has exactly zero bias" and test this just by checking the results of flipping the coin. As maxwell demon pointed out, there are many schemes and environmental effects that can produce a bias while the coin itself is just fine.
The second difference is that once your model has been tested repeatedly and shown to work, you no longer "need many tosses to see the difference between the unbiased and the biased coin." In the NHST case you must keep collecting huge amounts of data every time; no cumulative knowledge is gained.
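The data-hunger point can be made concrete with the textbook normal-approximation sample-size formula: detecting a heads-probability of 0.5 + b at the 5% level with about 80% power needs roughly (1.96 + 0.84)² · 0.25 / b² tosses, so halving the bias quadruples the data needed (these numbers come from that standard approximation, not from anything in the thread):

```python
from math import ceil

def nhst_sample_size(bias, z_alpha=1.96, z_power=0.84):
    """Tosses needed to detect P(heads) = 0.5 + bias at the 5% level with
    ~80% power, using the normal approximation with sigma ~= 0.5."""
    return ceil((z_alpha + z_power) ** 2 * 0.25 / bias ** 2)

for bias in (0.05, 0.01, 0.001):
    print(f"bias {bias}: ~{nhst_sample_size(bias)} tosses")
```

And the whole cost recurs for every new coin, since nothing carries over from one test to the next.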
(Score: 1) by khallow on Tuesday July 18 2017, @11:40AM (1 child)
Again, my claim is that this is NHST. You describe hypothesis testing, with an implicit comparison to the null hypothesis.
Those schemes and environmental effects will be just as much a problem when you test your model's predictions.
(Score: 0) by Anonymous Coward on Tuesday July 18 2017, @02:30PM
NHST is not the same as hypothesis testing, which is not the same as significance testing. What I describe is closest to Fisher's significance testing; there is a huge literature on the mass confusion caused by the mash-up of hypothesis testing and significance testing. If you were trained in the last 50 years to do applied stats, there is a 99% chance you were taught NHST (which is wrong).
The schemes and environmental effects would not be a "problem"; they are further parameters to include in the model once discovered. There is no way to do this in the NHST case.