
posted by Fnord666 on Monday June 25 2018, @03:11AM   Printer-friendly
from the I-disagree dept.

Nathan Myhrvold: 'Nasa doesn't want to admit it's wrong about asteroids'

Nathan Myhrvold is the former chief technology officer of Microsoft, founder of the controversial patent asset company Intellectual Ventures and the main author of the six-volume, 2,300-page Modernist Cuisine cookbook, which explores the science of cooking. Currently, he is taking on Nasa over its measurement of asteroid sizes.

For the past couple of years, you've been fighting with Nasa about its analysis of near-Earth asteroid size. You've just published a 33-page scientific paper [open, DOI: 10.1016/j.icarus.2018.05.004] [DX] criticising the methods used by its Neowise project team to estimate the size and other properties of approximately 164,000 asteroids. You have also published a long blog post explaining the problem. Where did Nasa go wrong, and is it over- or underestimating size?

Nasa's Wise space telescope [Wide-field Infrared Survey Explorer] measured the asteroids in four different wavelengths in the infrared. My main beef is with how they analysed that data. What I think happened is they made some poor choices of statistical methods. Then, to cover that up, they didn't publish a lot of the information that would help someone else replicate it. I'm afraid they have both over- and underestimated. The effect changes depending on the size of the asteroid and what it's made of. The studies were advertised as being accurate to plus or minus 10%. In fact, it is more like 30-35%. That's if you look overall. If you look at specific subsets some of them are off by more than 100%. It's kind of a mess.

[...] Nasa's reported response has been to stand by the data and the analysis performed by the Neowise team. Can we trust Nasa after this?

They need to have an independent investigation of these results. When my preprint paper came out in 2016, they said: "You shouldn't believe it because it's not peer-reviewed." Well, now it has been peer reviewed. How Nasa handles it at this stage will be very telling. People have suggested to me the reason Nasa doesn't want to admit that anything is wrong with the data is that they're afraid it would hurt the chances of Neocam, an approximately $500m (£380m) telescope to find asteroids that might hit Earth proposed by the same group who did the Neowise analysis.

Previously: Former Microsoft Chief Technologist Criticizes NASA


Original Submission

 
  • (Score: 3, Interesting) by Runaway1956 on Monday June 25 2018, @07:34AM (2 children)

    by Runaway1956 (2926) Subscriber Badge on Monday June 25 2018, @07:34AM (#698009) Journal

    Yes, the article is a mess. But, if I even begin to understand his twisted reasoning, then the complete data set can be segregated into subsets. He seems to be saying that some of those subsets have better overall accuracy, while other subsets have worse accuracy. The reasons for those differences aren't really clear. Different methods were used to estimate sizes? Different instruments were used over time? Different people did the math work? I suppose that a change in any of those would create a new subset of data. Or, maybe they simply migrated from a BSD to a Microsoft machine to do their number crunching, and things went to shit.

    The most basic claim here is that some of the data is more reliable than the rest of it. That might be believable. But it's up to the claimant to communicate his claims. It's not our job to read his mind!

  • (Score: 2, Informative) by Anonymous Coward on Monday June 25 2018, @11:30AM (1 child)

    by Anonymous Coward on Monday June 25 2018, @11:30AM (#698076)

    The NEOWISE results were obtained by the application of 10 different modeling methods, many of which are not adequately explained or even defined, to 12 different combinations of WISE band data. More than half of NEOWISE results are based on a single band of data. The majority of curve fits to the data in the NEOWISE results are of poor quality, frequently missing most or all of the data points on which they are based. Complete misses occur for about 30% of single-band results, and among the results derived from the most common multiple-band combinations, about 43% miss all data points in at least one band. The NEOWISE data analysis relies on assumptions that are in many cases inconsistent with each other. A substantial fraction of WISE data was systematically excluded from the NEOWISE analysis. Building on methods developed by Hanuš et al. (2015), I show that error estimates for the WISE observational data were not well characterized, and all observations have true uncertainty at least a factor of 1.3–2.5 times larger than previously described, depending on the band. I also show that the error distribution is not well fit by a normal distribution. These findings are important because the Monte Carlo error-analysis method used by the NEOWISE project depends on both the observational errors and the normal distribution.

    https://www.sciencedirect.com/science/article/pii/S0019103516307643 [sciencedirect.com]

    I've heard about this before. He basically claims they didn't really try when coming up with the statistical models, instead just used some default procedures. So they are underestimating the uncertainty between the observations (albedo, etc.) and estimated size.
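    The core statistical point in the abstract quoted above can be sketched with a toy Monte Carlo. This is not the NEOWISE pipeline: the square-root flux-to-diameter relation, the flux value, and the error bars below are all illustrative assumptions, chosen only to show how resampling with an error bar that is ~2x too small (per the paper's factor of 1.3-2.5 claim) produces an uncertainty estimate that is correspondingly too small.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy stand-in for a thermal model: estimated diameter grows as the
    # square root of measured infrared flux. Purely illustrative.
    def diameter_from_flux(flux):
        return np.sqrt(flux)

    true_flux = 100.0
    assumed_sigma = 2.0                # error bar the analysis believes
    true_sigma = 2.0 * assumed_sigma   # actual scatter, ~2x larger

    n = 100_000

    # Monte Carlo error analysis as described: resample the observation
    # using the *assumed* error bar and take the spread of the result.
    mc_diam = diameter_from_flux(true_flux + rng.normal(0, assumed_sigma, n))
    reported_unc = mc_diam.std()

    # The spread the same procedure would report with the true, larger errors.
    true_diam = diameter_from_flux(true_flux + rng.normal(0, true_sigma, n))
    actual_unc = true_diam.std()

    print(f"reported uncertainty: {reported_unc:.3f}")
    print(f"actual uncertainty:   {actual_unc:.3f}")
    print(f"underestimated by a factor of ~{actual_unc / reported_unc:.1f}")
    ```

    The same mechanism applies to the normality claim: if the real errors have heavier tails than the normal distribution fed to the resampler, the reported confidence intervals are too narrow even when the width is right.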

    • (Score: 0) by Anonymous Coward on Monday June 25 2018, @11:34AM

      by Anonymous Coward on Monday June 25 2018, @11:34AM (#698078)

      typo:

      ...procedures. So they are underestimating the uncertainty between mapping the observations (albedo, etc.) to estimated size.