
posted by janrinok on Tuesday April 03 2018, @02:25PM   Printer-friendly
from the yeah-but-how-many-rats-own-phones? dept.

This week, following three days of live-broadcast peer review sessions, experts concluded that a pair of federal studies show “clear evidence” that cell phone radiation caused heart cancer in male rats.

This substantially changes the debate on whether cell phone use is a cancer risk. Up until this point, the federal government and cell phone manufacturers operated on the assumption that cell phones could not, by their very nature, cause cancer, because they emit non-ionizing radiation. Whereas ionizing radiation—the kind associated with x-rays, CT scans, and nuclear power plants, among others—definitely causes cancer at high enough doses, non-ionizing radiation was believed not to carry enough energy to break chemical bonds. That meant it couldn’t damage DNA, and therefore couldn’t lead to mutations that cause cancer.
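
For a sense of the energy gap behind that assumption, here is a rough back-of-the-envelope check in Python. The frequencies and bond energy are illustrative assumptions, not figures from the studies:

# Rough comparison of a cell-phone photon's energy with a typical chemical
# bond energy. Numbers are illustrative assumptions, not from the NTP studies.
PLANCK_H = 6.626e-34            # Planck constant, J*s
EV_PER_JOULE = 1.0 / 1.602e-19  # conversion factor from joules to electron-volts

def photon_energy_ev(frequency_hz):
    """Energy of a single photon at the given frequency, in electron-volts."""
    return PLANCK_H * frequency_hz * EV_PER_JOULE

cell_phone_ev = photon_energy_ev(1.9e9)    # ~1.9 GHz cellular band
deep_uv_ev = photon_energy_ev(3.0e15)      # deep ultraviolet, roughly where ionization begins
bond_ev = 3.6                              # typical covalent bond, a few eV

print(f"Cell phone photon: {cell_phone_ev:.1e} eV")
print(f"Deep-UV photon:    {deep_uv_ev:.1f} eV")
print(f"A bond is ~{bond_ev / cell_phone_ev:.0e} times more energetic than one phone photon")

By this arithmetic a single cellular-band photon falls short of typical bond energies by five to six orders of magnitude, which is the basis of the long-standing assumption described above.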

But the pair of studies by the US National Toxicology Program found “clear evidence” that exposure to radiation caused heart tumors in male rats, and found “some evidence” that it caused tumors in the brains of male rats. (Both are positive results; the NTP uses the labels “clear evidence,” “some evidence,” “equivocal evidence” and “no evidence” when making conclusions.)

Tumors were found in the hearts of female rats, too, but they didn’t rise to the level of statistical significance and the results were labeled “equivocal”; in other words, the researchers couldn’t be sure the radiation was what caused the tumors.

The next scientific step will be to determine what this means for humans. The peer-reviewed papers will be passed on to the US Food and Drug Administration, which is responsible for determining human risk and issuing any guidelines to the public, and the Federal Communications Commission, which develops safety standards for cell phones. The FDA was part of the group of federal agencies who commissioned the studies back in the early 2000s.

Ronald Melnick, the NTP senior toxicologist who designed the studies (and who retired from the agency in 2009), says it’s unlikely any future study could conclude with certainty that there is no risk to humans from cell phone use. “I can’t see proof of a negative ever arising from future studies,” Melnick says.

He believes the FDA should put out guidance based on the results of the rat studies. “I would think it would be irresponsible to not put out indications to the public,” Melnick says. “Maintain a distance from this device from your children. Don’t sleep with your phone near your head. Use wired headsets. This would be something that the agencies could do right now.”

When the draft results of the papers were published earlier this year, all results were labeled “equivocal,” meaning the study authors felt the data weren’t clear enough to determine if the radiation caused the health effects or not. But the panel of peer reviewers (among them brain and heart pathologists, toxicologists, biostatisticians, and engineers) re-evaluated the data and upgraded several of the conclusions to “some evidence” and “clear evidence.”

Peer review is a vital part of any scientific study; it brings several more lifetimes of expertise into the room to rigorously check a study for any weak points. Melnick calls the peer reviewers’ decision to change some conclusions an unusual move: “It’s quite uncommon that the peer review panel changes the final determination,” he says, noting that, if anything, he’s seen peer reviewers downgrade findings, not upgrade them. “Typically when NTP presents their findings, the peer review almost in all cases goes along with that.” In this case, the peer reviewers felt the data—when combined with their knowledge of the cancers and with the study design itself—were significant enough to upgrade several of the findings.

[...] The FDA will make the next move in determining the risk posed to humans, and how to interpret the results for the public. “We’re taking a responsible approach,” FDA’s director of the office of science and engineering, Edward Margerrison, said on Wednesday, according to the News & Observer. “We’re not gonna knee-jerk on anything.”

Related: California Issues Warning Over Cellphones; Study Links Non-Ionizing Radiation to Miscarriage


Original Submission

 
  • (Score: 5, Insightful) by rleigh on Tuesday April 03 2018, @07:28PM (3 children)

    by rleigh (4887) on Tuesday April 03 2018, @07:28PM (#662099) Homepage

    While I can't comment specifically on this study, splitting into gender groups is not uncommon, because there are many subtle and not-so-subtle gender differences which complicate experiments. For example, wound healing is slower in females compared with males. There are fundamental differences at the level of individual cells, and this could equally affect the body's ability to deal with cancers--you could see a male/female bias there. And this is fairly obvious when you look at the prevalence of different cancers, e.g. breast and prostate cancers, but also others which aren't directly related to sex hormones. In and of itself, it's not unreasonable that it might be significant for the male group and insignificant for the female group. Maybe there's a real difference there which justifies the split; it's not always "P-hacking"!
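
    To illustrate how a split can be significant in one sex but not the other, here is a small sketch with made-up tumour counts (not the NTP's actual numbers), applying the same incidence test to each group separately:

    # Hypothetical tumour counts per sex group; illustrative only,
    # not data from the NTP studies.
    from scipy.stats import fisher_exact

    groups = {
        # (tumours in exposed group, exposed total, tumours in controls, control total)
        "male":   (6, 90, 0, 90),
        "female": (2, 90, 0, 90),
    }

    for sex, (exp_tum, exp_n, ctl_tum, ctl_n) in groups.items():
        table = [[exp_tum, exp_n - exp_tum],
                 [ctl_tum, ctl_n - ctl_tum]]
        _, p = fisher_exact(table, alternative="greater")
        print(f"{sex}: one-sided Fisher exact p = {p:.3f}")

    With these invented counts the male group comes out below the usual 0.05 threshold (p ≈ 0.01) while the female group does not (p ≈ 0.25), even though both show some excess over the controls.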

  • (Score: 2) by rleigh on Tuesday April 03 2018, @09:34PM

    by rleigh (4887) on Tuesday April 03 2018, @09:34PM (#662182) Homepage

    One item of interest I'll add relating to animal experiments. A few years ago, this paper was published: https://www.nature.com/articles/nmeth.2935 [nature.com] which added an even crazier variable. Mouse experiments were found to vary depending upon whether the mice were handled by male or female researchers! This happens all the time, when the cages get cleaned, when experiments are being done, etc. And you always wear gloves. Sex hormones, scents, and other molecules diffusing from human skin actually affect the mice. There's a difference between males and females, but then you also have to factor in that female scientists may affect the mice differently depending upon their monthly cycle. That's a hideous amount of extra complexity to factor in. It's easy to criticise the use of dodgy statistics and P-hacking, but these are simply a reflection of a horrible reality: biology has huge amounts of variability, and there are many different (and often unknown) sources of it we have yet to account for. These are not simple problems; it's a lot dirtier than physics.

  • (Score: 3, Insightful) by FatPhil on Wednesday April 04 2018, @06:11AM (1 child)

    by FatPhil (863) <{pc-soylent} {at} {asdf.fi}> on Wednesday April 04 2018, @06:11AM (#662374) Homepage
    I agree that it can be justified; it's just that I demand that all statistical tests and the methodology be carved in stone before a single datum is gathered. If you're tweaking, you're cheating. Of course, you probably have prior data that informs your decision to do a new study, but that data must not be used in the new study. (And of course others can do a metastudy containing your prior data, but that's a different beast entirely.)
    --
    Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves
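
    As a small illustration of why pre-specifying the analysis matters (purely synthetic data, nothing to do with the NTP work), the sketch below simulates a true null effect and compares one pre-registered test against "keep whichever of five analyses came out best":

    # Monte Carlo illustration of how trying several analyses and keeping the
    # smallest p-value inflates the false-positive rate. Synthetic data only.
    import numpy as np
    from scipy.stats import ttest_ind

    rng = np.random.default_rng(0)
    n_trials, n_per_group, n_analyses = 2000, 30, 5
    hits_prespecified, hits_best_of = 0, 0

    for _ in range(n_trials):
        # The null is true: "exposed" and "control" come from the same distribution.
        exposed = rng.normal(size=(n_analyses, n_per_group))
        control = rng.normal(size=(n_analyses, n_per_group))
        pvals = [ttest_ind(exposed[i], control[i]).pvalue for i in range(n_analyses)]
        hits_prespecified += pvals[0] < 0.05   # the single test fixed in advance
        hits_best_of += min(pvals) < 0.05      # report whichever analysis "worked"

    print(f"Pre-specified test: {hits_prespecified / n_trials:.1%} false positives")
    print(f"Best of {n_analyses} analyses:  {hits_best_of / n_trials:.1%} false positives")

    The pre-specified test stays near the nominal 5%, while cherry-picking among five analyses pushes the false-positive rate towards 1 - 0.95^5, roughly 23%, which is the sense in which tweaking is cheating.
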
    • (Score: 2) by rleigh on Wednesday April 04 2018, @12:29PM

      by rleigh (4887) on Wednesday April 04 2018, @12:29PM (#662447) Homepage

      Definitely agreed that you should have all your analysis methodology finalised before you even start the experiment.

      While I'm not going to defend the use of poor statistics, or going on fishing expeditions with different analysis techniques until you find one with significance, I'll just mention one common complication here. The statistical power of animal studies is often intentionally weak. In countries like the UK, there's a legal requirement of the animal work licensing to reduce animal counts to the minimum needed to perform a study, and they are also expensive to keep so cost keeps an additional cap on numbers. This leaves little margin for error if you lose any animals, or the effect you are measuring is less than expected. I think it's generally a good thing ethically, to keep animal usage and suffering to a minimum, but I do think it can have a negative side in terms of the statistics you get out.