More than 200 academics have signed an open letter criticizing a controversial new statement [PDF] by the American Psychological Association suggesting a link between violent video games and increased aggression.
The APA writes:
It is the accumulation of risk factors that tends to lead to aggressive or violent behavior. The research reviewed here demonstrates that violent video game use is one such risk factor.
A positive association between violent video game use and increased aggressive behavior was found in most (12 of 14 studies) but not all studies published after the earlier meta-analyses. This continues to be a reliable finding and shows good multi-method consistency across various representations of both violent video game exposure and aggressive behavior.
However, the group of academics argued that the methodology of the research was deeply flawed, since a significant portion of the material included in the review had not been subjected to peer review. "I fully acknowledge that exposure to repeated violence may have short-term effects - you would be a fool to deny that - but the long-term consequences of crime and actual violent behaviour, there is just no evidence linking violent video games with that," said one.
"If you play three hours of Call of Duty you might feel a little bit pumped, but you are not going to go out and mug someone."
(Score: 1, Informative) by Anonymous Coward on Tuesday August 18 2015, @03:35AM
It was deeply flawed due to lack of peer review? What evidence is there that peer review is even useful? The answer is none, look it up.
Without looking at it, I am sure this research is deeply flawed, because all they did was check whether there were positive or negative correlations, without really trying to rule out other things that could explain the relationship. Likely they also did not attempt to, or were unable to, get good estimates of the file drawer effect (studies showing small effects, or effects in the "wrong" direction, do not get published).
I offer those as standard criticisms, without reading any of this literature, based purely on prior experience that people who hold up peer review as some kind of useful obstacle to overcome have no idea what they are doing. Of course, getting other people to give competent feedback on your work is useful, but that seems to have nothing to do with any formalized peer review process.
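The file drawer effect mentioned above can be illustrated with a toy simulation (all numbers here are made up for illustration, not drawn from this literature): if journals only publish studies that find a "significant" positive effect, the published record overstates the true effect even when the true effect is zero.

```python
import random
import statistics

random.seed(0)

def run_study(true_effect=0.0, n=50):
    """Simulate one study: return the observed mean effect and a crude
    one-sided 'significance' flag (observed mean > ~2 standard errors)."""
    sample = [random.gauss(true_effect, 1.0) for _ in range(n)]
    mean = statistics.mean(sample)
    se = statistics.stdev(sample) / (n ** 0.5)
    return mean, mean > 2 * se

all_effects = []
published_effects = []
for _ in range(2000):
    effect, significant = run_study()
    all_effects.append(effect)
    if significant:  # only 'positive and significant' studies reach the journals
        published_effects.append(effect)

print("true effect:             0.0")
print(f"mean over all studies:  {statistics.mean(all_effects):+.3f}")
print(f"mean of published only: {statistics.mean(published_effects):+.3f}")
```

The mean across all simulated studies hovers near the true effect of zero, while the mean of the "published" subset is noticeably positive, which is exactly the bias a meta-analysis has to estimate and correct for.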
(Score: 3, Informative) by Francis on Tuesday August 18 2015, @03:44AM
Peer review has issues, but if something isn't peer reviewed, that says something about the research. Usually it means the work is of poor quality or outside the realm of generally accepted science. It's certainly possible, and too common, for the people doing the peer review to be biased or unfairly generous toward the ideas under review, but removing even that check is going to harm the reliability of the papers being written.
(Score: 2) by Aichon on Tuesday August 18 2015, @10:10PM
What evidence is there that peer review is even useful? The answer is none, look it up.
Without looking at it I am sure this research is deeply flawed [...]
Double standard much, asking us to look for evidence without doing so yourself? And there are plenty of meta-analyses on the subject, contrary to your assertions.
But really, just think of peer review as a spam-filtering process for research papers: a good peer review process won't fix everything, but it will SIGNIFICANTLY improve the situation. Reading unfiltered stuff is generally a waste of your time, since the stuff that can't pass peer review is rarely worth the ink used to print it. That said, not all "peer review" is created equal, and some of it, admittedly, is downright crappy. But dismissing the entire idea of peer review just because we can point to cases where it's come up short is like dismissing the benefit of spam filters on e-mail just because you know a guy whose bad filter lets everything through.
During my time in grad school, my research group had a weekly seminar in which we reviewed hundreds of papers in our field (Internet research, mostly focused on search engines and related areas). We'd skewer nearly all of them for having some combination of: edge cases they hadn't considered, data that was missing, overstated assertions, or insufficient information by which to judge the claimed results. But that's to be expected, since it's basically impossible to have a perfect paper that covers everything.
Papers that we had picked up from highly-regarded or even just averagely-regarded conferences and journals in our field (i.e. ones with acceptance rates below about 15%) generally resulted in only minor quibbles from us. They were nearly always solid papers with solid research. We almost always saw room for improvement, but we rarely saw the sort of glaring issues that would cause us to question how the paper got published.
Papers from "peer-reviewed" conferences and journals we had never heard of would routinely result in multi-hour roasts, during which we'd pull out a lot of hair, gnash a lot of teeth, and do a lot of questioning about whether an advanced degree was worthwhile if it'd mean having those authors as our "peers". They were nearly always a waste of time. I recall one "peer-reviewed" paper I read that had a heading for Data and a heading for Results with nothing at all under either of them. I kid you not. And I'm sure we're all aware of robo-generated papers and the like that have managed to get published in "peer-reviewed" publications, which just goes to show that the peer review process is only as good as your peers.
Papers from non-peer-reviewed sources either followed the same pattern as the ones I just described, or they would have us slamming the plain-as-day corporate/government interests driving the obvious narrative running throughout the paper. We wouldn't even bother analyzing all of their issues. Instead, we'd get a laugh out of how poorly the soundbite-laden abstract that had made the news matched up with the shoddy-as-hell research backing it; we'd see how well we could predict which pieces of data would be conspicuously missing, since we could make a pretty good guess at what they would have shown and knew it wouldn't fit the predetermined narrative; and we'd generally learn more about the truth of the situation from what was not said than from what was. On rare occasion you'd find a diamond in the rough, but those were definitely the exceptions, not the rule.
All of which is to say, you're quite right that a paper being "peer reviewed" doesn't by itself mean much about the quality of the research, but you're quite wrong to suggest that the peer review process provides no benefit whatsoever. It's a filtering process, and like any other filtering process, it's only as good as your filter. Check the acceptance rate at the publication and how often the papers it publishes are cited in other publications with low acceptance rates, and you'll likely get a good idea of how reputable the publication is and whether you can generally trust the papers it publishes.
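That venue-vetting advice could be sketched as a toy scoring function. The weights and example numbers below are entirely invented for illustration; this is a back-of-the-envelope heuristic, not a validated metric:

```python
def venue_score(acceptance_rate, citations_from_selective_venues, total_citations):
    """Crude reputability heuristic: reward selectivity (low acceptance rate)
    and the share of citations coming from venues that are themselves selective.
    The 0.5/0.5 weights are arbitrary illustrations."""
    if total_citations == 0:
        return 0.0
    selectivity = 1.0 - acceptance_rate  # lower acceptance rate -> higher score
    cited_by_good = citations_from_selective_venues / total_citations
    return 0.5 * selectivity + 0.5 * cited_by_good

# Hypothetical venues for illustration only
print(venue_score(0.12, 800, 1000))  # selective, cited by other selective venues
print(venue_score(0.90, 10, 1000))   # accepts nearly everything, rarely cited well
```

The point is just that the two signals reinforce each other: a venue that accepts almost anything and is rarely cited by selective venues scores near zero, matching the "never heard of it" conferences described above.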
(Score: 0) by Anonymous Coward on Wednesday August 19 2015, @04:26AM
I did look; I only found papers questioning its value. I can't post a link to a lack of results. If there are plenty, share one.