
posted by cmn32480 on Saturday November 14 2015, @06:04PM
from the they-will-try-craps-next dept.

The field of psychology has recently been embarrassed by failed attempts to repeat the results of classic textbook experiments, and by a mounting realization that many papers are the result of commonly accepted statistical shenanigans rather than careful attempts to test hypotheses.

Now Ed Yong writes at The Atlantic that Anna Dreber at the Stockholm School of Economics has created a stock market for scientific publications, where psychologists bet on published studies based on how reproducible they deem the findings to be. The idea draws on Robin Hanson's classic paper "Could Gambling Save Science?", which proposed a market-based alternative to peer review called "idea futures": such a market would let scientists formally "stake their reputation" on a result and would offer clear incentives to be careful and honest while contributing to a visible, self-consistent consensus on controversial (or routine) scientific questions.

Here's how it works. Each of 92 participants received $100 to buy or sell stocks in 41 studies that were in the process of being replicated. At the start of the trading window, each stock cost $0.50. If the study replicated successfully, the stock paid out $1; if it didn't, it paid nothing. As time went by, the market price of each study rose and fell depending on how much the traders bought or sold. The participants tried to maximize their profits by betting on studies they thought would pan out, and they could see the collective decisions of their peers in real time. The final price of each stock, at the end of the two-week experiment, reflected the probability that the study would be successfully replicated, as determined by the collective actions of the traders. In the end, the markets correctly predicted the outcomes of 71 percent of the replications, a statistically significant, if not mind-blowing, score.
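
(For the curious, here is a minimal sketch of the payoff arithmetic described above, written in Python. Only the $0.50 opening price and the $1-or-nothing payout come from the article; the settle helper and the example positions are illustrative assumptions, not the actual market software used in the study.)

    # Illustrative sketch of the binary payoff rule described above. Only the
    # $0.50 opening price, the $1 payout on a successful replication, and the
    # $0 payout on a failure come from the article; the positions are hypothetical.

    def settle(shares: float, buy_price: float, replicated: bool) -> float:
        """Profit (or loss) on a position once the replication outcome is known."""
        final_value = 1.00 if replicated else 0.00
        return shares * (final_value - buy_price)

    # A trader buys 40 shares of a study they trust at $0.55 (above the $0.50
    # opening price) and 20 shares of a shakier-looking study at $0.30.
    print(settle(40, 0.55, replicated=True))    # 40 * (1.00 - 0.55) = +18.0
    print(settle(20, 0.30, replicated=False))   # 20 * (0.00 - 0.30) = -6.0

    # The closing price of each stock is read as the crowd's estimate of the
    # probability that the study will replicate, e.g. $0.71 ~ a 71% chance.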

"It blew us all away," says Dreber. "There is some wisdom of crowds; people have some intuition about which results are true and which are not," adds Dreber. "Which makes me wonder: What's going on with peer review? If people know which results are really not likely to be real, why are they allowing them to be published?"


Original Submission

 
This discussion has been archived. No new comments can be posted.
  • (Score: 3, Insightful) by frojack (1554) on Saturday November 14 2015, @08:35PM (#263442) Journal

    Just do the replication study and stop all the shenanigans with betting. Why add another avenue (and incentive) to game the system?

    --
    No, you are mistaken. I've always had this sig.

  • (Score: 0) by Anonymous Coward on Saturday November 14 2015, @10:27PM (#263484)

    If you read the paper, they write that they do not like replications and are looking to replace that part of science (not that replications are common in psych anyway...). This will just be used as a justification for continuing to produce BS.