Gambling Can Save Science!

Accepted submission by HughPickens.com http://hughpickens.com at 2015-11-13 17:25:12
News
The field of psychology has recently been embarrassed by failed attempts to repeat the results of classic textbook experiments, and by a mounting realization that many papers are the product of commonly accepted statistical shenanigans rather than careful attempts to test hypotheses. Now Ed Yong writes at The Atlantic that Anna Dreber at the Stockholm School of Economics has created a stock market for scientific publications [theatlantic.com], where psychologists bet on published studies based on how reproducible they deemed the findings to be. Based on Robin Hanson's classic paper "Could Gambling Save Science?" [gmu.edu], which proposed a market-based alternative to peer review called "idea futures," the market would allow scientists to formally "stake their reputation," offering clear incentives to be careful and honest while contributing to a visible, self-consistent consensus on controversial (or routine) scientific questions.

Here's how it works. Each of 92 participants received $100 for buying or selling stocks on 41 studies that were in the process of being replicated. At the start of the trading window, each stock cost $0.50. If the study replicated successfully, shareholders would get $1 per share. If it didn't, they'd get nothing. As time went by, the market prices for the studies rose and fell depending on how much the traders bought or sold. The participants tried to maximize their profits by betting on studies they thought would pan out, and they could see the collective decisions of their peers in real time. The final price of each stock, at the end of the two-week experiment, reflected the probability that the study would be successfully replicated, as determined by the collective actions of the traders. In the end, the markets correctly predicted the outcomes of 71 percent of the replications [pnas.org]—a statistically significant, if not mind-blowing, score. "It blew us all away," says Dreber. "There is some wisdom of crowds; people have some intuition about which results are true and which are not," adds Dreber. "Which makes me wonder: What's going on with peer review? If people know which results are really not likely to be real, why are they allowing them to be published?"
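The payoff rule described above is simple enough to sketch in a few lines of Python. This is an illustrative model of the market's incentives only, not code from the actual study; the share counts and the accuracy check below are hypothetical examples, and the 0.50/1.00 prices come from the description above.

```python
# Illustrative sketch of the replication prediction market's payoff rule.
# Not the study's actual code; numbers other than the $0.50 opening price
# and $1.00 redemption value are made up for the example.

STARTING_PRICE = 0.50        # every study's stock opens at $0.50
PAYOFF_IF_REPLICATED = 1.00  # winning shares redeem for $1
PAYOFF_IF_FAILED = 0.00      # losing shares redeem for nothing


def trader_profit(shares: int, purchase_price: float, replicated: bool) -> float:
    """Profit from holding `shares` bought at `purchase_price` per share."""
    payoff = PAYOFF_IF_REPLICATED if replicated else PAYOFF_IF_FAILED
    return shares * (payoff - purchase_price)


def market_accuracy(final_prices: list[float], outcomes: list[bool]) -> float:
    """Fraction of studies where the market's call (price above $0.50
    means 'will replicate') matched the actual replication outcome."""
    calls = [(price > STARTING_PRICE) == outcome
             for price, outcome in zip(final_prices, outcomes)]
    return sum(calls) / len(calls)


# A trader buying 20 shares at the opening price of a study that replicates
# gains 20 * (1.00 - 0.50) = $10; the same bet on a failed study loses $10.
print(trader_profit(20, STARTING_PRICE, replicated=True))
print(trader_profit(20, STARTING_PRICE, replicated=False))

# Hypothetical final prices vs. outcomes for four studies: the market is
# right on three of four, i.e. 75% accuracy in this toy example.
print(market_accuracy([0.80, 0.30, 0.65, 0.55], [True, False, True, False]))
```

Because each share redeems for $1 or $0, a final price of, say, $0.65 can be read as the crowd assigning roughly a 65 percent chance of successful replication.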

Original Submission