The scientific paper—the actual form of it—was one of the enabling inventions of modernity. Before it was developed in the 1600s, results were communicated privately in letters, ephemerally in lectures, or all at once in books. There was no public forum for incremental advances. By making room for reports of single experiments or minor technical advances, journals made the chaos of science accretive. Scientists from that point forward became like the social insects: They made their progress steadily, as a buzzing mass.
The earliest papers were in some ways more readable than papers are today. They were less specialized, more direct, shorter, and far less formal. Calculus had only just been invented. Entire data sets could fit in a table on a single page. What little "computation" contributed to the results was done by hand and could be verified in the same way.
The more sophisticated science becomes, the harder it is to communicate results. Papers today are longer than ever and full of jargon and symbols. They depend on chains of computer programs that generate data, clean up data, plot data, and run statistical models on data. These programs tend to be both so sloppily written and so central to the results that they have contributed to a replication crisis, or, put another way, to a failure of the paper to perform its most basic task: to report what you've actually discovered, clearly enough that someone else can discover it for themselves.
Source: The Scientific Paper Is Obsolete
(Score: 4, Insightful) by PiMuNu on Monday April 09 2018, @02:05PM (5 children)
> Why is that paywalled?
Almost no science done in the UK and funded by the research councils is paywalled. I cannot comment on other nations.
> Publish all of your tool chain and data on the web
Fine, but bear in mind that it does take years to understand the tool chain for any reasonably sophisticated data analysis. So what you suggest doesn't fix anything, while adding bureaucracy for the researchers.
(Score: 3, Interesting) by Justin Case on Monday April 09 2018, @02:17PM
I guess I was thinking something along the lines of a makefile, with dependencies. At least you could recreate the results even if you don't immediately understand them.
Then, potentially, gather some new observations or measurements and run it again. Different outcome? Why?
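The makefile idea can be sketched in a few lines. This is a minimal, hypothetical example (the stage names and file names are illustrative, not from any real analysis): each target is rebuilt only when it is missing or older than one of its dependencies, which is the same freshness rule make applies. Re-run it after replacing the raw data, and only the affected stages execute again.

```python
import os

def outdated(target, deps):
    """make's freshness rule: rebuild if the target is missing
    or older than any of its dependencies."""
    if not os.path.exists(target):
        return True
    t = os.path.getmtime(target)
    return any(os.path.getmtime(d) > t for d in deps)

def clean(src, dst):
    # Toy "cleaning" stage: drop blank lines from the raw data.
    with open(src) as f, open(dst, "w") as g:
        g.writelines(line for line in f if line.strip())

def summarise(src, dst):
    # Toy "analysis" stage: count the surviving records.
    with open(src) as f, open(dst, "w") as g:
        g.write(f"records: {sum(1 for _ in f)}\n")

# Hypothetical two-stage pipeline: (target, dependencies, build step).
PIPELINE = [
    ("clean.csv",   ["raw.csv"],   lambda: clean("raw.csv", "clean.csv")),
    ("summary.txt", ["clean.csv"], lambda: summarise("clean.csv", "summary.txt")),
]

def rebuild():
    # Stages are listed in dependency order, so one pass suffices.
    for target, deps, build in PIPELINE:
        if outdated(target, deps):
            build()
```

Of course a real analysis would use make itself (or a workflow tool) rather than reimplementing it, but the point stands: once the dependency graph is written down, "recreate the results" is a single command, whether or not you understand the stages yet.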
(Score: 3, Informative) by TheRaven on Monday April 09 2018, @04:11PM
sudo mod me up
(Score: 2) by Wootery on Tuesday April 10 2018, @09:06AM (2 children)
It's a good thing to make it easier to find software bugs that impact published results, no? It also makes it easier to reproduce results, by enabling other researchers to re-implement as much or as little as they like. You really think that's without any value?
(Score: 2) by PiMuNu on Tuesday April 10 2018, @10:00AM (1 child)
> You really think that's without any value?
I guess the work I do is specific to the bespoke instrumentation I use. Beyond the general algorithms (which are described in a paper somewhere), there is little use to anyone in taking that code elsewhere.
(Score: 2) by Wootery on Tuesday April 10 2018, @11:20AM
There's value in providing real working code, both in terms of quality assurance for the publication (more eyeballs) and in terms of providing direct value to others. Your point is that in your case the direct value is diminished because the code is tied to one platform, but the first advantage remains.
To fail to publish the source-code is to fail to properly document the experiment.
Implementing pseudocode correctly is not as easy as we might like to think. Annoyingly, I'm now unable to find the article, but I recall reading that most programmers introduce bugs even when given a relatively simple pseudocode-to-implementation exercise.