Large datasets and predictive analytics software are a fertile field for innovation, but while excellent open source tools like SciPy and R are freely available, the datasets are not. A Computerworld article notes that the scarcity of large, publicly available data collections means a database Netflix released for a competition half a decade ago is still constantly used in computer science research.
Australia's government does provide an easy way to find, access and reuse some public datasets, but most public and private databases are siloed away from experimenters. The Open Data Handbook offers some guidelines for defining openness in data, but little in the way of means to drive organisations to make their datasets available.
So do we need a GPL for data, and if so, what would it look like?
(Score: 0) by Anonymous Coward on Thursday March 19 2015, @09:52AM
Requirements 1, 1 and 1? I think you want to refine your numbering a bit ;-)
Anyway, I'd put different requirements:
1. The data must be available in a standardized, open and well-documented format that can be read by widely available open-source software and processed automatically by machine.
2. The provider of the data set gives everyone a worldwide, irrevocable, royalty-free license to use the data in any way, and to generate derived data from it.
3. The provider of the data set gives everyone a worldwide, irrevocable, royalty-free license to further distribute the original data set under the same conditions under which they received it.
4. The provider of the data set gives everyone a worldwide, irrevocable, royalty-free license to distribute data sets derived from that data under the same conditions as the original, provided they clearly state that the data was derived and supply any information needed to reproduce that derivation (additional data, algorithms used, etc.) under the same conditions; see the sketch below for one way to record this.
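For what it's worth, the "information needed to reproduce the derivation" in requirement 4 could be as simple as a machine-readable manifest shipped alongside the derived data set. Here's a minimal sketch in Python; all file names, field names and license wording are invented for illustration, not taken from any existing standard:

import json

# Hypothetical manifest for a derived data set, recording the license
# plus everything needed to reproduce the derivation (requirement 4).
manifest = {
    "title": "city-temperatures-monthly-means",
    "license": ("worldwide, irrevocable, royalty-free; redistribution and "
                "derivation permitted under these same conditions"),
    "derived": True,
    "derived_from": "city-temperatures-daily-v2.csv",
    "derivation": {
        "description": "monthly arithmetic mean of daily readings",
        "software": "aggregate.py",  # the script used, distributed alongside
        "parameters": {"group_by": "month", "statistic": "mean"},
    },
}

# Write the manifest next to the data in an open, standardized format (JSON),
# so requirement 1 holds for the provenance record as well as the data.
with open("MANIFEST.json", "w") as f:
    json.dump(manifest, f, indent=2)

Anyone receiving the derived set can then check the manifest, fetch the original data and rerun the stated derivation, which is about as close to "reproducible" as a license text alone can get you.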