Large datasets and predictive analytics software are a fertile field for innovation, but while excellent open source tools like SciPy, R, etc. are freely available, the datasets are not. A Computerworld article notes that the scarcity of large, publicly available data collections has led to the dataset Netflix released for a competition half a decade ago still being used constantly in computer science research.
Australia's government does provide an easy way to find, access and reuse some public datasets, but most public and private databases are siloed away from experimenters. The Open Data Handbook offers some guidelines for defining openness in data, but says little about how to drive organisations to make their datasets available.
So do we need a GPL for data, and if so, what would it look like?
(Score: 2) by Nerdfest on Thursday March 19 2015, @09:59AM
What I'm saying is that disparate data sources can be combined to reduce anonymity whether they're open or not. Just because the data's not open and you don't know who has what data doesn't mean it isn't happening. It just means that you don't know who's doing it.
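The combination attack the commenter describes is essentially a linkage attack: two datasets that look anonymous in isolation can be joined on shared quasi-identifiers to re-identify people. A minimal sketch, using entirely made-up records and field names for illustration:

```python
# Hypothetical linkage attack: an "anonymised" dataset (names stripped)
# joined against a public record set on shared quasi-identifiers
# (here: zip code, birth year, sex) to re-attach identities.

# Anonymised medical records: no names, but quasi-identifiers kept.
medical = [
    {"zip": "02138", "birth_year": 1954, "sex": "F", "diagnosis": "hypertension"},
    {"zip": "02139", "birth_year": 1982, "sex": "M", "diagnosis": "asthma"},
]

# A public voter roll: names attached to the same quasi-identifiers.
voters = [
    {"name": "A. Smith", "zip": "02138", "birth_year": 1954, "sex": "F"},
    {"name": "B. Jones", "zip": "02140", "birth_year": 1982, "sex": "M"},
]

def link(records_a, records_b, keys=("zip", "birth_year", "sex")):
    """Join two record sets on their shared quasi-identifier fields."""
    index = {}
    for r in records_b:
        index.setdefault(tuple(r[k] for k in keys), []).append(r)
    matches = []
    for r in records_a:
        for hit in index.get(tuple(r[k] for k in keys), []):
            matches.append({**hit, **r})  # merged: name + diagnosis
    return matches

reidentified = link(medical, voters)
for row in reidentified:
    print(row["name"], "->", row["diagnosis"])  # A. Smith -> hypertension
```

Only one of the two medical records matches here, but on real data the more datasets an attacker can cross-reference, the fewer candidates survive each join, which is the commenter's point: the attack works whether or not the data was ever formally "open".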
(Score: 1, Insightful) by Anonymous Coward on Thursday March 19 2015, @12:19PM
And open access data means you still don't know who's doing it, and now the barrier for those unknown people to do it is significantly reduced.
(Score: 2) by wantkitteh on Thursday March 19 2015, @02:50PM
Okay - put your data where your mouth is - d0x yourself. Release every scrap of data you can about yourself under a Creative Commons license. It can all be used whether it's open or not, right? It won't make any difference to you if you just make it more convenient for everyone to access, right? Do it or stfu.