Large datasets and predictive analytics software are a fertile field for innovation, but while excellent open source tools like SciPy and R are freely available, the datasets are not. A Computerworld article notes that the scarcity of large, publicly available data collections means a database Netflix released for a competition half a decade ago is still constantly used in computer science research.
Australia's government does provide an easy way to find, access and reuse some public datasets, but most public and private databases remain siloed away from experimenters. The Open Data Handbook offers guidelines for defining openness in data, but says little about how to drive organisations to make their datasets available.
So do we need a GPL for data, and if so, what would it look like?
(Score: 2) by kaszz on Wednesday March 18 2015, @11:08PM
The next question is whether the person doing the anonymization is competent enough. How does one measure that? And can the workplace hire and retain such people?
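One way to put a rough number on "how well anonymized" is a k-anonymity check: for a chosen set of quasi-identifier columns, k is the size of the smallest group of records that share the same values, and a small k means some individuals are easy to single out. The sketch below is a minimal illustration of that idea, not a prescribed method; the column names (zip, birth_year, rating) and the toy data are purely hypothetical.

```python
# Rough k-anonymity check: k is the smallest equivalence-class size
# over the chosen quasi-identifier columns. k == 1 means at least one
# record is unique on those columns and thus easy to re-identify.
import pandas as pd

def k_anonymity(df: pd.DataFrame, quasi_identifiers: list[str]) -> int:
    """Return the smallest group size when records are grouped by the quasi-identifiers."""
    return int(df.groupby(quasi_identifiers).size().min())

# Hypothetical "de-identified" dump with made-up columns
data = pd.DataFrame({
    "zip": ["2000", "2000", "3000", "3000", "3000"],
    "birth_year": [1980, 1980, 1975, 1975, 1990],
    "rating": [5, 3, 4, 2, 1],
})

k = k_anonymity(data, ["zip", "birth_year"])
print(f"k-anonymity over (zip, birth_year): {k}")  # prints 1 here: one record is unique
```

A check like this is only a floor, of course; the Netflix Prize dataset mentioned above was nominally anonymized and was still re-identified by linking it against public ratings, which is exactly why the competence question matters.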