Google's use of Brits' medical records to train an AI and treat people was legally "inappropriate," says Dame Fiona Caldicott, the National Data Guardian at the UK's Department of Health.
In April 2016 it was revealed the web giant had signed a deal with the Royal Free Hospital in London to build an artificially intelligent application called Streams, which would analyze patients' records and identify those who had acute kidney damage.
As part of the agreement, the hospital handed over 1.6 million sets of NHS medical files to DeepMind, Google's highly secretive machine-learning nerve center. However, not every patient was aware that their data was being given to Google to train the Streams AI model. And the software was supposed to be used only as a trial – an experiment with software-driven diagnosis – yet it was ultimately used to detect kidney injuries in people and alert clinicians that they needed treatment.
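The actual Streams logic has never been published, but detecting acute kidney injury from patient records is typically done with a rule-based check rather than anything exotic: compare the latest serum creatinine reading against the patient's baseline and stage the injury by the ratio, along the lines of the KDIGO-style staging used in the NHS England AKI algorithm. A minimal illustrative sketch (the function and thresholds here are a simplification, not DeepMind's code):

```python
# Illustrative sketch only: a simplified rule-based AKI check of the kind
# an alerting app like Streams might run. Staging thresholds follow the
# KDIGO-style ratios used by the NHS England AKI algorithm; the real
# system's logic is not public.

def aki_stage(current_creatinine: float, baseline_creatinine: float) -> int:
    """Return an AKI stage (0 = no alert) based on the ratio of the
    current serum creatinine reading to the patient's baseline value."""
    ratio = current_creatinine / baseline_creatinine
    if ratio >= 3.0:
        return 3  # stage 3: creatinine at least tripled
    if ratio >= 2.0:
        return 2  # stage 2: creatinine at least doubled
    if ratio >= 1.5:
        return 1  # stage 1: creatinine up 50% or more
    return 0      # no alert

# A patient whose creatinine doubles against baseline triggers a stage 2 alert:
print(aki_stage(180.0, 90.0))  # 2
```

Any alert raised by such a check would then be pushed to a clinician's device for review, which is what made the Royal Free deployment a live clinical tool rather than a mere software trial.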
Dame Caldicott has told the hospital's medical director Professor Stephen Powis that he overstepped the mark: it's one thing to create and test an application, it's another thing entirely to use in-development code to treat people. Proper safety trials must be carried out for medical systems, she said.
We are going to see many more stories like this.
(Score: 2) by kaszz on Wednesday May 17 2017, @04:04PM (1 child)
If Google or anyone else made money from the improper use of data, there seems to be a strong case for civil prosecution.
It may be that they didn't make money out of this particular event. But unless the knowledge gained, the code produced, etc. is shared with the public, it's all the same: the people provide the data and the mega-rich reap the profit.. from the people.
So at a minimum, British citizens now own this AI and can use it for free to make better predictions about who should get an extra checkup, etc.
(Score: 0) by Anonymous Coward on Wednesday May 17 2017, @04:37PM
Can the data be poisoned to suggest daily & mandatory proctology exams for the entire Google and related workforce?