
posted by takyon on Tuesday May 16 2017, @10:31PM
from the deepmed dept.

Google's use of Brits' medical records to train an AI and treat people was legally "inappropriate," says Dame Fiona Caldicott, the National Data Guardian at the UK's Department of Health.

In April 2016 it was revealed the web giant had signed a deal with the Royal Free Hospital in London to build an artificially intelligent application called Streams, which would analyze patients' records and identify those who had acute kidney damage.

As part of the agreement, the hospital handed over 1.6 million sets of NHS medical files to DeepMind, Google's highly secretive machine-learning nerve center. However, not every patient was aware that their data was being given to Google to train the Streams AI model. And the software was supposed to be used only as a trial – an experiment with software-driven diagnosis – yet it was ultimately used to detect kidney injuries in people and alert clinicians that they needed treatment.

Dame Fiona has told the hospital's medical director, Professor Stephen Powis, that he overstepped the mark: it's one thing to create and test an application, it's another thing entirely to use in-development code to treat people. Proper safety trials must be carried out for medical systems, she said.

We are going to see many more stories like this.


Original Submission

 
  • (Score: 1, Informative) by Anonymous Coward on Wednesday May 17 2017, @08:30AM (#510970)

    None of these projects should be allowed unless the resulting code, data set, etc. is placed in the public domain, with identifying information scrubbed where possible; where full scrubbing isn't possible, they should remove what they can (like names) while leaving the links between records intact (for genealogical purposes, if necessary, to help improve diagnosis). A sketch of what that could look like follows below.
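
    For illustration only, here is a minimal Python sketch of the scrub-but-keep-linkage idea the comment describes: direct identifiers are replaced with a stable keyed pseudonym, so repeat records for the same patient stay connected while names are dropped. The field names, records, and key handling are invented for the example and have nothing to do with the actual Streams pipeline.

        # Hypothetical de-identification sketch: replace each patient
        # identifier with a stable keyed pseudonym so records remain
        # linkable, then drop direct identifiers like names.
        import hmac
        import hashlib

        # Assumed secret key; in practice it would be stored and rotated
        # outside the released dataset.
        SECRET_KEY = b"keep-this-outside-the-dataset"

        def pseudonymize(patient_id: str) -> str:
            """Map a real identifier to a pseudonym via HMAC-SHA256.

            The same input always yields the same output, so links between
            records survive, but the mapping can't be reversed without the key.
            """
            return hmac.new(SECRET_KEY, patient_id.encode(),
                            hashlib.sha256).hexdigest()[:16]

        def scrub(record: dict) -> dict:
            """Drop direct identifiers, keep clinical fields plus the linkage key."""
            return {
                "pseudonym": pseudonymize(record["nhs_number"]),
                "age": record["age"],                # quasi-identifiers may still
                "creatinine": record["creatinine"],  # need generalisation (k-anonymity)
            }

        records = [
            {"nhs_number": "943-476-5919", "name": "Jane Doe", "age": 61, "creatinine": 1.9},
            {"nhs_number": "943-476-5919", "name": "Jane Doe", "age": 61, "creatinine": 2.4},
        ]
        scrubbed = [scrub(r) for r in records]
        assert scrubbed[0]["pseudonym"] == scrubbed[1]["pseudonym"]  # linkage preserved

    Note the keyed hash is only half the job: ages, postcodes, and rare diagnoses can still re-identify people, which is why real releases pair pseudonymization with generalisation of quasi-identifiers.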

    The long-term problem is that Big Data, like pervasive surveillance technology, only works if information asymmetry is removed by providing everyone with the complete set of information. This does carry dangers in the form of stalking, targeted murder, etc. But those will inevitably happen with these systems anyway, now that the technology is out there. The only way left to attempt to mitigate the damage is to ensure everyone is on a level playing field, and that the 'upper tier', the people benefiting from asymmetric access to information, are dealt with punitively (through targeted data attacks, or legal/physical punitive measures) for continuing to broker a system of haves and have-nots.

    I expect many people will disagree with me, but much of the 'looniness' Stallman often gets berated for is coming to pass, even if reaching both the panopticon and maximum information asymmetry is taking longer than expected.
