
posted by Fnord666 on Friday August 23 2019, @12:16PM
from the I-know-something-you-don't-know dept.

Submitted via IRC for SoyCow3196

An AI privacy conundrum? The neural net knows more than it says

Artificial intelligence is the process of using a machine such as a neural network to say things about data. Most of the time, what is said is simple, such as classifying pictures into cats and dogs.

Increasingly, though, AI scientists are posing questions about what the neural network "knows," if you will, that is not captured in simple goals such as classifying pictures or generating fake text and images.

It turns out there's a lot left unsaid, even if computers don't really know anything in the sense a person does. Neural networks, it seems, can retain a memory of specific training data, which could expose the individuals whose data was used in training to violations of privacy.

For example, Nicholas Carlini, formerly a student at UC Berkeley's AI lab, approached the problem of what computers "memorize" about training data, in work done with colleagues at Berkeley. (Carlini is now with Google's Brain unit.) In July, in a paper provocatively titled "The Secret Sharer," posted on the arXiv pre-print server, Carlini and colleagues discussed how a neural network can retain specific pieces of data from the collection used to train it to generate text. That has the potential to let malicious agents mine a neural net for sensitive data such as credit card numbers and social security numbers.

Those are exactly the pieces of data the researchers discovered when they trained a language model using so-called long short-term memory neural networks, or "LSTMs."
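
To make the memorisation effect concrete, here is a minimal sketch of the paper's canary-and-exposure setup. It is illustrative only: the model size, training text, and the planted "secret" are assumptions, not the authors' code or data. A tiny character-level LSTM language model is trained on text containing a planted canary, and the canary's "exposure" is then estimated by ranking its likelihood against random alternatives:

    import math
    import random
    import torch
    import torch.nn as nn

    VOCAB = "0123456789abcdefghijklmnopqrstuvwxyz ."
    STOI = {c: i for i, c in enumerate(VOCAB)}

    def encode(s):
        return torch.tensor([STOI[c] for c in s], dtype=torch.long)

    class CharLSTM(nn.Module):
        def __init__(self, vocab_size, hidden=128):
            super().__init__()
            self.emb = nn.Embedding(vocab_size, 32)
            self.lstm = nn.LSTM(32, hidden, batch_first=True)
            self.out = nn.Linear(hidden, vocab_size)

        def forward(self, x):
            h, _ = self.lstm(self.emb(x))
            return self.out(h)

    def sequence_log_prob(model, s):
        # Total log-probability the model assigns to string s.
        x = encode(s).unsqueeze(0)
        logp = torch.log_softmax(model(x[:, :-1]), dim=-1)
        return logp.gather(2, x[:, 1:].unsqueeze(-1)).sum().item()

    # Training text with one planted "canary" secret, repeated a few
    # times to mimic the unintended memorisation the paper studies.
    canary = "the secret is 081580."
    corpus = ["the cat sat on the mat."] * 50 + [canary] * 8

    model = CharLSTM(len(VOCAB))
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for epoch in range(10):
        for line in corpus:
            x = encode(line).unsqueeze(0)
            loss = nn.functional.cross_entropy(
                model(x[:, :-1]).reshape(-1, len(VOCAB)),
                x[:, 1:].reshape(-1))
            opt.zero_grad(); loss.backward(); opt.step()

    # Exposure, per the paper: log2(candidate-space size) minus
    # log2(rank of the canary among candidates sorted by likelihood).
    with torch.no_grad():
        canary_lp = sequence_log_prob(model, canary)
        cand_lps = [sequence_log_prob(
            model, f"the secret is {random.randrange(10**6):06d}.")
            for _ in range(999)]
    rank = 1 + sum(lp > canary_lp for lp in cand_lps)
    print("exposure ~", math.log2(len(cand_lps) + 1) - math.log2(rank))

An exposure near log2 of the candidate-space size means the model ranks the planted secret above essentially every alternative, i.e. it has memorised it rather than merely learned the format.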


Original Submission

 
  • (Score: 2) by Rupert Pupnick (7277) on Friday August 23 2019, @01:08PM (#884077)

    But a neural network doesn’t have a structured file system like a regular computer, so it would seem much more difficult to extract a body of information that pertains to a single target you intend to victimize.

    For example, if I have just a credit card number with no other information to go with it, is that of any use to a fraudster?

  • (Score: 3, Informative) by garfiejas (2072) on Friday August 23 2019, @01:47PM (#884096)

    Agreed, but it could be the entire record from the training set that's encoded; the trick is how to get it decoded.

    The paper talks about "unintended" memorisation of elements of very large data sets: in normal operation the network behaves as expected, but an adversary with access to the model could work out information about the training set (training-data leakage) and re-create an entire record, as sketched below. There are other issues outlined in the paper, such as model inversion, which seeks to re-create the aggregate statistics of the training set, or models being maliciously crafted to memorise data as they are trained whilst generating normal outputs...
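
    As an illustration of that extraction idea (a sketch under assumptions, not the paper's actual attack: toy_next_char_log_probs is a hypothetical stand-in for query access to any trained language model), an adversary who knows the format of a secret can beam-search the most likely continuation of its prefix:

        import math
        from heapq import nlargest

        DIGITS = "0123456789"

        def toy_next_char_log_probs(prefix):
            # Placeholder model: pretends '081580' was memorised during
            # training. A real attack would query the victim model here.
            secret = "081580"
            i = len(prefix) - len("the secret is ")
            probs = {d: 0.01 for d in DIGITS}
            if 0 <= i < len(secret):
                probs[secret[i]] = 0.91  # memorised digit dominates
            total = sum(probs.values())
            return {d: math.log(p / total) for d, p in probs.items()}

        def extract(prefix, length, log_probs_fn, beam=5):
            # Beam search over digit continuations, ranked by total
            # log-probability under the queried model.
            beams = [(0.0, prefix)]
            for _ in range(length):
                expanded = [(lp + dlp, text + d)
                            for lp, text in beams
                            for d, dlp in log_probs_fn(text).items()]
                beams = nlargest(beam, expanded)
            return beams

        for lp, text in extract("the secret is ", 6, toy_next_char_log_probs):
            print(f"{lp:8.3f}  {text}")

    If the model has memorised the record, the true secret surfaces at or near the top of the beam; if it has only learned the format, all continuations score about equally.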

    However, the paper also describes a (useful) countermeasure, differentially private training, which clips gradients to a norm bound and adds noise during training to limit the issue.
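
    For reference, here is a minimal sketch of a DP-SGD-style training step of the kind the paper evaluates (hyperparameters and the stand-in model are illustrative assumptions): each example's gradient is clipped to a norm bound, then Gaussian noise is added before the update:

        import torch
        import torch.nn as nn

        CLIP_NORM = 1.0   # per-example gradient norm bound
        NOISE_MULT = 1.1  # noise std = NOISE_MULT * CLIP_NORM

        model = nn.Linear(10, 2)  # stand-in for any trainable model
        opt = torch.optim.SGD(model.parameters(), lr=0.1)
        loss_fn = nn.CrossEntropyLoss()

        def dp_sgd_step(batch_x, batch_y):
            accum = [torch.zeros_like(p) for p in model.parameters()]
            for x, y in zip(batch_x, batch_y):  # per-example gradients
                opt.zero_grad()
                loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0)).backward()
                grads = [p.grad.detach().clone()
                         for p in model.parameters()]
                norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
                # Clip: scale the gradient down if it exceeds the bound.
                scale = torch.clamp(CLIP_NORM / (norm + 1e-12), max=1.0)
                for a, g in zip(accum, grads):
                    a += g * scale
            opt.zero_grad()
            for p, a in zip(model.parameters(), accum):
                # Add Gaussian noise, then average over the batch.
                noise = torch.normal(0.0, NOISE_MULT * CLIP_NORM,
                                     size=a.shape)
                p.grad = (a + noise) / len(batch_x)
            opt.step()

        # Illustrative usage on random data:
        dp_sgd_step(torch.randn(8, 10), torch.randint(0, 2, (8,)))

    Clipping bounds how much any single training example can move the weights, and the noise masks what remains, which is why it blunts the memorisation attacks described above (at some cost in model accuracy).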