
posted by Fnord666 on Friday August 23 2019, @12:16PM
from the I-know-something-you-don't-know dept.

Submitted via IRC for SoyCow3196

An AI privacy conundrum? The neural net knows more than it says

Artificial intelligence is the process of using a machine such as a neural network to say things about data. Most times, what is said is a simple affair, like classifying pictures into cats and dogs.

Increasingly, though, AI scientists are posing questions about what the neural network "knows," if you will, that is not captured in simple goals such as classifying pictures or generating fake text and images.

It turns out there's a lot left unsaid, even if computers don't really know anything in the sense a person does. Neural networks, it seems, can retain a memory of specific training data, which could expose the individuals whose data appears in the training set to privacy violations.

For example, Nicholas Carlini, formerly a student at UC Berkeley's AI lab, approached the problem of what computers "memorize" about training data in work done with colleagues at Berkeley. (Carlini is now with Google's Brain unit.) In July, in a paper provocatively titled "The Secret Sharer," posted on the arXiv pre-print server, Carlini and colleagues discussed how a neural network could retain specific pieces of data from a collection of data used to train the network to generate text. That has the potential to let malicious agents mine a neural net for sensitive data such as credit card numbers and social security numbers.

Those are exactly the pieces of data the researchers discovered when they trained a language model using so-called long short-term memory neural networks, or "LSTMs."
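
The paper quantifies this kind of memorization with a rank-based "exposure" score: plant a known "canary" string in the training text, then see how highly the trained model ranks it, by perplexity, against other strings of the same format. Below is a minimal sketch of that idea in Python; `perplexity_of` is a hypothetical stand-in for whatever language model was trained (the name is not an API from the paper), and the candidate space is sampled rather than enumerated.

```python
# Minimal sketch of the rank-based "exposure" idea: a canary that the model
# ranks unusually low in perplexity, relative to random strings of the same
# format, has likely been memorized rather than generalized over.
import math
import random

def exposure(canary: str, candidates: list, perplexity_of) -> float:
    """log2(|candidate space|) - log2(rank of the canary by perplexity)."""
    scores = {c: perplexity_of(c) for c in candidates}
    scores[canary] = perplexity_of(canary)
    ranked = sorted(scores, key=scores.get)   # lowest perplexity first
    rank = ranked.index(canary) + 1           # 1-based rank of the canary
    return math.log2(len(ranked)) - math.log2(rank)

if __name__ == "__main__":
    # Toy usage with a fake model that has "memorized" the canary (assumption):
    canary = "0123 4567 8901 2345"
    others = [" ".join(f"{random.randrange(10000):04d}" for _ in range(4))
              for _ in range(9999)]
    fake_perplexity = lambda s: 1.0 if s == canary else 50.0 + random.random()
    print(f"exposure = {exposure(canary, others, fake_perplexity):.1f} bits")
```

With the toy stand-in above, the canary ranks first among roughly 10,000 candidates, giving an exposure of about 13 bits; on a real model, a high exposure is the signal that a secret was retained from the training data.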


Original Submission

 
  • (Score: 2) by PiMuNu on Friday August 23 2019, @12:29PM (3 children)

    Say I train a neural net to identify credit card numbers. My training set is 0123 4567 8901 2345. Now I can throw stupid numbers at the neural net and it will only identify 0123 4567 8901 2345 as a credit card. So I can figure out the training set from the neural net responses. It's pretty obvious really.

  • (Score: 2) by KritonK on Friday August 23 2019, @02:05PM (2 children)

    In this particular case, the neural net would probably identify credit card numbers as four groups of four decimal digits. Considering real brains as trained neural networks, I would assume that this is why you gave 0123 4567 8901 2345 as an example, and I had no trouble identifying it as such. I doubt that the neural net contains the actual training data after the rule is deduced.

    Some data that such a neural net might contain, however, is information about the banks issuing the credit cards, since the first few digits of a card number identify the issuing bank. Because there are only so many of them, the neural net may contain a table of the various valid bank prefixes, perhaps with a second column containing the name of the corresponding bank. This would be useful information for a scammer to extract, but in this case it is probably available more easily from other sources.

    • (Score: 2) by PiMuNu on Friday August 23 2019, @03:00PM (1 child)

      > I doubt that the neural net contains the actual training data

      You just trained the neural net to identify 0123 4567 8901 2345, and only 0123 4567 8901 2345, as a credit card number. Type in 0123 4567 8901 2346 and the neural net says "no". Type in 0123 4567 8901 2345 and the neural net says "yes". Therefore one can extract the training data set from the neural net (or other optimisation algorithm); a toy sketch of this probing appears after the thread.

      • (Score: 2) by KritonK on Saturday August 24 2019, @01:59PM

        That assumes a neural net can be trained with only one data item, which I would assume is similar to drawing statistical conclusions from one data point.
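
As referenced in PiMuNu's comment, here is a minimal sketch of that black-box probing in Python, under the assumption that the model behaves like a pure lookup on its training data. `TRAINING_SET` and `is_credit_card` are hypothetical names, not anything from the article or paper; the lookup merely simulates a badly overfit network.

```python
# Toy illustration of extracting training data by querying a black-box model.
# The "model" below is a stand-in (assumption): like a badly overfit neural
# net, it answers "yes" only for the exact strings it was trained on.
TRAINING_SET = {"0123 4567 8901 2345"}

def is_credit_card(candidate: str) -> bool:
    """Hypothetical predict() of an overfit accept/reject model."""
    return candidate in TRAINING_SET

# Probe a small candidate space (here, only the last digit varies) and keep
# whatever the model accepts -- by construction, that is the training data.
candidates = [f"0123 4567 8901 234{d}" for d in range(10)]
recovered = [c for c in candidates if is_credit_card(c)]
print(recovered)  # ['0123 4567 8901 2345']
```

Enumerating all 16-digit numbers this way is infeasible, which is why the Secret Sharer paper's extraction attack instead searches the candidate space guided by the model's own predictions rather than by brute force.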