
posted by Fnord666 on Thursday September 21 2017, @06:24AM
from the not-scrambled-enough dept.

Submitted via IRC for SoyCow5743

During last year's WWDC in June 2016, Apple noted it would be adopting some degree of differential privacy methods to ensure privacy while the company mined user data on iOS and macOS. In short, the technique adds noise that scrambles the data enough to prevent it from being traced back to an individual -- though the company made clear at the time that its data collection process was opt-in. Over a year later, a study claims that Apple's methods fall short of the digital privacy community's expectations for how well a user's data is kept private.

As they reveal in their study (PDF), researchers from the University of Southern California, Indiana University and China's Tsinghua University evaluated how Apple injects static into users' identifiable info, from messages to internet history, to baffle anyone looking at the data, from the government to Apple's own staff. The metric for measuring a setup's differential privacy effectiveness is called the "privacy loss parameter" or, as a variable, "epsilon." In this case, the researchers discovered that Apple's epsilon on macOS allowed far more personal data to be identifiable than digital privacy theorists are comfortable with, and that iOS 10 permits even more.
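For readers unfamiliar with the epsilon parameter: in the textbook Laplace mechanism, a query's true answer is perturbed with noise whose scale is inversely proportional to epsilon, so a small epsilon means heavy noise and strong privacy, while a large epsilon adds almost no noise at all -- which is the researchers' complaint. A minimal sketch of that mechanism (purely illustrative; this is not Apple's implementation):

```python
import math
import random

def laplace_noise(scale: float) -> float:
    # Sample Laplace(0, scale) via the inverse-CDF method.
    u = random.random() - 0.5
    sign = -1.0 if u < 0 else 1.0
    return -scale * sign * math.log(1.0 - 2.0 * abs(u))

def dp_count(true_count: float, epsilon: float, sensitivity: float = 1.0) -> float:
    # Laplace mechanism: noise scale = sensitivity / epsilon.
    # Smaller epsilon => larger noise => stronger privacy guarantee.
    return true_count + laplace_noise(sensitivity / epsilon)
```

With epsilon near 1 a reported count is typically within a few units of the truth; crank epsilon up and the noise all but vanishes, leaving the underlying data identifiable.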

Apple has disputed the study's findings, especially regarding its alleged ability to link data to particular users.

Source: https://www.engadget.com/2017/09/15/study-says-apple-data-mining-safeguards-dont-protect-privacy-en/


Original Submission

 
This discussion has been archived. No new comments can be posted.
  • (Score: 1, Informative) by Anonymous Coward on Thursday September 21 2017, @08:23AM (4 children)

    by Anonymous Coward on Thursday September 21 2017, @08:23AM (#571059)

    to ensure privacy

    Okay, that sounds reasonable so far...

    while the company mined user data

    Pick one.

  • (Score: 2) by FatPhil on Thursday September 21 2017, @08:48AM (3 children)

    by FatPhil (863) <{pc-soylent} {at} {asdf.fi}> on Thursday September 21 2017, @08:48AM (#571068) Homepage
    Nope. That's why academics have created the field of differential privacy - it is possible to pick both. The data miners just have to accept that it will take longer to reach equally firm conclusions. They appear reluctant to do that, which is why the implementations have been watered down to the point of being apparently ineffectual.

    Consider this analogue to your post:

    | Secret communication
    OK
    | Over a public channel
    Pick one

    Crypto exists that solves the problem. That Apple are using pig-latin doesn't mean encryption is impossible.
    --
    Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves
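FatPhil's point, that you can mine data and preserve privacy if you accept slower convergence, is exactly what the classic randomized-response technique demonstrates. A hypothetical sketch (my own illustration, not any protocol Apple uses): each respondent answers dishonestly with known probability, so no single answer incriminates anyone, yet the analyst can still recover the population rate from a large enough sample.

```python
import random

def randomized_response(truth: bool, p_truth: float = 0.75) -> bool:
    # With probability p_truth answer honestly; otherwise flip a fair coin.
    # No individual response can be pinned on the respondent with certainty.
    if random.random() < p_truth:
        return truth
    return random.random() < 0.5

def estimate_rate(responses: list, p_truth: float = 0.75) -> float:
    # Invert the known noise: E[observed] = p_truth * rate + (1 - p_truth) / 2
    observed = sum(responses) / len(responses)
    return (observed - (1.0 - p_truth) * 0.5) / p_truth
```

The trade-off is visible directly: the estimator's variance grows as p_truth shrinks, so stronger privacy requires more respondents to reach equally firm conclusions.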
    • (Score: 3, Informative) by TheRaven on Thursday September 21 2017, @09:23AM (2 children)

      by TheRaven (270) on Thursday September 21 2017, @09:23AM (#571074) Journal
      The problem is that most differential privacy approaches simply don't work. There's a related field of deanonymisation which has so far shown that every attempt to anonymise a large data set has failed and that, even when anonymisation succeeds in isolation, it can be reversed by combining the dataset with some other anonymised or public dataset.
      --
      sudo mod me up
      • (Score: 3, Informative) by FatPhil on Thursday September 21 2017, @12:01PM (1 child)

        by FatPhil (863) <{pc-soylent} {at} {asdf.fi}> on Thursday September 21 2017, @12:01PM (#571106) Homepage
        How many people have actually implemented the provably secure protocols, and how many have just rolled their own in which they themselves are unable to find a flaw? I'd be willing to bet that if it's Apple and advertisers, they're not going to want to do *the right thing*, and will instead do *the quick thing*.

        E.g. what's wrong (apart from having to actually implement it) with:
        https://www.researchgate.net/publication/226120439_Cryptographic_Techniques_in_Privacy-Preserving_Data_Mining
        Abstract:
        Research in secure distributed computation, which was done as part of a larger body of research in the theory of cryptography, has achieved remarkable results. It was shown that non-trusting parties can jointly compute functions of their different inputs while ensuring that no party learns anything but the defined output of the function. These results were shown using generic constructions that can be applied to any function that has an efficient representation as a circuit. We describe these results, discuss their efficiency, and demonstrate their relevance to privacy preserving computation of data mining algorithms. We also show examples of secure computation of data mining algorithms that use these generic constructions.
        --
        Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves
        • (Score: 3, Informative) by TheRaven on Thursday September 21 2017, @01:24PM

          by TheRaven (270) on Thursday September 21 2017, @01:24PM (#571136) Journal

          I don't know that specific paper, but in general the problem with these approaches is that they're either 5+ orders of magnitude slower than normal computation (i.e. if you can barely do it on a laptop without the privacy machinery, you can't do it at all with it), or they have O(N^M) memory usage, where N is the number of data elements and M is the number of primitive operations that you want to support. For example, if you want to support adding a known constant, that's one operation. If you want to support multiplying by a constant, you can't do it by repeated addition because you'd need to know both values, so you need another primitive operation for that, and so on. The space requirements are generally prohibitive.

          Most of what people do in the real world involves throwing away data that they believe is identifying and then only using the aggregates, but that's been repeatedly shown to be flawed.

          --
          sudo mod me up
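For what it's worth, the cheapest special case of the generic constructions the abstract alludes to is additive secret sharing over a fixed aggregate query like a sum; the blow-up TheRaven describes only bites once you want arbitrary circuits with many primitive operations. A toy sketch (mine, purely illustrative): each input is split into random shares, parties publish only sums of the shares they hold, and the total is recoverable while no individual input is.

```python
import random

P = 2**61 - 1  # prime modulus, comfortably larger than the toy values below

def share(secret: int, n_parties: int) -> list:
    # Split a secret into n random shares that sum to it mod P;
    # any n-1 shares together reveal nothing about the secret.
    shares = [random.randrange(P) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def secure_sum(shared_inputs: list) -> int:
    # Party i adds up the i-th share of every input; publishing only
    # these partial sums lets anyone recover the total, not the inputs.
    n = len(shared_inputs[0])
    partials = [sum(s[i] for s in shared_inputs) % P for i in range(n)]
    return sum(partials) % P
```

A sum needs just this one primitive; supporting multiplication as well is exactly where the circuit-based machinery, and its prohibitive cost, comes in.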