Apple, like practically every mega-corporation, wants to know as much as possible about its customers. But it’s also marketed itself as Silicon Valley’s privacy champion, one that—unlike so many of its advertising-driven competitors—wants to know as little as possible about you. So perhaps it’s no surprise that the company has now publicly boasted about its work in an obscure branch of mathematics that deals with exactly that paradox. [wired.com]
Differential privacy, translated from Apple-speak, is the statistical science of trying to learn as much as possible about a group while learning as little as possible about any individual in it. With differential privacy, Apple can collect and store its users’ data in a format that lets it glean useful notions about what people do, say, like and want. But it can’t extract anything about a single, specific one of those people that might represent a privacy violation. And neither, in theory, could hackers or intelligence agencies.
One classic example is the technique known as randomized response, in which a survey asks whether the respondent has ever, say, broken a law. But first, the survey asks them to flip a coin. If the result is tails, they answer honestly. If the result is heads, they flip the coin again and answer “yes” for heads or “no” for tails. The random noise those coin flips introduce can be subtracted from the aggregate results with a bit of algebra, and every respondent who admitted to lawbreaking is protected: any individual “yes” could simply be the product of the second coin flip.
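That subtraction step is simple enough to sketch. Under the coin-flip scheme, half of respondents answer honestly and the other half say “yes” with probability one half, so the observed “yes” rate is 0.5 × (true rate) + 0.25, and the true rate can be recovered by inverting that formula. Here is a minimal simulation (the 30% true rate and the function names are illustrative, not from Apple):

```python
import random

def randomized_response(truth: bool) -> bool:
    """One respondent follows the coin-flip protocol."""
    # First flip: tails (prob 0.5) -> answer honestly.
    if random.random() < 0.5:
        return truth
    # Heads -> flip again: heads means "yes", tails means "no".
    return random.random() < 0.5

def estimate_true_rate(responses) -> float:
    """Subtract the coin-flip noise: P(yes) = 0.5*p + 0.25, so p = 2*(P(yes) - 0.25)."""
    observed = sum(responses) / len(responses)
    return 2 * (observed - 0.25)

# Simulate 100,000 respondents, 30% of whom really have broken a law.
random.seed(0)
truths = [random.random() < 0.30 for _ in range(100_000)]
responses = [randomized_response(t) for t in truths]
print(estimate_true_rate(responses))  # close to 0.30
```

No single answer reveals anything about the person who gave it, yet the population-level estimate comes out accurate, which is the whole bargain of differential privacy in miniature.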