
posted by martyb on Monday December 10 2018, @06:16AM
from the whatever-happened-to-an-expectation-of-privacy? dept.

The privacy risks of compiling mobility data

Merging different types of location-stamped data can make it easier to discern users' identities, even when the data is anonymized.

Rob Matheson | MIT News Office
December 7, 2018

A new study by MIT researchers finds that the growing practice of compiling massive, anonymized datasets about people’s movement patterns is a double-edged sword: While it can provide deep insights into human behavior for research, it could also put people’s private data at risk.  

Companies, researchers, and other entities are beginning to collect, store, and process anonymized data that contains “location stamps” (geographical coordinates and time stamps) of users. Data can be grabbed from mobile phone records, credit card transactions, public transportation smart cards, Twitter accounts, and mobile apps. Merging those datasets could provide rich information about how humans travel, for instance, to optimize transportation and urban planning, among other things.

But with big data come big privacy issues: Location stamps are extremely specific to individuals and can be used for nefarious purposes. Recent research has shown that, given only a few randomly selected points in mobility datasets, someone could identify and learn sensitive information about individuals. With merged mobility datasets, this becomes even easier: An agent could potentially match users' trajectories in anonymized data from one dataset with deanonymized data in another to unmask the anonymized data.

In a paper published today in IEEE Transactions on Big Data, the MIT researchers show how this can happen in the first-ever analysis of so-called user “matchability” in two large-scale datasets from Singapore, one from a mobile network operator and one from a local transportation system.

The researchers use a statistical model that tracks location stamps of users in both datasets and provides a probability that data points in both sets come from the same person. In experiments, the researchers found the model could match around 17 percent of individuals in one week’s worth of data, and more than 55 percent of individuals after one month of collected data. The work demonstrates an efficient, scalable way to match mobility trajectories in datasets, which can be a boon for research. But, the researchers warn, such processes can increase the possibility of deanonymizing real user data.

“As researchers, we believe that working with large-scale datasets can allow discovering unprecedented insights about human society and mobility, allowing us to plan cities better. Nevertheless, it is important to show if identification is possible, so people can be aware of potential risks of sharing mobility data,” says Daniel Kondor, a postdoc in the Future Urban Mobility Group at the Singapore-MIT Alliance for Research and Technology.

“In publishing the results — and, in particular, the consequences of deanonymizing data — we felt a bit like ‘white hat’ or ‘ethical’ hackers,” adds co-author Carlo Ratti, a professor of the practice in MIT’s Department of Urban Studies and Planning and director of MIT’s Senseable City Lab. “We felt that it was important to warn people about these new possibilities [of data merging] and [to consider] how we might regulate it.”

The co-authors of the study are Behrooz Hashemian, a postdoc at the Senseable City Lab, and Yves-Alexandre de Montjoye of the Department of Computing and the Data Science Institute at Imperial College London.

Eliminating false positives

To understand how matching location stamps and potential deanonymization works, consider this scenario: “I was at Sentosa Island in Singapore two days ago, came to the Dubai airport yesterday, and am on Jumeirah Beach in Dubai today. It’s highly unlikely another person’s trajectory looks exactly the same. In short, if someone has my anonymized credit card information, and perhaps my open location data from Twitter, they could then deanonymize my credit card data,” Ratti says.

Similar models exist to evaluate deanonymization in data, but they rely on computationally intensive approaches to re-identification, that is, merging anonymized data with public data to identify specific individuals, and they have worked only on limited datasets. The MIT researchers instead used a simpler statistical approach, measuring the probability of false positives, to efficiently predict matchability among scores of users in massive datasets.

In their work, the researchers compiled two anonymized “low-density” datasets — a few records per day — about mobile phone use and personal transportation in Singapore, recorded over one week in 2011. The mobile data came from a large mobile network operator and comprised timestamps and geographic coordinates in more than 485 million records from over 2 million users. The transportation data contained over 70 million records with timestamps for individuals moving through the city.

The probability that a given user has records in both datasets will increase along with the size of the merged datasets, but so will the probability of false positives. The researchers’ model selects a user from one dataset and finds a user from the other dataset with a high number of matching location stamps. Simply put, as the number of matching points increases, the probability of a false-positive match decreases. After matching a certain number of points along a trajectory, the model rules out the possibility of the match being a false positive.
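
The article does not reproduce the paper's statistical model, but the logic it describes, accepting a candidate match only when the number of shared location stamps would be very unlikely between two independent users, can be sketched in a few lines of Python. The snippet below is a hypothetical illustration: the spatio-temporal binning, the Poisson approximation for chance coincidences, and the alpha acceptance threshold are assumptions made for this sketch, not details taken from the paper.

import math

def match_probability(k_matches, n_bins, n_points_a, n_points_b):
    # Chance that two INDEPENDENT users share at least k_matches
    # spatio-temporal bins, under a crude uniform-mixing model.
    # Illustrative stand-in for the paper's false-positive estimate.
    lam = n_points_a * n_points_b / n_bins   # expected accidental overlaps
    # Poisson tail P(X >= k): probability of k or more chance matches.
    tail = 1.0 - sum(math.exp(-lam) * lam**i / math.factorial(i)
                     for i in range(k_matches))
    return max(tail, 0.0)

def best_match(user_a, candidates, n_bins, alpha=1e-6):
    # Pick the candidate sharing the most bins with user_a, and accept
    # it only if that overlap is very unlikely to be coincidence.
    bins_a = set(user_a)
    best_id, best_k = None, 0
    for cand_id, traj in candidates.items():
        k = len(bins_a & set(traj))
        if k > best_k:
            best_id, best_k = cand_id, k
    if best_id is None:
        return None
    fp = match_probability(best_k, n_bins, len(bins_a),
                           len(set(candidates[best_id])))
    return (best_id, best_k, fp) if fp < alpha else None

# Toy usage: trajectories as (cell_id, hour) bins from two datasets.
phone = [("c12", 8), ("c07", 9), ("c07", 18), ("c31", 19)]
transit = {
    "card_1": [("c12", 8), ("c07", 9), ("c31", 19)],
    "card_2": [("c44", 8), ("c02", 12)],
}
print(best_match(phone, transit, n_bins=10_000))

With this toy data, card_1 shares three of the phone trace's four bins; under the Poisson model the probability of three accidental coincidences is on the order of 10^-10, far below alpha, so the match is accepted.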

Focusing on typical users, they estimated a matchability success rate of 17 percent over a week of compiled data, and about 55 percent for four weeks. That estimate jumps to about 95 percent with data compiled over 11 weeks.

The researchers also estimated how much activity is needed to match most users over a week. Looking at users with between 30 and 49 personal transportation records, and around 1,000 mobile records, they estimated more than 90 percent success with a week of compiled data. Additionally, by combining the two datasets with GPS traces — regularly collected actively and passively by smartphone apps — the researchers estimated they could match 95 percent of individual trajectories, using less than one week of data.

Better privacy

With their study, the researchers hope to increase public awareness and promote tighter regulations for sharing consumer data. “All data with location stamps (which is most of today’s collected data) is potentially very sensitive and we should all make more informed decisions on who we share it with,” Ratti says. “We need to keep thinking about the challenges in processing large-scale data, about individuals, and the right way to provide adequate guarantees to preserve privacy.”

To that end, Ratti, Kondor, and other researchers have been working extensively on the ethical and moral issues of big data. In 2013, the Senseable City Lab at MIT launched an initiative called “Engaging Data,” which involves leaders from government, privacy rights groups, academia, and business, who study how mobility data can and should be used by today’s data-collecting firms.

“The world today is awash with big data,” Kondor says. “In 2015, mankind produced as much information as was created in all previous years of human civilization. Although data means a better knowledge of the urban environment, currently much of this wealth of information is held by just a few companies and public institutions that know a lot about us, while we know so little about them. We need to take care to avoid data monopolies and misuse.”

Reprinted with permission of MIT News.


Original Submission

 
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 3, Interesting) by bzipitidoo on Monday December 10 2018, @11:22AM (1 child)

    by bzipitidoo (4388) on Monday December 10 2018, @11:22AM (#772317) Journal

    Privacy is overrated. Rather than trying to keep our movements a big secret so that others can't take advantage, it would be better to outlaw the nefarious uses of the conclusions others might draw. For instance, visits to doctors and drugstores shouldn't lead to a person being faced with higher health and life insurance premiums. What's particularly unfair is being judged and found guilty of, say, "risky behavior", without ever realizing one was even on trial.

    Also quite unfair is the extreme imbalance in privacy standards. Big corporations are allowed all kinds of privacy they should not have. Like, frackers being allowed to claim that the ingredients of fracking fluid are trade secrets, so that they can pump all kinds of toxic waste into the ground with no one the wiser. The dirty crap that Big Finance pulls is another bad abuse of privacy. And there's propaganda and all this dark money that corrupts our politics. Management gets to hide behind the corporate veil far too often and too much. The extremely anti-social decisions powerful companies are notorious for making are aided and abetted by this kind of privacy.

    I suspect the biggest secrets many individuals want to keep are particular to sex: the marital infidelity, skeletons-in-the-closet kind of stuff. Something I learned about genealogy is that genealogists used to be thought weird, nosy, and uncouth, prying into questions of parentage that were none of their business. Even the fact that 5% to 30% of the population are bastards (in the sense of being born out of wedlock) was not known until relatively recently. People didn't want to know this about themselves. It wasn't Christian, nor acceptable in many other religions.

  • (Score: 4, Informative) by SemperOSS on Monday December 10 2018, @12:49PM

    by SemperOSS (5072) on Monday December 10 2018, @12:49PM (#772340)

    You're right, privacy is overrated. There are no privacy problems if you do not abuse the data you have, which means there must be rules for what you can do with it. But, and this is really the crux of the matter, if data is collected, the information can (and very probably will) be abused, rules or not, so it would be safer to avoid the problem by not collecting the data in the first place.

    Let us not fall into the trap of assuming that everyone abides by the law. History has proven that assumption wrong many times. Remember, once private data has been set free, the information will not disappear.

    You cannot undo a privacy slip-up!

    --
    I don't need a signature to draw attention to myself.
    Maybe I should add a sarcasm warning now and again?