
posted by martyb on Saturday February 10 2018, @07:57PM   Printer-friendly
from the when-the-party-of-the-first-part... dept.

You don't read privacy policies. And of course, that's because they're not actually written for you, or any of the other billions of people who click to agree to their inscrutable legalese. Instead, like bad poetry and teenagers' diaries, those millions upon millions of words are produced for the benefit of their authors, not readers—the lawyers who wrote those get-out clauses to protect their Silicon Valley employers.

But one group of academics has proposed a way to make those virtually illegible privacy policies into the actual tool of consumer protection they pretend to be: an artificial intelligence that's fluent in fine print. Today, researchers at Switzerland's Federal Institute of Technology at Lausanne (EPFL), the University of Wisconsin and the University of Michigan announced the release of Polisis—short for "privacy policy analysis"—a new website and browser extension that uses their machine-learning-trained app to automatically read and make sense of any online service's privacy policy, so you don't have to.
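The core idea described here (a machine-learning classifier that tags each segment of a privacy policy with a high-level category) can be sketched in miniature. The toy below is not the Polisis model, and the categories and training sentences are invented for illustration; it is just a bag-of-words Naive Bayes classifier showing the kind of segment labeling involved.

```python
# Toy sketch of policy-segment classification: NOT the Polisis system,
# just a minimal bag-of-words Naive Bayes over invented example segments.
import math
from collections import Counter, defaultdict

TRAINING = [
    ("we collect your name email and device identifiers", "data-collection"),
    ("information you provide is gathered when you register", "data-collection"),
    ("we share data with third party advertising partners", "third-party-sharing"),
    ("your information may be disclosed to our affiliates", "third-party-sharing"),
    ("data is encrypted in transit using ssl", "security"),
    ("we take reasonable precautions to protect your information", "security"),
]

def train(examples):
    """Count words per label and examples per label."""
    word_counts = defaultdict(Counter)
    label_counts = Counter()
    for text, label in examples:
        label_counts[label] += 1
        word_counts[label].update(text.split())
    return word_counts, label_counts

def classify(text, word_counts, label_counts):
    """Return the most likely label for a policy segment."""
    vocab = {w for counts in word_counts.values() for w in counts}
    scores = {}
    for label, n in label_counts.items():
        total = sum(word_counts[label].values())
        score = math.log(n / sum(label_counts.values()))  # label prior
        for word in text.lower().split():
            # Laplace smoothing so unseen words don't zero out a label.
            score += math.log((word_counts[label][word] + 1) / (total + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

wc, lc = train(TRAINING)
print(classify("we may share your email with advertising partners", wc, lc))
# -> third-party-sharing
```

The real system trains neural networks on a corpus of annotated policies, but the pipeline shape is the same: split the policy into segments, then assign each segment a category a reader can actually act on.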

Details at Wired


Original Submission

This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 0) by Anonymous Coward on Saturday February 10 2018, @08:16PM (4 children)

    by Anonymous Coward on Saturday February 10 2018, @08:16PM (#636115)

    to solve this little problem.

    • (Score: 3, Insightful) by fyngyrz on Saturday February 10 2018, @08:21PM (1 child)

      by fyngyrz (6567) on Saturday February 10 2018, @08:21PM (#636117) Journal

      Lawyers are strong AI. Sort of. And they are the problem.

      • (Score: -1, Troll) by Anonymous Coward on Saturday February 10 2018, @08:59PM

        by Anonymous Coward on Saturday February 10 2018, @08:59PM (#636125)

        What's this...? The Pool of Heroes just spit out two half-eaten candy bars in response to your comment! Hahahahahhahahahahhahahahahahaha!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! I've never seen such incorrectness of ultimatum...! Such a fuckin' thing! How does it feel now that your True Ferocity has not only been revealed to all, but been found to be utterly useless as well? You thoroughly disgust me. Your continued existence will only serve to poison everything around you with your sheer worthlessness. Vanish. Vanish. Vanish. Vanish. Vanish. Vanish. Vanish. Disappear!

    • (Score: 3, Informative) by Ethanol-fueled on Saturday February 10 2018, @11:17PM (1 child)

      by Ethanol-fueled (2792) on Saturday February 10 2018, @11:17PM (#636162) Homepage

      This problem doesn't have to be limited to privacy policies, but rather any policy. What steams my weenies is when entities you depend on change their policies and those changes are significant and detrimental. Of course nobody's going to read 10 pages of dense English and the vast majority of policy changes are for inconsequential things, but if you've come to be used to a specific behavior and that behavior changes, you can be really fucked by your sloth and naiveté. I found this out the hard way when I discovered by accident that my credit union was now starting to behave like a big bank.

      Projects such as this one are definitely relatively low-hanging fruit which should be tackled. I don't know much law, but I do have a good knack for translating complicated English into terse statements even simpletons can understand. Your English translations were a bitch to read, misters Freud, Dostoevsky, and Tolstoy; but they have served me well.

      • (Score: 1, Informative) by Anonymous Coward on Sunday February 11 2018, @07:52AM

        by Anonymous Coward on Sunday February 11 2018, @07:52AM (#636299)

        What steams my weenies is when entities you depend on change their policies and those changes are significant and detrimental. Of course nobody's going to read 10 pages of dense English and the vast majority of policy changes are for inconsequential things, but if you've come to be used to a specific behavior and that behavior changes, you can be really fucked by your sloth and naiveté.

        You mean like https://www.eff.org/deeplinks/2010/04/facebook-timeline [eff.org] or http://mattmckeon.com/facebook-privacy/ [mattmckeon.com]?

        The latter requires JS, here are the images. 1 [mattmckeon.com] 2 [mattmckeon.com] 3 [mattmckeon.com] 4 [mattmckeon.com] 5 [mattmckeon.com] 6 [mattmckeon.com]

  • (Score: 5, Insightful) by frojack on Saturday February 10 2018, @08:31PM (4 children)

    by frojack (1554) on Saturday February 10 2018, @08:31PM (#636121) Journal

    You can wax eloquent in your labels if you manufacture food in the US. You can extol the virtues and the spiritual energy it provides, and reference the toxins it cleanses.

    But sooner or later you have to come up with that legally mandated nutritional label and list of ingredients.

    Certainly we could come up with a simpler form than the long winded impenetrable lawyer language, and mandate that form be used and be binding.

    If patting a woman's behind can become a career-ending offense, with no trial or appeal, then surely lawyers' obfuscatory abuse of language should be punishable under some form of career-ending RICO statute.

    --
    No, you are mistaken. I've always had this sig.
    • (Score: 5, Insightful) by fyngyrz on Saturday February 10 2018, @08:53PM

      by fyngyrz (6567) on Saturday February 10 2018, @08:53PM (#636124) Journal

      If [an accusation of] patting a woman's behind can become a career ending offense, with no trial or appeal

      FTFY. But otherwise, yes.

    • (Score: 3, Interesting) by requerdanos on Saturday February 10 2018, @10:16PM

      by requerdanos (5997) Subscriber Badge on Saturday February 10 2018, @10:16PM (#636140) Journal

      You can wax eloquent in your labels if you manufacture food in the US.

      If you package oil (almost pure fat) in a spray can [scientificpsychic.com], you can get away with saying that it has "Total fat 0 grams" of fat-free goodness.

      (See, "less than half a gram of fat per serving" is the legal definition of "fat free," and a "serving" of spray-fat is, you guessed it, less than half a gram.)

      You can use the stuff to oil squeaky hinges and rusty bolts; the handwaving doesn't take any of the oil out of the oil can.
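The serving-size trick described above is just arithmetic, and can be made explicit. The 0.5 g threshold follows the comment's own statement of the rule; the serving sizes below are illustrative, not taken from any actual label.

```python
# Worked version of the labeling arithmetic: a product that is nearly
# all fat can still be labeled "0 g fat" if each serving is small enough.
FAT_FREE_THRESHOLD_G = 0.5   # per the comment: under 0.5 g/serving rounds to 0

def labeled_fat_grams(fat_fraction, serving_size_g):
    """Fat per serving as it appears on the label (0 when under threshold)."""
    fat_per_serving = fat_fraction * serving_size_g
    return 0.0 if fat_per_serving < FAT_FREE_THRESHOLD_G else fat_per_serving

# A spray oil that is ~100% fat, dispensed in hypothetical 0.25 g "servings":
print(labeled_fat_grams(1.0, 0.25))   # 0.0 -- "fat free"
# The same oil measured in a 14 g tablespoon serving:
print(labeled_fat_grams(1.0, 14.0))   # 14.0
```

Same substance, different serving size, opposite label: the rounding rule does all the work.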

    • (Score: 0) by Anonymous Coward on Sunday February 11 2018, @02:31AM (1 child)

      by Anonymous Coward on Sunday February 11 2018, @02:31AM (#636218)
      Legal language partly developed in order to reduce the ambiguities of meaning inherent in everyday human language. Legal English, for example, uses jargon often unfamiliar to non-lawyers (e.g. promissory estoppel, restrictive covenant, etc.) and makes use of words that look like familiar English words but are given a different, specific meaning; "consideration" in legal English usually means something of value exchanged under a contract rather than any of its other standard English meanings. They might have been better served by some kind of programming language, which cannot contain ambiguity at all.
      • (Score: 4, Insightful) by frojack on Sunday February 11 2018, @02:51AM

        by frojack (1554) on Sunday February 11 2018, @02:51AM (#636229) Journal

        Sigh. Yes, we all know that, Professor Obvious.

        But the communication fails, because nobody can understand it, so nobody reads, and THAT was the intent.

        --
        No, you are mistaken. I've always had this sig.
  • (Score: 1, Touché) by Anonymous Coward on Saturday February 10 2018, @08:32PM (1 child)

    by Anonymous Coward on Saturday February 10 2018, @08:32PM (#636122)

    Get legalese away from me
    You know I don't find this stuff
    Amusing anymore

    If you'll be my bodyguard
    I can be your long-lost pal
    I can call you Betty
    And Betty, when you call me
    You can call me Al

    • (Score: 0) by Anonymous Coward on Sunday February 11 2018, @08:32PM

      by Anonymous Coward on Sunday February 11 2018, @08:32PM (#636433)

      Man walks down the street, he asks,
      "Why am I so soft in the middle?"
      "Why am I so soft in the middle when
      the rest of my life is so hard?"

  • (Score: 2) by maxwell demon on Saturday February 10 2018, @09:17PM (4 children)

    by maxwell demon (1608) on Saturday February 10 2018, @09:17PM (#636130) Journal

    What if that AI misinterprets the fine print, and you accept a contract because of this? Will the authors of that AI be liable? And if not, did they state so in their fine print? And will the AI correctly interpret their own fine print?

    --
    The Tao of math: The numbers you can count are not the real numbers.
    • (Score: 2) by Runaway1956 on Saturday February 10 2018, @09:31PM

      by Runaway1956 (2926) Subscriber Badge on Saturday February 10 2018, @09:31PM (#636136) Journal

      Your AI will be phoning home, routinely, to compare results with other AIs. That's the "deep learning" part of the whole thing. Never mind that those AIs are also creating a database on you simple citizens. And don't worry about who is buying into that database. It's all for the greater good, and if you're doing nothing wrong, you'll have nothing to worry about.

    • (Score: 2) by requerdanos on Saturday February 10 2018, @10:41PM (1 child)

      by requerdanos (5997) Subscriber Badge on Saturday February 10 2018, @10:41PM (#636145) Journal

      will the AI correctly interpret

      Well, I asked it some questions about its own fine print.

      I asked about protection against my identity being revealed, and it spat out....

      To protect your personal information, we take reasonable precautions and follow industry best practices to make sure it is not inappropriately lost, misused, accessed, disclosed, altered or destroyed. All information exchanged with this website is encrypted using secure socket layer technology (SSL).

      But it gets better! Underneath the above, there was a Simplify option. Pressing it changed the above to...

      We take security measures to guard your data. We particularly secure data during transfer.

      If there was a "Simplify some more" I guess it would go down to "We're very careful" but that feature doesn't seem to be implemented.

      That simplify button impressed me, I have to say.

      • (Score: 3, Interesting) by Hyperturtle on Sunday February 11 2018, @12:29AM

        by Hyperturtle (2824) on Sunday February 11 2018, @12:29AM (#636184)

        Anyone here read Douglas Adams' book Dirk Gently's Holistic Detective Agency (he's more famously known as the author of The Hitchhiker's Guide)?

        He has prior art, I think; he invented the Electric Monk, a monk you bought so that it could believe in things for you. When people came to the door, you could just let the visitor talk to the Monk -- especially those people who ignore the "No Solicitors" sign.

        The Monk believed in everything for you, so that you could go about your life doing other more important things that required logic and dealing with facts.

        Of course, when you cross the Monk with cable television (and, I imagine, the Internet if it were written today), it tends to end up believing some pretty weird things -- or so the book played out...

        But the point is, an AI reading the policy for you in no way makes it agree to any of it, just as the Monk's owners in the book didn't actually believe any of the stuff it did.

        I doubt an AI owned by another company can do a good job keeping things private for you, since this one only reads the privacy policies out there for you. It's not responsible for what happens once you click Next to continue. After all, you'd need a Yes Man of some kind if you wanted something to *agree* to things for you, and some sort of legal framework to make sure that as your proxy, only its privacy is violated and not yours. Having a robotic overlord read legal filler to me doesn't really help much, beyond the chance that the robot paperclip could give advice based on the document type...

        Maybe, if we consider that corporations are people, you could get a Yes Man corporation of some kind that operates the privacy AI on your behalf as a service, and you could be your own customer and shareholder at once, telling it what to do but not being responsible for your own actions as passed through the Yes Man corporation?

        (Unfortunately, not only did they say we don't read privacy policies, I also didn't read the fine article... Wired is sort of like the Mondo 2000 of the modern internet. Just as it was then, Wired may be worth a look now and then, but usually the same info can be found in a less gimmicky way elsewhere.)

    • (Score: 1, Touché) by Anonymous Coward on Saturday February 10 2018, @11:03PM

      by Anonymous Coward on Saturday February 10 2018, @11:03PM (#636156)

      > What if that AI misinterprets the fine print, ...

      Written like a true lawyer!

  • (Score: 4, Insightful) by looorg on Saturday February 10 2018, @09:27PM (1 child)

    by looorg (578) on Saturday February 10 2018, @09:27PM (#636135)

    I'm not sure if it's sad or funny that we are using AI to try to interpret lawyers ... so the rest of us don't have to.

    • (Score: 4, Informative) by Ethanol-fueled on Saturday February 10 2018, @11:27PM

      by Ethanol-fueled (2792) on Saturday February 10 2018, @11:27PM (#636166) Homepage

      What's sad and/or funny is that formerly "alpha" professions like medicine and law are now an exhausting and unprofitable drag and will be less relevant from a human perspective.

      Soon the skilled trades and other technicians will be the most valuable for a long while, because servers, robots, plumbing, wiring, and construction can't physically fix themselves. Even in high-tech industry humans are preferable to modern machines for low-volume, high-profit work. If you've ever worked with microwave or millimeter-wave circuits (or, as another example, optical transceivers) you know this well because the pick-n-place mentality is not yet accurate enough to produce consistent results. This is why in the microwave/mm-wave world you still see weird shit like tuning filters by positioning coils and proximity caps with toothpicks or cactus needles, cutting traces with X-acto blades and a microscope on soft-substrate, zero-turn inductors (they look like a horseshoe), etc.

      Blue-collar for life, bitches. Suck it.

  • (Score: 2) by requerdanos on Saturday February 10 2018, @10:30PM (1 child)

    by requerdanos (5997) Subscriber Badge on Saturday February 10 2018, @10:30PM (#636141) Journal

    one group of academics has proposed a way to make those virtually illegible privacy policies into the actual tool of consumer protection they pretend to be: an artificial intelligence that's fluent in fine print.

    This sounds interesting and all, but have you read their privacy policy? ~500 words long!*

    -----------------------
    * Source: echo $(lynx -dump pribot.org/privacy.html | wc -w) words.

  • (Score: 5, Insightful) by shortscreen on Saturday February 10 2018, @11:04PM (1 child)

    by shortscreen (2252) on Saturday February 10 2018, @11:04PM (#636157) Journal

    99% of privacy policies, EULAs, and TOS could be translated for easier understanding by outputting ASCII art of a giant middle finger.

    • (Score: 2) by bzipitidoo on Sunday February 11 2018, @05:00AM

      by bzipitidoo (4388) on Sunday February 11 2018, @05:00AM (#636263) Journal

      I was thinking this was a useless application for AI. No one reads these policies because we know they are a waste of our time, and are difficult to impossible to enforce and police.

      It's even more of a waste of time if the entity with the privacy policy accepts fake info. Just use a fake name and whatever other fake info they demand, then their privacy policy doesn't matter, does it?

  • (Score: 4, Insightful) by crafoo on Sunday February 11 2018, @12:25AM

    by crafoo (6639) on Sunday February 11 2018, @12:25AM (#636182)

    If an AI can read and summarize it,
    then an AI can write it.

    Can we get an AI to pass the bar? That would be hilarious.

    Let me summarize all EULAs, ToS, privacy policies for everyone: "We are going to fuck you. We are going to sell every bit of information the law allows us to and then sell the rest and claim the Ooopsie-Woopsie defense later. Suck it."

  • (Score: 3, Insightful) by Rosco P. Coltrane on Sunday February 11 2018, @12:48AM (1 child)

    by Rosco P. Coltrane (4757) on Sunday February 11 2018, @12:48AM (#636186)

    If there's one thing you can count on, it's that, regardless of the wording, any privacy policy essentially allows whoever wrote it to snoop on you, exploit your private data any way they want, and sell your data to whomever they want for any reason they want; you have nothing to say about it, and you ain't getting a penny.

    No need for an AI, let alone reading a privacy policy, to know what it says. I've rarely read a privacy policy that says otherwise - and when it does, it's safe to assume it doesn't describe the company's true intentions, and you shouldn't trust them anyway.

    • (Score: 2) by pipedwho on Sunday February 11 2018, @09:11AM

      by pipedwho (2032) on Sunday February 11 2018, @09:11AM (#636311)

      So basically, if you let the AI automatically select the option that was in your best interest, it would always click 'Decline'. And you wouldn't be able to use anything.

      AI, protecting people from themselves.
