
posted by cmn32480 on Wednesday July 01 2015, @11:07AM   Printer-friendly
from the there-goes-the-neighborhood dept.

We just talked about Personal Info being Private Unless the Holder Decides to Sell It on SoylentNews. Today we were treated to yet another such situation, and this time it hit close to home.

El Reg reports that OpenDNS is in the process of being acquired by Cisco, and the OpenDNS founder's blog confirms it.

Cisco will essentially take total ownership, with only vague promises that OpenDNS will continue as before. Whatever the blog says, no promises about post-acquisition terms of service can be taken at face value.

OpenDNS managed to sneak a sale clause into their Privacy Policy somewhere along the way:

OpenDNS does not share, rent, trade or sell your Personal Information with third parties, except...

(4) it is necessary in connection with a sale of all or substantially all of the assets of OpenDNS or the merger of OpenDNS into another entity or any consolidation, share exchange, combination, reorganization, or like transaction in which OpenDNS is not the survivor; you will be notified via email and/or a prominent notice on our Web site of any change in ownership or uses of your Personal Information, as well as any choices you may have regarding your Personal Information.

That privacy policy has grown more permissive over the years, allowing OpenDNS to sell filter lists used by their customers, or just about anything else they might want to do.

Full Disclosure: In my day job we were a paying customer of OpenDNS. Our ISP ran unreliable DNS servers, injected ads into 404 pages, and was generally slow. We tried Google's free DNS service and found it quite fast, but full of redirects and other objectionable features. We switched to OpenDNS mostly for ad and website filtering, phishing-site blocking, and speed. We were very happy with the fast service over the years; it was so reliable we never had to look at the website.

But we were shocked at the extent of permissions creep in their Privacy Policy and Terms of Service. We thought we were avoiding Google's DNS mining service. Little did we know...


Original Submission

 
This discussion has been archived. No new comments can be posted.
  • (Score: 2, Interesting) by Anonymous Coward on Wednesday July 01 2015, @11:24AM

    by Anonymous Coward on Wednesday July 01 2015, @11:24AM (#203708)

    127.0.0.1 [unbound.net]

  • (Score: 0) by Anonymous Coward on Wednesday July 01 2015, @11:49AM

    by Anonymous Coward on Wednesday July 01 2015, @11:49AM (#203714)

    But a local DNS server still needs to get its data from somewhere (assuming you don't take the "look no further than 127.0.0.1" too literally; but then, a hosts file with a single entry for localhost would be sufficient anyway). So you're back to square one.

    • (Score: 1, Informative) by Anonymous Coward on Wednesday July 01 2015, @12:09PM

      by Anonymous Coward on Wednesday July 01 2015, @12:09PM (#203721)

      The thing that I linked to is a DNS server which can be used as a recursive resolver for your local host (or your network, if you want). It's lightweight, easy to configure, and runs on many platforms, including Windows. It does not need another DNS server to which it can forward all requests, as stub resolvers do. It walks the DNS hierarchy, starting at the root servers. It's also a validating DNSSEC resolver, so you get some additional security over trusting a remote resolver to give you untampered data.
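For reference, a minimal unbound.conf for that kind of setup might look like the sketch below. The option names are from Unbound's documentation; the trust-anchor path is an assumption and varies by platform and package.

```
server:
    interface: 127.0.0.1                  # listen only on localhost
    access-control: 127.0.0.0/8 allow     # answer only local clients
    # Note: no forward-zone is configured, so Unbound recurses from
    # the root servers itself instead of forwarding to a remote resolver.
    auto-trust-anchor-file: "/var/lib/unbound/root.key"  # DNSSEC validation
    hide-identity: yes
    hide-version: yes
```

Point /etc/resolv.conf (or your OS's equivalent) at 127.0.0.1 and the stub resolvers on the machine will use the local Unbound instance.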

      • (Score: 0) by Anonymous Coward on Wednesday July 01 2015, @04:36PM

        by Anonymous Coward on Wednesday July 01 2015, @04:36PM (#203818)

        Unbound Installation and Configuration instructions [calomel.org] from the guys who make the best (only?) SSL validator for Firefox. [calomel.org]

      • (Score: 2) by frojack on Wednesday July 01 2015, @05:09PM

        by frojack (1554) on Wednesday July 01 2015, @05:09PM (#203836) Journal

        The thing that I linked to is a DNS server

        I don't see any link in your post. What are you talking about?
        Or are you claiming to be the AC to which you replied? Or are you claiming to be ALL ACs?

        "Walking the DNS hierarchy starting at the root servers" has got to be the most abusive use of DNS I've ever heard of. Imagine if EVERY computer did that! A perfect recipe for crashing the internet. It's not supposed to work like that, and in fact it doesn't work like that.

        --
        No, you are mistaken. I've always had this sig.
        • (Score: 0) by Anonymous Coward on Wednesday July 01 2015, @05:34PM

          by Anonymous Coward on Wednesday July 01 2015, @05:34PM (#203850)

          Find the link at the top, it's the first post. And yes, recursive resolvers start at the root, but there's some caching involved. There are literally hundreds of root servers all over the world [root-servers.org], and they only need to serve the root zone. It's their job.

          • (Score: 3, Informative) by frojack on Wednesday July 01 2015, @05:38PM

            by frojack (1554) on Wednesday July 01 2015, @05:38PM (#203855) Journal

            and they only need to serve the root zone. It's their job.

            Do the math. Assume one hit to the root zone per hour for every PC and smartphone on earth.
            No end user should EVER be hitting a root server. NEVER.

            --
            No, you are mistaken. I've always had this sig.
            • (Score: 2, Informative) by Anonymous Coward on Wednesday July 01 2015, @05:59PM

              by Anonymous Coward on Wednesday July 01 2015, @05:59PM (#203871)

              Good thing the root servers aren't as lazy as you are. On the page about the root servers that I linked to is a link to a traffic analysis [ripe.net] of an event with heightened request rates on one root server. The operators decided not to block the erroneous traffic because it didn't cause problems, but they still increased capacity. Even before they upgraded though, that traffic analysis shows that just the additional requests, which almost exclusively went to a single K-root server, arrived at a rate of up to 40 thousand per second. Again, this is the additional load on top of the normal requests, on a single server, and it caused no problems, and they increased capacity nevertheless. 40 thousand requests per second per server is roughly 30 billion requests per hour on 200 servers. Still convinced that a recursive resolver on every device would crash the internets?

              • (Score: 2, Flamebait) by frojack on Wednesday July 01 2015, @07:13PM

                by frojack (1554) on Wednesday July 01 2015, @07:13PM (#203909) Journal

                single K-root server, arrived at a rate of up to 40 thousand per second.

                Again, do the math, you lazy bastard.
                One server imposed a load of 40k requests per second.

                Cisco estimates [cisco.com] there are 16 billion things connected to the internet. If ALL of them did as you recommend and sent their DNS requests directly to the root servers, there simply wouldn't be enough bandwidth to handle it all.

                This is why the internet and DNS servers are designed by experts, rather than on the advice of some random AC on a website.

                --
                No, you are mistaken. I've always had this sig.
                • (Score: 2, Informative) by Anonymous Coward on Wednesday July 01 2015, @07:52PM

                  by Anonymous Coward on Wednesday July 01 2015, @07:52PM (#203942)

                  One server handled an additional load of 40k requests per second, without a problem. The normal request rate is more like 4k requests per second, so there's ample headroom. If 16 billion devices each needed to make one request to the root servers per hour, it would raise the request rate (let's say averaged over 200 servers) by 16000000000/(200*3600) ≈ 22k/sec, less than the event described in the linked analysis. That's right: everyone on the whole internet using unshared recursive resolvers is less stress on the root servers than a single piece of misconfigured software in hardly more than one ASN in China.
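That back-of-the-envelope figure is easy to check. The 16 billion devices and the rough count of 200 root server instances are the numbers from the comments above, not authoritative values:

```python
# Estimate the extra per-server load if every internet device ran its own
# recursive resolver and hit the root zone once per hour.
devices = 16_000_000_000      # Cisco's "things on the internet" estimate
root_servers = 200            # rough count of root server instances
seconds_per_hour = 3600

extra_rate = devices / (root_servers * seconds_per_hour)
print(f"{extra_rate:.0f} requests/sec per server")  # ~22222 requests/sec
```

That is comfortably below the 40k/sec spike the RIPE analysis describes a single K-root instance absorbing without trouble.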

                • (Score: 4, Insightful) by Ezber Bozmak on Wednesday July 01 2015, @10:08PM

                  by Ezber Bozmak (764) on Wednesday July 01 2015, @10:08PM (#203977)

                  This is why the internet, and DNS servers are designed by experts rather than taking the advice of some random AC on a website.

                  Experts like Verisign and Nominet who contributed enough to Unbound to put their logos on the project's website? Those experts? Or someone calling themselves 'frojack' who is effectively anonymous?

  • (Score: 2) by captain normal on Wednesday July 01 2015, @05:03PM

    by captain normal (2205) on Wednesday July 01 2015, @05:03PM (#203833)

    Let's see now...an AC wants us to go to a supposedly open DNS server. But the site wants you to download something. Why, if it's an open DNS server, can't we just point our network connection straight at it, like we do with Google's DNS servers or OpenDNS?

    --
    "Everyone is entitled to his own opinion, but not to his own facts." --Daniel Patrick Moynihan
    • (Score: 2) by Ezber Bozmak on Wednesday July 01 2015, @10:12PM

      by Ezber Bozmak (764) on Wednesday July 01 2015, @10:12PM (#203979)

      > Why, if it's an open DNS server,

      Because it isn't an open DNS server.

      Unbound is a standard package in Debian, FreeBSD, OpenBSD, Fedora, CentOS and probably others.