We just talked about Personal Info being Private Unless the Holder Decides to Sell It on SoylentNews. Today we were treated to yet another such situation, and this time it hit close to home.
El Reg reports that OpenDNS is in the process of being acquired by Cisco, and the OpenDNS founder's blog confirms it.
Cisco will essentially take total ownership, with only vague promises that OpenDNS will continue as before. The blog notwithstanding, no promises about the terms of service after the acquisition can be believed.
OpenDNS managed to sneak a sale clause into their Privacy Policy somewhere along the way:
OpenDNS does not share, rent, trade or sell your Personal Information with third parties, except...
(4) it is necessary in connection with a sale of all or substantially all of the assets of OpenDNS or the merger of OpenDNS into another entity or any consolidation, share exchange, combination, reorganization, or like transaction in which OpenDNS is not the survivor; you will be notified via email and/or a prominent notice on our Web site of any change in ownership or uses of your Personal Information, as well as any choices you may have regarding your Personal Information.
That privacy policy has grown more permissive over the years, allowing OpenDNS to sell filter lists used by their customers, or just about anything else they might want to do.
Full Disclosure: In my day job we were a paying customer of OpenDNS. Our ISP ran unreliable DNS servers, injected ads into 404 pages, and was generally slow. We tried Google's free DNS service and found it quite fast, but full of redirects and other objectionable features. We switched to OpenDNS mostly for ad and website filtering, phishing-site blocking, and speed. We were very happy with the fast service over the years. So reliable we never had to look at the web site.
But we were shocked at the extent of permissions creep in their Privacy Policy and Terms of Service. We thought we were avoiding Google's DNS mining service. Little did we know...
(Score: 3, Informative) by frojack on Wednesday July 01 2015, @05:38PM
and they only need to serve the root zone. It's their job.
Do the math. Assume one hit to the root zone per hour for every PC and smartphone on earth.
No end user should EVER be hitting a root server. NEVER.
No, you are mistaken. I've always had this sig.
(Score: 2, Informative) by Anonymous Coward on Wednesday July 01 2015, @05:59PM
Good thing the root servers aren't as lazy as you are. On the page about the root servers that I linked to is a link to a traffic analysis [ripe.net] of an event with heightened request rates on one root server. The operators decided not to block the erroneous traffic because it didn't cause problems, but they still increased capacity. Even before they upgraded though, that traffic analysis shows that just the additional requests, which almost exclusively went to a single K-root server, arrived at a rate of up to 40 thousand per second. Again, this is the additional load on top of the normal requests, on a single server, and it caused no problems, and they increased capacity nevertheless. 40 thousand requests per second per server is roughly 30 billion requests per hour on 200 servers. Still convinced that a recursive resolver on every device would crash the internets?
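The scaling step in the comment above can be checked with a short script. Note the 200-server count is the comment's round figure for root anycast instances, not an exact number:

```python
# Scale the observed per-server burst rate up to the whole root system,
# following the comment's arithmetic.
burst_per_server = 40_000   # extra requests/sec observed on one K-root instance
servers = 200               # comment's approximation of root server instances
seconds_per_hour = 3600

per_hour = burst_per_server * servers * seconds_per_hour
print(f"{per_hour:,} requests/hour")  # 28,800,000,000, i.e. roughly 30 billion
```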
(Score: 2, Flamebait) by frojack on Wednesday July 01 2015, @07:13PM
single K-root server, arrived at a rate of up to 40 thousand per second.
Again, do the math you lazy bastard.
One server imposed a load of 40k per second.
Cisco estimates [cisco.com] there are 16 billion things connected to the internet. If ALL of them did as you recommend and sent their DNS requests directly to the root servers, there simply wouldn't be enough bandwidth to handle it all.
This is why the internet and DNS servers are designed by experts rather than on the advice of some random AC on a website.
No, you are mistaken. I've always had this sig.
(Score: 2, Informative) by Anonymous Coward on Wednesday July 01 2015, @07:52PM
One server handled an additional load of 40k requests per second, without a problem. The normal request rate is more like 4k requests per second, so there's ample headroom. If 16 billion devices needed to make one request to the root servers per hour, it would raise the request rate (let's say averaged over 200 servers) by 16000000000/(200*3600) ≈ 22k/sec, less than the event described in the linked analysis. That's right, everyone on the whole internet using unshared recursive resolvers is less stress on the root servers than a single piece of misconfigured software in hardly more than one ASN in China.
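The comment's estimate can be reproduced with a few lines of arithmetic. The device count is Cisco's 16 billion estimate and the 200-server figure is the averaging assumption used in the comment:

```python
# Back-of-envelope check of the root-server load estimate from the thread.
devices = 16_000_000_000    # connected "things" (Cisco's estimate)
servers = 200               # averaging assumption from the comment
seconds_per_hour = 3600

# One root-zone query per device per hour, spread evenly over all servers:
per_server_rate = devices / (servers * seconds_per_hour)
print(f"{per_server_rate:,.0f} requests/sec per server")  # 22,222

# Compare with the 40k extra requests/sec one server handled without issue:
print(per_server_rate < 40_000)  # True
```

So even under the worst-case assumption that every device queries the root directly, the per-server rate stays below a burst the roots have already absorbed.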
(Score: 4, Insightful) by Ezber Bozmak on Wednesday July 01 2015, @10:08PM
This is why the internet and DNS servers are designed by experts rather than on the advice of some random AC on a website.
Experts like Verisign and Nominet who contributed enough to Unbound to put their logos on the project's website? Those experts? Or someone calling themselves 'frojack' who is effectively anonymous?