
posted by Dopefish on Monday February 24 2014, @06:00PM   Printer-friendly
from the things-could-get-hairy dept.

mrbluze writes:

"A modified HTTP protocol is being proposed (the proposal is funded by AT&T) which would allow ISP's to decrypt and re-encrypt traffic as part of day to day functioning in order to save money on bandwidth through caching. The draft document states:

To distinguish between an HTTP2 connection meant to transport "https" URIs resources and an HTTP2 connection meant to transport "http" URIs resources, the draft proposes to 'register a new value in the Application Layer Protocol negotiation (ALPN) Protocol IDs registry specific to signal the usage of HTTP2 to transport "http" URIs resources: h2clr.'

The proposal is being criticized by Lauren Weinstein on the grounds that it gives end users a false sense of security, since they might believe their communications are actually secure. Could this also give an ISP an excuse to block or throttle HTTPS traffic?"
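
Below is a rough sketch, in Python, of what offering the draft's proposed ALPN token could look like from a client. "h2clr" is the identifier quoted above; the host name and the fallback protocol list are purely illustrative:

    import socket
    import ssl

    # Hypothetical client advertising the draft's "h2clr" token alongside the
    # existing HTTP/2 and HTTP/1.1 identifiers; whichever token the server
    # selects signals which flavour of connection it intends to speak.
    ctx = ssl.create_default_context()
    ctx.set_alpn_protocols(["h2clr", "h2", "http/1.1"])

    with socket.create_connection(("www.example.com", 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname="www.example.com") as tls:
            print("negotiated:", tls.selected_alpn_protocol())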

 
This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: -1, Troll) by Anonymous Coward on Monday February 24 2014, @06:14PM

    by Anonymous Coward on Monday February 24 2014, @06:14PM (#6010)

    End-to-end HTTPS breaks the Internet and has _always_ broken the internet by preventing caching. Lauren Weinstein is an old net pro and not the kind of person who would support proposals that are meant to hurt your security. This proposal repairs the damage done by HTTPS by allowing for a secure mechanism for caching to work again. It is _stupid_ for all of us to be downloading the same data in different encrypted streams to the origin server. The internet is staggering under the load of unnecessary duplicated information and all of us pay the price of that with slower downloads.

    Next time, please understand the proposal before writing panicky B.S. about it, or LEAVE IT FOR SOMEONE ELSE TO WRITE ABOUT.

  • (Score: 4, Insightful) by Anonymous Coward on Monday February 24 2014, @06:19PM

    by Anonymous Coward on Monday February 24 2014, @06:19PM (#6017)

    Fine but let's just have it requested with ins:// or nsa:// protocol strings to denote this mitm-enabled ssl is insecure!

  • (Score: 4, Funny) by maratumba on Monday February 24 2014, @06:22PM

    by maratumba (938) on Monday February 24 2014, @06:22PM (#6022) Journal

    He who sacrifices freedom (of watching Glee in HD) for security (of being anonymous) deserves neither.

    Wait...

  • (Score: 4, Insightful) by internetguy on Monday February 24 2014, @06:27PM

    by internetguy (235) on Monday February 24 2014, @06:27PM (#6028)

    >> It is _stupid_ for all of us to be downloading the same data

    I guess it's stupid until your web traffic is monitored and then used against you.

    --
    Sig: I must be new here.
  • (Score: 5, Informative) by mechanicjay on Monday February 24 2014, @06:27PM

    by mechanicjay (7) <reversethis-{gro ... a} {yajcinahcem}> on Monday February 24 2014, @06:27PM (#6030) Homepage Journal

    No, just no.

    The network provider should not be in the middle here -- ever, not even for caching of non-encrypted stuff.

    How many times have any of you been on the end of a support call where the end resolution is, "Wait for your ISP's transparent upstream proxy to refresh"?

    On the content provider side, there's no reason not to do some heavy caching behind the SSL off-load appliance. The whole point, though, is that you, the client, are establishing trust with the site you're talking to. Honestly, how is this any different from the phone company saying, "We're going to make sure to listen in on all your voice calls, so we can be sure the network is used efficiently"? That's not the point -- if your network can't handle the load, you need to build it out (and charge more if you need to).

    This is basically a sanctioned man-in-the-middle attack, between you and every secure site you access, more or less a built-in backdoor. I'm sure, these appliances wouldn't be prime targets for attacks or anything.

    It's almost as bad as the Clipper Chip, but for web browsers instead!

    --
    My VMS box beat up your Windows box.
    • (Score: 5, Informative) by frojack on Monday February 24 2014, @06:37PM

      by frojack (1554) on Monday February 24 2014, @06:37PM (#6038) Journal

      This!

      Client caches. Server validates cached elements. (The 304 return code has a purpose, people; learn it.)

      The network stays the hell out of this business.
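
      A minimal sketch of that revalidation flow from the client side, in Python (the URL is made up, and this assumes the third-party requests package):

          import requests

          # First fetch: the origin returns the resource plus a validator (ETag).
          first = requests.get("https://example.com/logo.png")
          etag = first.headers.get("ETag")

          # Revalidation: the client presents the validator. If nothing changed,
          # the server answers with a bare 304 and no body, so only headers
          # cross the wire -- no network middlebox required.
          second = requests.get("https://example.com/logo.png",
                                headers={"If-None-Match": etag} if etag else {})
          print(second.status_code)  # 304 when the cached copy is still valid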

      --
      No, you are mistaken. I've always had this sig.
      • (Score: 4, Insightful) by Kawumpa on Monday February 24 2014, @07:27PM

        by Kawumpa (1187) on Monday February 24 2014, @07:27PM (#6085)

        The network will try to enforce whatever suits its interests; whether it's net neutrality or privacy on the line doesn't matter. The providers eventually realised that flat-rate connectivity was a bad end-user business model to begin with, and there is a lot of value in snooping on every single bit of your online activity (see Facebook and Google).

        It's time we start encrypting all traffic end-to-end.

  • (Score: 5, Informative) by Sir Garlon on Monday February 24 2014, @06:32PM

    by Sir Garlon (1264) on Monday February 24 2014, @06:32PM (#6032)

    Lauren Weinstein is an old net pro and not the kind of person who would support proposals that are meant to hurt your security.

    Lauren Weinstein is a *critic* of the draft [ietf.org], not a supporter of it. Look at the list of authors: "Weinstein" is not there. Probably you just read TFA too quickly, but invoking Weinstein's name to support this proposal is like invoking Rush Limbaugh's name to support Obamacare.

    --
    [Sir Garlon] is the marvellest knight that is now living, for he destroyeth many good knights, for he goeth invisible.
  • (Score: 5, Insightful) by frojack on Monday February 24 2014, @06:32PM

    by frojack (1554) on Monday February 24 2014, @06:32PM (#6034) Journal

    End-to-end HTTPS breaks the Internet and has _always_ broken the internet by preventing caching.

    Sorry, but caching was never part of the network design.

    It was an afterthought, bolted on to handle the fact that there was insufficient bandwidth back in the days of dial-up modems. Those days are gone.

    The client and the server can and should decide which parts should be secure, and which parts can be insecure, which parts can be served from cache, and which parts must be sent again.

    The network should stay the hell out of that business. The road doesn't get to decide who rides in your car. The road is open or the road is closed, or the road is impaired. That's all it gets to tell us.

    If the page elements (logos and banners and images, etc.) haven't changed since the last time the client fetched them, all that gets sent back is a 304. Caching and conservation of bandwidth are built into the system where they should be: at the end-points.

    We don't need to fix what isn't broken. We don't need to let the network decide what load it will carry. This is utterly idiotic.
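
    A matching sketch of the server side of that 304 dance, using Python's standard http.server (content, port, and hash choice are all placeholders), with the end-point, not the network, deciding when nothing needs to be resent:

        import hashlib
        from http.server import BaseHTTPRequestHandler, HTTPServer

        # Placeholder content; the ETag is just a hash of the current bytes.
        BODY = b"<html><body>logo, banner, unchanged boilerplate</body></html>"
        ETAG = '"%s"' % hashlib.sha256(BODY).hexdigest()

        class Handler(BaseHTTPRequestHandler):
            def do_GET(self):
                if self.headers.get("If-None-Match") == ETAG:
                    self.send_response(304)   # nothing changed: no body resent
                    self.end_headers()
                    return
                self.send_response(200)
                self.send_header("ETag", ETAG)
                self.send_header("Content-Length", str(len(BODY)))
                self.end_headers()
                self.wfile.write(BODY)

        HTTPServer(("127.0.0.1", 8000), Handler).serve_forever()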

    --
    No, you are mistaken. I've always had this sig.
    • (Score: 3, Informative) by Sir Garlon on Monday February 24 2014, @06:40PM

      by Sir Garlon (1264) on Monday February 24 2014, @06:40PM (#6040)

      I totally agree with your firm defense of net neutrality, but I think you are mistaken about the days of insufficient bandwidth being "gone." It's pretty clear from the kerfuffle between Verizon and Netflix [arstechnica.com] that there is not enough capacity for today's video traffic. The bottleneck has moved from the endpoints to the internals of the network, but there will probably always be performance bottlenecks.

      --
      [Sir Garlon] is the marvellest knight that is now living, for he destroyeth many good knights, for he goeth invisible.
      • (Score: 2, Interesting) by sfm on Monday February 24 2014, @06:55PM

        by sfm (675) on Monday February 24 2014, @06:55PM (#6059)

        "It's pretty clear from the kerfuffle between Verizon and Netflix that there is not enough capacity for today's video traffic"

        Yes, but what is the requirement that Netflix outbound video be sent over HTTPS? Or are we just setting up for a time when all internet traffic is HTTPS?

        • (Score: 2) by Sir Garlon on Monday February 24 2014, @07:06PM

          by Sir Garlon (1264) on Monday February 24 2014, @07:06PM (#6065)

          I didn't mean to suggest that Netflix traffic in particular needs to be HTTPS, only that Netflix traffic demonstrates that bandwidth is still limited. This was in reply to GP saying "we don't need caching because the days of dial-up are over." Dial-up is gone but network constraints are still real, that was my only point.

          --
          [Sir Garlon] is the marvellest knight that is now living, for he destroyeth many good knights, for he goeth invisible.
          • (Score: 3, Insightful) by Ezber Bozmak on Monday February 24 2014, @11:58PM

            by Ezber Bozmak (764) on Monday February 24 2014, @11:58PM (#6291)

            It's only limited because Verizon deliberately underprovisions. I don't think it is reasonable to consider willful mismanagement as evidence of a resource shortage.

      • (Score: 5, Insightful) by frojack on Monday February 24 2014, @07:23PM

        by frojack (1554) on Monday February 24 2014, @07:23PM (#6079) Journal

        Then the problem is Netflix, and not a few tiny gifs and logos that the ISP can avoid fetching. This will save exactly nothing.

        Netflix is best fixed by moving content onto the ISP's network (which is exactly what they are doing), not by futzing around with the other traffic.

        --
        No, you are mistaken. I've always had this sig.
    • (Score: 2, Interesting) by calmond on Monday February 24 2014, @07:08PM

      by calmond (1826) on Monday February 24 2014, @07:08PM (#6069)

      In the new HTTP 2.0 proposal, all traffic is going to be https. That means that there is no choice about what will be http and https, since http will not be an option.

      This does mean that proxy servers won't work, since they can't see inside the https encrypted packet. While caching isn't as big of a deal with modern equipment, the other common use of a proxy server, filtering, is a big deal. A lot of places like K-12 schools are required by law to filter Internet traffic, which may not be possible with HTTP 2.0.

      As for the issues with trusting the likes of AT&T - yeah, right!

      • (Score: 5, Insightful) by dmc on Monday February 24 2014, @07:26PM

        by dmc (188) on Monday February 24 2014, @07:26PM (#6083)

        In the new HTTP 2.0 proposal, all traffic is going to be https. That means that there is no choice about what will be http and https, since http will not be an option.

        This does mean that proxy servers won't work, since they can't see inside the https encrypted packet. While caching isn't as big of a deal with modern equipment, the other common use of a proxy server, filtering, is a big deal. A lot of places like K-12 schools are required by law to filter Internet traffic, which may not be possible with HTTP 2.

        Wrong. Filtering for K-12 schools will still be possible without this; it will just need to be done on the client system instead of on a centralized proxy. If that is such an untenable situation (it isn't), then the HTTP2 proposers need to rewrite their protocol.

        This is a tremendously important issue for society. It involves something that could be used to put the equivalent of China's authoritarian (attempted) control of the internet for its citizens around the necks of *everyone*. The thing that should defeat it is Network Neutrality, making it illegal for ISPs to block all non-HTTP2 (this proposal) traffic. But since we no longer have NN, we need to fight back against these sorts of things. With NN, I'd say fine -- call it HTTP2, but let the meritocracy label and understand it for what it is: false security. Then everyone who wants real security will stick with HTTP1+https (+improvements in CA infra over what we have today).

        The nefariousness I smell coming off of this is that internet users' freedom to ignore such falsely-secure protocols will be taken away from them by ISPs blocking every protocol that doesn't have an NSA backdoor (like this one). Any standards body that blesses this by calling it HTTP2 will have forever lost my trust. Call it something else, make it optional, and fine -- the more options the merrier. But you can take my end-to-end, actually secure encryption out of my cold dead hands. I like my caching and filtering on my endpoint (or an interim proxy that _I control_), thank you very much.

        • (Score: 1) by calmond on Tuesday February 25 2014, @07:17PM

          by calmond (1826) on Tuesday February 25 2014, @07:17PM (#6851)

          I guess I should clarify a bit what I had in mind. Certainly, client utilities like Browse Control and others can work on the client in an all-HTTPS environment. I've set up transparent proxies in the past, though, to catch all client machines (tablets, smart phones, etc.), including those that may not have a client application installed. An all-HTTPS environment would render transparent proxies, and thus mandatory filtering of all network traffic in places like K-12 schools, impossible. Naturally, a school could simply deny access to devices it doesn't own and solve that problem.

          Having said all that, please don't misunderstand me: I am completely in favor of an all-HTTPS protocol. I'm just pointing out that any such move will have consequences.

          • (Score: 2) by dmc on Wednesday February 26 2014, @02:47AM

            by dmc (188) on Wednesday February 26 2014, @02:47AM (#7077)

            An all HTTPS environment would render transparent proxies, and thus mandatory filtering of all network traffic in places like K-12 schools, impossible. Naturally, a school could simply deny access to devices they don't own, and solve that problem.

            I think you just contradicted yourself. You went from impossible, to naturally problem solved in the space of two sentences.

            • (Score: 1) by calmond on Wednesday February 26 2014, @02:02PM

              by calmond (1826) on Wednesday February 26 2014, @02:02PM (#7276)

              No, not really. I said it is impossible to do this from a centralized server environment for all devices. A compromise would be to not allow all devices, but only the ones under your administrative control. This is not a contradiction, but a compromise.

      • (Score: 5, Insightful) by frojack on Monday February 24 2014, @07:37PM

        by frojack (1554) on Monday February 24 2014, @07:37PM (#6094) Journal

        This does mean that proxy servers won't work, since they can't see inside the https encrypted packet. While caching isn't as big of a deal with modern equipment, the other common use of a proxy server, filtering, is a big deal.

        Thank you for making it perfectly obvious that this proposal was NEVER about bandwidth management and was ALWAYS about spying, filtering, and control.

        You have, perhaps unwittingly, performed a great service to society by making this clear.

        --
        No, you are mistaken. I've always had this sig.
        • (Score: 1) by SMI on Tuesday February 25 2014, @06:24PM

          by SMI (333) on Tuesday February 25 2014, @06:24PM (#6801)

          Precisely. One of the first things that came to my mind when reading TFS is how much harder it's going to be to explain encryption to people. Now we'll have to explain that some encryption is real and works, while other schemes (like this) are deliberately designed to give a false sense of security to people who care somewhat about their privacy -- enough to be interested, but without the technical background to see this for what it really is.

    • (Score: 2, Interesting) by lennier on Monday February 24 2014, @08:17PM

      by lennier (2199) on Monday February 24 2014, @08:17PM (#6130)

      I agree that caching and re-encrypting encrypted data seems dodgy. But I'd also say the days of insufficient bandwidth are not gone and never will be (how many simultaneous streams of cat videos does the world need? Just one more!), and that a world of pervasive caching (done at the correct lower protocol level, not at the application level) is the Star Trek future of networking. At least that's the idea behind content centric networking [parc.com], which seems to have some big names in TCP/IP behind it.

      --
      Delenda est Beta
    • (Score: 2, Interesting) by hankwang on Monday February 24 2014, @08:53PM

      by hankwang (100) on Monday February 24 2014, @08:53PM (#6164) Homepage

      The client and the server can and should decide which parts should be secure, and which parts can be insecure, which parts can be served from cache, and which parts must be sent again.

      I can imagine scenarios where the data itself is not really secret, but where one would like to ensure that it is not tampered with while in transit. As far as I know, such a mechanism does not exist in HTTP nor in the present proposal. For software downloads (e.g. rpm and deb files), there is a signing mechanism. But if I want to install linux from a downloaded CD image, I would officially be supposed to check the checksum against the value that is... published over HTTP. Chicken-and-egg problem...
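
      For what it's worth, the verification step itself is trivial; the hard part is trusting where the expected value came from. A small Python sketch (file name and digest are placeholders):

          import hashlib

          # Placeholder values: in practice the expected digest is whatever the
          # mirror publishes -- which is exactly the chicken-and-egg problem if
          # that page is served over plain HTTP and isn't separately signed.
          EXPECTED_SHA256 = "replace-with-the-published-checksum"

          def sha256_of(path, chunk=1 << 20):
              h = hashlib.sha256()
              with open(path, "rb") as f:
                  while True:
                      block = f.read(chunk)
                      if not block:
                          break
                      h.update(block)
              return h.hexdigest()

          digest = sha256_of("downloaded.iso")
          print("OK" if digest == EXPECTED_SHA256 else "MISMATCH", digest)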

      • (Score: 2, Funny) by stderr on Tuesday February 25 2014, @12:12AM

        by stderr (11) on Tuesday February 25 2014, @12:12AM (#6299) Journal

        But if I want to install linux from a downloaded CD image, I would officially be supposed to check the checksum against the value that is... published over HTTP. Chicken-and-egg problem...

        If only there could be a signature file [debian.org] right next to the checksum file [debian.org], so you could check if someone tampered with the checksum file...

        Too bad that won't be possible any time soon...

        --
        alias sudo="echo make it yourself #" # ... and get off my lawn!
        • (Score: 1) by hankwang on Tuesday February 25 2014, @03:19AM

          by hankwang (100) on Tuesday February 25 2014, @03:19AM (#6362) Homepage

          "a signature file right next to the checksum file, so you could check if someone tampered with the checksum file..."

          And how do I know that it is the original signature file if I get it over HTTP? Plus it is a pain to deal with it manually.

    • (Score: 2) by TheLink on Wednesday February 26 2014, @03:22AM

      by TheLink (332) on Wednesday February 26 2014, @03:22AM (#7095) Journal

      On the other hand ISPs can run bittorrent caching servers that automatically cache popular torrents. The issue is getting sued by the **AA and other non-tech-related unpleasantness.

  • (Score: 3, Insightful) by RobotMonster on Monday February 24 2014, @06:53PM

    by RobotMonster (130) on Monday February 24 2014, @06:53PM (#6055) Journal

    End-to-end HTTPS breaks the Internet

    I don't think you know what the Internet is.

    Next time, please understand what words mean before writing panicky B.S. using them, or LEAVE IT FOR SOMEONE ELSE TO WRITE ABOUT.

    • (Score: 3, Interesting) by dmc on Monday February 24 2014, @07:32PM

      by dmc (188) on Monday February 24 2014, @07:32PM (#6091)

      This was clearly an Anonymous Coward either acting directly (or more likely indirectly) as a shill for authoritarians that would have ISPs block all end-to-end HTTPS1 once NSA-friendly HTTP2 is widely adopted. Of course, if enough people understood Network Neutrality, and we reinstated it, it would be illegal for ISPs to do such blocking.

      Note that what I said doesn't discount the usefulness of this new protocol _for some or even many people_. But having it proposed as "HTTP2" smells like an authoritarian way to make HTTPSv1 illegal (legal for ISPs to block) once this new thing is widely used.

      • (Score: 2) by frojack on Monday February 24 2014, @07:52PM

        by frojack (1554) on Monday February 24 2014, @07:52PM (#6108) Journal

        Why don't you discount the usefulness of this new protocol?
        Who precisely will find it to be useful?

        That you can excuse it so lightly, in light of what you have posted upthread, sounds like you are slowly coming around to the "Won't someone please think of the Children" argument, or that you don't understand how caching should (and does) work.

           

        --
        No, you are mistaken. I've always had this sig.
  • (Score: 3, Insightful) by sglane on Monday February 24 2014, @06:53PM

    by sglane (3133) on Monday February 24 2014, @06:53PM (#6056)

    I wouldn't call caching "broken", since CDNs work well even over SSL. CDNs are a better idea than allowing an ISP to mangle HTTP by injecting its own headers and modifying the contents. By reworking TLS to have a "Trusted Proxy", you're removing the core concept of Transport Layer Security, since you can't trust a proxy.

    • (Score: 2, Informative) by mechanicjay on Monday February 24 2014, @07:18PM

      by mechanicjay (7) <reversethis-{gro ... a} {yajcinahcem}> on Monday February 24 2014, @07:18PM (#6074) Homepage Journal

      I understand your point, but I'd be careful with such a blanket statement.

      When you connect via https to the infrastructure here:

      HTTPS connections are handled by our load balancer, which terminates your SSL connection and *proxies* your traffic to the back-end web infrastructure in the clear. This is of course done via a private, non-routable network. In this case the proxy is trusted -- but it's a proxy acting as an agent of the site you're trying to trust, so everything is on the up and up. As far as you the client are concerned, you're secure to the server's front door; after that, anything goes. This is a fairly common way to run anything larger than a single-server infrastructure.
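
      A rough sketch of that termination model in Python (hostnames, ports, and certificate paths are made up): the balancer holds the site's own certificate, so the client's trust still ends at an agent of the site rather than at the ISP:

          import socket
          import ssl
          import threading

          BACKEND = ("10.0.0.5", 8080)   # backend web server on the private net

          ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
          ctx.load_cert_chain("site.crt", "site.key")   # the site's cert, not an ISP's

          def pump(src, dst):
              # Copy bytes one way until the connection closes.
              while True:
                  data = src.recv(4096)
                  if not data:
                      break
                  dst.sendall(data)

          with socket.create_server(("0.0.0.0", 443)) as listener:
              while True:
                  raw, _addr = listener.accept()
                  client = ctx.wrap_socket(raw, server_side=True)  # TLS ends here
                  backend = socket.create_connection(BACKEND)      # plaintext hop
                  threading.Thread(target=pump, args=(client, backend), daemon=True).start()
                  threading.Thread(target=pump, args=(backend, client), daemon=True).start()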

      --
      My VMS box beat up your Windows box.
  • (Score: 5, Funny) by dyingtolive on Monday February 24 2014, @07:20PM

    by dyingtolive (952) on Monday February 24 2014, @07:20PM (#6076)

    Hey, SN has a shill!

    --
    Don't blame me, I voted for moose wang!
  • (Score: 3, Insightful) by Wootery on Monday February 24 2014, @07:21PM

    by Wootery (2341) on Monday February 24 2014, @07:21PM (#6078)

    Ridiculous. A good deal (probably most?) of the value of HTTPS is that it protects you from your ISP messing with the page contents or spying on you (whether on behalf of a government or for its own reasons).

    On top of being worthless, it adds complexity in a user-facing way. User awareness of HTTPS is reasonably good; there's no need for another scheme with another way of identifying itself, or to burden users with learning what it does.

    This [netflix.com] new [arstechnica.com] caching solution for Netflix sounds worthwhile, though.

  • (Score: 3, Interesting) by gallondr00nk on Monday February 24 2014, @07:42PM

    by gallondr00nk (392) on Monday February 24 2014, @07:42PM (#6098)

    We've already got HTTP for unencrypted traffic, and HTTPS for encrypted traffic. What else do we really need? If it's too important to leave unencrypted, having it decoded midstream is too much of a risk. If it isn't, what's wrong with HTTP?

    If the NSA revelations have done anything, they've sparked a desire for encryption that won't let up anytime soon. Perhaps our ISPs are trustworthy enough to act as a proxy (hah), but we all know damn well the NSA aren't.

    There's a delicious irony that AT&T, the infamous host of Room 641A [wikipedia.org], are proposing standards changes because they feel encryption is hurting their profitability.

    • (Score: 0) by lennier on Monday February 24 2014, @08:33PM

      by lennier (2199) on Monday February 24 2014, @08:33PM (#6146)

      "We've already got HTTP for unencrypted traffic, and HTTPS for encrypted traffic. What else do we really need?"

      I'd say that what the Web needs, and has needed for a long time, is a protocol for transcluding independently encrypted sub-page units of data. That would be a happy mix between 'encrypt all' and 'encrypt nothing'.

      Your average modern social/messaging Web 'page', for example on a blog or comment forum -- anything except a corporate brochure site -- contains maybe a header, a bunch of rotating ads, and a whole collection of post or comment units. The thing about all these sub-units is that they mostly don't change after you've visited the page once, and there are often a *lot* of them. Like, hundreds to thousands to millions. So it seems pretty dumb for the Web architecture, either on the server or the proxy, to be recreating and failing to cache all these units that make up the majority of your page when it could just include them independently. Then your page would be a very small list of identifiers of content sections pre-fetched or found elsewhere.

      It would take a huge amount of load off servers and give small blogs an edge against huge outfits like Facebook that can afford ridiculous amounts of server farms and CDNs to make up for a simple oversight in the design of HTTP. It would also reduce the amount of Javascript needed and make tricks like AJAX less necessary, if the underlying HTTP protocol were aware of sub-page units. Finally, it would mean Web end users could pool and share bandwidth and avoid getting hit with broadband overage fees (most of the planet doesn't have endless monthly free Internet traffic like the main urban centres of the USA do); mass caching could also make disruptive technologies like mesh routing useful.

      Of course, efficiency and security are at each other's throats, so there'd be a balance with all of this. But generating and encrypting a page as a unit _when the page is not actually the fundamental unit of the data being transferred_ but just a temporary display/UI mechanism seems just a bit, well, wrong to me.
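
      A toy sketch of that sub-unit idea in Python (nothing to do with any real CCN stack): units are addressed by the hash of their bytes, a 'page' is just a list of those identifiers, and anything already sitting in a nearby cache never has to travel again:

          import hashlib

          store = {}   # stands in for the origin server

          def put(unit: bytes) -> str:
              ident = hashlib.sha256(unit).hexdigest()
              store[ident] = unit
              return ident

          def fetch(ident: str, cache: dict) -> bytes:
              # The identifier itself proves integrity, so it doesn't matter
              # whether the bytes come from a nearby cache or the origin.
              if ident not in cache:
                  cache[ident] = store[ident]   # stand-in for a network fetch
              return cache[ident]

          # The "page" is nothing but a short list of content identifiers.
          page = [put(b"header"), put(b"post #1234"), put(b"post #5678")]
          local_cache = {}
          print(b"\n".join(fetch(i, local_cache) for i in page).decode())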

      --
      Delenda est Beta
      • (Score: 2, Insightful) by mindriot on Monday February 24 2014, @09:14PM

        by mindriot (928) on Monday February 24 2014, @09:14PM (#6185)
        That idea is fine as long as you can ensure that an adversary can learn nothing about encrypted sub-page units from the unencrypted or known-plaintext sub-page units accompanying it. Otherwise you've just magnified the metadata problem...
        --
        soylent_uid=$(echo $slash_uid|cut -c1,3,5)
        • (Score: 1) by lennier on Monday February 24 2014, @09:53PM

          by lennier (2199) on Monday February 24 2014, @09:53PM (#6222)

          Yes, known plaintext would be a problem, as would be metadata; even if a sub-unit is encrypted, it's still got an identity so it's possible to know 'that' Post #1234 was transmitted to Endpoint #5678 even if not 'what' Post #1234 is. And I suspect every content-centric network would have that kind of issue.

          Although in a network with pervasive caching at all levels (like at the switch/router level, as CCN advocates recommend), there _should_ be some natural shielding, because if anyone in your organisation requests Post #1234, your proxy fetches it only once and caches it for a long time, so any further accesses you make to it don't go beyond your organisational boundaries. And your hostile upstream ISP would only know that the request for access went to your organisation, not which endpoint was requesting access. It wouldn't be quite as good as onion routing, but it should be a lot better than current HTTPS IP logging.

          --
          Delenda est Beta
          • (Score: 1) by mindriot on Monday February 24 2014, @10:06PM

            by mindriot (928) on Monday February 24 2014, @10:06PM (#6235)

            I guess you're right in that metadata exploitation would be somewhat hindered by the anonymity afforded by a caching proxy (although that assumes that adversaries/certain agencies will not have access to your organization's proxy).

            The bigger problem I see is that there is not only the metadata problem to cope with, there is also the problem that only tech-savvy users would even be aware of its existence while everyone else could fall for an illusion of security -- "the important sub-units are secure, so I'm perfectly fine and I can do whatever I want".

            But it's quite possible that I'm overly worried about this.

            --
            soylent_uid=$(echo $slash_uid|cut -c1,3,5)
  • (Score: 4, Insightful) by WildWombat on Monday February 24 2014, @07:57PM

    by WildWombat (1428) on Monday February 24 2014, @07:57PM (#6111)

    --"End-to-end HTTPS breaks the Internet and has _always_ broken the internet by preventing caching."

    Bullshit. Not caching every single thing that crosses over the wires does not break the internet. Not everything should or needs to be cached.

    --"Lauren Weinstein is an old net pro and not the kind of person who would support proposals that are meant to hurt your security."

    And RSA isn't the kind of organization that would purposefully weaken their product at the behest of the NSA. Oh, wait, they did. And Weinstein is purposefully pushing a proposal that is obviously and undeniably going to drastically weaken net security, whatever his previous reputation. Anyone want to guess why?

    --"The internet is staggering under the load of unnecessary duplicated information and all of us pay the price of that with slower downloads."

    And more unadulterated bullshit. If you look at what the main loads on the internet are during peak hours, there are two major sources: Netflix and YouTube. That's something like 50% of the bandwidth use during peak hours. These can be cached, in fact, and Netflix will provide computers to do just that if the ISP cooperates. Many other major bandwidth-intensive sites already use Akamai or another CDN. The rest of the traffic, small-scale text and a few PNGs, is rather trivial. Not caching the https session between me and my bank doesn't fucking bring the internet to its knees. We need more security on the net, not less, and especially not less for bullshit made-up reasons.

    So, in short, fuck off you NSA shill.

    Cheers,
    -WW

  • (Score: 3, Insightful) by hemocyanin on Monday February 24 2014, @08:14PM

    by hemocyanin (186) on Monday February 24 2014, @08:14PM (#6126) Journal

    Whatever. This is AT&T. You might as well just say "the NSA has proposed pseudo-HTTPS to make the internet work better." It would be just as honest as AT&T's proposal.

  • (Score: 1) by dude on Monday February 24 2014, @10:10PM

    by dude (3206) on Monday February 24 2014, @10:10PM (#6238)

    Shillin' for the man

  • (Score: 2, Insightful) by forsythe on Tuesday February 25 2014, @02:56AM

    by forsythe (831) on Tuesday February 25 2014, @02:56AM (#6351)

    The internet is staggering under the load of unnecessary duplicated information

    Actually, it's doing quite fine. Your downloads, however, are staggering under the load of unnecessary bandwidth caps.