
posted by Dopefish on Monday February 24 2014, @06:00PM
from the things-could-get-hairy dept.

mrbluze writes:

"A modified HTTP protocol is being proposed (the proposal is funded by AT&T) which would allow ISP's to decrypt and re-encrypt traffic as part of day to day functioning in order to save money on bandwidth through caching. The draft document states:

To distinguish between an HTTP2 connection meant to transport "https" URIs resources and an HTTP2 connection meant to transport "http" URIs resource, the draft proposes to 'register a new value in the Application Layer Protocol Negotiation (ALPN) Protocol IDs registry specific to signal the usage of HTTP2 to transport "http" URIs resources: h2clr.'

The proposal has been criticized by Lauren Weinstein on the grounds that it gives end users a false sense of security: they may believe their communications are actually secure when they are not. Could this give an ISP an excuse to block or throttle HTTPS traffic?"
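
For context, ALPN is the TLS extension the draft would use for this signaling. Below is a rough sketch (not from the draft itself; the hostname is a placeholder) of a client offering the proposed h2clr token during the TLS handshake, using Python's ssl module:

import socket
import ssl

host = "www.example.com"  # placeholder origin

ctx = ssl.create_default_context()
# Offer the draft's proposed token first, with plain HTTP/1.1 as a fallback.
# "h2clr" is the value the draft asks to have registered; everything else
# here is ordinary TLS client code.
ctx.set_alpn_protocols(["h2clr", "http/1.1"])

with socket.create_connection((host, 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname=host) as tls:
        # The server's choice reveals whether it agreed to carry
        # cleartext "http" resources over this TLS connection.
        print("negotiated:", tls.selected_alpn_protocol())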

This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 5, Funny) by Anonymous Coward on Monday February 24 2014, @06:09PM

    by Anonymous Coward on Monday February 24 2014, @06:09PM (#6004)

    as anonymous at least.

    • (Score: 4, Insightful) by Anonymous Coward on Monday February 24 2014, @08:44PM

      by Anonymous Coward on Monday February 24 2014, @08:44PM (#6155)

      Can this provide an ISP with an excuse to block or throttle HTTPS traffic?

      It saves the NSA from having to decrypt traffic themselves.

  • (Score: -1, Troll) by Anonymous Coward on Monday February 24 2014, @06:14PM

    by Anonymous Coward on Monday February 24 2014, @06:14PM (#6010)

    End-to-end HTTPS breaks the Internet and has _always_ broken the internet by preventing caching. Lauren Weinstein is an old net pro and not the kind of person who would support proposals that are meant to hurt your security. This proposal repairs the damage done by HTTPS by allowing for a secure mechanism for caching to work again. It is _stupid_ for all of us to be downloading the same data in different encrypted streams to the origin server. The internet is staggering under the load of unnecessary duplicated information and all of us pay the price of that with slower downloads.

    Next time, please understand the proposal before writing panicky B.S. about it, or LEAVE IT FOR SOMEONE ELSE TO WRITE ABOUT.

    • (Score: 4, Insightful) by Anonymous Coward on Monday February 24 2014, @06:19PM

      by Anonymous Coward on Monday February 24 2014, @06:19PM (#6017)

      Fine, but let's just have it requested with ins:// or nsa:// protocol strings to denote that this MITM-enabled SSL is insecure!

    • (Score: 4, Funny) by maratumba on Monday February 24 2014, @06:22PM

      by maratumba (938) on Monday February 24 2014, @06:22PM (#6022) Journal

      He who sacrifices freedom (of watching Glee in HD) for security (of being anonymous) deserves neither.

      Wait...

    • (Score: 4, Insightful) by internetguy on Monday February 24 2014, @06:27PM

      by internetguy (235) on Monday February 24 2014, @06:27PM (#6028)

      >> It is _stupid_ for all of us to be downloading the same data

      I guess it's stupid until your web traffic is monitored and then used against you.

      --
      Sig: I must be new here.
    • (Score: 5, Informative) by mechanicjay on Monday February 24 2014, @06:27PM

      by mechanicjay (7) <mechanicjayNO@SPAMsoylentnews.org> on Monday February 24 2014, @06:27PM (#6030) Homepage Journal

      No, just no.

      The network provider should not be in the middle here -- ever, not even for caching of non-encrypted stuff.

      How many times have any of you been on the end of a support call where the end resolution is, "Wait for your ISP's transparent upstream proxy to refresh"?

      On the Content provider side, there's no reason not to do some heavy caching behind the SSL off-load appliance. The whole point, though, is that you, the client, are establishing trust with the site you're talking to. Honestly, how is this any different from the phone company saying, "We're going to make sure to listen in on all your voice calls, so we can be sure the network is used efficiently"? That's not the point -- if your network can't handle the load, you need to build it out (and charge more if you need to).

      This is basically a sanctioned man-in-the-middle attack, between you and every secure site you access, more or less a built-in backdoor. I'm sure, these appliances wouldn't be prime targets for attacks or anything.

      It's almost as bad as the Clipper Chip, but for web browsers instead!

      --
      My VMS box beat up your Windows box.
      • (Score: 5, Informative) by frojack on Monday February 24 2014, @06:37PM

        by frojack (1554) on Monday February 24 2014, @06:37PM (#6038) Journal

        This!

        Client caches. Server validates cached elements. (The 304 return code has a purpose, people; learn it.)

        The network stays the hell out of this business.
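
        As a rough sketch of what that endpoint-to-endpoint revalidation looks like (an illustration only; host and path are placeholders):

        # Conditional GET: the client keeps its cached copy plus the ETag
        # validator, and the origin answers 304 Not Modified when nothing
        # changed, so no body is re-sent and nothing in the middle needs
        # to look inside the connection.
        import http.client

        HOST, PATH = "www.example.org", "/logo.png"

        # First fetch: cache the body and the ETag validator.
        conn = http.client.HTTPSConnection(HOST)
        conn.request("GET", PATH)
        resp = conn.getresponse()
        cached_body, etag = resp.read(), resp.getheader("ETag")
        conn.close()

        # Later revalidation: present the validator; 304 means "use your cache".
        conn = http.client.HTTPSConnection(HOST)
        conn.request("GET", PATH, headers={"If-None-Match": etag} if etag else {})
        resp = conn.getresponse()
        body = cached_body if resp.status == 304 else resp.read()
        print(resp.status, len(body), "bytes")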

        --
        No, you are mistaken. I've always had this sig.
        • (Score: 4, Insightful) by Kawumpa on Monday February 24 2014, @07:27PM

          by Kawumpa (1187) on Monday February 24 2014, @07:27PM (#6085)

          The network will try to enforce whatever suits its interests; whether it's net neutrality or privacy doesn't matter. The providers eventually realised that flat-rate connectivity was a bad end-user business model to begin with, and that there is a lot of value in snooping on every single bit of your online activity (see Facebook and Google).

          It's time we start encrypting all traffic end-to-end.

    • (Score: 5, Informative) by Sir Garlon on Monday February 24 2014, @06:32PM

      by Sir Garlon (1264) on Monday February 24 2014, @06:32PM (#6032)

      Lauren Weinstein is an old net pro and not the kind of person who would support proposals that are meant to hurt your security.

      Lauren Weinstein is a *critic* of the draft [ietf.org], not a supporter of it. Look at the list of authors: "Weinstein" is not there. Probably you just read TFA too quickly, but invoking Weinstein's name to support this proposal is like invoking Rush Limbaugh's name to support Obamacare.

      --
      [Sir Garlon] is the marvellest knight that is now living, for he destroyeth many good knights, for he goeth invisible.
    • (Score: 5, Insightful) by frojack on Monday February 24 2014, @06:32PM

      by frojack (1554) on Monday February 24 2014, @06:32PM (#6034) Journal

      End-to-end HTTPS breaks the Internet and has _always_ broken the internet by preventing caching.

      Sorry, but caching was never part of the network's design.

      It was an after-thought, bolted on to handle the fact that there was insufficient bandwidth, back in the days of dial-up modems. Those days are gone.

      The client and the server can and should decide which parts should be secure, and which parts can be insecure, which parts can be served from cache, and which parts must be sent again.

      The network should stay the hell out of that business. The road doesn't get to decide who rides in your car. The road is open or the road is closed, or the road is impaired. That's all it gets to tell us.

      If the page elements (logos, banners, images, etc.) haven't changed since the last time the client fetched them, all that gets sent back is a 304. Caching and conservation of bandwidth are built into the system where they should be: at the end-points.

      We don't need to fix what isn't broken. We don't need to let the network decide what load it will carry. This is utterly idiotic.

      --
      No, you are mistaken. I've always had this sig.
      • (Score: 3, Informative) by Sir Garlon on Monday February 24 2014, @06:40PM

        by Sir Garlon (1264) on Monday February 24 2014, @06:40PM (#6040)

        I totally agree with your firm defense of net neutrality, but I think you are mistaken about the days of insufficient bandwidth being "gone." It's pretty clear from the kerfuffle between Verizon and Netflix [arstechnica.com] that there is not enough capacity for today's video traffic. The bottleneck has moved from the endpoints to the internals of the network, but there will probably always be performance bottlenecks.

        --
        [Sir Garlon] is the marvellest knight that is now living, for he destroyeth many good knights, for he goeth invisible.
        • (Score: 2, Interesting) by sfm on Monday February 24 2014, @06:55PM

          by sfm (675) on Monday February 24 2014, @06:55PM (#6059)

          "It's pretty clear from the kerfuffle between Verizon and Netflix that there is not enough capacity for today's video traffic"

          Yes, but where is the requirement that Netflix outbound video be sent over HTTPS? Or are we just setting up for a time when all internet traffic is HTTPS?

          • (Score: 2) by Sir Garlon on Monday February 24 2014, @07:06PM

            by Sir Garlon (1264) on Monday February 24 2014, @07:06PM (#6065)

            I didn't mean to suggest that Netflix traffic in particular needs to be HTTPS, only that Netflix traffic demonstrates that bandwidth is still limited. This was in reply to GP saying "we don't need caching because the days of dial-up are over." Dial-up is gone but network constraints are still real, that was my only point.

            --
            [Sir Garlon] is the marvellest knight that is now living, for he destroyeth many good knights, for he goeth invisible.
            • (Score: 3, Insightful) by Ezber Bozmak on Monday February 24 2014, @11:58PM

              by Ezber Bozmak (764) on Monday February 24 2014, @11:58PM (#6291)

              It's only limited because Verizon deliberately underprovisions. I don't think it is reasonable to consider willful mismanagement as evidence of a resource shortage.

        • (Score: 5, Insightful) by frojack on Monday February 24 2014, @07:23PM

          by frojack (1554) on Monday February 24 2014, @07:23PM (#6079) Journal

          Then the problem is Netflix, and not a few tiny gifs and logos that the ISP can avoid fetching. This will save exactly nothing.

          Netflix is best fixed by moving content to the ISPs network (which is exactly what they are doing), not by futzing around with the other traffic.

          --
          No, you are mistaken. I've always had this sig.
      • (Score: 2, Interesting) by calmond on Monday February 24 2014, @07:08PM

        by calmond (1826) on Monday February 24 2014, @07:08PM (#6069)

        In the new HTTP 2.0 proposal, all traffic is going to be https. That means that there is no choice about what will be http and https, since http will not be an option.

        This does mean that proxy servers won't work, since they can't see inside the https encrypted packet. While caching isn't as big of a deal with modern equipment, the other common use of a proxy server, filtering, is a big deal. A lot of places like K-12 schools are required by law to filter Internet traffic, which may not be possible with HTTP 2.0.

        As for the issues with trusting the likes of AT&T - yeah, right!

        • (Score: 5, Insightful) by dmc on Monday February 24 2014, @07:26PM

          by dmc (188) on Monday February 24 2014, @07:26PM (#6083)

          In the new HTTP 2.0 proposal, all traffic is going to be https. That means that there is no choice about what will be http and https, since http will not be an option.

          This does mean that proxy servers won't work, since they can't see inside the https encrypted packet. While caching isn't as big of a deal with modern equipment, the other common use of a proxy server, filtering, is a big deal. A lot of places like K-12 schools are required by law to filter Internet traffic, which may not be possible with HTTP 2.

          Wrong. Filtering for K-12 schools will still be possible without this; it will just need to be done on the client system instead of on a centralized proxy. If that is such an untenable situation (it isn't), then the HTTP2 proposers need to rewrite their protocol.

          This is a tremendously important issue for society. It involves something that could be used to put the equivalent of China's authoritarian (attempted) control of the internet around the necks of *everyone*. The thing that should defeat it is Network Neutrality, making it illegal for ISPs to block all traffic that isn't HTTP2 (this proposal). But since we no longer have NN, we need to fight back against these sorts of things. With NN I'd say fine: call it HTTP2, but let the meritocracy label and understand it for what it is, namely false security. Then everyone who wants real security will stick with HTTP1+https (plus improvements in CA infrastructure over what we have today).

          The nefariousness I smell coming off of this is that internet users' freedom to ignore such falsely-secure protocols will be taken away from them by ISPs blocking every protocol that doesn't have an NSA backdoor (like this one). Any standards body that blesses this by calling it HTTP2 will have forever lost my trust. Call it something else, make it optional, and fine: the more options the merrier. But you can take my end-to-end, actually-secure encryption out of my cold dead hands. I like my caching and filtering on my endpoint (or an interim proxy that _I control_), thank you very much.

          • (Score: 1) by calmond on Tuesday February 25 2014, @07:17PM

            by calmond (1826) on Tuesday February 25 2014, @07:17PM (#6851)

            I guess I should clarify a bit what I had in mind. Certainly client utilities like Browse Control and others can work on the client in an all-HTTPS environment. I've set up transparent proxies in the past, though, to catch all client machines (tablets, smart phones, etc.), including those that may not have a client application installed. An all-HTTPS environment would render transparent proxies, and thus mandatory filtering of all network traffic in places like K-12 schools, impossible. Naturally, a school could simply deny access to devices it doesn't own and solve that problem.

            Having said all that, please don't misunderstand me: I am completely in favor of an all-HTTPS protocol. I'm just pointing out that any such move will have consequences.

            • (Score: 2) by dmc on Wednesday February 26 2014, @02:47AM

              by dmc (188) on Wednesday February 26 2014, @02:47AM (#7077)

              An all HTTPS environment would render transparent proxies, and thus mandatory filtering of all network traffic in places like K-12 schools, impossible. Naturally, a school could simply deny access to devices they don't own, and solve that problem.

              I think you just contradicted yourself. You went from impossible, to naturally problem solved in the space of two sentences.

              • (Score: 1) by calmond on Wednesday February 26 2014, @02:02PM

                by calmond (1826) on Wednesday February 26 2014, @02:02PM (#7276)

                No, not really. I said it is impossible to do this from a centralized server environment for all devices. A compromise would be to not allow all devices, but only the ones under your administrative control. This is not a contradiction, but a compromise.

        • (Score: 5, Insightful) by frojack on Monday February 24 2014, @07:37PM

          by frojack (1554) on Monday February 24 2014, @07:37PM (#6094) Journal

          This does mean that proxy servers won't work, since they can't see inside the https encrypted packet. While caching isn't as big of a deal with modern equipment, the other common use of a proxy server, filtering, is a big deal.

          Thank you for making it perfectly obvious that this proposal was NEVER about bandwidth management and was ALWAYS about spying, filtering, and control.

          You have, perhaps unwittingly, performed a great service to society by making this clear.

          --
          No, you are mistaken. I've always had this sig.
          • (Score: 1) by SMI on Tuesday February 25 2014, @06:24PM

            by SMI (333) on Tuesday February 25 2014, @06:24PM (#6801)

            Precisely. One of the first things that came to my mind when reading TFS is how much more difficult it's going to be to explain encryption to people. Now we'll have to explain that some encryption is real and works, while other schemes (like this) are deliberately designed only to give a false sense of security to people who care somewhat about their privacy, and so are interested, but lack the technical background and understanding to see this for what it really is.

      • (Score: 2, Interesting) by lennier on Monday February 24 2014, @08:17PM

        by lennier (2199) on Monday February 24 2014, @08:17PM (#6130)

        I agree that caching and re-encrypting encrypted data seems dodgy. But I'd also say that not only are the days of insufficient bandwidth not gone (and they never will be gone: how many simultaneous streams of cat videos does the world need? Just one more!), but a world of pervasive caching (done at a correct lower protocol level, not at the application level) is the Star Trek future of networking. At least that's the idea behind content centric networking [parc.com], which seems to have some big names in TCP/IP behind it.

        --
        Delenda est Beta
      • (Score: 2, Interesting) by hankwang on Monday February 24 2014, @08:53PM

        by hankwang (100) on Monday February 24 2014, @08:53PM (#6164) Homepage

        The client and the server can and should decide which parts should be secure, and which parts can be insecure, which parts can be served from cache, and which parts must be sent again.

        I can imagine scenarios where the data itself is not really secret, but where one would like to ensure that it is not tampered with while in transit. As far as I know, such a mechanism exists neither in HTTP nor in the present proposal. For software downloads (e.g. rpm and deb files), there is a signing mechanism. But if I want to install Linux from a downloaded CD image, I am officially supposed to check the checksum against a value that is... published over HTTP. Chicken-and-egg problem...
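
        For illustration only (the file name and digest are placeholders), the verification step itself is the easy part; the chicken-and-egg problem is where the expected digest comes from:

        # Verify a downloaded image against a published SHA-256 value. The
        # expected digest has to arrive over a channel you actually trust
        # (e.g. a signed checksum file), not plain HTTP.
        import hashlib

        def sha256_of(path, chunk_size=1 << 20):
            h = hashlib.sha256()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(chunk_size), b""):
                    h.update(chunk)
            return h.hexdigest()

        expected = "..."  # copied from the (hopefully signed) SHA256SUMS file
        actual = sha256_of("downloaded-install-cd.iso")  # placeholder file name
        print("OK" if actual == expected else "MISMATCH - do not install")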

        • (Score: 2, Funny) by stderr on Tuesday February 25 2014, @12:12AM

          by stderr (11) on Tuesday February 25 2014, @12:12AM (#6299) Journal

          But if I want to install linux from a downloaded CD image, I would officially be supposed to check the checksum against the value that is... published over HTTP. Chicken-and-egg problem...

          If only there could be a signature file [debian.org] right next to the checksum file [debian.org], so you could check if someone tampered with the checksum file...

          Too bad that won't be possible any time soon...

          --
          alias sudo="echo make it yourself #" # ... and get off my lawn!
          • (Score: 1) by hankwang on Tuesday February 25 2014, @03:19AM

            by hankwang (100) on Tuesday February 25 2014, @03:19AM (#6362) Homepage

            "a signature file right next to the checksum file, so you could check if someone tampered with the checksum file..."

            And how do I know that it is the original signature file if I get it over HTTP? Plus it is a pain to deal with it manually.

      • (Score: 2) by TheLink on Wednesday February 26 2014, @03:22AM

        by TheLink (332) on Wednesday February 26 2014, @03:22AM (#7095) Journal

        On the other hand ISPs can run bittorrent caching servers that automatically cache popular torrents. The issue is getting sued by the **AA and other non-tech-related unpleasantness.

    • (Score: 3, Insightful) by RobotMonster on Monday February 24 2014, @06:53PM

      by RobotMonster (130) on Monday February 24 2014, @06:53PM (#6055) Journal

      End-to-end HTTPS breaks the Internet

      I don't think you know what the Internet is.

      Next time, please understand what words mean before writing panicky B.S. using them, or LEAVE IT FOR SOMEONE ELSE TO WRITE ABOUT.

      • (Score: 3, Interesting) by dmc on Monday February 24 2014, @07:32PM

        by dmc (188) on Monday February 24 2014, @07:32PM (#6091)

        This was clearly an Anonymous Coward acting either directly or (more likely) indirectly as a shill for authoritarians who would have ISPs block all end-to-end HTTPS1 once NSA-friendly HTTP2 is widely adopted. Of course, if enough people understood Network Neutrality, and we reinstated it, it would be illegal for ISPs to do such blocking.

        Note that what I said doesn't discount the usefulness of this new protocol _for some or even many people_. But having it proposed as "HTTP2" smells like an authoritarian way to make HTTPSv1 illegal (legal for ISPs to block) once this new thing is widely used.

        • (Score: 2) by frojack on Monday February 24 2014, @07:52PM

          by frojack (1554) on Monday February 24 2014, @07:52PM (#6108) Journal

          Why don't you discount the usefulness of this new protocol?
          Who precisely will find it to be useful?

          That you can excuse it so lightly, in light of what you have posted upthread, sounds like you are slowly coming around to the "Won't someone please think of the Children" argument, or that you don't understand how caching should (and does) work.

             

          --
          No, you are mistaken. I've always had this sig.
    • (Score: 3, Insightful) by sglane on Monday February 24 2014, @06:53PM

      by sglane (3133) on Monday February 24 2014, @06:53PM (#6056)

      I wouldn't call caching "broken", since CDNs work well even over SSL. CDNs are a better idea than allowing an ISP to mangle HTTP by injecting their own headers and modifying the contents. By reworking TLS to have a "Trusted Proxy", you're removing the core concept of Transport Layer Security, since you can't trust a proxy.

      • (Score: 2, Informative) by mechanicjay on Monday February 24 2014, @07:18PM

        by mechanicjay (7) <mechanicjayNO@SPAMsoylentnews.org> on Monday February 24 2014, @07:18PM (#6074) Homepage Journal

        I understand your point, but I'd be careful with such a blanket statement.

        When you connect via https to the infrastructure here:

        Https connections are handled by our load balancer, which terminates your SSL connection and *proxies* your traffic to the back-end web infrastructure in the clear. This is of course done via a private, non-routable network. In this case, the proxy is trusted -- but it's a proxy acting as an agent of the site you're trying to trust, so everything is on the up and up. As far as you the client are concerned, you're secure to the server's front door; after that, anything goes. This is a fairly common way to run anything larger than a single-server infrastructure.
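
        As a rough sketch of that pattern (an illustration, not this site's actual setup; hostnames, ports and certificate paths are placeholders):

        # A front-end terminates TLS and proxies requests in the clear to a
        # back-end on a private network, acting as an agent of the site itself.
        import http.client
        import http.server
        import ssl

        BACKEND = ("10.0.0.10", 8080)  # back-end web server on a private network

        class OffloadProxy(http.server.BaseHTTPRequestHandler):
            def do_GET(self):
                # Forward the request over plain HTTP; the client's TLS
                # session ended at this box.
                conn = http.client.HTTPConnection(*BACKEND)
                conn.request("GET", self.path, headers=dict(self.headers))
                resp = conn.getresponse()
                body = resp.read()
                self.send_response(resp.status)
                for name, value in resp.getheaders():
                    if name.lower() not in ("transfer-encoding", "connection"):
                        self.send_header(name, value)
                self.end_headers()
                self.wfile.write(body)

        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
        ctx.load_cert_chain("site.crt", "site.key")  # the site's own certificate
        server = http.server.HTTPServer(("0.0.0.0", 443), OffloadProxy)
        server.socket = ctx.wrap_socket(server.socket, server_side=True)
        server.serve_forever()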

        --
        My VMS box beat up your Windows box.
    • (Score: 5, Funny) by dyingtolive on Monday February 24 2014, @07:20PM

      by dyingtolive (952) on Monday February 24 2014, @07:20PM (#6076)

      Hey, SN has a shill!

      --
      Don't blame me, I voted for moose wang!
    • (Score: 3, Insightful) by Wootery on Monday February 24 2014, @07:21PM

      by Wootery (2341) on Monday February 24 2014, @07:21PM (#6078)

      Ridiculous. A good deal (probably most?) of the value of HTTPS is that it protects you from your ISP messing with the page contents or spying on you (whether on behalf of a government or for their own reasons).

      On top of being worthless, it adds complexity, in a user-facing way. User awareness of HTTPS is reasonably good, and there's no need for another scheme with another way of identifying itself, nor any need to burden users with learning what it does.

      This [netflix.com] new [arstechnica.com] caching solution for Netflix sounds worthwhile, though.

    • (Score: 3, Interesting) by gallondr00nk on Monday February 24 2014, @07:42PM

      by gallondr00nk (392) on Monday February 24 2014, @07:42PM (#6098)

      We've already got HTTP for unencrypted traffic, and HTTPS for encrypted traffic. What else do we really need? If it's too important to leave unencrypted, having it decoded midstream is too much of a risk. If it isn't, what's wrong with HTTP?

      If the NSA revelations have done anything, they've created a desire for encryption that won't let up anytime soon. Perhaps our ISPs are trustworthy enough to act as a proxy (hah), but we all know damn well the NSA isn't.

      There's a delicious irony in the fact that AT&T, the infamous host of Room 641A [wikipedia.org], is proposing standards changes because it feels encryption is hurting its profitability.

      • (Score: 0) by lennier on Monday February 24 2014, @08:33PM

        by lennier (2199) on Monday February 24 2014, @08:33PM (#6146)

        "We've already got HTTP for unencrypted traffic, and HTTPS for encrypted traffic. What else do we really need?"

        I'd say that what the Web needs, and has needed for a long time, is a protocol for transcluding independently encrypted sub-page units of data. That would be a happy mix between 'encrypt all' and 'encrypt nothing'.

        Your average modern social/messaging Web 'page', for example on a blog or comment forum (anything except a corporate brochure site), contains maybe a header, a bunch of rotating ads, and a whole collection of post or comment units. The thing about all these sub-units is that they mostly don't change after you've visited the page once, and there are often a *lot* of them -- hundreds to thousands to millions. So it seems pretty dumb for the Web architecture, either on the server or the proxy, to be recreating and failing to cache all these units that make up the majority of your page, when it could just include them independently. Then your page would be a very small list of identifiers of content sections pre-fetched or found elsewhere.

        It would reduce a huge amount of load on servers, and give small blogs an edge against huge outfits like Facebook that can afford ridiculous numbers of server farms and CDNs to make up for a simple oversight in the design of HTTP. It would also reduce the amount of Javascript needed and make tricks like AJAX less necessary, if the underlying HTTP protocol were aware of sub-page units. Finally, it would mean Web end users could pool and share bandwidth and avoid getting hit with broadband overage fees (most of the planet doesn't have endless monthly free Internet traffic like the USA's main urban centres do); mass caching could also make disruptive technologies like mesh routing useful.

        Of course, efficiency and security are at each other's throats, so there'd be a balance with all of this. But generating and encrypting a page as a unit _when the page is not actually the fundamental unit of the data being transferred_ but just a temporary display/UI mechanism seems just a bit, well, wrong to me.
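
        As a rough sketch of the idea (an illustration, not an existing protocol): treat each post or comment as an independently addressable unit, identified by the hash of its content, so any cache along the way can serve it without re-fetching the whole page.

        import hashlib

        cache = {}  # content-addressed store: digest -> bytes

        def put_unit(data: bytes) -> str:
            digest = hashlib.sha256(data).hexdigest()
            cache[digest] = data
            return digest

        def build_page(unit_digests):
            # A "page" is just a small manifest of unit identifiers; the
            # client fetches only the digests it doesn't already hold.
            return {"units": unit_digests}

        def render(page, fetch_missing):
            parts = []
            for digest in page["units"]:
                if digest not in cache:
                    cache[digest] = fetch_missing(digest)  # one round-trip per miss
                parts.append(cache[digest])
            return b"\n".join(parts)

        # Example: a header and two comments become three cacheable units.
        manifest = build_page([put_unit(b"<header>blog</header>"),
                               put_unit(b"<post id=1>first!</post>"),
                               put_unit(b"<post id=2>second</post>")])
        print(render(manifest, fetch_missing=lambda d: b""))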

        --
        Delenda est Beta
        • (Score: 2, Insightful) by mindriot on Monday February 24 2014, @09:14PM

          by mindriot (928) on Monday February 24 2014, @09:14PM (#6185)
          That idea is fine as long as you can ensure that an adversary can learn nothing about encrypted sub-page units from the unencrypted or known-plaintext sub-page units accompanying it. Otherwise you've just magnified the metadata problem...
          --
          soylent_uid=$(echo $slash_uid|cut -c1,3,5)
          • (Score: 1) by lennier on Monday February 24 2014, @09:53PM

            by lennier (2199) on Monday February 24 2014, @09:53PM (#6222)

            Yes, known plaintext would be a problem, as would be metadata; even if a sub-unit is encrypted, it's still got an identity so it's possible to know 'that' Post #1234 was transmitted to Endpoint #5678 even if not 'what' Post #1234 is. And I suspect every content-centric network would have that kind of issue.

            Although in a network with pervasive caching at all levels (like at the switch/router level, as CCN advocates recommend), there _should_ be some natural shielding, given that if anyone in your organisation requests Post #1234, your proxy would fetch it only once and cache it for a long time, so any further accesses you make to it wouldn't go beyond your organisational boundaries. And your hostile upstream ISP would only know that the request for access went to your organisation, not which endpoint was requesting access. It wouldn't be quite as good as onion routing, but it should be a lot better than current HTTPS IP logging.

            --
            Delenda est Beta
            • (Score: 1) by mindriot on Monday February 24 2014, @10:06PM

              by mindriot (928) on Monday February 24 2014, @10:06PM (#6235)

              I guess you're right in that metadata exploitation would be somewhat hindered by the anonymity afforded by a caching proxy (although that assumes that adversaries/certain agencies will not have access to your organization's proxy).

              The bigger problem I see is that there is not only the metadata problem to cope with, there is also the problem that only tech-savvy users would even be aware of its existence while everyone else could fall for an illusion of security -- "the important sub-units are secure, so I'm perfectly fine and I can do whatever I want".

              But it's quite possible that I'm overly worried about this.

              --
              soylent_uid=$(echo $slash_uid|cut -c1,3,5)
    • (Score: 4, Insightful) by WildWombat on Monday February 24 2014, @07:57PM

      by WildWombat (1428) on Monday February 24 2014, @07:57PM (#6111)

      --"End-to-end HTTPS breaks the Internet and has _always_ broken the internet by preventing caching."

      Bullshit. Not caching every single thing that crosses over the wires does not break the internet. Not everything should or needs to be cached.

      --"Lauren Weinstein is an old net pro and not the kind of person who would support proposals that are meant to hurt your security."

      And RSA isn't the kind of organization that would purposefully weaken their product at the behest of the NSA. Oh, wait, they did. And Weinstein is purposefully pushing a proposal that is obviously and undeniably going to drastically weaken net security, whatever his previous reputation. Anyone want to guess why?

      --"The internet is staggering under the load of unnecessary duplicated information and all of us pay the price of that with slower downloads."

      And more unadulterated bullshit. If you look at what the main loads on the internet are during peak hours, there are two major sources: Netflix and Youtube. That's something like 50% of the bandwidth use during peak hours. These can in fact be cached, and Netflix will provide computers to do just that if the ISP cooperates. And many other major bandwidth-intensive sites already use Akamai or another CDN. The rest of the traffic, small-scale text and a few PNGs, is rather trivial. Not caching the https session between me and my bank doesn't fucking bring the internet to its knees. We need more security on the net, not less, and especially not less for bullshit made-up reasons.

      So, in short, fuck off you NSA shill.

      Cheers,
      -WW

    • (Score: 3, Insightful) by hemocyanin on Monday February 24 2014, @08:14PM

      by hemocyanin (186) on Monday February 24 2014, @08:14PM (#6126) Journal

      Whatever. This is AT&T. You might as well just say "The NSA has proposed pseudo-HTTPS to make the internet work better." It would be just as honest as AT&T's proposal.

    • (Score: 1) by dude on Monday February 24 2014, @10:10PM

      by dude (3206) on Monday February 24 2014, @10:10PM (#6238)

      Shillin' for the man

    • (Score: 2, Insightful) by forsythe on Tuesday February 25 2014, @02:56AM

      by forsythe (831) on Tuesday February 25 2014, @02:56AM (#6351)

      The internet is staggering under the load of unnecessary duplicated information

      Actually, it's doing quite fine. Your downloads, however, are staggering under the load of unnecessary bandwidth caps.

  • (Score: 5, Insightful) by r00t on Monday February 24 2014, @06:14PM

    by r00t (1349) on Monday February 24 2014, @06:14PM (#6012)

    Seems to me this is simply a ploy for proprietary internet protocols which would allow ISPs to decrypt SSL, Tor or other VPN traffic in order to apply throttling or allow for easier snooping.

    • (Score: 2, Insightful) by jalopezp on Tuesday February 25 2014, @11:58AM

      by jalopezp (2996) on Tuesday February 25 2014, @11:58AM (#6554)

      ploy for proprietary internet protocols which would allow ISPs to decrypt SSL, Tor, or other VPN traffic

      I don't think so. https is only http traffic with an added SSL layer. If they wanted to decrypt SSL, they would actually have to break SSL. Instead, they are creating a second protocol where the middleman can decrypt the data. They are, in essence, sidestepping the problem of having to decrypt SSL traffic.

  • (Score: 5, Insightful) by frojack on Monday February 24 2014, @06:15PM

    by frojack (1554) on Monday February 24 2014, @06:15PM (#6015) Journal

    This ship has sailed.

    We are not going to trust AT&T or any ISP to decrypt our stuff any more. Fool us once, shame on you. Fool us twice, shame on us.

    ISPs and network providers: Your job is to build bandwidth with the obscene profits we have handed you over the years. Your job is NOT to find ways to prevent having to fetch a few more bits. Do your job. Build the networks. Carry the data.

    Turning everything over to the NSA is precisely why we want HTTPS everywhere. You proved you couldn't be trusted. Now STFU, lay the fiber, build the network, or get out of the way.

    --
    No, you are mistaken. I've always had this sig.
    • (Score: 5, Insightful) by Sir Garlon on Monday February 24 2014, @06:47PM

      by Sir Garlon (1264) on Monday February 24 2014, @06:47PM (#6047)

      Now STFU, lay the fiber, build the network, or get out of the way.

      Unfortunately there is no competition in the US broadband market, because local governments have signed exclusive deals with the big ISPs. So there is no incentive for Comcast or Verizon to give a damn what we want. Not too many people are going to cancel their Internet access just because Verizon is throttling Netflix. Pro-industry regulation got us into this mess, and I think only pro-consumer regulation can get us out.

      --
      [Sir Garlon] is the marvellest knight that is now living, for he destroyeth many good knights, for he goeth invisible.
      • (Score: 2, Insightful) by hash14 on Tuesday February 25 2014, @02:12AM

        by hash14 (1102) on Tuesday February 25 2014, @02:12AM (#6331)

        There is no such thing as pro-consumer anything in the US Congress. Once the Supreme Court legalized bribery, any hope of a government that's not thoroughly sworn to monetary interests became far beyond possibility. And it doesn't help that half of the American voting public thinks Jesus walked with dinosaurs.

        The local governments totally screwed themselves when they signed those exclusive agreements. What needs to happen is a few high-profile cases where the municipality pays for the infrastructure and leases it out to the service providers. A few cities are already doing this, in fact. Then hopefully it will gain steam and others will follow suit. Of course, this doesn't stop the Federal government from doing other favors for its members, but at least not all governments will be their slaves, and hopefully the population will drift to those locations which are better served, making it harder for ISPs to control people.

    • (Score: 3, Insightful) by Grishnakh on Monday February 24 2014, @08:34PM

      by Grishnakh (2831) on Monday February 24 2014, @08:34PM (#6147)

      ISPs and network providers: Your job is to build bandwidth with the obscene profits we have handed you over the years. Your job is NOT to find ways to prevent having to fetch a few more bits. Do your job. Build the networks. Carry the data.
      Turning everything over to the NSA is precisely why we want HTTPS everywhere. You proved you couldn't be trusted. Now STFU, lay the fiber, build the network, or get out of the way.

      "frojack" and other idiots who sympathize with him: We're going to keep the obscene profits you gave us, and we're going to pay your government representatives for even more laws which favor us and guarantee us more obscene profits, so our CEOs can buy giant yachts. We're going to do the absolute minimum with regards to building networks, because we don't give a shit if your Netflix streams are unwatchable because of excessive packet dropping, since you should be paying us handsomely to use our shitty video-on-demand services instead. On top of all that, we're going to give the NSA access to anything they want.

      Don't like it? Too bad, chump! What are you going to do about it, switch to a competing ISP? Bwahahahahaha! Now STFU and pay the exorbitant bill we send you for our shitty services.

      - ISPs and network providers

      • (Score: 1) by DECbot on Tuesday February 25 2014, @04:22AM

        by DECbot (832) on Tuesday February 25 2014, @04:22AM (#6386) Journal

        Ack! You found us here! We built a whole new site to get away from your money-grubbing hands. Is there no place sacred?

        --
        cats~$ sudo chown -R us /home/base
    • (Score: 0) by Aighearach on Monday February 24 2014, @09:18PM

      by Aighearach (2621) on Monday February 24 2014, @09:18PM (#6191)

      I agree it is not useful for trust. However, I do see a use. Lots of things get sent over HTTPS just so they are not visible to casual observers, even though nothing in them really needs to be secured. So a medium level of security, where the last mile is encrypted but regional caching is effective, might be a good idea.

      For example, I plug into an untrusted LAN, or connect to unsecured WIFI. I'd actually prefer to use HTTPS for everything in that scenario. But I really don't care if the ISP/NSA know what news articles I browsed; they (presumably) know that anyways, from the service provider data.

      Depending on how it is implemented (I didn't read the story), it might be useful on intranets, too.

  • (Score: 2, Interesting) by buswolley on Monday February 24 2014, @06:22PM

    by buswolley (848) on Monday February 24 2014, @06:22PM (#6023)

    That is all.

    --
    subicular junctures
  • (Score: 4, Insightful) by laserfusion on Monday February 24 2014, @06:32PM

    by laserfusion (1450) on Monday February 24 2014, @06:32PM (#6033)

    I guess the motivation for this is to break net neutrality. They can't sort encrypted data, say "google search" from "google mail", but this new scheme would allow them to do that. So they would be able to throttle those services separately.

    Most users already trust the cloud with their unencrypted data, they would probably go along with this too.

    • (Score: 3, Interesting) by VLM on Monday February 24 2014, @06:42PM

      by VLM (445) on Monday February 24 2014, @06:42PM (#6041)

      You can already split those by DNS.

      More likely for ad insertion. "So... google... we've paid a lot of money for these carrier grade ad insertion units, would be a shame if your advertisements were overwritten by ours... but for a modest payment direct to us, we could ensure your data is protected... we're just businessmen, making sure we get our share...".

      In addition to the blindingly obvious logging and sale of personal data. Why should only google get to sell the contents of your gmail?

    • (Score: 3, Interesting) by dbot on Monday February 24 2014, @06:57PM

      by dbot (1811) on Monday February 24 2014, @06:57PM (#6060) Journal

      Not to mention selling you more ads, and content injection [theglobeandmail.com].

  • (Score: 5, Insightful) by mwvdlee on Monday February 24 2014, @06:46PM

    by mwvdlee (169) on Monday February 24 2014, @06:46PM (#6045)

    I know Google caches HTTPS in Gmail, but that doesn't mean caching HTTPS is suddenly okay.

  • (Score: 3, Insightful) by Anonymous Coward on Monday February 24 2014, @07:01PM

    by Anonymous Coward on Monday February 24 2014, @07:01PM (#6064)

    Any comment from anyone who works in online banking? or payment processing?

    Does the world really need another layer of systems where malware can be installed?

    • (Score: 4, Insightful) by Grishnakh on Monday February 24 2014, @08:36PM

      by Grishnakh (2831) on Monday February 24 2014, @08:36PM (#6149)

      Does the world really need another layer of systems where malware can be installed?

      The answer is "yes".

      - signed, the NSA

  • (Score: 1) by neagix on Monday February 24 2014, @07:09PM

    by neagix (25) on Monday February 24 2014, @07:09PM (#6072)

    there was Internet.

    • (Score: 3, Interesting) by neagix on Monday February 24 2014, @07:16PM

      by neagix (25) on Monday February 24 2014, @07:16PM (#6073)

      To clarify: if carriers want hosts to help save them bandwidth, why don't they offer free CDN caches?

      The problem is not framed correctly, IMO.

  • (Score: 5, Insightful) by caseih on Monday February 24 2014, @07:25PM

    by caseih (2744) on Monday February 24 2014, @07:25PM (#6081)

    Wow, the summary the submitter posted here on Soylent, and the headline, were quite a bit clearer and more lucid than what appeared on Slashdot yesterday. Kudos. I hope this is the trend and norm on Soylent.

    The Slashdot headline was "Most Alarming: IETF Draft Proposes 'Trusted Proxy' In HTTP/2.0", which, while accurate, wasn't very informative to me.

  • (Score: 4, Informative) by mattyk on Monday February 24 2014, @08:55PM

    by mattyk (2632) on Monday February 24 2014, @08:55PM (#6166) Homepage

    If anyone else here takes part in the httpbis-wg, please step in and help clarify. This proposal doesn't affect https: URIs in any way. Resources served "securely" will continue to be served the same way they currently are. The "trusted proxy" proposal goes hand-in-hand with another proposal -- "opportunistic encryption" -- whereby cleartext http: URIs can be siphoned through TLS if both ends are happy to do so (e.g. using self-signed certificates, or a null cypher), without making any indication to the end user that their data is any more "secure" than it would be using a HTTP/1.1 browser/server (because it isn't). The "trusted proxy" proposal adds some extra utility to that proposal.

    IIRC part of the problem with HTTP/2 was cramming a binary data stream down lines that, in many cases, are hard-coded to expect HTTP/1.[01] ASCII text, without those hard-coded middleware devices barfing. Since HTTPS is an existing, functional binary data stream (in the eyes of those devices) there was a lot of talk about forcing HTTP/2 to travel as HTTPS/TLS.

    See more here: http://hillbrad.typepad.com/blog/2014/02/trusted-proxies-and-privacy-wolves.html [typepad.com]

    --
    _MattyK_
    • (Score: 3, Insightful) by mindriot on Monday February 24 2014, @09:22PM

      by mindriot (928) on Monday February 24 2014, @09:22PM (#6194)

      This makes a bit more sense. But then, IMHO interrupted encryption is no better than no encryption at all. "Trusted proxy" is just an oxymoron akin to "benevolent man-in-the-middle" -- I will, out of principle, not trust any third party to decrypt and re-encrypt traffic. In other words, why bother with opportunistic encryption at all when it can be interrupted in the middle? What good does it serve then? The interruption makes it null and void, so any CPU cycles spent on encrypting the data stream are nothing but a waste of heat.

      Could anyone point out a good use case where such "half-decent" encryption has any sort of advantage? To me encryption is all (end-to-end) or nothing (might as well send plaintext).

      --
      soylent_uid=$(echo $slash_uid|cut -c1,3,5)
      • (Score: 2, Interesting) by mattyk on Monday February 24 2014, @11:26PM

        by mattyk (2632) on Monday February 24 2014, @11:26PM (#6277) Homepage

        > Could anyone point out a good use case where such "half-decent"
        > encryption has any sort of advantage?

        My second paragraph: using TLS as a channel to tunnel through middleware devices that expect any and all http: traffic (including HTTP/2) to be readable HTTP/1.x-looking ASCII. Wrapping the traffic up in a TLS stream, even with a NULL cypher, will allow it to travel past those devices the way https: traffic already does.

        Incidentally, current discussion on the working group list seems to indicate that the "trusted proxy" proposal is about *advertising* proxies, and that the user still has final say in whether or not to allow the proxy to terminate/decrypt/cache.

        --
        _MattyK_
  • (Score: 2, Interesting) by Pooch on Monday February 24 2014, @09:51PM

    by Pooch (3199) on Monday February 24 2014, @09:51PM (#6219)
    I guess they're yearning for the days of yore and the WAP gap [blogspot.com]
  • (Score: 5, Informative) by shanec on Monday February 24 2014, @10:19PM

    by shanec (2928) on Monday February 24 2014, @10:19PM (#6243) Homepage

    I hate to say it, but this has been happening in the corporate world for many years. Several companies sell an appliance that decrypts HTTPS traffic, caches it (holds it for later reading), and re-encrypts it for the local system. They make this happen by installing a local certificate authority on the company's systems.

    Unfortunately, this type of man-in-the-middle attack is just one universally accepted certificate authority away from deploying to the general public. I would guess that this has already been deployed, "on selective targets," by large ISP's already.

    I wrote about it on /. a couple years ago: http://slashdot.org/comments.pl?sid=2920607&cid=40347389 [slashdot.org]
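
    As a rough illustration of how you can spot one of these appliances from the client side (the hostname is a placeholder): check which CA actually issued the certificate you are being shown. Behind an interception appliance, the issuer will be the local corporate CA rather than the site's real CA.

    import socket
    import ssl

    host = "www.example.com"
    ctx = ssl.create_default_context()

    with socket.create_connection((host, 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()

    issuer = dict(pair[0] for pair in cert["issuer"])
    print("Issued by:", issuer.get("organizationName"), "/", issuer.get("commonName"))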

    • (Score: 0) by Anonymous Coward on Wednesday February 26 2014, @01:48PM

      by Anonymous Coward on Wednesday February 26 2014, @01:48PM (#7273)

      I agree, this is nothing new in the US working world. My employer has been doing this for years without ever notifying employees. So every time they're told to go to their healthcare website to check on their health benefits while at work, all the private information sent to and from the healthcare websites is read by a third party 3,000 miles away, who does God knows what with it (gee, I wonder), then promises to wrap it back up and send it on. Horribly unethical, and absolutely routine here in the USA these days.

  • (Score: 1) by Jiro on Tuesday February 25 2014, @01:00AM

    by Jiro (3176) on Tuesday February 25 2014, @01:00AM (#6309)

    Is the way this article ends, with either the submitter or editor posing a random question somewhat related to the article's subject. On Slashdot, the record of such questions being insightful, or even relevant, is not that good, and they often just end up becoming instances of Betteridge's law.

    We really don't need to copy the stupid stuff from Slashdot.

    (I've found, for instance, that on Slashdot when my comments are set to nested, going to the second page often brings up exactly the same page as the first, and I have to go 3-4 pages ahead before I get comments I haven't seen before. I really hope this doesn't show up here too.)

    • (Score: 1) by photong on Tuesday February 25 2014, @04:27AM

      by photong (2219) on Tuesday February 25 2014, @04:27AM (#6387)

      Amen to this. Please leave such comments for the ... comments.

  • (Score: 1, Interesting) by Bruce Perens on Tuesday February 25 2014, @01:18AM

    by Bruce Perens (916) on Tuesday February 25 2014, @01:18AM (#6315) Homepage

    The IETF are on a jihad against plain-text web connections. The next version of HTTP doesn't allow them at all.

    Without plain-text connections, caching won't work. If you operate a web server and aren't a big company, caching is how systems all over the net help you deliver your content as well as a company like Google can, even though Google can afford thousands of hosts that are geographically close to all users. Caches are in those places where your servers aren't, and they help to reduce the net overhead for everyone. Caching helps you compete with the big guys.

    HTTPS really does break caching. We need to have some sort of alternative that makes it work again. It can be this protocol, or it can be something else, but this is the only viable proposal so far.

    It's opt-in for web server operators. It doesn't have to be used for what really needs to be concealed from internet providers. But I can tell you for sure that those icons on top of the page, they don't need to be concealed. They should be cached.

    It's kind of silly to be attempting to engineer more security for the same web where fully half of the population use social networks. If the government wants your information, they will go to the servers.

    Bruce

    • (Score: 3, Interesting) by maxwell demon on Tuesday February 25 2014, @07:32AM

      by maxwell demon (1608) on Tuesday February 25 2014, @07:32AM (#6458) Journal

      However, those non-encrypted parts should at least be digitally signed, so that you can be sure that the cached version really is the one the server sent, and not some malicious replacement. Of course the corresponding public key should be sent encrypted so you can be sure it has not been messed with.
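
      A rough sketch of that idea (an illustration, not part of any HTTP draft; requires the third-party "cryptography" package): serve an asset in the clear but publish a detached signature, so a cache can store the bytes while tampering remains detectable.

      from cryptography.exceptions import InvalidSignature
      from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

      # Origin side: sign the static asset once.
      private_key = Ed25519PrivateKey.generate()
      public_key = private_key.public_key()
      asset = b"<img src='logo.png'> ... static bytes ..."
      signature = private_key.sign(asset)

      # Client side: the asset (and signature) may arrive from any cache;
      # the public key must come over a channel the client already trusts.
      try:
          public_key.verify(signature, asset)
          print("asset intact")
      except InvalidSignature:
          print("asset was tampered with in transit")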

      --
      The Tao of math: The numbers you can count are not the real numbers.