
posted by Dopefish on Monday February 24 2014, @06:00PM   Printer-friendly
from the things-could-get-hairy dept.

mrbluze writes:

"A modified HTTP protocol is being proposed (the proposal is funded by AT&T) which would allow ISP's to decrypt and re-encrypt traffic as part of day to day functioning in order to save money on bandwidth through caching. The draft document states:

To distinguish between an HTTP2 connection meant to transport "https" URIs resources and an HTTP2 connection meant to transport "http" URIs resources, the draft proposes to 'register a new value in the Application-Layer Protocol Negotiation (ALPN) Protocol IDs registry specific to signal the usage of HTTP2 to transport "http" URIs resources: h2clr'.

The proposal has been criticized by Lauren Weinstein on the grounds that it gives end users a false sense of security, leading them to believe that their communications are actually secure. Could this also give an ISP an excuse to block or throttle HTTPS traffic?"
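
For context, ALPN is the TLS extension in which the client lists the protocol identifiers it is willing to speak and the server picks one; the draft's "h2clr" would simply be a new entry in that registry, signalling HTTP2 used to carry cleartext "http" URIs. A minimal Python sketch of how a client offers ALPN tokens (the host is a placeholder and no public server selects "h2clr" today, so this is purely illustrative):

    import socket
    import ssl

    context = ssl.create_default_context()
    # Offer the draft's proposed token first, then fall back to standard HTTP/2
    # and HTTP/1.1. A server implementing the proposal could select "h2clr".
    context.set_alpn_protocols(["h2clr", "h2", "http/1.1"])

    with socket.create_connection(("www.example.com", 443)) as raw_sock:
        with context.wrap_socket(raw_sock, server_hostname="www.example.com") as tls_sock:
            print("negotiated:", tls_sock.selected_alpn_protocol())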

 
  • (Score: 5, Insightful) by frojack on Monday February 24 2014, @06:32PM

    by frojack (1554) on Monday February 24 2014, @06:32PM (#6034) Journal

    End-to-end HTTPS breaks the Internet and has _always_ broken the internet by preventing caching.

    Sorry, but caching was never part of the Network design.

    It was an after-thought, bolted on to handle the fact that there was insufficient bandwidth, back in the days of dial-up modems. Those days are gone.

    The client and the server can and should decide which parts should be secure, and which parts can be insecure, which parts can be served from cache, and which parts must be sent again.

    The network should stay the hell out of that business. The road doesn't get to decide who rides in your car. The road is open or the road is closed, or the road is impaired. That's all it gets to tell us.

    If the page elements (logos, banners, images, etc.) haven't changed since the last time the client fetched them, all that gets sent back is a 304. Caching and conservation of bandwidth are built into the system where they should be: at the end-points.
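
    As a rough sketch of that endpoint mechanism in Python (the URL and ETag value are made up for illustration), the client revalidates its cached copy with a conditional request and the server answers 304 Not Modified when nothing has changed:

        import http.client

        conn = http.client.HTTPSConnection("www.example.com")
        # Revalidate a previously cached copy by sending the ETag stored last time.
        conn.request("GET", "/logo.png", headers={"If-None-Match": '"abc123"'})
        resp = conn.getresponse()
        # A 304 response carries no body: "your cached copy is still good."
        print(resp.status, resp.reason)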

    We don't need to fix what isn't broken. We don't need to let the network decide what load it will carry. This is utterly idiotic.

    --
    No, you are mistaken. I've always had this sig.
  • (Score: 3, Informative) by Sir Garlon on Monday February 24 2014, @06:40PM

    by Sir Garlon (1264) on Monday February 24 2014, @06:40PM (#6040)

    I totally agree with your firm defense of net neutrality, but I think you are mistaken about the days of insufficient bandwidth being "gone." It's pretty clear from the kerfuffle between Verizon and Netflix [arstechnica.com] that there is not enough capacity for today's video traffic. The bottleneck has moved from the endpoints to the internals of the network, but there will probably always be performance bottlenecks.

    --
    [Sir Garlon] is the marvellest knight that is now living, for he destroyeth many good knights, for he goeth invisible.
    • (Score: 2, Interesting) by sfm on Monday February 24 2014, @06:55PM

      by sfm (675) on Monday February 24 2014, @06:55PM (#6059)

      "It's pretty clear from the kerfuffle between Verizon and Netflix that there is not enough capacity for today's video traffic"

      Yes, but what is the requirement that Netflix outbound video be sent over HTTPS? Or are we just setting up for a time when all internet traffic is HTTPS?

      • (Score: 2) by Sir Garlon on Monday February 24 2014, @07:06PM

        by Sir Garlon (1264) on Monday February 24 2014, @07:06PM (#6065)

        I didn't mean to suggest that Netflix traffic in particular needs to be HTTPS, only that Netflix traffic demonstrates that bandwidth is still limited. This was in reply to GP saying "we don't need caching because the days of dial-up are over." Dial-up is gone, but network constraints are still real; that was my only point.

        --
        [Sir Garlon] is the marvellest knight that is now living, for he destroyeth many good knights, for he goeth invisible.
        • (Score: 3, Insightful) by Ezber Bozmak on Monday February 24 2014, @11:58PM

          by Ezber Bozmak (764) on Monday February 24 2014, @11:58PM (#6291)

          It's only limited because Verizon deliberately underprovisions. I don't think it is reasonable to consider willful mismanagement as evidence of a resource shortage.

    • (Score: 5, Insightful) by frojack on Monday February 24 2014, @07:23PM

      by frojack (1554) on Monday February 24 2014, @07:23PM (#6079) Journal

      Then the problem is Netflix, and not a few tiny gifs and logos that the ISP can avoid fetching. This will save exactly nothing.

      Netflix is best fixed by moving content to the ISPs network (which is exactly what they are doing), not by futzing around with the other traffic.

      --
      No, you are mistaken. I've always had this sig.
  • (Score: 2, Interesting) by calmond on Monday February 24 2014, @07:08PM

    by calmond (1826) on Monday February 24 2014, @07:08PM (#6069)

    In the new HTTP 2.0 proposal, all traffic is going to be https. That means that there is no choice about what will be http and https, since http will not be an option.

    This does mean that proxy servers won't work, since they can't see inside the https encrypted packet. While caching isn't as big of a deal with modern equipment, the other common use of a proxy server, filtering, is a big deal. A lot of places like K-12 schools are required by law to filter Internet traffic, which may not be possible with HTTP 2.0.
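
    To make that concrete: when an HTTPS request goes through a proxy, the only thing the proxy sees in the clear is a CONNECT line naming the host and port; everything after that is an opaque TLS tunnel between the browser and the origin, so there is nothing for the proxy to cache or filter beyond allowing or refusing the host. A minimal sketch (the proxy address is hypothetical):

        import http.client

        # Point the connection at the (hypothetical) caching/filtering proxy.
        conn = http.client.HTTPSConnection("proxy.example.internal", 3128)
        # The proxy receives "CONNECT www.example.com:443" in plaintext; it can
        # refuse the tunnel per host, but it cannot read or cache what flows
        # through it afterwards, because TLS is end to end with the origin.
        conn.set_tunnel("www.example.com", 443)
        conn.request("GET", "/")
        print(conn.getresponse().status)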

    As for the issues with trusting the likes of AT&T - yeah, right!

    • (Score: 5, Insightful) by dmc on Monday February 24 2014, @07:26PM

      by dmc (188) on Monday February 24 2014, @07:26PM (#6083)

      In the new HTTP 2.0 proposal, all traffic is going to be https. That means that there is no choice about what will be http and https, since http will not be an option.

      This does mean that proxy servers won't work, since they can't see inside the https encrypted packet. While caching isn't as big of a deal with modern equipment, the other common use of a proxy server, filtering, is a big deal. A lot of places like K-12 schools are required by law to filter Internet traffic, which may not be possible with HTTP 2.0.

      Wrong. Filtering for K-12 schools will still be possible without this; it will just need to be done on the client system instead of on a centralized proxy (rough sketch below). If that is such an untenable situation (which it isn't), then the HTTP2 proposers need to rewrite their protocol.

      This is a tremendously important issue for society. It involves something that could be used to put the equivalent of China's authoritarian (attempted) control of the internet for its citizens around the necks of *everyone*. The thing that should defeat it is Network Neutrality, and making it illegal for ISPs to block all non-HTTP2 (this proposal) traffic. But since we no longer have NN, we need to fight back against these sorts of things. With NN, I'd say fine: call it HTTP2, but let the meritocracy label and understand it for what it is: false security. Then everyone who wants real security will stick with HTTP1+https (+improvements in CA infra over what we have today).

      The nefariousness I smell coming off of this is that internet users' freedom to ignore such falsely-secure protocols will be taken away from them by ISPs blocking every protocol that doesn't have an NSA backdoor (like this one). Any standards body that blesses this by calling it HTTP2 will have forever lost my trust. Call it something else, make it optional, and fine: the more options the merrier. But you can take my end-to-end, actually-secure encryption out of my cold dead hands. I like my caching and filtering on my endpoint (or an interim proxy that _I control_), thank you very much.
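
      A toy sketch of that kind of client-side filtering: the policy check runs on the endpoint before any connection is opened, so end-to-end encryption never gets in its way (the blocklist and domain names are invented for the example):

          from urllib.parse import urlsplit
          from urllib.request import urlopen

          # Hypothetical local policy; in practice this would come from a config file.
          BLOCKED_DOMAINS = {"ads.example.net", "tracker.example.org"}

          def fetch_if_allowed(url):
              host = urlsplit(url).hostname or ""
              # Block the listed domains and any of their subdomains.
              if any(host == d or host.endswith("." + d) for d in BLOCKED_DOMAINS):
                  raise PermissionError("blocked by local policy: " + host)
              return urlopen(url).read()

          print(len(fetch_if_allowed("https://www.example.com/")))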

      • (Score: 1) by calmond on Tuesday February 25 2014, @07:17PM

        by calmond (1826) on Tuesday February 25 2014, @07:17PM (#6851)

        I guess I should clarify a bit what I had in mind. Certainly client utilities like Browse Control and others can work on the client in an all HTTPS environment. I've set up transparent proxies in the past though to catch all client machines (tablets, smart phones, etc.), including those that may not have a client application installed. An all HTTPS environment would render transparent proxies, and thus mandatory filtering of all network traffic in places like K-12 schools, impossible. Naturally, a school could simply deny access to devices they don't own, and solve that problem.

        Having said all that, please don't misunderstand me: I am completely in favor of an all-HTTPS protocol. I'm just pointing out that any such move will have consequences.

        • (Score: 2) by dmc on Wednesday February 26 2014, @02:47AM

          by dmc (188) on Wednesday February 26 2014, @02:47AM (#7077)

          An all HTTPS environment would render transparent proxies, and thus mandatory filtering of all network traffic in places like K-12 schools, impossible. Naturally, a school could simply deny access to devices they don't own, and solve that problem.

          I think you just contradicted yourself. You went from impossible, to naturally problem solved in the space of two sentences.

          • (Score: 1) by calmond on Wednesday February 26 2014, @02:02PM

            by calmond (1826) on Wednesday February 26 2014, @02:02PM (#7276)

            No, not really. I said it is impossible to do this from a centralized server environment for all devices. A compromise would be to not allow all devices, but only the ones under your administrative control. This is not a contradiction, but a compromise.

    • (Score: 5, Insightful) by frojack on Monday February 24 2014, @07:37PM

      by frojack (1554) on Monday February 24 2014, @07:37PM (#6094) Journal

      This does mean that proxy servers won't work, since they can't see inside the https encrypted packet. While caching isn't as big of a deal with modern equipment, the other common use of a proxy server, filtering, is a big deal.

      Thank you for making it perfectly obvious that this proposal was NEVER about bandwidth management and was ALWAYS about spying, filtering, and control.

      You have, perhaps unwittingly, performed a great service to society by making this clear.

      --
      No, you are mistaken. I've always had this sig.
      • (Score: 1) by SMI on Tuesday February 25 2014, @06:24PM

        by SMI (333) on Tuesday February 25 2014, @06:24PM (#6801)

        Precisely. One of the first things that came to my mind when reading TFS is how much harder it's going to be to explain encryption to people. Now we'll have to explain that some encryption is real and works, while other schemes (like this one) are deliberately designed only to give a false sense of security to people who care somewhat about their privacy and are interested, but lack the technical background to see this for what it really is.

  • (Score: 2, Interesting) by lennier on Monday February 24 2014, @08:17PM

    by lennier (2199) on Monday February 24 2014, @08:17PM (#6130)

    I agree that caching and re-encrypting encrypted data seems dodgy. But I'd also say that the days of insufficient bandwidth are not gone and never will be (how many simultaneous streams of cat videos does the world need? Just one more!), and that a world of pervasive caching (done at the correct lower protocol level, not at the application level) is the Star Trek future of networking. At least that's the idea behind content centric networking [parc.com], which seems to have some big names in TCP/IP behind it.

    --
    Delenda est Beta
  • (Score: 2, Interesting) by hankwang on Monday February 24 2014, @08:53PM

    by hankwang (100) on Monday February 24 2014, @08:53PM (#6164) Homepage

    The client and the server can and should decide which parts should be secure, and which parts can be insecure, which parts can be served from cache, and which parts must be sent again.

    I can imagine scenarios where the data itself is not really secret, but where one would like to ensure that it is not tampered with in transit. As far as I know, such a mechanism exists neither in HTTP nor in the present proposal. For software downloads (e.g. rpm and deb files), there is a signing mechanism. But if I want to install Linux from a downloaded CD image, I am supposed to check the checksum against a value that is... published over HTTP. Chicken-and-egg problem...
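
    To be clear, the checksum arithmetic itself is trivial to script; the problem is purely where the trusted reference value comes from. A sketch of the verification step (the file name and expected digest are placeholders):

        import hashlib

        # Placeholder: in reality this is the value published alongside the image.
        EXPECTED_SHA256 = "0" * 64

        def sha256_of(path, chunk_size=1 << 20):
            digest = hashlib.sha256()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(chunk_size), b""):
                    digest.update(chunk)
            return digest.hexdigest()

        # Hypothetical local file name.
        print("OK" if sha256_of("debian-netinst.iso") == EXPECTED_SHA256 else "MISMATCH")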

    • (Score: 2, Funny) by stderr on Tuesday February 25 2014, @12:12AM

      by stderr (11) on Tuesday February 25 2014, @12:12AM (#6299) Journal

      But if I want to install linux from a downloaded CD image, I would officially be supposed to check the checksum against the value that is... published over HTTP. Chicken-and-egg problem...

      If only there could be a signature file [debian.org] right next to the checksum file [debian.org], so you could check if someone tampered with the checksum file...

      Too bad that won't be possible any time soon...

      --
      alias sudo="echo make it yourself #" # ... and get off my lawn!
      • (Score: 1) by hankwang on Tuesday February 25 2014, @03:19AM

        by hankwang (100) on Tuesday February 25 2014, @03:19AM (#6362) Homepage

        "a signature file right next to the checksum file, so you could check if someone tampered with the checksum file..."

        And how do I know that it is the original signature file if I get it over HTTP? Plus it is a pain to deal with manually.
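
        Scripted, at least the manual pain goes away; whether to trust the result still comes down to whether the signing key in the local keyring is the right one. A sketch, assuming the Debian-style SHA512SUMS / SHA512SUMS.sign file names from the links above:

            import subprocess

            # gpg exits non-zero when the detached signature over the checksum file
            # does not verify against a key in the local keyring; check=True raises then.
            subprocess.run(["gpg", "--verify", "SHA512SUMS.sign", "SHA512SUMS"], check=True)

            # With the checksum file authenticated, compare the image's digest against
            # the matching line in SHA512SUMS (e.g. with hashlib, as sketched above).
            print("signature over the checksum file verified")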

  • (Score: 2) by TheLink on Wednesday February 26 2014, @03:22AM

    by TheLink (332) on Wednesday February 26 2014, @03:22AM (#7095) Journal

    On the other hand, ISPs can run BitTorrent caching servers that automatically cache popular torrents. The issue is getting sued by the **AA and other non-tech-related unpleasantness.