
posted by Dopefish on Monday February 24 2014, @06:00PM   Printer-friendly
from the things-could-get-hairy dept.

mrbluze writes:

"A modified HTTP protocol is being proposed (the proposal is funded by AT&T) which would allow ISP's to decrypt and re-encrypt traffic as part of day to day functioning in order to save money on bandwidth through caching. The draft document states:

To distinguish between an HTTP2 connection meant to transport "https" URIs resources and an HTTP2 connection meant to transport "http" URIs resources, the draft proposes to 'register a new value in the Application-Layer Protocol Negotiation (ALPN) Protocol IDs registry specific to signal the usage of HTTP2 to transport "http" URIs resources: h2clr.'

The proposal is being criticized by Lauren Weinstein on the grounds that it gives end users a false sense of security, since they might believe their communications are actually secure. Could this give an ISP an excuse to block or throttle HTTPS traffic?"
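
For context, ALPN is the TLS extension in which the proposed "h2clr" identifier would be advertised during the handshake. Below is a minimal, purely illustrative Python sketch of a client offering that token alongside the standard "h2"; the hostname and handshake handling are assumptions made for the example, not anything taken from the draft.

    import socket
    import ssl

    # Offer the draft's proposed "h2clr" ALPN identifier (HTTP/2 carrying
    # "http"-scheme URIs, i.e. a hop an intermediary may decrypt and cache)
    # alongside the standard tokens. The hostname is a placeholder.
    ctx = ssl.create_default_context()
    ctx.set_alpn_protocols(["h2clr", "h2", "http/1.1"])

    with socket.create_connection(("www.example.com", 443)) as raw_sock:
        with ctx.wrap_socket(raw_sock, server_hostname="www.example.com") as tls:
            chosen = tls.selected_alpn_protocol()
            if chosen == "h2clr":
                # The peer (e.g. a "trusted proxy") agreed to the cleartext-scheme
                # variant; under the proposal this hop could be cached by the ISP.
                print("negotiated h2clr")
            else:
                print("negotiated", chosen)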

 
  • (Score: 3, Interesting) by gallondr00nk (392) on Monday February 24 2014, @07:42PM (#6098)

    We've already got HTTP for unencrypted traffic and HTTPS for encrypted traffic. What else do we really need? If it's too important to leave unencrypted, having it decrypted midstream is too much of a risk. If it isn't, what's wrong with plain HTTP?

    If the NSA revelations have done anything, they've sparked a desire for encryption that won't let up anytime soon. Perhaps our ISPs are trustworthy enough to act as a proxy (hah), but we all know damn well the NSA aren't.

    There's a delicious irony in the fact that AT&T, the infamous host of Room 641A [wikipedia.org], is proposing standards changes because it feels encryption is hurting its profitability.

  • (Score: 0) by lennier (2199) on Monday February 24 2014, @08:33PM (#6146)

    "We've already got HTTP for unencrypted traffic, and HTTPS for encrypted traffic. What else do we really need?"

    I'd say that what the Web needs, and has needed for a long time, is a protocol for transcluding independently encrypted sub-page units of data. That would be a happy mix between 'encrypt all' and 'encrypt nothing'.

    Your average modern social/messaging Web 'page', for example on a blog or comment forum, anything except a corporate brochure site, contains maybe a header, a bunch of rotating ads, and a whole collection of post or comment units. The thing about all these sub-units is that they mostly don't change after you've visited the page once, and there are often *lots* of them. Like, hundreds to thousands to millions. So it seems pretty dumb for the Web architecture, on either the server or the proxy, to keep regenerating and failing to cache all these units that make up the majority of your page, when they could just be transcluded independently. Then your page would be a very small list of identifiers of content sections pre-fetched or found elsewhere.

    That would take a huge amount of load off servers, and give small blogs an edge against huge outfits like Facebook that can afford ridiculous amounts of server farms and CDNs to make up for a simple oversight in the design of HTTP. It would also reduce the amount of Javascript needed and make tricks like AJAX less necessary, if the underlying HTTP protocol were aware of sub-page units. Finally, it would mean Web end users could pool and share bandwidth and avoid getting hit with broadband overage fees (most of the planet doesn't have endless monthly free Internet traffic like the main urban centres of the USA do); mass caching could also make disruptive technologies like mesh routing useful.

    Of course, efficiency and security are at each other's throats, so there'd be a balance with all of this. But generating and encrypting a page as a unit _when the page is not actually the fundamental unit of the data being transferred_ but just a temporary display/UI mechanism seems just a bit, well, wrong to me.
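
    As a rough, hypothetical sketch of what such independently encrypted sub-page units might look like: the names publish_unit and fetch_unit are invented for this example, and the third-party 'cryptography' package's Fernet merely stands in for whatever cipher a real design would pick.

    # Hypothetical illustration only: each post/comment is encrypted on its
    # own, addressed by the hash of its ciphertext, and cacheable anywhere
    # by that address. Requires the third-party 'cryptography' package.
    import hashlib
    from cryptography.fernet import Fernet

    cache = {}  # stands in for a shared proxy/edge cache keyed by content ID

    def publish_unit(plaintext: bytes, key: bytes) -> str:
        """Encrypt one sub-page unit and store it under its content ID."""
        ciphertext = Fernet(key).encrypt(plaintext)
        unit_id = hashlib.sha256(ciphertext).hexdigest()
        cache[unit_id] = ciphertext
        return unit_id

    def fetch_unit(unit_id: str, key: bytes) -> bytes:
        """Any cache holding the ciphertext can serve it; only key holders can read it."""
        return Fernet(key).decrypt(cache[unit_id])

    # The 'page' then reduces to a small manifest of unit identifiers.
    key = Fernet.generate_key()
    page_manifest = [
        publish_unit(b"<header>", key),
        publish_unit(b"comment #1: first post!", key),
    ]
    print([fetch_unit(uid, key) for uid in page_manifest])

    (Fernet adds a random IV, so identical plaintexts get different identifiers here; cross-publisher deduplication would need something more deliberate, which is exactly where the known-plaintext worries raised below come in.)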

    --
    Delenda est Beta
    • (Score: 2, Insightful) by mindriot (928) on Monday February 24 2014, @09:14PM (#6185)

      That idea is fine as long as you can ensure that an adversary can learn nothing about the encrypted sub-page units from the unencrypted or known-plaintext sub-page units accompanying them. Otherwise you've just magnified the metadata problem...
      --
      soylent_uid=$(echo $slash_uid|cut -c1,3,5)
      • (Score: 1) by lennier (2199) on Monday February 24 2014, @09:53PM (#6222)

        Yes, known plaintext would be a problem, as would metadata; even if a sub-unit is encrypted, it still has an identity, so it's possible to know 'that' Post #1234 was transmitted to Endpoint #5678 even if not 'what' Post #1234 is. And I suspect every content-centric network would have that kind of issue.

        Although in a network with pervasive caching at all levels (like at the switch/router level, as CCN advocates recommend), there _should_ be some natural shielding: if anyone in your organisation requests Post #1234, your proxy fetches it only once and caches it for a long time, so any further accesses you make to it wouldn't go beyond your organisational boundary. Your hostile upstream ISP would only know that the content went to your organisation, not which endpoint was requesting it. It wouldn't be quite as good as onion routing, but it should be a lot better than current HTTPS IP logging.
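
        A toy sketch of that shielding effect (entirely hypothetical names; the point is just that the upstream side sees one fetch per unit ID, no matter how many internal endpoints ask for it):

        # Toy model: the upstream ISP observes only what crosses the boundary.
        upstream_log = []   # what a hostile upstream could log
        local_cache = {}    # the organisation's shared proxy cache

        def upstream_fetch(unit_id: str) -> bytes:
            upstream_log.append(unit_id)  # visible outside the organisation
            return f"<ciphertext of {unit_id}>".encode()

        def proxy_get(unit_id: str, requester: str) -> bytes:
            # 'requester' never leaves the organisation; only the proxy sees it.
            if unit_id not in local_cache:
                local_cache[unit_id] = upstream_fetch(unit_id)
            return local_cache[unit_id]

        for endpoint in ("alice", "bob", "carol"):
            proxy_get("post-1234", endpoint)

        print(upstream_log)  # ['post-1234'] -- one external request for three readers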

        --
        Delenda est Beta
        • (Score: 1) by mindriot (928) on Monday February 24 2014, @10:06PM (#6235)

          I guess you're right in that metadata exploitation would be somewhat hindered by the anonymity afforded by a caching proxy (although that assumes that adversaries/certain agencies will not have access to your organization's proxy).

          The bigger problem I see is that it's not just the metadata problem itself we'd have to cope with; there's also the fact that only tech-savvy users would even be aware it exists, while everyone else could fall for an illusion of security -- "the important sub-units are secure, so I'm perfectly fine and I can do whatever I want".

          But it's quite possible that I'm overly worried about this.

          --
          soylent_uid=$(echo $slash_uid|cut -c1,3,5)