

posted by Dopefish on Monday February 24 2014, @06:00PM
from the things-could-get-hairy dept.

mrbluze writes:

"A modified HTTP protocol is being proposed (the proposal is funded by AT&T) which would allow ISP's to decrypt and re-encrypt traffic as part of day to day functioning in order to save money on bandwidth through caching. The draft document states:

To distinguish between an HTTP2 connection meant to transport "https" URIs resources and an HTTP2 connection meant to transport "http" URIs resources, the draft proposes to 'register a new value in the Application Layer Protocol negotiation (ALPN) Protocol IDs registry specific to signal the usage of HTTP2 to transport "http" URIs resources: h2clr.'

The proposal is being criticized by Lauren Weinstein on the grounds that it gives end users a false sense of security, leading them to believe that their communications are actually secure. Can this give an ISP an excuse to block or throttle HTTPS traffic?"
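For the curious, here is a minimal sketch (in Python, against a hypothetical host) of where the proposed signal would live: the client advertises the draft's "h2clr" token through ALPN alongside the registered "h2" and "http/1.1" tokens. Note that "h2clr" exists only in the draft; there is no registry entry or server support for it today.

import socket
import ssl

# Sketch only: "h2clr" is the token the draft proposes to register in the
# ALPN Protocol IDs registry to mark an HTTP/2 connection carrying "http"
# (ISP-cacheable) URIs rather than end-to-end "https" URIs.
ctx = ssl.create_default_context()
ctx.set_alpn_protocols(["h2clr", "h2", "http/1.1"])

host = "example.com"  # hypothetical host, for illustration only
with socket.create_connection((host, 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname=host) as tls:
        # A draft-aware server could select "h2clr" here; real servers today
        # will simply fall back to "h2" or "http/1.1".
        print("negotiated:", tls.selected_alpn_protocol())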

 
This discussion has been archived. No new comments can be posted.
  • (Score: 2, Insightful) by mindriot (928) on Monday February 24 2014, @09:14PM (#6185)

    That idea is fine as long as you can ensure that an adversary can learn nothing about encrypted sub-page units from the unencrypted or known-plaintext sub-page units accompanying it. Otherwise you've just magnified the metadata problem...
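    To make that concrete, here is a rough sketch (made-up page data, in Python) of how an observer who sees only the cleartext, cacheable sub-page units could still work out which page was fetched, and therefore which encrypted sub-units travelled with it:

    # Rough sketch with made-up data: the set of cleartext sub-resources is
    # often unique per page, so seeing exactly which cacheable units were
    # fetched identifies the page -- and hence the encrypted sub-units that
    # accompanied it.
    page_catalog = {
        "public landing page": {"/static/site.css", "/static/logo.png"},
        "account dashboard":   {"/static/site.css", "/static/logo.png", "/static/charts.js"},
    }

    observed_cleartext = {"/static/site.css", "/static/logo.png", "/static/charts.js"}

    likely_pages = [page for page, units in page_catalog.items()
                    if units == observed_cleartext]
    print(likely_pages)  # -> ['account dashboard']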
    --
    soylent_uid=$(echo $slash_uid|cut -c1,3,5)
  • (Score: 1) by lennier (2199) on Monday February 24 2014, @09:53PM (#6222)

    Yes, known plaintext would be a problem, as would metadata; even if a sub-unit is encrypted, it still has an identity, so it's possible to know 'that' Post #1234 was transmitted to Endpoint #5678 even if not 'what' Post #1234 is. And I suspect every content-centric network would have that kind of issue.

    Although in a network with pervasive caching at all levels (like at the switch/router level, as CCN advocates recommend), there _should_ be some natural shielding: if anyone in your organisation requests Post #1234, your proxy fetches it only once and caches it for a long time, so any further accesses you make to it wouldn't go beyond your organisational boundary. And your hostile upstream ISP would only know that the request went to your organisation, not which endpoint was requesting access. It wouldn't be quite as good as onion routing, but it should be a lot better than current HTTPS IP logging.
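    A toy model of that shielding effect (made-up names, in Python): the organisation's proxy asks upstream for each resource at most once, so a hostile upstream ISP learns that the organisation fetched Post #1234, but never which endpoint inside it asked:

    # Toy model: the org-level cache collapses per-endpoint requests, so the
    # upstream log records only the resource, never the endpoint identity.
    cache = {}
    upstream_log = []   # what the hostile upstream ISP can observe

    def org_proxy_fetch(endpoint, resource):
        if resource not in cache:
            upstream_log.append(resource)          # no endpoint identity leaks
            cache[resource] = "<contents of %s>" % resource
        return cache[resource]

    org_proxy_fetch("endpoint-5678", "post-1234")
    org_proxy_fetch("endpoint-9999", "post-1234")  # served from the local cache
    print(upstream_log)                            # -> ['post-1234']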

    --
    Delenda est Beta
    • (Score: 1) by mindriot (928) on Monday February 24 2014, @10:06PM (#6235)

      I guess you're right in that metadata exploitation would be somewhat hindered by the anonymity afforded by a caching proxy (although that assumes that adversaries/certain agencies will not have access to your organization's proxy).

      The bigger problem I see is that it's not only the metadata problem we have to cope with: only tech-savvy users would even be aware that it exists, while everyone else could fall for an illusion of security -- "the important sub-units are secure, so I'm perfectly fine and I can do whatever I want".

      But it's quite possible that I'm overly worried about this.

      --
      soylent_uid=$(echo $slash_uid|cut -c1,3,5)