posted by CoolHand on Tuesday May 05 2015, @11:36PM
from the genius-or-lunacy dept.

We've previously covered Mozilla considering a push to deprecate HTTP in favor of HTTPS. Well, it looks like the time is here. This HTTPS-encrypted blog post by Mozilla starts with:

Today we are announcing our intent to phase out non-secure HTTP.

There's pretty broad agreement that HTTPS is the way forward for the web. In recent months, there have been statements from IETF, IAB (even the other IAB), W3C, and the US Government calling for universal use of encryption by Internet applications, which in the case of the web means HTTPS.

[...] There are two broad elements of this plan:

  • Setting a date after which all new features will be available only to secure websites
  • Gradually phasing out access to browser features for non-secure websites, especially
    features that pose risks to users' security and privacy.

[...] For example, one definition of "new" could be "features that cannot be polyfilled". That would allow things like CSS and other rendering features to still be used by insecure websites, since the page can draw effects on its own (e.g., using <canvas>). But it would still restrict qualitatively new features, such as access to new hardware capabilities.


This unencrypted blog post raises good points against the move:

In conclusion; no, TLS certificates are not really free. Introducing forced TLS would create an imbalance between those who have the money and means to purchase a certificate (or potentially many certificates), and those who don't - all the while promoting a cryptosystem as being 'secure' when there are known problems with it. This is directly counter to an open web.

There are plenty of problems with TLS that need to be fixed before pressuring people to use it. Let's start with that first.

Other links: Hacker News thread on the Mozilla post, Hacker News thread for the rebuttal. The comment threads are interesting. Here's one excerpt from the second link:

There's one solution that the author didn't cover: Start treating self-signed certs as unencrypted. Then, deprecate http support over a multi-year phase out. That way, website owners who want to keep their status quo, can just add a self signed cert and their users will be none the wiser.
For https there are two major objectives. 1) Prevent MITM attacks. 2) Prevent snooping from passive monitoring. Self-signed certs can prevent #2, which the IETF has adopted as a Best Current Practice. I'm much more in favor of trying to at least do one of the two objectives of https, rather than refusing to do anything until we are able to do both objectives.
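
For a concrete sense of what "treating self-signed certs as unencrypted" would mean on the wire, here is a minimal Python sketch of a client doing opportunistic encryption: the connection is still encrypted against passive snooping, but certificate verification is switched off, so there is no protection against an active MITM. The host name is only a placeholder.

    import socket
    import ssl

    # Opportunistic encryption: accept any certificate (including self-signed),
    # which defeats passive snooping but NOT an active man-in-the-middle.
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE

    with socket.create_connection(("example.org", 443)) as raw:   # placeholder host
        with ctx.wrap_socket(raw, server_hostname="example.org") as tls:
            tls.sendall(b"GET / HTTP/1.1\r\nHost: example.org\r\nConnection: close\r\n\r\n")
            print(tls.recv(4096))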

One other major argument against ridding ourselves of HTTP is pure performance: encryption is expensive, so why burn that power encrypting things that have no need to be encrypted?

Enforcing HTTPS is something that has provoked discussion here in the past. Go crazy!

 
  • (Score: 4, Informative) by frojack on Wednesday May 06 2015, @01:55AM

    by frojack (1554) on Wednesday May 06 2015, @01:55AM (#179336) Journal

    Personally, I doubt there is THAT Much difference imposed by encryption, either in power consumption or time.

    I could be wrong, but the last time I checked into this the usual practice was to use some relatively robust block cipher (slow) to exchange keys and the session then switched to a fast stream cipher using those keys. (Some block ciphers can be switched into stream mode, but they still suffer slowness).

    So each page does one key exchange with an expensive cipher, then switches to a cheap cipher for the data.

    When I was last implementing such an exchange in my day job, we could not measure any difference in speed or CPU utilization, even when a wimpy machine was serving a couple hundred remote workstations.

    That was some time ago, and things may have changed in the meantime.

    Now for your signing idea: I wonder if verifying the signature would take just as much horsepower? Were you presuming a hash was involved somewhere?

    --
    No, you are mistaken. I've always had this sig.
  • (Score: 3, Interesting) by jmorris on Wednesday May 06 2015, @02:44AM

    by jmorris (4844) on Wednesday May 06 2015, @02:44AM (#179350)

    You can't be serious. Static content can be transmitted zero-copy, in most cases without even hitting the disk or going through the CPU cache: just send the header and point the socket at the memory holding the content, or at an open fd. Adding my idea for signing just pulls in the signature, which would probably also be cached in RAM if the content still was, adds it to the headers, and nothing else changes.

    Any encryption at all means a unique response stream must be calculated for each and every hit. Unless you have really good crypto acceleration hardware, you need to pull the page into the CPU cache, burn a lot of cycles, write out a fresh encrypted version, hand that off, and then free the memory after transmission, so you can't just point the socket at the buffer and let it finish itself off. Now if you are talking about dynamically generated content then no, the extra encryption step probably isn't too heavy a hit.
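
    (For illustration, a rough Python sketch of the contrast being described, assuming an AES-GCM session key and an already-connected socket; a real TLS stack frames records differently, but the data path is the point.)

        import os
        from cryptography.hazmat.primitives.ciphers.aead import AESGCM

        def send_plain(sock, path):
            # Zero-copy path: the kernel pushes file pages straight to the socket;
            # the payload never passes through userspace buffers.
            with open(path, "rb") as f:
                sock.sendfile(f)

        def send_encrypted(sock, path, session_key):
            # Encrypted path: read the file into memory, burn CPU producing a
            # ciphertext unique to this connection, then send that copy instead.
            nonce = os.urandom(12)
            with open(path, "rb") as f:
                ciphertext = AESGCM(session_key).encrypt(nonce, f.read(), None)
            sock.sendall(nonce + ciphertext)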

    My idea for signing would require calculating a hash on receive to verify it, but that could even be made optional with a switch in the browser config. Don't know about you, but if battery life is at a premium I think I can take a chance that the banner ads might be tampered with, until such time as ISPs actually get caught pulling stupid stunts like that... and perhaps not even then. I ignore 'em anyway 99.9% of the time, just like everyone else. Give me a switch to leave http traffic from 3rd parties (not the site in the URL) unvalidated and I'd flip it, especially on a mobile device.
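
    (Again only a sketch, assuming the hypothetical scheme ships an Ed25519 signature alongside the plaintext body, e.g. in a header; verification is one cheap check per response and could sit behind exactly the browser switch described above.)

        from cryptography.exceptions import InvalidSignature
        from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

        # Server side: sign the still-plaintext body once; the signature can be
        # cached next to the content and sent in a response header.
        key = Ed25519PrivateKey.generate()
        body = b"<html>static page served over plain http</html>"
        signature = key.sign(body)

        # Client side: verify on receive (optional if the user flips the switch).
        try:
            key.public_key().verify(signature, body)
        except InvalidSignature:
            print("content was tampered with in transit")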

    • (Score: 3, Insightful) by bzipitidoo on Wednesday May 06 2015, @09:23AM

      by bzipitidoo (4388) on Wednesday May 06 2015, @09:23AM (#179439) Journal

      One of the biggest problems with going all https is that multicasting becomes impractical. Instead of sending one data stream to hundreds of recipients, the server now has to encrypt it for each recipient, and send all that data separately. That's the real expense of encryption. No buffering, caching, or multicasting. It's like trying to employ encryption on radio and TV signals. No problem for two way communication, but a big problem for broadcasting.
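
      (A small sketch of why shared caching breaks, assuming AES-GCM as the bulk cipher: each client session has its own key, so identical bytes become a different ciphertext for every recipient, and an intermediate cache has nothing reusable to hold.)

          import os
          from cryptography.hazmat.primitives.ciphers.aead import AESGCM

          payload = b"the same video chunk for every viewer"

          # One plaintext, three sessions, three unrelated byte streams on the wire:
          for _ in range(3):
              session_key = AESGCM.generate_key(bit_length=128)
              nonce = os.urandom(12)
              print(AESGCM(session_key).encrypt(nonce, payload, None)[:16].hex())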

      They're worried about the Man In The Middle attack. But the man in the middle is a fundamental part of the Internet, necessary for the thing to work at all. Do they seriously want to abandon packet networking and go back to a switched network?

      Plus, there are bugs. One bad bug in Firefox's handling of https is too much reliance on clocks. On a computer with an out-of-date clock, Firefox believes the clock even when it can't possibly be correct, such as when the date it reports is older than the version of Firefox that's running. It accepts the obviously wrong date and throws up scary, incorrect warnings about certificates not yet being valid.
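
      (The sanity check being asked for is easy to sketch; BUILD_DATE below is a hypothetical timestamp baked in at compile time, not something Firefox actually exposes this way.)

          from datetime import datetime, timezone

          # Hypothetical build timestamp compiled into the binary.
          BUILD_DATE = datetime(2015, 5, 1, tzinfo=timezone.utc)

          def clock_is_trustworthy() -> bool:
              # A system clock earlier than the running build cannot be right, so
              # "certificate not yet valid" errors should blame the clock, not the site.
              return datetime.now(timezone.utc) >= BUILD_DATE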

      • (Score: 2) by TheRaven on Wednesday May 06 2015, @12:59PM

        by TheRaven (270) on Wednesday May 06 2015, @12:59PM (#179476) Journal
        Multicasting at the IP level is made impossible by TCP. If you're talking about large-scale streaming servers, then you might want to read the AsiaBSDCon paper about how Netflix enabled HTTPS for all of their video traffic. If they can do it for 1/3 of US Internet traffic then I don't see other people having much excuse.
        --
        sudo mod me up
  • (Score: 2) by gnuman on Wednesday May 06 2015, @03:32PM

    by gnuman (5013) on Wednesday May 06 2015, @03:32PM (#179554)

    relatively robust block cipher (slow) to exchange keys

    Block ciphers are not used to exchange keys. Public-key cryptography is used to exchange keys. Most crypto still uses block ciphers (like AES) for the data itself, via CBC or similar modes of operation.

    http://en.wikipedia.org/wiki/Public-key_cryptography [wikipedia.org]
    http://en.wikipedia.org/wiki/Block_cipher_mode_of_operation [wikipedia.org]

    The most common way to exchange keys in modern browsers is via elliptic curve crypto.

    http://en.wikipedia.org/wiki/Elliptic_curve_Diffie%E2%80%93Hellman [wikipedia.org]
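
    (A stripped-down sketch of that key-exchange-then-symmetric-cipher split, using the Python cryptography library with X25519 for the elliptic-curve key agreement and AES-GCM for the bulk data; real TLS layers certificates, transcripts, and a proper key schedule on top of this.)

        import os
        from cryptography.hazmat.primitives import hashes
        from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
        from cryptography.hazmat.primitives.ciphers.aead import AESGCM
        from cryptography.hazmat.primitives.kdf.hkdf import HKDF

        # Each side generates an ephemeral elliptic-curve key pair...
        client_priv = X25519PrivateKey.generate()
        server_priv = X25519PrivateKey.generate()

        # ...they swap public keys and derive the same shared secret.
        shared = client_priv.exchange(server_priv.public_key())

        # The shared secret is fed through a KDF to get the symmetric session key.
        session_key = HKDF(algorithm=hashes.SHA256(), length=32,
                           salt=None, info=b"toy handshake").derive(shared)

        # From here on, bulk data uses the fast symmetric cipher.
        nonce = os.urandom(12)
        ciphertext = AESGCM(session_key).encrypt(nonce, b"GET / HTTP/1.1", None)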

    So each page does one key exchange with an expensive cipher, then switches to a cheap cipher for the data.

    They are not really expensive (at least not with today's hardware), but they do use randomness. If your machine has a low randomness pool, this can cause problems, including having crap keys. And it is only really slow because it requires multiple round trips, which add up if you have a 100+ ms round-trip time to the server.