
posted by Fnord666 on Wednesday June 20 2018, @03:32AM
from the die-die-die dept.

As TLS 1.3 inches towards publication into the Internet Engineering Task Force's RFC series, it's a surprise to realise that there are still lingering instances of TLS 1.0 and TLS 1.1.

The now-ancient versions of Transport Layer Security (dating from 1999 and 2006 respectively) are nearly gone, but stubborn enough that Dell EMC's Kathleen Moriarty and Trinity College Dublin's Stephen Farrell want them formally deprecated.

This Internet-Draft (complete with “die die die” in the URL) argues that deprecation time isn't in the future, it's now, partly because developers in recalcitrant organisations or lagging projects probably need something to convince The Boss™ it's time to move.

The last nail in the coffin would be, formally and finally, to ban application fallback to the hopelessly insecure TLS 1.0 and 1.1 standards.

Deprecation also removes any excuse for a project to demand support for all four TLS variants (up to TLS 1.3), simplifying developers' lives and reducing the risk of implementation errors.
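
For developers, the practical effect is a one-line configuration change in most TLS stacks. As a minimal sketch, using Python's standard ssl module (the URL is illustrative):

    import ssl
    import urllib.request

    # Refuse to negotiate anything below TLS 1.2 (SSLContext.minimum_version
    # is available from Python 3.7 onwards).
    ctx = ssl.create_default_context()
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2

    # A server that only speaks TLS 1.0/1.1 now fails the handshake outright,
    # rather than being silently accommodated by fallback.
    with urllib.request.urlopen("https://example.com/", context=ctx) as resp:
        print(resp.status, resp.reason)

With only TLS 1.2 and 1.3 left to support, there is one less downgrade path to test and one less legacy code path to get wrong.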

[...] The publication of TLS 1.3 into the RFC stream is imminent – it's reached the last stage of the pre-publication process, author's final review. When it's published, it will carry the designation RFC 8446.


Original Submission

  • (Score: 0) by Anonymous Coward on Wednesday June 20 2018, @04:12AM

    by Anonymous Coward on Wednesday June 20 2018, @04:12AM (#695445)

    Feeding a URL to archive.li (and its old browser) sometimes fails completely without any indication of what went wrong.
    WTF?? [archive.li] (orig) [mediamatters.org]

    The other day, I found a site that does things properly (and explains to the folks scratching their head just WTF is going on).
    http://archive.li/HTsST#1% - Done right (Stupid broken S/N comments engine won't take my hyperlink.)
    (orig) [nature.com]

    -- OriginalOwner_ [soylentnews.org]

  • (Score: 3, Disagree) by shortscreen on Wednesday June 20 2018, @07:42AM (6 children)

    by shortscreen (2252) on Wednesday June 20 2018, @07:42AM (#695502) Journal

    one of these days I'll figure out how to set up my own local MITM proxy so I can browse plain old http again without all of the pointless errors about certificates and "unable to complete secure transaction"

    allowing webtards to decide what software I can run (by deliberately breaking everything else) is much worse than some hypothetical threat of ISPs injecting ads (which I could avoid anyway via other means)

    • (Score: 3, Disagree) by ledow on Wednesday June 20 2018, @11:38AM (5 children)

      by ledow (5567) on Wednesday June 20 2018, @11:38AM (#695541) Homepage

      Really?

      I've seen certificate errors about... what... once a year or less. Considering the sheer number of websites I visit, and that most of those errors come from genuinely insecure websites, that's nowhere near a hassle.

      The threat isn't from ISPs. It's from ANY malware on your network or anywhere in the path to the Internet. Without TLS, everything you see and type is visible and modifiable across your network, and that has been actively used as a propagation technique for viruses in the wild for decades. That's how those "router infections" work too. Something gets into your network, pops up a window which it pulls from your internal router, you click it because it's "not secure" or because you can't verify its security, and blam, it's in your router firmware.

      Plus, there's no excuse for even internal services not to have a valid TLS certificate nowadays. They are literally free (Let's Encrypt). That error you "just click through" to get to your private webserver / router admin page / whatever could have a full certificate, no problem.
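
      (For anyone who hasn't tried it: issuance is typically a single certbot invocation. A sketch, assuming certbot is installed, the name resolves publicly, and port 80 is reachable for the challenge; the hostname is made up:)

          # Standalone mode runs a temporary web server for the ACME challenge
          certbot certonly --standalone -d router.example.org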

      And it's not the case that just putting in a MITM proxy would solve the problem - if the destination site literally doesn't support TLS 1.0, then your proxy can't use TLS 1.0 to talk to it either. So either way you have to keep a machine up to date to talk to the latest websites securely, or do without talking to them at all. And if you're going to do that, you may as well just keep your own machines up to date.

      • (Score: 2) by Pino P on Wednesday June 20 2018, @01:14PM (3 children)

        by Pino P (4721) on Wednesday June 20 2018, @01:14PM (#695566) Journal

        Plus, there's no excuse for even internal services not to have a valid TLS certificate nowadays. They are literally free (Let's Encrypt).

        Let's Encrypt requires a fully-qualified domain name, as do all other CAs that meet the CAB Forum's Baseline Requirements. Domain name registration isn't free.

        Or to put it another way: What is the fully-qualified domain name of your Internet router? Or your aunt's Internet router?

        • (Score: 3, Touché) by ledow on Wednesday June 20 2018, @01:29PM (2 children)

          by ledow (5567) on Wednesday June 20 2018, @01:29PM (#695571) Homepage

          username.dyndns.org

          Or any of a dozen supported rivals, many of them free.

          Next question?

          • (Score: 2) by Pino P on Saturday June 23 2018, @05:44PM

            by Pino P (4721) on Saturday June 23 2018, @05:44PM (#697263) Journal

            If a dynamic DNS provider's domain isn't on the Public Suffix List [publicsuffix.org], then the first 20 users who request a certificate under that domain get a certificate, and the rest for the week get an error message that the request exceeds the rate limit of Let's Encrypt [letsencrypt.org].

            Only the provider can request that a domain be added to the PSL, not its users. This means a provider can hold its users hostage by requesting that only "premium" domains be added to the PSL, especially if the provider also resells some commercial TLS CA's DV certificates. And last I checked, there was a months-long backlog in processing providers' requests to be added to the PSL.
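
            A quick way to see how that grouping works is to ask which "registered domain" two hostnames share, since that is the bucket the rate limit counts against. A sketch using the third-party tldextract package, which bundles a PSL snapshot; the provider name is hypothetical:

                import tldextract  # pip install tldextract

                # Let's Encrypt counts certificates per registered domain,
                # which is computed against the Public Suffix List.
                for host in ["alice.exampledyn.net", "bob.exampledyn.net"]:
                    print(host, "->", tldextract.extract(host).registered_domain)

                # Both print "exampledyn.net": every user shares one rate-limit
                # bucket unless the provider gets its domain added to the PSL.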

          • (Score: 3, Informative) by Pino P on Saturday June 23 2018, @05:47PM

            by Pino P (4721) on Saturday June 23 2018, @05:47PM (#697267) Journal

            username.dyndns.org

            That went away years ago. When I type dyndns.org into my browser, I get redirected to dyn.com whose pricing page [dyn.com] doesn't show a free option. Wikipedia confirms [wikipedia.org]: "In April 2014, Dyn announced the discontinuation of its free hostname services effective May 7."

      • (Score: 3, Insightful) by jdccdevel on Wednesday June 20 2018, @06:09PM

        by jdccdevel (1329) on Wednesday June 20 2018, @06:09PM (#695684) Journal

        I have to work with equipment that has this sort of problem ALL THE TIME.

        There is a LOT of equipment out there, on Enterprise LANs and WANs, and in people's homes, that uses old versions of TLS and even SSL to encrypt web browser access. Most of that equipment absolutely CANNOT be upgraded as you describe. It will be in place until it breaks and is replaced. HTTP access is out of the question, since sending passwords in plaintext even on a LAN or WAN is irresponsible, but worrying about someone spoofing a certificate or mounting a MITM attack on SSL? If we have a malicious actor with access and smarts like that, we have bigger problems than a less-than-perfectly-secure web browser.

        We really, really need the tools to access this equipment.

        I absolutely understand removing default support for older encryption, self-signed certs, and whatnot from the Internet as a whole. That's a public space; websites need to keep up to date, and web users need to protect themselves (even if the attack is theoretical). But web browsers are tools for accessing more than just the Internet, and I wish they would give me the option to disable some of the hoops I need to jump through every time I need to configure something more than a couple of years old on my LAN.

        I already have an older version of Palemoon installed just to access older Java-applet-based stuff and older SSL-encrypted config pages. It gives me warnings, but at least it works. Firefox stopped compiling in SSL support a while back, and SSL was an absolute requirement, so I couldn't use that anymore. The problem is that the older web browser still works on the Internet, so it gets used to access it too (by habit, or because it's more convenient, or by accident, or because some UI change makes the older browser more comfortable to use...), and suddenly you're vulnerable. You're browsing the net with a web browser that isn't up to date anymore and can't be upgraded, because upgrading would break access to things you need access to.

        Seriously, if it's an RFC 1918 IP address, it's on the LAN, and I should be able to configure an up-to-date web browser so that everything legacy works. I should be able to let it know that I know it might be less than secure (to use TLS 1.0 on the LAN, for example), but that I need to access it anyway.

        Legacy equipment on the LAN should just keep working, and it shouldn't be that hard to whitelist legacy support for it.
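
        (For what it's worth, the closest thing available today is a global preference rather than a per-network whitelist, which is exactly the blunt instrument being complained about. In Firefox, at the time of writing, something like the following in user.js re-enables old versions everywhere; the pref name and values may change between releases:)

            // Global, not limited to RFC 1918 addresses.
            // 1 = TLS 1.0, 2 = TLS 1.1, 3 = TLS 1.2, 4 = TLS 1.3
            user_pref("security.tls.version.min", 1);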

  • (Score: 5, Insightful) by driverless on Wednesday June 20 2018, @09:04AM (17 children)

    by driverless (4770) on Wednesday June 20 2018, @09:04AM (#695519)

    The last nail in the coffin would be, formally and finally, to ban application fallback to the hopelessly insecure TLS 1.0 and 1.1 standards.

    In that case the first nail would be to drop the ridiculous hyperbole. TLS 1.0 and 1.1 aren't "hopelessly insecure". They have some mostly theoretical weaknesses that apply in special situations, but don't affect most users. And before someone jumps in with "but everyone knows they fail in manner XYZ": if they're hopelessly insecure, then show me an actual attack on, say, eBay or Paypal using TLS 1.0 or 1.1.

    An actual, real attack on eBay or Paypal, not some theorising based on a research paper at a conference.

    • (Score: 1, Interesting) by Anonymous Coward on Wednesday June 20 2018, @10:32AM (6 children)

      by Anonymous Coward on Wednesday June 20 2018, @10:32AM (#695532)

      TLS 1.0 has been in use for years. Was anyone actually pwned in the wild because of those weaknesses?

      People are far more likely to be pwned by governments/corporations using their CA certs to MITM people. There have been actual real cases of this happening:

      https://www.theregister.co.uk/2013/12/10/french_gov_dodgy_ssl_cert_reprimand/ [theregister.co.uk]
      https://www.fastcompany.com/3042030/the-huge-web-security-loophole-that-most-people-dont-know-about-and-how-its-be [fastcompany.com]
      https://techcrunch.com/2015/04/01/google-cnnic/ [techcrunch.com]

      And upgrading to TLS 1.3 isn't going to help vs that.

      So if anything is hopelessly insecure, it's the way browsers implement the CA HTTPS system that puts their users at risk.

      Something like what SSH does would be safer - e.g. if a cert changes unexpectedly the user is warned (there's stuff like certificate patrol but it doesn't handle cases where a site has multiple certs due to load balancing or other reasons).
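
      A minimal sketch of that SSH-style trust-on-first-use check, in Python; the pin-store filename is made up, and as noted above a real implementation would have to cope with sites that legitimately serve multiple certificates:

          import hashlib
          import json
          import socket
          import ssl

          PINS = "pins.json"  # hypothetical local store: host -> SHA-256 fingerprint

          def fingerprint(host, port=443):
              # Fetch the server's leaf certificate and hash its DER encoding.
              ctx = ssl.create_default_context()
              with socket.create_connection((host, port)) as sock:
                  with ctx.wrap_socket(sock, server_hostname=host) as tls:
                      der = tls.getpeercert(binary_form=True)
              return hashlib.sha256(der).hexdigest()

          def check(host):
              try:
                  with open(PINS) as f:
                      pins = json.load(f)
              except FileNotFoundError:
                  pins = {}
              fp = fingerprint(host)
              if host not in pins:
                  pins[host] = fp  # trust on first use
                  with open(PINS, "w") as f:
                      json.dump(pins, f)
                  print(host, "pinned on first use")
              elif pins[host] != fp:
                  print("WARNING:", host, "presented a different certificate")
              else:
                  print(host, "matches its pin")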

      • (Score: 2) by Pino P on Wednesday June 20 2018, @01:19PM (5 children)

        by Pino P (4721) on Wednesday June 20 2018, @01:19PM (#695567) Journal

        Something like what SSH does would be safer - e.g. if a cert changes unexpectedly the user is warned

        In TLS, self-signed certificates do exactly this. But how does this protect a user visiting a particular site for the first time? The client might be behind the same sort of government or corporate MITM that you mention.

        I guess part of the perception that the SSH model is safer than the TLS model comes from the fact that a user will typically connect to far fewer SSH servers than TLS servers in his lifetime and can therefore afford to spend more time (which is money) on double-checking each server's key fingerprint.

        • (Score: 2) by tomtomtom on Wednesday June 20 2018, @08:51PM (2 children)

          by tomtomtom (340) on Wednesday June 20 2018, @08:51PM (#695788)

          There are ways we can improve on this. The "ssh model" (trust on first use) is, for servers you connect to frequently, better in many ways than trusting the very large number of CAs out there *every* time you connect. That's why things like HPKP certificate pinning exist - to try to combine the two. You can then trust on first use based on a CA, and require future cert rollovers to be signed either by another key within the site owner's control or by a specific CA; otherwise they trigger a browser alert.

          The big problem, though, is convenience, and in particular convenience for site admins, who are mostly imperfectly competent at least some of the time. HPKP is unpopular because the cost of failure for a site admin (locking visitors out of the site if the pinned keys are lost) is too high.

          So the approach everyone seems to be converging on is to add side channels that verify the certificate in multiple ways, making a man-in-the-middle attack harder: e.g. DANE/DNSSEC, Certificate Transparency logs, etc.
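
          For reference, an HPKP policy is delivered as an ordinary HTTP response header along these lines (the pin values here are placeholders; the spec requires at least one backup pin):

              Public-Key-Pins: pin-sha256="PRIMARY_KEY_HASH_BASE64="; pin-sha256="BACKUP_KEY_HASH_BASE64="; max-age=5184000; includeSubDomains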

          • (Score: 0) by Anonymous Coward on Thursday June 21 2018, @06:55AM (1 child)

            by Anonymous Coward on Thursday June 21 2018, @06:55AM (#696070)

            That's why things like HPKP certificate pinning exist - to try to combine the two

            No. Things like certificate pinning exist to serve the corporations more than the users.

            Browsers can warn users of unusual/unexpected changes without requiring stuff like certificate pinning. This would help the user in so many more attack scenarios. e.g. in cases where certificate pinning is not practical for certain sites, or for self-signed certificates.

            Certificate pinning adds complexity just to protect narrower more specific cases (only for organizations that pin their certificates).

            See also:

            Some browsers also support the Public-Key-Pins-Report-Only header, which only triggers this reporting and does not show an error to the user.

            So guess whose interests were the real priority for those creating the standard?

            • (Score: 2) by Pino P on Saturday June 23 2018, @03:41PM

              by Pino P (4721) on Saturday June 23 2018, @03:41PM (#697211) Journal

              I assume the -Report-Only mode of HPKP is like that of Content Security Policy (CSP): a transition mechanism that lets a site debug its preliminary deployment before committing to enforcement.

        • (Score: 0) by Anonymous Coward on Thursday June 21 2018, @04:22AM (1 child)

          by Anonymous Coward on Thursday June 21 2018, @04:22AM (#696013)

          In TLS, self-signed certificates do exactly this.

          Do they really? Say you have a self-signed cert for yourdomain.com and your browser has been told to accept that. Say a CA signs a cert for yourdomain.com and it's used to MITM your browser and yourdomain.com. Which browser will warn you when that happens?

          • (Score: 2) by Pino P on Saturday June 23 2018, @05:39PM

            by Pino P (4721) on Saturday June 23 2018, @05:39PM (#697259) Journal

            Does the browser trust a newly seen certificate just because it's for the same hostname for which the user has added an exception for a different certificate?

            A browser behaving "correctly" would reject the certificate. Unfortunately, I have not had a chance to test this behavior in all browsers on all platforms.

    • (Score: 3, Insightful) by driverless on Wednesday June 20 2018, @10:42AM (3 children)

      by driverless (4770) on Wednesday June 20 2018, @10:42AM (#695534)

      As a followup based on a PM, since I never set any benchmark for what resources it would take: let's say you can call something "hopelessly insecure" if you can achieve either full plaintext recovery or full message forgery in close to real time, say 5 seconds or less. That figure is chosen so the delay won't be noticed by the victim; maybe 30s would pass for a web site, but for something like SIP it'd have to be close to real time, so 5s seems a good compromise.

      So I'll modify my previous challenge to say that "you can call it hopelessly insecure if you can demonstrate full plaintext recovery or message forgery in five seconds or less against TLS 1.0 or 1.1 for a typical target site like eBay or Paypal".

      • (Score: 2) by FakeBeldin on Wednesday June 20 2018, @11:10AM (2 children)

        by FakeBeldin (3360) on Wednesday June 20 2018, @11:10AM (#695536) Journal

        I would still not consider that "hopelessly insecure".
        For me, hopelessly insecure is when a non-expert with access to Google can break the security in 5 minutes or less.

        If someone is able to break Paypal or eBay, it might just mean that that person is good at this (or: spent enough effort to defeat them).

        • (Score: 2) by darkfeline on Thursday June 21 2018, @03:25AM (1 child)

          by darkfeline (1030) on Thursday June 21 2018, @03:25AM (#695983) Homepage

          By that definition, being vulnerable to SQL injection attacks or storing passwords hashed using MD5 and unsalted is not "hopelessly insecure". Hell, ROT13 might barely escape being "hopelessly insecure", given the average person's comprehension skills.

          --
          Join the SDF Public Access UNIX System today!
          • (Score: 2) by FakeBeldin on Thursday June 21 2018, @07:31AM

            by FakeBeldin (3360) on Thursday June 21 2018, @07:31AM (#696079) Journal

            ...which underscores my feeling that "hopelessly insecure" is a very strong statement.
            If the average person who understands the security purpose of a device or security control cannot trivially break it, then I believe "hopelessly" is an overstatement.

            Note that SQL injection may well fall under this, though: I'd be surprised if you couldn't learn within 5 minutes to type "test; DROP DATABASE" into a form field.
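
            (For what it's worth, the fix is about as learnable in five minutes as the attack. A sketch with Python's built-in sqlite3:)

                import sqlite3

                db = sqlite3.connect(":memory:")
                db.execute("CREATE TABLE users (name TEXT)")

                evil = "test'; DROP TABLE users; --"

                # Vulnerable pattern: splicing input into the SQL string.
                # db.execute("INSERT INTO users VALUES ('%s')" % evil)

                # Safe pattern: parameter binding keeps the input as plain data.
                db.execute("INSERT INTO users VALUES (?)", (evil,))
                print(db.execute("SELECT name FROM users").fetchall())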

    • (Score: 2) by ledow on Wednesday June 20 2018, @11:52AM (4 children)

      by ledow (5567) on Wednesday June 20 2018, @11:52AM (#695543) Homepage

      Oh come on.

      The first of these was written in 1999. Do you really expect it to still be secure? See:

      https://www.gracefulsecurity.com/tls-ssl-vulnerabilities/ [gracefulsecurity.com]

      DROWN: "It allows an attacker who has an effective man-in-the-middle to break the encryption of a TLS connection in under eight hours with a variant being achievable in one minute"
      CRIME: "The attacker requires a man-in-the-middle connection and the ability to repeatedly inject predictable data whilst monitoring the resulting encrypted traffic. This could be achievable through cross-site scripting attacks; JavaScript is not required, and an attack could be possible with HTML injection alone, although it would be less efficient. For CRIME to be possible, the client and server must support compression of the request before encryption. TLS supports DEFLATE, which is vulnerable, as is SPDY."
      BEAST: Mitigated, but can only be fixed by moving to TLS 1.2
      BREACH: See CRIME, but targeting HTTP-level compression rather than TLS compression.
      FREAK: "allows a positioned attacker with a man-in-the-middle attack to reduce the security offered by SSL/TLS by forcing a connection to use “export-grade” encryption – which reduces the RSA strength to 512 bits, which is breakable by attackers with a modest budget (in 2015 researchers showed this to be about $104 on Amazon EC2 instances)"
      Logjam: "allows a man-in-the-middle attacker to downgrade the encryption to 512-bit export grade cryptography"
      NOMORE: "allows an HTTP cookie to be retrieved within 52 hours"
      Bar Mitzvah: "allows for small amounts of plaintext data to be recovered from an SSL/TLS session. It requires a positioned attacker with a man-in-the-middle attack capable of capturing “many millions” of requests."
      SWEET32: "It requires a positioned attacker with a man-in-the-middle attack capable of capturing a long-lived HTTPS connection. The original proof of concept showed that it was possible to recover secure HTTP cookies by capturing around 785 GB of traffic, by generating traffic through malicious JavaScript."
      SSL POODLE: "It requires a man-in-the-middle attack and the ability for the attacker to cause the application to send the same data over newly created SSL3.0 connections but allows an attacker to decipher a chosen byte of cipher text in as few as 256 attempts."
      TLS POODLE: "is a vulnerability affecting certain implementations of TLS" (Yep, I'll give you that one!)
      Heartbleed: "It does not require a Man-in-the-Middle to exploit and can be exploited against both the server and the client. The issue allows an attacker to extract up to 64kb of memory from the vulnerable system, which can lead to the theft of credentials, session tokens and server private keys."

      Most of the original cipher suites are deprecated too. You can mitigate some of these problems, and some are down to poor implementations, but this is by far the most dangerous class of code to have bugs in, and redesigning to remove those problems is critical, not just papering over the cracks.

      SSL and TLS below 1.2 are dead for a reason. PCI DSS wouldn't FORCE them out of action unless they needed to; they barely keep up with most things as it is. If the finance sector has deprecated a protocol, you can be sure it's dead in the water. These are some of the people still insisting on Internet Explorer and ActiveX controls to do basic business banking in some instances!
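
      Anyone who wants to check which versions a given server still accepts can probe it directly. A sketch with Python's ssl module; note that a modern OpenSSL build may itself refuse to offer 1.0/1.1 at its default security level, and the hostname is illustrative:

          import socket
          import ssl

          def probe(host, version, port=443):
              # Pin the handshake to a single protocol version and see if it succeeds.
              ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
              ctx.check_hostname = False
              ctx.verify_mode = ssl.CERT_NONE  # probing only; never browse like this
              ctx.minimum_version = version
              ctx.maximum_version = version
              try:
                  with socket.create_connection((host, port), timeout=5) as sock:
                      with ctx.wrap_socket(sock, server_hostname=host) as tls:
                          return tls.version()
              except (ssl.SSLError, OSError):
                  return None

          for v in (ssl.TLSVersion.TLSv1, ssl.TLSVersion.TLSv1_1,
                    ssl.TLSVersion.TLSv1_2, ssl.TLSVersion.TLSv1_3):
              print(v.name, "->", probe("example.com", v) or "handshake refused")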

      • (Score: 3, Insightful) by driverless on Wednesday June 20 2018, @12:20PM

        by driverless (4770) on Wednesday June 20 2018, @12:20PM (#695551)

        The only ones in there that are a real threat, fallback to insecure ciphers and implementation bugs, aren't helped by going to TLS 1.2 or 1.3. In addition, several of the attacks you've cited as if they were attacks on the main protocol are simply fallback attacks, e.g. DROWN, which is an attack via SSLv2, not TLS 1.0 or 1.1. In any case, if you still support SSLv2 and 512-bit RSA then I'd say that's an implementation bug, so the problem is mostly buggy code, which you'll find in any version of TLS, not just 1.0 and 1.1. The rest are... well, as I said, demonstrate a real attack on a major site like eBay or Paypal that takes 5s or less to qualify as "hopelessly insecure". "Allows an HTTP cookie to be retrieved within 52 hours" and the like isn't an attack, it's a conference-paper proof of concept.

      • (Score: 2, Interesting) by Anonymous Coward on Wednesday June 20 2018, @12:44PM (2 children)

        by Anonymous Coward on Wednesday June 20 2018, @12:44PM (#695556)

        So you are just posting a huge list without actually reading it and pretending it supports your claim? The only one on that list I saw that had anything to do with TLS 1.0 or 1.1 was BEAST, and only TLS 1.0 was vulnerable, not 1.1 as you claimed. Several others are about cipher suites, which can be disabled separately from the TLS version.

        • (Score: 2) by ledow on Wednesday June 20 2018, @01:27PM (1 child)

          by ledow (5567) on Wednesday June 20 2018, @01:27PM (#695569) Homepage

          You mean apart from the ones that say TLS, even TLS 1.2 (Logjam), TLS 1.0 (BEAST), etc.? And we were actually TALKING specifically about TLS 1.0 and 1.1... Several of them mention that the fix is to upgrade to TLS 1.2 (or else they'd say 1.1, wouldn't they?), talk about TLS rather than SSL (e.g. BREACH, CRIME), or note that the TLS session can be downgraded to the point where it is vulnerable (if you have to keep TLS 1.0 but turn off most of its cipher suites, how is that any different, in terms of old-client usage, from needing to upgrade to TLS 1.2?).

          The protocols are broken. They are crackable NOW, using commodity hardware and a MITM. That's a death sentence for any encryption security standard. Pretending it isn't is nonsense. And that's just one brief page of summary CVE issues.

          This is like fecking about with WEP (1997) and WPA (2003) when WPA2 (2004) and WPA3 (2018) are just sitting there... and those actually required hardware upgrades.
          TLS 1.0 (1999) and TLS 1.1 (2006) are broken. Replace them with TLS 1.2 (2008) and soon TLS 1.3 (2018) where possible.

          Worst case with WEP, someone got on your wireless. Worst case with TLS, every financial or private byte of data anyone sends over the net can be read by anyone with an EC2 instance or some clever side channels.

          Banks don't up and change their software and encryption standards for a laugh. If anything, they are TEN YEARS behind.

          • (Score: 3, Insightful) by driverless on Wednesday June 20 2018, @03:50PM

            by driverless (4770) on Wednesday June 20 2018, @03:50PM (#695633)

            The protocols are broken. They are crackable NOW. Using commodity hardware and a MITM.

            You keep repeating this. Prove it by performing a full plaintext recovery attack on eBay or Paypal as I described earlier. Simply bleating "they're broken, they're totally insecure, they're totally broken" over and over doesn't make it so. If they're so incredibly easy to break, go ahead and do it, on a site where it actually matters like eBay or Paypal. I'll wait here.

    • (Score: -1, Troll) by Anonymous Coward on Wednesday June 20 2018, @03:42PM

      by Anonymous Coward on Wednesday June 20 2018, @03:42PM (#695628)

      Translation: I work for NSA and would really like if you wouldn't fix potential vulnerabilities before we use them against you.
