
posted by Fnord666 on Wednesday April 11 2018, @12:14PM   Printer-friendly
from the trust-but-have-no-clue dept.

The Domain Name System (DNS) is a plain-text service that lets anyone who can see “the wire” capture a user's DNS traffic and work out whether they're asking for naughty.com or nice.com. So, to help enhance users' privacy, a group of researchers has proposed a more private “Oblivious DNS” protocol.

However, as the group explained here, even encrypted DNS (for example, DNS over TLS) is still exposed at the recursive resolver (that is, the DNS component most directly connected to the client), because that server decrypts the user request so it can fetch the IP address of the site the user wants.

In other words, whether you use your ISP's resolver, or one provided by a third party like Google or Cloudflare, at some point you have to trust the resolver with your DNS requests.

[...] To get around this, Oblivious DNS is designed to operate without any change to the existing DNS. As its designers write, it “allows current DNS servers to remain unchanged and increases privacy for data in motion and at rest”.

Instead, it introduces two infrastructure components that would be deployed alongside current systems: a resolver “stub” between the recursive resolver and the client; and a new authoritative name server, .odns, at the same level in the hierarchy as the root and TLD servers.

In this model:

  • The stub server accepts the user query ("what's the IP address of foo.com?"), and encrypts it with a session key/public key combination;
  • The recursive name server receives the request (with .odns appended) and the session key, both encrypted;
  • The .odns label tells the resolver to pass the request up to the ODNS authoritative server, which decrypts the request and acts as a recursive resolver (that is, it passes requests up the DNS hierarchy in the normal fashion);
  • The ODNS server encrypts the response and passes it back down to the stub, which sends the response to the client.
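
The paper doesn't ship reference code, but the who-sees-what split is easy to sketch. Below is a minimal Python illustration (using the third-party cryptography package; every name in it is illustrative rather than taken from the ODNS prototype): the stub hybrid-encrypts the query and a session key to the ODNS server's public key, the recursive resolver only ever handles ciphertext plus the .odns label, and the ODNS server alone can decrypt, resolve, and answer under the session key.

    # Illustrative sketch of the ODNS trust split, not the authors' implementation.
    # Requires the third-party "cryptography" package.
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa
    from cryptography.fernet import Fernet

    OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    # ODNS authoritative server: the only holder of the private key.
    odns_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    odns_public = odns_private.public_key()      # published to stubs out of band

    def stub_build_query(name):
        """Stub (runs near or inside the client): encrypt the query under a
        fresh session key and wrap that key for the ODNS server."""
        session_key = Fernet.generate_key()
        encrypted_query = Fernet(session_key).encrypt(name.encode())
        wrapped_key = odns_public.encrypt(session_key, OAEP)
        return encrypted_query, wrapped_key, session_key   # ".odns" label rides along

    def recursive_forward(encrypted_query, wrapped_key):
        """Recursive resolver: knows the client's IP but holds only ciphertext;
        the .odns label just tells it where to forward the blobs."""
        return odns_resolve(encrypted_query, wrapped_key)

    def odns_resolve(encrypted_query, wrapped_key):
        """ODNS server: decrypts, resolves, answers -- but never sees the client."""
        session_key = odns_private.decrypt(wrapped_key, OAEP)
        name = Fernet(session_key).decrypt(encrypted_query).decode()
        answer = "192.0.2.1 (placeholder answer for %s)" % name  # real code would recurse
        return Fernet(session_key).encrypt(answer.encode())

    enc_q, wrapped, key = stub_build_query("foo.com")
    enc_answer = recursive_forward(enc_q, wrapped)
    print(Fernet(key).decrypt(enc_answer).decode())

The actual proposal piggybacks these blobs on ordinary DNS queries to a name under .odns, so no existing resolver has to change; the sketch only makes explicit which party sees the client's address and which sees the query.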

The authors explained that this decouples the user's identity from their request.

The recursive resolver a user connects to knows the IP address of the user, but not the query; the ODNS resolver can see the query, but knows only the address of the recursive resolver the user connects to, not the user.

Similarly, an attacker with access to a name server never sees the user's IP address, because the request is coming from the ODNS server.

The group has posted a conference presentation from late March here [PDF], and emphasises that Oblivious DNS is a “work in progress”.


Original Submission

  • (Score: 3, Interesting) by bradley13 on Wednesday April 11 2018, @12:22PM (5 children)

    by bradley13 (3053) on Wednesday April 11 2018, @12:22PM (#665334) Homepage Journal

    It's a clever idea, since it avoids any changes to existing infrastructure. However, it seems to me that there are two weaknesses:

    - First, at the first recursive name server, where .odns is appended, the anonymity of the users depends on their queries being lost in the masses. There need to be a lot of users and a lot of queries.

    - Second, the duplicate .odns name servers - are these going to become a bottleneck?

    --
    Everyone is somebody else's weirdo.
    • (Score: 2) by zocalo on Wednesday April 11 2018, @02:47PM

      by zocalo (302) on Wednesday April 11 2018, @02:47PM (#665379)
      Whether it's a bottleneck will kind of depend on the number of users and the number/capacity of the servers. Don't forget that they're still testing, so it seems likely that if they get to production they will have more capacity and possibly some form of anycast, geo-located, CDN-based setup like the ones CloudFlare's 1.1.1.1 or Google's 8.8.8.8 DNS services use. DNS isn't a particularly high-resource application until you get up to insane levels of usage like the root or major TLD servers have to handle - as long as they have enough memory and threads they'll probably be fine with just a small number of actual instances.
      --
      UNIX? They're not even circumcised! Savages!
    • (Score: 2) by KiloByte on Wednesday April 11 2018, @03:55PM

      by KiloByte (375) on Wednesday April 11 2018, @03:55PM (#665401)

      The first name server sits on ::1, which can't possibly reduce your security (if it gets pwned, your browser is pwned too).

      --
      Ceterum censeo systemd esse delendam.
    • (Score: 3, Interesting) by bob_super on Wednesday April 11 2018, @04:44PM (2 children)

      by bob_super (1357) on Wednesday April 11 2018, @04:44PM (#665428)

      - Third: Your ISP wants to monetize you. Without Net Neutrality, they can block the IP of any DNS that is not theirs ("for security reasons" or "just because we can, and you have no choice").

      • (Score: 2) by frojack on Wednesday April 11 2018, @06:52PM (1 child)

        by frojack (1554) on Wednesday April 11 2018, @06:52PM (#665474) Journal

        Except they don't, which suggests they know they can't.

        https://inpropriapersona.com/articles/making-dns-work-isp-blocks-port-53/ [inpropriapersona.com]

        --
        No, you are mistaken. I've always had this sig.
        • (Score: 2) by bob_super on Wednesday April 11 2018, @07:26PM

          by bob_super (1357) on Wednesday April 11 2018, @07:26PM (#665487)

          > my ISP (which happens to be my university, since I’m on their network)

          Yes, because a university is totally the same thing as Comcast, which, next month as soon as the rules are in effect, will be blocking whatever the [bleep] they feel like for maximum profit.
          If Comcast says "we'll block all alternative DNS providers, you must use ours", 99% of their customers will either not realize or not be able to do anything about it. Whether they'll bother to play cat-and-mouse with seemingly random IPs carrying traffic that looks like encrypted DNS packets is the question.

  • (Score: 0) by Anonymous Coward on Wednesday April 11 2018, @01:47PM

    by Anonymous Coward on Wednesday April 11 2018, @01:47PM (#665359)

    So basically a DNS-only VPN. You can do that now, without any protocol updates.

    Really, anything that keeps any part of the existing DNS infrastructure intact is a complete waste of time. The problem is fundamental to the architecture. You can't fix it without redesigning it from scratch.

  • (Score: 3, Interesting) by JoeMerchant on Wednesday April 11 2018, @01:49PM (1 child)

    by JoeMerchant (3937) on Wednesday April 11 2018, @01:49PM (#665360)

    Well, duh: "at some point you have to trust the resolver with your DNS requests." So, can we do our DNS requests via Tor?

    --
    🌻🌻 [google.com]
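
    One way to do exactly that, sketched below in Python: push an ordinary DNS-over-TCP query through Tor's local SOCKS5 proxy, so the resolver only ever sees the exit node's address. This is a minimal illustration, assuming a running Tor client on its default SocksPort (127.0.0.1:9050) and the third-party PySocks and dnspython packages; the resolver address and helper name are just placeholders.

        # Resolve a name by tunnelling DNS-over-TCP through Tor's SOCKS5 proxy.
        # Illustrative sketch; needs PySocks, dnspython, and a local Tor client.
        import struct

        import socks          # PySocks
        import dns.message    # dnspython

        TOR_SOCKS = ("127.0.0.1", 9050)   # Tor's default SocksPort
        RESOLVER = ("1.1.1.1", 53)        # any public resolver reachable over TCP

        def resolve_via_tor(name, rdtype="A"):
            query = dns.message.make_query(name, rdtype)
            wire = query.to_wire()

            s = socks.socksocket()
            s.set_proxy(socks.SOCKS5, *TOR_SOCKS)
            s.settimeout(15)
            s.connect(RESOLVER)                              # the Tor exit makes this hop
            s.sendall(struct.pack("!H", len(wire)) + wire)   # DNS-over-TCP length prefix

            length = struct.unpack("!H", s.recv(2))[0]
            data = b""
            while len(data) < length:
                data += s.recv(length - len(data))
            s.close()
            return dns.message.from_wire(data)

        print(resolve_via_tor("example.com"))

    The trust split ends up similar to ODNS: the resolver sees the query but only the exit node's address, your ISP sees only Tor traffic, and the price is Tor's latency on every lookup.
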
  • (Score: 3, Insightful) by insanumingenium on Wednesday April 11 2018, @04:20PM (8 children)

    by insanumingenium (4824) on Wednesday April 11 2018, @04:20PM (#665414) Journal
    Is April 11th like April 1st twice or something?

    This article was terribly written. If an attacker has access to your MAC address, they are on the same network as you, and would have it regardless. If they are on the same network, they are between you and your stub server (as drawn) and they can see the requests before they're encrypted.

    They mention having your subnet, which is somewhere between imprecise and incorrect no matter how you slice it. They would have your IP, not your subnet, and while the IP is the actual identifying information, you can't define a subnet without a mask, which definitely isn't provided.

    If you are trying to pitch a security concept, you need to address pedantic concerns like this because they make you look like an idiot.

    Setting up a stub is going to be beyond most users, and unless that stub is local (preferably integrated into the client itself) it is a total failure.

    If your stub resolver needs public keys to encrypt the request, you can't load-balance those requests unless you share that same private key everywhere you may need to resolve.

    This difficulty in distributing infrastructure sounds to me like a great way to aggregate "sensitive" requests in one place, and require only a single key to get all the keys to the kingdom.

    Correlation attacks will be trouble: even if you assume there will be a huge flow of requests, correlation is powerful, especially given a large enough data set. Speaking of historical data, forward secrecy isn't optional in this day of national security letters.
    • (Score: 2, Informative) by Anonymous Coward on Wednesday April 11 2018, @05:33PM (1 child)

      by Anonymous Coward on Wednesday April 11 2018, @05:33PM (#665440)
    • (Score: 0) by Anonymous Coward on Wednesday April 11 2018, @05:53PM

      by Anonymous Coward on Wednesday April 11 2018, @05:53PM (#665446)

      The Register's tech writers are hilarious. They are constantly spewing nonsense about various Linoxes too.

    • (Score: 2) by frojack on Wednesday April 11 2018, @06:56PM (2 children)

      by frojack (1554) on Wednesday April 11 2018, @06:56PM (#665477) Journal

      Most of these issues are resolved by moving your stub server closer and closer to your client - perhaps eventually right into the client. Further, there's no reason you can't have stacked stubs.

      --
      No, you are mistaken. I've always had this sig.
      • (Score: 2) by nobu_the_bard on Wednesday April 11 2018, @07:36PM

        by nobu_the_bard (6373) on Wednesday April 11 2018, @07:36PM (#665494)

        If the stub is in the client itself, isn't identifying the stub as good as identifying the client?

      • (Score: 2) by insanumingenium on Wednesday April 11 2018, @08:08PM

        by insanumingenium (4824) on Wednesday April 11 2018, @08:08PM (#665513) Journal

        I would call that a minimum barrier for security, but I don't think that saying they will be integrated in clients resolves more than the most trivial of issues.

        Can you imagine trying to get that set up? Which is worse: walking an end user through entering their DNS public keys, or training them to trust whatever gets assigned (here comes a new DHCP option) as legitimate?

        Not sure what you mean by stacked stubs.

    • (Score: 2, Interesting) by dwilson on Wednesday April 11 2018, @08:34PM (1 child)

      by dwilson (2599) Subscriber Badge on Wednesday April 11 2018, @08:34PM (#665533) Journal

      I disagree with a lot of what you said.

      If an attacker has access to your MAC Address, they are on your same network, and would have it regardless. If they are on the same network, they are between you and your stub server (as drawn) and they have the pre encrypted requests.

      My network consists of the machines running in my home, connected to my (OpenWrt-running) router, which is connected to the wider internet via an ADSL connection to my local ISP. I'm not worried about attackers on my local network. I imagine the vast majority of people are in the same boat.

      If you are trying to pitch a security concept, you need to address pedantic concerns like this because they make you look like an idiot.

      Perfect is the enemy of good. If this solution helps keep anyone outside my network from knowing my DNS requests, that's a Good Thing. Address the edge cases as they come up; don't wait for a perfect solution for everybody. Sure, that's biased towards my needs and my setup. What other metric would I use, realistically?

      If your stub resolver needs public keys to encrypt the request, you can't load balance those requests unless your share that same private key everywhere you may need to resolve.

      This difficulty in distributing infrastructure sounds to me like a great way to aggregate "sensitive" requests in one place, and require only a single key to get all they keys to the kingdom.

      Correlation attacks will be trouble, even if you assume there will be a huge flow of requests, correlation is powerful, especially given a large enough data set. Speaking of historical data, forward secrecy isn't optional in this day of national security letters.

      That sounds like a valid concern.

      --
      - D
      • (Score: 2) by insanumingenium on Wednesday April 11 2018, @09:46PM

        by insanumingenium (4824) on Wednesday April 11 2018, @09:46PM (#665564) Journal
        My comment was that they were writing about sniffing MACs as a concern, when if someone can sniff your MAC, they are already on your network and already have it regardless of DNS. The point isn't that I am overly worried about attackers being on-net (though I don't think it unreasonable to build to that standard), but that their scary story of all the information an attacker could collect didn't make any sense.

        They don't explicitly define the placement of the stub, and this isn't a matter of edge cases: if the stub is placed on an untrusted network, this proposal provides less than zero benefit. At least without it, you are potentially aware you are totally unprotected.

        The first half of my previous response was not a technical critique, but a complaint that their writing was unclear, bordering on deceptive. To be clear, I also didn't explicitly state that I wasn't referring to The Register's story (which I didn't read when an academic link was available) but to the linked source [princeton.edu] and the sublinked slideshow [princeton.edu]. Admittedly, someone else already pointed out that there is an extension to add subnet information to DNS requests, which I was unaware of, but I can't imagine the same is true of MACs. Even given the subnet extension, I am not clear on how this proposal would interact with it. Do these guys at Princeton not have any professors acting as advisors to point out that their summary sucks? Am I wrong to expect clarity from academic publications?

        I will normally be the first person shouting "perfect is the enemy of good" right along with you, but if you can't write a coherent proposal, how is anyone supposed to support it?

        Security has to be well thought out; suggesting a vulnerability that doesn't exist absolutely discredits these authors. My comment wasn't that this isn't viable (though to be honest I don't see it being so at scale), but that making a security proposal that is unclear is a terrible mistake. I would personally demand that anything dealing with security be defined totally and transparently; I have seen too many implementations of well-defined standards get details slightly different in ways that put people at risk.
  • (Score: 2) by nobu_the_bard on Wednesday April 11 2018, @06:02PM (1 child)

    by nobu_the_bard (6373) on Wednesday April 11 2018, @06:02PM (#665451)

    So how is this not just a DNS proxy with encryption?

    • (Score: 0) by Anonymous Coward on Thursday April 12 2018, @09:01AM

      by Anonymous Coward on Thursday April 12 2018, @09:01AM (#665808)

      Sounds to me like it's two DNS proxies, which must not be under the control of the same corporation or government[1]. One knows who you are, the other knows what you are looking for.

      [1] These days, that means that one must be in "the west", the other in Russia or China. And you need to have some kind of guarantee of that.

  • (Score: 0) by Anonymous Coward on Wednesday April 11 2018, @08:29PM

    by Anonymous Coward on Wednesday April 11 2018, @08:29PM (#665529)

    Now there's a showstopper if there ever was one...

    There can be no privacy until we develop real ad-hoc peer-to-peer connections. The current setup is insecure by default, designed for internal networks, where it works perfectly. Wide-area networking has to be done in an entirely different fashion because the connections cannot be trusted, ever, unless you own them all...
