
posted by CoolHand on Wednesday June 24 2015, @12:02AM
from the address-muncher dept.

It seems weird that in this era of virtual everything, a number is hard to come by. The restrictions are real, however, because AWS artificially restricts the number of IP addresses you can bind to an interface on your VM. You have to buy a bigger VM to get more IP addresses, even if you don't need the extra compute. Also, IPv6 is nowhere to be seen on the clouds, so addresses are more scarce than they need to be in the first place.

So the key problem is that you want to find a way to get tens or hundreds of IP addresses allocated to each VM.

Most workarounds to date have involved "overlay networking". You make a database in the cloud to track which IP address is attached to which container on each host VM. You then create tunnels between all the hosts so that everything can talk to everything. This works, kinda. It results in a mess of tunnels and much more complex routing than you would otherwise need. It also ruins performance for things like multicast and broadcast, because those are now exploding off through a myriad of twisty tunnels, all looking the same.
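To make the mapping-database approach concrete, here is a minimal conceptual sketch of what such a lookup amounts to (Python; the addresses, names, and table are hypothetical illustrations, not any particular overlay product):

```python
# Conceptual sketch of "overlay networking" as described above: a central
# table records which host VM owns each container IP, and cross-host
# traffic is wrapped in host-to-host tunnels. Everything here is made up
# for illustration.

# Central mapping database: container IP -> host VM (underlay) IP.
CONTAINER_TO_HOST = {
    "10.1.0.7":  "172.16.3.4",
    "10.1.0.8":  "172.16.3.4",
    "10.2.0.12": "172.16.9.9",
}

def route(src_container: str, dst_container: str) -> str:
    """Decide how a container-to-container packet gets delivered."""
    src_host = CONTAINER_TO_HOST[src_container]
    dst_host = CONTAINER_TO_HOST[dst_container]
    if src_host == dst_host:
        # Same host: deliver over the local bridge, no tunnel needed.
        return f"local bridge on {src_host}"
    # Different hosts: encapsulate and send through a tunnel.
    return f"tunnel {src_host} -> {dst_host}"

if __name__ == "__main__":
    print(route("10.1.0.7", "10.1.0.8"))   # local bridge on 172.16.3.4
    print(route("10.1.0.7", "10.2.0.12"))  # tunnel 172.16.3.4 -> 172.16.9.9
```

Every host has to stay in sync with that table, and every pair of hosts needs a tunnel, which is exactly the complexity the summary is complaining about.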

The Fan is Canonical's answer to the container networking challenge.
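The summary doesn't spell out how the Fan achieves this, but as described in Canonical's announcement the trick is pure address arithmetic: an overlay /8 (for example 250.0.0.0/8) is mapped onto the host's /16 underlay address, so every host deterministically owns a /24 worth of container addresses with no mapping database and no per-pair tunnel state. A rough sketch of that math, with illustrative prefixes:

```python
# Sketch of the Fan's address arithmetic: splice the last two octets of the
# host's underlay address (its position inside a /16) into an overlay /8,
# yielding a /24 of container addresses that this host owns outright.
import ipaddress

def fan_subnet(host_ip: str, overlay: str = "250.0.0.0/8") -> ipaddress.IPv4Network:
    """Derive the fan /24 a host owns from its underlay address."""
    host = ipaddress.IPv4Address(host_ip)
    overlay_net = ipaddress.IPv4Network(overlay)
    packed = host.packed
    # Overlay first octet + host's last two octets + .0 => a /24 network.
    prefix = overlay_net.network_address.packed[:1] + packed[2:4] + b"\x00"
    return ipaddress.IPv4Network((ipaddress.IPv4Address(prefix), 24))

if __name__ == "__main__":
    # A host at 172.16.3.4 would own 250.3.4.0/24: roughly 250 container
    # addresses, computable by any peer without consulting a database.
    print(fan_subnet("172.16.3.4"))
```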


Original Submission

 
This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 2, Interesting) by Anonymous Coward on Wednesday June 24 2015, @12:19AM

    by Anonymous Coward on Wednesday June 24 2015, @12:19AM (#200163)

    IP address space is a limited resource. Amazon expects you to pay for resources that you use. That's not "bungling" anything; that's how reality works, here in the real world.

  • (Score: 1, Insightful) by Anonymous Coward on Wednesday June 24 2015, @12:27AM

    by Anonymous Coward on Wednesday June 24 2015, @12:27AM (#200166)

    I hope to see cheaper IPv6-only VMs soon. Some stuff just does not need worldwide IPv4 access. Also, if you are balancing incoming traffic between different servers, you can portion IPv6 traffic onto its own set of servers and those servers would not need IPv4 at all.

    • (Score: 0) by Anonymous Coward on Wednesday June 24 2015, @12:34AM

      by Anonymous Coward on Wednesday June 24 2015, @12:34AM (#200171)

      I wanna see flying pigs so bad, I bought a buncha live pigs, and I'm throwin em outta the window right now. Ha ha! See em break their bones on the sidewalk. Hear em squealin in pain. Who wants bruised bacon?

      • (Score: 0) by Anonymous Coward on Wednesday June 24 2015, @01:47AM

        by Anonymous Coward on Wednesday June 24 2015, @01:47AM (#200195)

        It's real hard to offer a product with LESS features

      • (Score: 0) by Anonymous Coward on Thursday June 25 2015, @08:06AM

        by Anonymous Coward on Thursday June 25 2015, @08:06AM (#200837)

        IPv6 is here now and it works. Not everywhere, but if you have it at home and at the VM, it does not matter what IPv4 backwater other people are stuck in.

    • (Score: 2) by vux984 on Wednesday June 24 2015, @02:09AM

      by vux984 (5045) on Wednesday June 24 2015, @02:09AM (#200200)

      I hope to see cheaper IPv6-only VMs soon. Some stuff just does not need worldwide IPv4 access.

      If it doesn't need a worldwide address, you could just run that stuff on a private 10.x.x.x VLAN or equivalent.

      • (Score: 2) by drussell on Wednesday June 24 2015, @03:52AM

        by drussell (2678) on Wednesday June 24 2015, @03:52AM (#200222) Journal

        I don't believe they're talking about publicly routable addresses, just the limit on the number of interfaces and/or (private) addresses on one VM.

        Why you would ever need tens or hundreds of publicly routable addresses on one VM except to spam or bot, I have no idea... That simply makes no sense, but even needing hundreds of private addresses for this "containerized networking whatnot foo hooey" is also beyond me (though IPv6 certainly has the address space to do this, should someone desire it). I obviously don't understand what people are using this for, and while there may be some legitimate technical reason for operating something like this, I have a feeling it is a sub-optimal solution to some vague problem that has far, far better solutions from a technical perspective.

        Then again, I don't understand exactly what they're doing and since at first glance it appears silly, I really don't care to invest the time and effort to figure out precisely what they're trying to accomplish. :)

        It would certainly be nice to have an explanation in the summary of what exactly these kinds of things are, though, instead of just quoting part of some vague article. Ditto for always expanding acronyms on first use in a summary. We may be mostly technical people, but not every abbreviation is going to click with everyone; many acronyms have multiple valid technical meanings, and it may not be immediately obvious to someone in one field or another which one is being discussed.

        Stash that in the "suggestions to help make excellent submissions" file. :)

        Just my $0.02

        • (Score: 3, Informative) by bradley13 on Wednesday June 24 2015, @08:09AM

          by bradley13 (3053) on Wednesday June 24 2015, @08:09AM (#200284) Homepage Journal

          This all made no sense to me either. Here's an article explaining a bit about the idea of containers and the Docker software. [techtarget.com]

          The goal seems to be to improve process security within the Linux kernel, so that we can go back to running multiple services on the same server. Virtualization is fine and wonderful: it provides better security, plus snapshots, portability, etc. But there is something a bit crazy about needing a complete OS installation for every individual service you want to run. In the end, the idea here seems to be to provide that high level of isolation while allowing the services to reside on a single installation.

          Still, I'm failing to see the need for hundreds (or even tens) of IP addresses. What kind of services have such small footprints that you could put "hundreds" on a laptop - and yet each service needs its own IP address? If the services are all running under the same OS, isn't that what port numbers on localhost (or ::1) are for? Why have these sandboxed processes even pretend to be different machines with different addresses?
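          One practical difference the sketch below tries to capture (Python; hypothetical services and addresses; binding 127.0.0.x beyond .1 assumes a Linux-style loopback that covers all of 127.0.0.0/8): with a single shared address, services have to coordinate distinct port numbers, whereas one address per container lets every service listen on its conventional port.

```python
# Contrast "ports on a shared localhost" with "one address per service".
# Services and addresses are made up; 127.0.0.2/3 bind without extra
# configuration on Linux because the whole of 127/8 is loopback there.
import socket

def listener(addr: str, port: int) -> socket.socket:
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind((addr, port))
    s.listen()
    return s

if __name__ == "__main__":
    # Shared address: the second "service" must pick a non-standard port.
    web_a = listener("127.0.0.1", 8080)
    web_b = listener("127.0.0.1", 8081)

    # One address per service: both can use the same well-known port.
    web_c = listener("127.0.0.2", 8080)
    web_d = listener("127.0.0.3", 8080)
    print("all four listeners bound")
```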

          Anyone out there with actual, hands-on experience with this stuff?

          --
          Everyone is somebody else's weirdo.
          • (Score: 2) by drussell on Thursday June 25 2015, @06:20AM

            by drussell (2678) on Thursday June 25 2015, @06:20AM (#200820) Journal

            But there is something a bit crazy about needing a complete OS installation for every individual service you want to run.

            Definitely...

            The goal seems to be to improve process security within the Linux kernel, so that we can go back to running multiple services on the same server.

            At first glance this looks like a feeble-minded implementation of the concept behind the FreeBSD "Jail" system. None of this would be necessary if services were truly secure; an additional layer of protection is fine, but containerizing things to the point where you have an issue with interfaces and addresses seems a bit ridiculous.

            Still, I'm failing to see the need for hundreds (or even tens) of IP addresses. What kind of services have such small footprints that you could put "hundreds" on a laptop - and yet each service needs its own IP address? If the services are all running under the same OS, isn't that what port numbers on localhost (or ::1) are for? Why have these sandboxed processes even pretend to be different machines with different addresses?

            Amen!

      • (Score: 0) by Anonymous Coward on Wednesday June 24 2015, @05:06AM

        by Anonymous Coward on Wednesday June 24 2015, @05:06AM (#200230)

        The problem with AWS, and cloud anything for that matter, is that it has to be accessible remotely from anywhere. That is part of what makes it a cloud server. Private addressing moves it from PaaS to SaaS with a massive cut to what the server is worth.

        • (Score: 0) by Anonymous Coward on Thursday June 25 2015, @08:10AM

          by Anonymous Coward on Thursday June 25 2015, @08:10AM (#200838)

          Right. 10.x.x.x can conflict. And it is not Internet accessible without a VPN. If you already have IPv6 in your home (this is not too uncommon these days), then you do not need the VPN.

          People need VMs for serving IPv6 traffic. They can serve IPv4 traffic too, but if they do not have to, why bother giving them scarce IPv4 addresses.

      • (Score: 0) by Anonymous Coward on Thursday June 25 2015, @02:43PM

        by Anonymous Coward on Thursday June 25 2015, @02:43PM (#200976)

        I am talking about publicly routable addresses for IPv6 only. A percentage of your traffic is IPv6, and it is increasing quickly. If you operate a cluster, you can point AAAA records to a specific set of VMs and turn off IPv4 on those VMs.
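        A rough illustration of that split (hypothetical hostnames and documentation-range addresses, not from the comment): the same name carries A records for a dual-stack pool and AAAA records for an IPv6-only pool, so the v6-only VMs never consume public IPv4 addresses.

```python
# Sketch of splitting one service name across dual-stack and IPv6-only pools.
# Hostnames and addresses are placeholders from the documentation ranges.
DUAL_STACK = {"web1": "203.0.113.10", "web2": "203.0.113.11"}   # get A records
V6_ONLY    = {"web3": "2001:db8::3",  "web4": "2001:db8::4"}    # get AAAA records

def zone_records(name="www.example.com.", ttl=300):
    """Emit zone-file-style lines: A for the dual-stack pool, AAAA for v6-only."""
    lines = [f"{name} {ttl} IN A    {ip}" for ip in DUAL_STACK.values()]
    lines += [f"{name} {ttl} IN AAAA {ip}" for ip in V6_ONLY.values()]
    return lines

if __name__ == "__main__":
    # IPv6-capable clients generally prefer the AAAA answers, so the
    # v6-only pool serves them without needing any public IPv4 address.
    print("\n".join(zone_records()))
```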

  • (Score: 0) by Anonymous Coward on Wednesday June 24 2015, @01:14AM

    by Anonymous Coward on Wednesday June 24 2015, @01:14AM (#200185)

    > IP address space is a limited resource.

    Not for private routed networks, which is what this appears to be about - intra-VM, not internet.
    Well, not practically limited, anyway.