
posted by Fnord666 on Saturday August 26 2017, @04:16AM   Printer-friendly
from the not-just-for-clothing-any-more dept.

Submitted via IRC for TheMightyBuzzard

Docker is a great tool. But Docker containers are not a cure-all. If you really want to understand how Docker is impacting the channel, you have to understand its limitations.

Docker containers have become massively popular over the past several years because they start faster, scale more easily and consume fewer resources than virtual machines.

But that doesn't mean that Docker containers are the perfect solution for every type of workload. Here are examples of things Docker can't do or can't do well:

  • Run applications as fast as a bare-metal server.
  • Provide cross-platform compatibility.
  • Run applications with graphical interfaces.
  • Solve all your security problems.

I kinda miss just running services directly on physical servers. Guess I'm getting old.

Source: http://thevarguy.com/open-source/when-not-use-docker-understanding-limitations-containers


Original Submission

  • (Score: 1, Interesting) by Anonymous Coward on Saturday August 26 2017, @05:48AM (10 children)

    by Anonymous Coward on Saturday August 26 2017, @05:48AM (#559309)

    How do I run a lightweight "container" that runs Google Chrome (along with all its dependencies) and nothing else?
    I don't want the binaries shitting up my system into dependency hell.

    • (Score: -1, Troll) by Ethanol-fueled on Saturday August 26 2017, @05:56AM (4 children)

      by Ethanol-fueled (2792) on Saturday August 26 2017, @05:56AM (#559312) Homepage

      is when two gay men wrap their foreskins around each other's glans penii

      Of course, you all know this.

      • (Score: -1, Troll) by Anonymous Coward on Saturday August 26 2017, @06:22AM (2 children)

        by Anonymous Coward on Saturday August 26 2017, @06:22AM (#559319)

        is when two gay men wrap their foreskins around each other's glans penii

        Are you saying Jews can't use Space Docking? Terrible anti-Semitic technology.

        • (Score: 0, Troll) by Ethanol-fueled on Saturday August 26 2017, @06:37AM (1 child)

          by Ethanol-fueled (2792) on Saturday August 26 2017, @06:37AM (#559322) Homepage

          Jews know degeneracy far beyond your imagination. Their footprint in the porn industry notwithstanding, you should check up on their rituals. The orthodox engage in acts so dirty not even I am willing to repeat them here.

      • (Score: 0) by Anonymous Coward on Saturday August 26 2017, @06:23PM

        by Anonymous Coward on Saturday August 26 2017, @06:23PM (#559522)

        Stupid again. Retardation is a terrible thing.

    • (Score: 0) by Anonymous Coward on Saturday August 26 2017, @06:13AM

      by Anonymous Coward on Saturday August 26 2017, @06:13AM (#559316)

      You're thinking either of a sandbox, or Chrome OS [wikipedia.org].

    • (Score: 1, Informative) by Anonymous Coward on Saturday August 26 2017, @08:17AM

      by Anonymous Coward on Saturday August 26 2017, @08:17AM (#559346)

      http://rorymon.com/blog/?p=3768 [rorymon.com] shows a way.

    • (Score: 3, Informative) by forkazoo on Saturday August 26 2017, @08:17AM (2 children)

      by forkazoo (2561) on Saturday August 26 2017, @08:17AM (#559347)

      It won't wind up being incredibly lightweight, but you can get the UNIX domain sockets for X11 to go across the container boundary, and not incur the performance hit of a TCP connection to the host where the X Server lives.

      https://stackoverflow.com/questions/16296753/can-you-run-gui-apps-in-a-docker-container [stackoverflow.com]

      Set up a small container, wire up X, install Chrome, voila; something like the sketch below. You'll have to launch Chrome inside the container, rather than from a normal "Start Menu" entry, unless you make your own entry to do it.
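
      Roughly, assuming an X11 host and an image with Chrome already installed (the image name chrome-box is made up):

        # allow local containers to talk to the X server (host side)
        xhost +local:

        # share the X11 UNIX socket and DISPLAY with the container
        docker run -it \
          -e DISPLAY=$DISPLAY \
          -v /tmp/.X11-unix:/tmp/.X11-unix \
          chrome-box google-chrome
        # Chrome may also want --no-sandbox inside an unprivileged container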

      • (Score: 0) by Anonymous Coward on Saturday August 26 2017, @09:16AM

        by Anonymous Coward on Saturday August 26 2017, @09:16AM (#559354)

        Thank you, this will work great! Yes, it's the browser, not the OS, that I want; I don't need the OS since I already run Linux.

      • (Score: 0) by Anonymous Coward on Saturday August 26 2017, @11:42PM

        by Anonymous Coward on Saturday August 26 2017, @11:42PM (#559638)

        One way is firejail (for the general separation) with xpra (for the X11 part). You can run installers and everything with it, by means of overlays, and it will put all the new crap in one single place (best with --overlay-named=name so you can reuse it later).
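
        A sketch of that, using firejail's own flags (the overlay name is arbitrary):

          # reusable named overlay for all the crap installers drop,
          # with the X11 side isolated through xpra
          firejail --overlay-named=chrome --x11=xpra google-chrome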

  • (Score: 2, Funny) by Anonymous Coward on Saturday August 26 2017, @05:55AM (4 children)

    by Anonymous Coward on Saturday August 26 2017, @05:55AM (#559311)

    > But that doesn't mean that Docker containers are the perfect solution for every type of workload.

    Of course not, for a perfect solution, your Docker container has to be running in a virtual machine started from a client of a Xen hypervisor. Amateurs.

    • (Score: 1, Insightful) by Anonymous Coward on Saturday August 26 2017, @09:45AM (1 child)

      by Anonymous Coward on Saturday August 26 2017, @09:45AM (#559367)

      You could always use containers (zones) implemented properly, running on bare metal with no other sandwiched layers (read: VMs), using Triton DataCenter [joyent.com]. You could move to a public cloud too, if that's your heart's desire. It's open source too.
      • (Score: 2) by TheRaven on Sunday August 27 2017, @09:53AM

        by TheRaven (270) on Sunday August 27 2017, @09:53AM (#559764) Journal

        Docker doesn't use a VM on most platforms, and on Illumos it will use zones. It mostly uses virtualisation on Mac, because the main use case there is to create containers locally for deployment on Linux hosts.
        --
        sudo mod me up
    • (Score: 1, Interesting) by Anonymous Coward on Saturday August 26 2017, @04:04PM

      by Anonymous Coward on Saturday August 26 2017, @04:04PM (#559477)

      In all seriousness, I have been wondering if this is really the motivation behind containers. Costs have been cut so much that IT teams are not really allowed to buy dedicated servers anymore. However, the sysadmins and devs got used to the features that VMs provide, and nested virtualization was unstable and slow until recently, even on hardware with support for it. Therefore, a market for "VMs" that could run on VMs sprang up, and containers of various types were born.

    • (Score: 2) by zeigerpuppy on Sunday August 27 2017, @01:09AM

      by zeigerpuppy (1298) on Sunday August 27 2017, @01:09AM (#559664)

      This is, in fact, how I run Docker containers (Debian Xen Dom0 with ZFS zvols, then generally Ubuntu DomUs running Docker). But it's still a dog, so damn slow compared to native, and I think the AUFS filesystem is to blame.

      I'd like to run my containers backed by ZFS, but there's no way I trust running Docker on a Dom0; privilege escalation seems way too easy from Docker. So the dockerised apps get the most stupid filesystem stack. Also, why the fuck is it so hard to stop a Docker container, add a bind mount or change startup options, and restart it? This should be a normal operation, but in Docker you have to either commit the machine (read: use double the disk space) or destroy it and start from scratch.

      I feel like Docker was built for developers to have their revenge on sysadmins. It's flaky and slow, and it has resulted in proper install instructions being replaced by "just grab the Docker image". A well-designed and well-run Xen DomU is so much faster and more flexible than a Docker container will ever be, and integration of services can follow the usual Unix principles. Docker just makes it all more cumbersome for not much benefit.

      Anyway, if you must use Docker, here's what I've learnt: use docker-compose if you can, to help keep the configuration readable (see the sketch below); set up a dedicated data storage system (I use NFS backed by ZFS for persistent, snapshottable storage); watch your disk space and leftover containers, because those suckers will eat your storage in no time; think really carefully about mounts, privileges, restart options and exposed ports before starting that container; and have some whisky and gloves ready for all the facepalming. Seriously, this is meant to be easier than just installing some libs and tweaking a few configuration files?

      Suggestions for better filesystem stacks on top of a zvol would be appreciated!
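
      For illustration, a minimal docker-compose.yml in that spirit (service name, image, ports and paths are all hypothetical):

        version: "3"
        services:
          myapp:
            image: myorg/myapp:1.0      # whatever you actually run
            restart: unless-stopped     # explicit restart policy
            ports:
              - "8080:80"               # think before exposing ports
            volumes:
              - /tank/appdata:/data     # the bind mount now lives in a file;
                                        # edit it and run `docker-compose up -d`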

  • (Score: 0) by Anonymous Coward on Saturday August 26 2017, @07:42AM

    by Anonymous Coward on Saturday August 26 2017, @07:42AM (#559334)

    When to be needing to pad résumé. Find a software that is written in C language and uses only dependency is libc. Sure to be needing config is unique to one install of software is containing IP and password and hostsname. Write a dockerfile for the softwares. Write unique config of one install into dockerfile. Upload to github.
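
    A sketch of the Dockerfile being described, to save you the typing (binary, IP, hostname and password are all hypothetical, and baking them into the image is exactly the anti-pattern):

      FROM debian:stable-slim
      COPY myservice /usr/local/bin/myservice    # the C program; only dependency is libc
      # the config unique to one install, hardcoded for all the world to see on GitHub:
      ENV SERVICE_IP=192.0.2.10 \
          SERVICE_HOSTNAME=prod-box-01 \
          SERVICE_PASSWORD=hunter2
      CMD ["/usr/local/bin/myservice"]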

  • (Score: 3, Insightful) by loic on Saturday August 26 2017, @09:18AM (1 child)

    by loic (5844) on Saturday August 26 2017, @09:18AM (#559355)

    Most Docker containers run everything as root inside the container, which is a namespaced root. Technically, the containers run as the root user (only one root process, just nasty), and if you manage to get outside the sandbox, you're root on the host. Lovely!
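
    Easy to check for yourself on a stock setup (no userns remapping configured):

      $ docker run --rm alpine id
      uid=0(root) gid=0(root) ...
      # without user-namespace remapping, that uid 0 is the host's uid 0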

    • (Score: 2, Interesting) by higuita on Saturday August 26 2017, @04:24PM

      by higuita (2465) on Saturday August 26 2017, @04:24PM (#559485)

      That is only half true... with recent kernels and recent Docker, root inside the container can be a normal user on the host machine... but many people use an old Docker, an old kernel, or both... and then complain. For Docker, you MUST use the most recent kernel and Docker version you can get.

      But even with a recent kernel and Docker, most people still run their containers as the "fake root" user, because they do not care, or do not know how to change it.

      You can define which user everything will run as via the USER directive in the Dockerfile, or via "docker run --user" when starting it up. If you do not specify it, it will run as root, so you should ALWAYS set it to some other user (a sketch follows below). Sometimes the Dockerfile needs to be updated to change the permissions of some directories for that user: since you are not root anymore, you may not be able to write to some of them.
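
      A minimal sketch of both routes (image name, user and path are made up):

        # in the Dockerfile: create an unprivileged user and switch to it
        FROM debian:stable-slim
        RUN useradd --create-home appuser \
         && mkdir -p /data && chown appuser:appuser /data   # fix ownership up front
        USER appuser
        CMD ["/usr/local/bin/myservice"]

        # or override at start time without touching the image:
        # docker run --user 1000:1000 myimage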

      You should also drop all of Docker's capabilities in all containers and add them back only when they are really needed (sketch below). Again, most software does not need any capability, and much of what might need one can do without it after fine-tuning the Dockerfile (create the empty files and directories up front, change permissions and ownership, change the port, disable the change-user step since you are already running as that user). Only a few complex or special programs really need capabilities.
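
      That pattern, concretely (image name made up; NET_BIND_SERVICE is just the classic example):

        # start from nothing...
        docker run --cap-drop=ALL myimage
        # ...and add back single capabilities only when the app demonstrably needs them
        docker run --cap-drop=ALL --cap-add=NET_BIND_SERVICE myimage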

      So with a bad, outdated Docker setup, you have a special chroot with namespaces, running as root with all of root's power.
      With a bad but updated setup, you have a special chroot with namespaces, running as "fake root" with much of that power, via the capabilities.
      With a good setup on an updated Docker, you get a special chroot with namespaces, running as a normal user, without any power.
      With that last setup, your security is already very good. You can improve it even more with AppArmor, SELinux and the like, but updating the kernel and Docker and fine-tuning the Dockerfile / "docker run" flags are simple enough tasks that you can apply them to all your containers.

  • (Score: 2) by Hyperturtle on Saturday August 26 2017, @05:28PM (2 children)

    by Hyperturtle (2824) on Saturday August 26 2017, @05:28PM (#559502)

    How can someone who regularly uses container-based emulations such as this, actual virtual machine hosts and their accompanying VMs, and "bare metal" servers (which always seemed to carry the connotation of being 'bad' somehow -- as if an 'on-prem bare-metal server' is a bad thing, if you listen to salespeople)... how can an 'expert' not know what expectations to have with this stuff? This is really basic Better Homes and Gardens type of advice.

    I guess there is always a need for a primer when people are unfamiliar with the technology in question, but that article (I read it) is pretty basic. I would expect the FAQ to have that info as general cautions and not hard-and-fast rules, because it all depends on your system and setup. I guess they have to keep writing something to keep the ad revenue coming in.

    I am with TMB on this, though... TMB, you are not old yet, maybe just wise where it counts (or a wiseguy like me). Running services directly on a physical server is often the easiest way to get things going. Few dependencies except for one's knowledge (maybe that is a huge setback for some).

    Industrious people can flash their favorite mainstream 'router' with a custom Linux OS with so many features that it rivals what MS used to provide on the official server disks for their flagship server products. And these electronics-store routers are often faster than the hardware that used to run the same stuff 15 years ago. You don't need much to run DNS, DHCP, and a file share, or even an intranet web server for basic tasks.
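
    (To be concrete: for the DNS and DHCP part, a dnsmasq config of roughly this size is all it takes; the interface name and addresses are whatever your network uses.)

      # /etc/dnsmasq.conf -- DNS forwarding plus DHCP for a small LAN
      domain-needed
      bogus-priv
      interface=br0
      dhcp-range=192.168.1.50,192.168.1.150,12h
      dhcp-option=option:router,192.168.1.1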

    I guess the real issue is skill, or a fear of the unknown resulting in default wizard work and no physical servers. A good example is how the new experts differ from the old experts. My understanding is that modern MCSE courses generally don't cover hardware concerns or what actually runs on the server itself, and I have met server administrators who have never actually seen the hardware they work with -- nor have any understanding of the underlying core aspects of Windows. And the thing that gets me is that for day-to-day stuff, they don't need to know those details. I can't tell if that is good or bad.

    I almost think that the actual skill to maintain a server, which a lot of the staff here has -- the hardware and software maintenance and ability to open the chassis and change the oil and what not... not just 'app' support on a VM--these skills are a dying art.

    'Bare-metal servers' are going to be like the muscle cars of IT -- before long, only enthusiasts and people who really know what they are doing will be able to work on them effectively. I think that process is already taking place at a lot of small/medium-sized businesses (which isn't necessarily bad... but it greatly limits the exposure a new IT person can get to how any of this stuff actually works -- and that's bad. Most people can't change their car's oil, either; not that it's hard... it's just too convenient to have someone else do it inexpensively for them.).

    I imagine a future where some arcane box with flashing lights will have an issue, and some IT veteran will be called in as tier 3 at a high hourly rate, arrive on-site, and within 5 minutes determine that the solution is to eject the diskette so the box can boot again. Someone with access to The Server had been marveling over the floppy disk drive, laughing about how it was still used by some legacy application hosted ON the server that couldn't be clouded for some reason; then they forgot they'd played with it and rebooted the machine months later, only for the server to hit an invalid media error and completely panic the one guy who does support there, because he'd never seen anything like it before and couldn't copy and paste the error into Google.

    • (Score: 2) by Azuma Hazuki on Sunday August 27 2017, @03:28AM (1 child)

      by Azuma Hazuki (5086) on Sunday August 27 2017, @03:28AM (#559693) Journal

      Fine by me. I do everything either bare-metal or virtualized but have never touched a Docker or other container format in my life. It has its use cases, but not for the stuff I'm doing, and I'm always suspicious of anything hyped-up and buzzword-y.

      --
      I am "that girl" your mother warned you about...
      • (Score: 2) by Hyperturtle on Sunday August 27 2017, @05:08PM

        by Hyperturtle (2824) on Sunday August 27 2017, @05:08PM (#559876)

        Yeah, it can get pretty silly with what is promised and what it takes to make something convenient and useful.

        And often -- the physical server does it better anyway. Local storage, full access to the processors and RAM, direct network access. It's like a customer-centric solution that isn't carved up into virtual time slices for rent in a multi-tenant topology!

        I guess I am just not profitable to some places when I recommend something customer-focused. Of course, there are merits to virtualization... redundancy and recovery over geographic distances can be made a lot easier. But many places just complain that AWS or Azure is down when those things are down... few organizations are realizing the promise of all this complicated topology.

        I set up an OpenStack environment at home to play with it, and the amount of hardware needed to run something just as fast as a plain physical server, without all the abstraction... it is expensive to learn something new like that, given the hardware and time needed to get things working the way they used to.

        You need much better gear to handle it all, and then you're expected to add more VMs on top of it, so the design itself has to be suited to growth rather than the status quo.

        That makes those multi-tenant topologies much more economical for those hosting them. Otherwise it is probably cheaper to rent, or to stick with your own physical boxes, unless you like a challenge and your employer trusts you to pull it off. (All the stuff that goes into OpenStack generally requires a broad application of skills -- Linux, networking, security and Windows, all in the same guy. Or... when it breaks, those four specialists blame each other, and the business decides to outsource it to experts anyway. At least Linux and Windows servers are usually supportable without too much difficulty; the real barrier at most places is finding a network guy, but that's often set-it-and-forget-it for small companies. Servers tend to require more hands-on.)

  • (Score: 2) by wonkey_monkey on Saturday August 26 2017, @06:30PM (1 child)

    by wonkey_monkey (279) on Saturday August 26 2017, @06:30PM (#559523) Homepage

    If you really want to understand how Docker is impacting the channel

    The what?

    --
    systemd is Roko's Basilisk
    • (Score: 2) by fritsd on Saturday August 26 2017, @07:48PM

      by fritsd (4586) on Saturday August 26 2017, @07:48PM (#559555) Journal

      When not to use Dover: after Brexit, the Channel will be very busy what with all the new toll bureaucracy.

  • (Score: 2) by darkfeline on Monday August 28 2017, @05:03PM

    by darkfeline (1030) on Monday August 28 2017, @05:03PM (#560324) Homepage

    For all you guys who shit on systemd: systemd-nspawn is much easier to use than Docker.

    You know all of those kitchen-sink things you complained about (systemd replacing su, systemd implementing DNS, etc.)? Those were for supporting the container use case, not for host usage, although you can certainly use them on the host if you want to, say, justify your whining about systemd with something other than ad hominem.

    As a bonus, you don't have to worry about this bug: https://github.com/moby/moby/issues/6119 [github.com]
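
    A minimal sketch of the nspawn workflow (the path and distro are just examples):

      # build a minimal Debian tree, then run it as a container
      sudo debootstrap stable /var/lib/machines/testbox
      sudo systemd-nspawn -D /var/lib/machines/testbox     # chroot-like shell inside it
      sudo systemd-nspawn -bD /var/lib/machines/testbox    # or boot it with its own init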

    --
    Join the SDF Public Access UNIX System today!