
SoylentNews is people

posted by Fnord666 on Saturday August 26 2017, @04:16AM
from the not-just-for-clothing-any-more dept.

Submitted via IRC for TheMightyBuzzard

Docker is a great tool. But Docker containers are not a cure-all. If you really want to understand how Docker is impacting the channel, you have to understand its limitations.

Docker containers have become massively popular over the past several years because they start faster, scale more easily and consume fewer resources than virtual machines.

But that doesn't mean that Docker containers are the perfect solution for every type of workload. Here are examples of things Docker can't do or can't do well:

  • Run applications as fast as a bare-metal server.
  • Provide cross-platform compatibility.
  • Run applications with graphical interfaces.
  • Solve all your security problems.
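As an aside on the GUI limitation: on a Linux host running X11 it can sometimes be worked around by sharing the host's X socket with the container. A hedged sketch, where `some-image` and `xclock` are placeholder names, not anything from the article:

```shell
# Hypothetical sketch: run a GUI app from a container by sharing the host's X socket.
# Assumes a Linux host running X11; "some-image" and "xclock" are placeholders.

# Allow local connections (including containers) to the X server.
# Note: this loosens X access control.
xhost +local:

# Pass the host display and bind-mount the X11 socket into the container.
docker run --rm \
  -e DISPLAY="$DISPLAY" \
  -v /tmp/.X11-unix:/tmp/.X11-unix \
  some-image xclock
```

This only illustrates that "can't run graphical applications" means "not without plumbing"; it does nothing for the cross-platform or bare-metal-speed points.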

I kinda miss just running services directly on physical servers. Guess I'm getting old.

Source: http://thevarguy.com/open-source/when-not-use-docker-understanding-limitations-containers


Original Submission

 
This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 2, Funny) by Anonymous Coward on Saturday August 26 2017, @05:55AM (4 children)

    by Anonymous Coward on Saturday August 26 2017, @05:55AM (#559311)

    > But that doesn't mean that Docker containers are the perfect solution for every type of workload.

    Of course not, for a perfect solution, your Docker container has to be running in a virtual machine started from a client of a Xen hypervisor. Amateurs.

  • (Score: 1, Insightful) by Anonymous Coward on Saturday August 26 2017, @09:45AM (1 child)

    by Anonymous Coward on Saturday August 26 2017, @09:45AM (#559367)
    You could always use containers (zones) implemented properly, running on bare metal with no sandwiched layers (read: VMs), using Triton DataCenter [joyent.com]. You could move to a public cloud too, if that's your heart's desire. It's open source, too.
    • (Score: 2) by TheRaven on Sunday August 27 2017, @09:53AM

      by TheRaven (270) on Sunday August 27 2017, @09:53AM (#559764) Journal
      Docker doesn't use a VM on most platforms, and on Illumos it will use zones. It mostly uses virtualisation on the Mac, because the main use case there is building containers locally for deployment on Linux hosts.
      --
      sudo mod me up
  • (Score: 1, Interesting) by Anonymous Coward on Saturday August 26 2017, @04:04PM

    by Anonymous Coward on Saturday August 26 2017, @04:04PM (#559477)

    In all seriousness, I have been wondering if this is really the motivation behind containers. Costs have been cut so much that teams are not really allowed to buy dedicated servers anymore. However, sysadmins and devs got used to the features VMs provided, and nested virtualization was unstable and slow until recently, even on hardware with support. So a market for "VMs" that could run on VMs sprang up, and containers of various types were born.

  • (Score: 2) by zeigerpuppy on Sunday August 27 2017, @01:09AM

    by zeigerpuppy (1298) on Sunday August 27 2017, @01:09AM (#559664)

    This is, in fact, how I run Docker containers (Debian Xen Dom0 with ZFS zvols, then generally Ubuntu DomUs running Docker).
    But it's still a dog, damn slow compared to native, and I think the AUFS filesystem is to blame.
    I'd like to run my containers backed by ZFS, but there's no way I trust running Docker on a Dom0; privilege escalation from Docker seems way too easy. So the dockerised apps get the most stupid filesystem stack. Also, why the fuck is it so hard to stop a Docker container, add a bind mount or change startup options, and restart it? This should be a normal operation, but instead in Docker you either have to commit the container (read: use double the disk space) or destroy it and start from scratch.
    I feel like Docker was built for developers to have their revenge on sysadmins. It's flaky and slow, and it has resulted in proper install instructions being replaced by "just grab the Docker image". A well-designed and well-run Xen DomU is so much faster and more flexible than a Docker container will ever be, and integration of services can follow the usual Unix principles. Docker just makes it all more cumbersome for not much benefit.
    Anyway, if you must use Docker, here's what I've learnt:
      • use docker-compose if you can, to help keep the configuration readable;
      • set up a dedicated data storage system (I use NFS backed by ZFS for persistent, snapshottable storage);
      • watch your disk space and leftover containers; those suckers will eat your storage in no time;
      • think really carefully about mounts, privileges, restart options, and exposed ports before starting that container;
      • and have some whisky and gloves ready for all the facepalming.
    Seriously, this is meant to be easier than just installing some libs and tweaking a few configuration files?
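    For what it's worth, the docker-compose advice above can look something like this. Everything here is hypothetical (service name, image, paths, and port are placeholders), but it shows the things the comment says to decide up front: restart policy, exposed ports, and persistent mounts:

```yaml
# Hypothetical docker-compose.yml sketch; image, paths and port are placeholders.
version: "3"
services:
  webapp:
    image: example/webapp:1.0           # placeholder image
    restart: unless-stopped             # pick a restart policy before first start
    ports:
      - "8080:80"                       # expose only what you need
    volumes:
      - /tank/webapp-data:/var/lib/webapp   # persistent data on NFS/ZFS-backed storage
```

    Keeping all of this in one file at least means "destroy it and start from scratch" is a `docker-compose up -d` rather than retyping a long `docker run` line.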
    Suggestions of better filesystem stacks on top of a zvol would be appreciated!