When Not to Use Docker: Understanding the Limitations of Containers
posted by Fnord666 on Saturday August 26 2017, @04:16AM   Printer-friendly
from the not-just-for-clothing-any-more dept.

Submitted via IRC for TheMightyBuzzard

Docker is a great tool. But Docker containers are not a cure-all. If you really want to understand how Docker is impacting the channel, you have to understand its limitations.

Docker containers have become massively popular over the past several years because they start faster, scale more easily and consume fewer resources than virtual machines.
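To make the start-up claim concrete, here is a minimal sketch using the Python Docker SDK (`pip install docker`); it assumes a local Docker daemon and the stock `alpine` image, and simply times a container from creation to exit:

    import time

    import docker  # Python Docker SDK: pip install docker

    client = docker.from_env()  # connect to the local Docker daemon

    start = time.monotonic()
    # Launch a throwaway container that runs `true` and exits immediately.
    container = client.containers.run("alpine", "true", detach=True)
    container.wait()  # block until the container has exited
    elapsed = time.monotonic() - start

    print(f"create-to-exit: {elapsed:.2f}s")  # often under a second with a warm image cache
    container.remove()

A full virtual machine has to boot a kernel and an init system before it can do anything, which is why containers win on start-up latency.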

But that doesn't mean that Docker containers are the perfect solution for every type of workload. Here are examples of things Docker can't do or can't do well:

  • Run applications as fast as a bare-metal server.
  • Provide cross-platform compatibility.
  • Run applications with graphical interfaces (see the sketch after this list).
  • Solve all your security problems.
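To make the graphical-interface point concrete, here is a minimal sketch of the usual workaround: bind-mounting the host's X11 socket into the container. It assumes a Linux host running an X server on display `:0`, the Python Docker SDK, and a hypothetical `gui-app-image` with `xclock` installed; on macOS or Windows there is no such socket to share, which ties back to the cross-platform point as well:

    import docker  # Python Docker SDK: pip install docker

    client = docker.from_env()

    # Docker has no native display support; the common Linux-only workaround
    # is to share the host's X11 socket and DISPLAY with the container.
    container = client.containers.run(
        "gui-app-image",  # hypothetical image with xclock installed
        "xclock",
        environment={"DISPLAY": ":0"},  # assumes the host X server is on :0
        volumes={"/tmp/.X11-unix": {"bind": "/tmp/.X11-unix", "mode": "rw"}},
        detach=True,
    )

Depending on the host's X server configuration, you may also need to loosen access control (e.g. `xhost +local:`) before the containerized client is allowed to connect.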

I kinda miss just running services directly on physical servers. Guess I'm getting old.

Source: http://thevarguy.com/open-source/when-not-use-docker-understanding-limitations-containers


Original Submission

 
This discussion has been archived. No new comments can be posted.
  • (Score: 2) by Hyperturtle (2824) on Sunday August 27 2017, @05:08PM (#559876)

    Yeah, it can get pretty silly with what is promised and what it takes to make something convenient and useful.

    And often the physical server does it better anyway: local storage, full access to the processors and RAM, direct network access. It's like a customer-centric solution that isn't carved up into virtual time slices for rent in a multi-tenant topology!

    I guess I am just not profitable to some places for recommending something customer-focused. Of course, there are merits to virtualization... redundancy and recovery over geographic distances can be made a lot easier. But many places just complain when AWS or Azure is down... few organizations are realizing the promise of all this complicated topology.

    I set up an OpenStack environment at home to play with it, and the amount of hardware needed to run something just as fast as a plain physical server, without all the abstraction, was sobering... it is expensive to learn something new like that, given the hardware and time needed just to get things working the way they used to.

    You need much better gear to handle it all, and then you're expected to add more VMs on top of it, so the design itself has to be suited to growth rather than the status quo.

    That makes those multi-tenant topologies much more economical for those hosting them. Otherwise it is probably cheaper to rent, or to stick with your own physical boxes, unless you like a challenge and your employer trusts you can do it. (All the stuff that goes into OpenStack generally requires a broad application of skills: Linux, network, security, and Windows, all in the same guy. Or, when it breaks, those four specialists blame each other... and the business decides to outsource it to experts anyway. At least Linux and Windows servers are usually supportable without too much difficulty. The real barrier at most places is finding a network guy, but those roles are often set-it-and-forget-it at small companies; servers tend to require more hands-on work.)
