For the last 10 years my company (not an IT company) has built about 2,000 Linux machines, all based on a common preseed file and Ubuntu Server, which installs a home-grown auditing tool, basic configuration, and an internal apt repository of about 200 home-spun and other useful debs. About 1,000 of these machines are still on the network somewhere in the world (none in Antarctica, at least no permanent installs). We have zero professional Linux administrators, just people who dabble.
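For the curious, the heart of such a build can be quite small. A minimal preseed fragment along these lines points the installer at an internal mirror and pulls in site tooling (the hostnames and package names below are placeholders, not our actual setup):

```
# Hypothetical preseed fragment - hostnames and package names are placeholders
d-i mirror/country string manual
d-i mirror/http/hostname string apt.internal.example.com
d-i mirror/http/directory string /ubuntu
# Extra packages pulled from the internal repository
d-i pkgsel/include string site-audit-agent site-base-config
# Post-install hook, run inside the freshly installed system
d-i preseed/late_command string in-target /usr/bin/site-firstboot-setup
```

The late_command hook is where things like auditing registration typically get bootstrapped.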
However, in addition to those we have at least two dozen RPM-based machines, which lack the auditing and management; they just get installed and forgotten about. These are typically for horrible purposes like running Oracle, but we also have a fair number of Xen nodes. I was going to build a yum repository for these machines, hopefully aligning them with some of the tools we already have (things like auto-registering into Nagios).
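Setting that up is not much work: on the repo host you drop the RPMs into a directory, run `createrepo` over it to generate the metadata, and serve it over HTTP. Each client then just needs a .repo file, roughly like this (URL is a placeholder):

```
# /etc/yum.repos.d/internal.repo - hypothetical internal URL
[internal]
name=Internal packages
baseurl=http://repo.internal.example.com/internal/el7/$basearch/
enabled=1
# gpgcheck=1 plus a signed repo is strongly preferable in practice
gpgcheck=0
```

The same repo definition works unchanged under DNF, which eases the yum-to-DNF transition somewhat.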
Being an old fart I consider yum to be quite new compared with apt, so I was surprised to see that it is being replaced with DNF [wikipedia.org]. There's also a push from the youngling developers to run everything in Docker [wikipedia.org] on something like Red Hat Atomic [projectatomic.io] or Ubuntu Core [ubuntu.com] - which are Red Hat and Ubuntu in name only, and lack any traditional package tools.
Given that the hipster millennial agile cupcakes are the future, is there a future in old-fashioned RPMs or debs distributed by yum/dnf and apt, or will the future be "snappy [ubuntu.com]"? How have you managed to cope with the move to a containerised environment? Or do you think it's all a fad and we'll swiftly move back to traditional metal-OS-application (rather than metal-OS-container-VM-OS-container-OS-application)?