I don't think it's SSHd in particular that he has a problem with. Docker maintainers do not seem to like the idea of people treating Docker containers as lightweight VMs. They seem to want images restricted to running as few processes as possible - ideally one. I think they view it as too hard to scale once you start going down the path of running multiple services in an image - I've seen the "pets vs. cattle"[1] analogy used to explain why.
There is an image by Phusion called baseimage-docker[2] which adds SSHd, init, syslog, and cron in an attempt to make Docker containers more like lightweight VMs. But in the #docker channel, I've seen people have issues with it. For example, one person had some /etc/init.d scripts that wouldn't start up (other ones started up fine). Turns out that one of the signals that init was waiting on to start that script was never getting sent (I think it was networking coming online?), and that was just a side effect of how Docker works that couldn't easily be fixed. The Docker maintainers in the channel discouraged using this image for these reasons.
Baseimage-docker maintainer here. I had no idea that the Docker maintainers are actively discouraging this image. I also haven't seen any bug reports about these issues. I would most definitely be interested in collaborating with the Docker maintainers and finding a solution for these problems. If any Docker maintainers are reading this and are interested, email me at hongli@phusion.nl.
But about that pets vs. cattle analogy: there are plenty of SSH-based tools for large-scale management of servers. And ssh is still the best file-copy tool available on any system (rsync doesn't change that, since rsync is only good when it runs over ssh).
The article points out how you can still easily use ssh to maintain the containers: ssh to the host and use "nsenter", and you get the flexibility of ssh without having to run an sshd in every container.
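As a rough sketch of that approach, assuming a hypothetical container named "web": look up the container's main PID with docker inspect, then enter its namespaces with nsenter from the host.

```shell
# On the Docker host, not inside a container.
# "web" is a hypothetical container name used for illustration.
PID=$(docker inspect --format '{{.State.Pid}}' web)

# Enter the container's mount, UTS, IPC, network, and PID namespaces
# and get a shell there - no sshd inside the container required.
sudo nsenter --target "$PID" --mount --uts --ipc --net --pid -- /bin/bash
```

This is the same idea tools like docker-enter wrap up; the flags just name which namespaces to join.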
But I think using ssh for large-scale management of servers largely misses the main benefits of Docker. One of the nice ways of using Docker is to replace tools that use ssh to try to replicate VM state in various ways that are often hard to make 100% reproducible, with something that is entirely reproducible because it ships the state exactly.
When you build a docker image, push it to an index, and deploy it on your test system, and then later on the production system, you know the container remains identical.
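A minimal sketch of that workflow, with a hypothetical image name "myorg/app" and version tag:

```shell
# Build once and tag with an immutable version.
docker build -t myorg/app:1.0.3 .
docker push myorg/app:1.0.3

# On the test system, and later on production, pull and run
# the exact same image - the bits deployed are identical.
docker pull myorg/app:1.0.3
docker run -d myorg/app:1.0.3
```

The point is that nothing is rebuilt or re-provisioned per environment; the same tagged image moves through the pipeline unchanged.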
> When you build a docker image, push it to an index, and deploy it on your test system, and then later on the production system, you know the container remains identical.
I'm fairly sure every cloud provider worth talking about has an image-based deployment system. Even an ESXi box, the sort of which exists in many small offices, does.
If you use that exclusively, then those ssh-based server management tools are moot.
But most of these solutions are far more heavy-duty. I've shipped enough VM images all over the place to learn to hate the overheads compared to the very lean-ness of a typical Docker deploy.
Yes, but I bet the majority of people using Docker don't need/want to scale the majority of their projects to 100+ servers, which is what the Docker maintainers focus on.
It is a good/valid focus for them. However, it is not for everyone.
I don't scale out to a hundred servers and I still use Docker with separate processes because scaling is not the only advantage to having stateless single-purpose containers.
I run several processes on each VM, each in their own container, and use Docker links, which expose environment variables in each container describing ip/port information for dependent services. It was really easy and works great.
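A sketch of the links setup described above, using hypothetical container and image names:

```shell
# Start a database container named "db".
docker run -d --name db postgres

# Link it into the app container under the alias "db". Docker injects
# environment variables such as DB_PORT_5432_TCP_ADDR and
# DB_PORT_5432_TCP_PORT, which the app reads to locate the service.
docker run -d --name app --link db:db myorg/app
```

The app never hard-codes an address; it just reads the injected variables at startup.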
Furthermore, you can easily set all of the containers to share the same networking namespace so they can all just listen locally, if you want a turnkey solution. The pretty trivial issue of single-host service discovery is not a very strong argument against the benefits of single-process containers, in my experience.
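Sharing a network namespace looks roughly like this ("app" and "myorg/worker" are hypothetical names):

```shell
# Run the first container normally.
docker run -d --name app myorg/app

# Run a second container inside the first one's network namespace,
# so the two processes can talk to each other over localhost.
docker run -d --net=container:app myorg/worker
```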
1) Multiple docker containers to spin up and manage.
2) Multiple health checks.
3) Set additional flags/do additional config for Docker.
The fact people consistently say "Eh, this is a non-issue" is great. It means you are much luckier and more skilled than I since you can manage all of that with 0 additional effort.
For me, all of this is effort I don't need to expend.
Pets vs. cattle is about customization vs. automatic setup; I don't understand how it's relevant to whether the containers have no SSH or identical SSH.
Pets vs. cattle was the last battle. Now there's immutable cattle vs. mutable cattle; Docker is trying to promote immutable infrastructure and microservices and eliminating ssh is one aspect of that.
[1]: https://groups.google.com/forum/#!msg/docker-user/pNaBYJkmnA...
[2]: http://phusion.github.io/baseimage-docker/