On my personal computer and projects I always use Podman. There is even a fancy web app if people desperately want a cool icon on their menu bar, though it pales in comparison to Docker Desktop in features. (For instance, it's not able to search for images, whereas Docker Desktop can.)
I do not miss Docker at all, considering that I can copy-paste almost every docker invocation I see online and have it run flawlessly with Podman. Unfortunately, my workplace will probably never even consider trying out Podman as a replacement for Docker. I wonder if someone here has a nice anecdote of using Podman successfully at their workplace.
We are now on the Podman train. We used Docker for a while on some parts of our services. We did a thorough comparison of Podman and Docker when it was time to move over all the rest of our legacy-deployed services. Podman won out on many technical, subjective, and future-looking topics. Feels good, everybody is on board here.
Does podman run containers, or build them? As far as I understand it, Docker Desktop does 2 or 3 different things, and I haven't managed to untangle that yet because it hasn't fully broken my workflow yet. It's getting more and more tempting to remove it, but I need something for my weird windows+wsl setup.
It does both… in the same sense that docker (the program/tool) does. That is: neither is a container runtime (such as containerd, which docker uses, or runc and crun, which are the options typically used with podman); rather, both are container-management tools that control a container runtime. So you would indeed use podman to create a container just like you would with docker.
As for building images, buildah is the tool most used in the podman community for that (podman's own `build` command uses buildah under the hood). And yes, both podman and docker can handle Containerfiles (what is/was called "Dockerfiles" in the docker world).
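To make that concrete, here's a minimal (made-up) Containerfile; the exact same file builds with either tool:

```dockerfile
# Containerfile — works unchanged with both tools:
#   docker build -t demo .
#   podman build -t demo .
FROM docker.io/library/alpine:3.19
RUN apk add --no-cache curl
CMD ["curl", "--version"]
```

Note the fully-qualified `docker.io/...` image reference: docker assumes Docker Hub for short names, while podman asks you to pick a registry (or consults its registry search list), so spelling it out keeps the file portable.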
> need something for my weird windows+wsl setup
oh, well, uhm, my condolences for that. Luckily I never had to use that for containers, but a quick look on the podman homepage tells me that they also offer a virtualized WSLv2-based distribution for Windows users: https://podman.io/getting-started/installation.html#windows
…And of course, there is Podman Desktop if you want something more click-UI-based than the command-line podman (I've never really tried it though, so I can't really say if it's good or not): https://podman-desktop.io/
Docker Desktop runs dockerd in WSL and adds a few things to enable working with it from Windows (e.g. installs the docker CLI on the Windows side and exposes the dockerd control socket to it). You can easily get rid of it and replace it with running dockerd in WSL on your own, or with podman-based tools.
Docker did do something smart with Docker Desktop by including wsl-vpnkit to work around brain-dead corporate VPNs that break Docker networking. Your alternative solutions don't work when AnyConnect, GlobalProtect, etc., are running.
This is only partially true. If _all_ traffic is tunneled over the VPN, then yes, you'll have this issue; but if the traffic is split-tunneled such that only interesting traffic is sent over the VPN, then you won't.
AWS Client VPN breaks it just by having ever run, even if not currently active, as it sets `sysctl net.ipv4.ip_forward=0` 'for you'.
My suspicion is that since you pay for client connections, they don't want you running a single bastion client and having your real clients connect via that. But it's annoying, and if you really wanted to do that, you only have to edit the script, or set it back on a schedule/after starting up the client.
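If you suspect this is what broke your container networking, the check is quick and read-only (Linux/WSL paths assumed):

```shell
# Read the current forwarding setting; container bridge networking needs 1.
# AWS Client VPN is known to reset this to 0.
val=$(cat /proc/sys/net/ipv4/ip_forward)
echo "net.ipv4.ip_forward = $val"

# To restore it (needs root) after a VPN client flipped it off:
#   sudo sysctl -w net.ipv4.ip_forward=1
# Or make it survive reboots via a drop-in file:
#   echo 'net.ipv4.ip_forward = 1' | sudo tee /etc/sysctl.d/99-ipforward.conf
```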
Why not decentralized? Debian (for example) can provide one or two servers with a gigabit line. That's going to be quite slow to download during every CI build. Therefore, your CI provider would have a caching proxy registry. You get fast service, upstream doesn't need a ton of bandwidth.
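A pull-through cache like that is just a daemon setting; for dockerd it's one key in `/etc/docker/daemon.json` (the mirror URL here is a placeholder):

```json
{
  "registry-mirrors": ["https://registry-mirror.example.internal"]
}
```

With that in place, pulls of Hub images hit the local mirror first and only fall through to the upstream registry on a cache miss. Podman has an equivalent via mirror entries in registries.conf.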
Docker images are often tied to CI runs. So a GitHub Action is configured to "bake" a Docker image that saves the image to GitHub Container Registry, and a CI platform is triggered to pull the image and run tests immediately thereafter. Imagine this happening on every Git commit in a certain branch.
Often, there isn't time for the image to propagate to a mirroring service before it's needed.
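The bake-and-push step described above typically looks something like this (workflow name and tags are illustrative):

```yaml
# .github/workflows/ci.yml — build on every push and push to GHCR,
# so a downstream CI job can pull the exact image for this commit.
name: bake-image
on: [push]
jobs:
  bake:
    runs-on: ubuntu-latest
    permissions:
      packages: write
      contents: read
    steps:
      - uses: actions/checkout@v4
      - uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - uses: docker/build-push-action@v5
        with:
          push: true
          tags: ghcr.io/${{ github.repository }}:${{ github.sha }}
```

Tagging by `github.sha` is what ties the image to the CI run: the downstream test job pulls exactly the image baked for that commit, so a stale mirror would defeat the purpose.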
I meant the base images. If you're pushing your own images, then you can use any service you want (and pay for it). Since the traffic doesn't need to go through the internet at all, the bandwidth cost to Dockerhub would be irrelevant.
Base images are already available on different container image hosting platforms. For instance, ubuntu is here [1] on Amazon ECR. So it's a matter of updating Dockerfiles to use them.
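Switching the base is usually a one-line Dockerfile change; e.g. pulling Ubuntu from ECR Public instead of Docker Hub (tag is illustrative):

```dockerfile
# Before: FROM ubuntu:22.04   (short name, implicitly docker.io/library/ubuntu)
# After:  the same image pulled from Amazon's public registry instead
FROM public.ecr.aws/ubuntu/ubuntu:22.04
```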
Then again, there's the question of finding image sources you can trust to be kept updated and secure. Docker Hub has a "Docker Official Image" tag for critical base images that are managed by each community.
You can always use Podman. We already have fully OSS solutions in the container space.