One thing missing from the article is the "load balancer" aspect of the orchestrators. I believe K8S favours big cloud providers tremendously due to load balancer lock-in, or handcuffs, or whatever else you want to call it. For Kubernetes there's MetalLB, which is still a work in progress, but I personally haven't had confident success running it on my own servers.
K8S setup complexity is already highlighted, but I'd add upgrade and maintenance pain to the mix as well.
I've only recently started with Nomad, and while it's so simple, I question myself at every step because, coming from K8S, I've gotten used to complexity and over-configuration. Nomad allows you to bring your own load balancer, and I'm experimenting with Traefik & Nginx now.
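As a taste of that simplicity, running Traefik as the edge load balancer under Nomad looks roughly like this. This is a minimal sketch, not my exact setup: the datacenter name and image tag are made up, and a real job would add service discovery (e.g. Consul) config.

    # Minimal sketch: Traefik as the edge LB on a static host port.
    # "dc1" and the image tag are examples only.
    job "traefik" {
      datacenters = ["dc1"]
      type        = "service"

      group "traefik" {
        network {
          port "http" {
            static = 80
          }
        }

        task "traefik" {
          driver = "docker"
          config {
            image        = "traefik:v2.10"
            network_mode = "host"
            ports        = ["http"]
          }
        }
      }
    }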
> I believe K8S favours big cloud providers tremendously due to load balancer lock-in, or handcuffs, or whatever else you want to call it. For Kubernetes there's MetalLB, which is still a work in progress, but I personally haven't had confident success running it on my own servers.
> Nomad allows you to bring your own load balancer
I think it's a widely held myth that you can't easily self-host k8s due to the lack of a cloud load balancer. You can use your own load balancers with Kubernetes; it isn't that hard. I have production k8s clusters running on on-prem hardware right now using HAProxy.
I use kubeadm when setting up my cluster and point the control plane endpoint to the HAProxy load balancer.
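For anyone curious what that looks like, here's a minimal sketch (hostnames and IPs are made up): HAProxy does plain TCP passthrough to the kube-apiserver on each control-plane node, and kubeadm is pointed at it.

    # /etc/haproxy/haproxy.cfg (fragment); control-plane IPs are hypothetical
    frontend k8s-api
        bind *:6443
        mode tcp
        default_backend k8s-control-plane

    backend k8s-control-plane
        mode tcp
        balance roundrobin
        server cp1 10.0.0.11:6443 check
        server cp2 10.0.0.12:6443 check
        server cp3 10.0.0.13:6443 check

    # then, on the first control-plane node:
    kubeadm init --control-plane-endpoint "k8s-api.example.internal:6443" --upload-certs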
We ran a large number of services on a large number of k8s clusters handling 1M+ QPS.
What worked for us was to use k8s strictly for deployment/container orchestration: we have our own Envoy-based network layer on top of k8s and don't use Services, Ingress, or Ingress Controllers.
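I can't speak to the parent's exact setup, but the general pattern is Envoy routing straight to pod IPs, bypassing kube-proxy and Service VIPs entirely. A static sketch (in practice the endpoints would be fed in dynamically via EDS from a watcher on the pod API; all IPs here are made up):

    # Minimal Envoy v3 sketch: HTTP listener routing directly to pod IPs.
    static_resources:
      listeners:
      - name: ingress
        address:
          socket_address: { address: 0.0.0.0, port_value: 8080 }
        filter_chains:
        - filters:
          - name: envoy.filters.network.http_connection_manager
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
              stat_prefix: ingress_http
              http_filters:
              - name: envoy.filters.http.router
                typed_config:
                  "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
              route_config:
                virtual_hosts:
                - name: app
                  domains: ["*"]
                  routes:
                  - match: { prefix: "/" }
                    route: { cluster: app_pods }
      clusters:
      - name: app_pods
        type: STATIC
        connect_timeout: 1s
        load_assignment:
          cluster_name: app_pods
          endpoints:
          - lb_endpoints:
            - endpoint:
                address:
                  socket_address: { address: 10.244.1.5, port_value: 8080 }  # pod IP, hypothetical
            - endpoint:
                address:
                  socket_address: { address: 10.244.2.7, port_value: 8080 }  # pod IP, hypothetical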
To run k8s on bare-metal server clusters, MetalLB is essentially the only option for bringing user traffic into the cluster's NodePort/LoadBalancer services via externally exposed IP address(es). I wasn't talking about k8s-internal load balancers and reverse proxies.
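For reference, the modern CRD-based MetalLB config (0.13+) is just an address pool plus an advertisement; the address range here is hypothetical:

    # Hypothetical pool of LAN IPs that MetalLB may assign to
    # Services of type LoadBalancer
    apiVersion: metallb.io/v1beta1
    kind: IPAddressPool
    metadata:
      name: default-pool
      namespace: metallb-system
    spec:
      addresses:
      - 192.168.1.240-192.168.1.250
    ---
    # Announce those IPs on the local L2 segment via ARP/NDP
    apiVersion: metallb.io/v1beta1
    kind: L2Advertisement
    metadata:
      name: default
      namespace: metallb-system
    spec:
      ipAddressPools:
      - default-pool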
The LoadBalancer service type is very commonly used by ingress controllers. You can get things working with hostNetwork or a custom NodePort range instead, but that's a lot rarer than doing it via a load balancer.
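For illustration, the hostNetwork variant looks roughly like this. It's a trimmed sketch: a real ingress-nginx deployment needs RBAC, controller args, probes, etc., and the image tag is just an example.

    # Run the ingress controller on every node, bound directly to the
    # node's ports 80/443; no Service of type LoadBalancer needed
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: ingress-nginx
      namespace: ingress-nginx
    spec:
      selector:
        matchLabels:
          app: ingress-nginx
      template:
        metadata:
          labels:
            app: ingress-nginx
        spec:
          hostNetwork: true
          dnsPolicy: ClusterFirstWithHostNet  # keep cluster DNS working under hostNetwork
          containers:
          - name: controller
            image: registry.k8s.io/ingress-nginx/controller:v1.9.4  # example tag
            ports:
            - containerPort: 80
            - containerPort: 443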
A third option would be to expose pods directly on the network. EKS does this by default (pods get routable VPC IPs from the AWS VPC CNI), but in my experience it's quite rare for people to leverage it.
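On EKS, leveraging it usually means pointing the load balancer at pod IPs directly, e.g. via the AWS Load Balancer Controller's target-type annotation. A sketch, with hypothetical names:

    # Ingress that registers pod IPs (rather than NodePorts) as ALB
    # targets, via the AWS Load Balancer Controller
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: app  # hypothetical
      annotations:
        alb.ingress.kubernetes.io/target-type: ip
        alb.ingress.kubernetes.io/scheme: internet-facing
    spec:
      ingressClassName: alb
      rules:
      - http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app  # hypothetical
                port:
                  number: 80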