
One thing missing from the article is the "load balancer" aspect of the orchestrators. I believe K8S favours big cloud providers tremendously due to load balancer lock-in, or handcuffs, or whatever else you want to call it. For Kubernetes there's MetalLB, still a work in progress, but I personally haven't had confident success running it on my own servers.

K8S setup complexity is already highlighted but I'd add upgrade and maintenance pain to the mix as well.

I've only recently started with Nomad, and while it's so simple, I question myself at every step, because coming from K8S I've gotten used to complexity and over-configuration. Nomad allows you to bring your own load balancer, and I'm experimenting with Traefik and Nginx now.



> I believe K8S favours big cloud providers tremendously due to load balancer lock-in, or handcuffs, or whatever else you want to call it. For Kubernetes there's MetalLB, still a work in progress, but I personally haven't had confident success running it on my own servers.

> Nomad allows you to bring your own loadbalancer

I think it's a widely held myth that you can't easily self-host k8s due to lack of access to a cloud load balancer. You can use your own load balancers with Kubernetes; it isn't that hard. I have production k8s clusters running on on-prem hardware right now using haproxy.

I use kubeadm when setting up my cluster and point the control plane endpoint to the haproxy load balancer.
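For the curious, the haproxy piece is just a plain TCP frontend in front of the apiserver nodes. A sketch, where all hostnames and IPs are placeholders:

```
# /etc/haproxy/haproxy.cfg (fragment) -- TCP pass-through to the apiservers
frontend kube-apiserver
    bind *:6443
    mode tcp
    default_backend control-plane

backend control-plane
    mode tcp
    balance roundrobin
    option tcp-check
    server cp1 10.0.0.11:6443 check
    server cp2 10.0.0.12:6443 check
    server cp3 10.0.0.13:6443 check
```

Then something like `kubeadm init --control-plane-endpoint lb.example.internal:6443 --upload-certs` points the cluster at that frontend instead of a single apiserver.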

Then when it comes to ingress, I use the nginx ingress controller and have it set up exactly as described in the "Using a self-provisioned edge" section of the documentation here: https://kubernetes.github.io/ingress-nginx/deploy/baremetal/
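With that approach the edge just forwards 80/443 to the ingress controller's NodePorts. A sketch, where the IPs and nodePort values (30080 here) are examples, not defaults:

```
# edge haproxy fragment: forward HTTP to the ingress-nginx NodePort on each worker
frontend http
    bind *:80
    mode tcp
    default_backend ingress-http

backend ingress-http
    mode tcp
    balance roundrobin
    server worker1 10.0.0.21:30080 check
    server worker2 10.0.0.22:30080 check
```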


We ran a large number of services on a large number of k8s clusters handling 1M+ QPS.

What worked for us was to use k8s strictly for deployment/container orchestration. We have our own Envoy-based network layer on top of k8s and do not use Services, Ingress, or ingress controllers.



To run k8s on bare-metal server clusters, MetalLB is the only option for bringing user traffic into the cluster's NodePort services via externally exposed IP address(es). I wasn't talking about k8s-internal load balancers and reverse proxies.
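For what it's worth, once MetalLB does work, the configuration is small. In current versions a layer 2 setup is two custom resources; the address range below is a placeholder for your own network:

```yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.1.240-192.168.1.250   # example range, must be free on your LAN
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default
  namespace: metallb-system
spec:
  ipAddressPools:
    - default-pool
```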


This is not true at all, you can use a self-provisioned edge (load balancer) like haproxy.

See https://kubernetes.github.io/ingress-nginx/deploy/baremetal/


You were talking about cloud, now you are talking about bare metal?

Of course you have to route traffic to your cluster, but you're implying some cloud-based load balancer lock-in, which just isn't true.


I believe the OP is talking about LoadBalancer (https://kubernetes.io/docs/tasks/access-application-cluster/...), and I believe you're talking about Ingress (https://kubernetes.io/docs/concepts/services-networking/ingr...).
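To make that distinction concrete, here are both resources side by side; the names and host are placeholders. A Service of `type: LoadBalancer` is what asks the environment for an external load balancer (and sits "Pending" on bare metal unless something like MetalLB answers), while an Ingress is L7 routing handled by whatever ingress controller you run:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer      # provisioned by the cloud provider, or by MetalLB on bare metal
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
spec:
  rules:
    - host: web.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web
                port:
                  number: 80
```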


It's still an optional, rarely used service config, and it doesn't imply lock-in.


It's very commonly used by ingress controllers. You can get things working with hostNetwork or custom service port ranges, but that's a lot rarer than doing it via a load balancer.
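The hostNetwork variant looks roughly like this in an ingress controller's DaemonSet spec (a fragment, not a full manifest):

```yaml
# fragment: run the ingress controller on the node's network, binding 80/443 directly
spec:
  template:
    spec:
      hostNetwork: true
      # needed so the pod still resolves cluster DNS names while on the host network
      dnsPolicy: ClusterFirstWithHostNet
```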

A third option would be to expose pods directly on the network. EKS does this by default, but from my experience it's quite rare for people to leverage it.
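As an example of the pod-direct approach on EKS: since the VPC CNI gives pods routable VPC addresses, the AWS Load Balancer Controller can register pod IPs as targets directly. A sketch, with the host and service names as placeholders:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
  annotations:
    # send traffic straight to pod IPs instead of going through NodePorts
    alb.ingress.kubernetes.io/target-type: ip
spec:
  ingressClassName: alb
  rules:
    - host: web.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web
                port:
                  number: 80
```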


Yes, an ingress resource will typically create a load balancer if you are in the cloud, but not a service resource.





