hbogert ·92 days ago
His take on text interpolation is spot on. I'm an SWE turned SRE because, as a developer, I really enjoyed using K8s. But being a full-time SRE where I work just means YAML juggling. It's mind-numbing that everybody is okay with this; this really is our domain's assembly era, albeit with whitespace, colons, dashes, and brackets.
I've found solace in CUE, which I just run locally to catch all the small errors everybody makes on a daily basis. Putting CUE validation in our pipeline is too confronting for others, yet they're constantly making up best practices ad hoc during reviews that could easily have been codified with CUE (or some other serious config language).
Great write-up on the core fundamentals; saved this to share with engineers who are new to K8s and need a quick primer.
Atreiden ·88 days ago
Re: this piece:
> Given the Controller pattern, why isn't there support for "Cloud Native" architectures?
> I would like to have a ReplicaSet which scales the replicas based on some simple calculation for queue depth (eg: queue depth / 16 = # replicas)
> Defining interfaces for these types of events (queue depth, open connections, response latency) would be great
> Basically, Horizontal Pod Autoscaler but with sensors which are not just "CPU"
HPAs are actually still what you want here: you can configure an HPA to scale on custom metrics. If you run Prometheus (or a similar collector), you can define the metric you want (e.g. queue depth) and the autoscaler will make scaling decisions with it in mind.
Resources:
https://kubernetes.io/docs/tasks/run-application/horizontal-...
https://learnk8s.io/autoscaling-apps-kubernetes
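For the queue-depth case quoted above, an `autoscaling/v2` HPA can target an external metric directly. A minimal sketch, assuming a Prometheus Adapter (or similar) already exposes a `queue_depth` metric through the external metrics API; the workload and metric names here are illustrative:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: queue-worker          # hypothetical workload name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: queue-worker
  minReplicas: 1
  maxReplicas: 50
  metrics:
    - type: External
      external:
        metric:
          name: queue_depth   # assumed to be served by your metrics adapter
        target:
          type: AverageValue
          averageValue: "16"  # aim for ~16 queued items per replica
```

With an `AverageValue` target, the controller scales toward roughly total metric / 16 replicas, which is exactly the "queue depth / 16 = # replicas" calculation from the quote.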
AcerbicZero ·88 days ago
This was a solid write-up. I've been using K8s (intermittently) for about five years now, and I still spend an inordinate amount of time looking things up and trying to translate its nonsense naming conventions into something understandable. I can think of 20 or so projects that would have run great on K8s, and I can think of 0 projects that were running on K8s which worked well.
Eventually, after seeing the wrong tool used for the wrong job time and time again, I came around to seeing K8s as the latest iteration of time-sharing on a mainframe, but this time with YAML and lots of extra steps.
jauntywundrkind ·93 days ago
> Why are the storage and networking implementations "out of tree" (CNI / CSI)? Given the above question, why is there explicit support for Cloud providers?
> eg: LoadBalancer supports AWS/GCP/Azure/..
Kubernetes has been pruning vendor-specific code for a while now, moving it out of tree. The upcoming 1.31 release will drop a lot of existing, already-deprecated support for AWS and others from Kubernetes proper. https://github.com/kubernetes/enhancements/blob/master/keps/...
There's a plan to make this non-disruptive to users, but I haven't followed it closely (I don't use these providers anyhow).
> Why are we generating a structured language (YAML), with a computer, by manually adding spaces to make the syntax valid? There should be no intermediate text-template representation like this one.
Helm is indeed a wild world. It's also worth noting that Kubernetes is pushing toward neutrality here; Helm has never been an official tool, but Kustomize is built into kubectl & is being removed. https://github.com/orgs/kubernetes/projects/183/views/1?filt...
There's a variety of smart, awesome options out there. The first place I worked at that went to Kube used jsonnet (which, alas, went unmaintained). Folks love CUE and Dhall and others. But to my knowledge there's no massive base of packaged software like the one that exists for Helm. Two examples: https://github.com/bitnami/charts/tree/main/bitnami and https://github.com/onedr0p/home-ops . It'd be lovely to see more work outside Helm.
Thanks Sysdig for your 1.31 write-up: https://sysdig.com/blog/whats-new-kubernetes-1-31/
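The "manually adding spaces" complaint is concrete in Helm templates, where text-level helpers like `nindent` exist purely to keep the generated YAML's indentation valid. A small illustrative fragment; the `.Values` keys are made up:

```yaml
# deployment.yaml (Helm template): indentation is managed by hand,
# because the template engine operates on text, not on YAML structure.
spec:
  template:
    metadata:
      labels:
        {{- toYaml .Values.podLabels | nindent 8 }}
    spec:
      containers:
        - name: app
          env:
            {{- toYaml .Values.extraEnv | nindent 12 }}
```

Pass the wrong number to `nindent` and the chart renders invalid or subtly wrong YAML. A structure-aware tool (CUE, jsonnet, Kustomize overlays) composes objects instead, so this class of mistake can't occur.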
cyberax ·88 days ago
It's an attempt to replicate the old "hard exterior, gooey interior" model of corporate networks.
I would very much prefer if K8s used public routable IPv6 for traffic delivery, and then simply provided an authenticated overlay on top of it.