Hi HN! I’m excited to share Holos, a Go command line tool we wrote to fill the configuration management gap in Kubernetes. Holos uses CUE to configure software distributed with Helm and Kustomize using a well-defined, type-safe language, eliminating the need to template YAML. You probably know (or are) someone who has suffered through the complexity of plain-text YAML templates and merging multiple values.yaml files together to configure software running in Kubernetes. We built Holos so we don’t have to template YAML but can still integrate software distributed with Helm and Kustomize holistically into one unified configuration.<p>At the start of the pandemic I was migrating our platform to Kubernetes from virtual machines managed by Puppet. My primary goal was to build an observability system similar to what we had when we managed Puppet at Twitter prior to the acquisition. I started building the observability system with the official prometheus-community charts [1], but quickly ran into issues where the individual charts didn’t work with each other. I was frustrated with how difficult it was to configure these charts. They weren’t well integrated, so I switched to the kube-prometheus-stack [2] umbrella chart, which attempts to solve this integration problem.<p>The umbrella chart got us further, but we quickly ran into operational challenges. Upgrading the chart introduced breaking changes we couldn’t see until they were applied, causing incidents. We needed to manage secrets securely, so we mixed ExternalSecrets into many of the charts. We decided to handle these customizations by implementing the rendered manifests pattern [3] using scripts in our CI pipeline.<p>These CI scripts got us further, but we found them costly to maintain. When we ran them locally, we had to be careful to reproduce the same context they executed with in CI. We realized we were reinventing tools to manage a hierarchy of Helm values.yaml files to inject into multiple charts.<p>We saw the value in the rendered manifests pattern but could not find an agreed-upon implementation. I’d been thinking about the comments from the <i>Why are we templating YAML?</i> [4][5] posts and wondering what an answer to that question would look like, so I built a Go command line tool to implement the pattern as a data pipeline. We still didn’t have a good way to handle the data values, though. We were still templating YAML, which didn’t catch errors early enough; it was too easy to render invalid resources that Kubernetes rejected.<p>I searched for a solution to manage and merge Helm values. A few HN comments mentioned CUE [6], and an engineer we worked with at Twitter used CUE to configure Envoy at scale, so I gave it a try. I quickly appreciated how CUE provides both strong type checking and validation of constraints, unifies all configuration data, and makes it clear where values originate.<p>Take a look at Holos if you’re implementing the rendered manifests pattern, or if, like us, you can’t shake the feeling that it should be easier to integrate third-party software into Kubernetes.
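<p>To give a flavor of what won us over, here’s a minimal, purely hypothetical CUE sketch (the schema and field names are made up for illustration, not Holos’s actual API):

    // A hypothetical schema for chart values.
    #Values: {
        replicas: int & >=1                             // must be a positive integer
        image:    string                                // required, no default
        logLevel: *"info" | "debug" | "warn" | "error"  // defaults to "info"
    }

    // Concrete values are checked against the schema at evaluation time.
    prod: #Values & {
        replicas: 3
        image:    "registry.example.com/app:v1.2.3"
    }

    // cue vet fails immediately on something like this, long before
    // the rendered manifest ever reaches the cluster:
    //   bad: #Values & {replicas: "three"}

Because definitions are closed and every field is typed, mistakes surface when the configuration is evaluated rather than when Kubernetes rejects the resource.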
We recently overhauled our docs to make it easier to get started and to work locally on your device.<p>In the future we’re planning to use Holos much like Debian uses APT: to integrate open source software into a holistic k8s distribution.<p>[1]: <<a href="https://github.com/prometheus-community/helm-charts">https://github.com/prometheus-community/helm-charts</a>><p>[2]: <<a href="https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack">https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack</a>><p>[3]: <<a href="https://akuity.io/blog/the-rendered-manifests-pattern" rel="nofollow">https://akuity.io/blog/the-rendered-manifests-pattern</a>><p>[4]: <i>Why are we templating YAML? (2019)</i> - <<a href="https://news.ycombinator.com/item?id=19108787">https://news.ycombinator.com/item?id=19108787</a>><p>[5]: <i>Why are we templating YAML? (2024)</i> - <<a href="https://news.ycombinator.com/item?id=39101828">https://news.ycombinator.com/item?id=39101828</a>><p>[6]: <<a href="https://cuelang.org/" rel="nofollow">https://cuelang.org/</a>>
This is wonderful, thank you!
A relief for DevOps/YAML engineers who need to reason about many key/value pairs coming from many places. Because in the end, that's all there is to the user interface of IaC/XaaS, k8s, and all the cloud APIs.
There has been some effort toward "configuration management", but few realize the complexity, the many layers and aspects there are to "it". YAML ain't nearly enough...
But the space of "configuration PLs" (Dhall, Nickel, Pkl, KCL, CUE, Jsonnet, etc.) is still young. The biggest problem I see is usability; CUE focuses on it, so people shouldn't be afraid. It is also a little behind the others in terms of features, but it has the greatest potential!
IMO any new tool in the cloud space that uses code abstractions cannot be taken seriously without thinking about the language. Transitions may be tough, but they ought to happen.
I got burned so badly by config languages at Google (specifically GCL) that we're generating Kubernetes YAML using the Kubernetes Python client now.<p>My unpopular opinion is that config languages with logic are bad because they're usually very hard to debug. The best config language I've used is Bazel's Starlark, which is just Python with some additional restrictions.
Personally I love using Cuelang, but there's something about it that makes at least my colleagues really reluctant to adopt it. Not sure what it is. They don't see the benefit.<p>My gut feeling so far is that they don't know the benefits of using strictly typed languages. They only see the upfront cost of brain cycles and more typing. They'd rather forgo that and run into problems once it's deployed.
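<p>For what it's worth, the demo that came closest to landing with them was showing how a closed definition catches a typo at eval time. A rough sketch (field names made up):

    #Service: {
        name: string
        port: int & >0 & <65536
    }

    // `cue vet` rejects this: "prot" is not allowed in the closed
    // definition #Service -- the kind of typo plain YAML happily
    // ships all the way to the cluster.
    svc: #Service & {
        name: "api"
        prot: 8080
    }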
Still waiting for a unified abstraction that covers both frontend libraries like React and GitOps, where an application is composed of components that communicate via declarative descriptions of desired state.
Hey, congrats on the launch!<p>How does it compare to timoni[0]?<p>[0]: <a href="https://github.com/stefanprodan/timoni">https://github.com/stefanprodan/timoni</a>
tangent: how do people in general manage their k8s YAML? Do you keep manifests around, or stuff them away in Helm charts? Something completely different? I especially wonder about ways to manage deduplication.