In our experience running production workloads on k8s for over three years, templating (helm) and structured-editing approaches both have their place, and both are valuable. We don't feel the need to replace declarative approaches with another imperative language, or to use complicated helm charts for straightforward service deployments.<p>There are many ways to classify workloads, but one big distinction we find valuable is between stable infrastructure components and our own rapidly deployed services. The former have complicated configuration surfaces but change relatively rarely, while the latter are usually much simpler (because microservices) and in many cases change daily.<p>We find helm works very well for the infrastructure pieces. Yes, it's complicated and easy to get wrong, but so are most other package-management systems. Charts for complicated things can be quite dense and hard to comprehend; see the stable prometheus chart for an excellent example. But once the work is done, and as long as there is commitment to maintain it, the results can be pretty awesome. We use helm charts to install and upgrade prometheus, fluentd, elasticsearch, nginx, etcd and a ton of other tools. Yes, we've had to fork some charts because the configuration surface wasn't fully exposed, but those are a minority.<p>For our own services, charts are overkill. They're hard to read, and crufted up with control structures and substitution vars. Essentially all of our microservices are deployments with simple ingress requirements. We currently use kustomize to process base manifests that live in the service repos and combine them with environment-specific patches from config repos. Both are just straight yaml, very easy for backend developers to read and understand, and different groups (i.e. devops, sre, backend dev) can contribute patches to the config repos to control how the services are deployed.<p>Bottom line: if you're going all-in on kubernetes (which you really need to do to get the most benefit from it), then you're going to need more than one approach to deploying workloads.
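For anyone curious what that looks like, here is a minimal sketch of a kustomize base-plus-overlay layout; the paths and values are illustrative, not our actual repos. It gets applied with kubectl apply -k overlays/prod:

    # config-repo/overlays/prod/kustomization.yaml (hypothetical paths)
    apiVersion: kustomize.config.k8s.io/v1beta1
    kind: Kustomization
    resources:
      - ../../base            # plain manifests from the service repo
    patchesStrategicMerge:
      - replicas-patch.yaml   # environment-specific override

    # config-repo/overlays/prod/replicas-patch.yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-service        # must match the name in the base manifest
    spec:
      replicas: 5

The patch file is itself plain yaml, which is what keeps it readable for backend developers.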
I spent about 6 months using helm and made 20+ charts for the services.<p>In the end we got rid of it and replaced it with Terraform. If your infrastructure is 100% kubernetes, then I think helm is great. Ours is not: we have databases, dns, buckets, service accounts and more, so we were splitting our setup between terraform and helm, and passing data between the two tools was going to be a pain. We follow a layered approach to building up the infrastructure:<p>1) Networking: DNS
2) Secrets, service accounts, buckets
3) DBs
4) Pre-application config (Istio)
5) Services<p>Semi-related things are grouped together, and all of the cloud-provider values we need are saved as secrets. We are on GCP, so we need things like service accounts to access GCP resources (buckets, cloudsql), and all of those variables are available for our services to pick up.<p>And Terraform has STATE. This is unbelievably valuable when doing continuous delivery, as you can tell what changed on every deploy, and deploys are FAST. One thing that really bugged me about helm was that determining whether a deploy had failed was something you had to check after helm itself was done; we were going to have to write our own monitoring for service health/uptime on deploy. That's not hard at all, but you get it for free with terraform: if a service fails to start, terraform throws an error...<p>I don't think people know that Terraform has a kubernetes provider. It does not support all the alpha objects but has decent support for 99% of the things you need. I wish someone made a provider for Istio virtual services and service entries.
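To make the kubernetes-provider point concrete, a minimal sketch; the resource names and image are made up, and the exact schema depends on your provider version:

    provider "kubernetes" {
      config_path = "~/.kube/config"
    }

    resource "kubernetes_deployment" "api" {
      metadata {
        name = "api"
      }
      spec {
        replicas = 2
        selector {
          match_labels = { app = "api" }
        }
        template {
          metadata {
            labels = { app = "api" }
          }
          spec {
            container {
              name  = "api"
              image = "gcr.io/my-project/api:v1"  # hypothetical image
            }
          }
        }
      }
    }

Because the deployment lives in state, terraform plan shows a field-level diff on every deploy, which is the property described above.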
Very interesting. I've been reluctant to adopt Helm for Kubernetes resource management because of a gut feeling that it's a heavyweight solution for what seems broadly like a templating problem.<p>With ksonnet having gone quiet[0] this looks like a promising initiative.<p>I'd imagine that it'll need something like a package manager (or at least a curated list of common packages) in order to gain good adoption.<p>[0] - <a href="https://blogs.vmware.com/cloudnative/2019/02/05/welcoming-heptio-open-source-projects-to-vmware/" rel="nofollow">https://blogs.vmware.com/cloudnative/2019/02/05/welcoming-he...</a>
What about something like <a href="https://github.com/dhall-lang/dhall-kubernetes" rel="nofollow">https://github.com/dhall-lang/dhall-kubernetes</a> ?
This is a great addition to the ecosystem of Kubernetes application management tools <a href="https://docs.google.com/spreadsheets/d/1FCgqz1Ci7_VCz_wdh8vBitZ3giBtac_H8SBw4uxnrsE/edit#gid=0" rel="nofollow">https://docs.google.com/spreadsheets/d/1FCgqz1Ci7_VCz_wdh8vB...</a><p>I really hope we'll get to a dominant standard soon. But this subject is much more complex than I thought <a href="https://github.com/kubernetes/community/blob/master/contributors/design-proposals/architecture/declarative-application-management.md" rel="nofollow">https://github.com/kubernetes/community/blob/master/contribu...</a>
The shortcomings listed for Helm are spot on, but I feel like the ship has sailed for tools that aren't Helm-based. The ecosystem partners (and thus end users) have rallied around Helm charts as the de facto manifest format, so a tool that doesn't understand Helm charts will not see a lot of adoption. Are there any plans for Tanka to support importing existing Charts?
I'd like to request that nobody else make any more damn infrastructure tools that require writing code, or reading six manuals, just to use them. I don't want to spend the rest of my life writing and editing glue and cruft, or spending two weeks researching and writing elaborate config files <i>by hand</i> just to make some software run.<p>It's like the infrastructure version of fine woodworking: building dovetails and screwless joints by hand, using chisels and hand planes and card scrapers and shit, to build a box. It may be "fun", but it's also needlessly complicated and time-consuming. Give me the power tools, pocket-hole jigs, torx screws, nail guns, square clamps. Yes, the dovetails will make a sturdier box, but do you <i>need</i> a box with dovetails? <i>Probably not.</i>
What are folks' thoughts on CUE these days? Anyone using it for serious configuration yet?<p><a href="https://cuelang.org/" rel="nofollow">https://cuelang.org/</a><p>It’s designed by the BCL/GCL author as a replacement (Jsonnet is apparently a copy of BCL/GCL)
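I've only skimmed it, but for a taste, here is a minimal sketch of CUE's types-as-constraints style; the #Deployment schema is hand-rolled for illustration, not the official k8s definitions:

    // a concrete value must satisfy every constraint it is unified with
    #Deployment: {
        apiVersion: "apps/v1"
        kind:       "Deployment"
        metadata: name: string
        spec: replicas: *1 | int & >0   // defaults to 1
    }

    nginx: #Deployment & {
        metadata: name: "nginx"
        spec: replicas: 3
    }

Instead of templating a document, you unify partial values until everything is concrete, and the tool rejects anything that violates a constraint.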
Kustomize already won this battle; it's now built into kubectl (the -k flag instead of -f). Kustomize is a joy to use; helm, by comparison, makes me want to smash things.
I don't know why they have to invent something new. If you're just managing kubernetes manifests, kustomize is already good for this task and simple to start with.
I evaluated a lot of these templating solutions about a year ago. We ended up going with jsonnet and kubecfg, as the latter was pretty simple.<p>Helm felt okay for PnP, but I want to have an explicit understanding of what I’m deploying for infra, and it seemed to abstract too much away.<p>Kustomize seemed too rigid.<p>Ksonnet seemed too magical, although I didn’t look deeply.<p>I still don’t love using jsonnet, as I can’t seem to find full language documentation, even on its website.<p>How might this compare to kubecfg, for those who are familiar with both?
This looks pretty similar to qbec <a href="https://github.com/splunk/qbec" rel="nofollow">https://github.com/splunk/qbec</a>
Tanka & jsonnet-bundler also work really well with Prometheus monitoring mixins, meaning we bundle up and share almost all the internal monitoring that we use at Grafana Labs to monitor our massive Cortex, Loki, Metrictank and Kubernetes deploys.
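If anyone wants to try that workflow, it's roughly this (using the kubernetes-mixin as the example; other mixins install the same way):

    # inside a Tanka project
    jb init                                                # creates jsonnetfile.json
    jb install github.com/kubernetes-monitoring/kubernetes-mixin

    # then, in your jsonnet:
    #   local mixin = import 'kubernetes-mixin/mixin.libsonnet';

jb pins the dependency versions in a lockfile, so the monitoring config is vendored and reproducible like any other dependency.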
Thanks for sharing, I will try it and give you feedback.<p>What I am doing for my env clusters is to keep a versioned production yaml that acts as a source of truth; then, if I need an env (region, customer, dev, prod, feature, etc.), I take that source of truth, apply a transformation (usually a node script or bash, depending on the kubernetes entity), and then apply the resulting transformed yaml.
Basically it's: versioned production => transform => new env definitions<p>Do you have any recommendations/high-level thoughts on how to integrate or substitute Tanka in this approach?
What downsides do you see with this approach?<p>Thanks.
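In case it helps while you evaluate: the Tanka-native version of this pattern appears to be one directory per env under environments/, each importing the shared definition and overriding what differs, rather than transforming yaml after the fact. A rough sketch (the library name and fields are mine, not Tanka's):

    environments/
      dev/main.jsonnet      # spec.json in each dir pins cluster + namespace
      prod/main.jsonnet

    // environments/prod/main.jsonnet
    local service = import 'service.libsonnet';  // the versioned source of truth
    service { spec+: { replicas: 5 } }           // the "transform" becomes an override

The advantage over a node/bash transform is that overrides are structural merges checked by the language, not text edits.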
We're in the process of evaluating tools to get away from 90% identical yaml files across environments, and this seems like a good alternative to kustomize or helm.<p>Do you have a good pattern on how to use it with CI/CD for deployments? The biggest challenge we've had after writing deployments is getting it set up to work with something like Jenkins (right now we have a custom bash script that does a bunch of kubectl things).<p>(PS any way this would help with static IPs on hosted Grafana.com Cloud to make access to firewalled datasources easier?)
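Not a Tanka author, but going by the CLI docs, the Jenkins step would boil down to something like this (flag spelling may differ across versions, so treat it as a sketch):

    # after checking out the config repo in the pipeline
    tk diff environments/prod                             # print what would change
    tk apply environments/prod --dangerous-auto-approve   # apply without the interactive prompt

That replaces the bag of kubectl calls with one command per environment directory.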
This won't be a popular opinion/implementation.<p>I'm on .NET. And although I can deploy as microservices (clean architecture with Core, Application, Infrastructure and API), I seem to integrate the API into my app (e.g. add the API dll), so my app does the provisioning like a monolith.<p>It exposes all API controllers by default.<p>Messaging is then always internal (domain vs. integration).<p>Overhead is practically none.<p>If I have a heavy component/API, I can split an API out and put nginx in front of it for routing, and NATS for integration events.<p>So basically I have a DDD app at the beginning, with the strangler pattern already in place for scaling purposes.
Although none of my apps need scaling right now.<p>I also can do every deployment myself, and more easily, since I don't have any deployment complexity currently.<p>--<p>What I don't have is a stack that is language-agnostic from the beginning. But it could be, using the same method as for scaling, with nginx.<p>It seems that I have the best of both worlds at the beginning:<p>- maintainability by forcing DDD<p>- minimal devops<p>- testability<p>- no service-mesh overhead (e.g. Consul adds a 30-50ms average overhead; I finish most of my requests in 8-12ms)<p>- fast development (slower than a monolith, much faster than microservices)<p>While scaling could be refactored in within a day if an insane amount of requests came in (see: refactoring).<p>Most microservices are fixed to a single language anyway, so that's not a concern currently.<p>The added benefit is that I have insane custom-implementation options: I just need to change the Infrastructure layer in a deployment to use a client's database as a source if a component needs it.<p>(E.g. an order service for a webshop: I can easily integrate with a client's existing Magento for a niche of their shop.)<p>TLDR: I currently don't have a devops overhead. I'm too small for that; I'm glad though.<p>--<p>If anyone thinks this isn't a good solution for my use case (small dev shop), or has any better ideas, please share ;)
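For what it's worth, the nginx split mentioned above is only a couple of location blocks; a sketch with made-up upstream names (this goes inside the http block of nginx.conf):

    # strangler pattern: peel one API off the monolith at the router
    upstream monolith { server app:5000; }
    upstream orders   { server orders-api:5000; }   # the extracted service

    server {
        listen 80;
        location /api/orders/ { proxy_pass http://orders; }
        location /            { proxy_pass http://monolith; }
    }

Everything keeps hitting the monolith until a route is explicitly carved out, which is what makes the later refactor a same-day job.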
I’m a little scared to migrate some of our microservices off VMs and onto k8s (because our deployment story isn’t great). However, there doesn’t seem to be a lot of consensus around how to do _anything_, even with a green field.<p>A dozen microservices, several different DBs, and some large stateful datasets, all supporting a basic REST API in the end. What tools would you choose nowadays?
We use jsonnet at my workplace for all sorts of generated configs, not just k8s configs. I cannot recommend jsonnet enough: a simple and powerful tool.<p>Jsonnet is a godsend. Don’t use a string-templating language for structured data like yaml/json; use an object-templating language like jsonnet. You’ll start to love life again.<p>We used mustache templates before, and it was a PITA.
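To illustrate the difference for anyone who hasn't tried it: instead of splicing strings into yaml, you merge objects. A minimal sketch (the field values are made up):

    local base = {
      apiVersion: 'apps/v1',
      kind: 'Deployment',
      metadata: { name: 'api' },
      spec: {
        replicas: 1,
        template: { spec: { containers: [{ name: 'api', image: 'api:v1' }] } },
      },
    };

    // the prod "patch": +: deep-merges a subtree instead of replacing it
    base + {
      spec+: {
        replicas: 5,
        template+: { spec+: { nodeSelector: { pool: 'prod' } } },
      },
    }

Because the merge is structural, there is no way to produce indentation errors or half-substituted placeholders, which is where string templates bite.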
Naive question from someone who doesn't know the ecosystem well:<p>It seems to me like Terraform is good at describing desired deployment shapes and detecting drift between actual state and desired state.<p>Can someone clue me into why Terraform hasn't caught on as the abstraction above/that drives K8S?
> 1. Repetition: If information is required in multiple places (dev and prod environments, etc.), all YAML has to offer is copying and pasting those lines.<p>Actually, YAML has anchors and aliases, which help a lot when the same thing needs to be reused in several places.
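For example, within a single document:

    apiVersion: v1
    kind: Pod
    metadata:
      name: example
    spec:
      containers:
        - name: api
          image: api:v1
          resources: &default-res          # define the anchor once...
            requests: { cpu: 100m, memory: 128Mi }
        - name: sidecar
          image: sidecar:v1
          resources: *default-res          # ...reuse it by alias

The catch is that anchors don't survive across document boundaries (---) or across files, which is exactly the dev-vs-prod case the article has in mind.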
I’m always wondering whether templating is a good approach for solving this problem, versus writing a program that generates the API object descriptions for you.
Lua in Helm 3 seems interesting, but it's not ready for prime time yet. That makes me explore other options, because helm's limitations around reusable templates are painful. Jsonnet seems to be working for several companies, as does kustomize. I'm still looking for something simple to template my manifests for different environments.
This seems interesting, but I would have liked to see dashboard and chart configs cleaned up. Grafana's json configs have the same issue: I have a dashboard for one project whose json is over 13k lines long, and less than 5% of that is unique.
Am I missing something here, or is there no way to delete manifests deployed with tk apply?<p>Also, what about state changes? I.e. calculating the diff between your local definition and cluster state and acting appropriately (delete, apply, change).
I'm not a big fan of helm, but using json syntax instead of yaml sounds like shooting yourself in the leg. JSON, as far as I'm aware, was never meant to be human-readable or human-writable.
cue[0] might be another possible language for this problem<p>[0]: <a href="https://cuelang.org/" rel="nofollow">https://cuelang.org/</a>