In this post it might only be an example, but I don't see anything that necessitates the use of YAML. All of it could be put in a JSON file, which is far less complex; there is a small example at the end of this comment.

YAML should not even be needed for Kubernetes. Configuration should be representable in a purely declarative way, instead of the YAML mess with all kinds of references and other machinery. Perhaps the configuration specification needs to be reworked. Many projects using YAML feel to me like a configuration trash can, where you just keep adding more and more stuff you haven't thought through.

I once tried moving an already containerized system to Kubernetes, to test how that would work. It was a nightmare. This was a few years ago, maybe three. Documentation was plentiful but really bad: I could not find _any_ documentation of what can be put into that YAML configuration file, or what its structure really is. I read tens of pages of documentation, and none of it helped me find what I needed. Even setting everything up, just getting Kubernetes running at all, took far too much time, needed three people to figure out, and was badly documented. It took multiple hours spread over at least two days. I still remember that the necessary steps were not listed on any single page as an overview; instead, a required step was hidden on another documentation page that was not even mentioned in the list of steps to take.

Having finally set things up, I had a web interface in front of me where I was supposed to be able to configure pods or something. Except that I could not configure everything I had in my already containerized system via that web interface. It seems this web interface was only meant for the most basic use cases, where one does not need to give containers much configuration. My only remaining option was to upload a YAML file, which was undocumented as far as I could see back then. That's where I stopped. A horrible experience, and one I wish never to repeat.

There were also naming issues. There was something called "Helm". To me that sounds like an Emacs package. But OK, I guess we have these naming issues everywhere in software development. It still bugs me though, as it feels like Google pushes its naming of things into many people's minds, and sooner or later most people will associate those names with Google's things rather than with what they previously meant.

There were one or two layers of abstraction in Kubernetes which I found completely useless for my use case and wished were not there, but of course I had to deal with them, because the system is not flexible enough to let me keep only the layers I need. I just wanted to run my containers on multiple machines, balancing the load and automatically restarting on crashes, you know, all the nice things Erlang has offered for ages.

I feel like Kubernetes is the Erlang ecosystem for the poor or the uneducated, who have never heard of other approaches, with the features poorly copied.

If I really needed to bring a system to multiple servers and scale and load balance, I'd rather look into something like Nomad. It seems much simpler, also offers load balancing over multiple machines, and can run Docker containers as well as normal applications. Plus, I was able to set it up in less than an hour or so, with two servers in the system.
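
To illustrate the JSON point: as far as I know, kubectl accepts JSON manifests just as well as YAML ones, since the Kubernetes API itself speaks JSON. A rough sketch of the same trivial pod in both formats (the name and image are made up):

    # pod.yaml -- minimal pod running one container
    apiVersion: v1
    kind: Pod
    metadata:
      name: web
    spec:
      containers:
        - name: web
          image: nginx:1.25
          ports:
            - containerPort: 80

and the same object as JSON, which kubectl should accept the same way ("kubectl apply -f pod.json"), if I remember correctly:

    {
      "apiVersion": "v1",
      "kind": "Pod",
      "metadata": { "name": "web" },
      "spec": {
        "containers": [
          {
            "name": "web",
            "image": "nginx:1.25",
            "ports": [ { "containerPort": 80 } ]
          }
        ]
      }
    }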
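
For what it's worth, the "run my containers on multiple machines, balance the load, restart on crashes" use case from above corresponds roughly to a Deployment plus a Service in Kubernetes terms. This is a sketch from memory, the names and the image are placeholders, so treat the details with suspicion:

    # web.yaml -- Deployment: 3 replicas, rescheduled/restarted on failure
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: web
      template:
        metadata:
          labels:
            app: web
        spec:
          containers:
            - name: web
              image: example.com/my-app:1.0   # placeholder image
              ports:
                - containerPort: 8080
    ---
    # Service: load-balances traffic across the pods above
    apiVersion: v1
    kind: Service
    metadata:
      name: web
    spec:
      selector:
        app: web
      ports:
        - port: 80
          targetPort: 8080

That it takes two objects and a label selector just to say "run three of these and spread requests over them" is exactly the kind of extra abstraction I was complaining about.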
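
And for comparison, the same idea in Nomad is a single job file. Again a sketch from memory with made-up names; I have not run this exact file, and the stanza layout may differ a bit between Nomad versions:

    job "web" {
      datacenters = ["dc1"]
      type        = "service"

      group "app" {
        count = 3                  # three instances, restarted on failure

        network {
          port "http" {
            to = 8080              # container port
          }
        }

        task "server" {
          driver = "docker"

          config {
            image = "example.com/my-app:1.0"   # placeholder image
            ports = ["http"]
          }
        }
      }
    }

One "nomad job run web.nomad" and the scheduler places the three instances across whatever machines are in the cluster.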