I built an interactive explorer for Kubernetes resource specs.

A few things included:

- Tree view with the schema, type, and description of all native resources
- Change history since version X (properties added/removed/modified)
- Examples of some resources that you can easily copy as a starting point
- Supports all versions since X, including the newly released 1.32
- I also want to add support for popular CRDs, but I'm not sure how I'll do that yet; I'm open to suggestions!

Everything is auto-generated from the OpenAPI spec, with some manual input for examples and external links.

Hope you like it, and if there's anything else you think would be useful, just let me know.
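For anyone curious how far the OpenAPI spec alone gets you: the published swagger.json already carries the property names, types, and descriptions a tree view needs. A minimal sketch in Python (the URL and definition key are real; the walker is just an illustration of the approach, not the site's actual code):

```python
import json
import urllib.request

# Swagger 2.0 spec published with each Kubernetes release.
SPEC_URL = ("https://raw.githubusercontent.com/kubernetes/kubernetes/"
            "release-1.32/api/openapi-spec/swagger.json")

with urllib.request.urlopen(SPEC_URL) as resp:
    defs = json.load(resp)["definitions"]

def walk(name, depth=0, max_depth=2):
    """Print name, type, and description for each property of a definition."""
    for prop, schema in sorted(defs[name].get("properties", {}).items()):
        ref = schema.get("$ref", "").removeprefix("#/definitions/")
        kind = ref or schema.get("type", "object")
        desc = schema.get("description", "")
        print("  " * depth + f"{prop} ({kind}): {desc[:60]}")
        if ref and depth < max_depth:  # recurse into nested object schemas
            walk(ref, depth + 1, max_depth)

walk("io.k8s.api.apps.v1.Deployment")
```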
This is really nice! The context switching that comes from using the [official k8s reference](https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/deployment-v1/) is a real pain. If you're writing a deployment and need to check one thing about the pod spec, you suddenly jump to a new page and lose where you were. On top of that, this keeps track of the indentation level of the spec you're looking at within the context of whatever parent path you're writing it for.

Maybe one nitpick: keep the colon between the key and the type, so one can easily copy-paste multiple lines of the relevant spec into an editor to be filled in.
The most frustrating part of Kubernetes (and I like k8s) is its data schema story:

* Go types are converted to Protobuf via go-to-protobuf.

* Protobuf generates OpenAPI specs and JSONSchemas via kube-openapi.

* Users rely on tools and DSLs to manage the complexity of YAML manifests.

This pipeline prioritizes some convenience for the core team over simplicity for end users. In the end, that minimal convenience transmutes into layers of convoluted code generators for the core team to maintain, and unwieldy, tens-of-thousands-of-lines schemas for end users.

Also, does Kubernetes really benefit enough from Protobuf to justify the complexity? k8s IPC and network traffic likely account for a small fraction of overall app traffic. Perhaps JSON and schemas for validation would be enough.

The proliferation of tools to manage YAML manifests is a sign there's room for improvement. Perhaps a "k8s 2.0" could *start* with JSONSchemas: that could encourage minimal, flat, simple, user-friendly schemas, and hopefully a more coherent internal pipeline.
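To make that last point concrete, here's what validation against a plain JSON Schema looks like with the Python jsonschema package. The schema below is a hand-written toy, not the real Deployment schema; it's only a sketch of the "JSON + schemas" workflow:

```python
from jsonschema import ValidationError, validate

# Toy, hand-written schema for a Deployment-like object.
schema = {
    "type": "object",
    "required": ["apiVersion", "kind", "spec"],
    "properties": {
        "apiVersion": {"const": "apps/v1"},
        "kind": {"const": "Deployment"},
        "spec": {
            "type": "object",
            "required": ["selector", "template"],
            "properties": {
                "replicas": {"type": "integer", "minimum": 0},
                "paused": {"type": "boolean"},
            },
        },
    },
}

manifest = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "spec": {"selector": {}, "template": {}, "replicas": "three"},  # wrong type
}

try:
    validate(instance=manifest, schema=schema)
except ValidationError as err:
    print(err.message)  # 'three' is not of type 'integer'
```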
I like the version diffs.

Perhaps add an "expand all" button to avoid clicking individually on properties to see their descriptions?

I usually rely on the official generated docs (all on one giant page):

https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/
Official Kubernetes docs are terrible. No versioning (most of the links on the internet are dead). A lot of text without many examples. This one looks nice: spec with the full list of options, version history, and examples. That's everything anyone would need. It reminds me of my favorite documentation/spec, Ansible's, which is a pleasure to use. Love it.
Very nice!

Adding support for CRDs would be great. Maybe look up popular CNCF projects and find their official Helm charts, which contain the CRDs?
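One possible route: CRD manifests (which projects usually ship in their Helm charts' crds/ directory) embed their own OpenAPI v3 schema, so the site could ingest those directly. A rough sketch, where the file path is hypothetical:

```python
import yaml

def extract_crd_schemas(path):
    """Yield (group/version, kind, schema) for every CRD in a manifest file."""
    with open(path) as f:
        for doc in yaml.safe_load_all(f):
            if not doc or doc.get("kind") != "CustomResourceDefinition":
                continue
            group = doc["spec"]["group"]
            kind = doc["spec"]["names"]["kind"]
            for version in doc["spec"]["versions"]:
                schema = version.get("schema", {}).get("openAPIV3Schema")
                if schema:
                    yield f"{group}/{version['name']}", kind, schema

# e.g. a CRD file pulled from a project's chart (hypothetical path)
for gv, kind, schema in extract_crd_schemas("crds/cert-manager.yaml"):
    print(gv, kind, sorted(schema.get("properties", {})))
```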
Why is the order of fields different? Example:

https://kubespec.dev/apps/v1/Deployment
<a href="https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/deployment-v1/" rel="nofollow">https://kubernetes.io/docs/reference/kubernetes-api/workload...</a><p><pre><code> minReadySeconds: integer
paused: boolean
progressDeadlineSeconds: integer
...
</code></pre>
vs

```
selector
template
replicas
minReadySeconds
...
```
I'm very nitpicky about the order of fields, and I always follow the Kubernetes documentation order. Not sure where it really comes from, but it's generally good enough and better than alphabetical order (or an inconsistent order).
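For what it's worth, that order can be enforced mechanically. A small sketch that re-emits a manifest's spec in a preferred order (the list only covers the fields quoted above, and deployment.yaml is a hypothetical input):

```python
import yaml

# Mirrors the docs-page order quoted above; extend as needed.
PREFERRED = ["selector", "template", "replicas", "minReadySeconds"]

def reorder(mapping, order):
    """Sort known keys by their position in `order`; unknown keys go last."""
    rank = {k: i for i, k in enumerate(order)}
    keys = sorted(mapping, key=lambda k: (rank.get(k, len(order)), k))
    return {k: mapping[k] for k in keys}

with open("deployment.yaml") as f:
    doc = yaml.safe_load(f)
doc["spec"] = reorder(doc["spec"], PREFERRED)
print(yaml.dump(doc, sort_keys=False))
```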
A naive question: why does k8s require more documentation and understanding than EC2 + EBS, if I don't have to consider cost? To set up my infrastructure, I launch clusters for my services, and I map EBS or use ephemeral storage for stateful services. I use EC2's APIs to operate my clusters, e.g. for autoscaling and auto-healing. I don't have to worry about networking except maybe private IP vs public IP. I barely need to spend time learning about EC2/EBS; I simply use my intuition and look up documentation when needed. Most EC2/EBS concepts are just intuitive. So why do so many people say that k8s is complex and hard to get right? Shouldn't the default setup be as easy as EC2+EBS, leaving the door open to more advanced stuff?
This CLI should solve similar issues!

https://github.com/keisku/kubectl-explore
This is great, thank you. I've been a k8s admin for many years but obviously still need to look up fields from time to time, and trying to find a field or an example in the k8s documentation is always difficult.
It would be great if you could also paste in an existing k8s file to analyze. It would help with onboarding, modifying existing templates you haven't worked on in a while, etc.
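A cheap version of that analyzer is just diffing the pasted file's keys against the published schema. A sketch, with the kind hard-coded to Deployment and the file path hypothetical:

```python
import json
import urllib.request
import yaml

SPEC_URL = ("https://raw.githubusercontent.com/kubernetes/kubernetes/"
            "release-1.32/api/openapi-spec/swagger.json")
defs = json.load(urllib.request.urlopen(SPEC_URL))["definitions"]
known = defs["io.k8s.api.apps.v1.DeploymentSpec"]["properties"]

with open("existing-deployment.yaml") as f:  # hypothetical pasted file
    manifest = yaml.safe_load(f)

for key in manifest.get("spec", {}):
    status = "ok" if key in known else "unknown field"
    print(f"spec.{key}: {status}")
```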