TechEcho

A tech news platform built with Next.js, providing global tech news and discussions.
Don't use Kubernetes yet

306 points by rckrd almost 3 years ago

47 comments

throwaway787544 almost 3 years ago

The question of whether to use K8s or not is like wondering what kind of saw you should use to cut wood. There are different saws for different purposes. But even with the right saw, you still have to know how to use it correctly. Better to use a hand saw correctly than a table saw incorrectly. (You *can* use a table saw incorrectly, but best case the work ends up crap; worst case you lose a finger.)

After building infrastructure for dozens of teams, I'm quite convinced of the following:

- If your people aren't very skilled, they won't build anything well. Most software engineers I've seen professionally working in the cloud are handymen trying to build a wood cabinet.

- If your people can't build well, it doesn't matter what technology they use. Choosing between building a cabinet out of metal or cherry wood doesn't make much difference if they've never built a cabinet before.

- If the first two hold, then only use the technology which requires the least skill to use well, and where the amount of maintenance is closest to zero. Don't build a wood cabinet from scratch when you can buy flat pack. Don't buy flat pack when you can buy an assembled cabinet and have it shipped and carried into your office.

- If using the aforementioned technology requires "building" or "assembling", and that is not core to the customer-facing aspect of your product, then you should not be building, you should be buying. If your business doesn't involve assembling flat-pack furniture, don't ask your employees to build their own desks and chairs from Home Depot or Ikea parts. Buy the premade desks and chairs and use them to make your actual product.

- A software engineer knows as much about cloud architecture as a fine woodworker knows about framing. "It's all just wood" until the house takes 10x as long to frame, costs 10x as much, and still doesn't meet code.

- People *will* try to build things they don't fully understand and leave the company before anyone realizes the mess they've made. Imagine your retail store is accessible by driving a car over a wooden bridge built by a handyman.
qbasic_forever almost 3 years ago

If you don't use k8s and just run bespoke containers, you still have to figure out how those containers find and talk to each other. Maybe you run some custom DNS setup, maybe you run a purpose-built service discovery thing like Consul, etc. And you have to figure out how you'll do networking to support public/internet-facing workloads vs. private internal services (and how each can talk to the other).

But... if you just use k8s, you get things like basic service discovery, networking, ingress, etc. with it and don't have to figure out bespoke solutions (that you'll just chuck anyway once you move to k8s).

I do agree, though, that I would be very hesitant to immediately dive into running stateful workloads like databases on my own k8s cluster. The cloud-hosted database services that every provider has are such a significant time and complexity saver, especially if your databases are just getting started and small-ish.
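To make the "built-in service discovery" point concrete, here is a minimal sketch: a Deployment plus a Service gives the pods a stable DNS name inside the cluster, with no Consul or custom DNS setup. The app name, image, and port here are placeholders, not anything from the article:

```yaml
# A hypothetical backend: three replicas of some container image.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 3
  selector:
    matchLabels: { app: api }
  template:
    metadata:
      labels: { app: api }
    spec:
      containers:
        - name: api
          image: registry.example.com/api:1.0   # placeholder image
          ports:
            - containerPort: 8080
---
# The Service gives those pods a stable virtual IP and DNS name:
# other workloads in the same namespace can simply call http://api:8080,
# and CoreDNS plus kube-proxy handle the discovery and load balancing.
apiVersion: v1
kind: Service
metadata:
  name: api
spec:
  selector:
    app: api
  ports:
    - port: 8080
      targetPort: 8080
```

That is roughly the bespoke DNS-plus-load-balancer setup the comment says you would otherwise build yourself.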
andrewallbright almost 3 years ago

We should choose our technology stack like a hermit crab chooses its shell. The shell shouldn't be too heavy to move around in, but it should leave a little room to grow into.

When it's time, we work to find a new shell.
preommr almost 3 years ago

My 2c:

Start with basic k8s.

Just getting a couple of apps deployed on a cluster with logging, scaling, and pipelines takes ~2 weeks.

Trying to rebuild that basic functionality using ad hoc solutions takes way more time and rapidly becomes more cumbersome as the wheel gets reinvented, when k8s as a platform has well-documented solutions/tools.

Putting it into production leads to a handful of footguns that take a while to sort out, related to pull policies, caching, scaling, security, etc. But it's fairly manageable. And probably easier to preempt than customized solutions, since these pitfalls are somewhat well documented.

Past a certain point, though, especially as the work veers into things like operators, sketchy Helm packages, and service meshes, k8s falls apart fast if you don't have people on it full time, and it's much better to write some customized code.
sontek almost 3 years ago

> On AWS, that would be Fargate on ECS, or on Google Cloud, Google Cloud Run. You won't have to manage servers, network overlays, logging, or other necessary middleware.

I disagree with this take. EKS is a managed service just like Fargate, and you have to learn how to manage both equally (VPCs, CIDR ranges, IAM rules, etc.). You might as well start on Kubernetes if you are going to switch to it eventually.

> I'd suggest that teams adopting Kubernetes (even the managed versions) have an SRE team, or at minimum, a dedicated SRE engineer.

I'd love to hear what parts of running EKS require an SRE team, and how Fargate/ECS solve that issue and make it self-serviceable.
likortera almost 3 years ago

I've been at several companies. The ones where things were the smoothest were using App Engine, Tsuru, or Heroku. Zero problems.

The companies where we suffered the most were using Kubernetes: we had to fight a lot to get anything shipped, we were expected to understand a custom in-house jungle of YAML files and scripts, and half of the features we needed from the platform were half-assed.

That's just my experience.
theptip almost 3 years ago

I agree with the initial advice to use a simple Docker container runner hosted service. I initially ran my Django app as a bare Docker container behind an Nginx reverse proxy, with a "docker pull; docker stop <old>; docker run <new>" script as my deploy job, and that was fine for a year. Took all of an hour to build the plumbing there. A hosted service would have been just as good and probably even less time to wire up.

I disagree with the OOM requirements for operating a k8s cluster. I ran our infra on GKE from 3 engineers through 15 as the primary infra engineer (while also pushing code and being CTO), and it was hours-per-month of labor, not a dedicated SRE. I trained up some of the other engineers and they were able to help with on-call after a few hours of training. For a simple app (a few deployments and services) it is really not hard to work with.

All that said, I agree you don't _need_ it at 5-person scale. I would not recommend you learn it if you don't already know how to use it. But if you do already know it, you can get good value from using it much earlier than the article recommends. (For example, I found Review Apps to be very useful for fostering collaboration between frontend and backend engineers, and that feature is not too hard to wire up on top of k8s.)

If I had to give one-sentence pithy advice I'd probably agree with the OP title.
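A sketch of that single-host pull/stop/run deploy script, assuming one container behind a reverse proxy. The image and container names are placeholders, and the `DRY_RUN` switch is an addition so the commands can be previewed without Docker installed:

```shell
#!/bin/sh
# Minimal pull/stop/run deploy in the spirit of the comment above.
IMAGE="${IMAGE:-registry.example.com/myapp:latest}"  # placeholder image
NAME="${NAME:-myapp}"                                # placeholder container name

run() {
  # With DRY_RUN=1, print the command instead of executing it.
  if [ "${DRY_RUN:-0}" = "1" ]; then echo "$@"; else "$@"; fi
}

deploy() {
  run docker pull "$IMAGE"
  run docker stop "$NAME" 2>/dev/null || true  # tolerate "no such container" on first deploy
  run docker rm "$NAME" 2>/dev/null || true
  run docker run -d --name "$NAME" --restart unless-stopped -p 8000:8000 "$IMAGE"
}

# Example: DRY_RUN=1 IMAGE=myapp:v2 deploy
```

This is about as much plumbing as the "hour of work" the comment describes; anything fancier (health checks, rollback) is where the hosted services start earning their keep.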
Too almost 3 years ago

In theory, what the article suggests looks sound and easy. I have to strongly disagree, though, after the unfortunate hands-on experience of trying Azure's managed container runtime, where everything was just plain misery: getting logs out, updating the running container, connecting to storage, all kinds of esoteric settings hidden under even stranger abstractions, a lack of documentation and experience online, strange edge cases everywhere. You still have to understand all the complex cloud stuff like networks, ingress, storage accounts, etc.

We changed to managed Kubernetes instead, and even for a team with no prior experience it was much smoother. Fast declarative deployments, logs, attach: everything is just one instant kubectl command away. Documentation, blogs, and resources exist in excess, so you will always find a way out. There are still some things that are difficult, but I attribute most of that to the cloud generally. As someone else said, use it if you already know it and know what you are getting into. If you don't, be prepared for some initial bumps and surprises. It has its warts, but it's not nearly as bad as some people portray it; those cases are more likely a problem with a microservice architecture gone out of control than with k8s itself.

Sadly there is no middle-ground alternative. The closest would be docker-compose, assuming someone runs the VM for you, and that still comes with a lot of hassle deploying files to that VM, and you still need to configure all the cloud networking.
dig1 almost 3 years ago

My 2c: you don't need K8s unless you are Google-scale. Even if you think you are Google-scale, you are not. Maintenance-wise and $$-wise, two VPS boxes with a Cloudflare setup (even with an enterprise account) are usually cheaper than an ordinary K8s setup.

But again, it all depends on your use case, and people usually overestimate their use cases and the company's growth.
endisneigh almost 3 years ago

My advice would be to structure your app to only use the database directly, and to structure all business logic around asynchronous actions that can be called via serverless functions or regular services you're hosting. IMHO this is the future of most software architecture.

Set up proper replication for your database and you're good. Very few companies I've seen need more than this in principle. In practice there's a lot of real-time stuff that really isn't necessary, and it increases architectural complexity.

The number of service types that *inherently* need real-time processing can probably be counted on a single hand.
yetanotherjosh almost 3 years ago

k8s is just one component of the whole architecture, yet people talk about it like it's the singular defining characteristic. It reminds me of people talking about building "react apps" when React is one of probably hundreds of essential library dependencies and says nothing about 80% of the tooling. In the rig I manage in a startup of 2 people, k8s (managed EKS on AWS) is the part that generally just works with very little effort. I provision and deploy to it with Terraform and Helm. The cluster itself is cattle: I can spin up a second identical cluster with a config file, cut DNS over, then spin the old one down. It took around 3 weeks to get everything set up, but k8s was not the hard part; it was making all the OTHER decisions and integrations for things like container building, monitoring/logging/APM, secrets management, setting up a VPC correctly, and writing some custom config scripts to generate the right setup for the 2 separate apps we run in both staging and production. The work was undoubtedly far greater *outside* the k8s domain. In fact, when it came to the k8s parts, e.g. defining services, ingress, etc., I was generally relieved and pleased. And now that I've done all this, I feel comfortable repeating it. Things run quite well and I have zero pressure to migrate.
vbezhenar almost 3 years ago

It's funny to read this article and thread while doing exactly what everyone suggests avoiding: building Kubernetes on bare virtual metal for a team with a few programmers and no dedicated DevOps or SRE roles.

The reason I'm doing it is that our business owner thinks we need scalability and high availability. We have legal obligations to keep our data inside the country, and we don't have any managed Kubernetes offerings here. The best cloud option I've found is a hoster with an OpenStack API, and that's what I'm building upon. I thought really hard about going with just Docker Swarm, but it seems that this tech is dying and we should rather invest in learning Kubernetes.

Honestly, so far I've spent a few weeks just learning Kubernetes and a few days writing terraform+ansible scripts, and my cluster seems to work well enough. I haven't touched the storage part yet, though; just a kubeadm-installed Kubernetes with an OpenStack load balancer, Calico networking, and the nginx ingress. I guess the hard part will come with storage.

Worst of all: everyone talks about how hard it is to run Kubernetes on bare metal, yet nobody talks about what exactly the issues are and how to avoid them.
chrismarlow9 almost 3 years ago

Containerization solves a team problem: the system dependencies for an application need to be under the team's control, which lets operations people focus specifically on the infrastructure supporting the application.

From the technical side, you can accomplish nearly all of the same goals using machine images and something like Packer combined with any config management tool.

I guess what I'm saying is you should use containerization when the complexity of your application and infrastructure is too high for an operations (DevOps) person to deal with, or when it changes so frequently that it's impossible to keep up with the specific application needs.

An example is some poor DevOps engineer who has to maintain Terraform scripts for the infrastructure but also needs to know the version of Python used in application XXX, or that the Postgres header libs are required for YYY. And a team of 30+ application devs is changing this constantly. It's a burden and a risk to require a DevOps engineer to remember and maintain all of this. So you start looking to Docker so that this responsibility can be handed off to the team that owns the application.

So in short, if you're a small startup and have 1 or 2 DevOps guys, you'll probably be okay with a very simple system of building machine images. As the complexity grows, this handoff of machine requirements can be given to the teams by using Docker.

And if you do this properly, by abstracting the system build code away through makefiles or bash scripts, the transition from machine images to Dockerfiles is pretty straightforward and easy. Possibly as easy as creating a Packer file that builds the Docker image instead of machine images.

Kubernetes is just a tool for the operations people to manage the containers.

I guess what I'm saying is: if you can't automate properly with basic machine images, you should really tackle that first. And that containers solve a team logistics problem, not a technical one.
MarquesMa almost 3 years ago

The problem is there are no sane in-between options.

On one end of the spectrum are neat platforms like Heroku or Vercel, or SSH and bare metal with simple scripts.

On the other end of the spectrum, we have Kubernetes.

For everything in between:

- The learning curve is much steeper than Heroku's or Vercel's.

- The skill is not likely to transfer to the next job.

- The ecosystem is not as complete as Kubernetes'.

Most mid-sized companies went for Kubernetes because the in-betweens are not very optimal, and betting on them means taking on some risk.
nunez almost 3 years ago

i would have agreed with OP three years ago when kubernetes was very niche and setting it up was difficult. today, kubes is very easy to get going with.

you can set it up locally with kind or k3s for local dev, and use a cloud vendor's flavor in production. last time i tried to get a local dev env working for lambda, i spent a lot of time hacking on runtime-level stuff. it was not pleasant.

additionally, the market of devs and operators fluent in it has grown by a lot. many people are getting their CKx certs, and there are enough companies using it now to create reliable supply.

i say this because the lift from "my app is working in docker" to "my app is working in kubernetes" is much smaller than it used to be. given that OP is suggesting container runtimes as the alternative (which can get very expensive; much more so than using kubernetes for everything), i think that if a business is at a point where they are containerizing to accelerate releases, then kubernetes is a natural next step. anything else in between is at best a costly dependency and at worst throwaway.
goncaloo almost 3 years ago

Kubernetes is amazing at what it does, but only relevant to ~0.1% of companies in my opinion. It's way too complex and too much work for the rest of the world, and not worth the time invested.

A lot can be accomplished with simple virtual machines and some sort of auto-scaling groups (they have different names depending on your cloud provider).

Kubernetes is amazing at unifying your workloads across clouds, though. If you care about portability, you should either consider using Kubernetes for everything or use a tool that abstracts your configuration in a cloud-agnostic way. Although I'm a bit biased on this one.
desktopninja almost 3 years ago

Might never need Kubernetes: https://stackexchange.com/performance
benreesman almost 3 years ago

Oh man, my org at FB was the test bench for moving to containerization and fancy service discovery and all that (Tupperware, just a different Borg reimplementation). It eventually got pretty good, but it never stopped being mad overkill for single-thousands of boxes. When you're in the 10s or 100s of thousands in a fleet, or when you've got workloads that don't neatly slot into your SKUs, Kube/Borg/TW/Mesos are The Way. No doubt about it.

But it's always seemed zany to me to stack namespaces/cgroups/etc. on top of Xen or whatever EC2 is using. Yo dawg, I heard you like an abstract machine so I put…

There are just separate concerns:

- reproducibility (shared libraries argggghh)

- resource limits to bin-pack SKUs

- service discovery

- failover

- operational affordance

And I've seen it get so ugly to conflate these very different imperatives. Running 100Ks that need to web-serve on demand but web-index when idle? Yeah, now you need the whole enchilada.

But it's false and harmful to promote the idea that the minute you need Grafana or DNS or Salt/Ansible/Nix/whatever, you need BORG.

There are scenarios where I would enthusiastically break out Kube, but most of the marketing around it falls into my "and you will do nothing, because you can do nothing" bucket.
jbverschoor almost 3 years ago

Well, if you're using it for development, it doesn't make sense. You're probably doing all the microservices, which *should* mean that those services are the responsibility of another team. It also means they should have a testing/staging/development version online somewhere.

If you're developing MS Paint, do you really need to compile Windows and all its dependencies?
RiyaadhABR almost 3 years ago

Running CapRover on a VPS has been a very nice alternative to full K8s. It's like a mini Heroku, but without too much magic involved: a lightweight wrapper around Docker containers that comes with a nice GUI and handles networking between your applications. The one-click apps are also very useful for quickly spinning up databases and the like.
munchenphile almost 3 years ago

Startups should almost never use k8s. They need to iterate fast and ignore the complexities of infra. k8s is far too complex for most small companies.

A CapRover droplet on DigitalOcean + deploying your Rails app with git. Scale your single VPS up as needed. Most don't need much beyond that for quite a while.
aledalgrande almost 3 years ago

My advice is: use k8s if you know it, don't if you don't.
etaioinshrdlu almost 3 years ago

I figure that Kubernetes will eventually become very mature and stable, the rate of change will slow down, and it will become a predictable building block like Linux is now. I'm personally choosing to avoid it for now.
BossingAround almost 3 years ago

As a person deeply involved in Kubernetes and Istio, I'm starting to get the feeling that in the beginning it should be totally acceptable to run containers with Docker Swarm. If your main need is "restart the container on error" (which seems likely for startups), you probably can't beat the fast and easy deploy time of Swarm. Also, when the time comes to upgrade to Kubernetes, you won't be locked into some incompatible solution.

Of course, if you care about scaling, serverless is probably the way to go.
markstos almost 3 years ago

There was a time when the conventional wisdom was that we all needed to be using XML for data exchange, all the time. Now the simpler format JSON dominates.

I hope Kubernetes ends up being the next XML.
AtNightWeCode almost 3 years ago

As a startup, it is very important to spend as much time and money as possible on the features, not the tech (if the tech is not the feature, that is). Most startups should build one single monolith API running on some default offering in the cloud. Serverless services can be used as a complement for some workloads. Building, running, and maintaining containers is unnecessary for this, and it slows down the dev process.
hintymad almost 3 years ago

Curious question: why don't companies consider an abstraction like EC2? They can run k8s on EC2-like virtualization, right? I had a great experience with EC2: the ability to launch thousands of machines reliably with full control over them, and to manipulate all the metadata and configuration without learning any additional shit like HCL, is a huge productivity booster. The simplicity of EC2 seems a great foundation layer on which to build more advanced resource allocation.

Case in point: it drives me nuts that one has to spend hours learning how to use a template system in Nomad to pass in the simplest configurations. I can't fathom why one would be even slightly interested in learning any shit of Nomad just to deploy a goddamn Docker container. Don't we have more interesting problems to solve and more general knowledge to master?
zoomzoom almost 3 years ago

At withcoherence.com, we agree that leaning on managed runtimes for as long as possible makes a ton of sense. I've also seen that hiding the complexity of transforming code into deployed containers, by fully abstracting away Dockerfiles and CI/deploy scripts, can lead teams into a tough spot at a bad time to learn what's really happening.

But the appeal of k8s is often the ecosystem of tools that solve real problems these runtimes leave on the table: managing multiple environments, load balancing across services, SSL/TLS, SSH, long-running tasks, managing multiple versions, integrating tests into pipelines. Coherence is working to solve these problems and create a great developer experience without hiding what's really going on under the hood.

(Disclosure: I'm a cofounder.)
jansommer almost 3 years ago

I was in a startup with a small team, and we used Kubernetes for servers that required more than 8 GB of memory. Cloud Run and App Engine don't offer more than that, at least not at the time. The alternative was to manage virtual machines ourselves with Ansible scripts, and I'm not sure how that would auto-scale; plus it would break the existing flow of Docker containers for everything. It took a while to figure out how to put Kubernetes in a closed VPC, but after that it was fairly straightforward.
mkrishnan almost 3 years ago

My advice would be exactly the opposite. Don't EVER use serverless containers; use Kubernetes instead, whether you need scale or not.

1. Use GKS/EKS/LKE/DKS. Don't try to set up a Kubernetes cluster on your own servers yourself.

2. It's very simple to set up deployments, databases, etc. It will take a decent engineer about a week to set up your application in a Kubernetes cluster, end to end.

3. LKE/DKS is super cheap compared to Heroku.

4. Use GitHub Actions (free) and Docker Hub ($5/month to build your container images); it's very easy, and all you need is a weekend.

5. IMPORTANT: It's foolish to architect your application to fit into a serverless container.
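As a sketch of the GitHub Actions + Docker Hub flow from point 4, assuming a Dockerfile at the repository root; the workflow path, image name, and secret names are placeholders you would adjust:

```yaml
# .github/workflows/build.yml (hypothetical): build and push an image on every push to main
name: build-image
on:
  push:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - uses: docker/build-push-action@v5
        with:
          push: true
          tags: yourname/yourapp:${{ github.sha }}  # placeholder image name
```

A Kubernetes Deployment can then reference the pushed tag, which is roughly the "weekend of setup" the comment describes.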
Senpitio almost 3 years ago

Use what your ops/DevOps team knows.

Managed K8s can easily be used by a startup of any size.

I use it in mine because I'm the ops team as well, and I've been doing k8s for 4 years.
didip almost 3 years ago

The organization must have a need first.

And then, after you have a legitimate need, use a hosted Kubernetes solution. Don't roll your own.

It's really that simple.
systemBuilder almost 3 years ago

Am I supposed to believe these blanket statements from a person who doesn't realize 1e0 = 1e1 = 1e2 = ... = 1?
yibers almost 3 years ago

The title is actually very confusing, and so is the article. After reading it a few times I finally understood that the author is actually *pro* Kubernetes.

He is just saying that if you are in the really early stages of your startup, don't use Kubernetes right away. You will probably want to use it eventually.
bradwood almost 3 years ago

My startup just bypassed all this container stuff and went straight for AWS serverless. If you design it well, it works excellently.

If/when we need long-running workloads we'll go to containers, but thus far we're just rocking out with Lambda, SNS/SQS, and EventBridge.
charles_f almost 3 years ago

I don't get the shit k8s gets. Especially with managed offerings like AKS or Fargate, it's very easy to deploy, having several environments is easy, and maintenance is straightforward.

As a general rule I'm against using complex tech where there's no need, but I feel like we're way past that on k8s.
te_chris almost 3 years ago

GKE Autopilot deserves a mention. If you're going to go k8s, it's pretty close to NoOps.
nickdothutton almost 3 years ago

I'm much more interested in creating the simplified replacement technology for K8s.
mt42or almost 3 years ago

Don't use AWS yet. Cloud setup is so complex that you should avoid using it.
eof almost 3 years ago

I'd just say "don't learn k8s yet." If you know it, it's fine, but it's complicated to get right, so delay if you haven't made that journey yet.
jschrf almost 3 years ago

I never quite got the K8S "it's too complex" hate, but to be fair I haven't scaled it very high.

Sure, it's verbose and there are N levels of abstraction, but it's a declarative API for running foo across multiple environments of bar. I've always wanted this.

I like raw, versioned infrastructure config with no extra crap. I have a little K8S.yml snippet I copy+paste+tweak into repos when I want to throw an ad-hoc experiment into a cluster, and then a bigger setup for IRL projects that looks something like this:

    k8s/
      base/
        api.yml
        web.yml
        worker.yml
        namespace.yml
        ingress.yml
      overlays/
        dev/
          ... config to merge ...
        staging/
          ... config to merge ...
        production/
          ... config to merge ...
      shared/
        ... variable declarations, base config maps, etc ...

Everything gets merged into a manifest.yml, version-stamped, and build-artifacted. Deployment just means applying the config overlays via kustomize based on environment and then pushing out.

If things break, I always have an absolute, pull-the-chute, versioned, formal safe point to go back to: kubectl -n production apply -f manifest.version.yml
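The overlay layout above maps onto kustomize roughly as follows; the file names follow the tree in the comment, while the production patch file is an assumption for illustration:

```yaml
# k8s/base/kustomization.yaml: lists the shared manifests
resources:
  - namespace.yml
  - api.yml
  - web.yml
  - worker.yml
  - ingress.yml
---
# k8s/overlays/production/kustomization.yaml: layers env-specific config on the base
resources:
  - ../../base
patches:
  - path: api-replicas.yml   # hypothetical patch, e.g. bump replicas in production
```

Rendering the merged manifest is then one command per environment, e.g. `kustomize build k8s/overlays/production > manifest.yml`, which is the versioned artifact the comment applies with kubectl.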
roxaaaane almost 3 years ago

Use K8s if you need it, don't use K8s if you don't. It's that simple ¯\_(ツ)_/¯

It's not rocket science. You don't need to read every week's opinion on K8s, and you don't need to write one either.
naiv almost 3 years ago

Could someone ELI5 what 1e0, 1e1, 1e2 stand for?
jsdevtom almost 3 years ago

What about if you need rolling deployments?
darthrupert almost 3 years ago

(2025)
txtai almost 3 years ago

Interesting article, thank you for the insights.
johnwoods almost 3 years ago

It's funny that so many startups are led to thinking that they "need" K8s.