
The etcd operator: Simplify etcd cluster configuration and management

100 points by polvi over 8 years ago

11 comments

darren0 over 8 years ago
I would love to understand the design rationale on why a custom controller is needed to run etcd, as opposed to leveraging existing k8s constructs such as ReplicaSets or PetSets. While this is a very useful piece of technology, it gives me the wrong impression that if you want to run a persistent or "more complicated" workload, then you must develop a significant amount of code for it to work on k8s. I don't believe that is the case, which is why I'm asking why this route was chosen.
hatred over 8 years ago
The concept of custom controllers looks similar to what schedulers are in Mesos. It's nice to see the two communities taking a leaf out of each other's books, e.g. Mesos would introduce experimental support for task groups (aka Pods) in 1.1.

Disclaimer: I work at Mesosphere on Mesos.
ex3ndr over 8 years ago
Can someone clarify some points?

* Isn't etcd2 required to start Kubernetes? I found that if etcd2 is not healthy, or the connection is just temporarily lost, then k8s freezes its scheduling and API. So what if the Operator and etcd2 are running on one node and that node goes down? I also found that etcd2 freezes even when one node is down. Isn't that an unrecoverable situation?

* The k8s/coreos manual recommends keeping etcd2 servers not that far from each other, mostly because etcd2 has very strict network requirements (ping of 5 ms or so) that some pairs of servers couldn't meet.

* What if we lose ALL nodes and it creates an almost-new cluster from backups — what if we need to restore the latest version (not the one from 30 minutes ago)?
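The availability questions above come down to etcd's quorum rule: the cluster makes progress only while a majority of members is reachable, which is why losing nodes can freeze the Kubernetes API that depends on it. A minimal sketch of the arithmetic (my own illustration, not from the thread or the etcd codebase):

```python
# Illustrative sketch: etcd (a Raft-based store) needs a majority
# ("quorum") of members to accept writes. These helpers just show
# the arithmetic; they are not part of any etcd API.

def quorum(members: int) -> int:
    """Votes needed for an etcd cluster to make progress."""
    return members // 2 + 1

def tolerated_failures(members: int) -> int:
    """How many members can fail before the cluster loses quorum."""
    return members - quorum(members)

for n in (1, 3, 5):
    print(f"{n} members: quorum={quorum(n)}, "
          f"tolerates {tolerated_failures(n)} failure(s)")
```

This is why a single-member etcd (or an even-sized cluster) is fragile: one node down on a 1- or 2-member cluster means no quorum, and the dependent k8s control plane stalls until quorum is restored.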
jbpetersen over 8 years ago
Being someone who's been getting more familiar lately with backend engineering and has been trying to make sense of the various options, I've got a strong enough impression of CoreOS that I'm betting my time it'll be dominating the next few years.

I also can't wait to see an open version of AWS Lambda / Google Functions appear.
russell_h over 8 years ago
I've been thinking about implementing a custom controller that would use Third Party Resources as a way to install and manage an application on top of Kubernetes. The way that Kubernetes controllers work (watching a declarative configuration, and "making it so") seems like a great fit for the problem.

It's exciting to see CoreOS working in the same direction — this looks much more elegant than what I would have hacked up.
dantiberian over 8 years ago
This sounds a lot like Joyent's Autopilot Pattern (http://autopilotpattern.io), but will be more integrated with Kubernetes, rather than being agnostic.
adieu over 8 years ago
This is great news. We developed an internal controller that manages the etcd cluster used by the Kubernetes apiserver, also using a Third Party Resource. The control-loop design pattern works really well.
why-el over 8 years ago
Somewhat unrelated, but I am just curious. For those who use etcd (and this is coming from a place of ignorance), does the key layout (which keys are currently stored, how they are structured) get out of hand? Meaning, does it get to a place where a dev working with etcd might not have an idea of what is in etcd at any given time? Or do teams enforce some kind of policy (in documentation or code) that everyone must respect?

I am asking because I was in a situation where I was introduced to other key-value stores, and because the team working with them was big and no process was followed to group all keys in one place, it was hard to know "what is in the store" at any moment, short of exhausting all the entry points in the code.
NegatioN over 8 years ago
I see it mentioned in the article that they have created a tool similar to Chaos Monkey for k8s, but I don't see any resources linking to it.

Will this at some point be available publicly? Although k8s ensures pods are rescheduled, many applications do not handle it well, so I think a lot of teams could benefit from having something like that.
hosh over 8 years ago
This is brilliant. It's like the promise-theory-based convergence tools (CFEngine, Puppet, Chef) on top of K8S primitives. Better yet, the extension works like other K8S addons — you start it up by scheduling the controller pod. That means I could potentially use it in, say, GKE, where I might not have direct control over the kube-master.

I wonder if it is leveraging PetSets. I also wonder how this overlaps or plays with Deis's Helm project.

I'm looking forward to seeing some things implemented like this: Kafka/Zookeeper, PostgreSQL, MongoDB, Vault, to name a few.

I also wonder if it means something like Chef could be retooled as a K8S controller.
otterley over 8 years ago
Where is the functional specification for an Operator? It sounds like a K8S primitive; is that in fact true? If not, why does this post make it sound like one?