Etcd 3.4

87 points by jinqueeny over 5 years ago

6 comments

meddlepal over 5 years ago
I really wish Kubernetes would make its storage backend pluggable, or that the k3s folks would push their work to allow a SQL database as the backend upstream. Then you could just back Kubernetes with some Cloud SQL offering.
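A pluggable backend here would mean the apiserver programs against an abstract key-value contract rather than against etcd's client directly. A hypothetical Go sketch of what such a contract could look like (the interface name and method set are illustrative only, not Kubernetes' actual storage interface):

```go
package storage

import "context"

// Backend is a hypothetical, minimal key-value contract that an
// apiserver-style component could be written against. It illustrates the
// idea of a pluggable store; it is not Kubernetes' real storage.Interface.
type Backend interface {
	Get(ctx context.Context, key string) ([]byte, error)
	Put(ctx context.Context, key string, value []byte) error
	Delete(ctx context.Context, key string) error
	// Watch streams changes under a key prefix. Efficient watches are the
	// part etcd provides natively and a plain SQL table has to emulate.
	Watch(ctx context.Context, prefix string) (<-chan Event, error)
}

// Event describes a single change observed by a Watch stream.
type Event struct {
	Key     string
	Value   []byte
	Deleted bool // true when the key was removed rather than updated
}
```

The k3s work mentioned takes the other route: rather than changing Kubernetes, it keeps the etcd client API and translates it onto SQL underneath (the kine project).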
ec109685 over 5 years ago
One of the failure conditions is having a new follower read data off of the leader as it bootstraps, which adds extra load on the system.

It seems like a follower could pull the initial snapshot off of another follower to start instead?
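The suggestion can be sketched with etcd's clientv3 Maintenance API: ask each endpoint for its status, pick one that is not the leader, and stream the snapshot from it. This only illustrates the idea, not how etcd actually bootstraps a new member; the endpoints and file name are placeholders, and the import path varies with the client version:

```go
package main

import (
	"context"
	"io"
	"log"
	"os"
	"time"

	"go.etcd.io/etcd/clientv3"
)

func main() {
	// Placeholder cluster endpoints.
	endpoints := []string{"10.0.0.1:2379", "10.0.0.2:2379", "10.0.0.3:2379"}

	cli, err := clientv3.New(clientv3.Config{Endpoints: endpoints, DialTimeout: 5 * time.Second})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()
	ctx := context.Background()

	// Find an endpoint that is not currently the leader.
	source := endpoints[0]
	for _, ep := range endpoints {
		st, err := cli.Status(ctx, ep)
		if err == nil && st.Header.MemberId != st.Leader {
			source = ep
			break
		}
	}

	// Open a client against just that follower and stream its snapshot,
	// keeping the bootstrap load off the leader.
	snapCli, err := clientv3.New(clientv3.Config{Endpoints: []string{source}})
	if err != nil {
		log.Fatal(err)
	}
	defer snapCli.Close()

	rc, err := snapCli.Snapshot(ctx)
	if err != nil {
		log.Fatal(err)
	}
	defer rc.Close()

	f, err := os.Create("member.snapshot.db")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	if _, err := io.Copy(f, rc); err != nil {
		log.Fatal(err)
	}
}
```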
MichaelMoser123 over 5 years ago
Why is client-go sending HTTP requests to kube-apiserver? I wonder if a message queue would have been a more reliable and scalable transport option.
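One reason HTTP has been enough in practice: client-go doesn't just poll, it opens long-lived watch streams that push change events to the client, which covers much of what a message queue would provide. A minimal sketch of that watch loop (kubeconfig path and namespace are placeholders; signatures follow recent client-go versions):

```go
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load credentials from a kubeconfig file (path is a placeholder).
	config, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}

	// A watch is a single long-lived HTTP request that streams change
	// events as they happen, rather than repeated polling.
	w, err := clientset.CoreV1().Pods("default").Watch(context.TODO(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	defer w.Stop()

	for ev := range w.ResultChan() {
		fmt.Printf("event: %s on %T\n", ev.Type, ev.Object)
	}
}
```

Real controllers wrap this in informers, which handle reconnects and keep a local cache, so much of the reliability a queue would add is already handled on the client side.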
anonymousJim12 over 5 years ago
Which k8s version will use this version by default? Has it been tested with any current versions?
networkimprov over 5 years ago
Note that the etcd project ignored this report of a data loss/corruption bug on MacOS: https://github.com/etcd-io/bbolt/issues/124
jacques_chester over 5 years ago
> For instance, a flaky (or rejoining) member drops in and out, and starts campaign. This member ends up with higher terms, ignores all incoming messages with lower terms, and sends out messages with higher terms. When the leader receives this message of a higher term, it reverts back to follower. This becomes more disruptive when there's a network partition.

I'm glad this has been fixed, considering that one of the use cases for using partition-tolerant data stores is to tolerate partitions.

Cloud Foundry used earlier versions of etcd and this category of problem was the leading cause of severe outages. To the point that several years of effort were invested to tear it out of everything and replace it with bog-ordinary RDBMSes.

Disclosure: I work for Pivotal, we did a lot of that work, but I wasn't at the front line. Just watching from a safe distance.
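The scenario quoted above is what Raft's Pre-Vote phase is meant to address: before bumping its term and campaigning, a member first checks whether a quorum would actually vote for it, so a flaky rejoiner doesn't force a healthy leader to step down. In etcd's raft library this is a single flag on the node config; a minimal sketch (node ID, tick values, and the single-node peer list are placeholders, and the import path differs across etcd releases):

```go
package main

import (
	"go.etcd.io/etcd/raft"
)

func main() {
	storage := raft.NewMemoryStorage()

	cfg := &raft.Config{
		ID:              0x01, // placeholder node ID
		ElectionTick:    10,   // placeholder tick values
		HeartbeatTick:   1,
		Storage:         storage,
		MaxSizePerMsg:   1024 * 1024,
		MaxInflightMsgs: 256,
		// PreVote makes a candidate confirm it could win an election
		// before incrementing its term, so a partitioned or rejoining
		// member no longer disrupts a healthy leader.
		PreVote: true,
	}

	n := raft.StartNode(cfg, []raft.Peer{{ID: 0x01}})
	defer n.Stop()
}
```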