Ask HN: Is anyone running Kubernetes with Persistent Volumes in production?

19 points by nickjackson over 8 years ago
If so...

* What storage backend and environment are you using?
* What is your use case for persistent volumes?
* How well does it perform for your needs?
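For readers following along, the "persistent volumes" in question are Kubernetes' PersistentVolume and PersistentVolumeClaim objects. A minimal statically provisioned pair might look like the sketch below; the NFS server, path, and object names are placeholders, not anything from the thread:

```yaml
# A cluster-level PersistentVolume backed by NFS (server/path are placeholders).
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-example
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany          # NFS allows many nodes to mount read-write
  nfs:
    server: nfs.example.com  # placeholder
    path: /exports/data      # placeholder
---
# A namespaced claim that binds to a matching PersistentVolume.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-example
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""       # empty string: bind only to a static PV
  resources:
    requests:
      storage: 5Gi
```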

3 comments

smarterclayton, over 8 years ago
I can speak from the OpenShift perspective (which is just Kube as far as storage is concerned):

OpenShift Online and Dedicated (both hosted Kube/OpenShift) use AWS and EBS persistent volumes for elasticsearch and Cassandra storage, which is moderately high IOPS although not "all things tuned for performance". Most small non-cloud OpenShift deployments I know of are using NFS for medium/large shared storage - file- or data-sharing workloads. There are several medium-sized deployments on OpenStack using Ceph under Cinder, and their experience is roughly comparable with AWS EBS and GCE disks.

Basically, if you need to fine-tune many details of the storage medium and are carefully planning for IOPS and latency, Kube makes that slightly harder because it abstracts the mounting/placement decisions. It's definitely possible, but if you're not dealing with tens of apps or more it might be overkill.

OpenShift Online Dev Preview (the free 30-day trial env) is Kube 1.2+ and uses the upcoming dynamic provisioning feature (which creates PVs on demand) for many thousands of small ~1GB volumes. Remember, though, that the more volumes you mount on any node, the less network bandwidth you have available to the EBS backplane, so Kube doesn't save you from having to understand your storage infrastructure in detail.

Also, be very careful using NFS with replication controllers - the guarantee on RCs is that there are *at least* N replicas, not at most N, so you can and will have two or more pods running and talking to NFS if you have an RC of scale 1.

Edit: typos
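The dynamic provisioning smarterclayton mentions is expressed on current Kubernetes as a StorageClass plus a claim (the Kube 1.2-era feature he's describing used a beta annotation rather than `storageClassName`). A minimal sketch, assuming the in-tree AWS EBS provisioner; the class and claim names are illustrative:

```yaml
# A StorageClass that tells Kubernetes how to create EBS volumes on demand.
# The name "gp2-example" is a placeholder, not from the thread.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp2-example
provisioner: kubernetes.io/aws-ebs   # in-tree AWS EBS provisioner
parameters:
  type: gp2                          # EBS volume type
---
# A claim against that class; a matching PV is created automatically.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: es-data
spec:
  accessModes:
    - ReadWriteOnce                  # EBS attaches to one node at a time
  storageClassName: gp2-example
  resources:
    requests:
      storage: 1Gi                   # on the order of the "~1GB volumes" above
```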
lobster_johnson, over 8 years ago
It's worth warning that volumes are buggy, particularly on AWS. This one in particular is worth keeping in mind: https://github.com/kubernetes/kubernetes/issues/29324
hijinks, over 8 years ago
I used it with EBS volumes - for the mongodb data dir and also the rabbitmq data dir - and it works wonderfully. If a pod fails, the volume detaches and the pod comes right back up within a few minutes.

We only run a single mongodb pod and a single rabbitmq pod, since it isn't mission critical if they go down. We had the mongodb host fail, and by the time I got paged and woke up, the OK page had already arrived: Kubernetes did its job and brought it back online.
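The failover pattern hijinks describes - one replica whose pod gets rescheduled and whose EBS volume follows it - looks roughly like this as a modern Deployment. This is a sketch, not from the thread; the claim name `mongo-data` and the image tag are illustrative:

```yaml
# Single-replica Deployment whose pod mounts an EBS-backed claim.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodb
spec:
  replicas: 1
  strategy:
    type: Recreate              # kill the old pod before starting a new one,
                                # so the EBS volume can detach and reattach
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
        - name: mongodb
          image: mongo:4.4
          volumeMounts:
            - name: data
              mountPath: /data/db     # MongoDB's default data directory
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: mongo-data     # placeholder PVC name
```

`Recreate` matters here for the same reason as smarterclayton's RC warning: a rolling update would briefly run two pods, and a ReadWriteOnce EBS volume cannot attach to both.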