Ask HN: Is anyone running Kubernetes with Persistent Volumes in production?

19 points by nickjackson over 8 years ago
If so...

* What storage backend and environment are you using?
* What is your use case for persistent volumes?
* How well does it perform for your needs?

3 comments

smarterclayton over 8 years ago
I can speak from the OpenShift perspective (which is just Kube as far as storage is concerned):

OpenShift Online and Dedicated (both hosted Kube/OpenShift) use AWS and EBS persistent volumes for elasticsearch and Cassandra storage, which is moderately high IOPs although not "all things tuned for performance". Most small non-cloud OpenShift deployments I know of are using NFS for medium / large shared storage - file or data sharing workloads. There are several medium-sized deployments on OpenStack using Ceph under Cinder, and their experience is roughly comparable with AWS EBS and GCE disks.

Basically, if you need to fine-tune many details of the storage medium or are carefully planning for IOPs and latency, Kube makes that slightly harder to plan because it's abstracting the mounting / placement decisions. It's definitely possible, but if you're not dealing with tens of apps or more it might be overkill.

OpenShift Online Dev Preview (the free 30-day trial env) is Kube 1.2+ and uses the upcoming dynamic provisioning feature (which creates PVs on demand); it is used for many thousands of small ~1GB volumes. Remember though, the more volumes you mount to any node, the less network bandwidth you have available to the EBS backplane, so Kube doesn't prevent you from having to understand your storage infra in detail.

Also, be very careful using NFS with replication controllers - the guarantee on RCs is that there are *at least* N replicas, not at most N replicas, so you can and will have two or more pods running and talking to NFS even if you have an RC of scale 1.

Edit: typos
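For readers unfamiliar with the dynamic provisioning mentioned above: in current Kubernetes it is typically driven by a PersistentVolumeClaim that names a storage class, and the cluster creates the backing volume on demand. A minimal sketch follows; the class name "gp2", the claim name, and the 1Gi size are illustrative assumptions, not details from the comment (which predates the stable storageClassName field).

    # Minimal sketch of a dynamically provisioned claim.
    # "gp2" and "es-data" are assumed names for illustration only.
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: es-data
    spec:
      accessModes:
        - ReadWriteOnce
      storageClassName: gp2
      resources:
        requests:
          storage: 1Gi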
lobster_johnson over 8 years ago
It's worth warning that volumes are buggy, particularly on AWS. This one in particular is worth keeping in mind: https://github.com/kubernetes/kubernetes/issues/29324
hijinks over 8 years ago
I used it with EBS volumes - the mongodb datadir and also the rabbitmq datadir - and it works wonderfully. If a pod fails, the volume detaches and then comes right back up within a few minutes.

We only have a single mongodb and rabbitmq pod since they aren't mission critical if they go down. We had the mongodb host fail, and by the time I got paged and woke up, the OK page had already come through since Kubernetes did its job and brought it back online.
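The pattern described above usually amounts to a single-replica workload whose data directory is mounted from a PVC, so the EBS volume can reattach wherever the pod is rescheduled. A rough sketch, with assumed names ("mongodb", "mongo-data") not taken from the comment:

    # Illustrative only: one-replica Deployment with its data directory
    # on a PVC (assumed to be named "mongo-data"), so the volume can
    # follow the pod if the node fails and the pod is rescheduled.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: mongodb
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: mongodb
      template:
        metadata:
          labels:
            app: mongodb
        spec:
          containers:
            - name: mongodb
              image: mongo
              volumeMounts:
                - name: data
                  mountPath: /data/db
          volumes:
            - name: data
              persistentVolumeClaim:
                claimName: mongo-data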