Why GlusterFS should not be integrated with OpenStack

33 points by grk over 11 years ago

4 comments

notacoward over 11 years ago
GlusterFS developer here. The OP is extremely misleading, so I'll try to set the record straight.

(1) Granted, snapshots (volume or file level) aren't implemented yet. OTOH, there are two projects for file-level snapshots that are far enough along to have patches in either the main review queue or the community forge. Volume-level snapshots are a little further behind. Unsurprisingly, snapshots in a distributed filesystem are hard, and we're determined to get them right before we foist some half-baked result on users and risk losing their data.

(2) The author seems very confused about the relationship between bricks (storage units) and servers used for mounting. The mount server is used *once* to fetch a configuration, then the client connects directly to the bricks. There is no need to specify all of the bricks on the mount command; one need only specify enough servers - two or three - to handle one being down *at mount time*. RRDNS can also help here.

(3) Lack of support for login/password authentication. This has not been true in the I/O path since forever; it only affects the CLI, which should only be run from the servers themselves (or similarly secure hosts) anyway. It should not be run from arbitrary hosts. Adding full SSL-based auth is already an accepted feature for GlusterFS 3.5 and some of the patches are already in progress. Other management interfaces already have stronger auth.

(4) Volumes can be mounted R/W from many locations. This is actually a strength, since volumes are files. Unlike some alternatives, GlusterFS provides true multi-protocol access - not just different silos for different interfaces within the same infrastructure but the *same data* accessible via (deep breath) native protocol, NFS, SMB, Swift, Cinder, Hadoop FileSystem API, or raw C API. It's up to the cloud infrastructure (e.g. Nova) not to mount the same block-storage device from multiple locations, *just as with every alternative*.

(5) What's even more damning than what the author says is what the author doesn't say. There are benefits to having full POSIX semantics so that hundreds of thousands of programs and scripts that don't speak other storage APIs can use the data. There are benefits to having the same data available through many protocols. There are benefits to having data that's shared at a granularity finer than whole-object GET and PUT, with familiar permissions and ACLs. There are benefits to having a system where any new feature - e.g. georeplication, erasure coding, deduplication - immediately becomes available across all access protocols. Every performance comparison I've seen vs. obvious alternatives has either favored GlusterFS or revealed cheating (e.g. buffering locally or throwing away O_SYNC) by the competitor. Or both. Of course, the OP has already made up his mind so he doesn't mention any of this.

It's perfectly fine that the author prefers something else. He mentions Ceph. I love Ceph. I also love XtreemFS, which hardly anybody seems to know about and that's a shame. We're all on the same side, promoting open-source horizontally scalable filesystems vs. worse alternatives - proprietary storage, non-scalable storage, storage that can't be mounted and used in familiar ways by normal users. When we've won that battle we can fight over the spoils. ;) The point is that *even for a Cinder use case* the author's preferences might not apply to anyone else, and they certainly don't apply to many of the more general use cases that all of these systems are designed to support.
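For readers unfamiliar with the mount behaviour described in point (2), a minimal sketch follows. The host names (gfs1, gfs2, gfs3) and volume name (vol0) are made up for illustration, and the exact option name has varied across GlusterFS releases (older clients used the singular backupvolfile-server):

    # Hypothetical hosts gfs1/gfs2/gfs3 and volume vol0.
    # gfs1 is contacted once to fetch the volume configuration; after that the
    # client talks to the bricks directly. The backup option only matters if
    # gfs1 happens to be down at mount time.
    mount -t glusterfs -o backup-volfile-servers=gfs2:gfs3 gfs1:/vol0 /mnt/vol0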
j_s over 11 years ago
Apparently there are nearly 20 supported storage backends for OpenStack; this article is discussing the shortcomings of one of them. Not sure why GlusterFS is singled out. https://wiki.openstack.org/wiki/CinderSupportMatrix
epistasis over 11 years ago
If I understand this correctly, the complaints are:

- Terminology -- Seriously? It's not a very strong complaint.

- Snapshotting -- you have to use qcow2 for this rather than native filesystem support for snapshotting an individual file.

- Have to use Layer 2 separation for security -- but this should be done anyway, shouldn't it? There's no reason to trust this to application-level security, and if there's any need at all for this type of security, L2 is the only way to go.

Personally, I think Ceph is the future, and I also have personal reasons for wanting Ceph to succeed. Having dealt a bit with both communities, I think it's clear that Ceph is going to be the standard go-to distributed filesystem soon, and I hope to switch our Gluster filesystems to it soon (come on, POSIX FS layer!). So I'm somewhat in the bag for Ceph.

However, I don't see these complaints as very strong. I'm only a dabbler with OpenStack, but fairly experienced with Gluster and its warts.
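As a rough illustration of the qcow2-based snapshotting mentioned in the second bullet above (image names are hypothetical; option details differ slightly between qemu versions):

    # Internal snapshot, stored inside the qcow2 file itself:
    qemu-img snapshot -c before-upgrade disk.qcow2
    qemu-img snapshot -l disk.qcow2          # list snapshots
    # External snapshot: a new overlay image layered on a read-only backing file:
    qemu-img create -f qcow2 -b base.qcow2 -F qcow2 overlay.qcow2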
viraptor over 11 years ago
> Compute node downloads such image, puts it on a local disk and boots a VM. This method makes it impossible to use the highly desired live migration

That's not true. Live migration is possible both with glance images and cinder volumes.
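For context, a sketch of how live migration is typically invoked with the nova CLI of that era; the instance name (web1) and target host (compute02) are placeholders:

    # Volume-backed (Cinder) instance, with shared storage or no local disk to copy:
    nova live-migration web1 compute02
    # Image-backed instance on local disk: block migration copies the disk as well:
    nova live-migration --block-migrate web1 compute02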