From the article:

> NFS lets you share a directory between multiple servers (think of it as Dropbox for servers).

So, ignoring that NFS is nothing like Dropbox, there's nothing wrong *per se* with using NFS on a cloud setup for shared storage, but this setup assumes that your NFS server will always be up. Anyone who has worked with NFS servers will know that this isn't always the case. (Who here has lost a day when an NFS-mounted home directory wouldn't load?)

And you don't want to deal with a down NFS server when you're trying to auto-scale. Somewhere you'll always have a single point of failure, but I've always tried to make NFS not be it.
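For what it's worth, a lot of that "lost a day" pain comes from the default hard-mount behavior, where clients retry forever and hang. A soft mount with short timeouts at least lets things fail with an error instead of wedging the box. A rough sketch of what that might look like in /etc/fstab (the server name and mount point here are made up):

    # Hypothetical fstab entry -- "filer01" and /srv/shared are placeholders.
    # soft: return an I/O error after the retries are exhausted instead of hanging forever
    # timeo=100: wait 10 seconds before retrying (timeo is in tenths of a second)
    # retrans=3: give up after 3 retransmissions
    # bg: if the initial mount fails at boot, keep retrying in the background
    filer01:/export/shared  /srv/shared  nfs  soft,timeo=100,retrans=3,bg  0  0

The usual caveat applies: soft mounts can silently drop writes when the server is flaky, so this is a stopgap against hung clients, not a fix for the single-point-of-failure problem.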
NFS is a single point of failure. If something goes wrong on your NFS host, you'll break your entire cluster. That obvious oversight combined with calling it "Dropbox for servers" makes me think that whoever designed this doesn't understand NFS.
Well, hold on with the NFS hate, everyone. I worked for a (very large, very deep-pocketed) web hosting company around 10 years ago, and they used NFS to deploy their software, configs (this was just before Puppet started getting popular), and user data. This worked VERY well. Granted, they were running on some pretty heavy iron (a couple dozen NetApp filers, all clustered 11 ways to Sunday) to make sure that the NFS facilities stayed up, but If You Know What You're Doing(tm) and err on the 'fast' and 'good' sides of the 'fast, cheap, and good - pick two' aphorism, NFS is quite useful.