Back in the 90s, before package managers for Linux/BSD and when disk drives were fairly pricey, it was common to have, say, a network of commercial *nix workstations with NFS-mounted binaries for stuff like gcc, LaTeX, Emacs, X utilities, etc.

This was typically done with a network of symlinks. As best I can remember, you'd have /opt/local exported from the server, with all the open source goodies. It was arranged something like:

    /opt/local/emacs-19.1.7/bin/emacs
    /opt/local/emacs-19.1.8/bin/emacs
    /opt/local/emacs -> /opt/local/emacs-19.1.7
    <....>

All the clients would have a local /usr/local with another symlink, /usr/local/bin/emacs, pointing at /opt/local/emacs/bin/emacs.

When you upgraded from 19.1.7 to 19.1.8, you'd just move the symlink on the server. You'd then wait a week or two and garbage-collect the old version (after rsh'ing around to the clients to make sure none had the old version open, if your users were lucky).

The local network of symlinks into /opt/local was maintained by something like cfengine, which ran at install time and nightly to update configs (e.g., for newly supported stuff, or new binaries in new versions of existing software).

If you screwed this up and replaced a binary out from under a user, they were rather unhappy.
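In modern shell terms, the server-side upgrade looked roughly like this; the paths and version numbers are illustrative, not what any particular site actually ran:

    #!/bin/sh
    # On the server: the new version was installed alongside the old
    # one, so /opt/local/emacs-19.1.8 is already fully populated here.

    # Repoint the "current" symlink. -n keeps ln from following the
    # existing link and creating the new one inside the old directory.
    ln -sfn /opt/local/emacs-19.1.8 /opt/local/emacs

    # A week or two later, once no client still has the old binaries
    # open, garbage-collect the old tree.
    rm -rf /opt/local/emacs-19.1.7

(Note that ln -sfn unlinks and recreates the link, so there's a tiny window with no symlink at all; the rename-based mv -T swap mentioned elsewhere in this thread closes it.)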
As a developer and operator of a distributed filesystem, and specifically one that might have caused Rachel some pain, I kind of agree with her. If you can run with local code and data, do so. Even if you think you can't, think again. Only run from data on the servers if you have no choice, e.g., if the dataset is too large to copy or you need simultaneous access from multiple clients (which had better be read-only at that point). Not only will you be happier, but so will the people who have to operate that distributed filesystem in the face of many users all abusing the hell out of every operation in the POSIX/VFS API.
Rachel's advice, "Kick NFS to the curb", is good. My last startup, blekko (2007-now), managed to have no NFS usage, ever. Didn't miss it. Peak of 50 devs, 1500 servers, no NFS.

Even before that, in a supercomputing context (2002-), when writing MPI "wrapper" software that would be used on end-user clusters with all kinds of unfortunate network setups, I avoided a lot of trouble by copying the executable to the local node before running it.

This isn't a new issue. And the traditional solutions work.
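The copy-before-run trick is simple enough to sketch; the paths here are made up, and the real wrapper dealt with MPI launch details this ignores:

    #!/bin/sh
    # Copy the executable off the shared filesystem onto node-local
    # disk, then exec the local copy. The running process no longer
    # cares if the network mount hiccups or the remote file changes.
    SRC=/shared/apps/solver            # hypothetical NFS-mounted binary
    LOCAL=$(mktemp /tmp/solver.XXXXXX)
    cp "$SRC" "$LOCAL"
    chmod +x "$LOCAL"
    exec "$LOCAL" "$@"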
rsync, ln -s, and mv -T always worked for me. NFS presents so many problems that I almost never deploy it anymore, given that copies of everything are so cheap via various cloud storage solutions.
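For anyone who hasn't run into it, the idiom goes something like this (hypothetical paths; mv -T is the GNU coreutils flag that treats the destination as a plain file rather than descending into it, which is what makes the final rename atomic):

    #!/bin/sh
    set -e
    # Stage the new release into its own versioned directory.
    rsync -a build/ /srv/app/releases/2024-01-02/

    # Create the new symlink under a temporary name...
    ln -s /srv/app/releases/2024-01-02 /srv/app/current.tmp

    # ...then rename it over the live one. rename(2) is atomic, so
    # readers see the old tree or the new tree, never a missing one.
    mv -T /srv/app/current.tmp /srv/app/current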
NFS has plenty of good uses. Maildir works fine, sharing large datasets works fine, sharing static assets works fine, and it's widely supported. No need to throw the baby out with the bathwater. Just stop putting the baby in boiling water.

NFS is a way to share files, not a reliable application platform. There are lots of options you can enable to make it more stable, but it simply isn't designed as an application platform, so anything built on it that way is going to be a hack.

And keep in mind that NFS is not secure. Supposedly it can be secured, but I have never seen that done in a production environment. Running an application over it is a bad idea from a security standpoint, and home directories over it are a terrible idea from both a security and a stability standpoint.

Finally, don't change a file in place and expect stability. If you need a new version of a file with some change to it, make a new file and use that new file path.
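A sketch of that last point, with hypothetical paths and a hypothetical generate_config helper; the idea is that anything readable over NFS is immutable once published, and an update means a new path:

    #!/bin/sh
    set -e
    # Build the new version under a temporary name on the same export,
    # so the final rename stays within one filesystem.
    generate_config > /nfs/config/app.conf.v42.tmp

    # Publish under a brand-new versioned path. Old versions are never
    # touched, so clients with stale caches still read consistent data.
    mv /nfs/config/app.conf.v42.tmp /nfs/config/app.conf.v42

Consumers then switch to the new path out of band, and old versions get cleaned up once nothing references them.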
My favorite NFS story: many, many years ago (mid 90s) I was part of the staff for a physics department's IT group. At the time, affordable Alpha workstations had just started appearing on the market, while most users still relied on "semi-dumb" X terminals/thin clients. Everything was attached to our 10Base2 network, capable of a BLAZING 10Mb/s.

We started seeing weird whole-display freezes on bunches of the X terminals one day. It took a while to correlate them with times when one of the newly installed Alphas in a far-off office was doing a lot of file I/O over NFS, saturating the entire coax segment and freezing the displays...

IIRC the "upgrade to 100BaseT" project got scooted WAY up on the priority list right after that. :)
Previous post for context: https://rachelbythebay.com/w/2018/03/15/core/