> it looks like Plan 9 failed simply because it fell short of being a compelling enough improvement on Unix to displace its ancestor.<p>I really don't think so. Unix caught on because it was (initially) available for free with source code. It was freely available at the time because the Bell System, as a regulated monopoly, was prohibited from getting into the software business. Universities loved it as an object of study, and it spawned many derivatives, also freely available. After the breakup of the Bell System, AT&T started marketing Unix.<p>Plan 9, in contrast, was not released for free and had restrictive licensing, because Bell Labs was by then under AT&T. I think it is hard to overstate the impact of this difference.<p>Linux became popular first because it was free, then because it invited hacking, and then because the combination of the two caused a snowball effect.<p>I would like to add that Plan 9 is in fact very compelling, but its advantages are maybe hard to appreciate. Another big issue was that it initially supported a limited set of hardware, because it didn't use the hardware BIOS.
Go has connections back to Plan 9. Pike and Thompson are credited as designers of Plan 9, and Russ Cox did a ton of work on it. (Pike's wife Renée French drew Plan 9's bunny mascot Glenda, and the Go gopher.) Go's toolchain was written in Plan 9-style C until it became self-hosting, and I think it even inherited that quirky asm syntax the poor illumos dude didn't like. Plan 9 introduced UTF-8 and of course Go uses it, though most new projects today would use UTF-8 anyway.<p>I wonder if the team members' experience designing an OS made them a bit bolder about doing some things differently from the ecosystem around them, like starting with their own ABI (everything on the variable-sized stack) and static linking.<p>There's certainly a focus on networked uses in both Plan 9 and Go; the lightweight threading (for apps that juggle a lot of clients but spend a lot of time waiting on other machines) and the servers in the stdlib (including HTTP/2 by default in 1.6!) are part of that.<p>"Everything is a file" makes me think of interfaces like io.Reader/Writer in Go. I remember being impressed as a newbie by how elementary it was to string together a pipeline. You can string together pipelines fine in other languages too, but I still think Go does a pretty good job of keeping it simple (a couple of method definitions get you started) yet clear on the essentials (when things block, what errors look like).<p>Anyhow, I'd really love to hear more about the connections back to Plan 9 from someone who knows about them.
<p><pre><code> > Some Plan 9 ideas have been absorbed into modern Unixes, particularly the more
> innovative open-source versions. FreeBSD has a /proc file system modeled
> exactly on that of Plan 9 that can be used to query or control running
> processes. FreeBSD's rfork(2) and Linux's clone(2) system calls are modeled on
> Plan 9's rfork(2). Linux's /proc file system, in addition to presenting process
> information, holds a variety of synthesized Plan 9-like device files used to
> query and control kernel internals using predominantly textual interfaces.
> Experimental 2003 versions of Linux are implementing per-process mount points,
> a long step toward Plan 9's private namespaces. The various open-source Unixes
> are all moving toward systemwide support for UTF-8, an encoding actually
> invented for Plan 9.
</code></pre>
This is interesting. Anyone know of other ways Plan9 has influenced Linux etc. since 2003?
Plan 9's spiritual successor is Inferno, which is open source.
<a href="https://en.wikipedia.org/wiki/Inferno_%28operating_system%29" rel="nofollow">https://en.wikipedia.org/wiki/Inferno_%28operating_system%29</a><p>Inferno seems like the obvious choice for the IoT.
One of the core problems of Unix is this focus on textual streams.<p>Every configuration file, every proc-style file, every interchange format ends up with its own unique take on what format is easiest for it to present or consume. Every program has its own text parser and generator, which is generally the minimum needed to deal with the text it assumes.<p>At some point, taking a bird's-eye view of all of this, it just turns into an unpredictable and insecure mess.
While I agree with the article's reasoning, I've never seen the performance argument made.
Could it be that the overhead of Plan 9's "everything is a file" abstraction was too much to handle compared with the more pragmatic UNIX sockets?
So perhaps Plan 9 can at least serve as a clear definition of where Linux ought to move over time. I often work with old software this way: first I try to figure out what I actually want, regardless of the limitations of existing software. Then I look at what we have and try to figure out whether there is a gradual path from it to the ultimate goal. I prefer this over incremental improvements without any clear end goal.