There was a strange and mutually supporting pair of ideas in the Plan 9 community at the time:<p>1) "Shared libraries are bogus."<p>2) "Anyone who likes normal-looking user interfaces rather than plain boxes with text in them is a poopipants."<p>Both of these propositions are contentious to say the least, but what bothered me was that the two propositions were treated as mutually supporting while being (to my mind) orthogonal. The obvious and most compelling examples of shared libraries on the Unixen at the time were all the various UI libraries (Motif and various other abominations; all of them huge and unwieldy). To buy into the Plan 9 idea that shared libraries didn't do anything worth mentioning, it seemed necessary to accept that these libraries were obviously Completely Unnecessary.<p>I'm sure it's possible to design a better UI library (or maybe even a wacky user-level file system for user interfaces; in fact, that was my honours project in 1993!), but <i>at the time</i> the way people made interfaces that looked vaguely like what other people expected computer programs to look like was to use big-ass shared libraries on Linux.<p>This was <i>also</i> the way (dragging in one of those godawful blobs like Motif etc.) that anyone might have exerted themselves to port across a (not completely pitiful) web browser, but the degree of aggressive disinterest in supporting Other People's Code was extreme (Howard Trickey's "ANSI POSIX Environment" or "APE" didn't get much love as far as I could tell).<p>It was quite undignified to watch people struggling with text-based web browsers or firing up graphical ones on non-P9 boxes because of the inability to support the web at the time.
Most here seem to know that the motivation for adding DLLs to unix was to make it possible for the X windowing system to fit in the memory of a computer of that time, but many comment writers here seem not to know something that the participants in the discussion that is the OP all knew:<p>Plan 9 has an alternative method for sharing code among processes, namely the 9P protocol, and consequently never needed -- and never used -- DLLs. So for example instead of dynamically linking to Xlib, on Plan 9 a program that wanted to display a GUI used 9P to talk to the display server, which is loosely analogous to a Unix process listening on a socket.
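To give a rough feel for that (a simplified sketch, not real production code; the message layout roughly follows what the Plan 9 mouse(3) page describes, and real programs would normally go through libdraw/libevent helpers rather than raw reads): a program gets mouse events by reading a file that the system serves over 9P, instead of calling into a linked-in windowing library.<p><pre><code>
#include <u.h>
#include <libc.h>

void
main(void)
{
	int fd, n, x, y, buttons;
	char buf[49];

	/* /dev/mouse is a file served over 9P, not a library entry point */
	fd = open("/dev/mouse", OREAD);
	if(fd < 0)
		sysfatal("open /dev/mouse: %r");

	for(;;){
		n = read(fd, buf, sizeof buf);	/* one event per read */
		if(n <= 0)
			break;
		/* each event: 'm' then fixed-width decimal x, y, buttons, msec */
		if(buf[0] == 'm'){
			x = atoi(buf+1);
			y = atoi(buf+1+12);
			buttons = atoi(buf+1+24);
			print("mouse %d,%d buttons %d\n", x, y, buttons);
		}
	}
	exits(nil);
}
</code></pre>
The point being that the client only speaks the protocol; the code behind /dev/mouse lives in the server, not in the client binary.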
SunOS before 4.0, when it still used SunView¹ instead of X11, did not have dynamic linking. Hence this email rant by John Rose titled <i>Pros and Cons of Suns</i> from 1987 (as included in the preface of <i>The UNIX-HATERS Handbook</i>²):<p>[…]<p><i>What has happened? Two things, apparently. One is that when I created my custom patch to the window system, to send mouse clicks to Emacs, I created another massive 3/4 megabyte binary, which doesn’t share space with the standard Sun window applications (“tools”).</i><p><i>This means that instead of one huge mass of shared object code running the window system, and taking up space on my paging disk, I had two such huge masses, identical except for a few pages of code. So I paid a megabyte of swap space for the privilege of using a mouse with my editor. (Emacs itself is a third large mass.) The Sun kernel was just plain running out of room. Every trivial hack you make to the window system replicates the entire window system.</i><p>[…]<p>1. <a href="https://en.wikipedia.org/wiki/SunView" rel="nofollow">https://en.wikipedia.org/wiki/SunView</a><p>2. <a href="https://web.mit.edu/~simsong/www/ugh.pdf" rel="nofollow">https://web.mit.edu/~simsong/www/ugh.pdf</a>
This is actually an area of very current research. We have implemented a form of software multiplexing that achieves the code size benefits of dynamically linked libraries, without the associated complications (missing dependencies, slow startup times, security vulnerabilities, etc.). Our approach works even where build systems support only dynamic and not static linking.<p>Our tool, allmux, merges independent programs into a single executable and links an IR-level implementation of application code with its libraries, before native code generation.<p>I would love to go into more detail and answer questions, but at the moment I'm entirely consumed with completing my prelim examination. Instead, please see our 2018 publication "Software Multiplexing: Share Your Libraries and Statically Link Them Too" [1].<p>1: <a href="https://wdtz.org/files/oopsla18-allmux-dietz.pdf" rel="nofollow">https://wdtz.org/files/oopsla18-allmux-dietz.pdf</a>
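For readers who want a quick intuition before reading the paper: the effect is loosely in the spirit of the old multi-call-binary trick (busybox-style), where several programs share one executable image and dispatch on the name they were invoked under. The sketch below is only that trick, with invented tool names; allmux itself does the merging automatically at the IR level, before native code generation.<p><pre><code>
/* multi.c -- a hedged illustration of programs sharing one executable.
 * Symlink (or hard-link) "hello" and "echo" to the built binary. */
#include <stdio.h>
#include <string.h>

static int tool_hello(int argc, char **argv) {
    (void)argc; (void)argv;
    puts("hello");
    return 0;
}

static int tool_echo(int argc, char **argv) {
    if (argc < 2) { putchar('\n'); return 0; }
    for (int i = 1; i < argc; i++)
        printf("%s%c", argv[i], i + 1 < argc ? ' ' : '\n');
    return 0;
}

int main(int argc, char **argv) {
    /* Dispatch on the invocation name, ignoring any leading path. */
    const char *slash = strrchr(argv[0], '/');
    const char *name = slash ? slash + 1 : argv[0];

    if (strcmp(name, "hello") == 0) return tool_hello(argc, argv);
    if (strcmp(name, "echo") == 0)  return tool_echo(argc, argv);
    fprintf(stderr, "unknown tool: %s\n", name);
    return 1;
}
</code></pre>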
The page is down for me but archive.org comes to the rescue:<p><a href="https://web.archive.org/web/20190215103117/https://9p.io/wiki/plan9/why_static/index.html" rel="nofollow">https://web.archive.org/web/20190215103117/https://9p.io/wik...</a><p>Or if you prefer google groups: <a href="https://groups.google.com/forum/#!topic/comp.os.plan9/x3s1Ibaj_l8%5B51-75%5D" rel="nofollow">https://groups.google.com/forum/#!topic/comp.os.plan9/x3s1Ib...</a><p>The headline could use a "(2004)" suffix.
In Linux, if libssl is compromised, you install a new libssl. In Plan 9, if libssl is compromised, you re-install Plan 9. That's static linking for you.
The new model of One Version, all updated together, is interesting in this context. Examples are iOS, Chrome, Firefox, and node_modules. All super complicated with many dependencies. Update everything, fix broken stuff. Only maintain the one blessed dependency graph.<p>If you report an iOS or Chrome bug where you tried to revert a library upgrade and something broke, they'll just mark it "Won't fix: omg never ever look at this".<p>The dependency graph when everyone isn't updating all at once is brutal. Half of Unix life is/was "well I need to update X, but can't because Y depends on old X. Now we'll just create this special environment/virtualenv/visor/vm with exactly the brittle dependency graph we need and then update it, um, never."<p>We complain about One Version/Evergreen, and should, but it's got huge advantages. And might be an indicator that testing surface is the real complexity constraint.<p>One Version's success is a good indication that Plan 9 was at least not totally wrong.
Shared libraries are a pain for sure. They also have a lot of really nice advantages, including:<p><pre><code> - You can upgrade core functionality in one location
- You can fix security bugs without needing to re-install the world
- Overall, they take up less disk space and RAM
- They can use much less CPU cache, which is significant today
</code></pre>
The cache aspect is one that I'm surprised not to see people talk about more. Why would I want to blow out my CPU cache loading 20 instances of libSSL? That slows down performance of the entire system.
This is far too narrow a view. It's not a question of whether to dynamically link or not, but WHERE and HOW to dynamically link.<p>Think about it: If you were to force absolutely everything to be statically linked, your KDE app would have to include the ENTIRE KDE library suite, as well as the Qt libraries it's based on, as well as the X window libraries those are based on, etc., etc. You'd quickly end up with a calculator app that's hundreds of megabytes.<p>But let's not stop there, because the linkage to the kernel is also dynamic, which is a no-no. So every app would now need to be linked to a specific kernel, and include all the kernel code.<p>Now imagine you upgraded the kernel. You'd have to rebuild EVERY SINGLE THING on the system against that new kernel, and likewise for a new version of KDE or Qt or X or anything in between.<p>The kernel boundary is a form of dynamic linkage for a reason. Same goes for IPC. Dynamic linkage is useful and necessary; just not at such a microscopic level as it once was due to size constraints.<p>The key is not to eliminate dynamic linkage, but rather to apply it with discretion, at carefully defined boundaries.
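To make the kernel-linkage point concrete (a Linux-specific sketch using glibc's syscall() wrapper; other systems differ): even a fully statically linked binary carries only syscall numbers and a calling convention, and the code behind them is whatever kernel happens to be running.<p><pre><code>
/* static_still_dynamic.c -- even when built with `cc -static`, the write
 * below is resolved at run time by the running kernel, not at link time. */
#include <unistd.h>
#include <sys/syscall.h>

int main(void) {
    static const char msg[] = "the implementation lives in the kernel\n";
    /* The binary only embeds SYS_write (a number); upgrading the kernel
     * changes the code this calls without relinking the program. */
    syscall(SYS_write, STDOUT_FILENO, msg, sizeof msg - 1);
    return 0;
}
</code></pre>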
This isn't so much an answer for 9p as a description of why GCC symbol versioning is confusing.<p>> The symbol versioning breaks assumptions users have about how shared libraries work -- that they provide a link to one version of a function and if you replace the library all the programs get fixed. I've seen this problem in practice, for both naive users and very non-naive sysadmins.
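For context, here is roughly what that mechanism looks like from the library author's side (a minimal GNU-toolchain sketch; the library name, version tags, and version script are invented for illustration). A versioned library can carry several implementations of the same symbol, and an already-linked executable keeps resolving to the version it was linked against, which is exactly the behaviour the quote above says surprises people who expect "replace the library and every program gets the fix".<p><pre><code>
/* libfoo.c -- hedged sketch of GNU symbol versioning.
 * Build (approximately):
 *   gcc -shared -fPIC libfoo.c -Wl,--version-script=foo.map -o libfoo.so */

/* Old implementation, kept so binaries linked against LIBFOO_1.0 still work. */
int foo_v1(void) { return 1; }
__asm__(".symver foo_v1, foo@LIBFOO_1.0");

/* New default implementation; "@@" marks the version new links pick up. */
int foo_v2(void) { return 2; }
__asm__(".symver foo_v2, foo@@LIBFOO_2.0");

/* foo.map (version script):
 *   LIBFOO_1.0 { global: foo; local: *; };
 *   LIBFOO_2.0 { global: foo; } LIBFOO_1.0;
 */
</code></pre>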
I guess I am a heretic for breaking my project into a collection of DLLs. Ironically, I am doing it on top of a 2-million-line statically linked code base. The statically linked code base takes 150 seconds to build, while my much smaller project takes only 4 seconds.<p>I have also designed it that way to do live coding in C, and I have made a similar library for live coding Python.<p>I am addicted to DLLs, send help :D!
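For anyone curious how DLL-based live coding in C typically works, the classic pattern is a dlopen() reload loop; a minimal sketch (the plugin path and the "tick" entry point are invented names, and real setups usually watch file timestamps instead of sleeping):<p><pre><code>
/* host.c -- reload plugin.so each second and call its entry point.
 * Build: cc host.c -o host -ldl
 *        cc -shared -fPIC plugin.c -o plugin.so   (rebuild while host runs) */
#include <dlfcn.h>
#include <stdio.h>
#include <unistd.h>

typedef void (*tick_fn)(void);

int main(void) {
    for (;;) {
        void *h = dlopen("./plugin.so", RTLD_NOW);   /* load the latest build */
        if (!h) {
            fprintf(stderr, "dlopen: %s\n", dlerror());
        } else {
            tick_fn tick = (tick_fn)dlsym(h, "tick");
            if (tick)
                tick();                               /* run the freshly built code */
            dlclose(h);                               /* drop it so the next build loads */
        }
        sleep(1);
    }
}
</code></pre>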
The idea was that individual programs would be small and loosely coupled. That is, rather than drag in a giant library, you abstract its code out into a separate server and talk to it via some protocol. This worked remarkably well on the whole. So the worry about ending up with 100 statically linked programs all weighing in at many megabytes kind of misses the point.
Obligatory link to Drepper's classics:<p><a href="https://akkadia.org/drepper/no_static_linking.html" rel="nofollow">https://akkadia.org/drepper/no_static_linking.html</a><p><a href="https://www.akkadia.org/drepper/dsohowto.pdf" rel="nofollow">https://www.akkadia.org/drepper/dsohowto.pdf</a><p>Seems like the Plan 9 guys haven't heard of these? If done properly, dynamic linking is almost always better than static -- key word here is "properly", which doesn't always/often happen.
What a load of FUD about updating dynamic libraries not actually fixing the code without rebuilding the dependent executables.<p>Symbol versioning does not break things in this way. If you've replaced your only libc.so with a fixed one, there is no other libc.so for dynamically linked executables to link with at runtime. If the new one brought a needed fix, the bug is fixed everywhere libc.so is used.<p>It's not like the new library bundles old bits to fulfill old versions of symbols; that's madness.<p>The only way I can see someone getting into the situation implied by the article is when they install multiple versions of a shared library simultaneously, keeping vulnerable versions around, and have executables continuing to use the old copies. That is an administrative failure, and says nothing about dynamic linking's value.
The Plan 9 fans keep forgetting that the end of the road was Inferno, not Plan 9, developed by the same devs, with all Limbo packages being dynamically loaded.
So, if you don't have dynamically linked libraries and you need a security patch in one of the libraries, how exactly is the system admin going to patch the system? I assume I would need to find every program that contains that library and recompile each of them?
The site is down now... but I hope it will be honest and just explain that "they did it just so they could do that one demo of tar'ing up a process on one machine, 9p'ing it to another box, untarring it, and having the process's graphical UI resume on the new host as if nothing had happened".