Would be cool to get a bit of a "why this matters" intro on repos like this. I clicked through the details but was left wondering whether I have any potential use for this. Can I build this onto a USB stick and use it as a throw-away boot Linux for maintenance tasks? Can I cross-compile it for embedded devices? What is the advantage of static linking here? Not trying to minimize the effort; I would actually love to see more such things, but I felt a bit left out because I couldn't figure out why exactly it exists.
This seems to "statically link" binaries in the sense that each program it ships is a standalone binary. Does anyone know whether it's possible to truly statically link an entire Linux install, in the sense that the whole system is a single statically linked file including the kernel, display manager, web browser, etc., so that link-time optimization can deduplicate code across the entire system?
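Not the single-file system asked about above, but a per-binary sketch of the deduplication idea, assuming gcc or clang with a static libc such as musl: with -flto the final link sees the whole program and can discard anything unreachable from main().

    /* lto_demo.c: a minimal sketch of link-time dead-code elimination,
       the per-binary version of whole-system deduplication.
       Assumed build: cc -static -flto -Os lto_demo.c -o lto_demo */
    #include <stdio.h>

    /* Externally visible but unreachable from main(); when the final
       LTO link produces an executable, the toolchain can prove nothing
       calls this and drop the code entirely. */
    int never_called(int x)
    {
        return x * 42;
    }

    int main(void)
    {
        puts("only code reachable from main survives the link");
        return 0;
    }

The closest existing thing to doing this for a whole system, kernel included, is probably a unikernel, where application and kernel really are linked into one image.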
Does statically linking everything not lead to much higher disk usage, especially when you have thousands of binaries? Drew DeVault has an analysis[0] that appears to claim otherwise.

[0] https://drewdevault.com/dynlib.html
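A quick way to sanity-check the size cost yourself, as a minimal sketch assuming the musl-gcc wrapper is installed (build commands are in the comment):

    /* hello.c: for eyeballing the on-disk cost of static linking.
       Assumed builds:
         musl-gcc -Os hello.c -o hello-dynamic
         musl-gcc -Os -static hello.c -o hello-static
       Compare with ls -l. Against musl the static copy is typically
       only tens of KiB larger, which is where the "claims otherwise"
       come from; against glibc the gap is much larger. */
    #include <stdio.h>

    int main(void)
    {
        puts("hello");
        return 0;
    }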
I find the build system more interesting than the "statically linked" part. Incremental compilation across your whole system is something we should have nailed by now, I think. The best I've seen is something like OpenEmbedded, where you get per-package (but not per-file) dependency tracking across the whole system.

I was ever so slightly disappointed to see how manual the packaging is, with every .c file listed in each lua script. It looks quite maintenance-intensive. I was almost hoping for some kind of meta-build system which could parse automake etc. files and hoist the dependency graph into the main system build.
Honestly, I'm team dynamic linking. I prefer to have things clearly separated in functionality and easily upgradeable.

Statically linking all the OS utilities to their dependency libraries, over and over again? Dear god, that sounds awful.
I might be mistaken, but is Oasis also somehow related to the suckless [1] community? (Edit: yes, the maintainer seems to be part of it.)

Another static distro by the suckless people is stali (static linux) [2].

1: http://suckless.org

2: https://sta.li
Ok, maybe this is a dumb question and it's addressed somewhere: if a security problem is found, e.g. in musl (the C library), is the user then supposed to rebuild everything that statically linked it?
oh no, LD_PRELOAD= hax won't work! LGPL is d00med!

Beyond that, dynamic linking seems like a great solution to a problem posed by hardware constraints from 30 years ago. Like a lot of CS. E.g. most uses of linked lists. (Can't wait for the abuse I'll cop for that.)
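For anyone who hasn't seen the trick being joked about, here is a toy LD_PRELOAD interposer; it is exactly the kind of thing a statically linked binary is immune to:

    /* hax.c: toy LD_PRELOAD interposer. Build as a shared object:
         cc -shared -fPIC hax.c -o hax.so
       then run any dynamically linked program that calls puts():
         LD_PRELOAD=./hax.so ./some-dynamic-binary
       The loader resolves puts() to this copy before libc's. A static
       binary never consults the dynamic loader, so this does nothing. */
    #include <stdio.h>

    int puts(const char *s)
    {
        /* fprintf is a different symbol, so no infinite recursion. */
        return fprintf(stderr, "intercepted: %s\n", s);
    }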
I think it's awesome. This is likely another of those passion projects that might draw in some overzealous on-watchers and not really become anything usable in the end, but still be incredibly valuable for the greater ecosystem.

It points out how some things have gone unconsidered in the "status quo", but rather than just ranting about it, it shows how things can be different.

It might put some lesser-known code bases into the public eye.

It might iron out compatibility issues with e.g. the Linux kernel and "the good parts" of POSIX/unix-philosophy. This is why I like Alpine and Void Linux for being such complete systems without hard dependencies on GNU components (I think the GNU project, and especially many of the individual code bases, is awesome as well).

And while there are benefits to both statically linked applications and dynamic linking, I find it best if that can be a realistic choice for the user, for whatever program or library you want to use. And most big applications simply can't be totally statically linked as of now.

It's a form of "greenfield project", but still leveraging existing components.
> No package manager.

> Instead, you configure a set of specifications of what files from which packages to include on your system

Isn't that just a declarative package manager?
All READMEs should begin with some sort of mission statement. What is the goal or reason for the existence of this? I can imagine some, but I am not sure where this is heading.
> netsurf instead of chromium or firefox

Great for the default, but can we have Firefox, Chromium, or a derivative as an alternative, for the sites that don't work in NetSurf?
For my purposes I've often found that static-binary+cgroups+chroot+permissions gives you 90% of the isolation benefit of various container systems (Docker) with like 10% of the pain.

I'm also not happy with Ubuntu's move towards snaps, which also seem to increase complexity and overhead with minimal benefit.

An app-as-directory setup, like the one that was used in NeXTSTEP and is still used in macOS, also seems to work OK.
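A minimal sketch of that pattern, assuming root, a hypothetical jail at /srv/jail containing a statically linked /app, and UID 65534 (nobody); cgroup limits would be configured separately, e.g. under /sys/fs/cgroup, before the exec:

    /* jailrun.c: chroot + drop privileges + exec, the core of the
       static-binary isolation pattern described above. Must start as
       root; "/srv/jail", "/app" and UID 65534 are assumed names. */
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        if (chroot("/srv/jail") != 0 || chdir("/") != 0) {
            perror("chroot");
            return 1;
        }
        /* Drop the group first, then the user; order matters. */
        if (setgid(65534) != 0 || setuid(65534) != 0) {
            perror("drop privileges");
            return 1;
        }
        /* Because /app is statically linked, the jail needs no
           /lib, no ld.so, and no copied shared objects. */
        execl("/app", "app", (char *)NULL);
        perror("execl");
        return 1;
    }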
Having grown up in a statically linked world, where dynamic linking was a thing of the big iron computers we dreamt of having at home, this trend of statically linked binaries feels like a tragedy of some sort.

How far have we fallen, that it is now trendy to return to the days of statically compiled binaries and process IPC to achieve any sort of dynamism?
This would be 10x faster and smaller if they just built these components into BusyBox.

Package management, library management, etc. are solved problems when it comes to Linux distros. The only practical improvement you can make is containers (or similar).

Static binaries cannot deal with external dependencies, and many applications *require* external dependencies that *cannot be compiled in*. But even by trying to compile in all the dependencies, you've only shifted the complexity from the filesystem to the build system, and you still have applications with incompatible features and interfaces across versions.
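The "cannot be compiled in" point is concrete with plugins: anything that loads code at runtime needs dlopen(), and with a fully static musl-linked binary dlopen() simply fails at runtime. A minimal sketch, where ./plugin.so and plugin_init are hypothetical names:

    /* plugin_host.c: why plugin-based software resists full static
       linking. Dynamic build: cc plugin_host.c -o host -ldl
       Built with -static against musl, dlopen() returns NULL at
       runtime, because there is no dynamic loader in the process. */
    #include <stdio.h>
    #include <dlfcn.h>

    int main(void)
    {
        void *h = dlopen("./plugin.so", RTLD_NOW); /* hypothetical */
        if (!h) {
            fprintf(stderr, "dlopen failed: %s\n", dlerror());
            return 1;
        }
        void (*init)(void) = (void (*)(void))dlsym(h, "plugin_init");
        if (init)
            init();
        dlclose(h);
        return 0;
    }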