I don't get why we would want to allow packages to run arbitrary scripts before/after installation. I get why it's necessary at this point, but the real solution should not require executing any code at all.

IMHO, a package should deliver a set of files to certain directories. That's it.

It should not overwrite existing files that were installed by other packages. It should not change existing files in any way.

It might advise the system to trigger certain reindexing actions (systemd daemon-reload, update man-db, etc.), but doing this should be the duty of the package manager, not the package itself.

AFAIK, nix and Solaris' pkg are pretty close to this ideal.

A big advantage this has, on top of security, is that:

- packages can be uninstalled safely and without side effects
- package contents can be inspected (pkg contents)
- corrupted installations can be detected using checksums (pkg fix; sketched below)
- package updates/installs can be rolled back using file system snapshots.
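A minimal sketch of the checksum-detection idea, assuming a hypothetical manifest format that maps installed paths to SHA-256 digests (the manifest layout here is made up for illustration; pkg fix and Debian's debsums work along these lines):

```python
# Sketch: verify installed files against a (hypothetical) package manifest.
# Assumed manifest format: one "sha256_hex  /absolute/path" pair per line.
import hashlib
import sys


def sha256_of(path: str) -> str:
    """Return the hex SHA-256 of a file, reading it in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()


def verify(manifest_path: str) -> int:
    """Report missing or modified files; return the number of problems found."""
    problems = 0
    with open(manifest_path) as manifest:
        for line in manifest:
            expected, path = line.split(maxsplit=1)
            path = path.strip()
            try:
                actual = sha256_of(path)
            except FileNotFoundError:
                print(f"MISSING   {path}")
                problems += 1
                continue
            if actual != expected:
                print(f"MODIFIED  {path}")
                problems += 1
    return problems


if __name__ == "__main__":
    sys.exit(1 if verify(sys.argv[1]) else 0)
```

A real package manager would additionally need to exempt conffiles that are expected to change after installation; the point is that none of this requires the package to ship executable code.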
I'm having slight trouble understanding the threat vector this is supposed to be protecting against. If you don't trust a package's install script, why would you trust any of the binaries installed by that package?

If you're unsure about bugs in a package's install script, why aren't you equally unsure about bugs in the binaries installed by the package?

In fact, install scripts are auditable; third-party compiled binaries aren't (at least not easily).

I see other advantages in declarative approaches - for example, more freedom for Debian to change the underlying file system layout, or giving the user some information about what the package is going to change for easier troubleshooting - but I do not see any advantage security-wise.
Docker is a nice idea. It's one tool, one system, for easily packaging software and running it in an isolated environment. But Docker includes a lot of crap most people don't need. Do we need an isolated network for our apps? Do we need an isolated cgroup? Do we need to install a complete base image of an OS? Do we need root to run those apps? The answer, for most cases, is no.

Then there are things like Flatpak. They also want to make it easy to package and distribute software. And they see all the features of Docker and go, "Hey, a sandbox! That sounds cool! Let's make it mandatory!" In order to simply distribute software in a compatible way, they include a lot of restrictions they don't need just to distribute and run software.

All you need to distribute a software package is files, plus a subsystem that maps those files into the user's environment and links together the files needed to run the software. We can accomplish this with a copy-on-write, overlay filesystem, and some software to download dependent files and lay them out in the right way. It *should* be incredibly simple, and it *should* work on any operating system that supports those filesystems. And those filesystems should be simple enough to implement on any operating system!

So what the hell is the deal here? Why has nobody come along and just provided the bare minimum needed to just distribute software (*edit*: in a way that also allows it to be run with all its dependencies in one overlay filesystem view)? Why is it always some ass-backwards incompatible crap that is "controversial"? Why can't we just make something that works for any software?
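For what it's worth, the overlay view described here is roughly what a single overlayfs mount gives you on Linux. A minimal sketch, assuming root privileges, a kernel with overlay support, and hypothetical directory paths:

```python
# Sketch: present several read-only "package" directories plus a writable
# layer as one merged view, using Linux overlayfs via mount(8).
# Requires root (or a user namespace); all paths below are hypothetical.
import subprocess

lower_layers = ["/srv/pkgs/libfoo-1.2", "/srv/pkgs/app-3.4"]  # read-only package contents
upper = "/srv/app-state/upper"    # writable layer for local changes
work = "/srv/app-state/work"      # scratch dir required by overlayfs
merged = "/srv/app-root"          # where the combined view appears

opts = "lowerdir={},upperdir={},workdir={}".format(":".join(lower_layers), upper, work)
subprocess.run(
    ["mount", "-t", "overlay", "overlay", "-o", opts, merged],
    check=True,
)
# The application can now be pointed at (or chrooted into) `merged`, and
# removing a package is just unmounting and deleting its lower directory.
```

Unprivileged variants of the same idea exist (e.g. fuse-overlayfs), which is roughly what rootless container runtimes lean on.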
I've used Debian since 1994 and can't recall a single package installation or removal which exhibited any problem this discussion claims to be solving.
I'd love it if some more common things that are done in postinst scripts could be done in a declarative way, like adding system users.

And then there are things that are declarative in the debian/ dir (like auto-(re)starting the installed services) that end up as generated, procedural code in the postinst script.

When such things can be done in a declarative manner, it's much easier to reason about them programmatically, and maybe you could completely disable postinst scripts for a whole category of packages.
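As a rough illustration of that first point, here is a minimal sketch of how a package manager (rather than the package) might consume a declarative system-user entry, under the assumption of a made-up metadata format; it shells out to useradd, which is what a postinst would otherwise do imperatively. systemd's sysusers.d is an existing declarative mechanism in this spirit.

```python
# Sketch: the package manager creating a system user from a declarative entry,
# instead of the package shipping a postinst that runs useradd itself.
# The dict below stands in for a hypothetical "user" stanza in package metadata.
import pwd
import subprocess

user_spec = {
    "name": "mydaemon",
    "home": "/var/lib/mydaemon",
    "shell": "/usr/sbin/nologin",
}


def ensure_system_user(spec: dict) -> None:
    """Create the system user if it does not already exist (idempotent)."""
    try:
        pwd.getpwnam(spec["name"])
        return  # already present; nothing to do
    except KeyError:
        pass
    subprocess.run(
        [
            "useradd",
            "--system",
            "--no-create-home",
            "--home-dir", spec["home"],
            "--shell", spec["shell"],
            spec["name"],
        ],
        check=True,
    )


ensure_system_user(user_spec)
```

Because the entry is data rather than code, the package manager can also reverse it cleanly on purge, or refuse it by policy, without ever executing anything the package provided.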