Let me revisit my previous rant about package management on Linux. I came to the conclusion that, by running every package inside what is effectively a chroot, you can enforce package integrity on disk. And, using plasticfs, you can have "union" file systems (so that packages can save files to disk) without having to mount too many different filesystems.
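To make the union idea concrete, here is a rough Python sketch of the lookup logic I have in mind: a writable "upper" layer (the package's private directory) shadows a read-only "lower" layer (the host system). The `upper`/`lower` layout and the function names are purely illustrative, not plasticfs's actual interface.

```python
import os

def resolve_read(path, upper, lower):
    """Return the real location of `path` for a read: the package's
    upper layer wins if it has the file, otherwise fall back to the
    read-only host (lower) layer."""
    candidate = os.path.join(upper, path.lstrip("/"))
    if os.path.exists(candidate):
        return candidate
    return os.path.join(lower, path.lstrip("/"))

def resolve_write(path, upper):
    """Writes are always redirected into the package's upper layer,
    so the host filesystem is never modified."""
    target = os.path.join(upper, path.lstrip("/"))
    os.makedirs(os.path.dirname(target), exist_ok=True)
    return target
```

So a package reading /etc/motd sees the host's copy until it writes its own, at which point its private copy shadows the host's from then on.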
Firstly, I was a bit wrong about my choice of plasticfs. If you think about it, the chroot can be done inside a filesystem which dynamically generates the directory structure based on where you are chrooted. For example, if anything is done under ~/packages/emacs, then the file system "knows" that it was done by Emacs, which was chrooted into that directory.
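In Python, that inference could be sketched roughly like this; the ~/packages layout and the `package_for` helper are hypothetical names for illustration, not a real API:

```python
import os

def package_for(path, packages_root):
    """Given the absolute path of a file operation and the root under
    which package chroots live (e.g. ~/packages), work out which
    package's chroot the operation belongs to."""
    path = os.path.abspath(path)
    packages_root = os.path.abspath(packages_root)
    if not path.startswith(packages_root + os.sep):
        return None  # operation happened outside any package chroot
    relative = path[len(packages_root) + 1:]
    return relative.split(os.sep, 1)[0]  # first component = package name
```

An operation on ~/packages/emacs/etc/anything would thus be attributed to "emacs", and anything outside ~/packages to no package at all.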
Secondly, I skipped over the important issue of sharing hardware resources. You can't, say, run two different versions of Apache httpd without changing the configuration of at least one of them; otherwise they would both try to bind to port 80 at the same time.
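Here is a minimal Python sketch (not httpd itself, just raw sockets) of why two servers can't share a port: the second bind to the same address and port fails with "address already in use". A port chosen by the OS stands in for port 80, which would require root.

```python
import socket

# First "server" grabs a port and starts listening.
first = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
first.bind(("127.0.0.1", 0))          # let the OS pick a free port
first.listen(1)
port = first.getsockname()[1]

# Second "server" tries to bind the very same port.
second = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
conflict = False
try:
    second.bind(("127.0.0.1", port))  # the "second httpd"
except OSError:
    conflict = True                   # EADDRINUSE
second.close()
first.close()
print("conflict:", conflict)
```

Filesystem isolation does nothing about this: ports, like sound devices or the display, live in a namespace the chroot doesn't cover.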
Thirdly, and this is even trickier, there is the question of how much a software package assumes about the current Linux distribution: where the DVD-ROM and USB devices are, how to change the monitor's resolution, how to add a "service" to the system, and so on.
For the first point, instead of plasticfs you could use FUSE. Even though it needs a kernel module, it lets user processes define a file system, making it much easier (and safer) to develop a new one. There are already lots of file systems implemented on top of FUSE if you want to experiment with it.
Traditionally, the proposed solution for the third point was standardization. Trying to do that in the open-source world is too difficult. As an example, the FHS is not only horrible, but so painful to use that most Linux distributions prefer to avoid it completely. I mean, who would want to learn the difference between /sbin, /usr/sbin, /opt/sbin and /usr/local/sbin?
Well, in those cases, a more radical approach is needed. One would have to build a Linux distribution that could run on top of any other Linux distribution. Everything, up to and including the kernel's functionality, would have to be virtualized.
And so we come to User-mode Linux. It simply runs a Linux kernel and a boot disk image as a normal user process. Everything is isolated from the host system, apart from "transports" between the user-mode Linux and its host. You can already "transport" network access, the X server, sound devices and, obviously, parts of the host's filesystem. All this comes at minimal memory and performance cost to the host system. Of course, if you can run the software with a simple chroot in your virtual file system, you don't need to spend resources running it under UML, unless you need the additional security isolation.
You don't need to start a different Linux kernel for every software package, but now at least you control the environment enough to build a package management system that works for almost all Linux software without it ever having a chance to screw up your Linux system.
Next step, I have to find a name for this...
Published on December 11, 2005 at 12:00 EST