Whofig

Managing computers is a terrible pain. Everybody spends more time administering their computer than they want, and upgrading software and the operating system is often unbearable. Companies spend far more on administration of computers than they do on hardware.

I can't solve all these problems, but in these pages is a proposal to ease some of the pain, particularly of upgrading and backup.

Whofig proposes a new pattern for arranging the files of an operating system based not simply on what function they perform, but on who is responsible for them.

Many people make changes to the files on a system: the users, of course, and the "sysadmins" -- the system administrators for a computer. On a PC the user and sysadmin are often the same person. But in fact there are many more people who make changes. At companies, there are sysadmins who manage groups of computers and set policies for entire companies.

The big hidden group of changers is the people who maintain all the software packages that make up a system, and the builders of the operating system distribution ("distro" in Linux terms). They change your system when you install their software, or when auto-updates happen. When these packages keep their configuration in a single shared file, you end up with files being changed concurrently by many different people.

We need a system that allows us to encapsulate all the changes made by these different people into different places. It must be possible to isolate the changes made by a sysadmin from the changes made by a package author or OS distributor, even though all these parties are making changes independently.

Done right, it should become possible to easily back up, export and import the changes of any given party. It should be easy to roll back changes by a given party, and to merge concurrent changes with the best chance of success.

This idea is not new. Nobody writes complex software today without using a version control system. Version control systems like CVS and Subversion are designed to allow groups of people to collaborate on one software project, making changes independently and then combining those changes for a final result. What we haven't fully realized is that maintaining a computer is in many ways a project of collaborative software engineering. In fact, some sysadmins do use version control systems on configuration files today, though in a more haphazard way.

The idea is also not new in that some of the best and most complex software packages already try to follow parts of this pattern. With good programs, you will find that the master configuration file, written by the package author, does an include of a local configuration file meant to be written by the local sysadmin. Some packages, such as the Apache web server and the Apt package manager itself, define a directory (often called conf.d) where local configuration can be stored. This takes a first step toward having configuration files which belong to various actors, and which are changed by nobody else. And of course, many software packages have per-user configuration, and keep that configuration in the user's home directory.
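The conf.d pattern described above can be sketched in a few lines of shell. All the file names here are hypothetical; the point is only that the package author and the sysadmin each own distinct files, and a program reading the master file followed by conf.d/* lets local settings win:

```shell
# A minimal sketch of the include pattern (hypothetical file names).
dir=$(mktemp -d)
mkdir -p "$dir/conf.d"

# Shipped by the package author -- upgraded wholesale, never hand-edited.
printf 'log_level = info\n' > "$dir/master.conf"

# Written by the local sysadmin; read after the master file, so it wins.
printf 'log_level = debug\n' > "$dir/conf.d/10-local.conf"

# A program reading the master file and then conf.d/* in order sees
# the local setting last:
cat "$dir/master.conf" "$dir/conf.d/"*.conf | tail -n 1
```

On upgrade, the package author can replace master.conf freely, because no local change lives in it.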

How to do it?

There are many ways we might accomplish this sort of partitioning. Later sections will discuss some of them, but it is the philosophy and the goals that are the most important elements.

One simple (at least in theory) approach would be to store the files of each maintainer, or "custodian", in a different directory. As such there would be a different directory where changes would be placed for each of the following parties:

  1. The local sysadmin: /whofig/local
  2. The company sysadmin: /whofig/corp
  3. The OS packager: /whofig/ubuntu
  4. The master OS packager: /whofig/debian
  5. The manager of a package: /whofig/package/firefox
  6. End user firefox files: ~/whofig/firefox

There could be several others of course. As these are directories, they could also be filesystems. They could even be (with appropriate cache strategies) remotely mounted filesystems.

All changes made by a custodian would go only in their file tree. The only exception would be one custodian importing the changes of another. Thus when the local sysadmin installs the firefox package, the configuration from the firefox authors would go in /whofig/package/firefox, and the fact that the local admin had installed (imported) firefox would go in /whofig/local. Any localized config for firefox would go in some subdirectory of /whofig/local such as /whofig/local/package/firefox.
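The firefox example above can be sketched concretely. Everything here is hypothetical -- the layout is rooted in a temp directory rather than / so the sketch can run unprivileged -- but it shows how each custodian writes only in their own tree, and how backing up the local admin's changes becomes a single tree copy:

```shell
# Hypothetical whofig layout after the local admin installs firefox.
root=$(mktemp -d)

# Config from the firefox authors goes only in their tree:
mkdir -p "$root/whofig/package/firefox"
printf 'homepage = about:blank\n' > "$root/whofig/package/firefox/prefs.conf"

# The local admin's record of the import, and any local overrides,
# go only in the local tree:
mkdir -p "$root/whofig/local/package/firefox"
printf 'imported firefox\n' >> "$root/whofig/local/imports"
printf 'homepage = https://example.com\n' \
    > "$root/whofig/local/package/firefox/prefs.conf"

# Backing up everything the local admin ever changed is one tree:
tar -C "$root" -cf "$root/local-changes.tar" whofig/local
```

Restoring that archive onto a fresh machine would replay the local admin's customizations without touching any other custodian's files.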

It's important to note that this "different directory" approach is not necessarily the ideal method, and it has problems in certain areas, but it makes the partitioning easy to understand for the purposes of discussion.

Consider some of the major advantages to such an approach:

  • You could easily see all the changes of a custodian in one place. Further, you could readily export/back-up all such changes into a small archive, placing them on a thumb drive, in most cases.
  • All OS files could and should be mounted read-only, with write access enabled temporarily only to install packages or config updates. The standard OS tree would probably be permanently read-only, needing a reboot if doing an OS upgrade. It could run from read-only media.
  • The whofig, encapsulated on a thumb drive or otherwise easily copied, could be readily imported on a different machine to make it act the way the user likes. This customization could be just as easily undone.
  • OS install would be little more than a big copy of files into a normally read-only filesystem. The local config would be added on. An "upgrade" would really be more like a virgin install of the new version, with the config added to it.
  • Package install would also be very simple, again largely just a file tree copy. As such, it would be possible to make packages that were portable over multiple OS distributions -- even, in some cases, packages which install on both Linux and BSD! OS distributions would focus on more important differences than how their filesystems and packages are arranged.
  • Simpler package install provides an easier path out of the dependency hell of most package systems, which today often require you to upgrade your entire OS just to install a recent package.
  • You could easily import (but keep isolated) the configuration work of other sysadmins -- not just those inside your organization, but people simply providing advice on the web. It also becomes easier for people to build web tools or scripts that help generate config, and then to import the config they generate.
  • Backing out changes becomes easier and safer.

This is a very attractive set of advantages. How many hours do people spend after a new OS install tweaking the system to their liking? How much do admins dread OS upgrades or major software package upgrades, holding them off until a time when they know they can spare a day to fix what problems may arise?

In many cases, other tools have arisen to attain these goals. When a user's config goes in their home directory, it is more easily retained through upgrades or moved to a new computer. Microsoft provides a "settings and documents" importer for people changing or upgrading Windows computers, and the Mac has a similar tool. But whofig goes beyond that. Imagine taking a freshly installed OS and applying your personal whofig to it. Automatically it installs all the packages you like to add to your computer, and each one is tweaked the way you like it. In addition, it imports the whofig of other parties whom you trust to do customizations of various parts of your computer. It installs the packages they recommend and their changes and improvements too.

Of course, whofig can't and won't solve all upgrade problems. Upgrades will still break things all the time. Configuration choices will go stale and need maintaining to remain compatible with new systems. But it's a step in the right direction.

In fact, I already try to practice whofig on my own computers. While many software packages try to keep their configuration in a place like /etc, if I can, I make only one change to that file, adding an "include" to a file in my /local tree. That way, when I get a new version of a package, I can take their new, updated config file (which often contains important new information) and add the one include line and make things as I like them -- much of the time, at least. Some packages are kind enough to have such an include already present in their files.
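That single-include practice can be sketched as follows, with hypothetical paths standing in for /etc and /local. The vendor's file gets exactly one added line; every real change lives in the local tree, so an upgrade only requires re-adding that line:

```shell
# Sketch of the "one include line" practice (hypothetical paths).
base=$(mktemp -d)
mkdir -p "$base/etc" "$base/local"

# All local changes live here, owned by the sysadmin:
printf 'setting = mine\n' > "$base/local/mypackage.conf"

# The vendor's shipped file, plus the single added include line:
cat > "$base/etc/mypackage.conf" <<EOF
setting = vendor-default
include $base/local/mypackage.conf
EOF

# After an upgrade replaces the vendor file, re-adding that one line
# restores all local config; checking for it is trivial:
grep -c '^include' "$base/etc/mypackage.conf"
```

The diff against the pristine vendor file is one line, which makes three-way merges on upgrade nearly painless.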

Another worthwhile advantage of the proposal is that it makes it easy to have several versions of the same piece of software on a machine at once, with an easy command to switch which one is "live." This is of value primarily to developers, because it lets them easily test their tools against older dependencies, to find the oldest version of something they depend on that works, and thus the "true" dependency. This might eliminate another of the sysadmin nightmares of Linux -- that installing new software often means making a lot of other potentially dangerous upgrades that may not actually be needed.
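One common way to get the side-by-side-versions behavior is a symlink that chooses the "live" version; the names below are my own illustration, not part of the proposal. Switching is one atomic symlink swap, so trying an older dependency costs nothing:

```shell
# Sketch: versions installed side by side, a symlink picks the live one.
top=$(mktemp -d)
mkdir -p "$top/pkgs/tool-1.0" "$top/pkgs/tool-2.0"

# -n replaces the symlink itself rather than descending into it:
ln -sfn "pkgs/tool-2.0" "$top/tool-live"   # version 2.0 is live
readlink "$top/tool-live"

ln -sfn "pkgs/tool-1.0" "$top/tool-live"   # drop back to 1.0 to test
readlink "$top/tool-live"
```

Because old versions are never overwritten, "rolling back" is the same cheap operation as switching forward.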

Most of what I will write about will apply to the Unix-derived OSs like Linux and BSD. However, there is no reason this philosophy could not some day apply to Windows or other systems.