Goals and Principles for USENET design

What follows is a set of goals and principles for how to go about fixing and improving USENET. These are not specifications, but rather the reasoning behind them and a description of how the work should be done.

Some of these goals may seem to conflict. Most commonly, the desire for support of the vast installed base will conflict with the desire for change or simply "doing it right." This is not to imply a contradiction, but rather a tradeoff between two positive but incompatible goals.

The standard should not dictate policy, it should enable policy

The standard should not make policy decisions, unless there is near unanimity that the proper operation of the network requires a certain policy. Otherwise, members of the group should seek out things that people want to do on the network, and things that sites would like to be able to tune for policy reasons, and enable those features and that tuning.

That which requires new code at all 300,000 sites won't happen unless made mandatory

USENET has stagnated because it's almost impossible for all sites to agree on a format change. A truly new feature that requires new code everywhere can only be installed after waiting a very long time in internet years, or by draconian measures by major sites -- the declaration that old format articles will simply not be propagated.

That which is broken should be fixed

We do have to fix our bugs. There aren't many -- Supersedes and the lack of extensibility of certain headers.

All features must be extensible

No header should be added that can't be extended cleanly later without breaking old software. Ideally, systems of extension should allow old software to make guesses about what to do when encountering an unfamiliar extension -- ignore, abort, give warning, map to known extension. Most systems only support "ignore all unfamiliar," but this is not enough.

Where possible, old headers that did not support extension should be made to do so, or set to do so in the future.
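
To make the idea concrete, here is a tiny sketch of one possible extension mechanism: an extension carries its own handling hint, so old software can do something smarter than ignoring everything it does not recognize. The parameter names, keywords and function below are invented for this illustration; they are not from any existing spec.

    # Hypothetical sketch only: an extension parameter carries a handling hint
    # ("ignore", "warn", "abort" or "map=<known-name>") telling old software
    # what to do when it does not recognize the extension.
    KNOWN_EXTENSIONS = {"archive", "followup-policy"}   # made-up names

    def handle_extension(name, value, handling="ignore"):
        """Decide what an old reader should do with an extension it may not know."""
        if name in KNOWN_EXTENSIONS:
            return ("process", name, value)
        if handling == "warn":
            return ("warned", name, value)
        if handling == "abort":
            raise ValueError(f"article uses unsupported extension {name!r}")
        if handling.startswith("map="):
            # Treat the unknown extension as if it were the named known one.
            return ("process", handling[4:], value)
        return ("ignored", name, value)          # default: ignore quietly

    # An old reader seeing an unfamiliar extension marked "warn" surfaces a
    # warning instead of silently dropping information.
    print(handle_extension("expires-policy", "30d", handling="warn"))

The point is only that "ignore all unfamiliar" becomes one of several declared behaviors rather than the only one available.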

Where possible, be compatible with mail and other standards

That means if there's a MIME way to do it, we need a good reason not to do it that way. All other things being equal, if we can interact cleanly with mail it's a win both for gateways and because common software is available.

However, this does not mean we bend over backwards to accommodate standards developed with clear disregard for the needs of a broadcast, post-once, read-many medium like USENET.

NNTP specs must listen to us, not the other way around

While most articles do go over NNTP, it is NNTP's job to feed USENET articles, not the USENET spec's job to make sure articles can go over NNTP.

Be liberal in what you can accept, conservative in what you generate, but never allow an error without a diagnostic

Being liberal in what you accept doesn't mean propagating errors out to the rest of the net. While there are many virtues in not altering articles that travel through you, continuing to propagate spec-violating articles can have greater negative consequences. The tradeoffs should be examined.

USENET has failed to be robust by allowing systems to routinely drop articles on the floor with no diagnostic. This is terrible software design. Systems should be made so that errors are detected, and can be sent once and only once to a person who should know about it.

Errors should be discovered and fixed fast. E-mail systems which dropped your mail into the bit bucket because you had a format error would never be tolerated.
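
As a minimal sketch of the "once and only once" idea: remember which (article, error) pairs have already been reported, and notify the responsible party only the first time. The storage, the notification path and the addresses here are all placeholders, not a proposal for specific machinery.

    # Sketch: report each article error exactly once to whoever should hear it.
    reported = set()   # a real system would keep this persistently, not in memory

    def report_error(message_id, error, responsible_party, notify):
        """Send a diagnostic for (message_id, error) at most once."""
        key = (message_id, error)
        if key in reported:
            return False                 # already reported; stay quiet
        reported.add(key)
        notify(responsible_party, f"{message_id}: {error}")
        return True

    def notify_by_mail(address, text):
        # Placeholder: a real system would mail this or post it to an admin group.
        print(f"To {address}: {text}")

    # The first occurrence produces a diagnostic; duplicates arriving from other
    # feeds are silently suppressed.
    report_error("<abc@site.example>", "missing Newsgroups header",
                 "usenet@poster-site.example", notify_by_mail)
    report_error("<abc@site.example>", "missing Newsgroups header",
                 "usenet@poster-site.example", notify_by_mail)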

Popular features added to USENET should be supported, unless there is a good reason

If there is some function that people have already added, ad hoc, to USENET, the standard should work out a means to perform this function, consistent with other aspects of the spec and these goals.

Popular features from other online conferencing systems should be supported, unless there is a good reason

Good ideas come from outside USENET. If a feature is popular on other conferencing systems, it's a good bet that people would find it useful here, and not find USENET wanting.

Good new ideas not yet implemented anywhere can be supported, but they need a good reason

If a feature (not an implementation) has never been tried anywhere, we should implement it if we have good reasons to like it and believe it will be popular, but otherwise consider it for a later revision. However, if the idea is generally liked, the spec should be reviewed to assure it is extensible to accommodate new ideas like the proposal.

Everything should be securable, unless there is a good reason

The old trust-everyone philosophy has sadly failed. Now the question is reversed. If you want to leave something unsecured, you have to explain why.

Sadly, "nobody is exploiting that hole today" no longer qualifies as a reason -- it was the reason everything was left unsecured in the first place. We have to assume a network full of spammers, religious nuts and even malicious foreign governments and saboteurs today. We should do the best security job we can do today unless the cost is extraordinarily high.

All aspects of site administration and configuration must be delegatable

If something requires 300,000 site admins to change a config file, it won't happen. Most site admins would rather delegate most aspects of configuration of their site to somebody they trust.

A good principle of software design is that the "master" copy of some piece of data is maintained in exactly one place. It's OK to distribute the results -- that's inherent in USENET -- but disastrously poor to routinely distribute the maintenance.

While final control over site files always resides with the site, the norm should be remote control unless specified otherwise. Policy decisions for subsets (groups, subnets and hierarchies) should be made in one place, with the main local decision being whether to subscribe to the subset or not.

Having 300,000 independently maintained files listing who the moderator for a group is, or whether the group accepts binaries or MIME just won't work.

USENET's distributed nature is its strength and curse. Things should be kept distributed for efficiency, and final local control always should be left to local sites, but if a policy issue or configuration fact is to be associated with a newsgroup, hierarchy or subnet, then associate it there by default, not at the site. No one party should control all of USENET, nor can they. But the mechanisms for centralized control where needed are no more evil than the centralized control moderated newsgroups have represented, within their space, for a dozen years.
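
One way to picture the delegation argued for above: group-level policy (who the moderator is, whether binaries are accepted, and so on) lives in a record maintained in one place and distributed to everyone, while the only decision each site makes locally is whether to carry the subset at all. The record layout and field names below are invented for illustration.

    # Sketch: group policy is maintained by whoever runs the group or hierarchy
    # and merely *received* by each site; the site's own file only lists what it
    # chooses to carry. Field names are illustrative, not from any spec.
    group_policy = {
        "rec.humor.funny": {
            "moderator": "moderator@example.org",
            "binaries": False,
            "mime": True,
        }
    }

    local_config = {   # the only thing each of 300,000 sites maintains itself
        "carry": {"rec.humor.funny": True, "alt.binaries.pictures": False},
    }

    def accepts(group, article_is_binary):
        """Does this site accept an article for `group`?"""
        if not local_config["carry"].get(group, False):
            return False                  # local decision: do we carry it at all?
        policy = group_policy.get(group, {})
        if article_is_binary and not policy.get("binaries", True):
            return False                  # delegated decision: group-level policy
        return True

    print(accepts("rec.humor.funny", article_is_binary=True))   # False: no binaries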

What was true 15 years ago needs to be re-demonstrated

Some of USENET's old assumptions are ancient in internet time. Some remain valid, but all can be subject to question, and none should be taken as a given.

Where possible, allow old systems to still handle the net

We wish to combine maximum new functionality with minimal upheaval. We know upgrade will be slow, so plans for new features must expect that, and provide a transition plan that, where possible, does not remove function from the users of old software, or at least provides them an alternate way to get things done.

Transitions should be transitions

To make the transition to new features, temporary features may be put in place. But they should be explicitly temporary, with an explicit expire date which occurs by default (but which can be extended or removed if need be). Transitional systems should ideally be implemented only in a small subset of the net, or with watch-daemon servers, so that all tools do not need to code for them.
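
A tiny sketch of "explicitly temporary": a transitional feature records its own expiry date, and the default behavior when that date passes is for the feature to shut itself off unless someone deliberately extends it. The feature name and date are made up.

    # Sketch: transitional features expire by default unless explicitly extended.
    from datetime import date

    transitions = {
        "old-format-gateway": date(2000, 1, 1),   # made-up feature and expiry
    }

    def transition_active(feature, today=None):
        """A transitional feature stays active only until its expiry date."""
        today = today or date.today()
        expiry = transitions.get(feature)
        return expiry is not None and today < expiry

    if not transition_active("old-format-gateway"):
        # The default outcome: the transition ends without anyone having to act.
        print("old-format-gateway has expired; no longer translating old articles")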

We should not design anything into the network that is known to be already obsolete, without a very good reason.

We will have to bite a few bullets

Some features are going to be important enough that we must accept the upheaval, or accept that some new newsgroups or new features will not be available conveniently to users of old software. If it's this or abandon a useful and desired new feature, we go with the new feature. Any other philosophy leads to stagnation.

We must also remember that transmission, disk space and CPU are vastly cheaper today than when USENET was designed (and USENET is itself correspondingly much larger.)

It's almost all up for grabs

The core of USENET is that it's distributed. The articles are distributed and the ownership of sites is distributed. Almost all the other technical features are found in other "competitive" systems, though sometimes they inherited them from USENET.

Some lesser features that have a strong association with USENET are:

  1. Its integration with internet E-mail (something that still hasn't come to some systems)
  2. Its integration with the web: USENET articles and groups have their own URL prefix supported to some degree by most browsers (making those browsers the most widely used newsreaders.)
  3. Crossposting, which outside USENET is really only commonly possible with mailing lists. This feature is losing utility, however.

USENET also has ubiquity, but it has this only through its long history and critical mass now, not due to superior features.

Where possible avoid doing it twice

Because of USENET's broadcast nature, all articles must be interpretable by everyone. It does little good to have two ways to do the same thing; it just forces every tool to understand both. If you need two ways to do the same thing, you need a very good reason.

Thus we only want one way to sign an article, for example.

Where possible, conserve space

All articles are transmitted and stored 300,000 times to sites, and similar numbers of times to users, so any efficiencies multiply. However, we should not go nuts -- ease of use and the hand-editability of articles and components should be preserved.

Arbitrary limits are arbitrary

Unless there is a strong technological reason, limits are a matter of policy, and the specification should require all implementations to handle at least very large, and ideally arbitrarily sized, objects.

Some particular principles for signing:

  1. Any cancel authentication scheme for users requires one for 3rd party cancellers.
  2. While it makes sense to reduce burdens on users at an overall cost to the net, moderators and site admins can be expected to do a little work.
  3. When it comes to signing articles, a partially authenticated net is of limited value -- people can still forge. Even a net where some sites know to demand signing on articles for your email address still allows certain types of forgeries to affect you.
  4. While newsreaders should be able to check signatures if they want, they should not have to. If checking is left to the reader, the overview becomes incorrect and forged items appear on menus, only to be discarded when read.
  5. It is not sufficient to authenticate cancels just for people with new posting and cancelling software. Illicit cancels hurt readers, not just posters.
  6. Any party (user or admin of site) named in the From, Reply-To, Sender or Approved header should be able to cancel, even if they were not the original poster or poster's site.
  7. Any site named in the Message-ID or in the injection point on the Path line should be able to cancel, even if it was not the generator of the message.
  8. Other than trusted 3rd parties, nobody else should be able to cancel an article (a rough sketch of this authorization check appears after the list).
  9. Any hole in the signing system needs a good justification. All headers should be signed unless there is a reason not to. The body should be signed as well.
  10. The one signing system must be secure, exportable, non-proprietary, well known and available in public software libraries.
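
To make items 6 through 8 concrete, here is a rough sketch of the authorization check a server might apply to an incoming cancel. Header parsing is reduced to plain strings and the trusted-third-party list is a placeholder; none of the names or fields are from an actual spec.

    # Sketch of the cancel-authorization policy in items 6-8: a canceller must be
    # a party named in the original article's From/Reply-To/Sender/Approved
    # headers, a site named in its Message-ID or injection point, or a trusted
    # third party. Everything here is illustrative.
    TRUSTED_THIRD_PARTIES = {"spam-canceller@example.net"}   # placeholder list

    def may_cancel(original, canceller, canceller_site):
        """Decide whether `canceller` (at `canceller_site`) may cancel `original`."""
        named_parties = {
            original.get("From"),
            original.get("Reply-To"),
            original.get("Sender"),
            original.get("Approved"),
        }
        if canceller in named_parties:
            return True                                  # item 6
        named_sites = {
            original.get("Injection-Site"),              # e.g. taken from the Path
            original.get("Message-ID-Host"),             # e.g. taken from Message-ID
        }
        if canceller_site in named_sites:
            return True                                  # item 7
        if canceller in TRUSTED_THIRD_PARTIES:
            return True                                  # trusted 3rd party
        return False                                     # item 8: nobody else

    article = {"From": "alice@site.example", "Approved": "mod@example.org",
               "Injection-Site": "news.site.example"}
    print(may_cancel(article, "mod@example.org", "other.example"))     # True
    print(may_cancel(article, "mallory@example.net", "evil.example"))  # False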