It's really beyond the scope of any IETF standard to specify just what structures of trust will be created to secure USENET. The standard will say how to sign things and how to write a certificate to indicate trust in the holder of some private key, and possibly how to publish such certificates, but it should not say the actual structure used or how to decide to trust somebody.
A proper standard allows a generalized network of trust and trust relationships, though it is indeed likely that most of the networks will be simple trees or related structures.
Nonetheless, in order for people to understand just how certificates can work, it is not a bad idea to outline some possible systems that might use a general certificate technology. These possible structures are not required by the standard. Some the author thinks are likely; others less so.
In essence, I expect a secured USENET to evolve through the process of simply trusting the first credible volunteers. If a non-anonymous party without a negative reputation (or with even a mildly positive one) volunteers to do something, trust them to do it -- until they do something wrong.
Why can this work? Today, USENET has almost no security. And that means in effect we're trusting everybody to do anything. We would keep it that way if the net hadn't grown so big that trusting everybody is no longer an option. But this doesn't mean one has to build a complex "official" hierarchy of administrators.
Instead we can move to a system where we trust literally the first credible party that comes along willing to do the job. That's certainly no worse than the trust-everybody method from a security standpoint.
In a distributed, digital signature authentication system, nothing can be done in secret. So if one of these volunteers starts doing things people don't like, it's easy to revoke the trust and hand it to another volunteer. Who does the revoking and handoff? While it can be ordinary admins, chances are it's ... yes ... the first credible volunteer.
This is in fact how USENET has always worked. We've trusted one person, the first volunteer, to moderate a newsgroup. (While the approved line is not secured, the forwarding of postings is to some degree.) We trusted uunet to maintain that moderator forwarding list, because they were the first credible volunteer. The whole net managed to trust Dave Lawrence to maintain group lists and now to send PGP signed newgroup messages, in one of the few aspects of USENET that have become secured.
To have a security system, it is first necessary to define what the privileged operations are. This does not mean anything fancy, or even require the concept of hierarchical order. A privileged operation is simply one you don't want executable by everybody, everywhere in the world.
Most people understand that you don't want to give the whole world permission to delete files on your machine, for example. This is a privileged operation. You may actually grant it to anybody at the machine, but even that is, in a limited form, some security.
The word privileged is a computer access term, and may apply to things not thought of as a privilege in the human sense.
A typical example is putting a userid in a From line. Should anybody and everybody be able to post a message with a header that says "From: frodo@shire.com (Frodo Baggins)" in it? Most people would say ideally no. Only Frodo, and possibly the administrators of the domain shire.com, should be posting such messages.
This makes something as simple as a "From:" line a privileged operation, i.e. one that should be secured, if we want USENET, or this aspect of it, to be protected from abusers.
Here is a list of the primary things you might not want anybody and everybody to be able to do:
More details can be found in my general security article.
In addition, the security system itself will have some privileged operations.
The underlying basis for all security on USENET starts at the ordinary site. The administrators of a site have authority to do anything on that site, so long as it's legal where the site is. They usually control that authority by means of physical access to the machine, or by a userid or ACL system with passwords.
Certificates in a way are a very fancy password system. A permission certificate says, "The user using this magic key (password) is allowed to perform privileged function XYZ." That's really all that an ordinary computer security system does. The statement above is a "statement of trust" because it means that you, the owner of the machine, decided to trust somebody to perform a function that is not open to the whole world, and you gave them a magic password to identify themselves so the computer can manage the whole process of trust.
Public key certificate systems vastly improve what you can do with this basic concept. They let you have multiple levels of delegation of trust, complex rules of delegation, and they let you manage the magic passwords much more easily.
A certificate system consists of a digital signature system, and a "language" used to express privileged (secured) operations. The language may be simple or complex.
A certificate is, essentially, a statement in the language describing a privileged operation and identifying the public key of a person trusted to perform the operation. That statement and key are wrapped together and digitally signed by somebody who is trusted to make such statements.
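To make this concrete, here is a toy sketch in Python of wrapping a statement and a key together and signing the pair. Everything in it is my own illustration, not the standard's language: the statement text and key names are made up, and HMAC with a shared secret stands in for a real public-key signature, which a production system would use instead.

```python
import hashlib
import hmac
import json

def make_certificate(statement, subject_key, signer_secret):
    """Wrap a statement of trust and the subject's public key, then sign.

    The "signature" here is an HMAC over the wrapped body -- a stand-in
    for a real digital signature, used only to show the structure.
    """
    body = json.dumps({"statement": statement, "subject_key": subject_key},
                      sort_keys=True)
    sig = hmac.new(signer_secret, body.encode(), hashlib.sha256).hexdigest()
    return {"body": body, "signature": sig}

def check_certificate(cert, signer_secret):
    """Verify the signature covers the statement and key together."""
    expected = hmac.new(signer_secret, cert["body"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, cert["signature"])

# Hypothetical example: an admin's root key empowering a moderator's key.
root_secret = b"admin-root-secret"
cert = make_certificate("may issue newgroup in comp.*",
                        "moderator-pubkey-01", root_secret)
assert check_certificate(cert, root_secret)
assert not check_certificate(cert, b"some-other-secret")
```

The point of signing the statement and the key as one unit is that neither can be swapped out without invalidating the signature.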
The permissions system for a secured USENET would start with a certificate for the very administrators themselves. While one could (and in the end does) control security to a server using the operating system it's on, in order to make things more elegant, one might do this only once, by having the empowered administrator store a root certificate in a special file.
This certificate would not be signed. It would be in a file where certificates are treated as valid even when not signed. Only the admins can ever write this file, as secured by their OS.
The certificate in this file might say, "The holder of this key can do absolutely anything on this news system." This is not much different from the line in a unix /etc/passwd file which says:

    root:xyasdfasd:0:0:root:/root:/bin/sh

If you know unix, this line says "The holder of the password that encrypts to xyasdfasd can do anything on this system."
However, the added power here is that "anything" includes "write other certificates."
Now, when the admin wants to delegate power, they write out a certificate to the party they want to delegate to, and sign it with this magic, blessed key. This certificate can be stored anywhere, in a non-magic file or database of certificates, because it is signed and self-contained.
It's also possible to put other keys, unsigned, in the special "blessed" file, rather than signing them and putting them in the ordinary database. Both processes are equivalent; there is just a certain elegance to the root process. If you cache certificate calculations, both are equally efficient.
Now all permissions control can be done just by adding certificates to the database. A certificate becomes valid if there is a path leading up to the "system root certificate" -- that one special certificate in the special file.
And that's all there is to securing. When an article comes in that performs a privileged operation of any kind, you test if it has a chain of certificates up to the system root. If it does, you execute. If not, you reject it.
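The accept/reject test above can be sketched as a simple walk up the signer links. This toy Python model assumes every individual signature has already been cryptographically verified, so each certificate reduces to "this key was signed by that key"; the key names are hypothetical.

```python
# Each entry maps an empowered key to the key that signed its certificate.
# (Signature checking itself is elided; assume each link is verified.)
certs = {
    "hierarchy-admin": "system-root",
    "moderator":       "hierarchy-admin",
    "random-key":      "unknown-signer",
}

SYSTEM_ROOT = "system-root"   # the one unsigned cert in the blessed file

def chain_to_root(key, certs, limit=32):
    """Walk signer links upward; accept only if we reach the system root.

    The limit guards against cycles of certificates signing each other.
    """
    for _ in range(limit):
        if key == SYSTEM_ROOT:
            return True
        if key not in certs:
            return False
        key = certs[key]
    return False

assert chain_to_root("moderator", certs)       # moderator -> admin -> root
assert not chain_to_root("random-key", certs)  # dangling chain: reject
```

An incoming article performing a privileged operation is executed if its signing key passes this test, and rejected otherwise.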
Of course most admins don't want to do a lot of complex analysis of who to trust. In fact, I think 99% of admins will simply import a list of who to trust from a party they trust or pay to do this job, such as their ISP, or central company office, or company explicitly in this business.
For people who want to do that, they just bless (certify with their magic key) the key of the trusted party, and put that in their database. Then that party sends them new updated certificates of who to trust by one of the means described below.
Some sites might want to hand-tune, and they can, and should. But most sites just want the system to be secured and let the burden be shared.
The central problem is how to get agreement on who certifies what.
Today USENET works either in a "trust anybody" mode where there is no security, and anybody can post anything, or a "trust nobody" mode where all special operations, like newgroups, are screened by the local admin. Nobody likes either of those.
In a few cases, somebody has volunteered to perform a privileged operation, like newgroup, or NoCeM or PGPMoose, and they digitally sign what they do.
So far, this has worked because there has been only one volunteer, and everybody has been keen to just bless that volunteer's key. Or in some cases, it is obvious who the trust/authority should go to, and so there is no debate.
It's also worked because it doesn't hurt in most cases to give somebody the benefit of the doubt, let them run something, and revoke their permission if they abuse it.
Remarkably, this volunteer system will probably work 95% of the time into the future. Most of the time watching over things is a thankless job, and only one person wants to do it. It's rare that there is competition. However, sometimes there will be, and systems must be able to handle it.
As noted, sites will delegate to somebody else the basic permission assignment for their site. There will be some number of these parties receiving delegation. It could range from dozens to hundreds or even thousands. Where it ends up depends on pressure from two competing factors which trade-off. I'll call these "key managers." Any site that handles its own keys is its own key manager.
In the end, for most messages (except a few rare, highly privileged ones) you want to sign a message with one key, with one certificate issued by a certifying authority recognized by everybody who will process the message. That doesn't mean just one certifying authority, but it does mean that each authority has to in turn be certified up the chain to every single key manager.
What that means is that, as USENET has done in the past, we start by giving trust to "almost everybody" instead of everybody or nobody. Anybody who wants to create a certification authority for something simply creates one. And the key managers trust it automatically or almost automatically as long as the operation isn't particularly dangerous.
It's only if the volunteer abuses the trust that the key managers then act to revoke the trust. And because there are lots of key managers, you need to keep your record very clean: if you piss off a key manager and get your trust revoked, everybody who was certified by you loses their certificate with the sites who use that key manager.
That may seem like a lot of unanimity but USENET has always worked with the concept of unanimity, mostly because people don't care about all but a few operations, and because there hasn't been much abuse that required people to revoke trust. Now that there is abuse, we're looking for a way to cut off the abusers.
If I decide to run a certificate authority, I had better convince those I will certify that I will do it well. Because if I don't, they will lose their certificates too. If I allow spammers into the network, and don't stop them, chances are that sites will want to tune me out, and Key Managers will decertify me.
In fact, I will try very hard to be good, because I have to. I need the unanimous support of the net and the key managers the net trusts.
If I'm a key manager, I had better not revoke the certification of certificate authorities willy nilly either. If I do, it means the people they certified can't send postings (or do whatever other action was certified) to my clients, which may piss off the decertified, but it also pisses off the client if it's done badly.
The eventual result I predict is that you simply don't get that much abuse. The reason is that people know they can lose their certification if they abuse the net. When they know there is a penalty, they don't misbehave. Police in society don't stop crime by standing on every street corner. Their simple presence at the stationhouse stops 99% of the crime, because people can call them. On a signed USENET, you can act anonymously but you can't act without revealing your public key at the risk of losing it.
Some operations need more protection. Oddly enough they are not typically USENET operations, though rmgroup, because it is hard to undo, might qualify. The most powerful operations are actually the issuing of certificates, particularly those for the semi-powerful operations like newgroup, 3rd party cancel and so on.
We don't want one root key for USENET, so instead we will use a facility called "voting" keys. This is a certificate that says, "This keyholder has mucho power, but only if their action is signed by 8 other keys with the same power."
Then you distribute 15 of these master keys to trusted people all around the world, in different countries. For the operation of "mucho power" you require they be signed by 8 or more of them. The key managers, as proxy for the sites, pick those 15 or more people. Call them the "Cabal."
(The example above described a majority vote but in fact you can set any threshold you want. In fact, even 3 out of 15 may be enough, if you feel the probability of 3 keys being compromised is small, and it probably is.)
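The k-of-n voting check itself is tiny. A sketch, assuming the hypothetical 15-member Cabal from the example and again treating "signed by" as already verified:

```python
def voting_key_authorizes(action_signers, master_keys, threshold):
    """A 'mucho power' action is valid only if enough *distinct* master
    keyholders signed it -- a k-of-n threshold, as in the Cabal scheme."""
    valid = set(action_signers) & set(master_keys)
    return len(valid) >= threshold

# Hypothetical 15 master keys held around the world.
masters = {f"cabal-{i}" for i in range(15)}

signers = ["cabal-0", "cabal-3", "cabal-3", "cabal-7"]  # duplicate ignored
assert voting_key_authorizes(signers, masters, 3)       # 3 distinct: ok
assert not voting_key_authorizes(signers, masters, 8)   # majority: fails
```

Taking the set intersection first is what enforces "8 other keys" meaning eight different keyholders, not one keyholder signing eight times.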
This august body would issue certificates for entire 1st and 2nd level hierarchies (power to do anything in a hierarchy), top level domains, and a small number of netwide certificates for things like spam-cancel etc. They are a proxy for the hundreds of key managers, who of course can also, as proxies for their clients, bless such keys if they all got together. But there will be so many of them that they are not likely to.
Being a key manager need not be that much work. In fact, most key managers will just use what other key managers do, with a few tweaks to fit their own policies. They will program their systems to say, "Any certificate coming from any other friendly key manager is fine" and re-sign it with their own key and distribute it to their clients.
Anonymous activity is easily possible in this system. You carve out sections of anonymous namespace and let people certify within it. Thus for example, the admins of "anon.penet.fi" might set up a CA for anonymous identities within their domain. Only they could issue them in that subdomain. Somebody else might have "anon.remailer.invalid" or similar.
Chances are the key managers and Cabal will, with the blessing of the net, make some rules for anonymous certificates. For example, that anonymous space is segmented as above. They may also make rules about two-way (pseudonym) remailers stating that they use a domain with "anon." in it or similar so people can identify pseudonym postings, or groups can make policies about them. Or they may say that invisible remailers can only be used in groups that wish to have such.
They may make other rules about remailer policy, so that people can identify, for example, remailers that give out arbitrary numbers of IDs on demand to those that limit the function. (The use of arbitrary IDs on demand can be a way to get around spam detection.)
I expect most sites would get a certificate for their domain, that lets them use any E-mail address at their domain for posting, cancels etc.
They would do this by requesting one from a domain certifier. A domain certifier would be blessed by the key managers or the Cabal with a certificate enabling this power. They might just be a volunteer, or they might do it for money. Or they might be a major ISP doing it as a courtesy for customers. Down the road, I expect domain registries like internic/NSI to act as domain certifiers.
The site wants a cert for their domain. They might do this over the phone, or they might have a procedure using secure DNS to simply ask a domain certifier to use DNS to verify them. For example, it's fair to say that if you can do a variety of root-level operations at your domain, you are entitled to a certificate for it. If you have the power to put a key at http://domain-certify.mydomain.com/domain-certify/keyfile, for example, that may be enough security for USENET purposes, as it can all be automated (allowing the CA to do this cheaply or for free). The response might also be mailed to the tech contact in the whois database.
So you create your public key, and put it in that file and make the DNS entries and email the CA a request for a cert, and it reads that HTTP file and emails you back a cert.
Now with your cert, you can sign postings from your users, sign cancels from them, etc. Your certificate says, in certificate language, "This keyholder can act as .*@domain.com, netwide"
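The check a site performs against such a certificate amounts to pattern-matching the From address against the certificate's scope. A sketch, assuming the scope is a regular expression as the ".*@domain.com" notation suggests; the anchoring and dot-escaping are my own precaution, not prescribed anywhere above:

```python
import re

def key_may_post_as(cert_scope, from_address):
    """Does the certificate's scope pattern cover this From address?

    fullmatch anchors the pattern to the whole string, so a scope for
    one domain can't accidentally match a longer, hostile address.
    """
    return re.fullmatch(cert_scope, from_address) is not None

scope = r".*@domain\.com"   # "this keyholder can act as .*@domain.com"
assert key_may_post_as(scope, "alice@domain.com")
assert not key_may_post_as(scope, "alice@elsewhere.org")
assert not key_may_post_as(scope, "alice@domain.com.evil")  # anchored out
```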
If you are a user, simple E-mail challenge/response can provide enough security. You want a certificate for your email address? Your program generates the key and mails it to the CA. They sign it and E-mail it back to the address certified. If you get it, you have proven the ability to read the mail of the certified address. It may mean you cracked the system, but that's a risk we take. For added protection, a second round of response might be needed to actually get the key, to stop people from certifying mailing list addresses they are on.
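The two-round flow can be modeled as a CA that mails out a random token and only issues the certificate when the token comes back. This is a sketch of the idea, not any real CA's protocol; the function names and addresses are invented, and the "mail" steps are just function calls.

```python
import secrets

pending = {}   # address -> (challenge token the CA mailed out, public key)

def ca_receive_request(address, user_public_key):
    """Round 1: the CA mails a random token to the address being
    certified. Only someone who can read that mailbox can continue."""
    token = secrets.token_hex(16)
    pending[address] = (token, user_public_key)
    return token   # in reality, sent by email rather than returned

def ca_receive_response(address, token):
    """Round 2: the requester mails the token back; only then is the
    certificate issued. One attempt per request, which is what stops
    someone from certifying a mailing list address they merely receive."""
    saved = pending.pop(address, None)
    if saved is None or not secrets.compare_digest(saved[0], token):
        return None
    return {"certified": address, "key": saved[1]}

t = ca_receive_request("user@site.example", "user-pubkey")
cert = ca_receive_response("user@site.example", t)
assert cert is not None and cert["certified"] == "user@site.example"
```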
Some person will have volunteered to manage the keys for newsgroup hierarchies. Dave Lawrence might manage "comp" as many have trusted him to do now. That means if you want to moderate a group in comp, you ask him to verify you are the moderator and write you up a moderator's key.
If nobody is managing comp, you volunteer!
If the person managing comp doesn't act well, you complain, to a complaint board set up by key managers or the cabal. But beware, make too many frivolous complaints and they might decide to decertify you for wasting their time.
As noted above, to make USENET flow smoothly, automated CAs that can create certificates by E-mail or web are highly desired. The main problem is they must be well secured, as they will be subject to attack. Fortunately, there are already enough people working on securing such systems for other reasons.
Automated CAs can take care of almost all E-mail address and domain certification. The amount of certification for hierarchy managers, moderators, 3rd party cancelers and so on is so small that it is not a problem to do it by hand.
If a key is compromised, a party certified to issue key revocation for the key (which may be the original keyholder if they wanted to take that risk) will issue a revocation message to a high-speed special newsgroup or other channel. This makes sites put it in their local hash database of revoked keys. If a key is abused, it can quickly be turned off.
To avoid the revocation list getting too large, certs will need to expire. Their lifetime will depend on how powerful they are and how subject to compromise.
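The site-side check combining the two mechanisms is small: a certificate works only if its key is not in the local revocation database and it has not yet expired. A toy sketch with invented field names:

```python
import time

# Local hash database of revoked keys, fed by revocation messages
# arriving on the high-speed channel.
revoked = {"stolen-key-42"}

def cert_usable(cert, now=None):
    """Reject revoked keys first, then expired certificates.

    Expiry is what keeps the revocation set from growing forever:
    old revocations can be dropped once the certs they cover expire.
    """
    now = time.time() if now is None else now
    if cert["key"] in revoked:
        return False
    return now < cert["expires"]

day = 86400
assert cert_usable({"key": "good-key", "expires": time.time() + 30 * day})
assert not cert_usable({"key": "stolen-key-42",
                        "expires": time.time() + day})   # revoked
assert not cert_usable({"key": "good-key",
                        "expires": time.time() - day})   # expired
```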
In addition, keys generated by automated CAs may last a short time if there is an expectation that eventually one should get a key manually by paying for it. I don't know what level of security people will want here.
Before your cert expires, your CA will probably simply mail you a new one well in advance. You store it in your file of keys and your software should automatically know to use the latest one. "Storing in your file of keys" will probably just mean saving the email in a special file, a file named in the message (because your software, when it emailed the request for a certificate, knew where it would look for the result.)
For most USENET users, the use of the system will involve nothing. Their site will have a sitewide cert, and will certify their postings and cancels using local site security.
For the next level of users, certification will work as follows. Their software, by querying the NNTP server, notices that the site does not certify. It generates a private key for the user and stores it on disk, possibly asking the user for a pass phrase (as PGP does), but possibly not, since system security may be adequate.
The software e-mails the public version of the key and the user's email address to a CA it knows. It provides some info to include in the reply.
The CA sends back a reply saying,
"Here is your public key. Save it in the file /home/you/.trn/keybox. If your mailer is MIME aware, just invoke the save attachment and it will store it in this file."
The user does nothing more than click, and their certificate is ready in their keybox. The software, when posting, reads the keybox, gets the certificate, and uses it and the private key to sign any message or cancel it generates for the user.
For the very small number of users who are moderators, hierarchy admins, 3rd party cancelers etc. they get their certificates manually.
The site admin, when installing new USENET software, also has the software generate a private key and send a request to a certifier for the site, if the site elects to sign articles for its users. The admin has to perform some special operations to demonstrate they are a valid admin for the domain. When the key comes back they store it where the software wants it, and the software takes over from there, signing postings for users (unless the user signed themselves.)
The only keys that need to be maintained locally at sites are the keys of their key managers. They may have more than one key manager. They might delegate a different one for different hierarchies, for example.
Key managers would regularly broadcast top level keys for sites to store and cache locally. They would also maintain web sites where any key could be fetched by name if it's missing. However, top level keys would be broadcast well in advance of use, to assure this isn't needed.
Top level keys are called "keeper" keys. They have a keeper flag in their attribute language, saying they should be stored locally. This is actually a high level privileged operation.
The keeper keys will tend to belong to certification authorities. The Cabal, which hands out keeper keys, will want to keep their number down to a couple of hundred, since everybody has to cache them all. However, a small CA can exist without a keeper key -- it just requires that people it certifies keep 2 levels of certificate with postings -- the one from the small CA and the one from the larger CA.
Every posting has to carry a chain that reaches a keeper key, otherwise sites can't verify the certificate chain to something they have stored locally. Sites are free to keep keys that do not have the keeper bit set by the cabal.
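The keeper arrangement means verification at a site only needs the locally cached keeper keys plus whatever certificates travel with the posting. A toy model, with invented key names, of why a small CA without a keeper key simply attaches one more certificate level:

```python
# Keys every site caches locally (keeper bit set by the Cabal).
keeper_keys = {"big-ca-key"}

def verify_posting(attached_chain):
    """attached_chain lists (subject, signer) certificate pairs from the
    poster's key upward. The posting verifies only if the topmost signer
    is a keeper key the site already holds. (Toy model: real code would
    also check each signature and that the links actually connect.)"""
    if not attached_chain:
        return False
    top_signer = attached_chain[-1][1]
    return top_signer in keeper_keys

# A small CA without a keeper key: the poster carries two certificate
# levels -- small CA -> poster, and keeper CA -> small CA.
chain = [("poster-key", "small-ca-key"), ("small-ca-key", "big-ca-key")]
assert verify_posting(chain)
assert not verify_posting([("poster-key", "unknown-ca-key")])
```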
Indeed, the Cabal is not all that powerful. It only gets power because the whole net trusts it. There is this grand tradeoff. You can divide up the trust, but the more you do it the more complex it gets. So ideally you want to have just one party do some major things, and you arrange to trust it by using the voting system described above.
However, one might also create cabals (or even single people) with absolute power over a hierarchy, particularly if they created or own the hierarchy. Everybody in the hierarchy blesses and stores the key of that cabal, giving it powers for that hierarchy.
Centralization is never required. However, it is expected it will happen a lot because it's efficient. If it stops being good, it can be stopped at any moment, because the power is never truly centralized. Any site can pull the plug at any moment.
A secured USENET can also be made a lot simpler to operate if we implement secured "undo" features. We can trust many more people, more readily, to perform all sorts of functions if more trusted parties have the ability to undo their actions.
For example, a cancel is a form of undo for a posting. And cancel to undo spam has been the only thing that has kept USENET alive as the abusers came out in force.
An undo for most other operations, from cancel itself, to newgroup or rmgroup, might allow an easier secured USENET. In this case you don't have to worry greatly about a key being compromised or a keyholder going rogue.
If this happens, you simply have a fairly trusted party undo the damage, revoke the compromised key, and the net gets on with its life, at least where the new software featuring undo is present.
It is important to remember that while one can have keys issued to anonymous parties, i.e. parties whose "real" identity is unknown, no party can act without revealing their key. If they abuse the system, even with a legitimate anonymous identity, you may not be able to find out who they are, but you can, if people agree on it, stop their abuse by revoking the key.
People can act anonymously in a public key signature system, but they can't act in secret. Without the ability to act in secret, and with the ability to undo abuse and revoke the keys of abusers, you simply get vastly less abuse.