
Trial Newsgroup Q & A

These documents and procedures aren't quite finalized. I will be glad to take reasoned, non-flame comments on the criteria and guidelines.

Don't get confused by the term judge. The judge really only gets final say on the name, and he/she hands the decision to someone else if there is any conflict of interest. It's a volunteer helper position, not a position of power, as I see it. There is some power, but it's minimal. I'm the initial head judge, but I'll gladly pass the role on to somebody else once this is established.

Only one admin said that he wouldn't honour a newgroup for a group that had a successful trial. (The query was made in news.groups.)

Q: Hey wait a minute, can he just do this? Don't we have to have a vote or something?

A: Nope. USENET is about doing. The whole idea behind this concept is to judge groups based on people reading them, not on endless debate and votes about them. I'm just going to do this, and let it sink or swim on its own merits. That's the way I think USENET (or any anarchy-like system) should work. The current guidelines are just one concept of how to create groups, dreamed up a few years ago. They aren't the only way, and the only way we get new ways is to try them.
(Actually, after this document was written, a vote was held and passed.)

Q: Is arbitron accurate?

A: Not tremendously. It does give a roughly 95%-accurate figure for readers/site at the sites that report in, and that's the figure I've based this system on. Also, since the criterion is based on relative position within USENET rather than on absolute numbers, we are doubly protected against bias in the arbitron figures.

Error can only creep in if reading patterns on arbitron sites are markedly different from those on other sites. Even the bias towards large sites doesn't affect the relative rankings in readers/site. The main remaining bias is towards sites with active admins. That's not too bad, as by and large it only affects one reader's results. For now, NNTP readers are left out, but that is due to change soon -- perhaps within 5 months, by the time the first trials conclude. Anyway, for that to introduce a bias, you would have to assume NNTP readers have significantly different reading habits from server-based readers. Possible, but not that likely.

But most of all, whatever's wrong with arbitron, it's ten times more objective than self-selected mail-in votes.
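To make the readers/site arithmetic concrete, here is a minimal sketch in Python (a language chosen purely for illustration; the report format and all figures below are hypothetical, not real arbitron output):

    # Hypothetical arbitron-style tallies: for each group, how many
    # reporting sites carry it and the estimated readers at those sites.
    reports = {
        "news.groups":   {"sites": 300, "readers": 99},
        "rec.humor":     {"sites": 310, "readers": 450},
        "trial.example": {"sites": 150, "readers": 60},
    }

    # Readers/site normalizes away the fact that only some sites report in.
    density = {g: r["readers"] / r["sites"] for g, r in reports.items()}

    # Rank by readers/site, highest first.  The trial criterion uses a
    # group's relative position in this ranking, not its absolute figure,
    # so a uniform bias in the arbitron sample largely cancels out.
    for pos, group in enumerate(sorted(density, key=density.get, reverse=True), 1):
        print(pos, group, round(density[group], 2))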

Q: What about other criteria?

A: Open to suggestions. I think the criterion should be as simple as possible, to leave no room for argument or misinterpretation. The will of the readers is the best one I can think of: if a group is valuable, people will read it. If not, the "100 more yes than no" survey method can always be used. It can also always be used for groups like comp.mail.maps which are not intended to get human readers.
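For concreteness, that survey fallback is just a margin test (the tallies below are made up):

    # A group passes the survey fallback if YES votes exceed NO votes
    # by at least 100.  Hypothetical tallies:
    yes_votes, no_votes = 240, 95
    print((yes_votes - no_votes) >= 100)   # True: 145 more yes than no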

Other advantages:

  • Few rules to nit-pick. It's hard to do a group creation wrong.
  • Group creation is much simpler for the creator. No vote to supervise and count.
  • Use of a 75% figure (currently .33 readers/site) grows with USENET. Unlike a "100 vote" figure, it is less likely to need updating. Currently there are 535 mainstream USENET groups, so a group must place in the top 400 (400/535 is roughly 75%), which is all but 23 of the groups that had any postings last month.

The 75% number is subject to revision after the first few trials have been completed. We can't fully predict what sort of results trials will get until we've had a few, so we have to be ready to change if the evidence dictates. For example, we will collect data over time, comparing trial results at various points during the trial with the groups' readership a year down the road. We can also compare the results of USENET surveys/votes with later readership to see what correlation there is. And we can look at other factors, like postership (the number of independent people who post) and the ratio of volume to postership.
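Here is a minimal sketch of the pass test itself, assuming "top 75%" means placing above the bottom quarter of mainstream groups when ranked by readers/site. The group names and figures are made up, and the exact rounding of the cutoff is my assumption (the text pegs it at the top 400 of 535):

    def passes_trial(group, densities):
        """Pass if `group` places in the top 75% of groups when ranked
        by readers/site.  The cutoff rounding here is an assumption;
        the text uses the top 400 of 535 groups."""
        ranking = sorted(densities, key=densities.get, reverse=True)
        position = ranking.index(group) + 1    # 1 = highest readers/site
        return position <= int(0.75 * len(ranking))

    # Example with made-up figures: with only three groups the cutoff is
    # position 2, so the trial group at position 3 fails.
    sample = {"rec.humor": 1.45, "news.groups": 0.70, "trial.example": 0.35}
    print(passes_trial("trial.example", sample))   # False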

Right now arbitron takes a while to give accurate numbers, which is why the trials run for 5 months. Once we know better what the growth curves look like, we can shrink the time period and adjust the percentages as needed. By comparison, the current system takes about 7 weeks.

In fact, if every trial site becomes more regular with its readership reports, we can get good figures as fast as we want them.