openmoko at unixfu.com
Wed Feb 21 09:57:17 CET 2007
On Tue, 2007-02-20 at 14:19 -0500, Perry E. Metzger wrote:
> Usenet was (well, is, but it has been dying for a long time)
At 3.2Tb of data a day, it's far from dead.
> The bad part was that the flood
> fill mechanism means every site in the network has to carry *all*
> traffic even if no one locally is reading a particular group.
A usenet server could decide on a group-by-group or
hierarchy-by-hierarchy basis what it wanted to take in a feed. Usenet
admins were encouraged to take everything, since that kept article
propagation fast.
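For the curious: with INN this per-hierarchy selection is expressed as
wildmat patterns in the newsfeeds file. A minimal sketch (the peer name
is made up; Tf/Wnm are the usual "write a batch file for innfeed" flags):

```
# Offer peer1.example.net only comp.* and sci.*, never binaries groups.
peer1.example.net:comp.*,sci.*,!*.binaries.*:Tf,Wnm:peer1.example.net
```

A site taking a restricted feed like this pays only for the hierarchies
it actually serves, which is the whole point of the mechanism.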
> "Someday" I'd love to create a next generation Usenet that fixed all
> this -- I would distribute only "newsgroup announcements" rather than
> the newsgroups themselves, make the topic namespace subdivided by
> domain names to eliminate the "global namespace" problem, and use
> a bit-torrent like "centrally tracked but peer to peer distributed"
> transfer method to eliminate the need for giant news spools. However,
> realistically, I'll never have the month to do the work.
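None of that protocol exists, but the shape of it is easy to sketch. Here
is a toy Python model of the idea, with every name invented for
illustration (the Announcement record, the domain/topic group syntax, the
tracker field): small announcements flood between peers, while article
bodies would be fetched peer-to-peer via a tracker.

```python
# Hypothetical sketch, not a real protocol: groups are scoped by domain
# ("example.org/widgets") instead of a single global namespace, and only
# lightweight announcements propagate; bodies live behind a tracker.
from dataclasses import dataclass

@dataclass(frozen=True)
class Announcement:
    group: str        # domain-scoped name, e.g. "example.org/widgets"
    message_id: str   # globally unique article ID
    tracker: str      # where peers holding the body register
    size: int         # article size in bytes

def split_group(group: str) -> tuple[str, str]:
    """Split a domain-scoped group name into (domain, topic)."""
    domain, _, topic = group.partition("/")
    if not domain or not topic:
        raise ValueError(f"not a domain-scoped group: {group!r}")
    return domain, topic

def wants(subscriptions: set[str], group: str) -> bool:
    """Relay an announcement only if the site subscribes to the group's
    domain or to the exact group (cf. per-hierarchy feeds today)."""
    domain, _ = split_group(group)
    return domain in subscriptions or group in subscriptions

ann = Announcement("example.org/widgets", "<abc123@example.org>",
                   "https://tracker.example.org/announce", 5120)
print(wants({"example.org"}, ann.group))   # → True
```

The win over flood fill is that an announcement is a few hundred bytes
regardless of article size, so even a site subscribed to everything only
stores what its readers actually pull.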
There is a project there for sure. Usenet isn't broken - it just didn't
scale well. Keeping up with that volume of data now has to be done by
dedicated providers.
I remember when I ran an ISP, I siphoned off an almost-full newsfeed to
my home machine. Something that isn't really possible anymore :-)