The problem wasn't with the competing versions, or with the replacements spawned to compete with them in turn; I've never had a problem with a newsreader failing to read RSS. The problem is simply that RSS doesn't scale, due to a bad choice of distribution method.
When it comes to distributing regularly updated content in the form of
articles or snippets as RSS is used for, there are two fundamental methods
that a content provider can choose from. They can either leave the resource
out in the open and let people come now and again to see if it's
been updated (the pull method, since clients pull the data
for themselves) or they can allow people to subscribe to be sent the new
data once it has been created (the push method, since the data is
pushed to the client).
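To make the distinction concrete, here's a rough sketch in Python. The feed URL and the in-process subscriber list are just placeholders for illustration, not part of any real protocol:

```python
import time
import urllib.request

FEED_URL = "https://example.com/index.rss"  # placeholder feed URL

def pull_loop(poll_seconds=1800):
    """Pull: the client keeps asking the server whether anything changed."""
    last_seen = None
    while True:
        with urllib.request.urlopen(FEED_URL) as resp:
            body = resp.read()
        if body != last_seen:
            last_seen = body
            print("feed changed; handing it to the newsreader")
        time.sleep(poll_seconds)

# Push: the client registers interest once, then does nothing until
# the provider (or the network) delivers new content to it.
subscribers = []

def subscribe(handle_update):
    subscribers.append(handle_update)

def publish(new_feed):
    """Run by the content provider, once, whenever something new is written."""
    for deliver in subscribers:
        deliver(new_feed)
```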
Now, it's often hard to say in advance which method a distribution protocol
will follow, because designers don't often explicitly think to themselves
"Hmm, I think I'll implement this as a pull protocol.". They start with an
idea and maybe a mental approximation of an implementation, polishing it
until it becomes sufficiently easy to use and handy. And most of the time,
the developers get it right. But in this case, they got it wrong.
Pull works best when there are larger updates and changes and documents need
to be stored in one accessible location for people to access when they
want them. Push works best for smaller updates which are time-dependent
and do not necessarily need to be kept in one central location. If pull is
used for an application where push would be more appropriate, clients need
to poll the content provider at random intervals. There isn't any simple
way to decentralise the polling and so it can become a fatal drain on
resources once the number of clients reaches the thousands.
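A quick back-of-envelope calculation shows how fast that adds up. The numbers here are invented for the sake of illustration, not measurements of any real site:

```python
# Back-of-envelope cost of naive polling, with made-up but plausible numbers.
subscribers = 10_000
polls_per_day = 48            # each client checks every 30 minutes
feed_size_kb = 30             # a typical uncompressed RSS file

daily_transfer_gb = subscribers * polls_per_day * feed_size_kb / 1024 / 1024
print(f"~{daily_transfer_gb:.1f} GB per day")   # roughly 13.7 GB, almost all of it unchanged data
```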
RSS is not a good mechanism for getting new content to a large number of
people; it's just too hit and miss. Still, one would expect it to be usable
even if it is pretty suboptimal. After all, it doesn't use up that much
bandwidth and it doesn't have that much overhead. It'd take complete idiocy
on the part of RSS designers, newsreader coders and clients to make it
unworkable in terms of bandwidth.
Unfortunately, that's exactly the combination we've got, so it's obvious to me that in its current state, RSS is doomed.
RSS newsreaders harvest RSS in terrible ways. Many hammer the server by checking for updates every few minutes. Other popular ones cause what is effectively a DDoS attack, since they are programmed to always check at the same minute past the hour, causing an hourly rush of bandwidth use. Still more don't use gzip compression, and ignore the HTTP headers in the server's response telling them that there are no new updates to download, so they continually download stale RSS files. Even when they don't, they redownload the whole file when so much as a single character changes.
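None of this is hard to get right, either. Here's a rough sketch of a client that asks for gzip and honours conditional GET; the URL is a placeholder, and a real newsreader would also persist the ETag and Last-Modified values between runs:

```python
import gzip
import urllib.request
import urllib.error

FEED_URL = "https://example.com/index.rss"   # placeholder feed URL

def fetch_feed(etag=None, last_modified=None):
    """Fetch an RSS feed politely: ask for gzip and honour conditional GET.

    Returns (body, etag, last_modified); body is None when the server
    answers 304 Not Modified, i.e. there is nothing new to download.
    """
    req = urllib.request.Request(FEED_URL)
    req.add_header("Accept-Encoding", "gzip")
    if etag:
        req.add_header("If-None-Match", etag)
    if last_modified:
        req.add_header("If-Modified-Since", last_modified)
    try:
        with urllib.request.urlopen(req) as resp:
            body = resp.read()
            if resp.headers.get("Content-Encoding") == "gzip":
                body = gzip.decompress(body)
            return body, resp.headers.get("ETag"), resp.headers.get("Last-Modified")
    except urllib.error.HTTPError as err:
        if err.code == 304:               # nothing changed since last time
            return None, etag, last_modified
        raise
```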
These errors of implementation aren't too bad alone, but when combined with
each other and the sheer mass of naive users, they are deadly. RSS needs to
be moved to a new distribution network which can handle bursts like this.
A decentralised peer-to-peer network is probably the most appropriate
solution. Unfortunately, most existing P2P software is based on a pull
mechanism: you search for what you want and download it. There's no way to push updates out to subscribers with the conventional software we're used to. The closest we've come is maybe BitTorrent, but it's still based on the pull model.
(I can understand the scepticism regarding push. Push content was supposed to be the saviour of the Internet back in the last century, and it flopped miserably for exactly this kind of application. But now that it's plain that pull just isn't going to work in this instance, we may as well switch to push.)
I started cruising around for a peer-to-peer system that followed push
instead of pull. The closest I've found is konspire2b (also known as k2b or kast); it's designed for distributing files to a series of subscribers. The key difference is that the bandwidth used by the content
subscribers. The key difference is that the bandwidth used by the content
provider is not directly proportional to the number of subscribers;
bandwidth limitations are worked around by sharing the load out amongst channel subscribers via retransmission. There's a convenient web-based user interface, and the source of files cannot be faked by others. It fits the bill nicely.
konspire2b was designed specifically for "zero day" distribution of
high-demand files that will be in low demand shortly after their release.
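To see why the provider's bandwidth stays roughly constant, consider a simple retransmission tree. This is only an illustration of the general idea, not konspire2b's actual algorithm:

```python
import math

# If every node that receives the file forwards it to `fanout` other
# subscribers, the provider only ever uploads `fanout` copies itself,
# no matter how many people are subscribed to the channel.
fanout = 4
subscribers = 100_000

provider_uploads = fanout                                   # constant
hops_to_reach_everyone = math.ceil(math.log(subscribers, fanout))

print(f"provider uploads {provider_uploads} copies")
print(f"the last subscriber gets the file after ~{hops_to_reach_everyone} hops")
```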
For a random client, all that's needed is to leave a daemon running in
the background to pick up updates. Subscription can be done just by following
a normal link embedded in a web page
and confirming. When the updated RSS file is sent out, it's downloaded
automatically. A normal newsreader can then be used to read it.
That's essentially the solution. It can be done with readily-available
software. The only problem now is getting it going; the process hasn't been
streamlined because it hasn't occurred to enough people before. It is
slightly more difficult to run a dedicated broadcast daemon in addition to a
web server to send out blog entries, and it might be very difficult for
shared bloghosting services to use k2b for individual users. Something
more specialised for this purpose would help, and might be what's needed before this can really take off.
Still, I have at least given a justification for changing the current state
of affairs and pointed to a stopgap solution. Yes, there are probably other ways of doing it. What's needed for things to change is a series of popular bloggers adopting it, or at least pushing for something different. This could take a while.