Back in the days when Napster was up and running, Linux users decided they wanted to be part of the file-swapping revolution, so the Napster protocol was reverse-engineered and published. A client was written for Linux, and since then quite a number of clients have appeared, all of which connected to the Napster network.
Over the months, Napster enhanced its protocol, and the Opennap folks followed suit. Soon Opennap started extending the protocol itself, adding features like server linking and searching for specific media types, as well as making the chat functionality more IRC-like.
But Napster was soon shut down by the courts. Opennap servers remained online, and the third-party clients continued to connect to them, with a list of running servers maintained by Napigator.
The Problem with Centralization
The Opennap servers are scattered all over the globe, but since they work in exactly the same way as the original Napster servers, the same court order that shut down Napster could be used to shut them down. And seeing that the US has placed sanctions on Ukraine for tolerating music piracy, similar pressure on the developing countries where these servers are run could result in a shutdown of all servers within their borders, or in the arrest of server owners from other countries when they visit the US. It is also possible that the host countries would pass homegrown legislation of their own, forbidding the abetting of file sharing.
A prominent weak point in the current Opennap structure is the Napigator page. If this server list is taken down, and all alternative lists are attacked as well, file sharing over Opennap could become uncomfortable enough to kill the protocol.
But what if Opennap could function in a truly peer-to-peer way, in addition to the server-based method? Then, if the public servers were ever shut down, the network would fall back on a (perhaps less efficient) peer-to-peer model and continue running right along. The music industry would see the futility of suing and, hopefully, wouldn't even try.
An idea for a solution
There is a lot of open code in the Opennap scene, and the existing servers already have the ability to link with one another. Would it not be a good idea to restructure the network so that any client could act as a server?
This is an old idea. The Gnutella working group has been suggesting this for a long time, yet has done nothing about it. I proposed it for the Gnutella protocol a year and a half ago; everybody agreed it was necessary, but nobody wanted to actually implement it.
Recently, however, the developers at FastTrack seem to have successfully built such a network. Some of the Kazaa, Grokster and (until recently) Morpheus clients acted as servers, called supernodes, and served other lower-bandwidth clients. The user never knew whether his client was also acting as a supernode or not.
If the Opennap network could be changed so that the same structure were possible, there could be real decentralization. Best of all would be if it could be done without modifying the existing clients.
First of all, we have to take stock of what is available. There are two open-source servers: SlavaNap for Windows, and Opennap for *nix. There are also a number of open-source clients. And there is Napigator, which maintains a list of running servers, with information about them, on its website, and which has a protocol that lets servers log themselves in and out.
For the new model, a server would have to be bundled with each client, and the servers would have to be changed so that they can run invisibly to the user and be fully controlled by the client through a software interface.
Every client would need the inherent ability to function as a supernode. If we limit the network to 1 million users (FastTrack currently has 850,000) and assume that each supernode serves 500 users (which should require less than 10 KB/s of upload bandwidth), we will need 2,000 supernodes to handle the entire network.
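As a sanity check on those figures, the arithmetic is simple (the user count and per-supernode capacity are the assumptions stated above):

```python
# Capacity estimate for the proposed network, using the figures
# from the text: 1,000,000 users, 500 clients served per supernode.
TOTAL_USERS = 1_000_000
CLIENTS_PER_SUPERNODE = 500

supernodes_needed = TOTAL_USERS // CLIENTS_PER_SUPERNODE
print(supernodes_needed)  # 2000
```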
There are two main issues in supernode-client and supernode-supernode communication to be taken care of:
- Making sure that all clients always have a supernode to connect to, and that if their supernode dies, they are immediately reconnected to another one
- Searching through the entire network without stressing the supernodes
All supernodes are connected to one another in a ring. Whenever a new supernode joins the ring, it gets a snapshot of the entire ring, so that if the supernode it is connected to drops out, it can look for another supernode from its list and recreate the ring.
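The snapshot-and-reconnect behaviour could look roughly like this. This is only a sketch; the class and method names are invented, and a real implementation would deal with network I/O, not an in-memory list:

```python
# Sketch of ring maintenance: each supernode keeps a snapshot of the
# full ring (an ordered list of supernode addresses) and uses it to
# repair the ring when its neighbour drops out.

class SupernodeRing:
    def __init__(self, snapshot):
        # snapshot: ordered list of supernode addresses, ours included
        self.snapshot = list(snapshot)

    def next_peer(self, own_addr):
        """The supernode we hold our outgoing ring connection to."""
        i = self.snapshot.index(own_addr)
        return self.snapshot[(i + 1) % len(self.snapshot)]

    def peer_died(self, own_addr, dead_addr):
        """Drop a dead supernode from the snapshot and return the next
        live supernode to reconnect to, recreating the ring."""
        self.snapshot.remove(dead_addr)
        return self.next_peer(own_addr)
```

For example, in a ring of three supernodes a→b→c→a, if b dies, a consults its snapshot and reconnects directly to c.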
Clients are promoted to supernodes by other supernodes. When a supernode reaches its maximum number of users, it looks among its clients for the one best suited to work as a supernode, identifies it, and requests it to become a supernode. While this negotiation is going on, it continues taking on new users; as soon as it has a stable new supernode, it redirects its excess users there.
That supernode in turn will do the same when it reaches its maximum number of users. The supernode at the end of the line always connects back to the very first supernode to complete the ring.
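The promotion step might be sketched like this. All names are invented, and the user limit is kept tiny for illustration (the text suggests 500); the candidate is chosen by the self-reported score discussed later:

```python
# Hypothetical supernode promotion logic: accept users up to a limit,
# queue the excess for redirection, and pick the best-scoring client
# as the next supernode.
from dataclasses import dataclass, field

MAX_USERS = 3  # tiny limit for demonstration; the text suggests 500

@dataclass
class Client:
    name: str
    score: int  # self-reported uptime/bandwidth rating

@dataclass
class Supernode:
    users: list = field(default_factory=list)
    excess: list = field(default_factory=list)  # awaiting redirection

    def handle_new_user(self, user):
        if len(self.users) < MAX_USERS:
            self.users.append(user)
        else:
            # Over capacity: keep serving, but queue this user to be
            # redirected once a new supernode has been promoted.
            self.excess.append(user)

    def pick_promotion_candidate(self):
        # The client best suited to become the next supernode.
        return max(self.users, key=lambda u: u.score)
```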
This way, there is always a direct two-way serial connection between all supernodes in the ring. Also, every supernode knows its relationship to every other supernode, which provides some security against rogue clients (the rogue/bad-user problem will have to be dealt with separately, after the theory has been shown to work).
A search request is first handled locally by the supernode the client is connected to: the other 500 peers are searched, and the results are returned. If the number of results is less than 100, the supernode sends a search request to the supernode it is connected to, asking for the number of results still missing. For example, if it found 47 files locally, it asks the next server for 53 more results. If that server finds fewer than 53, it in turn requests the missing number from the server it is connected to.
This means that if the user searches for Britney Spears, the load falls only on the local server, since the 100 results can probably be found locally and returned at once; there is no bandwidth load on the other servers. If he searches for Fela Kuti, however, the search propagates widely, but there is still little bandwidth load, as only a few search results have to be routed.
This has a clear advantage over Gnutella, where both search queries and search results are routed over all clients.
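The propagating search can be sketched as follows. This is a simplification with invented names: the ring is modelled as a list of objects, and each hop is asked only for the number of results still missing:

```python
# Sketch of the propagating search: each supernode searches its own
# clients first and forwards the request around the ring only for the
# number of results still missing, up to the target of 100.

WANTED = 100  # target number of results per search

def ring_search(supernodes, start, query):
    """supernodes: the ring, as an ordered list of objects with a
    .local_search(query) method returning a list of results."""
    results = []
    n = len(supernodes)
    for step in range(n):  # at most once around the ring
        missing = WANTED - len(results)
        if missing <= 0:
            break  # enough results; stop propagating
        node = supernodes[(start + step) % n]
        results.extend(node.local_search(query)[:missing])
    return results
```

A popular query is satisfied at the first supernode and never leaves it; a rare one travels the ring, but each hop carries only a small request and a handful of results.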
Can existing code be used for this?
Here is what is needed, and where it already exists:
- Searching for a particular number of files: already present in the current servers.
- Redirecting to other servers: present in SlavaNap.
- Linking servers: present in SlavaNap and Opennap.
- Changing the servers so that they can route search results: needs to be added.
- Clients have to analyze themselves, award themselves points based on their uptime and bandwidth, and supply this information to the supernode they are connected to: needs to be added.
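One possible self-scoring scheme, to make the last point concrete. The inputs and weighting here are entirely invented; a real formula would need tuning, and the client would measure these values rather than receive them as arguments:

```python
# Hypothetical client self-scoring: a client rates its own fitness to
# serve as a supernode from its uptime and measured upload bandwidth.

def supernode_score(uptime_hours, upload_kbps, behind_firewall):
    if behind_firewall:
        return 0  # a firewalled client cannot accept connections
    # Weight sustained bandwidth more heavily than raw uptime, and cap
    # the uptime contribution at one day.
    return min(uptime_hours, 24) + upload_kbps * 2
```

The client would report this score to its supernode, which uses it when choosing a promotion candidate.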
This means that the presently existing clients would still work, while new clients with supernode abilities would be added to the network to handle their requests.
This is a rough proposal, and I am sure there are plenty of problems. But if they can be worked out and an implementation is started, I think this could serve as a good replacement for Gnutella, and would reduce our dependence on the whims of companies.