Decentralizing Opennap

By Mark Essien in Op-Ed
Wed Mar 13, 2002 at 12:00:24 PM EST
Tags: Software

Presently, there are a number of open peer-to-peer protocols. Prominent among them are Gnutella, gIFT, Opennap and Freenet.

None of them works perfectly: Gnutella is inherently flawed and does not scale properly; gIFT is being developed slowly and is only available for Linux; Freenet is aimed more toward anonymity than quick file search and transfer, and is slow and cumbersome. The problem with Opennap is that it is centralized.

I propose here a method of using the existing code of the Opennap servers and clients to create a new decentralized network, using the Opennap protocol and without requiring any client to be modified.


Background

Back in the days when Napster was up and running, Linux users decided that they wanted to be part of the swap revolution. So the Napster protocol was reverse engineered and published. A client was written for Linux, and since then quite a number of clients have been published, all of which connected to the Napster network.

Over the months, Napster enhanced their protocol, and the Opennap folks followed suit. Soon Opennap started extending the protocol itself, adding features like server linking and searching for specific media types, as well as making the chat functionality more IRC-like.

But soon Napster was shut down by the courts. The Opennap servers remained online, and the third-party clients continued to access them, with a list of online servers maintained by Napigator.

The Problem with Centralization

The Opennap servers are scattered all over the globe, but since they work in exactly the same way as the original Napster servers, the same court order that shut down Napster could be used to shut them down. And seeing as the US has placed sanctions on Ukraine for allowing music piracy, similar threats against the third- or second-world countries where these servers run could result in a shutdown of all servers in those countries, or in arrests of server owners from other countries when they visit the US. It is also possible that the countries hosting the servers would pass their own homegrown legislation forbidding the abetting of file sharing.

A prominent weak point in the current Opennap structure is the Napigator page. If this list of servers is forced down, and all alternative lists are attacked as well, file sharing with Opennap could become uncomfortable enough to kill the protocol.

But what if Opennap could function in a truly peer-to-peer way, in addition to the server-based method? Then, if the public servers were ever shut down, the network would fall back on a (maybe less efficient) peer-to-peer model and continue running right along. The music industry would see the futility of suing and, hopefully, wouldn't even try.

An idea for a solution

There is a lot of open code in the Opennap scene, and the existing servers already have the ability to link with one another. Would it not be a good idea to restructure the network in a way that allows any client to act as a server?

This is an old idea. The Gnutella working group has been suggesting it for a long time, yet doing nothing. I proposed it for the Gnutella protocol a year and a half ago; everybody agreed it was necessary, but nobody wanted to actually implement it.

Recently, however, the developers at FastTrack seem to have successfully created such a network. Some of the Kazaa, Grokster and (until recently) Morpheus clients acted as servers (called supernodes) and served other, lower-bandwidth clients. The user never knew whether his client was also a supernode or not.

If the Opennap network could be changed so that the same structure were possible, there could be real decentralization. Best of all would be if it could be done without modifying the existing clients.

Implementation

First of all, we have to take stock of what is available. There are two open-sourced servers - SlavaNap for Windows and Opennap for *nix - and a number of open source clients. There is also Napigator, which maintains a list of running servers, with information about them, on its website, and which has a protocol that allows the servers to log themselves in and out.

For the new model, a server would have to be bundled with each client, and the servers would have to be changed so that they can run without being visible to the user and can be fully controlled by the client through a software interface.

Every client would need the inherent ability to function as a supernode. We will limit the network to 1 million users (FastTrack currently has 850,000); assuming that each supernode serves 500 users (which will require less than 10 KB/sec of upload bandwidth), we will need 2,000 supernodes to handle the entire network.
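
A quick back-of-the-envelope check of those figures in Python (the 500-users-per-supernode and 10 KB/sec numbers are the assumptions stated above, not measurements):

    TARGET_USERS = 1000000
    USERS_PER_SUPERNODE = 500
    UPLOAD_PER_SUPERNODE_KBPS = 10  # KB/sec, assumed above

    supernodes = TARGET_USERS // USERS_PER_SUPERNODE
    print(supernodes)                              # 2000
    print(supernodes * UPLOAD_PER_SUPERNODE_KBPS)  # 20000 KB/sec aggregate upload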

There are two main issues to be taken care of:

  • Making sure that all clients always have a supernode to connect to, and that if their supernode dies, they are immediately reconnected to another one
  • Searching through the entire network without stressing the supernodes

Supernode-client and supernode-supernode communication

All supernodes are connected to one another in a ring. Whenever a new supernode joins the ring, it gets a snapshot of the entire ring, so that if the supernode it is connected to drops out, it can look for another supernode from its list and recreate the ring.

Clients are promoted to supernodes by other supernodes. When a supernode reaches its maximum number of users, it looks among its clients to see which is best suited for work as a supernode. It identifies one and requests it to become a supernode. While this negotiation is going on, it continues taking on new users; as soon as it has a stable new supernode, it redirects its excess users to it.

That supernode in turn will do the same when it reaches its maximum number of users. The supernode at the end of the line always connects back to the very first supernode to form the ring.

This way, there is always a direct two-way serial connection between every supernode in the ring. Also, every supernode knows its relationship to every other supernode, allowing some security against rogue clients (the rogue/bad user problem is something that will have to be dealt with separately, after the theory has been shown to work).
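
To make the bookkeeping concrete, here is a minimal Python sketch. The class, its method names and the promotion policy are illustrative assumptions on my part - nothing here is taken from the Opennap protocol, and snapshot propagation and all network I/O are omitted:

    class Supernode:
        MAX_CLIENTS = 500

        def __init__(self, address):
            self.address = address
            self.clients = []          # (address, score) pairs of ordinary clients
            self.ring = [address]      # snapshot of every supernode in the ring
            self.successor = address   # next hop; a lone node points at itself

        def accept(self, client_addr, client_score):
            """Take on a client; promote the best-suited one when full."""
            self.clients.append((client_addr, client_score))
            if len(self.clients) <= self.MAX_CLIENTS:
                return None
            # pick the client best suited to supernode work (highest score)
            best = max(self.clients, key=lambda c: c[1])
            self.clients.remove(best)
            new_node = Supernode(best[0])
            # splice the new supernode into the ring right after ourselves
            new_node.ring = self.ring + [new_node.address]
            new_node.successor = self.successor
            self.successor = new_node.address
            # (omitted: pushing the updated snapshot to the other members
            # and redirecting our excess clients to new_node)
            return new_node

        def successor_died(self, dead_addr):
            """Use the cached snapshot to close the ring again."""
            self.ring.remove(dead_addr)
            me = self.ring.index(self.address)
            self.successor = self.ring[(me + 1) % len(self.ring)]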

Search routing

A search request is first handled locally by the supernode the client is connected to - the other 500 peers are searched, and the results are returned. If the number of results is fewer than 100, the supernode sends a search request to the supernode it is connected to, requesting the number of files missing. For example, if it found 47 files locally, it asks the next server to return 53 search results. If that server finds fewer than 53 results, it requests the missing number from the server it is connected to, and so on.

This means that if a user searches for Britney Spears, the load stays on the local server, as the 100 results are probably found locally and returned at once; there is no bandwidth load on the other servers. If he searches for Fela Kuti, however, the search gets propagated wide, but there is still little bandwidth load, as there are only a few search results to route.

This has a clear advantage over Gnutella, where search results and search queries are routed over all clients.
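
A short sketch of that deficit-driven routing. The class, the 100-result target and the substring match are all illustrative; a real ring would also have to stop a query once it has travelled the full circle, which is omitted here:

    WANTED = 100  # target number of results per query, as above

    class SearchNode:
        def __init__(self, library, successor=None):
            self.library = library      # filenames shared by this node's clients
            self.successor = successor  # next supernode in the ring

        def search(self, query, wanted=WANTED):
            """Answer locally first; forward only the shortfall down the ring."""
            hits = [f for f in self.library if query.lower() in f.lower()]
            missing = wanted - len(hits)
            if missing > 0 and self.successor is not None:
                hits += self.successor.search(query, missing)
            return hits[:wanted]

    # 150 local matches: served entirely locally. 10 remote matches: the
    # neighbour is asked for only the missing number of results.
    far = SearchNode(["fela_kuti_%d.mp3" % i for i in range(10)])
    near = SearchNode(["britney_%d.mp3" % i for i in range(150)], successor=far)
    print(len(near.search("britney")))   # 100
    print(len(near.search("fela")))      # 10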

Existing code?

Can existing code be used for this?

  • Searching for a particular number of files: already present in the current servers.
  • Redirecting to other servers: present in SlavaNap.
  • Linking servers: present in SlavaNap and Opennap.
What needs to be added:
  • The servers must be changed so that they can route search results.
  • Clients have to analyze themselves and award themselves points based on their uptime and bandwidth, and supply this info to the supernode they are connected to (a rough scoring sketch follows below).
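
For that last point, something along the following lines would do. The weights, the 24-hour cap and the 0-100 scale are invented purely for illustration:

    def supernode_score(uptime_hours, upload_kbps, firewalled):
        """Rate this client's fitness for supernode duty on a 0-100 scale."""
        if firewalled:
            return 0  # a firewalled host cannot accept inbound connections
        uptime_points = min(uptime_hours, 24) / 24.0 * 50      # up to 50 points
        bandwidth_points = min(upload_kbps, 128) / 128.0 * 50  # up to 50 points
        return int(round(uptime_points + bandwidth_points))

    # The client would report this figure to its supernode at login.
    print(supernode_score(uptime_hours=12, upload_kbps=64, firewalled=False))  # 50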

This means that the existing clients would still work, and new clients with supernode abilities would be added to the network to handle the requests of those older clients.

Conclusion

This is a rough proposal, and I am sure there are lots of problems. If these problems can be worked out and an implementation is started, I think that this will function as a good replacement for Gnutella, and will reduce our dependence on the whims of companies.


Poll
Is this proposal feasible?
o Yes 40%
o No 3%
o I think so, but I have too little experience to say for sure 43%
o It sounds wrong, but I have too little experience to say for sure 3%
o We do not need this. Gnutella is good enough for anybody 10%
o Filesharing is piracy, and should not be encouraged 0%

Votes: 30
Results | Other Polls



Decentralizing Opennap | 37 comments (33 topical, 4 editorial, 0 hidden)
While you're at it... (4.66 / 3) (#4)
by am3nhot3p on Wed Mar 13, 2002 at 09:57:26 AM EST

...here are a few extra ideas. Over the last few days, I've also been thinking about a new p2p system for music files. I don't have time to work on it now, but it seems that we need a few extra features to get really reliable sharing:

  • Hashing, so that the same file can be identified regardless of its name
  • Stripping of ID3 etc. tags, because with current networks, a minor alteration of a tag results in a different file
  • Multi-source, simultaneous downloading
I realise that some of these have already been implemented in some clients. Multi-source downloading, in particular, is supported by the protocol insofar as one can start transferring from any point in the file. There was also a search based on file size in the Napster protocol, IIRC.

In addition, what if un-firewalled machines were to donate some bandwidth to route transfers between mutually firewalled pairs of machines? It could increase the number of successful transfers, increasing availability of files (at the expense of bandwidth efficiency).

I did work on a Napster client before it was shut down, but after the death of Napster, there wasn't much incentive to continue. I do remember, though, that the protocol can be easily extended. New features could easily be added without breaking existing clients, which would give a decent starting size to a 'super-OpenNap' community. Building on an existing network seems like a great start.

I like the ideas presented in Mark's article. I'd love to work on this, just as soon as I've finished writing all this ASP code that I have to do by next week...



Excellent points, also (5.00 / 1) (#6)
by Hopfrog on Wed Mar 13, 2002 at 10:12:03 AM EST

Rather than stripping the ID3 tag, one hashes the file starting at a byte offset after the tag. The tag is then used for informational purposes only.

Opennap supports multiple source downloads. From the docs, there is

215 (0xd7) request resume [CLIENT]
Client is requesting a list of all users which have the file with the characteristics. The server responds with a list of 216 messages for each match, followed by a 217 message to terminate the list.

If one uses this, in combination with the search-by-file-size command, one can find enough sources to make an auto-resume like Kazaa's possible.
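
In client pseudocode, that flow might look like the following sketch. Only the opcodes 215/216/217 come from the docs quoted above; the message framing and the stub connection are invented for illustration:

    class FakeConn:
        """Stub standing in for a real server connection (illustration only)."""
        def __init__(self, replies):
            self.replies = list(replies)
        def send(self, opcode, payload):
            pass  # a real client would frame and transmit the message here
        def recv(self):
            return self.replies.pop(0)

    def find_sources(conn, checksum, filesize):
        """Send a 215 resume request; collect 216 matches until 217 ends the list."""
        conn.send(215, "%s %d" % (checksum, filesize))
        sources = []
        while True:
            opcode, payload = conn.recv()
            if opcode == 217:        # terminator
                break
            if opcode == 216:        # one user holding a matching file
                sources.append(payload)
        return sources               # then download different ranges in parallel

    conn = FakeConn([(216, "alice"), (216, "bob"), (217, "")])
    print(find_sources(conn, "ab12cd34", 4000000))   # ['alice', 'bob']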

The donation idea is also nice, though not so trivial to implement, I would think. It would help a lot if the server allowed one to specify in the search that only non-firewalled results should be returned, though.

Hop.

[ Parent ]

ID3 Tags (4.50 / 2) (#12)
by am3nhot3p on Wed Mar 13, 2002 at 10:33:50 AM EST

That's almost what I meant - except that the ID3v2 tags are at the start of the file, and the ID3v1 tags are at the end, so you'd have to take the middle range of bytes between the two sets of tags (if present). The tag data can be used for searching, but isn't hashed, nor is it actually removed from the existing file on disk!
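
For what it's worth, here is a sketch of that "middle range" hash in Python. It uses the standard tag layouts (a 10-byte ID3v2 header with a syncsafe size at the start, a fixed 128-byte ID3v1 block at the end) but ignores rarer cases such as ID3v2 footers:

    import hashlib

    def audio_hash(path):
        """Hash only the audio bytes, skipping ID3v2 (front) and ID3v1 (back)."""
        with open(path, "rb") as f:
            data = f.read()
        start, end = 0, len(data)
        if data[:3] == b"ID3" and len(data) > 10:
            # ID3v2 size: four bytes of seven bits each (syncsafe integer)
            size = (data[6] << 21) | (data[7] << 14) | (data[8] << 7) | data[9]
            start = 10 + size
        if len(data) >= 128 and data[-128:-125] == b"TAG":
            end = len(data) - 128
        return hashlib.sha1(data[start:end]).hexdigest()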

Good point on the 215-7 block - it's been a while since I read through the specs. In that case, by simply hashing and transferring the non-ID3 data, that would handle tag variation without breaking any current implementations (I think).



[ Parent ]
I agree (1.00 / 1) (#16)
by Mark Essien on Wed Mar 13, 2002 at 10:43:29 AM EST



[ Parent ]
source code (4.00 / 1) (#29)
by Delirium on Wed Mar 13, 2002 at 03:30:38 PM EST

Bitzi's hashing already implements this (an "audio part only" hash for MP3 files), and it's public domain so if anyone's interested in source code it can be gotten from their site. Of course it'd be preferable if you'd just implement Bitzi lookups directly. =]

[ Parent ]
Gnutella has all of these in the works (none / 0) (#10)
by murklamannen on Wed Mar 13, 2002 at 10:29:01 AM EST

All of your proposals are being discussed for implementation in Gnutella, and some are already implemented.

[ Parent ]
Bitzi (5.00 / 1) (#18)
by Delirium on Wed Mar 13, 2002 at 11:44:42 AM EST

In particular, LimeWire is working on implementing interoperation with Bitzi, an on-line repository of metadata (i.e. users can provide info and comments about a particular file) indexed by hashes. Bitzi also stores an "audio part only" hash to allow files that differ only in id3 tags to be recognized as the same file.

[ Parent ]
Regarding firewall routing (none / 0) (#22)
by Mark Essien on Wed Mar 13, 2002 at 12:53:50 PM EST

I would expect it to look this way:

Client 1 (which is firewalled) requests a file from client 2. The download ack tells client 1 that client 2 is firewalled too. Client 1 tells the server that it needs a non-firewalled user, and the server returns the name of one. Client 1 sends a download request to the non-firewalled user, but the location of the file is on client 2 - for example, the file name could be "opennap://client2/c:/download/file.exe". The non-firewalled client would then connect to client 2, start downloading the file, and route the received bytes over to client 1.

Twice as much bandwidth is used when compared to normal downloading, but almost all files can then be downloaded.
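
The relay's core loop is simple. A sketch, assuming both firewalled peers have already connected out to the un-firewalled relay (how the server brokers those two connections is the part the protocol would have to specify):

    def relay(source_sock, requester_sock, chunk=8192):
        """Copy bytes from client 2's connection through to client 1's."""
        while True:
            data = source_sock.recv(chunk)
            if not data:
                break
            requester_sock.sendall(data)   # our upload equals our download
        requester_sock.close()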

This method can also be used as a Freenet-like tool to mask the real identity of the person downloading. So not only are your downloads more reliable, you also get a layer of anonymity.

Excellent idea, I must say.

M.E

[ Parent ]

while this is nice (none / 0) (#28)
by Delirium on Wed Mar 13, 2002 at 03:28:45 PM EST

It also worsens the bandwidth problem, which from what I understand is one of the bigger issues with full p2p systems. Is there really enough spare bandwidth lying around that the network can afford to have a significant percentage of transfers being double-transferred?

[ Parent ]
It doesn't affect the critical bandwidth (none / 0) (#31)
by Hopfrog on Wed Mar 13, 2002 at 04:10:10 PM EST

What matters is the bandwidth between supernodes; that has to be conserved and used efficiently. This method just uses up the bandwidth of 3 peers, who hopefully are not supernodes (one could spare all supernodes from this duty, for example), and has no effect on the network itself.

Hop.

[ Parent ]

limewire and bearshare (3.00 / 1) (#5)
by mpalczew on Wed Mar 13, 2002 at 10:05:27 AM EST

LimeWire and BearShare already implement a supernode feature, and they are working on caching. I've been doing great every time I connect, and the old clients are slowly fading out. One of the things I think is great about Gnutella is the network's ability to evolve.
-- Death to all Fanatics!
Use FastTrack protocol instead (4.00 / 1) (#8)
by hardburn on Wed Mar 13, 2002 at 10:25:41 AM EST

Personally, I think the FastTrack protocol is better suited to what you want. The protocol was reverse engineered, and though the FastTrack people have put in stumbling blocks for Free Software developers, I think an older version of the protocol could be forked off the main one. This would produce a result similar to what the above story is looking for, but with a protocol better suited to the job.

Even if it won't work with more recent FastTrack clients/servers, you can still get some use out of it.


----
while($story = K5::Story->new()) { $story->vote(-1) if($story->section() == $POLITICS); }


That's what gIFT is (none / 0) (#11)
by Mark Essien on Wed Mar 13, 2002 at 10:29:39 AM EST

The difference is that much more has to be done from scratch: there are no clients, and there are no complete servers.

With Opennap, these things exist - why not use them? No point reinventing the wheel.

M.E

[ Parent ]

Because the Napster protocol sucks (none / 0) (#13)
by hardburn on Wed Mar 13, 2002 at 10:38:16 AM EST

The Napster protocol sucks because it relies on centralized servers to do searching. Although FastTrack relies on "super servers" for searching, these are much more decentralized than Napster's. You could modify the protocol to work otherwise, but you'd end up doing just as much work, if not more, than if you tweaked gIFT to do what you wanted.


----
while($story = K5::Story->new()) { $story->vote(-1) if($story->section() == $POLITICS); }


[ Parent ]
The protocol hardly needs to be touched (none / 0) (#15)
by Mark Essien on Wed Mar 13, 2002 at 10:42:06 AM EST

That's why I say in bold that existing clients do not need to be modified. It is only the server software that will need to be changed somewhat.

I am proposing taking away the need for these centralized servers, so that Opennap will be the equal of Gnutella and FastTrack. The protocol in itself has nothing against the supernode idea.

M.E

[ Parent ]

The protocol doesn't suck (none / 0) (#32)
by am3nhot3p on Wed Mar 13, 2002 at 05:22:30 PM EST

Actually, I'd say that the protocol doesn't suck, per se. It includes a large enough set of features that most of Mark's proposals can be implemented without changing the protocol. What's more, it's fairly easy to add new features by defining new message codes. Significantly, there is already provision in the protocol for switching servers: a client can be instructed to reconnect to a different server by message 821 (just looked it up). As long as a client responds to this correctly, there should be no problem. In other words, unless a server crashes, it can signal that it is going down and route its clients to another server on the ring (which may then redirect them again). If a server crashes, the clients will have to try to reconnect in the usual way. This behaviour can be fine-tuned in future updates to the clients, but it should work acceptably even with current ones.

Server redirection would enable users to be pushed onto the decentralized network via the existing centralized network, which provides a smooth upgrade path. Furthermore, when servers have reached their maximum number of users, instead of refusing connections, they should accept each connection and issue a redirect before closing it, pushing the client onto another server on the ring.
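
In server pseudocode, that "accept, then redirect" behaviour could look like this. Opcode 821 is the one quoted above; the server object and its helper are hypothetical:

    def on_new_connection(server, client):
        """Never refuse outright: redirect to a ring neighbour when full."""
        if len(server.clients) >= server.max_clients:
            host, port = server.pick_ring_neighbour()  # assumed helper
            client.send(821, "%s %d" % (host, port))   # server-redirect message
            client.close()
        else:
            server.clients.append(client)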

As an aside: I can only talk from my own experience, but Gnutella doesn't seem to be scaling well. The number of clients that I can connect to is pitifully small, even if I ramp up the number of nodes so far that it consumes most of my bandwidth.



[ Parent ]
Ring must be fault tolerant. (none / 0) (#35)
by minra on Thu Mar 14, 2002 at 06:36:15 AM EST


Servers crash, so the ring should be self-annealing. Each server only needs to store the addresses of the next one or two hops in the ring; then, when a neighbouring server crashes, the servers at the broken ends of the ring contact each other immediately.

I hope this is obvious.

[ Parent ]
Gnutella - alive and kicking (4.75 / 4) (#9)
by murklamannen on Wed Mar 13, 2002 at 10:26:29 AM EST

Gnutella isn't at all fundamentally flawed or dead.

Actually, the original Gnutella protocol is the model that all other successful decentralised p2p systems are based on.

And Gnutella is changing too. LimeWire already implements Ultrapeers, its version of superpeers, in a very clever way IMHO.
The current BearShare doesn't use it, but BearShare 3.0 (which is in alpha or beta) does.
Many more servents have it in the works.
There is also a finished metadata search protocol, and it will soon be implemented in the major servents.

And who says Gnutella doesn't scale? It did scale when thousands of users started using Morpheus PE (= Gnucleus 1.6.0.0), even though Gnucleus is a very primitive client in terms of protocol. The minority of BearShare, LimeWire and other servents implementing modern features saved the Gnutella network.

There is also a mailing list on which all the major Gnutella developers analyze and discuss the state of the network and future features.

Now, to your proposal: the only difference I see between it and Ultrapeers is that you connect superpeers in a ring. What is the exact benefit of this over a regular mesh, like Gnutella uses?

The ring formation and Gnutella (5.00 / 1) (#14)
by Mark Essien on Wed Mar 13, 2002 at 10:39:04 AM EST

First I considered having everything randomly connected to everything else. But this wastes bandwidth, and each supernode is not exactly sure what its relationship to any other supernode is.

Then I considered a system where one node is connected to two other nodes. It makes the network complex and also wastes bandwidth; I didn't see any advantage.

I then decided that a simple string connection would be most efficient, where every client knows where every other client is and routes search results down the line. But this leads to the problem that if one bad client sits in the center, it could spoil things for all the clients below it.

A solution is to use a ring, so that each client can verify in two directions whether it is connected to properly behaving clients. It has two sources to compare its lists against, for example.

Gnutella is all nice, but Opennap has great advantages also: there is chat, there is resume, etc. Gnutella and Opennap can exist side by side - just because I propose this system for Opennap does not mean I have anything against Gnutella. I'd simply like to see a decentralized model function with Opennap too.

M.E

[ Parent ]

gnutella has these features (5.00 / 1) (#25)
by mpalczew on Wed Mar 13, 2002 at 01:27:16 PM EST

| There is chat, there is resume,

Gnutella has these features too.
-- Death to all Fanatics!
[ Parent ]
Supernodes... (none / 0) (#17)
by kimpton on Wed Mar 13, 2002 at 11:43:32 AM EST

I guess the supernodes would be constantly changing, but wouldn't switching your client to supernode mode greatly increase your chances of being targeted by the media companies? (Not yet, maybe - but in a few years.)

Once a couple of people get into legal trouble, you're going to find it difficult to find people willing to act as supernodes. I presume Gnutella at least can fall back to its old protocol.

Accused of sharing information (3.00 / 1) (#19)
by inerte on Wed Mar 13, 2002 at 11:58:43 AM EST

That's one way I can see for them to go after supernodes. Really, if a supernode doesn't download or upload copyrighted content, it hasn't done anything wrong - unless the sharing of queries, the routing, etc., is counted.

I guess it could be; it's "helping" infringement. You kind of have a precedent: Napster.

But even so, going after supernodes is an insane task. It would require global cooperation to shut them all down.

Unless you are forbidden to code an app/protocol that uses a supernode concept at all, but that's close to impossible - many non-p2p networks use the concept. Restricting the ban to "only apps that help piracy" is hard too: how would that be determined?

I guess, for practical reasons, we don't have to worry about media companies going after supernodes after all...

--
Bodily exercise, when compulsory, does no harm to the body; but knowledge which is acquired under compulsion obtains no hold on the mind.
Plato
[ Parent ]

I don't know... (3.00 / 1) (#20)
by kimpton on Wed Mar 13, 2002 at 12:41:09 PM EST

It may be difficult for the media companies to go after the supernodes, but my point was that if they do go after a few supernode users, this will make other users hesitate before becoming supernodes. Even if they don't have a successful case, I still think it will put others off.

[ Parent ]
And what if every user is a potential supernode? (3.50 / 2) (#21)
by Hopfrog on Wed Mar 13, 2002 at 12:43:48 PM EST

And if the users cannot control whether or not they are supernodes, then what?

Kazaa uses this system, and they haven't gone after the users. It just wouldn't make sense to do so - the user had no part in deciding whether he wanted to abet file sharing or not.

Hop.

[ Parent ]

I use kazaa.... (none / 0) (#23)
by kimpton on Wed Mar 13, 2002 at 12:54:17 PM EST

..and the options do give you a choice to 'not function as a supernode'. This may or may not do anything.

If a system arbitrarily assigned supernodes without user control then, yes, this would make the users more secure - they could claim ignorance. But I think users would be less happy using the system if it worked this way. How badly would being a supernode affect your bandwidth?



[ Parent ]
Kazaa-alike (none / 0) (#24)
by Hopfrog on Wed Mar 13, 2002 at 12:57:19 PM EST

What if, like Kazaa, you are enabled as a supernode by default, but can turn it off? Nobody minds it in Kazaa; no reason they should mind it here.

It's written in the article that 10 KB/sec is uploaded - not too much for high speed connections.

Hop.

[ Parent ]

Opennap, Gnutella, Freenet (none / 0) (#26)
by Kyle on Wed Mar 13, 2002 at 02:42:42 PM EST

For the record, I use Gnutella when I want to find something, but I think Freenet is the way of the future. I took an interest in Mojo Nation for a while, and it may also be the way of the future. I never used Napster or Opennap.

When I was thinking of the Next Filesharing Widget, I thought of basically layering Gnutella over Freenet. They've got message boards working in Freenet and some really really slow chat. It seems to me if you can communicate that well, you can get up to filesharing.

I figured you could write a special client to hook up to your Freenet server and do its own thing. A client that wants something "posts" a search and waits for replies. When it wants something someone has, it posts a request and waits for someone to provide a regular Freenet link. Things that have already been requested hang around in the file store even after someone's client disconnects.

I haven't looked at Frost yet. Maybe it already does all this.

I think all the various projects have good ideas. What I'd really like to have is the simplicity of Gnutella (point it at my share directory--no "publishing" necessary, just search for what I want and ask for it), the distributed load balancing and file persistence of Mojo Nation (but not necessarily with micropayments), and the anonymity (and file persistence) of Freenet.

Unfortunately, I think it may be a while before all that happens. The huge number of P2P developers can't decide on one solution to all push for, so nothing is quite finished and working. Even if they did, no one wants to force all the current users to use something else.

I guess I'm just rambling here. The article set off some thought processes. I appreciate that.

Useless solution, IMHO (none / 0) (#27)
by Trickster on Wed Mar 13, 2002 at 02:54:24 PM EST

Your "solution" does not solve the problem you ventured to solve. One of the features of napster and opennap by extention is user db - before a user can connect to the network he/she needs to authenitcate using a valid username/pass. And that's the stumbling block. In your schema, when one supernode makes another node a supernode does it pass user db to the new supernode? Or will supernodes only take care of searching? How do users then authenticate?

Another issue is that you will still need to ip of at least one supernode to connect to the network. Where would you get it? Web? You say yourself that this is unreliable and that sites can be easily closed.

Also, you mention the issue of rogue users but say that this can be sorted out later. I think this issue has to be solved first for this plan to go anywhere. Otherwise all RIAA will need is a couple of fast machines on the network. As the network increases in size some/all of those machines will get promoted to supernodes and they now can easily screw up the network.



Read the protocol docs (4.00 / 1) (#30)
by Mark Essien on Wed Mar 13, 2002 at 03:57:02 PM EST

Opennap users do not need to authenticate. This has been so for a while now.

Whenever a client connects to the network, it gets a list of supernodes, as well as the address of an index page such as Napigator's. This page can be moved often; it wouldn't matter, as the client gets the new address every time.

When the client disconnects and wants to reconnect, it takes up its list of cached supernodes and tries to connect to them. A few will probably still be running, and from one of those it will get a fresh list and be able to connect to another host.

If all the supernodes in its list are dead, it then uses the index page.

If, by some freak chance, this index page is also dead, the user will have to go to a page like Zeropaid to get the address of an indexing server.

It all runs automatically, but in a few cases the user might have to go looking for IPs himself. Still much better than Gnutella, where the user has to do it every time.
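
The reconnect ladder is easy to sketch; the function names and the index-page/manual-entry stand-ins are illustrative:

    import socket

    def try_connect(addr, timeout=5):
        """True if a TCP connection to (host, port) succeeds."""
        try:
            socket.create_connection(addr, timeout=timeout).close()
            return True
        except OSError:
            return False

    def bootstrap(cached_supernodes, fetch_index_page, ask_user):
        for addr in cached_supernodes:   # 1. hosts cached from the last session
            if try_connect(addr):
                return addr
        for addr in fetch_index_page():  # 2. the Napigator-style index page
            if try_connect(addr):
                return addr
        return ask_user()                # 3. last resort: manual entry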

As I said in a previous post, the ring formation allows supernodes to make sure they are connected to valid supernodes. I have ideas on how the checking and cross-checking will work, but I'd prefer to wait and see how it works out in practice.

M.E

[ Parent ]

The future (none / 0) (#33)
by a3d0a3m on Wed Mar 13, 2002 at 07:05:07 PM EST

In some "post-" corporation run government world, maybe we'll be running around through the cyber slums looking for those 6 byte IPv6 addresses scrawled on brick walls in abandoned subway tunnels and hidden in classified ads. Anyone remember the classified ads with secret BBS numbers in them?

adam

[ Parent ]
Technical solution to a legal problem (none / 0) (#34)
by ghjm on Wed Mar 13, 2002 at 08:16:50 PM EST

This won't work. If you make it impossible to sue the servers, they will just sue the developers. Or the users. The legal system is powerful; if the RIAA/MPAA win the legal battle, they can literally command armies of policemen to hunt file-sharers down at gunpoint and throw them in jail. The elegance of your protocol design won't matter.

Instead, all the effort that's going into these software projects should be put into opposing the RIAA/MPAA in the battle that actually matters. A few motivated nutcase activists can create an unbelievable amount of noise and publicity. Where in Washington is our voice being heard? That's why we're losing every battle - we're not even fighting them.

-Graham

What about URLs? (none / 0) (#36)
by Secret Coward on Thu Mar 14, 2002 at 07:17:08 AM EST

Back in the days when Napster was up and running, Linux users decided that they wanted to be part of the swap revolution.

This comes off looking like you want to make a p2p system for the sole purpose of pirating copyrighted music. You would do yourself credit by redirecting your focus to more ethical purposes.

Instead of building a system to share Britney Spears' songs, you should build a system to distribute the load on slashdotted web pages, or to distribute home videos of protests and conferences, or to distribute the latest Debian CDs or kernel source. If you really just want free music, you could develop the system with a K5'ish catalog of independent artists.

While you may be able to locate pirated music without trouble, this system has no standardized way of addressing unpopular content. Peer-to-peer is far more important than a battle over corrupt record labels. What the more legitimate uses seem to lack is a URL system.

giFT & OpenFT (1.00 / 1) (#37)
by Alexander Poquet on Fri Mar 15, 2002 at 03:13:13 AM EST

I think the author makes some worthwhile comments, but for the sake of journalistic integrity I think some inaccuracies ought to be pointed out.

First and foremost, giFT is not Linux-only. I test it (and it runs well) on Solaris, and we have many *BSD users. It isn't UNIX-only either - there is a native win32 port, and it works.

giFT, which I firmly believe is next-generation file sharing technology, is not in itself a peer to peer system. It is actually middleware -- it is designed to provide a uniform interface to a variety of protocols, so that user interface designers may worry less about the protocol and more about the interface. It uses a very simple XML-like protocol for daemon-ui communication, making the development of user-interfaces relatively painless. Theoretically -- and there is nothing preventing this even now -- protocol plugins for Gnutella, OpenNAP, Direct Connect, etc could be written to interface with giFT. The plugin structure of giFT makes this simple.

Then, and without modification, all giFT user interfaces will automatically "speak" those protocols. So in a sense, giFT is simply a translator.

giFT's first protocol, and the only one supported at this time - though the developers encourage anyone interested to write plugins for other protocols - is OpenFT, a free implementation of a FastTrack-style network. Unlike FastTrack, however, it is completely decentralized, and three-tier (instead of two-tier). In a very real sense, it is structurally the "distributed Opennap" you speak of, with a lot of the unnecessary fluff removed from that protocol. The three tiers:

  • The Index nodes, which function much as Napigator does for OpenNAP;
  • The Search nodes, which function much as OpenNAP servers do;
  • and the user nodes, which represent everyone else.
OpenFT is not finished yet, and I won't claim it is; but as someone who compiles CVS daily, I can say that it is hardly moving slowly. It's still in pre-release, so only people willing to track CVS can really use it - but that in no way diminishes its usability.

So when will it be released? There's still a lot to be done, but it's a very strong project that needs strong coders. Unfortunately, these sorts of projects - which are abusable by the warez & porn crowd - tend to attract a lot of people who don't know what they're doing and are unable to help.

By its nature, K5 is a hangout for people more tech-savvy than average, and I'd wager that many of you who are dissatisfied with the current state of portable, distributed p2p technology could help. Don't reinvent the wheel, at least not without taking a long, hard look at giFT. It could be release-ready in months or less if we get some talented help.

For those of you still dubious of giFT's validity, take a look at http://www.giftproject.org, check out the source code, and drop by #giFT on irc.openprojects.net for support.

Even if you can't code, we need testers and content. Right now we have a terabyte or so, and it's growing every day.



