... I believe you need some kind of scatter/gather network in order to keep the actual transfers anonymous.
The way I described this on /. goes like this:
The server providing the file does not send directly to the client. Instead, the file is broken into 'x' chunks, each encrypted and md5'd (or something similar), and sent to 'x' random servers/clients. Each of those nodes validates its chunk against the hash, breaks it up further, and sends the pieces off again. This is repeated about half a dozen times before the pieces reach the client, which validates each one and reassembles the file. Missing fragments can be requested again (with a less scattered effect). A rough sketch of the split-validate-forward step is below.
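To make the pipeline concrete, here's a single-process simulation in Python. Everything named here is invented for illustration: the split factor CHUNKS_PER_SPLIT, the hop count MAX_HOPS, the NODES list, and the helper functions. Encryption is left out entirely so only the split/hash/validate/forward logic shows, and "sending" to a node is just a recursive call.

```python
import hashlib
import random

CHUNKS_PER_SPLIT = 4   # the 'x' in the description; 4 is an invented value
MAX_HOPS = 6           # "about half a dozen times"
NODES = [f"node{i}" for i in range(32)]   # stand-ins for the scatter pool

def make_packets(data: bytes, base_offset: int, hops: int) -> list[dict]:
    """Split data into chunks, tagging each with its MD5 so relays can validate it."""
    size = max(1, -(-len(data) // CHUNKS_PER_SPLIT))   # ceiling division
    return [{
        "offset": base_offset + i,                # position within the whole file
        "md5": hashlib.md5(data[i:i + size]).hexdigest(),
        "payload": data[i:i + size],
        "hops_left": hops,
        "via": random.choice(NODES),              # the random node this hop targets
    } for i in range(0, len(data), size)]

def relay(packet: dict, deliver) -> None:
    """One scatter hop: validate the chunk, then split and forward (or hand it over).

    Forwarding here is a recursive call; in the real design each sub-packet goes
    over the wire to its "via" node, which can't tell whether its own target is
    another relay or the final client.
    """
    if hashlib.md5(packet["payload"]).hexdigest() != packet["md5"]:
        return   # corrupt in transit; the client will re-request this range
    if packet["hops_left"] == 0:
        deliver(packet)
        return
    for sub in make_packets(packet["payload"], packet["offset"], packet["hops_left"] - 1):
        relay(sub, deliver)

# Client side: collect fragments, then reassemble them in offset order.
received = []
original = random.randbytes(1 << 14)
for p in make_packets(original, 0, MAX_HOPS):
    relay(p, received.append)
assert b"".join(x["payload"] for x in sorted(received, key=lambda x: x["offset"])) == original
```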
Compromised/poisoned scatter nodes would be eliminated from the pool of available nodes by some combination of heuristics and a blacklist: if x% of the packets from node y are corrupt or never arrive where they should, drop it from the pool of scatter nodes. Since a given scatter node has no idea whether its target is the final destination, there isn't really any way to track who asked for the file, unless you capture a large percentage of the available scatter nodes and log their traffic, hoping that enough of them are sending to the same client, which you could then identify as the target node. A sketch of the eviction heuristic follows.
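A minimal sketch of that blacklisting, with a 5% failure threshold and a minimum sample count so a node isn't judged on a handful of packets. Both numbers are invented, as are the class and field names.

```python
from collections import defaultdict

CORRUPT_THRESHOLD = 0.05   # the 'x%' from above; 5% is an invented number
MIN_SAMPLES = 100          # don't judge a node on a handful of packets (also invented)

class ScatterPool:
    """Tracks per-node delivery stats and evicts nodes whose failure rate is too high."""

    def __init__(self, nodes):
        self.nodes = set(nodes)
        self.sent = defaultdict(int)     # packets routed through each node
        self.failed = defaultdict(int)   # packets that arrived corrupt, or never arrived

    def record(self, node: str, ok: bool) -> None:
        self.sent[node] += 1
        if not ok:
            self.failed[node] += 1
        # The heuristic: once x% of a node's packets go bad, blacklist it.
        if (self.sent[node] >= MIN_SAMPLES
                and self.failed[node] / self.sent[node] > CORRUPT_THRESHOLD):
            self.nodes.discard(node)
```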
Problems with this? Node processor use is higher, since each fragment needs to carry some kind of fingerprint of the original to check against, while at the same time being impossible to alter without the change being noticeable. Something along the lines of a fingerprint that can be split up and still be valid, but can't be tampered with. Bandwidth usage among the scatter nodes is higher, and total transferred bytes are higher as well: with half a dozen hops, every byte of the file crosses the network roughly six times, so aggregate traffic is on the order of 6x the file size. One construction that seems to fit the splittable-fingerprint requirement is sketched below.
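I'm not claiming this is the answer, but one standard construction that seems to match the "splittable but unalterable fingerprint" idea is a hash tree (Merkle tree): the publisher only has to get one root hash to the client intact, and every fragment travels with the handful of sibling hashes needed to recompute that root, so it can be validated at every hop and can't be altered without the root changing. A sketch (all function names are mine):

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Hash pairs upward until a single root remains."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])               # duplicate the odd node out
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves: list[bytes], index: int) -> list[tuple[bytes, bool]]:
    """The sibling hashes (and whether each sits to the right) from leaf to root."""
    level = [h(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sibling = index ^ 1
        proof.append((level[sibling], sibling > index))
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(fragment: bytes, proof: list[tuple[bytes, bool]], root: bytes) -> bool:
    """Recompute the root from one fragment; any alteration changes the result."""
    node = h(fragment)
    for sibling, sibling_is_right in proof:
        node = h(node + sibling) if sibling_is_right else h(sibling + node)
    return node == root

fragments = [b"fragment-%d" % i for i in range(8)]
root = merkle_root(fragments)            # the one value the client needs up front
assert verify(fragments[3], merkle_proof(fragments, 3), root)
```

Each fragment then carries only about log2(n) extra hashes, and relays can keep splitting as long as they split along leaf boundaries, since any run of leaves still verifies against the same root. The per-hop CPU cost is one hash per fragment, which is exactly the "higher processor use" problem above, but at least it's bounded.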
I don't have the answers to these problems; all I have is the idea. :-) Obviously nodes should be able to opt out of the scatter network if their available bandwidth is too low or they need all they have. I would think that if you're running a server, though, you should be willing to scatter for at least a couple of people.