Could something like "Ants" spell the end of peer-to-peer networks as they are currently implemented? Perhaps. With migrating code, all you have to run is a (sealed) environment for arbitrary "ants" to run in.
Gnutella, Freenet, etc, which work by having a fixed piece of code on each machine plus some local pool of data, may turn out to be simply too inflexible to survive in the long run. Each does one thing, and one thing only. Want to do something else? Then you have to bring the data over and do the work on your own machine.
This, however, defeats one of the great benefits of peer-to-peer -- the sheer volume of computing power at everyone's disposal. It goes largely wasted.
This approach of migrating the code is not new. It was one of the concepts toyed with when "agents" were the in thing. You didn't go to the server; the agent did. It then reported back with the results, leaving you free to get on with something useful in the meantime.
It was also something that Java brought into the realm of possibility. It's much easier to migrate code on a heterogeneous network if it can run on any machine WITHOUT recompiling.
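To make that concrete, here's a rough sketch of what an "ant" might look like in Java. Every name here is invented for illustration -- there's no standard Ant interface -- but the shape is the point: the ant is just a serializable object that a remote node deserializes and runs against its local data pool, shipping only the result back.

    import java.io.Serializable;
    import java.util.List;

    // Sketch only: an "ant" is code that travels to the data,
    // instead of the data travelling to the code.
    interface Ant extends Serializable {
        // Runs against the node's local data pool; only the (small)
        // result makes the return trip.
        Serializable runOn(List<byte[]> localData);
    }

    // Example ant: count occurrences of a byte in the remote pool,
    // rather than downloading the whole pool to count it at home.
    class CountingAnt implements Ant {
        private final byte needle;

        CountingAnt(byte needle) { this.needle = needle; }

        public Serializable runOn(List<byte[]> localData) {
            long hits = 0;
            for (byte[] item : localData) {
                for (byte b : item) {
                    if (b == needle) hits++;
                }
            }
            return hits;  // a few bytes go home, not the pool they summarize
        }
    }

The one real wrinkle Java adds: the receiving node needs the ant's bytecode as well as its state, typically pulled in through a custom ClassLoader -- which is exactly what "runs on any machine without recompiling" buys you.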
In this day and age of viruses (virii!), malicious computer users, etc, migrating code of this kind seems like an insane delusion. However, it might not be. It's easy to seal off a section of the computer such that nothing can cross the boundary between the two parts. SELinux achieves this quite satisfactorily, and it's not even a tenth of the way complete!
Once you can seal off a section of the computer, viruses are contained. They can't spread into the rest of the system. If each process runs in its own self-contained section like this, then hostile code can't even infect another migrating process. It can infect itself, but that's about it.
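To picture the same seal within a single JVM (SELinux does the real work at the OS level), here's a sketch using Java's classic SecurityManager. It's deprecated and disabled by default in modern JDKs, so take this purely as an illustration of the boundary, not a recipe: the ant gets CPU and its own memory, but any touch of the filesystem or network hits the wall.

    import java.io.FileNotFoundException;
    import java.io.FilePermission;
    import java.io.FileReader;
    import java.net.SocketPermission;
    import java.security.Permission;

    public class AntSandbox {
        // The boundary: no filesystem, no network. Everything else
        // (CPU time, the ant's own memory) is allowed through.
        static class DenyIo extends SecurityManager {
            @Override
            public void checkPermission(Permission perm) {
                if (perm instanceof FilePermission
                        || perm instanceof SocketPermission) {
                    throw new SecurityException("ant denied: " + perm);
                }
            }
        }

        public static void main(String[] args) {
            System.setSecurityManager(new DenyIo());
            try {
                new FileReader("/etc/passwd");  // an ant trying to escape
                System.out.println("sandbox failed to engage");
            } catch (SecurityException e) {
                System.out.println("blocked: " + e.getMessage());
            } catch (FileNotFoundException e) {
                System.out.println("blocked by the OS instead");
            }
        }
    }

An in-process check like this is a weaker boundary than an OS-enforced one, which is why something like SELinux matters: the seal should hold even if the environment running the ants has a bug.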
There's another side to this, too: content control. Freenet, Gnutella, etc, rely on sheer volume to overwhelm any attempt by organizations to subvert them. But fear and intimidation work just as well on large groups as on small ones. A system where ants migrate is a system where users cannot know where the data is. ALL that's visible to the user is the result, and which site(s) they sent their ants to at the start. The upshot is that external content control becomes much, MUCH harder. At that point, nobody knows where the data is. Nobody at all.
(Arguably, the user who makes the data available does, but even that's not necessarily true. Ants may well have made copies elsewhere, transferring the data to locations that are optimal for those wanting to access it. At that point, even the person who originally posted the data can't be sure where it is.)
There is one last advantage to this system: it turns the network of nodes into a shared supercomputer. The main reason Beowulf-type clusters are slow over WANs is that they ferry data around, and data is bulky -- there's generally a lot of it. Processes, by contrast, are far fewer and usually very small. WAN-based clustering, then, would be much more efficient with an ants-style approach.
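Some back-of-envelope numbers (entirely invented, just to show the shape of the argument): compare shipping a gigabyte of remote data home over a WAN against shipping a few-kilobyte ant out to the node that already holds it.

    // All figures are assumptions for illustration, not measurements.
    public class ShipCodeNotData {
        public static void main(String[] args) {
            double wanBytesPerSec = 1_000_000;      // ~1 MB/s WAN link (assumed)
            double dataBytes      = 1_000_000_000;  // 1 GB pool on the remote node
            double antBytes       = 10_000;         // ~10 KB of serialized ant

            System.out.printf("ferry the data home: %.0f s%n",
                              dataBytes / wanBytesPerSec);
            System.out.printf("send the ant out:    %.3f s%n",
                              antBytes / wanBytesPerSec);
        }
    }

A thousand seconds versus a hundredth of one -- five orders of magnitude, before you even count that the result coming home is smaller still.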