Kuro5hin.org: technology and culture, from the trenches

Warhol Worm proposed: 15 minutes to total infection!

By molo in Internet
Sat Aug 11, 2001 at 04:31:28 PM EST
Tags: Security (all tags)

According to an article in the latest issue of the RISKS digest, Nicholas Weaver of UC Berkeley has written a description of a new type of worm, the Warhol Worm. He believes that using a divide-and-conquer method, all vulnerable machines across the entire IPv4 address space could be compromised in only 15 minutes!

'In the future, everybody will have 15 minutes of fame' -Andy Warhol

The real danger in this type of worm is that it may infect vulnerable hosts before sysadmins can apply any security patches. Imagine having this worm already written, just waiting for a buffer overflow to inject it with. Including the necessary scans and code changes, I could imagine having a working worm ready to go within six hours of the first public announcement of a buffer overflow.

How many sysadmins will have their servers patched within six hours of receiving a notice from bugtraq? I would imagine very few, especially since many vendors issue binary patches that are not guaranteed to affect only the vulnerability. When I was working in an MS-only shop, every patch had to go through a testing phase before being implemented. In this case, waiting two days could leave your systems vulnerable for far too long.

Please secure your sites and update your software.




How do you receive most of your security updates?
o apt-get update && apt-get upgrade 40%
o up2date / autorpm 1%
o Red Carpet / Ximian Updater 2%
o windowsupdate.microsoft.com 19%
o mailing list(s) and binary retrieval 3%
o mailing list(s) and recompile 18%
o other 6%
o none (wait for next major revision) 8%

Votes: 87
Results | Other Polls

Related Links
o article
o RISKS digest
o Warhol Worm
o Also by molo

Warhol Worm proposed: 15 minutes to total infection! | 22 comments (20 topical, 2 editorial, 0 hidden)
This deserves some hard thinking fast... (4.00 / 8) (#1)
by Sawzall on Sat Aug 11, 2001 at 11:48:08 AM EST

Tell me why it would not work. Please. Because I have so little faith in human nature, someone will try to implement it once it is proposed. Secondly, as I sit trying to digest the pounds of NT/2000 documentation, I see holes big enough to drive trucks through, much less a 100k viral load. I find little built in that will allow me to protect my users - and I doubt that I will be able to get the Corporate Bosses to change. More importantly, I don't think we can get the Corporate world to change fast enough to keep this DDoS from happening.

"I'll Put The Frighteners On You" (4.60 / 5) (#2)
by tomte on Sat Aug 11, 2001 at 11:58:42 AM EST

It was said in a k5 comment in another thread whose name I forget, and it is the only defensive action mentioned in the article:
Fight Monoculture
And there is one more threat for, and therefore through, M$ users: they can't patch it themselves; they have to wait until a binary patch ships, which then has to be tested, because you can't risk taking your server down yourself while trying to keep it from being taken down by a viral infection.
Imagine if the CodeRed guys had been that clever...

Funny. There's a brightness dial on the monitor, but the users don't get any smarter.
Coding Monoculture is the problem! (3.66 / 3) (#4)
by neuneu2K on Sat Aug 11, 2001 at 01:19:04 PM EST

The most dangerous vulnerabilities are buffer overflows, and the best defense is using multiple operating systems and, above all, multiple processor archs (I have yet to see a binary worm attack anything other than x86, while there are lots of Power, Sparc and Alpha servers).

We must stop accepting the possibility of buffer overflows; library functions that invite buffer overflows in programs should be ostracized (or patched)...

The ways of finding and exploiting buffer overflows are now very well documented. While an individual will still restrict himself to a single architecture and version of the software, it is not out of reach for a competent dev team to build an 'intelligent' worm capable of attacking multiple targets, remotely updateable with the latest patches, taking bytecoded signed orders from untraceable (Freenet-like) sources.

This may look like paranoia... but what makes this type of worm any more difficult to code than a medium-sized software project (5 man-years)?

Ad hoc patching will stop script kiddies...
Only good software practice and the dumping of weak technologies will stop professionals!

Please disprove me, I would love to be wrong !

- "And machine code, which lies beneath systems ? Ah, that is to do with the Old Testament, and is talmudic and cabalistic..." - Umberto Eco
[ Parent ]
I'm sorry, you're right (3.00 / 3) (#5)
by tomte on Sat Aug 11, 2001 at 01:40:18 PM EST

although the critical mass at the moment is monoculture in vulnerable software products, not in product development/coding;
nevertheless, you're right.
Funny. There's a brightness dial on the monitor, but the users don't get any smarter.
[ Parent ]
Actually, I think he's wrong (2.75 / 4) (#8)
by roiem on Sat Aug 11, 2001 at 03:06:52 PM EST

I guess he's assuming that buffer overflows will occur on all, or at least most, of the platforms out there. While that's quite a reasonable assumption, it's still possible that only some servers will have an easy-to-find overflow. Add to that the fact that by the time a five-man-year project is finished, at least part (and we hope more than a small part) of the targeted platforms will already be obsolete, and it's a bit harder than you might think.
90% of all projects out there are basically glorified interfaces to relational databases.
[ Parent ]
Yes... if there is a targeted platform! (3.66 / 3) (#9)
by neuneu2K on Sat Aug 11, 2001 at 04:09:45 PM EST

But if the project is a "general buffer overflow exploit framework", it will work as long as there are buffer overflows...

What will be target dependent can be a simple "boot loader" and "kernel"...

Then the original port to a new architecture/OS can take a few months (maybe coding, for example, a Forth kernel for the machine with an abstraction layer to the OS), and then exploiting a buffer overflow is just a matter of creating a "boot-loader", which is quite trivial (less than 1 or 2 days for an experienced assembly hacker).

I know it is not a small project... but bigger projects have been realised by a few determined people for fuzzy reasons (freedom of software!)

- "And machine code, which lies beneath systems ? Ah, that is to do with the Old Testament, and is talmudic and cabalistic..." - Umberto Eco
[ Parent ]
I don't think it's easy (3.33 / 3) (#10)
by tomte on Sat Aug 11, 2001 at 04:26:02 PM EST

You have to have a lot of knowledge and coding wit to accomplish such a task as writing a general overflow framework... but by all means, it is possible.
Departing from the idea of mere buffer overflows (which are, as far as I know, the most widespread security holes), the project may generalize from there, generating sleeping viruses that can be remotely instructed what to do: searching for and infecting completely unknown architectures via completely unknown security holes -- yes, I tend to be paranoid :-)
Funny. There's a brightness dial on the monitor, but the users don't get any smarter.
[ Parent ]
buffer overflows, bugs, ranting/rambling (4.00 / 2) (#19)
by jkuroda on Mon Aug 13, 2001 at 02:11:36 AM EST

If they (buffer overflows, abuse of executable user stacks, and, in general, bugs involving passing around and acting on unvalidated data) were not so prevalent and not so easily found and abused (for many reasons, including bad decisions on the part of vendors and system admins alike), I do not think the author would have felt such a need to write this article. But they are prevalent and easily abused, with so much potential for damage.

Many common servers out there do not have a five man-year development cycle that just stops. Common HTTP servers and SMTP servers, for instance, have an ongoing development cycle.

New functionality is added in the next version. The "time-to-market" pressure has increased dramatically. Typical languages used are not bounds-checked (and anyway, that only fixes one class of bugs, the one mentioned above, not logic bugs, for instance).

I don't see a day when x86/Windows-du-jour is considered an obsolete platform (at least counting by how common it is). Nor x86/(*BSD|LinuxDistDuJour) or Sparc/Solaris nor PA-RISC/HPUX.

What I do see _today_, and perhaps this is what the author saw, is a very common platform that will not be obsolete in the near-future, x86/Windows Server running IIS, with a bug that is easily exploited and well known, and a general attitude that allows such bugs to continue to occur, be found, and be abused.

<the rant>
A discussion I had earlier with some friends about "stupid drivers" seems relevant. What repercussions are there nowadays for drivers who, for lack of some basic care, do really stupid things on the road? Someone yells at them, they might get a ticket, and in rare cases they might get into a serious accident. But let's face it, we still get bad drivers out there. We came to the conclusion that as nice as positive reinforcement is, sometimes a little negative reinforcement (social stigma, or a quick and reliable economic penalty) would be effective.

I run open source servers on non-Windows platforms, stay up to date on security, and follow all the usual "best practices", and my reward is systems that don't get compromised nearly so often. The penalty for a poorly run system doesn't seem nearly high enough, though, given the number of places still running, say, Windows/IIS as a server. Code Red II showed that a (IMO) ridiculously high number of hosts were compromised well after Code Red I. And I doubt everyone is going to dump Windows/IIS as a platform even after Code Red has come and gone.

Ultimately, I don't think people will stop using easily compromised systems just because they somehow see the benefits of a less easily compromised system. It (the ostracizing of easily compromised software/systems, moving to better systems, adoption of better practices, etc.) will occur only after something comes along that makes it really, startlingly, dramatically costly to operate an easily compromised system. It is a far too common mindset that says not to worry about something ahead of time, to avoid the question of "what if someone took advantage of this really common situation", to ignore the little problem until it becomes a big problem. Full, comprehensive and ongoing code audits, for instance, are expensive and not nearly common enough given how much people rely on working computing systems. I thank the organizations that do this and hope the rest catch up.

But until something really nasty comes along (and it's been possible for a long time, let's face it), no one is going to care about problems like these in what I would consider a responsible fashion. There will be vendors who put out servers in dangerous default configurations, won't acknowledge a problem until forced to by public knowledge, and in general do 'stupid things'. Users will deploy this same software and continue to use it in the dangerous default configuration because the cost of doing so hasn't been judged high enough.

This is the same mentality that, when asked to put a fire extinguisher in an electronics lab, causes some building managers I've met to say "But ... but we've never had a fire in there before ...". We're glad we've been so lucky, but now we'd rather get some better odds. Averting the disaster is better than being able to say "I told you so."

But again, not everyone thinks this way. The reality is that bugs in software do happen -- the mail I get from BUGTRAQ does not seem to be slowing down. Bugs will continue to happen for a long time. Things like Code Red or a "Warhol worm" will continue to be a possibility. And, until something comes along that turns exploits from being the relative annoyances that they are into something truly damaging, I don't think the bugs will go away. I definitely don't think bugs in a Microsoft(tm) product will go away anytime soon.

[ Parent ]
Conclusion: Bad Math? (4.28 / 7) (#6)
by Signal 11 on Sat Aug 11, 2001 at 02:55:54 PM EST

First, rather than blindly scanning a network, why not use traceroute and/or DNS to build your host list? Many sites still allow zone AXFRs, which means you have a greater probability of hitting a 'live' host. In addition, some simple brains (à la nmap) to detect when a subnet is behind a firewall would go a long way toward speeding up scanning - albeit at the risk of missing some hosts. Why throw wet noodles at a wall all afternoon? Also, traceroute is a brilliant way to scan upstream hosts, which can then scan their local subnets, then hit the border router and go from there. Combining this with a cached list of ARIN inquiries to find other networks that might be local to it (but not in the host's own routing tables) would also speed up scanning. Lastly, has anyone considered coding a virus to sniff out remote router access from the administrator, grab the login/password, and slip in to grab the routing table (for propagation purposes)? It's an innovative approach, but it would also bloat the virus's code size quite a bit.

Also, why must a virus be of a single strain? Why not create a modular virus that can be uploaded in chunks? Have one module that does scanning, another sniffing, another to infect variants of the server in question (or other servers), etc. Thus, rather than dealing with 1 virus, a virus with 5 modules could have 120 possible strains, and a virus with 10 modules a whopping 3,628,800. This would severely delay any A/V 'auto-removal' of the worm.
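The strain counts quoted work out to factorials -- i.e. one strain per ordering of the interchangeable modules. A two-line Python check of the arithmetic:

```python
from math import factorial

# One strain per ordering of n interchangeable modules -> n! strains,
# matching the 120 and 3,628,800 figures above.
for n in (5, 10):
    print(f"{n} modules -> {factorial(n):,} possible strains")
```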

But, I am drifting off topic -

Back on the beat. As anyone knows, with geometric progressions a small difference in the initial numbers can lead to huge differences within just a few iterations. So we're going to say that 12,500 machines are initially infected, not 12,000. This, as the author put it, is where "the second stage begins." 12,500 hosts can scan 100 hosts per second, which means 1,250,000 hosts are being scanned each second. As I noted earlier, only about 1 in 4,295 hosts will result in a 'hit'[1]. So roughly 290 new hosts is what we'd get within the first second of the virus going active. Assuming this continues as a geometric progression, after 60 seconds we will have 46,658 hosts infected. After only 292 seconds (just under 5 minutes), we will have reached our 1 million host mark, thus ending the attack. The formula I used: "= ( A1 * 100 / 4295 ) + A1", stuffed into an Excel spreadsheet, mostly because I didn't have time to dig out a real scientific calculator. :^) Had we used the author's numbers (12k), however, it would have taken 2 more seconds.

I'm going to assume a much more modest number of hosts to start the infection - 10. At the same time that our earlier calculations had about 1 million infected hosts, this one now has only about 10,000. How long does it take to reach 1 million hosts? 502 seconds, or about 8.5 minutes. If only 1 host starts out infected? It takes almost 602 seconds, or about 10 minutes. In short, the initial number of infected hosts is totally irrelevant.
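The spreadsheet recurrence above is easy to reproduce. A small Python sketch under the same back-of-the-envelope assumptions (a 1-in-4,295 hit rate, 100 scans per second per infected host, instantaneous infection; exact second counts depend on rounding):

```python
# Minimal sketch of the geometric-growth recurrence discussed above.
# Assumptions (back-of-the-envelope, from the comment): 1-in-4295 hit rate,
# 100 scans/sec per infected host, infection is instantaneous.

def seconds_to_million(initial_hosts, scans_per_sec=100, hit_divisor=4295,
                       target=1_000_000):
    """Iterate A(t+1) = A(t) + A(t) * scans_per_sec / hit_divisor."""
    infected = float(initial_hosts)
    seconds = 0
    while infected < target:
        infected += infected * scans_per_sec / hit_divisor
        seconds += 1
    return seconds

if __name__ == "__main__":
    for start in (12_500, 10, 1):
        print(f"{start:>6} initial hosts -> {seconds_to_million(start)} s to 1M")
```

Whatever the exact second counts, the shape is the point: starting from a single host instead of 12,500 adds only a few hundred simulated seconds, so the initial hitlist size barely matters to the final timescale.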

Now, the math is fairly impressive, no? This is the foundation of the paper, and sadly I had to recreate the math he used to arrive at these conclusions, as he did not include it. But, as many have said, math is that wonderful kind of logic that allows you to arrive at the wrong conclusion with confidence...

There are 4 billion internet addresses

Yes, but thanks to the way ARIN has allocated them, only a fraction of those could have any machines on them, so this assumption is totally unrealistic. But we'll assume it is a simple virus that doesn't know this. Either way, there are 4,294,967,296 possible IP addresses, which is a difference of about 7% - enough to throw off the calculations just a little.

Assume that the worm is under 100k in size and is multithreaded, allowing it to probe and attack many machines simultaneously. Further assume that the targeted servers have good network connections, allowing a running copy of the worm to scan 100 machines per second, probe and infect 10 machines per second, and that it takes 1 second to actually transfer the worm to a new target.

It could not possibly take an average of 1 second to transfer a 100k worm to another host if the infection is over a TCP/IP connection. Assuming 60ms of latency between hosts, it would take 240ms just to set up the connection. Most (sane) operating systems use a 'black hole' method of MTU discovery - meaning the initial packets will be small. Even assuming a huge receive window of 65K, it is still going to take more than 1 second to complete - probably 2-3 seconds, if you factor in packet loss over the aggregate connections plus the assumption that you'll be using OS calls, not raw sockets, to chat with remote hosts. The math required to determine the impact of this lag, however, is beyond me. Perhaps someone better at calculus than I am is reading this? I suspect, though, it would at least double the infection time.
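A rough slow-start model puts a number on the transfer-time question. This Python sketch uses hypothetical parameters (60 ms RTT, 1460-byte segments, the window doubling each RTT from 1 segment, a 44-segment receive-window cap, no loss, one RTT charged for connection setup); under those rosy assumptions it comes in well under a second, and it is loss, retransmission timeouts and smaller windows that would push it toward the 2-3 second figure:

```python
# Back-of-the-envelope TCP slow-start model (hypothetical parameters:
# no loss, 1460-byte segments, congestion window doubling each RTT
# from 1 segment, capped by a ~64K receive window, 1 RTT for setup).

def transfer_time(payload_bytes, rtt_s=0.060, mss=1460, max_window_segs=44):
    segments_left = -(-payload_bytes // mss)   # ceiling division
    cwnd = 1                                   # segments sent in next RTT
    rtts = 1                                   # connection handshake
    while segments_left > 0:
        segments_left -= min(cwnd, max_window_segs)
        cwnd = min(cwnd * 2, max_window_segs)
        rtts += 1
    return rtts * rtt_s

print(f"100 KB at 60 ms RTT, no loss: ~{transfer_time(100 * 1024):.2f} s")
```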

Now, the '1 million hosts' is conservative. Check out the netcraft survey. As you can see, there were about 3.4 million IIS hosts out there.

In short, the math in this paper needs to be redone. And the Code Red worm did move upward in a geometric progression... yet shouldn't we all have been packeted into oblivion by now? There's a good real-world explanation for why we weren't, which unfortunately the math gives no hint of.

[1] 1 million vulnerable hosts on the net, divided into 4.2 billion addresses: you get about 1 in 4,295.

Society needs therapy. It's having
trouble accepting itself.

Signal 11's comments (4.33 / 3) (#15)
by jkuroda on Sat Aug 11, 2001 at 07:05:01 PM EST

I think you're missing the point here.

1) While the author is writing a simulator to demonstrate his paper, this is very much "back of the envelope" work. If you want to pick nits rather than recognize the general conclusion of the article, though, go ahead.

2) The article is not out to say "this is the fastest, most optimal way that will ever exist". What it does do, given some reasonable working assumptions, is come up with a reasonable bound on how quickly something like this could spread given a simple mechanism and some prep work. The term "order of magnitude" comes to mind. You gotta admit, the 15-minute figure does lend itself to the cool moniker of "Warhol Worm", too.

[ Parent ]
the point? (4.00 / 2) (#17)
by Signal 11 on Sun Aug 12, 2001 at 10:08:55 AM EST

I think you're missing the point here.

The post is fear mongering. If the media got ahold of that, we'd have a big problem - this isn't even on firm theoretical foundations, yet it has the potential to 'crash the internet'. Gimme a break... extraordinary claims demand extraordinary proof, and thus far all I see is some flawed assumptions and interesting (but wrong) math.

Society needs therapy. It's having
trouble accepting itself.
[ Parent ]

Signal 11's comments (3.00 / 1) (#18)
by jkuroda on Sun Aug 12, 2001 at 09:10:29 PM EST

Where has the author claimed this will "crash the internet"? And which particular assumptions are so horribly flawed as to make the author's conclusions so far off that you'd label them "fear mongering"? I'll grant the article could be better described as a "work in progress", but that's about it. How about this: let's wait for the author to make the simulator available, and perhaps there will be dials and knobs to tweak.

As a side note, there are far easier ways to "crash the internet" or do something similarly (melo)dramatic. (Your term, though, not the author's and certainly not mine.)

[ Parent ]
Pick at it... (none / 0) (#20)
by topham on Mon Aug 13, 2001 at 05:54:29 PM EST

While picking at it, don't make mistakes like this one:

There are 4 billion internet addresses Yes, but thanks to the way ARIN has allocated them, only a fraction of those would have any hope of having machines on them.

Actually, you are only partially correct: there are fewer than 4 billion valid addresses. There are not, and never have been, 4 billion usable IP addresses.

On the other hand, with a little knowledge of how they are broken up, it would not be difficult to calculate the potentially valid addresses and use those.

I can transfer 100k in less than a second on a 100baseT LAN. And yes, that IS valid in this case, since some of the LANs connected to the Internet are 100baseT; as such they are included, and they will skew the average transfer rate up slightly.

Also, transfers within a cable-modem network can be quite fast: I could establish a connection and transfer 100k in less than 3 seconds between my cable modem and another user in the same range.

Some of the estimates are off, but even if you increase them (double or even triple them), the overall estimate is still not going to be that far off.

A large amount of the work is done simultaneously all over the Internet, so a small error in any one estimate doesn't actually blow the total time up much.

If it's 4 hours instead of 15 minutes, so what? The last one took 12+ hours to become a real problem; the next will be much faster. And if we are truly unlucky, the next will not use a documented exploit.

[ Parent ]

Inefficient (4.50 / 2) (#7)
by aralin on Sat Aug 11, 2001 at 02:59:18 PM EST

The algorithm proposed has several flaws, the largest of which is that in the second phase the sequential search will not work, simply because both the parent and child machines will go on from the same address and double their work. So basically the whole branch will double-check the same range.

Also, the numbers about possible infections are largely pulled out of thin air. They are overestimated, and there is no way you could work that fast. The sequence of numbers of infected hosts looks even funnier.

And lastly, collecting a list of 50,000 possible machines for infection from a single source would leave such a trace in the logs that you would have the FBI knocking on your door well before you reached 10k, even if you lived in Nairobi or Azerbaijan. Every worm author with any sense roots a single unix box, then roots another box from there, then another from that one, then deploys the worm on a few machines, and then goes back removing all the system logs and leaving the machines in an unusable state, writing E8 all over the hard drives again and again... The scanning/deploying phase should never take over two hours, and should be automated and prepared ahead of time. Otherwise you risk too much.

If you are a script kiddie and deploy directly from your winbox, you have to add to your virus a self-destruction cycle, at least for the first 10 generations. Again: turn off all logging... turn off the swapfile, remove the swapfile, rewrite the disk with some code...

In other words, the described scheme for the Warhol worm is suicidal. It lengthens the phase that needs to be minimized, for little or no gain. As you could see, 2 months of knowing what the threat was could not stop Code Red. Deploying in 1 week, when your code is not flawed, is far faster than you need.

Now, let's talk about how to prevent this. I will say just one word: "Defaults". Companies should be liable for shipping INSECURE DEFAULTS! If every Outlook client would by default not execute attachments, and for every new attachment type would ask whether you want to add it to the types allowed to execute (with this auto-adding possible to disable, and disabled by DEFAULT), and if there similarly were no defaults for MS SQL that let you connect over the internet without a password, then we would have a much more secure infrastructure...

Re: Inefficient (4.00 / 1) (#11)
by Zone on Sat Aug 11, 2001 at 05:35:34 PM EST

And lastly, collecting a list of 50,000 possible machines for infection from a single source would leave such a trace in the logs that you would have the FBI knocking on your door well before you reached 10k, even if you lived in Nairobi or Azerbaijan.

Yes, that would certainly be the case if you did it from a single source.

However, my first thought when reading about the initial list of hosts was: "why stop at 50,000? Why bother with pseudo-random generators at all?" Scanning for 1,000 or so vulnerable hosts from a few (cracked) hosts shouldn't attract the FBI's attention. After you have those, put a special scanning module on them. The scanning module quietly, over a much longer period of time, scans the entire IPv4 space (1/1000 of it per host). After the required scanning period has elapsed, the hosts start the 'divide and conquer' phase. That is, the 1,000 initial hosts combined would have the complete list of all vulnerable hosts, but each (and all its children) would be responsible only for its 1/1000 of the space, with no overlap at all, even within its own slice.

This would prevent any redundancy and inefficiency due to random IP generation or scanning hosts that aren't vulnerable. If the numbers in the article are correct, the complete infection could take 5 minutes or less, depending on a lot of variables, of course. It would also prevent non-vulnerable hosts from being targeted during the attack, which would make for better press for those systems.
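The 1/1000-per-host split described above is easy to sketch. This Python fragment is illustrative only (the function and names are hypothetical, not from the article); it divides the 2^32 address space among N scanners with no gaps and no overlap:

```python
# Sketch of the partitioning idea above: split the 2^32 IPv4 space
# among N scanners with no overlap (names are illustrative only).

SPACE = 2 ** 32

def partition(n_scanners):
    """Yield (start, end) address ranges (end exclusive) covering the space."""
    base, extra = divmod(SPACE, n_scanners)
    start = 0
    for i in range(n_scanners):
        # The first `extra` scanners each take one additional address.
        end = start + base + (1 if i < extra else 0)
        yield start, end
        start = end

ranges = list(partition(1000))
assert ranges[0][0] == 0 and ranges[-1][1] == SPACE
assert all(a[1] == b[0] for a, b in zip(ranges, ranges[1:]))  # no gaps/overlap
```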

It is possible, it will be done someday. I almost hate to say it, but I hope it does a LOT of damage to actual business systems and serves as a wakeup call to everyone, not just enlightened techs. So far, it seems nothing is capable of getting people to realize Microsoft software isn't good for anyone.

[ Parent ]

aralin's comments (4.00 / 1) (#12)
by jkuroda on Sat Aug 11, 2001 at 06:42:30 PM EST

I would wait until one has had a chance to look at the simulator this person is writing before declaring this work 'flawed'.

Also, you don't need to root a unix host to do all the prep or deployment work. One, that could leave traces of its own (people do screw up). Two, there are numerous unprotected 802.11b networks in the Bay Area where one could drive by and grab net access as needed to do the necessary work without a trace - not even a MAC address, since one can reprogram those.

Also, much of the information needed for the prep work can be gathered 1) slowly over time, 2) from multiple hosts, and 3) from publicly available sources.

But in the end, I think you, the author, and I all agree that this wouldn't be the huge problem the paper warns against if companies wouldn't put out software that is easily and trivially compromised.

[ Parent ]
A better technique, perhaps. (4.33 / 6) (#13)
by xriso on Sat Aug 11, 2001 at 06:43:44 PM EST

I would say the following would be a neater way to infect: Host #1, hand-selected by the worm author, is infected. It generates two infection threads. Thread 1 scans (to keep it simple) the lower half of the address space; Thread 2 scans the upper half. Once a thread has infected 3 or so machines, it dies. Once each thread has completed its duty, the worm wipes the computer, eliminating logs.

Now, the machines infected by Host #1 carry out a similar task, except they no longer scan the whole Net; they scan only the range that the infecting thread scanned, dividing it into 2 parts, of course. This way, the Internet keeps getting divided in half, then in half again, and so on... almost every single computer on the Internet would be scanned. Also, since the first few "layers" would have an easy time finding victims, their hard drives would be wiped quickly after infection, making the worm very hard to trace.

Of course, you'd want each of the 2 main infection threads to have sub-threads which all attempt to infect the same range as each other (not dividing), to speed up the process.
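A toy model makes it easy to see that the repeated halving covers the whole space: after k generations the space has been split into 2^k disjoint ranges. This Python sketch (purely illustrative, not from the comment) builds the ranges for 10 generations:

```python
# Toy model of the halving scheme above: each generation splits every
# range in two, so depth k yields 2**k disjoint ranges (illustrative only).

def split_ranges(start, end, depth):
    """Return the address ranges handled at a given depth of the tree."""
    if depth == 0 or end - start <= 1:
        return [(start, end)]
    mid = (start + end) // 2
    return split_ranges(start, mid, depth - 1) + split_ranges(mid, end, depth - 1)

ranges = split_ranges(0, 2 ** 32, 10)
print(len(ranges), "disjoint ranges after 10 generations")   # 1024
```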
*** Quits: xriso:#kuro5hin (Forever)

Thanks for the feedback... (4.40 / 5) (#14)
by nweaver on Sat Aug 11, 2001 at 06:48:31 PM EST

Yes, the math is a little fuzzy (math is not my strong suit); the current simulator, however, verifies the behavior. Normally a worm's growth rate starts to slow once about 50% infection is reached, but permutation scanning seems to slow down only once 90% infection is reached! Source and graphs will be released in a couple of days.

As for gathering the hitlist: there are so many scans occurring, all the time, that one additional scan would not be noticed, and would be unlikely to be connected with the release of such a worm weeks or even months later. Such a scan could be further masked by using open wireless ethernet access points.

The hitlist scan is a prerequisite for speed, because even a fully coordinated worm propagates slowly until roughly 20k machines are infected, taking several hours to reach that point.

Finally, yes, there was a bug in the text on seed selection. Phase 1 worms start at their current address in the permutation. Worms infected by permutation scanning start at a random location (or, potentially, at a different location for partitioned-permutation scanning). Worms infected by subnet scanning (not analyzed, because it is more difficult to model) would start at their point in the permutation.

Does need proper analysis (4.00 / 3) (#16)
by iwnbap on Sun Aug 12, 2001 at 02:26:35 AM EST

I started doing the maths on this - actually the pseudo-random part (I think) is irrelevant: provided all hosts are using the same p-r algorithm, if each has the same starting point, each will search the same sequence of hosts in order; you can express this more simply as each host searching the list of known addresses in linear order, one after another.

The crucial factor is how long two (or more) hosts will continue to search the same address (sub-)space when there are no infectible hosts there. If that time is "long", the overall rate of infection will be comparatively slow; if it is very rare for two hosts to be scanning the same space, the rate will be pretty high. This in turn boils down to the question "What is the average time until a host discovers it is trying to infect the same segment?", which in turn is the question "What is the average length of an uninfectible segment?"

This then is where your pseudo-random function is useful - if it is good, it makes the average length of an uninfectible segment the total number of addresses divided by the number of infectible hosts.

If this is the case, a better algorithm might be to find a nice bijective function from the address space to the address space with the "good" pseudo-random property above; each host is allocated an address space and searches from one end of its space to the other. Once it finds a host to infect, it allocates half the remaining space to that host and keeps the other half for itself. This is similar to the technique of the poster below, but it means that the average rate of infection of a single host should be constant, thanks to the p-r function.
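One cheap way to get such a bijection on the address space is an affine map modulo 2^32, which is invertible whenever the multiplier is odd. A small Python sketch (the constants are illustrative, not from Weaver's paper):

```python
# A simple bijection on the 32-bit address space with some mixing:
# x -> (A*x + C) mod 2**32, invertible whenever A is odd.
# (Constants below are classic LCG values, used here only as an example.)

M = 2 ** 32
A = 1664525        # odd multiplier
C = 1013904223

def perm(x):
    """Map one 32-bit address to another; distinct inputs give distinct outputs."""
    return (A * x + C) % M

# Spot-check injectivity on a sample of inputs.
sample = list(range(0, 1_000_000, 997))
assert len({perm(x) for x in sample}) == len(sample)
```

Since A is odd it has a multiplicative inverse mod 2^32, so the map really is a permutation of the whole space, not just of the sample checked here.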

[ Parent ]

Completely new version up (none / 0) (#22)
by nweaver on Wed Aug 15, 2001 at 01:12:27 PM EST

There is now a completely new version up, at the same URL. Enjoy!



All trademarks and copyrights on this page are owned by their respective companies. The Rest © 2000 - Present Kuro5hin.org Inc.
See our legalese page for copyright policies. Please also read our Privacy Policy.
Kuro5hin.org is powered by Free Software, including Apache, Perl, and Linux. The Scoop Engine that runs this site is freely available under the terms of the GPL.
Need some help? Email help@kuro5hin.org.
My heart's the long stairs.
