...are a "Bad Idea(tm)". "Patch worms" are not -that- much better. (You only have to count the number of broken patches MS has released over time to see the chaos such an animal could cause.)
IMHO, "internal vulnerability scanners", which operate by testing the logic of each piece of code, rather than checking against a static list of defects, would be much, much better.
e.g.: The scanner loads library A. It enumerates the calls exposed by A's interface, then tries a mix of valid and invalid calls. Valid calls should produce valid answers. Invalid calls should -either- return error codes or crash, preferably the former.
The actual -values- don't need to be examined. All that matters is that the scanner detect a mishandled case, a memory leak, or a buffer overflow. Those three scenarios cover most of the security holes you're likely to get.
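A minimal sketch of that probe idea, in Python. Everything here is hypothetical illustration: `parse_port` stands in for a call exported by the library under test, and `probe` classifies each call the way the scanner would -- valid input must yield an answer, invalid input must fail cleanly, and anything else is flagged as mishandled.

```python
def parse_port(s):
    """Toy stand-in for a library call: parse a TCP port from a string."""
    n = int(s)                      # raises ValueError on garbage input
    if not 0 <= n <= 65535:
        raise ValueError("port out of range")
    return n

def probe(func, cases):
    """Classify each call. Valid input must return a value; invalid
    input must raise a clean error; anything else is mishandled."""
    results = {}
    for value, expect_valid in cases:
        try:
            func(value)
            # Invalid input that was silently accepted => mishandled.
            results[value] = "ok" if expect_valid else "mishandled"
        except Exception:
            # Valid input that blew up => mishandled.
            results[value] = "mishandled" if expect_valid else "error"
    return results

cases = [("80", True), ("65535", True), ("-1", False), ("junk", False)]
print(probe(parse_port, cases))
# {'80': 'ok', '65535': 'ok', '-1': 'error', 'junk': 'error'}
```

Note that the harness never checks -which- port number came back, only that the call landed in the right outcome bucket, which is exactly the "don't examine the values" point above.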
The logic would work like this: If A depends on nothing, and handles valid, invalid and extreme data correctly, =AND= does not address invalid areas of memory, =AND= does not leave areas of memory allocated, then A can be regarded as OK.
If B depends on A, and A is !OK, then B cannot be tested, is potentially insecure, and should be marked !OK. Otherwise, repeat the test done for A, on B.
Repeat for all libraries, mapping all OK and !OK code. Applications are harder, because it's much more difficult to produce "valid-ish" command-line parameters. However, applications dependent on one or more !OK libraries can still be marked as !OK.
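The propagation rule above can be sketched in a few lines. The dependency table and the pretend scan result (`libB` fails) are made up for illustration; the point is the rule itself: a component whose dependencies aren't all OK is untestable and gets marked !OK without being scanned.

```python
# Hypothetical dependency map: name -> list of libraries it depends on.
deps = {
    "libA": [],              # depends on nothing, so it is tested first
    "libB": ["libA"],
    "libC": ["libB"],
    "app":  ["libC", "libA"],
}

def passes_local_tests(name):
    # Stand-in for the actual fuzz scan; pretend libB mishandles input.
    return name != "libB"

def status(name, cache={}):
    """OK only if every dependency is OK =AND= the local scan passes."""
    if name not in cache:
        if all(status(d, cache) for d in deps[name]):
            cache[name] = passes_local_tests(name)
        else:
            cache[name] = False   # untestable => marked !OK
    return cache[name]

for name in deps:
    print(name, "OK" if status(name) else "!OK")
# libA OK
# libB !OK
# libC !OK
# app !OK
```

Note how one !OK library taints everything downstream of it, which is why the scan gives you a map of -where- the real fixes are needed rather than a flat list of defects.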
Now, THIS would be useful. Furthermore, if RHN, or some similar tool, offered such a scan as an option, and relayed the results to the distribution HQ, then -REAL- fixes could be produced, tested and offered.