I don't know, this whole article seems like just an attempt to ditch extreme programming. At first I thought it might show some interesting points about why extreme programming tends to make programming a more collective activity, but it soon became evident that the real thesis of the article was that extreme programming is bad, and the whole talk about collectivity is just one argument for that.
Which is not to say that the article was bad. It was a good read and I'd like to thank its author for taking the time to write and publish it.
I would just like to raise a few issues on which I disagree. I'm not a proponent of extreme programming, but there are a few things in it that I strongly agree with.
Disclaimer: all I know about extreme programming is what I read about it on a good web site (I can't remember which) for around 30 minutes some time ago.
The author seems to see the "simple design" tenet of extreme programming as a bad thing. I see that as one fundamental principle in software design and evolution. The author states that simple designs are very good in the short term but "a long term loss". He goes as far as to say that "it's clear that for a long-term project with changing requirements that [sic] simple design as extreme programming demands is not feasible."
As Einstein said, "Everything should be made as simple as possible, but not simpler." Or, quoting Antoine de Saint-Exupery, "Perfection is reached, not when there is no longer anything to add, but when there is no longer anything to take away."
So how does this relate to programming? Well, programs are nothing but models of reality. As things move down from a design to an implementation, many details are added, but they remain models of reality all the same. In my experience, keeping things as simple as possible and never adding gratuitous complexity makes software far easier to maintain. When your requirements change, something in your model (whether it is a working program or just a design) has to change. If you spend energy on keeping it simple, those coming changes are easy to perform and don't take as much time as they would on a complex model. I see simplicity as an important feature of good design. The simpler a design is, while still doing well in reality, the better.
I liken this very much to scientific theories. Given different theories that explain a reality, I will always pick the simplest (provided, of course, that they are equally accurate from a practical point of view). It saves hassle.
In the case of programs, I find this simplicity very important and, in my personal experience, the simpler a program/design is, the easier it will be to work with.
So I find myself in complete agreement with the keep-it-simple-stupid principle.
Of course, depending on where you want your model to go, there are many times when you'll have to add complexity. This is okay, as it is common for your model to grow out of sync with the ever-changing needs of the real world, but programmers should spend their time and energy on simplifying things as much as possible (but no more).
Okay, and now about unit testing. You seem to present it as a bad thing, but I regard it as very useful.
"It's clear, again, that it [the tenet of unit testing] is a fallacy for a long term approach, as regularly-used software will soon begin to take on needs that testing can't accurately portray."
I find unit testing and the practice of using preconditions, postconditions and invariants to be very similar activities. They are -very- rewarding in that they allow you to find new bugs very fast. There is a very interesting paper on this here.
As I see it, the proponents of extreme programming are not advocating writing software merely to pass tests, but rather having tests for all the expected features of the program. So whenever you are about to add a new feature, you first add tests that will only pass once the feature is working. Of course, this is in no way a substitute for quality programming, but filling your code with assertions, as many as possible (and I see those unit tests as nothing but a kind of assertion), is good. You'll often finish adding a new feature, run the tests and find out you broke another. Being able to find that sort of thing automatically helps.
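To make that concrete, here is a minimal sketch of the test-first idea, using Python's unittest module and a hypothetical Stack class (the names and the feature are my own invention, not from the article): the tests for pop() are written so that they only pass once pop() actually works.

```python
import unittest

# Hypothetical example: before adding the new pop() feature to this
# Stack class, write tests that only pass once pop() works correctly.
class Stack:
    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        # The new feature being added.
        if not self._items:
            raise IndexError("pop from empty stack")
        return self._items.pop()

class TestStack(unittest.TestCase):
    def test_pop_returns_last_pushed(self):
        s = Stack()
        s.push(1)
        s.push(2)
        self.assertEqual(s.pop(), 2)

    def test_pop_on_empty_raises(self):
        with self.assertRaises(IndexError):
            Stack().pop()

# Run with: python -m unittest <this file>
```

If a later change to push() accidentally broke the last-in-first-out order, the first test would fail immediately, which is exactly the "you broke another feature" signal described above.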
I have even worked on programs in which some parts have more lines of assertions than lines of actual code. In that particular project (a simulation of objects in space, some controlled remotely by different clients and others with simple intelligence of their own), that turned out to be extremely rewarding and helped me find problems incredibly fast. I don't even want to imagine how long it would have taken me to find all the bugs I found with straight GDB.
Actually, you may be aware that many programmers use printf as their main debugging tool. When they run into problems in different sections of their code, they fill them with printfs to see where things are going wrong. Using assertions is a similar activity, except you have those printfs everywhere, and they are easier to follow since they only produce output when things go wrong (so you don't have to trawl through a multi-megabyte log, tracing the state of the program up to the very moment when things started going wrong).
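As a sketch of the difference (the Account class and its invariant are hypothetical, just for illustration): instead of printing state everywhere, the code checks its own invariants after every mutation, staying silent while things are fine and failing loudly at the exact point where they are not.

```python
# Hypothetical example: an account whose invariant (a non-negative
# balance) is checked after every mutation. Unlike printf tracing,
# these checks are silent until the moment something actually breaks.
class Account:
    def __init__(self, balance=0):
        self.balance = balance
        self._check_invariant()

    def _check_invariant(self):
        # Class invariant: the balance may never go negative.
        assert self.balance >= 0, f"invariant broken: balance={self.balance}"

    def deposit(self, amount):
        assert amount > 0, "precondition: amount must be positive"
        self.balance += amount
        self._check_invariant()

    def withdraw(self, amount):
        assert 0 < amount <= self.balance, "precondition: insufficient funds"
        self.balance -= amount
        self._check_invariant()
```

When a precondition or the invariant fails, the AssertionError pinpoints the first mutation that went wrong, instead of leaving you to reconstruct it from a log.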
So rather than seeing unit testing as a replacement for good programming, I see it as just a tool that helps programmers. I agree that it is very hard to make good unit tests that portray the needs of complex programs, but that is not a reason to call unit testing bad. I have found it incredibly helpful.
On the subject of refactoring, I think it helps simplify things. Duplication should be avoided at all costs. It is as if you had three different physics laws that could be grouped into a single one, only slightly more complex than any of the three.
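In code, the three-laws-into-one idea might look like this (a hypothetical example of my own, not from the article): three duplicated summing functions collapse into a single, slightly more general one.

```python
# Hypothetical before-refactoring code: three near-identical functions.
def total_weight(items):
    return sum(i["weight"] for i in items)

def total_price(items):
    return sum(i["price"] for i in items)

def total_volume(items):
    return sum(i["volume"] for i in items)

# After refactoring: one slightly more general function replaces all
# three, so a future change (say, skipping missing fields) happens once.
def total(items, field):
    return sum(item[field] for item in items)
```

The refactored version is marginally more complex than any single original, but simpler than the three together, and a bug fixed in it is fixed everywhere at once.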
Okay, and my final rant will be about making releases as often as possible.
You raise the point that it makes big changes hard to implement while keeping up with frequent releases ("one can't simply work on one piece for a week without having to worry about a regular build cycle").
I can agree that keeping the regular releases working can be a hassle, but one very important principle I have found when programming is to -always- have a working version of the program. When you have to implement large changes to your model, rather than taking a week during which no working version is available, waiting until the whole change is implemented, you should make many small changes that push you toward the place where you want to end up. I have found that this works a lot better than taking a week to do the whole modification.
Personally, for all the software I manage, I -never- stop programming without a working version available. Sure, I know of many bugs in some of my programs, and not stopping until they are all fixed would take me weeks, which rules that out. But I have those bugs documented, and I never leave off with a version that does not compile or work. I am very hesitant to go through a huge portion of my code making changes that take more than two hours, during which the code will not compile. That is something I try to avoid at all costs. Instead, I make many small changes, so the software evolves and eventually gets where I want it to be. I find this approach better for managing the complexity of software, since it lets you focus on one small change at a time rather than one big complex change, and it suits me better than the alternative.
So, as I said at the beginning, I don't really know much about extreme programming but, from the little I have read about it, there are a few points in it that match my ideas of how software should be developed very well.